
Thread: AI: Superintelligence - paths, dangers and strategies

  1. #51
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,138
    Quote Originally Posted by Utisz View Post
    There's a decent documentary on AlphaGo on Netflix now. It doesn't talk much about the technical aspects, but the human aspects are interesting.
    Cool, something to watch.

    I agree with a lot of what you've posted in this thread, but I don't see how you jump to the conclusion that such a hypothetical GAI would be empathetic. The only way I see that you can come to that conclusion (even as a general prediction) about GAI is by definition. It really comes down to the question of what the GAI's objective would be (if any); obviously that objective (if any) would have to be very general, but I struggle to think of clear objectives that would necessitate empathy other than the objective including "empathy" itself by definition.
    Human empathy probably has a survival function, because that's how we were created. The only other alternative is that it's a side effect falling out of our other traits, but even in a just-so-story way it isn't a stretch for me to think empathy is useful.

    Rather than the objective including empathy, it might include the same ingredients that helped empathy evolve in humans - the need for cooperative social groups with other humans, etc. A tricky part here is ensuring the GAI doesn't develop some of our undesirable traits. Perhaps domestication is the best route - breeding in a sandbox for reduced aggression before letting it out.

    One of the major problems is that when we humans speak, we assume a certain common "qualia" that machines will never have.

    My guess is that, in the future, through voice recognition and so forth, we will learn something like a control language that somehow meets machines and humans in the middle. Probably that language will become a lingua franca in technical discussions where ambiguity must be avoided. Where in the middle that point is ... I don't know.
    For the G in GAI, I'd expect a lot of generalization and abstraction, which qualia facilitate. For GAI that are useful to us, I'd expect a common environment for reference even if the GAI's environment is virtual. Their internal representations would be different, since the architecture is different and perhaps more powerful.

    The idea of an intermediate language is interesting. I suppose we'd try to make them use human languages, but perhaps the gap would be too great.

    Quote Originally Posted by P-O View Post
    I think this already exists in a basic form. This is how we search for things on google. Some people are very bad at finding what they want; others are very good. ... In any case I think this problem can be overcome without too much trouble. As long as there are a few key words that we agree on for sure, we can communicate with it the same way we communicate with little kids.
    Or vice versa?

    On a light note - this is a button pushing game with an AI theme (Universal Paperclips), found in this interesting article that's tangential to this thread.

  2. #52
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    494
    Quote Originally Posted by Utisz View Post
    There's a decent documentary on AlphaGo on Netflix now. It doesn't talk much about the technical aspects, but the human aspects are interesting.
    I'm sure it's good, but keep in mind the Google brand advertising machine. I know it firsthand and learned the hard way that it's subtle and pernicious. They care very much how you think of them.

    I agree with a lot of what you've posted in this thread, but I don't see how you jump to the conclusion that such a hypothetical GAI would be empathetic. The only way I see that you can come to that conclusion (even as a general prediction) about GAI is by definition. It really comes down to the question of what the GAI's objective would be (if any); obviously that objective (if any) would have to be very general, but I struggle to think of clear objectives that would necessitate empathy other than the objective including "empathy" itself by definition.
    How do you know I'm empathic? How do you know anybody is? We don't - but like intelligence or porn, while we can't define it we know it when we see it. I believe anything able to pass the Turing test will also be able to pass the Utisz Empathy test and convince you it's empathic. Is it really? Who knows? At any rate, I hypothesize that empathy is a necessary but not sufficient component of GAI.

    NLP is only a superficial issue. I mean of course NLP is important for various applications. If you could "solve" natural language understanding, that would be a major milestone, but probably not one towards GAI, since natural language understanding probably requires strong AI to solve. I think putting emphasis on NLP is just shifting the goal posts. Most recent advances in NLP come from better data and better machine learning methods (and not, for example, some new linguistic insight).
    Chicken and egg? I believe Chomsky (surely an INTP) holds language to be the core of thought; wasn't that his central thesis? At any rate it's a common idea.

    Here's an experiment that gives a clue. Back in the 70s, researchers took some chimps and raised them as human children. Literally: they got families with young children who took them in as new human children and treated them in every respect as a human child. They grew up and acted more like humans than chimps, with one important difference - they never learned to speak. We have sign language and such, but at this point it's been pretty much determined that it's not real language, more X=Y kind of thinking. There's some fundamental spark missing from chimps, who otherwise share 99%+ of our genome (and most of the 100-odd MiB of our genome is about how to make a brain). What is it? Otherwise we consider our hirsute cousins probably conscious, probably sentient, but probably not intelligent - and interestingly, while they clearly have emotions, they have no language.

    Oh by the way I have a new job which is in AI, so now it's for fun and profit.

  3. #53
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,411
    Quote Originally Posted by Architect View Post
    I'm sure it's good, but keep in mind the Google brand advertising machine. I know it firsthand and learned the hard way that it's subtle and pernicious. They care very much how you think of them.
    It seems like they're doing a good job at it. It scares me how widespread and deep the trust in Google is.

  4. #54
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    494
    Quote Originally Posted by pensive_pilgrim View Post
    It seems like they're doing a good job at it. It scares me how widespread and deep the trust in Google is.
    Depending on what you mean - from what I've seen from the inside, I think the trust is deserved, but not necessarily the adoration.

  5. #55
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    3,122
    INTPx Award Winner
    Quote Originally Posted by Starjots View Post
    Human empathy probably has a survival function, because that's how we were created. The only other alternative is that it's a side effect falling out of our other traits, but even in a just-so-story way it isn't a stretch for me to think empathy is useful.

    Rather than the objective including empathy, it might include the same ingredients that helped empathy evolve in humans - the need for cooperative social groups with other humans, etc. A tricky part here is ensuring the GAI doesn't develop some of our undesirable traits. Perhaps domestication is the best route - breeding in a sandbox for reduced aggression before letting it out.
    I think you are perhaps taking a much more "human" view of GAI than I am.

    Perhaps though it will gain "empathy" if it values survival and knows we'll pull "the plug" if it doesn't play nice. Actually that's a problem in AI safety anyway ... how to create a GAI that doesn't vacuously satisfy its purpose by simply destroying itself and its purpose ... and if you add in a survival objective, how do you train it not to stop humans from pulling the plug when it starts going all murdery?

    Quote Originally Posted by Architect View Post
    How do you know I'm empathic? How do you know anybody is? We don't - but like intelligence or porn, while we can't define it we know it when we see it. I believe anything able to pass the Turing test will also be able to pass the Utisz Empathy test and convince you it's empathic. Is it really? Who knows? At any rate, I hypothesize that empathy is a necessary but not sufficient component of GAI.
    "The question of whether machines can think is like the question of if submarines can swim" or something like that, I think it was Dijkstra.

    Chicken and egg? I believe Chomsky (surely an INTP) holds language to be the core of thought; wasn't that his central thesis? At any rate it's a common idea.

    Here's an experiment that gives a clue. Back in the 70s, researchers took some chimps and raised them as human children. Literally: they got families with young children who took them in as new human children and treated them in every respect as a human child. They grew up and acted more like humans than chimps, with one important difference - they never learned to speak. We have sign language and such, but at this point it's been pretty much determined that it's not real language, more X=Y kind of thinking. There's some fundamental spark missing from chimps, who otherwise share 99%+ of our genome (and most of the 100-odd MiB of our genome is about how to make a brain). What is it? Otherwise we consider our hirsute cousins probably conscious, probably sentient, but probably not intelligent - and interestingly, while they clearly have emotions, they have no language.
    Human language is core to human thought. Doesn't mean machines have to "understand" human language to "think". Also doesn't mean that humans cannot learn another language, one that is closer to how machines process data (e.g., less verbose, more precise, more logical). But I would say it's not really GAI until it can navigate human language, because otherwise there would be too many inputs it couldn't recognise (unless we go easy on it with an intermediate language) - so yeah, maybe it's chicken and egg, or two sides of the same coin. On the other hand, I think GAI is the more general aim, ML is perhaps the technical direction, and NLP (or more specifically NLU) is an application of all that.

  6. #56
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    494
    Quote Originally Posted by Utisz View Post
    I think you are perhaps taking a much more "human" view of GAI than I am.
    Because GAI is human level AI by definition. Look, computers are already far more logical and can reason better than any of us. They store vast amounts of information, can reason from it, solve problems, and beat the best humans at Jeopardy and Go. These aren't small accomplishments; both involve a high degree of intuition and inference. What computers don't have is a sense of humor, semantic understanding or empathy.

    Perhaps though it will gain "empathy" if it values survival and knows we'll pull "the plug" if it doesn't play nice.
    Do people have empathy because they're afraid others will pull the plug (jail them)? No, of course not. The data are clear that the threat of retribution/incarceration is not a deterrent.

    "The question of whether machines can think is like the question of if submarines can swim" or something like that, I think it was Dijkstra.
    Actually it's a semantics problem in that sense: we don't have a good definition of what it means to think. So we're back to the Turing test as the only answer we have.


    Human language is core to human thought. Doesn't mean machines have to "understand" human language to "think".
    So? Prove it. You assumed your conclusion there. Fact is we don't know what it means to 'think' as I said.

    At any rate the idea of language being core to thought is a hypothesis I don't know if I believe or not. TBD.

  7. #57
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    3,122
    INTPx Award Winner
    Quote Originally Posted by Architect View Post
    Because GAI is human level AI by definition. Look, computers are already far more logical and can reason better than any of us.
    I don't see current computers learning language like a two-year-old. Even in terms of deduction, their ability to prove mathematical truths is pretty much limited to brute-force-style reasoning. I don't buy your claim that machines are already better at reasoning than humans. There are some things humans are good at; there are other things that machines are good at.

    Do people have empathy because they're afraid others will pull the plug (jail them)? No, of course not. The data are clear that the threat of retribution/incarceration is not a deterrent.
    Humans have empathy because it evolved in social groups, where it was evolutionarily favoured for the survival of the genes in question. So your argument seems to be that GAI would evolve empathy for the same advantageous reasons we evolved empathy. The general form of your argument is that GAI would evolve X for the same advantageous reasons we evolved X.

    We also evolved nipples though ...

    Actually it's a semantics problem in that sense: we don't have a good definition of what it means to think. So we're back to the Turing test as the only answer we have.
    The Turing test is also ill-defined. Who's the human evaluator? Who's the human adversary? Does the language matter? Perhaps we could say that doing well in the Turing test is a necessary condition for GAI, but not a sufficient one. Or perhaps the Turing test is the only "answer" we have to a nonsensical question.

    So? Prove it. You assumed your conclusion there. Fact is we don't know what it means to 'think' as I said.
    Which is why I put "think" in quotes and said it doesn't require human language. I would say, for example, that apes "think". But I don't think we're really disagreeing on anything interesting here other than what "think" means.



    I dunno, maybe we're just going around in semantic circles. But in summary, I don't agree that machines can currently reason better than humans; I don't think it makes sense to suggest that GAI will require empathy (and I don't think it makes sense to predicate "thinking" on human language).

  8. #58
    tsuj a notelpmis QuickTwist's Avatar
    Type
    INTP
    Join Date
    Oct 2016
    Posts
    899
    OK so here is my question...

    I have been looking into mathematics recently. It seems that the universe may be a symmetrical pattern (or at least that the rules governing the universe are symmetrical).
    If you know me well (like my family does), you will know I take psychometrics a little too seriously without knowing much about them. Psychometrics deal with the Pareto distribution.

    They are different: one is perfectly symmetrical and the other completely lopsided. The question is, which model will AI use for its means of accomplishing things?
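
    As a rough illustration of that "symmetrical vs. lopsided" contrast (this little Python sketch and its parameters are just illustrative, nothing from the thread), sampling the two distributions side by side makes the difference obvious:

    Code:
    import numpy as np

    rng = np.random.default_rng(42)

    # Symmetric: the normal (Gaussian) distribution.
    normal = rng.normal(loc=0.0, scale=1.0, size=100_000)

    # Lopsided: the Pareto distribution (heavy right tail).
    # numpy's pareto() returns samples of (X - 1) for a Pareto with x_m = 1.
    pareto = rng.pareto(a=2.0, size=100_000) + 1.0

    for name, data in [("normal", normal), ("pareto", pareto)]:
        print(f"{name:7s} mean={data.mean():6.2f} "
              f"median={np.median(data):6.2f} max={data.max():9.2f}")

    # Typical output: the normal's mean and median coincide near 0 (symmetry),
    # while the Pareto's mean sits well above its median and a handful of huge
    # samples dominate the total -- the "80/20" lopsidedness.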

    This could be one of the only ways we will know how AI thinks. Think of it like a litmus test for the properties of the AI's intelligence. Or I could just be insane and have no idea what I am talking about at all.

    Think of it like this: will the AI take in information like an intuitive or like a sensor? If AI takes in information like a sensor, it is likely going to copy the model that we have been operating under for... a long long time. On the other hand, if the AI takes in information like an intuitive, it is going to be more holistic in its interpretation. The question that the AI has to ask itself is "Am I correct in my understanding?" It's a question of whether AI will use heuristics and a dominance hierarchy, or something more along the lines of evenly distributing its workload. It's a question of effectiveness vs. efficiency.

    The weird thing about the way I see AI working is that they don't really operate like we do. In my mind, they will either operate like an NT, or like an SP. And I am going to go out on a limb and say that AI won't even register on the E/I axis.

    Curious what you think about this @Architect. And I don't mind you being blunt, even if is to say I am so far away from reality that what I am saying doesn't even make any sense at all.
    Last edited by QuickTwist; 01-21-2018 at 12:45 AM.
    But your individuality and your present need will be swept away by change,
    and what you now ardently desire will one day become the object of abhorrence.
    ~ Schiller - 'Psychological Types'

  9. #59
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    494
    Quote Originally Posted by Utisz View Post
    I don't see current computers learning language like a two-year-old. Even in terms of deduction, their ability to prove mathematical truths is pretty much limited to brute-force-style reasoning. I don't buy your claim that machines are already better at reasoning than humans. There are some things humans are good at; there are other things that machines are good at.
    That's fine; we haven't done it yet, so you could be right. Historically, though, taking that side of the bet has lost every time.

    The Turing test is also ill-defined.
    Sure, I didn't say it was good, just that it's the best evaluation we have.

    I dunno, maybe we're just going around in semantic circles.
    No, I think it was worthwhile.

    But in summary, I don't agree that machines can currently reason better than humans
    I didn't say they reason better than humans; they don't. Specifically, I'm saying they are already better at making logical deductions (classical computing), probabilistic decisions (DNNs) and inferential choices (Jeopardy - a Blackboard-pattern probabilistic decision engine).
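
    To make the "logical deductions (classical computing)" part concrete, here's a toy sketch (the formula and helper names are just illustrative) of the kind of deduction a machine grinds through exhaustively and flawlessly - checking that a hypothetical-syllogism implication is a tautology over every truth assignment:

    Code:
    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        """Material implication: a -> b."""
        return (not a) or b

    def hypothetical_syllogism(p: bool, q: bool, r: bool) -> bool:
        """((p -> q) and (q -> r)) -> (p -> r)"""
        return implies(implies(p, q) and implies(q, r), implies(p, r))

    # Exhaustively check all 2**3 truth assignments -- trivial for a machine,
    # tedious for a human, and exact rather than intuitive.
    assert all(hypothetical_syllogism(p, q, r)
               for p, q, r in product([False, True], repeat=3))
    print("((p -> q) and (q -> r)) -> (p -> r) is a tautology")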

    Quote Originally Posted by QuickTwist View Post
    This could be one of the only ways we will know how AI thinks. Think of it like a litmus test for the properties of the AI's intelligence. Or I could just be insane and have no idea what I am talking about at all.

    Think of it like this: will the AI take in information like an intuitive or like a sensor? If AI takes in information like a sensor, it is likely going to copy the model that we have been operating under for... a long long time. On the other hand, if the AI takes in information like an intuitive, it is going to be more holistic in its interpretation. The question that the AI has to ask itself is "Am I correct in my understanding?" It's a question of whether AI will use heuristics and a dominance hierarchy, or something more along the lines of evenly distributing its workload. It's a question of effectiveness vs. efficiency.

    The weird thing about the way I see AI working is that they don't really operate like we do. In my mind, they will either operate like an NT, or like an SP. And I am going to go out on a limb and say that AI won't even register on the E/I axis.

    Curious what you think about this @Architect. And I don't mind you being blunt, even if is to say I am so far away from reality that what I am saying doesn't even make any sense at all.
    Actually this is a subtle and important point nobody has brought up so far, so congratulations on that. On the flip side, drop the physics part; that's just run-of-the-mill thinking from somebody untrained in the field (can't let you get too puffed up).

    Anyhow ... yes, the structural aspects of the psyche are something I think deeply about with regard to GAI, and we're just beginning to get to the doorstep of this. Do a search on Hinton's Capsule Network theory to see where this is going (the first papers with a backprop algorithm are going to peer review as we speak). You jumped ahead to an important aspect of this, which is information stream decimation, which I think plays an important role in the architecture of consciousness, but that's a few years down the road yet in my view. I'd submit a paper, but it would get rejected out of hand at this point.
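
    For anyone curious what the "routing" idea in capsule networks actually looks like, here's a minimal numpy sketch of the routing-by-agreement procedure from the Sabour, Frosst & Hinton (2017) paper - my own simplification for illustration only (function names and toy sizes are made up, and it leaves out the learned transformation matrices and any training), not something described in this thread:

    Code:
    import numpy as np

    def squash(s, axis=-1, eps=1e-9):
        """Non-linearity that keeps a vector's orientation but maps its length into [0, 1)."""
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    def route(u_hat, n_iters=3):
        """
        Dynamic routing-by-agreement.
        u_hat: predictions from lower capsules for each higher capsule,
               shape (n_lower, n_higher, dim).
        Returns higher-capsule output vectors, shape (n_higher, dim).
        """
        n_lower, n_higher, _ = u_hat.shape
        b = np.zeros((n_lower, n_higher))                         # routing logits
        for _ in range(n_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over higher capsules
            s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum of predictions
            v = squash(s)                                         # higher-capsule outputs
            b += np.einsum('ijd,jd->ij', u_hat, v)                # agreement updates the logits
        return v

    # Toy demo: 4 lower capsules voting for 2 higher capsules with 3-dim pose vectors.
    rng = np.random.default_rng(0)
    u_hat = rng.normal(size=(4, 2, 3))
    print(route(u_hat))

    The loop is the whole trick: lower capsules vote for higher capsules, and votes that agree with the emerging consensus get up-weighted on the next pass.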

  10. #60
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    3,122
    INTPx Award Winner
    Quote Originally Posted by Architect View Post
    I didn't say they reason better than humans; they don't.
    Quote Originally Posted by Architect View Post
    Look, computers are already far more logical and can reason better than any of us.
    ... but anyway.
