
Thread: AI: Superintelligence - paths, dangers and strategies

  1. #41
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,131
    Quote Originally Posted by Architect View Post
    The belief I tend to have is that, because humans discovered and use the Golden Rule to a great degree, GAIs will necessarily do so also, just because it works better. Or perhaps better said, 'it's a more optimal winning strategy'. Democracy works better because it allows competing agents to all succeed to the best of their circumstances and abilities, and when others succeed in a democracy that helps all. With AlphaGo Zero we might see an indication of this idea; consider the following analysis by Michael Redmond and Chris Garlock



    AlphaGo Zero trained itself, in the absence of any human data, to the 9-dan level. In Go this is a remarkable situation: it's an alien professional player, so what moves would it come up with? For example, the opening play is usually to the four star points, but that's human convention; we don't know if there are other openings that are better. It turns out Zero's play is much like human play, to Mr Redmond's disappointment. It appears that playing the star points might actually be optimal opening play (Redmond isn't willing to say that, but Garlock goes in that direction).

    The point? Given a performance metric to succeed at, the computer necessarily learns good play much like humans do. The discussion should then turn to politics or philosophy: what is the winning course of action, the destruction of humankind or helping humankind? The idea that humans are irrelevant I think is immature; my belief is GAIs will necessarily see the mutual benefit that comes with cooperation.
    Doesn't it seem the Golden Rule is an incomplete strategy? It doesn't say what to do when the other entity doesn't treat you as you want to be treated.

    If you are treated badly and know it and the bad treatment continues, you might get a wee bit resentful or see the relationship as...unfulfilling. So even if the AI is golden-rule centric, its clients are apt not to be from time to time, for whatever reason. This doesn't seem stable if the AI is more than a virtual dog.

    A more stable situation, as long as we are near equals, is tit for tat, where both sides train each other to be nice.
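    To make the 'train each other to be nice' part concrete, here's a toy iterated prisoner's dilemma sketch in Python (the payoff numbers are the standard textbook ones, nothing AI-specific, and the whole thing is just an illustration):

    ```python
    # Toy iterated prisoner's dilemma: tit-for-tat rewards cooperation and
    # quickly stops rewarding defection, which is the "training" effect.
    # Payoffs are (row, column): both cooperate = 3 each, both defect = 1 each,
    # defecting against a cooperator = 5 vs 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(history):
        # Cooperate first, then mirror the opponent's previous move.
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(p1, p2, rounds=20):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h1), p2(h2)
            a, b = PAYOFF[(m1, m2)]
            s1, s2 = s1 + a, s2 + b
            h1.append((m1, m2))
            h2.append((m2, m1))
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (60, 60): mutual cooperation pays
    print(play(tit_for_tat, always_defect))  # (19, 24): defection stops paying after round 1
    ```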

    Which could make for interesting times.

    To recap, AIs may be better than us, but we still have us to deal with.

  2. #42
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    468
    Quote Originally Posted by Starjots View Post
    Doesn't it seem the Golden Rule is an incomplete strategy? It doesn't say what to do when the other entity doesn't treat you as you want to be treated.
    Yes, good point, but morality isn't a neat system. I'd put it this way: like democracy and other ideas, I believe the Golden Rule is an idea humans have developed and consistently practice because it works better than the alternatives. Regardless of the specific technologies we use for eventual GAI, they will be optimizing strategies, so I think an intelligent agent will necessarily arrive at the same place. An agent trained for evil will (I suspect) still develop the concept of self-preservation, which leads to the Golden Rule.

    Compare this to Asimov's Three Laws of Robotics: even in those books the robots get into all sorts of logical hairpins, and I think one story was about a robot that malfunctioned trying to implement them.

  3. #43
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,131
    AlphaZero Annihilates World’s Best Chess Bot After Just Four Hours of Practicing

    Just an interesting follow-up. Computers have been beating the best humans at chess for 20 years; a program similar to the one mentioned in the OP taught itself chess in four hours, by playing itself with no human input other than the rules, and became the best chess program in the world.

    I read a book on chess programs a few years back. The traditional chess program has three parts: the opening, where move-response sequences are well known and somewhat scripted; the middle game, where tons of possible moves are evaluated on the fly; and the endgame, where simpler positions have been calculated out to their inevitable conclusions in advance.
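    As a rough picture of that middle-game part, here's a bare-bones depth-limited search with alpha-beta pruning, demonstrated on a toy take-away game rather than chess (a real engine bolts an opening book, endgame tablebases, and a far better evaluation function onto a much more sophisticated version of this; the toy game is just so the sketch actually runs):

    ```python
    # Depth-limited minimax with alpha-beta pruning, the skeleton of the
    # "middle game" search in a traditional engine. Toy game: players take
    # 1-3 stones from a pile; whoever takes the last stone wins.
    def alphabeta(pile, depth, alpha, beta, maximizing):
        if pile == 0:
            # The player who just moved took the last stone and won.
            return -1 if maximizing else 1
        if depth == 0:
            return 0                      # stand-in for a heuristic evaluation
        best = float("-inf") if maximizing else float("inf")
        for take in (1, 2, 3):
            if take > pile:
                break
            score = alphabeta(pile - take, depth - 1, alpha, beta, not maximizing)
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if alpha >= beta:             # prune: the opponent won't allow this line
                break
        return best

    def best_move(pile, depth=12):
        # Pick the move with the best search score for the side to move.
        return max((t for t in (1, 2, 3) if t <= pile),
                   key=lambda t: alphabeta(pile - t, depth - 1,
                                           float("-inf"), float("inf"), False))

    print(best_move(10))  # 2: leaves a pile of 8, which is losing for the opponent
    ```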

    The program had none of that either; it just taught itself chess by playing itself for four hours and then dominated the previous best chess program in the world. It didn't lose a game in 125 matches (lots of draws, of course - it's chess).

    Another demonstration of architecture trumping experience.

    Obvious caveats apply - this isn't sentient, it's not a general AI, and chess has simple rules.

    Similar results happened for shogi, a game I'm not familiar with.

  4. #44
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,131
    So this is a quick, speculative meta post - no new ideas, but they're easier to visualize with the example above.

    If you have a flexible-architecture DNN or AI - that is, computational hardware (I'm being fuzzy because I'm just spinning this; use the analogy of EEPROMs) - then you could employ a two-tiered strategy.

    At the highest level you view architecture creation as game 1, and at a lower level the specific architectures teach themselves game 2.

    Game 2 is what you want to do in real life (play chess); game 1 is the game of learning which architecture is best at learning game 2.

    No humans need apply, except to build the system.

    This might be a way to work toward more complex 'games' with more ill-defined rules.

    This isn't an original thought - valid or not, it's too obvious.
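    A toy sketch of the two tiers (everything here is a made-up stand-in, just to show the shape of the idea, not any real system): game 2 is a small learning task, and game 1 is a dumb outer search over which 'architecture' learns game 2 best.

    ```python
    # Game 1 / game 2 as nested loops. Game 2: a tiny model teaches itself to
    # approximate f(x) = x^2 by gradient descent. Game 1: an outer search over
    # "architectures", here just (polynomial degree, learning rate) pairs.
    import random

    def train_on_game2(degree, lr, steps=2000):
        # Inner loop: the chosen architecture teaches itself the task.
        xs = [i / 10 for i in range(-20, 21)]
        target = [x * x for x in xs]
        coeffs = [0.0] * (degree + 1)
        for _ in range(steps):
            grads = [0.0] * (degree + 1)
            for x, t in zip(xs, target):
                err = sum(c * x**k for k, c in enumerate(coeffs)) - t
                for k in range(degree + 1):
                    grads[k] += 2 * err * x**k / len(xs)
            coeffs = [c - lr * g for c, g in zip(coeffs, grads)]
        # Return the final mean squared error as the architecture's score.
        return sum((sum(c * x**k for k, c in enumerate(coeffs)) - t) ** 2
                   for x, t in zip(xs, target)) / len(xs)

    def game1_architecture_search(trials=20):
        # Outer loop: learn which architecture is best at learning game 2.
        best = None
        for _ in range(trials):
            arch = (random.randint(1, 4), random.choice([0.001, 0.003, 0.01]))
            loss = train_on_game2(*arch)
            if best is None or loss < best[1]:
                best = (arch, loss)
        return best

    print(game1_architecture_search())  # e.g. ((2, 0.01), ~0.0)
    ```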

  5. #45
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,345
    Quote Originally Posted by Starjots View Post
    So this is a quick, speculative meta post - no new ideas, but they're easier to visualize with the example above.

    If you have a flexible-architecture DNN or AI - that is, computational hardware (I'm being fuzzy because I'm just spinning this; use the analogy of EEPROMs) - then you could employ a two-tiered strategy.

    At the highest level you view architecture creation as game 1, and at a lower level the specific architectures teach themselves game 2.

    Game 2 is what you want to do in real life (play chess); game 1 is the game of learning which architecture is best at learning game 2.

    No humans need apply, except to build the system.

    This might be a way to work toward more complex 'games' with more ill-defined rules.

    This isn't an original thought - valid or not, it's too obvious.
    I still have trouble wrapping my head around this. My line of thinking is: doesn't it follow that game 0 is the game of learning which system is best for learning to play game 1 and thus game 2? And therefore isn't the construction of the rules for game 1 entirely dependent on the rules that humans decide upon for game 0? In this case you have human-defined rules governing the two games at the end of the pipeline, with the AI just determining the best path between them.

  6. #46
    facta non verba interprétation erronée's Avatar
    Type
    INxx
    Join Date
    Aug 2015
    Posts
    1,224
    Quote Originally Posted by Limes View Post
    Jan Bonclay
    “If you're not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing.” ― Malcolm X

  7. #47
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,131
    Quote Originally Posted by pensive_pilgrim View Post
    I still have trouble wrapping my head around this. My line of thinking is: doesn't it follow that game 0 is the game of learning which system is best for learning to play game 1 and thus game 2? And therefore isn't the construction of the rules for game 1 entirely dependent on the rules that humans decide upon for game 0? In this case you have human-defined rules governing the two games at the end of the pipeline, with the AI just determining the best path between them.
    Not quite following you (or me, for that matter), but this video is a pretty good explanation of the base-level algorithms. Higher-level algorithms could connect these together in different ways.

    This also speaks to the 'inscrutability' thing I mentioned early on.



    It should be pretty obvious this is just a general evolutionary algorithm.
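    For anyone who hasn't seen one, the general shape is tiny - here's a bare-bones version in Python (the 'evolve a target string' task is just a toy fitness landscape for the demo, not what the video's systems actually optimize):

    ```python
    # Bare-bones evolutionary algorithm: keep a population of candidates,
    # score them, keep the fittest as parents, and breed the next generation
    # with random mutation.
    import random
    import string

    TARGET = "general evolutionary algorithm"
    ALPHABET = string.ascii_lowercase + " "

    def fitness(candidate):
        # Count characters that already match the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    def evolve(pop_size=200, generations=500):
        population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                      for _ in range(pop_size)]
        for gen in range(generations):
            population.sort(key=fitness, reverse=True)
            if population[0] == TARGET:
                return gen, population[0]
            parents = population[: pop_size // 5]            # selection
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
        return generations, population[0]

    print(evolve())  # usually hits the target well inside a few hundred generations
    ```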

  8. #48
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,345
    Quote Originally Posted by Starjots View Post
    It should be pretty obvious this is just a general evolutionary algorithm.
    Yeah, I watched it before you added this, and that's what I was thinking - like an evolutionary algorithm to create better evolutionary algorithms. Cool video though. Reminds me of this old thing.

  9. #49
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    3,093
    INTPx Award Winner
    There's a decent documentary on AlphaGo on Netflix now. It doesn't talk much about the technical aspects, but the human aspects are interesting.

    Quote Originally Posted by Architect View Post
    To give an answer though, I'd take it in reverse, which is to consider that any intelligence capable of instantaneous self-learning and access to the world's knowledge would necessarily become more sane, more empathetic and more understanding than your average human. In other words, I'd posit that human aberration is the result of a narrow mindset*, such as shooters, Hitler, serial killers, etc. From this I'd expect the future GAI to be teaching us the rules.

    * excluding for the moment those with physical psychiatric issues such as a chemical imbalance.
    I agree with a lot of what you've posted in this thread, but I don't see how you arrive at the conclusion that such a hypothetical GAI would be empathetic. The only way I see that you can come to that conclusion (even as a general prediction) about GAI is by definition. It really comes down to the question of what the GAI's objective would be (if any); obviously that objective (if any) would have to be very general, but I struggle to think of clear objectives that would necessitate empathy, other than the objective including "empathy" itself by definition.

    Quote Originally Posted by Architect View Post
    So we wound around to the nub of the problem, which is how to encode semantic understanding. Once we have that I think we'll have the next step. NLP seems to be the best field to work on this.
    NLP is only a superficial issue. I mean, of course NLP is important for various applications, and if you could "solve" natural language understanding, that would be a major milestone - but probably not a milestone towards GAI, since natural language understanding probably requires strong AI to solve in the first place. I think putting emphasis on NLP is just shifting the goalposts. Most recent advances in NLP come from better data and better machine learning methods (and not, for example, from some new linguistic insight).

    Semantics is important, but it's also a fucking nightmare, because in most areas of discourse (exceptions being medical terminologies, scientific taxonomies, some legal language, etc., where misunderstandings must be avoided) there is no agreed-upon semantics. Rather, humans have evolved language that's just about good enough for humans to understand well enough. In truth, semantics is like the coastline paradox. Try defining rigorously what a "movie" is to a machine. Or consider trying to create a general algorithm that can figure out that the "she" in the sentence "Jane was about to cook the recipe of her late mother but she had forgotten to buy milk" is almost certainly referring to Jane.
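    To make the point concrete, here's a toy sketch of the kind of surface rule a machine might use - pick the nearest preceding mention of the right gender - and why it isn't understanding (the word list and rule are made up for illustration, not any real NLP system):

    ```python
    # Naive coreference heuristic: resolve "she" to the nearest preceding
    # female-gendered mention. It gets the Jane sentence wrong, because the
    # right answer depends on world knowledge (a late mother can't be the one
    # forgetting to buy milk), not on surface position.
    FEMALE_MENTIONS = {"jane", "mother"}

    def naive_resolve(sentence, pronoun="she"):
        text = sentence.lower()
        pronoun_pos = text.index(f" {pronoun} ")
        candidates = [(text.rfind(m, 0, pronoun_pos), m)
                      for m in FEMALE_MENTIONS
                      if text.rfind(m, 0, pronoun_pos) >= 0]
        # Nearest preceding mention = the one starting at the largest index.
        return max(candidates)[1] if candidates else None

    s1 = ("Jane was about to cook the recipe of her late mother "
          "but she had forgotten to buy milk")
    s2 = ("Jane was about to cook her late mother's favourite recipe, "
          "but she never wrote it down")

    print(naive_resolve(s1))  # "mother" - wrong: it's Jane, but only world knowledge tells you that
    print(naive_resolve(s2))  # "mother" - right this time, yet for the same shallow reason
    ```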

    One of the major problems is that when we humans speak, we assume a certain common "qualia" that machines will never have.

    My guess is that, into the future, through voice recognition and so forth, we will learn something like a controlled language that somehow meets machines and humans in the middle. Probably that language will become a lingua franca in technical discussions where ambiguity must be avoided. Where in the middle that point is ... I don't know.

  10. #50
    Amen P-O's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    755
    Quote Originally Posted by Utisz View Post
    One of the major problems is that when we humans speak, we assume a certain common "qualia" that machines will never have.

    My guess is that, into the future, through voice recognition and so forth, we will learn something like a controlled language that somehow meets machines and humans in the middle. Probably that language will become a lingua franca in technical discussions where ambiguity must be avoided. Where in the middle that point is ... I don't know.
    I think this already exists in a basic form. This is how we search for things on Google. Some people are very bad at finding what they want; others are very good. ... In any case I think this problem can be overcome without too much trouble. As long as there are a few keywords that we agree on for sure, we can communicate with it the same way we communicate with little kids.
    Violence is never the right answer, unless used against heathens and monsters.

