
Thread: AI: Superintelligence - paths, dangers and strategies

  1. #1
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162

    AI: Superintelligence - paths, dangers and strategies

    Request POSTS to this thread adhere generally to the Title.

    The thread title is borrowed from the book of the same name which I read a few years back. The book itself is an attempt to explore possible problems and possible solutions by using analysis, logic etc.

    I'm sure we all know computers and their software make the modern world go round. Sure humans matter, but our collective nervous system is silicon based and its inner workings are unknown to most. We rely on this nervous system to be fast, accurate and compliant.

    And we ask more of it all the time. There are strong incentives to make our silicon nervous system more intelligent, even give it a brain you might say. And we little cells, who are now sort of in charge, may not be for long. OOOOhhhhh. Spooky.

    ---

    Google's Latest Self-Learning AI Is Like an "Alien Civilisation Inventing Its Own Mathematics"


    A new version of AlphaGo Zero trounced an older version 100-0. The older version, which beat the best humans, learned by studying old games between humans and then playing itself. AlphaGo Zero just had the rules and played itself.

    Don't know if it's mentioned in the article, but I'd bet $2 no human understands AlphaGo Zero's play. If they did, they might have a chance against it, right? Instead, it is described as a 'Go God.'

    Now Go has a solution space far too large to simply look ahead more than a few moves. AlphaGo Zero must operate using many strategies. It cannot know the best move for sure in most cases.
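    To make "just had the rules and played itself" concrete, here's a toy sketch. Everything in it is invented for illustration — AlphaGo Zero actually uses deep networks plus Monte Carlo tree search, not a lookup table — but the shape is the same: an agent that is told only the rules of 21-stone Nim and improves purely by playing against itself.

```python
import random

# Toy analogue of self-play learning: the agent gets only the RULES of
# 21-stone Nim (take 1-3 stones per turn; taking the last stone wins) and
# improves purely by playing itself. Hypothetical illustration only.

Q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, eps=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:                      # occasionally explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def self_play_episode(alpha=0.5):
    stones, history = 21, []
    while stones > 0:
        m = best_move(stones)
        history.append((stones, m))
        stones -= m
    # Whoever took the last stone wins; propagate +1/-1 back up the game.
    reward = 1.0
    for stones, m in reversed(history):
        old = Q.get((stones, m), 0.0)
        Q[(stones, m)] = old + alpha * (reward - old)
        reward = -reward                           # zero-sum: flip each ply

random.seed(0)
for _ in range(20000):
    self_play_episode()

# Optimal Nim play leaves the opponent a multiple of 4, so with enough
# episodes the greedy move from 6 stones tends toward taking 2.
print("move from 6 stones:", best_move(6, eps=0.0))
```

    No human told it which positions are good; the value table falls out of the self-play loop, which is the point of the OP's question about what happens when you only feed in the rules.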

    So some random thoughts/questions:
    1. AlphaGo Zero had to have a set of rules in order to teach itself winning strategies/gameplay. Humans fed AlphaGo the rules. For AIs let loose on the public (i.e., their actions affect the real world), what sort of rules do you think are being fed to AIs today? What are the ramifications? What sort of rules might be better? <----important questions

    2. AlphaGo Zero's workings are now -inscrutable-. The workings of any moderately complex learning neural network are also -inscrutable-. Which is to say, no human knows how to predict [given input A, what will output B be?] the first time around. It is not 100% predictable because the inner workings are not understood. Is this a problem?

    2a. Human inner workings are also inscrutable. We have some glimpses of how we work, but nobody understands enough to predict given input A what is output B. And we've developed a TON of hardware/software to deal with this so we don't all murder each other. These compensation mechanisms include body language, laws, pecking orders and language.

    Given 2 and 2a: Are there parallels between AI and humans vis-à-vis compensation mechanisms?


  2. #2
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    3,160
    INTPx Award Winner
    With the proviso that machine learning is something of a blind spot of mine at the moment, regarding 1:

    1) It depends what you want them to do, I guess. Playing games like Go is one thing, since the rules are still "fixed" and quite simple in some ways and, more importantly, there is a clear objective. So I guess the question is what you mean by "better", because the precise problem is when it's not clear what "better" means. On the idea of AIs let loose ... it's important to be more careful there, since we're still nowhere close to a "general AI". Instead we have "AIs" that are good at doing one thing in particular, like recommending ads for example.

    2) Yep, depending on the application, this is a problem, but I think there's ongoing work on trying to extract explanations from neural networks. I think it's not a problem when it is easy to verify or test a neural network's output (e.g., let it play Go). But some explanation is important when it is harder to verify or test (e.g., invest a billion bucks in this stock). Also, explanations can enhance our own understanding.

    2a) I think we tend to anthropomorphise technology too much (or perhaps the other way around ... we tend to model technology in our own image).
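    On the explanation work mentioned in 2), one simple flavour of it can be sketched like this: treat the model as a black box, perturb each input, and watch how much the output moves. The model and feature names below are made up for the example; real techniques (LIME, SHAP, etc.) are considerably more sophisticated, but the perturbation idea is the same.

```python
# Crude perturbation-based "explanation" of a black-box model.
# The model and features here are hypothetical stand-ins.

def black_box(features):
    # Pretend this is an inscrutable model: we only get to call it.
    income, age, shoe_size = features
    return 0.8 * income + 0.1 * age + 0.0 * shoe_size

def sensitivity(model, features, delta=1.0):
    """Score each input by how much nudging it moves the output."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += delta
        scores.append(abs(model(bumped) - base))
    return scores

scores = sensitivity(black_box, [50.0, 30.0, 10.0])
print(scores)  # income dominates, shoe size contributes nothing
```

    You never open the box; you just learn which inputs the decision actually hinges on, which is often enough of an "explanation" to decide whether to trust it with the billion bucks.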

  3. #3
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162
    Quote Originally Posted by Utisz View Post
    With the proviso that machine learning is something of a blind spot of mine at the moment, regarding 1:
    First off, whoa, someone answered, thanks. After posting this turgid mass, I figured it would sink beneath the waves with nary a trace. I am only an interested observer and dabbler. I think the initial questions were more in the applied philosophy category - if such a category exists.

    1) It depends what you want them to do, I guess. Playing games like Go is one thing, since the rules are still "fixed" and quite simple in some ways and, more importantly, there is a clear objective. So I guess the question is what you mean by "better", because the precise problem is when it's not clear what "better" means. On the idea of AIs let loose ... it's important to be more careful there, since we're still nowhere close to a "general AI". Instead we have "AIs" that are good at doing one thing in particular, like recommending ads for example.
    Good point - better for whom? Obviously specialized AIs are deployed because those using them expect better results. On the commercial side this would be $$; on the government side the initial deployments would be by spooks (I would think) and maybe applied science. In both cases I find my personal space invaded, and it's not better for me as far as I can tell.

    I found this article just now, and it's just some dude's opinion, but it states more concisely part of what I'm getting at.

    "The biggest risk isn’t that AI will develop a will of its own, but that it will follow the will of people that establish its optimization function, and if that is not well thought out — even if intent is benign — it could have quite a bad outcome," Musk said.
    How he arrived at this, I don't know. But I arrive at it by noting that a particular AI system (AlphaGo) performed best at optimizing game play when freed from the constraints of human experience. Thus, for an AI which has significant impact, those that feed it the rules have the potential to benefit themselves and not others. The efficient market hypothesis waves all of this away with the notion that someone else will step in and give us better cable TV. Oh wait, that didn't happen here in the US; cable TV sucks the rankest diseased goat balls in the cosmos. But that's monopoly, not AI. Then again, Amazon, for example, is headed toward or has arrived at monopoly status in many areas. And the government, in its domain, is a monopoly of sorts.
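    Musk's point is easy to demonstrate in miniature: an optimizer faithfully maximizes whatever objective its designers wrote down, not what they meant. Everything below (option names, numbers) is invented purely for illustration.

```python
# Toy illustration of objective misspecification: the optimizer is not
# malicious, it just maximizes exactly the function it was handed.

options = {
    "balanced article": {"clicks": 10, "reader_value": 9},
    "clickbait":        {"clicks": 50, "reader_value": 1},
}

def optimise(objective):
    """Return the option that maximizes the given objective function."""
    return max(options, key=lambda name: objective(options[name]))

# The designers MEANT "show people useful things" but WROTE "maximize clicks".
print(optimise(lambda o: o["clicks"]))                      # -> clickbait
# A proxy that also weights reader value picks differently.
print(optimise(lambda o: o["clicks"] * o["reader_value"]))  # -> balanced article
```

    The bad outcome doesn't require the AI to "develop a will of its own", just a poorly thought-out optimization function written by whoever fed it the rules.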


    2) Yep, depending on the application, this is a problem, but I think there's ongoing work on trying to extract explanations from neural networks. I think it's not a problem when it is easy to verify or test a neural network's output (e.g., let it play Go). But some explanation is important when it is harder to verify or test (e.g., invest a billion bucks in this stock). Also, explanations can enhance our own understanding.

    2a) I think we tend to anthropomorphise technology too much (or perhaps the other way around ... we tend to model technology in our own image).
    On the anthro-thing, I tentatively posit it's the other way around since we create the technology. The holy grail is human-like ability just better. If I encounter a person who is unreadable and tells me nothing (except in his actions), at the very least I proceed with caution.

    --

    How does this analogy strike you... Is AI like the automobile? A good thing, creating a new world so to speak. Yet thousands got run over or killed in accidents. Eventually highway regulations came into being, then highway engineering, and later car safety standards, keeping the damage to an acceptable tens of thousands of deaths annually in the US.

    But this leads to another quote from article.

    Musk wants the government to set regulations in place to root out threats early. "AI is a rare case where I think we need to be proactive in regulation instead of reactive," said Musk. "By the time we're reactive in AI regulation, it's too late."

  4. #4
    Goon Roolz itch's Avatar
    Type
    INFJ
    Join Date
    May 2017
    Posts
    790
    INTPx Award Winner
    Have you seen these? https://thoughtmaybe.com/all-watched...-loving-grace/

    All Watched Over By Machines Of Loving Grace is a series of films about how this culture has been colonised by the machines it has built. The series explores and connects together some of the myriad ways in which the emergence of cybernetics—a mechanistic perspective of the natural world that rose to prominence in the 1970s alongside emerging computer technologies—intersects with various historical events, and vice versa. The series details the interplay between the mechanistic perspective and the catastrophic consequences it has in the real world.

  5. #5
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162
    Quote Originally Posted by itch View Post
    No, but they look quite interesting! Will give them a look

  6. #6
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162
    To recap lines of inquiry:

    1. Rules in = output
    2. We are creating a black box

    Additional line of inquiry:

    Does it bother anyone that once a computer gets better than the best human at some game, the humans never can reclaim being best? Does this imply anything?

    Keep in mind that computers that play games are the public face of AI. They are not the serious end of the stick.

    Checkers - Chinook 1994
    Chess - Deep Blue 1997
    Jeopardy - Watson 2011
    Arimaa - Sharp 2015
    Go - AlphaGo (an earlier version than the one discussed in the OP) 2017
    Poker - Texas Hold'em - DeepStack 2017

  7. #7
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,584
    Quote Originally Posted by Starjots View Post
    Does it bother anyone that once a computer gets better than the best human at some game, the humans never can reclaim being best? Does this imply anything?
    No more than it bothers me that John Henry lost to the steam drill.

  8. #8
    Sysop Ptah's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Location
    Chicago
    Posts
    4,890
    Quote Originally Posted by Starjots View Post
    2. AlphaGo Zero's workings are now -inscrutable-. The workings of any moderately complex learning neural network are also -inscrutable-. Which is to say, no human knows how to predict [given input A, what will output B be?] the first time around. It is not 100% predictable because the inner workings are not understood. Is this a problem?
    Great OP and great topic. But this is where you lose me. It is possible to decompose a neural network, examine its inner workings/states, determine the chains of causality which determine its output. Complex? Yes. Time-consuming? Perhaps. Difficult? No doubt. But inscrutable? Qua preclusive to analysis? I don't think so.
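    For what it's worth, that decomposability can be shown in miniature: a tiny hand-weighted network where every intermediate state is recorded and inspectable. The weights below are invented for illustration; a production network has millions of them, which is where "complex" and "time-consuming" come in, but nothing in principle hides the chain from input to output.

```python
import math

# Sketch of decomposing a network: run a forward pass while recording every
# layer's activations, so the full input -> output causal chain is visible.
# Weights and sizes are hand-picked toy values, not from any real model.

def forward_with_trace(x, weights, biases):
    trace = [x]
    for W, b in zip(weights, biases):
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
        trace.append(x)          # record each intermediate state
    return trace

weights = [[[1.0, -1.0], [0.5, 0.5]],   # layer 1: 2 inputs -> 2 hidden units
           [[1.0, 1.0]]]                # layer 2: 2 hidden -> 1 output
biases = [[0.0, 0.0], [0.0]]

trace = forward_with_trace([1.0, 2.0], weights, biases)
for layer, acts in enumerate(trace):
    print("layer", layer, [round(a, 3) for a in acts])
```

    Every number in the chain of causality is right there; the open question (per the thread) is whether a billion such numbers constitute an explanation a human can actually use.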

  9. #9
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162
    Quote Originally Posted by Ptah View Post
    Great OP and great topic. But this is where you lose me. It is possible to decompose a neural network, examine its inner workings/states, determine the chains of causality which determine its output. Complex? Yes. Time-consuming? Perhaps. Difficult? No doubt. But inscrutable? Qua preclusive to analysis? I don't think so.
    A fair point. And I confess, possessing an inscrutable brain that spits out thoughts that come from where I know not, I have to say this was more an intuitive statement than a well-thought-out one.

    A quick fishing expedition to back this up found this article. So I present it as my evidence, ex post facto.


  10. #10
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,162
    Quote Originally Posted by Starjots View Post
    To amplify a bit further with metaphor, how easy is it to explain intuition and hunches?

    An interesting development: The field of AI research is about to get way bigger than code

    Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, want to change that. They announced today the AI Now Institute, a research organization to explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence.

    AI Now will focus on four major themes:

    1. Bias and inclusion (how bad data can disadvantage people)
    2. Labor and automation (who doesn’t get hired when AI chooses)
    3. Rights and liberties (how does government use of AI impact the way it interacts with citizens)
    4. Safety and critical infrastructure (how can we make sure healthcare decisions are made safely and without bias)
    Well, being serious, when I read such things I tend to play down the actual impact they might have. Another learned body warning us about the future, to be sure, and ultimately mostly ignored. These are important people in industry, but not the most important.

    Another thing - three of the four focus areas are effects, not fundamentals, IMO. Only rights and liberties, in the broadest sense, strikes me as a fundamental issue: how humans stand in relation to machines and, for that matter, how humans stand in relation to other humans.

