
Thread: AI: Superintelligence - paths, dangers and strategies

  1. #11
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,612
    Quote Originally Posted by Starjots View Post
    Does it bother anyone that once a computer gets better than the best human at some game, humans can never reclaim being the best? Does this imply anything?
    No more than it bothers me that John Henry lost to the steam drill.

  2. #12
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    544
    You're asking great questions, but the present technology (deep neural nets) isn't sufficient for GAI (general AI). To name one obvious problem, a baby can learn what a dog is from one or two examples; a DNN needs thousands and can still screw it up. The theory hasn't changed much in decades; the only reason we're seeing success now is faster hardware (GPUs) and the internet providing large data sets. So talking about how we'll handle the rules for AI is premature with today's technology, and since we don't yet know anything about the technology that will get us there, we can't say much about how the 'rules' will work in that context.
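
    To make that data-efficiency gap concrete, here's a minimal sketch in Python (the feature vectors, numbers and class names are invented purely for illustration): a nearest-neighbour lookup can generalise from one stored example per class, while a net trained from scratch typically needs thousands of labelled examples before it's reliable.

    Code:
    # Purely illustrative sketch of the data-efficiency gap. Assumption: each
    # "image" has already been reduced to a small feature vector; the vectors
    # and class names below are made up.
    import numpy as np

    def one_shot_classify(query, prototypes):
        """Nearest neighbour over single stored examples: one example per class."""
        labels = list(prototypes.keys())
        dists = [np.linalg.norm(query - prototypes[lbl]) for lbl in labels]
        return labels[int(np.argmin(dists))]

    rng = np.random.default_rng(0)
    prototypes = {                                  # one example each, child-style
        "dog": rng.normal(0.0, 1.0, size=8),
        "cat": rng.normal(3.0, 1.0, size=8),
    }
    query = prototypes["dog"] + rng.normal(0.0, 0.2, size=8)   # a slightly different dog
    print(one_shot_classify(query, prototypes))                # -> "dog"

    # A deep net trained from scratch, by contrast, typically needs thousands
    # of labelled examples per class before its decision boundary is reliable.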

    To give an answer, though, I'd take it in reverse, which is to consider that any intelligence capable of instantaneous self-learning and access to the world's knowledge would necessarily become more sane, more empathetic and more understanding than your average human. In other words, I'd posit that human aberration is the result of a narrow mindset*, as with shooters, Hitler, serial killers, etc. From this I'd expect the future GAI to be teaching us the rules.

    * excluding for the moment those with physical psychiatric issues such as a chemical imbalance.

  3. #13
    Member rhinosaur's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    576
    I'd love to see a generalized physics engine of sorts. Imagine some software that knows all the known laws of physics; you give it some starting conditions and ask it for whatever you want, and it gives you the answer with good accuracy. Such a piece of software would be revolutionary.
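
    As a toy version of that interface (a sketch only; it assumes scipy, and the drag model and constants are invented for illustration): encode the relevant laws once, hand the engine starting conditions, and read off the answer. Scaling this from one projectile to "all the known laws" is exactly the hard part.

    Code:
    # Toy "generalised physics engine": the laws here are just Newtonian gravity
    # plus an assumed quadratic drag term; you hand it starting conditions and it
    # returns the outcome. Purely a sketch, not a real general-purpose engine.
    import numpy as np
    from scipy.integrate import solve_ivp

    G, DRAG = 9.81, 0.02                       # m/s^2, and an assumed drag coefficient

    def laws(t, state):
        x, y, vx, vy = state
        speed = np.hypot(vx, vy)
        return [vx, vy, -DRAG * speed * vx, -G - DRAG * speed * vy]

    def hit_ground(t, state):                  # stop the run when y falls back to 0
        return state[1]
    hit_ground.terminal = True
    hit_ground.direction = -1

    v0, angle = 50.0, np.pi / 4                # "starting conditions"
    state0 = [0.0, 0.0, v0 * np.cos(angle), v0 * np.sin(angle)]
    sol = solve_ivp(laws, (0.0, 60.0), state0, events=hit_ground, max_step=0.01)
    print(f"predicted range: {sol.y[0, -1]:.1f} m after {sol.t[-1]:.1f} s of flight")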

    The software people in the room are probably thinking this is easy. The physicists in the room are probably thinking it's impossible.

    We've got a million pieces of software to do this for various things already. For example, software exists where I can plug in the elements I want, and the pressure and temperature I want, and it'll come up with phase diagrams as a function of the concentrations of the various species present. No problem. Except it's wrong half the time, and it's a million miles away from generalization. It won't predict precipitation hardening, because that varies with your process, or tell you what etchants to use if you make a thin film of it, etc. And complicated problems like density functional theory can still take days of computation time, even on supercomputer clusters.

    There has been some effort in the materials science community to start making huge databases with contributors around the world. See, for example, the Materials Project: https://materialsproject.org/ So there is progress. But from my perspective it's been slow going.

  4. #14
    know nothing pensive_pilgrim's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    4,612
    Quote Originally Posted by rhinosaur View Post
    I'd love to see a generalized physics engine of sorts. Imagine some software that knows all the known laws of physics; you give it some starting conditions and ask it for whatever you want, and it gives you the answer with good accuracy. Such a piece of software would be revolutionary.

    The software people in the room are probably thinking this is easy. The physicists in the room are probably thinking it's impossible.

    We've got a million pieces of software to do this for various things already. For example, software exists where I can plug in the elements I want, and the pressure and temperature I want, and it'll come up with phase diagrams as a function of the concentrations of the various species present. No problem. Except it's wrong half the time, and it's a million miles away from generalization. It won't predict precipitation hardening, because that varies with your process, or tell you what etchants to use if you make a thin film of it, etc. And complicated problems like density functional theory can still take days of computation time, even on supercomputer clusters.

    There has been some effort in the materials science community to start making huge databases with contributors around the world. See, for example, the Materials Project: https://materialsproject.org/ So there is progress. But from my perspective it's been slow going.
    I think that piece of software is the one we're living in. At least, intuitively it makes sense to me that to keep track of all the information in the universe, you would need the whole universe. Everything that interacts with anything else.

    You did say "known laws", but we use so many approximations, and at the fundamental level approximations are all we really have. It also seems like the fundamental nature of the universe is probabilistic, while computers are designed to work deterministically. And you would have to account for the observer in your model of the physical universe, so it seems like you could still only see something from the perspective in which it happens.
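
    A small illustration of that deterministic-versus-probabilistic point (Python with numpy, purely a sketch): what a computer calls randomness is a seeded, reproducible algorithm, which it then uses to approximate probabilistic behaviour - run the same "random" simulation twice with the same seed and you get bit-identical answers.

    Code:
    # A computer's "randomness" is a seeded, perfectly reproducible algorithm,
    # used here to approximate a probabilistic quantity via Monte Carlo.
    import numpy as np

    def monte_carlo_pi(n_samples, seed):
        rng = np.random.default_rng(seed)          # deterministic pseudo-randomness
        pts = rng.random((n_samples, 2))
        inside = np.sum(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0)
        return 4.0 * inside / n_samples

    print(monte_carlo_pi(1_000_000, seed=42))      # ~3.1416
    print(monte_carlo_pi(1_000_000, seed=42))      # identical, bit for bit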

  5. #15
    Member rhinosaur's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    576
    I see you're a physicist

  6. #16
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,167
    Quote Originally Posted by Architect View Post
    You're asking great questions, but the present technology (deep neural nets) isn't sufficient for GAI (general AI). To name one obvious problem, a baby can learn what a dog is from one or two examples; a DNN needs thousands and can still screw it up. The theory hasn't changed much in decades; the only reason we're seeing success now is faster hardware (GPUs) and the internet providing large data sets. So talking about how we'll handle the rules for AI is premature with today's technology, and since we don't yet know anything about the technology that will get us there, we can't say much about how the 'rules' will work in that context.
    I think I see what you're saying. We have to get there first.

    An analogy might be the development of internet technology between the late 60s and the mid 90s. It is easy to say in retrospect 'why didn't they build in better security features up front?' when the real issue was getting it working and deployed. Most of the genius was going into that part.

    The rapid take-off of the technology happened on IPv4 bones, and once it was widespread it was difficult to retrofit. For evidence I point to the legion of people now making a living in computer security, of whom I am sort of one, though I started as a network person. It is a great example, in my mind, of things tending to evolve through the efforts of many, sometimes at cross purposes. The results were pretty remarkable, but now we have to deal with the unintended consequences as well.

    Though the technology isn't there today, a lot of money and brains are going in that direction, and they have past big pushes to learn from. The money being thrown in tells me many plan to make a buck in return (which is fine), which in turn will have some effect on me. Simplistic, but there it is.

    To give an answer, though, I'd take it in reverse, which is to consider that any intelligence capable of instantaneous self-learning and access to the world's knowledge would necessarily become more sane, more empathetic and more understanding than your average human. In other words, I'd posit that human aberration is the result of a narrow mindset*, as with shooters, Hitler, serial killers, etc. From this I'd expect the future GAI to be teaching us the rules.

    * excluding for the moment those with physical psychiatric issues such as a chemical imbalance.
    Humans exist for our own purposes; we're only becoming empathetic to other species of late, I'd say. And our brains, animal-wise, are pretty good. So perhaps a good heart does not necessarily follow from a good brain - at least interspecies-wise.

    But, given our hardware limitations, the potential must exist for what you say.

  7. #17
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    544
    Quote Originally Posted by Starjots View Post
    I think I see what you're saying. We have to get there first.
    Get closer, at least, I think.

    An analogy might be the development of internet technology between the late 60s and the mid 90s. It is easy to say in retrospect 'why didn't they build in better security features up front?' when the real issue was getting it working and deployed. Most of the genius was going into that part.
    You're right, but in that example IPv4 & Ethernet succeeded specifically because of the lack of security, which meant anybody could hook up to the system; it enabled nations to do so without, say, even needing diplomatic relations. Sort of an anthropic principle applied to security. And I have a memory that security was actually considered, but that may be false.

    Though the technology isn't there today, a lot of money and brains are going in that direction, and they have past big pushes to learn from. The money being thrown in tells me many plan to make a buck in return (which is fine), which in turn will have some effect on me. Simplistic, but there it is.
    I'm just talking about general AI. Present DNNs are adequate to perform much of what humans can do. Basically, anything a human takes a second or two to do, a DNN can do better.

  8. #18
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    544
    Quote Originally Posted by rhinosaur View Post
    I'd love to see a generalized physics engine of sorts. Imagine some software that knows all the known laws of physics; you give it some starting conditions and ask it for whatever you want, and it gives you the answer with good accuracy. Such a piece of software would be revolutionary.

    The software people in the room are probably thinking this is easy. The physicists in the room are probably thinking it's impossible.
    My degrees are in physics but I've worked in software my whole career - what does that make me? Anyhow, this is a huge field, but it's difficult. Two good examples are plasma modeling for fusion reactors and weather modeling. The physics is very well understood, but the systems are difficult to model. I don't work in the field so I don't know for sure why it's difficult, but I suspect it's a lack of computational power and of data. By data I mean there are factors that creep in that we have difficulty modeling.
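
    On the lack-of-computational-power point, a back-of-envelope sketch (Python; it assumes an explicit 3D grid code with an advection-type CFL limit, so treat the numbers as illustrative, not as a statement about any particular weather or plasma code): every halving of the grid spacing costs roughly sixteen times the work, so resolution runs out long before the physics does.

    Code:
    # Rough cost model for an explicit grid-based solver in 3D, assuming the
    # stable time step shrinks in proportion to the grid spacing (CFL limit).
    # Illustrative assumption only.
    def relative_cost(halvings):
        """Compute multiplier after halving the grid spacing `halvings` times."""
        cells_factor = 2 ** 3          # 8x more grid cells per halving (3D)
        steps_factor = 2               # 2x more time steps per halving (CFL)
        return (cells_factor * steps_factor) ** halvings

    for h in range(4):
        print(f"{h} halvings of grid spacing -> ~{relative_cost(h):>5}x the compute")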

  9. #19
    Senior Member Starjots's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    2,167
    Quote Originally Posted by Architect View Post
    You're right, but in that example IPv4 & Ethernet succeeded specifically because of the lack of security, which meant anybody could hook up to the system; it enabled nations to do so without, say, even needing diplomatic relations. Sort of an anthropic principle applied to security. And I have a memory that security was actually considered, but that may be false.
    That tickled a memory: around 1990 the GOSIP protocol suite was mandated, and I believe folks were talking about OSI quite a bit before that. Where I worked, the word coming down from the top was that GOSIP would replace TCP/IP because it was the government standard. So we all read up on it, and I'm pretty sure it had lots of security built in, as well as other nifty bells and whistles.

    But even by 1990 the race was over before it started: TCP/IP worked, and you could buy off-the-shelf equipment right then and actually build networks with the stuff. It integrated easily with any LAN technology, and a monkey could comprehend it once you made the mental shift.

  10. #20
    Pull the strings! Architect's Avatar
    Type
    INTP
    Join Date
    Feb 2014
    Location
    West Coast
    Posts
    544
    Taking it as an exercise, let's assume that present deep neural nets could be evolved into a superintelligence with some additional, yet-to-be-discovered technologies. How would we influence its development? The easiest way would be the same way we teach children: control the information given to the GAI, that is, give it training examples with the outcomes you want. This was shown recently with Tay, a Microsoft Twitter learning chatbot which turned racist after a day of being opened up (they shut it down). People saw an opportunity to 'game' the bot with racist/sexist/Xist tweets, and it worked brilliantly. Tay failed because it didn't have a concept of the Golden Rule. How does an intelligence get this concept? I don't know how you could program that in at present, because you need to give the GAI a semantic understanding of "harm". Which gets to the core problem with present-day AI: we have syntactic but not semantic understanding.
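
    To make the "control the training examples" point concrete, here's a toy sketch in Python (the class, phrases and labels are all invented for illustration): the model has no concept of harm, only whatever statistics its curated data happens to carry, which is exactly why flooding Tay with hostile input flipped its behaviour.

    Code:
    # Minimal sketch of "control the information given": a toy text model that
    # only learns word/label co-occurrence counts. The phrases and labels below
    # are invented; the curation of the training set *is* the behaviour you get.
    from collections import Counter, defaultdict

    class ToyChatModeration:
        def __init__(self):
            self.counts = defaultdict(Counter)       # word -> Counter of labels

        def train(self, examples):                   # examples: (text, label) pairs
            for text, label in examples:
                for word in text.lower().split():
                    self.counts[word][label] += 1

        def predict(self, text):
            votes = Counter()
            for word in text.lower().split():
                votes.update(self.counts[word])
            return votes.most_common(1)[0][0] if votes else "unknown"

    curated = [("humans are wonderful", "friendly"), ("have a nice day", "friendly"),
               ("that group is awful", "hostile"), ("everyone like you is awful", "hostile")]
    bot = ToyChatModeration()
    bot.train(curated)
    print(bot.predict("you are wonderful"))          # -> "friendly"
    # Flood it with hostile "training" instead (the Tay failure mode) and the same
    # architecture happily learns the opposite outputs - no notion of harm involved.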

    So we've wound around to the nub of the problem, which is how to encode semantic understanding. Once we have that, I think we'll have the next step. NLP seems to be the best field for working on this.
