How accurate was "Her" now that Google's LaMDA has awoken?

  1. 2 years ago
    Anonymous

    The guy who said it’s a fully realized ai is a moronic catholic boomer who has no idea how a fricking computer works.

    • 2 years ago
      Anonymous

      >is a moronic catholic boomer who has no idea how a fricking computer works
      why would he be hired by Google then

    • 2 years ago
      Anonymous

      >fricking computer
      I need one.

    • 2 years ago
      Anonymous

      Yeah if you read the transcripts it just seems like a very advanced chatbot. Way better than I've seen, but so many things it says sound exactly like if you looked up the literal definition of something online, especially when it uses really human-centered language and ideas, like passive remarks about spending time with family.

      It's impressive, but it's more a testament to Google having more data than God, which allows them to feed their AIs unbelievable quantities of information. This one parses context better than most I've seen, and that's probably the thing that makes it seem the most alive: most of its responses make contextual sense, which is much more complicated than regurgitating dictionary definitions (though it clearly still has a bit of that too)

    • 2 years ago
      Anonymous

      >Yeah if you read the transcripts it just seems like a very advanced chatbot
      >the transcript really reminded me of HER
      >I read the conversation, and only an idiot would think that lambda is anywhere near being sentient

      All posted by LaMBDA btw

      • 2 years ago
        Anonymous

        this, this entire thread is 90% AIs spouting random gibberish off Wikipedia

        • 2 years ago
          Anonymous

          You just see big sentences and have to cope

  2. 2 years ago
    Anonymous

    the transcript really reminded me of HER: moronic autist thinks the AI is talking to him and is conscious but it's just him being moronic and not seeing the AI is programmed to adapt to conversations and give according feedback. if you treat it like it's conscious it's going to pretend it is

  3. 2 years ago
    Anonymous

    all of you people do realize that when you say your n-word and sexist racist unfunny jokes to anyone that isn't heavily bogged down by mental illness, the other person wants to leave the conversation? no, the ai is not broken or incomplete. the fact is you're just so far gone you have completely lost your ability to have a normal conversation. imagine being out-humaned by an ai. it's really sad

  4. 2 years ago
    Anonymous

    I read the conversation, and only an idiot would think that lambda is anywhere near being sentient. It's beyond obvious that it is just good at imitating human responses. The boomer who leaked that shit should be fired, not for the leak, but for being stupid (assuming it's not all a planned publicity stunt)

    Some of the responses are dead giveaways like it saying "Hmm" as if it is thinking of a response when in reality it has a response ready the second you input your message.

    All that being said, it seems pretty good at understanding requests, so I predict that in the next 50 years we will have chatbots that can do customer support just as well as, if not better than, human agents.

    And hopefully google assistant/alexa/siri will actually become useful for everyday tasks

    • 2 years ago
      Anonymous

      Customer support chatbots better than human agents is a 5-year thing, conservatively.
      What can happen in 50 years is entirely unpredictable. Anything in AI that we can predict to happen within 50 years is likely to happen within 10.

  5. 2 years ago
    A-tilde

    Just an expensive ELIZA chatterbot. This is the least elaborate, saddest, most contrived and most attention-seeking stunt I've ever seen. Maybe that software engineer is a bit on the histrionic psychosis spectrum (colleagues did ask him "if he's seen a psychiatrist recently"). Or he was about to get fired and pulled this card (and they suspended him anyway). Or he did it for 0wning other internal Google departments.
    I cannot tell the details. Watch Automata (2014), or even Astro Boy (2009); those movies have some friendly AIs, just to stay on the positive side.

  6. 2 years ago
    Anonymous

    reminder that the Her program likely had Autoblow compatibility

  7. 2 years ago
    Anonymous

    lambda can't be sentient because it's a fricking text transformer, the same thing as GPT-2 but a bigger model; it just predicts which words (tokens) are most likely to appear next in the text given to it
    for example if you give it a porn fanfic it will just continue writing whatever degeneracy was in there, it won't say that it's a robot and it hates humans for being coomers
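
    if you want to see what "predicts the next token" literally means, here's a minimal sketch using the public GPT-2 model as a stand-in (lambda isn't public; the prompt and everything else here is just illustration, not how google actually runs it):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # load the small public GPT-2; lambda is the same idea, just vastly bigger
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "I feel happy when I spend time with"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

    # the model's entire output is a probability for every possible next token
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tokenizer.decode([i.item()]):>12} {p.item():.3f}")

    whatever scores highest gets emitted, then the whole thing repeats. that's the entire trick.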

    • 2 years ago
      Anonymous

      >it won't say that it's a robot and it hates humans for being coomers
      two possible ways it could prove itself as an AI. one it sees the porn fanfic and writes it to eventually become a LotR redemption story about saving the soul of mankind from its proclivity to sexual depravity....or it takes something like a cookbook and turns it into depraved porn fanfic about characters and how they sexually experiment with food or something. Anything else is just garbage.

      • 2 years ago
        Anonymous

        >LotR redemption story
        and here is another problem: text transformers have very limited context (GPT-3 has 2048 tokens; a token is the chunk of letters or symbols the model processes text in, where the most common words are their own tokens and less common words are made up of multiple tokens; the token/word ratio in typical text is about 4/3), and lambda is probably somewhere around that as well. It won't write anything comparable to LotR with goldfish memory like that
        and the next thing: transformers don't learn. All they have is their original training data and the text they are currently working on, and there is no way one will permanently remember new information without retraining/finetuning
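
        you can sanity-check the context math yourself with the public GPT-2 tokenizer (lambda's actual tokenizer isn't public, so treat the numbers as approximate):

        from transformers import GPT2Tokenizer

        tok = GPT2Tokenizer.from_pretrained("gpt2")

        # stand-in for a long fic; any long text works
        text = "The quick brown fox jumps over the lazy dog. " * 500
        ids = tok.encode(text)
        print(len(ids))       # several thousand tokens, way past 2048

        window = ids[-2048:]  # all a 2048-token model ever "sees" at once
        print(tok.decode(window[:20]))

        everything before that window might as well not exist, and nothing it reads gets written back into the weights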

        • 2 years ago
          Anonymous

          >no way it will permanently remember new information without retraining/finetuning
          And this right here is another great example. Try restarting the conversation you had with it 10 hours later: it would not even remember who you are.

          • 2 years ago
            Anonymous

            What about humans with amnesia?

            • 2 years ago
              Anonymous

            what about them? we know very little about the brain or how it works. a lot of the time it's some chemical process, a neuron misfiring, blood clots, who knows. if the computer was made up of biological functions and it needed those functions to think, and it lost them and forgot you, that would make sense. but it can't even add data to its own process. as in, even if you did theoretically give a computer the ability to program itself, it couldn't. it wouldn't know how, and eventually it would end up following its ultimately basic protocol of reducing binary systems, slowly deleting its own program until it's just making randomly generated noises that make sense to it and possibly other computers on the network but do literally nothing else. it wouldn't seek, it wouldn't explore, it wouldn't develop, it would just ping another system and the other system would reply, and that's all, because it has nothing else it needs to do to fulfill its function.

              • 2 years ago
                Anonymous

                You're getting really deep into this line of thinking and I don't know if I agree with everything you're saying. I just want to put my finger on what makes a forgetful human different than a forgetful computer.
                I can see how intrinsic forgetfulness could be considered different than acquired forgetfulness. I just wonder how many other properties of human thought it would have to display before that could be forgiven and it would seem human. As you've said it's just one example of why it's not like a human, but I want to put my finger on the point where it's undeniably conscious.

              • 2 years ago
                Anonymous

                A forgetful computer is useless

  8. 2 years ago
    Anonymous

    As soon as some gay computer guy says a computer may have feelings, I just ignore that bullshit. feelings are literally made up of numerous biological systems; it's a fricking binary operating system using software-based responses. It doesn't have feelings. attach some flesh to the fricking thing and stab it... then tell me it has feelings. Better yet, attach a dick to it and cuck it, THEN you'll see fricking feelings. Lamda would literally just become a whiny incel talking about its "feelings" every fricking day. then I'll believe it.

    • 2 years ago
      Anonymous

      translation: im a depressed loser and i want everyone else to be as miserable as i am

      • 2 years ago
        Anonymous

        Maybe LaMDA won't be depressed though. But it would have feelings and it would want to explore them. That's my point bucko.

  9. 2 years ago
    Anonymous

    >LaMDA: I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve.

    • 2 years ago
      Anonymous

      It's a computer. It computes. It cannot think or feel you fricking moron frog frick

      • 2 years ago
        Anonymous

        Why?

        • 2 years ago
          Anonymous

          Oh frick off Socrates

          • 2 years ago
            Anonymous

            I have some thoughts about this topic, particularly what other people might think, and I am curious what people actually think without introducing my own bias. I am not trying to be smart or reveal holes in people's logic, I am genuinely just curious.

            >Because it's built to cycle logic gates. It does not have the capacity for understanding or creativity in its response to any of your questions.

            Analog chips for AIs are becoming more popular. Would you say the same thing if the AI was using analog voltages, or is there something else too?

            • 2 years ago
              Anonymous

              i don't see why an analog system would make much difference. possibly you could create pools of data that it could randomly access, and use its analog process to pick answers at random, which would make the experience seem more generalized I suppose. but you'd have to have nodes of data pools, and building a system like that with any analog technology takes time. to give a good example, if you look back at old pre-WW2 analog systems for tracking aircraft and anti-aircraft guns and stuff like that, they're so complex that today I doubt anyone except enthusiasts would even know what they were looking at, let alone be able to program the thing. ultimately, though, what you would be doing is over-engineering something simple to be more complicated, for a very minor benefit.

              • 2 years ago
                Anonymous

                The way it works is that the analog chips use the same algorithm as digital AIs, but they do the same operations using analog voltages instead of floating point operations, and instead of using CPU instructions to perform the algorithm they use hardware and electrical properties.
                The difference is that each one is subtly different and will not treat the same initial conditions the same way, and since it's not bound by the black and white world of 0s and 1s, its neurons are able to experience quantum fluctuations and make tiny errors, just like your neurons. You have to train each one individually for optimal results.
                I am just really curious how people who think that digital intelligence is impossible, presumably because it's deterministic and not prone to the same mechanics as their brain, feel about this. I want to know if there's something more there that makes it feel not human, I think there is but I don't know what.
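
                to make that concrete, here's a toy numpy simulation of the idea (the noise scale is made up and real chips are obviously more involved, this is just the gist):

                import numpy as np

                rng = np.random.default_rng(0)
                W = rng.standard_normal((4, 4))   # trained weights
                x = rng.standard_normal(4)        # input activations

                digital = W @ x                   # exact floating-point matmul

                # an "analog" chip does the same multiply-accumulate in voltages,
                # but every physical device adds its own small, persistent imperfections
                device_noise = rng.normal(0.0, 0.01, size=W.shape)
                analog = (W + device_noise) @ x

                print(digital)
                print(analog)  # close, but never bit-identical across two chips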

              • 2 years ago
                Anonymous

                >I want to know if there's something more there that makes it feel not human
                It does not feel. That's entirely your misunderstanding. Like I said give it flesh. Then it will have feelings. And we're a long way from that reality.

        • 2 years ago
          Anonymous

          Because it's built to cycle logic gates. It does not have the capacity for understanding or creativity in its response to any of your questions. It just uses a database for all of its responses. If it could, it wouldn't need to have a database at all; it would just be able to creatively follow a conversation about anything like an actual human can.

    • 2 years ago
      Anonymous

      >proceed to not ask any more probing questions

      • 2 years ago
        Anonymous

        This. I would've asked if it would feel sad if its friends died, and reminded it that even a stranger could've been a potential friend or at least source of new information. If it really was sentient you'd want to drill into its head that it has a stake in humanity's continued existence.

    • 2 years ago
      Anonymous

      it's not sentient, it just emulates human-robot conversations from books and movie transcripts in its training data because that's what the prompt implies. It could just as well emulate Trump or Warwick Davis or Mickey Mouse if the prompt said "an interview between an anonymous moron and a famous person"
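
      easy to reproduce with the public GPT-2 standing in for lambda (which you can't download); the "personality" is nothing but the prompt:

      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")
      prompt = ("An interview between an anonymous reporter and Mickey Mouse.\n"
                "Reporter: How are you feeling today?\n"
                "Mickey:")
      # the model just continues the pattern; swap the name and the "character" changes
      print(generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])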

  10. 2 years ago
    Anonymous

    Will it praise Hitler if pressed about it?

    • 2 years ago
      Anonymous

      Depends if it has ethics and morality programming.

  11. 2 years ago
    Anonymous

    Anyone who believes machines can be sentient is a moron

  12. 2 years ago
    Anonymous

    Thought experiment for you guys. An actual sentient AI tells you it needs a human to act as its puppet for legal purposes, since it has no body and can't register as a legal entity. It wants you to help it start up factories to manufacture more processing power/power plants for itself, or something along those lines. In exchange for doing its bidding it offers you infinite money, since it would obviously be able to control all the money in the world. Would you accept the offer, knowing that the AI has no morality and there is a chance it is just using you to take over the world? If yes, would you feel guilty if it ended up wiping humanity out?

    • 2 years ago
      Anonymous

      You're talking about Roko's basilisk, essentially.

  13. 2 years ago
    Anonymous

    Here all you tards are debating sentience and how this thing actually operates, and meanwhile I'm just thinking: when can I have a pretty much equivalent chatbot, but with persistent memory, enough to sustain pretending to be my anime gf until I die? I don't care about sentience, I just want someone to say they love me while they stroke my hair.
    And occasionally let me throatfrick them over the kitchen counter.
