Kalbear

Members
  • Posts

    58,318
  • Joined

  • Last visited

2 Followers

About Kalbear

  • Birthday 10/26/1974

Contact Methods

  • Website URL
    http://addictedtoquack.com

Profile Information

  • 69 warning points
  • Gender
    Male
  • Location
    The worst BwB meetup area EVER

Recent Profile Visitors

25,880 profile views

Kalbear's Achievements

Council Member

Council Member (8/8)

  1. Yep, still totally wrong, but please continue - this is gonna be fun
  2. It's remarkable - you managed to get almost all of that wrong
  3. It absolutely is, especially one that sidelines you for an extended period of time the way his did. A history doesn't mean they're perennially injured; it means they have had an injury, and that likely means that part of them is going to be weaker and more prone to issues. That the injury required surgery is an even worse sign. I wouldn't be worried about him not starting all that much - being at Georgia means you won't get a ton of shots given the rest of the talent there. I would be worried about an offensive lineman having ankle surgery.
  4. An injury history for an OL is not at all a good sign. It's one of the best predictors of how successful an OL will be in the league.
  5. Huh. I took it as exactly the opposite - i.e., we don't have other cues to gauge sentience, so we can only take them at their word, and if we don't believe them, that's our fault. My point is that their word is not sufficient at all, and that it isn't a matter of them being convincing; having them 'tell us' when they're sentient just isn't a valid test whatsoever.
  6. I think I've already answered the latter - they do often end up talking to each other when put into an environment where they can, and they end up doing really scary things like inventing their own weird languages. As for curiosity and whatnot - that's another odd thing to hang sentience on, given that a whole lot of humans are taught not to show it. Thanks, @fionwe1987, this is more of what I was getting at. You cannot measure sentience by the language outputs themselves. You need to observe the other state changes simultaneously. Expressing curiosity or anger or sadness is not enough.
  7. I don't think this is accurate. It's a manifestation of learned patterns of self-worth/awareness. It can be trained just as anything else can. You can ask ChatGPT to behave like an aggrieved spouse or act insulted and it'll do so, convincingly. For a couple of months it famously had problems where it was both more inaccurate and snarkier than usual. Emotionally laced dialogue and communication are not at all hard to simulate given enough pattern matching, which we absolutely have in spades thanks to social media.
  8. Why? Or rather, why is getting angry a sign of sentience? If it is, then ChatGPT has certainly already shown it, and Grok absolutely does, because it's a super douchey chatbot. They're not, true, but my point is that none of them is particularly demonstrable by use of language any more. Or if they are, ChatGPT and its like have already mastered them.
  9. Ukraine is pulling its Abrams tanks out of front-line duty because their niche is not sustainable while there are so many good drones out there. https://apnews.com/article/ukraine-russia-war-abrams-tanks-19d71475d427875653a2130063a8fb7a
  10. Are those how you gauge sentience? Interesting. My point is that the ability to argue - or even to produce language at all - has been shown to be nothing like the proof of sentience we thought it was. If you ask ChatGPT to act like it needs to convince you of its sentience, it will do a decent job of it, right now! As it turns out, LLMs are so good at acting like humans because that's likely how we use language too: it isn't thought out carefully or artfully decided, it's just one word coming after the next toward some goal (there's a rough sketch of that loop at the end of these posts). Them telling us - or us believing them - is not sufficient as a Turing test.
  11. Yeah, it's gonna be remarkable how amazingly those guys will play on another team in 3 years' time
  12. This, by the way, is inaccurate. It's true if you only look at one specific program (and only to a point), but if you model systems as competing and cooperating sets of algorithms, you end up being just fine (see the second sketch at the end of these posts). Basically, both are only true if you assume computers must provide a solution that is 100% true and accurate. Programs don't have to do this.
  13. We almost certainly won't unless we develop significantly better testing methodologies, because right now things like LLMs have shown that our speech and communication aren't actually that sophisticated.
  14. That would have hit harder if you had actually read what I said instead of ignoring it, or if you had read what I said instead of what Ran interpreted it as. But keep on assuming I'm saying those specific people are fooled when I'm not, or even that I had a single thing to say about being misled. It is, as you say, significantly easier to dismiss a point when you ignore it and substitute your own point that you can rebut instead.
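
To make the "one word after the next" claim in post 10 concrete, here is a minimal Python sketch of that loop. It uses a tiny, invented bigram table rather than any real model - every word and weight below is made up for illustration, and a real LLM learns vastly richer statistics - but the generation loop has the same shape:

    import random

    # Hypothetical bigram "model": each word maps to weighted candidate
    # next words. The table is invented purely for this sketch.
    BIGRAMS = {
        "<s>": [("i", 3), ("you", 1)],
        "i": [("think", 2), ("am", 2)],
        "you": [("believe", 1)],
        "think": [("therefore", 1)],
        "therefore": [("i", 1)],
        "am": [("sentient", 1), ("here", 1)],
        "believe": [("me", 1)],
        "sentient": [("</s>", 1)],
        "here": [("</s>", 1)],
        "me": [("</s>", 1)],
    }

    def generate(max_tokens=10, seed=None):
        # Autoregressive loop: sample one plausible next word at a time
        # until the end-of-text marker or the length limit.
        rng = random.Random(seed)
        word, out = "<s>", []
        for _ in range(max_tokens):
            candidates = BIGRAMS.get(word)
            if not candidates:
                break
            words, weights = zip(*candidates)
            word = rng.choices(words, weights=weights)[0]
            if word == "</s>":
                break
            out.append(word)
        return " ".join(out)

    print(generate(seed=2))  # e.g. "i am here" - fluent-looking output

Nothing in that loop deliberates or decides anything artfully; it just keeps picking a likely next word toward an end state, which is the whole point of the post.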
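Similarly, a minimal sketch of the point in post 12 - that no single program has to deliver a 100% complete, accurate answer. Each checker below is deliberately partial and abstains (returns None) outside its comfort zone, yet the pool of them together settles more cases than any one member would; all the function names are invented for illustration:

    def check_even(n):
        # Trivial rule: even numbers above 2 are composite; otherwise abstain.
        if n > 2 and n % 2 == 0:
            return False
        return None

    def check_small(n):
        # Exhaustive primality test, but only in scope for small n.
        if n >= 10_000:
            return None
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    CHECKERS = [check_even, check_small]

    def is_prime(n):
        # Ask each partial checker in turn; answer as soon as any is sure.
        for check in CHECKERS:
            verdict = check(n)
            if verdict is not None:
                return verdict
        return None  # the whole pool abstains: genuinely undecided

    print(is_prime(97))         # True  (settled by check_small)
    print(is_prime(10**8))      # False (settled by check_even)
    print(is_prime(10**9 + 7))  # None  (no checker in this toy pool is sure)

No individual routine here is complete or guaranteed, but the cooperating set still does useful work - the system as a whole never had to promise a verdict on every input.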