Everything posted by fionwe1987

  1. I really don't understand the folks who make the "this is the only way" argument about the staggering civilian losses and violations of international law. Are we really saying that "this is the only way" is an acceptable reason to violate legal and moral proscriptions? If so, plenty of terrorists can make similar claims. They often don't have the firepower and military might to properly declare war, and choose inhumane actions as their "only option". Why is it ok for a state to do this, if it is not ok for terrorists to?
  2. Given the alternative is a legit insurrectionist, I think there's hundreds of millions of Americans more qualified to be president.
  3. Image generation and copyright violations: lotsa ways to violate copyright using image generation models. And these companies don't seem to care.
  4. You're right. I read an article which implied otherwise, and based my statement on that. Yeah that can work, though as noted, the amendment doesn't specify.
  5. There was a bench trial in the Colorado case that looked at his actions, and the determination was made that they amounted to insurrection. He was allowed his defense there. None of the 3 judges who dissented from the ruling had anything to say against the determination that he was an insurrectionist. If the Supreme Court wants to challenge that, they'll have to do so on the facts, and I don't think they'll want to go there. ETA: I have no fucking idea how this reply got here. Gonna leave it as is since folks replied anyway.
  6. What due process argument? That he got a bench trial for the insurrection claim?
  7. I'm honestly curious on what made up grounds they'll give Trump a pass. There seems very little legal grounds, honestly. The best case I've seen is about the law barring holding office, not running for it, but even that seems like a ludicrous distinction. If you're barred from holding office, you can't run for it, otherwise, foreign nationals barred from holding office could run for them all the time.
  8. Not yet, though we're nipping at the heels of that. Already, there are successful attempts to use one LLM to fine-tune another. And LLMs can write code. We're not that far from LLMs tweaking themselves. It should be noted that LLMs aren't composed of lines of code in the traditional sense. They're black boxes. I cannot take an LLM's code, read it, and make any kind of sense of what it will do. Certainly, no one can predict exactly how an LLM will behave. ChatGPT, for instance, got "lazy" briefly, a few months ago. This happened without any update to the code itself. People noticed it was giving briefer, less thorough answers. I noticed this myself. Some folks noticed this happened right around the start of daylight saving time. And there was some data showing that if you fooled the LLM into thinking it was summer, it did better. Dunno if that held up. OpenAI doesn't reveal what tweaks they make to their model or how they resolve such issues. But the very fact that LLMs have behavior we have to infer, and can probe, but cannot diagnose by reading some code, should tell you this is a different beast than any old computer program. LLMs do not have constrained behavior, in the sense that they are not bound to produce specific types of text based on coded rules. And indeed, the same prompt can result in wildly different responses at different times. The patterns an LLM sees in its training data are multidimensional to the point of incomprehensibility in any currently used language. Which is why controlling them is so damned difficult. And why not wholly unintelligent people lose their minds sometimes and see the ghosts of sentience in them. They do inexplicable things.
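That prompt-to-prompt variability comes from sampled decoding. A minimal sketch with a made-up vocabulary and made-up probabilities (nothing here comes from a real model; it just shows why identical inputs can yield different outputs):

```python
import random

# Hypothetical next-token distribution a model might assign after some prompt.
next_token_probs = {"blue": 0.45, "grey": 0.30, "cloudless": 0.15, "falling": 0.10}

def sample_next(rng):
    # Draw one token according to the probabilities, not just the top choice.
    tokens, probs = zip(*next_token_probs.items())
    return rng.choices(tokens, weights=probs, k=1)[0]

# Two "sessions" with different random states: same prompt, possibly
# different continuations, even though the model itself never changed.
rng1, rng2 = random.Random(1), random.Random(2)
run1 = [sample_next(rng1) for _ in range(5)]
run2 = [sample_next(rng2) for _ in range(5)]
print(run1, run2)
```

The randomness lives entirely in the sampler; the underlying probabilities stay fixed.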
  9. Yep. And to me, these are all signs of intelligence. Just that, though. I have no idea where they come from. No one does. But you can definitely get genuinely insightful and interesting questions pertaining to a subject. And also gobbledygook. Right, which is why none of this says this iteration of AI is human-level intelligent. And definitely, any statements about sentience or consciousness are blather at this point, too. But they're certainly intelligent, in a meaningful way, well past the kind of semantic rules-based AI we had before this past decade.
  10. They do not simultaneously learn and function. There's the training period, when they do indeed have access to tons of text data. Then there's refinement of the model with human feedback. By the time you interact with it and ask it questions, the model is locked, and no longer "learning". Nor does it have access to the training data, just what it learned from that data. They can certainly synthesize new information. That's kinda how they end up "hallucinating" or making mistakes. There are examples galore of these models confidently assigning revenue and profit figures to companies that are nowhere to be found, for instance. Or making up names and titles, or inventing historical events. And yes, they can ask questions that haven't been asked before (and were not in their training data), if what you prompt them to do is ask questions.
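The train-then-lock split can be illustrated with a toy stand-in for an LLM: a bigram word counter, which is vastly simpler than a real model but shows the same separation between a training phase and a frozen inference phase:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count which word follows which. After this loop, the model is fixed.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

snapshot = {w: dict(f) for w, f in counts.items()}

def generate(start, n, rng):
    # "Inference": read the frozen counts; never update them.
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return out

text = generate("the", 6, random.Random(0))
# Generating taught the model nothing: the counts are unchanged.
assert {w: dict(f) for w, f in counts.items()} == snapshot
print(" ".join(text))
```

A real LLM's weights play the role of these counts: set during training, read-only when you chat with it.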
  11. Again, this is absolutely not true. They do not pull data when they function. You can download smaller models to your computer yourself, disconnect the internet, give them access to no files in your computer except the files that the model comes with, and test this out yourself. They can be taught. They just won't do it 100% of the time, or do it accurately all the time. They'll especially miss new and sneaky ways people can be racist that they may not have encountered in their training. But isn't this true of humans, too? Again, I really don't think they're sentient or conscious. But the reasons you bring up are not why.
  12. LLMs often don't do what they're programmed to do. This is because they're programmed to give stochastic results, and also programmed to "not be racist", for instance; the randomness means the second instruction can't be guaranteed to hold every time. Again, none of this implies they're necessarily sentient or conscious. But they are not "bound" by their programming in the way typical computer programs are.
  13. I think that says more about those powerful nations than that a veto system is good or useful. Similar arguments were made to give the P5 vetoes, and look where we are.
  14. Should there even be a veto system?
  15. Yes, but it's worth noting that Go is still a much more constrained system than "linguistic communication". When it comes to language models, AI will always be behind the curve, because humans change language all the time, and no amount of synthetic data will allow current language model architectures to divine new slang, or the ways we change the meanings of words as events unfold. I take great comfort in that.
I wouldn't call any of that proof of sentience, no matter how you define sentience. I wouldn't call any of it the beginnings of sentience, either. They're excellent facsimiles of something close to sentience, though. For sure, the idea of human primacy in the definition of intelligence and sentience is due for some major knocks. I highly recommend Meghan O'Gieblyn's "God, Human, Animal, Machine" for a great exploration of this.
Yes, but a representation of intelligence is not intelligence, and this shows in the way AI hallucinates. As several AI researchers have pointed out, it can be argued that everything AI does is "hallucination", and what we call "right/truthful" output is only based on the human feedback that is used to shape the output of any given AI model. A lot of this reminds me of how kids speak, sometimes confabulating all kinds of plausible-sounding nonsense. But while the human feedback kids receive allows them to separate truth (as they know it) from fiction (as they intend it), LLMs are not capable of this. All we can do is restrain some common kinds of untruths and mistakes, with no guarantee that every instance of such will be distinguished by the model. Hence our ability to make these models buy into total nonsense, or encourage them to produce nonsense, despite the thousands of hours of human labor spent trying to teach them the difference between what's acceptable and what is not.
They have definitely been designed and deployed as theft machines. Theft of human writings and images, theft of human labor to give them the feedback that makes them usable (because such labor is ludicrously underpaid), and soon, theft of human time as we have to sift through the dross they produce because of the unexamined ways they've been trained.
But I'd push back on the underlying technology not being even "dumb" AI. They are definitely capable of exploring the probability space for a given task very well. There's a lot of intelligence they're able to deploy at a scale and speed humans cannot. Is it anything like even the Star Trek computer, let alone sentient intelligence? No. But it's a solid step towards that. Guessing what's next based on statistics is definitely part of how humans work. And AI doesn't actually output the statistically likeliest next word, even though it calculates it. One of the innovations in the transformer architecture was the recognition that if you always pick the most likely word, you end up with very dry and boring-sounding text, so deliberate randomness is introduced and the model sometimes picks somewhat lower-ranked "next tokens". This is what allows for whatever creativity we see from them.
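That last mechanism is usually implemented as temperature sampling. A small sketch with invented scores, showing how temperature spreads probability onto lower-ranked tokens (the token names and numbers are made up for illustration):

```python
import math

# Hypothetical raw scores (logits) a model might assign to four candidate tokens.
logits = {"blue": 4.0, "grey": 3.0, "vast": 2.0, "angry": 1.0}

def softmax_with_temperature(scores, temperature):
    # Higher temperature flattens the distribution, giving lower-ranked
    # tokens a real chance of being picked; T -> 0 approaches greedy argmax.
    scaled = {t: s / temperature for t, s in scores.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {t: math.exp(v) / z for t, v in scaled.items()}

cold = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)

# At low temperature the top token dominates; at high temperature
# probability mass spreads to the "somewhat lower ranked" tokens.
print(round(cold["blue"], 3), round(hot["blue"], 3))
```

Picking from the reshaped distribution, instead of always taking the argmax, is exactly the deliberate randomness described above.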
  16. Be prepared for "It will never happen". Because if it could, perhaps then the full-throated defense of Israel's actions, and the supposed moral rectitude of its military we've heard about, would need to be reexamined. And the cognitive dissonance of that is too much. So you'll continue to hear denials of this, which, of course, will only enable the dumbfucks who want this.
  17. This isn't true. It really isn't "searching" for anything. While AI can and does sometimes reproduce its training text, it isn't working by indexing and retrieving that text. Fundamentally, this iteration of AI is great at observing patterns, then recreating similar (but not identical) patterns when prompted. It's just good at predicting the next word in a growing chain of words, based on the chains of words it was trained on. That's nothing like the Google Search algorithm, which is actually more precise and reliable, and makes up shit a lot less.
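The recreation-versus-retrieval distinction can be shown with a tiny word-pair example (toy corpus, purely illustrative): every step of the generated sentence is a pattern seen in training, yet the sentence as a whole was never stored anywhere.

```python
# A toy "training corpus" and the word-pair patterns extracted from it.
corpus = "the dog chased the cat and the cat chased the mouse"
words = corpus.split()
pairs = set(zip(words, words[1:]))

candidate = "the dog chased the mouse".split()

# Every adjacent word pair in the candidate was observed during training...
assert all(p in pairs for p in zip(candidate, candidate[1:]))

# ...yet the full sentence never appears in the corpus. It is recreated
# from learned patterns, not retrieved from an index.
assert "the dog chased the mouse" not in corpus
print(" ".join(candidate))
```

A search engine could only return strings it has indexed; a next-word predictor routinely emits sequences that exist nowhere in its training data, which is both its creativity and its capacity to make things up.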
  18. While this iteration of AI is most definitely not sentient, it is artificial, and possesses some intelligence, so I'm fine with it being called AI. Whether we ever will, or want to, get to artificial sentience is a whole other question. Maybe we'll stumble upon it by accident... but I don't think current architectures are anywhere close to that. As others have pointed out, the biggest risks that seem real are half-baked AI implementations in opaque systems like job matching, crime fighting, etc. And also the fact that we've now made it possible for junk, polarizing content to be produced at scale at exponentially cheaper costs. I think the most useful take on AI I've seen is from Ted Chiang, who said our greatest fears about AI are actually fears about capitalism. What this iteration of AI does is solve optimization problems in a way eerily reflective of the short-term profit optimizing way capitalism works. It's that combo that haunts my dystopian nightmares, not AI taking over the world and nuking us all. All that said, the use of these systems in medicine is something I've been working on. Again, harnessed to blind profit seeking, this can get dystopian, but enabling precision medicine at scale in a way that allows better treatment for more people is genuinely something AI can enable... in the right hands.
  19. Thought it would be nice to have a thread to discuss it. For me personally, AI, its issues, and fears of what its introduction can do to society dominate my thoughts quite a lot. I'm not quite a doomer, but I can see so many pathways to doom, of one kind or another. So what does everyone here think? Do you use it already? Find it overwhelming and unreliable? Do you think these systems are more hype than reality? Do you see this leading to a good future, or the dystopian nightmares science fiction has so richly imagined?
  20. Perhaps yes, since that's about the number of times you demanded the rest of us affirm that Hamas was a piece of shit? No, you have been far from nuanced, and have insinuated antisemitism multiple times when issues with Israel's actions have been brought up. When you say "this war" what exactly are you referring to?
  21. There seems to be an attempt at this narrative that anything unpalatable about Israel is entirely the fault of external actors and external pressures. All positive things, though, are because Israel is a lovely democratic country, an island of progress among the uncivilized Arabs. Gee, I wonder where that kind of language comes from.
  22. And since these new cells were definitely born in the US, no more Birtherism!
  23. No, it is not. It will be scary bad if you see these numbers around August next year, when the choice is closer, and the electorate has been saturated with information. Right now, this reflects the fact that Biden is an unsatisfying candidate. On his own, he doesn't drum up much enthusiasm. But set against Trump, for an actual election? If he gets 37% of the vote, I'll eat all my shoes.
  24. Yeah, but wow, Ed really is coming across as a total moron. Kelly is going to correctly be incensed at what he used her son for. I wonder if
  25. Can you tell me if they've volunteered for the current bombing campaign? Have they volunteered to be driven from their homes? Then they cannot volunteer to bargain, or leave Gaza. If Gaza is rebuilt, and these refugees are resettled, and then this bargain is proposed, they would have a chance to volunteer for it. This is definitely not Tywin's plan, since he's proposing moving them to the West Bank in (stated) part because it would apparently be cheaper than trying to rebuild Gaza, in his calculations.