Jump to content

Artificial Intelligence


fionwe1987
 Share

Recommended Posts

On 1/11/2024 at 6:05 AM, Ser Scot A Ellison said:

That’s an oversimplification and it ignores the fact that “corporations” have always been “people”… hence the term “corporations”.  What the SCOTUS did in Hobby Lobby (which I strongly disagree with) is allow the corporations shareholders to express their personal free speech rights… through the corporation.  

Fair enough, but that's the issue, here. There are AI folks who say "I can read the New York times, and then come up with text that uses my knowledge of the Times without violating copyright, so why not an AI?". Leaving aside that regurgitating NYT's text word for word wouldn't be covered under fair use even for individuals, the bigger question is, can an AI be allowed to "gain knowledge" the way humans do, at the cost of a subscription to the NYT, or should using NYT articles to train an AI require royalty sharing with the NYT?

If SCOTUS holds corporations have free speech rights, it doesn't seem like a stretch to say their AI can read and use the knowledge they glean from the NYT without having to pay royalties.

Link to comment
Share on other sites

7 hours ago, fionwe1987 said:

Fair enough, but that's the issue, here. There are AI folks who say "I can read the New York times, and then come up with text that uses my knowledge of the Times without violating copyright, so why not an AI?". Leaving aside that regurgitating NYT's text word for word wouldn't be covered under fair use even for individuals, the bigger question is, can an AI be allowed to "gain knowledge" the way humans do, at the cost of a subscription to the NYT, or should using NYT articles to train an AI require royalty sharing with the NYT?

If SCOTUS holds corporations have free speech rights, it doesn't seem like a stretch to say their AI can read and use the knowledge they glean from the NYT without having to pay royalties.

Using the NYT to generate an income does seem like copyright infringement. I doubt an AI reads for pleasure.

Link to comment
Share on other sites

DeepMind AI solves geometry problems at star-student level (nature.com)

Quote

When tested on a set of 30 geometry problems from the International Mathematical Olympiad (IMO), AlphaGeometry could solve 25. This is approaching the performance of the competitions’ gold medallists — at least in geometry

Its a pretty impressive feat, even unthinkable a few years ago, but as the article goes on to point out:

Quote

Still, mathematicians’ jobs will probably be safe for a while longer. “I can imagine in a few years’ time that these or other techniques in machine learning may be solving mathematics problems at undergraduate level which only the smartest undergraduates can solve,” says Buzzard. “But right now, I have seen no evidence of machines autonomously engaging with modern research-level mathematics.”

I'm surprised that AI isn't as good at number theory, but I need to dig deeper as to why that is the case. 

Link to comment
Share on other sites

2 hours ago, IheartIheartTesla said:

I'm surprised that AI isn't as good at number theory, but I need to dig deeper as to why that is the case. 

This is why:

Quote

Instead of using natural language, Trinh and his collaborators developed a language for writing geometry proofs that has a rigid syntax similar to that of a computer programming language. Its answers can therefore be checked easily by a computer, while still making sense to humans.

The team focused on problems in Euclidean geometry in which the goal is to write a mathematical proof of a given statement. They embedded into their custom language several dozen basic rules of geometry, such as ‘if one straight line intersects a second straight line, it will also intersect a line that is parallel to the second line’.

They then wrote a program to automatically generate 100 million ‘proofs’. Essentially, these consisted of random sequences of simple but logically unassailable steps — such as “given two points A and B, construct the square ABCD”.

AlphaGeometry was trained on these machine-generated proofs. This meant that the AI was able to solve problems by guessing one step after the other, in the same way as chatbots produce text. But it also meant that its output was machine-readable and easy to check for accuracy. For every problem, AlphaGeometry generated many attempts at a solution. Because the AI could automatically weed out the incorrect ones, it was able to reliably produce correct results, including to geometry problems from the IMO.

This wasn't a typical LLM. It didn't just read regular human proofs then come up with this level of skill. Instead, it was trained on a more curated dataset, that used specialized language that had some of the symbolic rules of geometry embedded, so it could check it's results. 

That is, a lot of the cleverness comes from the custom language that humans developed. 

I'd be interested to know how students do it they're trained in this custom language, as well. Because sometimes it feels to me that as we learn how to educate these models, we may stumble upon ways to better educate humans too. 

Link to comment
Share on other sites

5 hours ago, fionwe1987 said:

This is why:

This wasn't a typical LLM. It didn't just read regular human proofs then come up with this level of skill. Instead, it was trained on a more curated dataset, that used specialized language that had some of the symbolic rules of geometry embedded, so it could check it's results. 

That is, a lot of the cleverness comes from the custom language that humans developed. 

I'd be interested to know how students do it they're trained in this custom language, as well. Because sometimes it feels to me that as we learn how to educate these models, we may stumble upon ways to better educate humans too. 

For humans this is not a viable method. Once a problem becomes hard enough, the increase in time needed to solve it becomes exponentially long.  Think of factoring a large number to see if it is prime. With 'n' digits the time is X to the exponent n times long. 

Link to comment
Share on other sites

Okay. I'm fucking angry at myself for trying to do a programming project with one of my graduating classes in the era of ChatGTP. I tried to address the issue with particularly strict demands that recording their path to finding a solution was the task I made it depending upon whether the solution is theirs or attempted fraud. To give a perspective, this was the task:

They should program a short, easy game in python using only console input/output. For example Mastermind, where the computer thinks up a code of 4 colors, the players have to guess the code and after every attempt they should get the number of correct colors they already have. The students should analyze the gameplay loop, write a class card (since they were tasked to wrap everything into a class) and write two Nassi-Shneiderman diagrams for individual methods. Then they should attempt to implement it (where I wasn't even going to be particularly strict about whether they succeed) and while doing so write a protocol about every problem they encountered and how they went about trying to tackle it. They had three weeks of time for it.

That last bit... only one student did. The others just commented their code saying what each line does, apparently thinking that if they do this, they can copy as much as they want. Nobody of these students wrapped their solution into a class, which either means after a whole semester of object-oriented programming they still haven't figured out how to make classes... or that ChatGTP ignored this part of the task. Two of the students just blatantly 1:1 copied internet pages, so that was easy to find. However the solutions of the others I couldn't find and so have to assume an AI wrote this because they had perfect code, not fitting with their class diagram obviously because of the absence of classes, but the Nassi-Sneiderman diagrams were obviously just backwards interpreting their code and the code itself was using various shortcuts I had never shown in class, again, without any shred of a mention what divine inspiration caused them to write their code this way.

I at first was compelled to give the benefit of the doubt and still give points for the stuff that could be of their own making, but that would punish those who copied internet pages and reward those using ChatGTP. Also it would be unfair to previous classes where I made no exceptions towards plagiarism and would give the impression that cheating is okay when everybody all at once do it. I discussed it with their class teacher and she also agreed that I should give everyone except that one girl a fail and be done with it. Bloody hell, I'm never doing a graded project again, which is just such a sad state of affairs...

Link to comment
Share on other sites

I’m listening to an audiobook on the topic titled The Coming Wave by Mustafa Suleyman. It’s 50% interesting and insightful and 50% doomsayer so far. I have learned from Covid and the Ukrainian war, I allow no room in my mind to worry about this vague thing I have zero control over. 

AI is also the designated over-caffeinated buzzword of the 2023-2024 business years. Agile is kinda slowly becoming the boomer in the room. Anyway, for this reason I’m in an AI related project and I kinda like that for once I’m working on something people give a shit about. So I’m still in the positive fascination stage.

I play with chat gpt when I have some time in between meetings and I’ve got to say I’m seriously underwhelmed. It translates better than Google translate had, but it’s rubbish at mostly everything else I tried. The fiction it generates is bone dry and it fails to convince me that it’s able to imitate known writing styles. (I tried to make it write extra scenes for Harry Potter, just because I’m rereading the novels right now) When I tried prompting it to write me a dialogue or a scene for a story idea I have (yes I have always had amateur/hobby fiction writing ambitions but no time to actually do it anymore) takes as much time as writing the actual scene and I would do a job so much better. I guess I would need to prompt the thing for weeks on end with world building and characters and milestones and then after that it could perhaps write a plot out? With awful dialogue? So no, not impressed so far.

In terms of assistance, it doesn’t do better than a good old Google search at providing house cleaning tips. It’s terrible at vacation planning and it’s a very boring dietitian that doesn’t even come close to the Pinterest recipe rabbit hole experience. 

That’s been my short and lackluster AI journey thus far. 

Link to comment
Share on other sites

Its interesting the amount of companies I'm hearing who want to jump on AI to improve their workflows. In reality the options for much of it are pretty limited and over hyped. I'm seeing a lot of chatter about using things like Microsoft Co-Pilot, but when you look at what it can actually do, it barely seems to be the massive time saver it purports to be. 

Being able to check an email thread and give a summary is sort of useful, but also a pretty dangerous way to miss important information. The same as getting summaries from documents. I don't think there is a level of trust in AI to actually understand what it's reading and pull out an accurate summary.

I've tried to use AI functions in some of my work with other tools and while it saves a little bit of time, I still need to double and triple check everything it says because it will often just make shit up and throw in statements out of nowhere. 

I think we will hit a point this year where the hype will die down and those people who've used the technology currently available will lose interest while they wait for something genuinely powerful to arrive. 

Link to comment
Share on other sites

The problem with incorporating AI right now is that all the reinforcement learning from human feedback (RLHF) and other techniques used to shape and constrain the AI responses are processes that leave giant gaping holes for all kinds of attacks, and don't end up coming close to eliminating the propensity of AI to hallucinate. 

This means you can integrate an LLM into your workflow, but someone is going to have to go through the output with a fine toothcomb. But that's a fundamentally different job that coming up with text of your own, so you're left with a major component of your workflow that you cannot trust.

It's like hiring a super well read employee who is (relatively) inexhaustible, but you have no guarantee said employee won't reveal your secrets, add bad code or exploits based on cleverly worded prompts, or straight up make up shit.

The hype machine would have it that these are trivial problems, but I've read nothing to support that view, and there's nothing published by the big AI companies in this direction. A lot in the opposite, especially from Anthropic, whose recent work seems to show that deceitful behavior in current LLMs is not something you can train out:

https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training

Quote

Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.

So yeah, I don't think these systems are ready to be reliable parts of any critical workflow. 

Link to comment
Share on other sites

2 hours ago, fionwe1987 said:

This means you can integrate an LLM into your workflow, but someone is going to have to go through the output with a fine toothcomb. But that's a fundamentally different job that coming up with text of your own, so you're left with a major component of your workflow that you cannot trust.


And the further problem with this is the same as with the 99% reliable driverless cars with a driver responsible for handling the 1% of cases where the AI gets it wrong.

Someone whose job is to check the AI is going to be less motivated and less skilled than someone who does the job from scratch. There will be the ever present temptation to just nod stuff though.

On top of that, the management team is going to view the role as an overhead to be done as cheaply as possible and to be done away with entirely as soon as there is any excuse to. The usual scenario of looking good by reducing costs and making sure you have moved on before the impact is felt.

Edited by A wilding
Link to comment
Share on other sites

19 minutes ago, A wilding said:

 

On top of that, the management team is going to view the role as an overhead to be done as cheaply as possible and to be done away with entirely as soon as there is any excuse to. The usual scenario of looking good by reducing costs and making sure you have moved on before the impact is felt.

That might be true in some cases, depending on how AI is being used. In the scenarios I am seeing currently it is more that AI is being seen as a way of improving efficiency and removing the tedious jobs from people's lives so they can move on to doing 'proper' work. There is definitely a lot of merit in that as many people's jobs are full of wasted time doing menial shit that takes too long.

I haven't actually seen much talk of using AI to replace people or as a device to cut worker numbers in the companies I'm working with. The assumption is actually that each worker will be able to achieve more and the productivity gain will be substantial.

I'm sceptical at the moment, I just don't think LLM tools are trustworthy enough to speed things up all that much, and it will only take 1 or 2 high profile disasters to make people resistant to using them.

Link to comment
Share on other sites

37 minutes ago, A wilding said:


And the further problem with this is the same as with the 99% reliable driverless cars with a driver responsible for handling the 1% of cases where the AI gets it wrong.

Someone whose job is to check the AI is going to be less motivated and less skilled than someone who does the job from scratch. There will be the ever present temptation to just nod stuff though.

On top of that, the management team is going to view the role as an overhead to be done as cheaply as possible and to be done away with entirely as soon as there is any excuse to. The usual scenario of looking good by reducing costs and making sure you have moved on before the impact is felt.

Whatever management's intentions, finding fibs buried in well written sentences is hard, and humans just haven't had to do it at such scale before, nor does past evidence suggest we're particularly great at it.

13 minutes ago, Heartofice said:

That might be true in some cases, depending on how AI is being used. In the scenarios I am seeing currently it is more that AI is being seen as a way of improving efficiency and removing the tedious jobs from people's lives so they can move on to doing 'proper' work. There is definitely a lot of merit in that as many people's jobs are full of wasted time doing menial shit that takes too long.

I haven't actually seen much talk of using AI to replace people or as a device to cut worker numbers in the companies I'm working with. The assumption is actually that each worker will be able to achieve more and the productivity gain will be substantial.

I'm sceptical at the moment, I just don't think LLM tools are trustworthy enough to speed things up all that much, and it will only take 1 or 2 high profile disasters to make people resistant to using them.

Yeah, whether replacing humans, or merely helping them be more efficient, a lying language model is not going to cut it.

Link to comment
Share on other sites

Well I just came across an online article stating that Danish researchers have proved mathematically that AI algorithms are inherently unstable for all but the simplest tasks. Many years ago Roger Penrose stated that AI is a pipe dream due to it being a violation of Godel's incompleteness theorem. Good thing I had the sense to believe Penrose all these years and not waste my time and money.

Link to comment
Share on other sites

Same experience with Design/Build and Custom Manafacturing. 

AI might be a great tool if we were always working off a mature design or faced with making a lot of repetitve parts but the reality is out on the shop floor and erection bays theres nearly always some deviation from things like shrinkage, warpage, grey areas where the builders have to rely on asterics like "in vicinity of", a lot of cumulative factors that make each of our ships custom not cookie cutter.

The consequences being a human is still needed on the design build end to go in and measure and build a lot of parts that can be different even in the same space from a previous ship.

We are not even generationally close to not needing the skilled tradesman that have to pick up where the robotics and engineering falls short. Not even remotely close.

Lets try the AI out on Executive Compensation for starters and see how well upper mgt thinks its working out as the paycuts pour in.:lmao:

Link to comment
Share on other sites

10 hours ago, maarsen said:

Well I just came across an online article stating that Danish researchers have proved mathematically that AI algorithms are inherently unstable for all but the simplest tasks.

Um, did you happen to read beyond the headline? This "instability" is about how they respond to noise, and while "inherently unstable" sounds like a deathknell, the level of instability is variable (as proved by this paper), and the paper also says it can be reduced/the replicability boosted. 

10 hours ago, maarsen said:

Many years ago Roger Penrose stated that AI is a pipe dream due to it being a violation of Godel's incompleteness theorem. Good thing I had the sense to believe Penrose all these years and not waste my time and money.

Goedel's theorem is for formal systems. AI/ML, in this current iteration, are not formal systems, in that sense. They're probabilistic, not formal logic based.

Edited by fionwe1987
Link to comment
Share on other sites

On 1/26/2024 at 2:13 PM, DireWolfSpirit said:

Same experience with Design/Build and Custom Manafacturing. 

AI might be a great tool if we were always working off a mature design or faced with making a lot of repetitve parts but the reality is out on the shop floor and erection bays theres nearly always some deviation from things like shrinkage, warpage, grey areas where the builders have to rely on asterics like "in vicinity of", a lot of cumulative factors that make each of our ships custom not cookie cutter.

The consequences being a human is still needed on the design build end to go in and measure and build a lot of parts that can be different even in the same space from a previous ship.

We are not even generationally close to not needing the skilled tradesman that have to pick up where the robotics and engineering falls short. Not even remotely close.

Lets try the AI out on Executive Compensation for starters and see how well upper mgt thinks its working out as the paycuts pour in.:lmao:

So I did a search of "What jobs will AI not take over?"

Jobs AI Just Can’t Do https://www.forbes.com/sites/eliamdur/2023/11/25/jobs-ai-just-cant-do/?sh=1c4014f531a2

And my impressions were confirmed, impression being that AI cannot do my job or that my skill set will still be necessary to industries.

Skilled Trades

Jobs like electricians, plumbers, and craftsmen involve hands-on skills, adaptability to diverse situations, the ability to imagine complex systems and to detect what’s going on in current systems, are challenging for AI to approach.

Other critical skills that are not likely to disappear-

Therapists and counselors, Leadership, Healthcare, Caregivers for elderly and disabled, R&D, creative problem solving and a host of other skills that will likely always be needed. (AI isnt going to fix that hole in your roof anytime soon, let alone replace your community hospital or elementary school).

I think I feel safe tuneing out the folks like Elon Musk that are prone to overstating the dramatics of where this new tech will lead to.

 

 

 
 
Edited by DireWolfSpirit
Link to comment
Share on other sites

  • 3 weeks later...
2 hours ago, IFR said:

Impressive capabilities of Openai's new video generating AI, Sora:

https://openai.com/sora

Apparently this AI was trained exclusively on licensed material, which perhaps will appease a major complaint lodged against AI in this thread.

You can tell its AI generated if you go in knowing what you're looking for. That said, scary impressive stuff. Especially in an election year, which is probably why it hasn't been released yet. Till they can make it refuse to make videos of real people in an airtight manner, they shouldn't release this to the public. One minute is more than enough to screw up elections.

 

Link to comment
Share on other sites

1 hour ago, fionwe1987 said:

You can tell its AI generated if you go in knowing what you're looking for. That said, scary impressive stuff. Especially in an election year, which is probably why it hasn't been released yet. Till they can make it refuse to make videos of real people in an airtight manner, they shouldn't release this to the public. One minute is more than enough to screw up elections.

 

It's hard to say how one can appropriately handle this paradigm shift. The genie is out of the bottle. Local LLMs will catch up. Antagonistic states will pursue this technology - they may already have it. 

I just don't see how it will be possible to effectively censor or police this technology on a global scale.

Link to comment
Share on other sites

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

 Share

×
×
  • Create New...