
The Artificial Intelligence (AI) Thread


China


4 minutes ago, Renegade7 said:

 

Never said everything that has limited AI will become self-aware just because one program does. 

 

A machine that can pull a trigger without human intervention doesn't need a reason, it just needs the command to do it, regardless of where that command comes from.  The T-1000s weren't self-aware; Skynet was.  

 

This gets touched on in different ways as technology improves and becomes more connected via the internet.  Something malicious can go through a far more thorough process of exploitation than humans can, and sci-fi has shifted toward the threat of a self-aware AI with internet access.

 

AI by itself isn't a threat; self-aware AI is.  My specific issue with this plan isn't the system spontaneously deciding to kill all humans, it's that it's designed to make decisions on its own when there's no one left to pull the trigger. It's a way around people making the final call if we are actually still around and will either be targeted or could make mistakes.

 

You seem to be arguing a few different things here that aren't necessarily related.

 

1) A machine only needs a command to do it. True. That's why you do bug testing, verification, etc., and you program in fail-safe mechanisms (a rough sketch of that idea is at the end of this post). This sort of code isn't going to be an alpha release. It would be tested countless times in countless scenarios, and scrubbed for bugs countless times, before even going partially live in a lab or QA type of environment. 

 

2) Something malicious getting through. Yes, that's a legit concern if it is on a connected network. That would have to be taken into account and extreme security and counter-intrusion measures built into it. But it's still a risk...so that's a very valid point.

 

3) Self-aware AI is a threat. This is a completely separate subject that I touched on above. Self-aware AI isn't a thing, and we don't even know if it's possible for us to build. There are researchers working on it but it's not there yet. We've made programs that can pass a generic Turing test, but that's a very outdated test and it doesn't indicate actual sentience...just very advanced algorithms and programming. 

 

The first two are legit concerns. The last one is science fiction, and even if it becomes science fact, nobody would put a sentient AI in charge of a military system. Though I suppose it might be able to take it over on its own were it smart enough and had enough access to certain systems. But that's a ton of speculation about a situation that's beyond unlikely. 
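
To make point 1 concrete, here is a purely illustrative sketch of fail-safe gating: the default answer is always "no", and acting requires a passing self-test, explicit human authorization, and several independent confirmations. The inputs and the threshold are assumptions made up for the example, not anything from a real system.

```c
#include <stdbool.h>

/* Illustrative fail-safe gate: the default is "do nothing", and action
 * requires a passing self-test, explicit human authorization, and several
 * independent confirmations. Threshold and inputs are assumed. */
bool authorize_action(int independent_confirmations,
                      bool human_authorization,
                      bool self_test_passed)
{
    const int REQUIRED_CONFIRMATIONS = 3;   /* assumed threshold */

    if (!self_test_passed)
        return false;   /* any internal fault fails safe */
    if (!human_authorization)
        return false;   /* keep a person in the loop */
    if (independent_confirmations < REQUIRED_CONFIRMATIONS)
        return false;   /* one noisy sensor is never enough */

    return true;        /* the only path to "yes" */
}
```

The testing and QA described above is largely about proving that the "no" paths hold up, not just that the "yes" path works.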


Admiring the notion that we're thinking about creating a self-aware intelligence to run this thing.  

 

I don't see anything in the product specs that calls for self-awareness.  

 

A simple "If you register nuclear explosions in Washington, New York, Omaha, and one of the following cities, then launch everything" doesn't require self awareness  


52 minutes ago, Renegade7 said:

Software is made by people, so software can be wrong too. Imagine what a false positive on that would look like. Is that worth it?

 

The likely reason you would put nuclear launch protocols under the control of an AI in the future is the assumption that AI has a lower chance of false positives than humans, especially in circumstances that require decisions within a few minutes.

 

Edited by No Excuses

5 minutes ago, No Excuses said:

 

The likely reason you would put nuclear launch protocols under the control of an AI in the future is the assumption that AI has a lower chance of false positives than humans, especially in circumstances that require decisions within a few minutes.

 

 

Fair point, but debatable; there's a scary number of examples where humans following protocol could've done that but decided not to.  You can't program a computer to understand, the way humans do, that there's no going back from a decision like that.

 

https://en.m.wikipedia.org/wiki/List_of_nuclear_close_calls


On a more serious note, this is why I really think we need to have a second-strike capability.  The notion being that we want those responsible, when WOPR reports multiple nukes inbound, to have the option of sitting it out and letting them hit, in the confidence that we will have enough weapons left afterward to respond, or not, once the supposed nukes have hit.  (When we can be certain that it's not a false alarm.)  


10 minutes ago, Renegade7 said:

 

Fair point, but debatable; there's a scary number of examples where humans following protocol could've done that but decided not to.  You can't program a computer to understand, the way humans do, that there's no going back from a decision like that.

 

https://en.m.wikipedia.org/wiki/List_of_nuclear_close_calls

 

Disagree with this. I think an ML/AI system absolutely can learn to "understand" those sorts of consequences. Difference is it wouldn't be on an emotional level, it would be on a logical level.

 

One issue with this is that the AI could be placed in a no-win situation where it has to pick the least-bad outcome. That decision and outcome may at first seem awful to us, hence an evil decision by an evil AI, but in a long-term strategic sense (involving far too many variables for our human minds to process and foresee) it may have been the best option.

 

Something like calculating an acceptable number of casualties to save a certain number of people. Say a limited nuclear exchange results in 5 million people dead, but the way it was handled averted an almost certain all-out exchange in the future that would have led to 1 billion people dead. The 5 million people who died, and their families, certainly won't be comforted by that logic and would likely blame the cold-hearted machine that made the decision. 
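
Framed as a crude expected-casualty comparison (only the 5 million and 1 billion figures come from the example above; the probability is an invented illustration), the machine is trading a certain loss against a probability-weighted one:

$$
\underbrace{5 \times 10^{6}}_{\text{certain deaths now}}
\quad \text{vs.} \quad
\underbrace{p \times 10^{9}}_{\text{expected deaths from a later all-out exchange}}
$$

For any assumed probability $p$ greater than $0.005$, the purely "logical" machine picks the limited exchange, which is exactly the kind of cold arithmetic the families left behind would never accept.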


53 minutes ago, mistertim said:

 

The first two are legit concerns. The last one is science fiction, and even if it becomes science fact, nobody would put a sentient AI in charge of a military system. Though I suppose it might be able to take it over on its own were it smart enough and had enough access to certain systems. But that's a ton of speculation about a situation that's beyond unlikely. 

 

I'm sure they tested the hell out of that drone that got hacked and captured in the Middle East.  

 

Here's the thing about cybersecurity: all the testing in the world doesn't catch everything.  Many times risk is calculated based on the capabilities of the threat, or how long and how hard an attack would be, not on whether the vulnerability is real; there's always a vulnerability somewhere.  Stuxnet attacked a system that was air-gapped. State actors don't work from their mom's basement.

 

You ever made a buffer overflow from scratch?  I have; it takes patience.  No computer I know of can put that all together on its own yet, but the one thing I do know is that computers don't know what patience is and can do things way faster than we can once they understand how to do it.
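
For anyone who hasn't seen one, the textbook form of the bug is tiny; the patience is in everything that comes after. A deliberately simplified illustration of the vulnerable pattern, alongside the bounds-checked alternative:

```c
#include <stdio.h>
#include <string.h>

/* Deliberately simplified: copying attacker-controlled input into a
 * fixed-size buffer with no length check is the classic overflow. */
void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);                      /* writes past buf if input is too long */
    printf("%s\n", buf);
}

/* The boring fix: always bound the copy to the destination size. */
void safer(const char *input)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);  /* truncates instead of overflowing */
    printf("%s\n", buf);
}
```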

 

You know, at the end of the day people can make mistakes and be hacked too, via social engineering.  Just, if the world is going to end, let it be because we chose to, not because some computer chose it for us.

Edited by Renegade7

2 minutes ago, mistertim said:

 

Disagree with this. I think an ML/AI system absolutely can learn to "understand" those sorts of consequences. Difference is it wouldn't be on an emotional level, it would be on a logical level.

 

Would you rather have a system making that call be capable of only logic, or of both?  I just posted a link with multiple cases where the logical thing to do was not the right thing to do.  Only a human would understand that the logical thing to do is not always the right thing to do.

 

2 minutes ago, mistertim said:

One issue with this is that the AI could be placed in a no-win situation where it has to pick the least-bad outcome. That decision and outcome may at first seem awful to us, hence an evil decision by an evil AI, but in a long-term strategic sense (involving far too many variables for our human minds to process and foresee) it may have been the best option.

 

Something like calculating an acceptable number of casualties to save a certain number of people. Say a limited nuclear exchange results in 5 million people dead, but the way it was handled averted an almost certain all-out exchange in the future that would have led to 1 billion people dead. The 5 million people who died, and their families, certainly won't be comforted by that logic and would likely blame the cold-hearted machine that made the decision. 

 

What country do you know of that would respond to a limited nuclear strike with a limited nuclear response?  There are no winners in nuclear war, only losers.


1 hour ago, Renegade7 said:

 

Would you rather have a system making that call be capable of only logic, or of both?  I just posted a link with multiple cases where the logical thing to do was not the right thing to do.  Only a human would understand that the logical thing to do is not always the right thing to do.

 

 

What country do you know of that would respond to a limited nuclear strike with a limited nuclear response?  There are no winners in nuclear war, only losers.

 

For the first...I think it really depends. They both have their place. But an emotional decision will be worse than a logical one just as many times in the long run, because emotion is by nature short-sighted. Some of this goes back to the classic trolley dilemma: letting one person die (or killing one person) to save a much larger number of people. Pure emotion would generally say it's not permissible to take one person's life even if it saves five. But pure logic would say it is (as Spock himself said, "the needs of the many outweigh the needs of the few").

 

How do we judge what the "right" thing to do is? Is killing 1 to save 5 right? Is killing 1 million to save 1 billion right? Is allowing 5 people to die so 1 could live right? Is allowing 1 billion to die to save 1 million right? It's a tough question. Ideally a combination of logic and emotion...so I think we agree there. But in really difficult no-win situations it could be very hard to reconcile the two, which could lead to indecision in a crucial situation. 

 

For the second...I was just using a nuclear exchange as an example. But I think a limited nuclear exchange would be possible, especially if we're talking about super-intelligent AIs. They would be able to calculate far more variables and moves ahead than we would. I might look at a chess grandmaster's move and find it completely perplexing given what's on the board, but ten moves later I see it unfold. Theoretically AIs would be far better at calculating and predicting and modeling than we ever could be, so they could see and take into account variables we never would. 


3 hours ago, Larry said:

On a more serious note, this is why I really think we need to have a second-strike capability.  The notion being that we want those responsible, when WOPR reports multiple nukes inbound, to have the option of sitting it out and letting them hit, in the confidence that we will have enough weapons left afterward to respond, or not, once the supposed nukes have hit.  (When we can be certain that it's not a false alarm.)  

 

We have second-strike capability; the Boomers alone have a bunch.


2 hours ago, mistertim said:

 

For the first...I think it really depends. They both have their place. But an emotional decision will be worse than a logical one just as many times in the long run, because emotion is by nature short-sighted. Some of this goes back to the classic trolley dilemma: letting one person die (or killing one person) to save a much larger number of people. Pure emotion would generally say it's not permissible to take one person's life even if it saves five. But pure logic would say it is (as Spock himself said, "the needs of the many outweigh the needs of the few").

 

How do we judge what the "right" thing to do is? Is killing 1 to save 5 right? Is killing 1 million to save 1 billion right? Is allowing 5 people to die so 1 could live right? Is allowing 1 billion to die to save 1 million right? It's a tough question. Ideally a combination of logic and emotion...so I think we agree there. But in really difficult no-win situations it could be very hard to reconcile the two, which could lead to indecision in a crucial situation. 

 

I agree there's a point where our emotions can work against us. But that's what makes us human; it comes with the territory.  Take that out of the equation, and what are we protecting anymore?

 

I enjoy having this conversation even if we don't agree on everything, in the sense that I can't have this conversation with my laptop.  The world is grey; I'd rather two countries try to hash that out and do everything to avoid this. Can computers do that? Is that their place? That hesitation may be a weakness, but it's why we aren't all already dead.

 

This wouldn't happen in a vacuum; we wouldn't be the only country to have computers making the decision just in case there's no one left to make it, if we go this route.  Will they have the same safeguards ours does? Pandora's box.

 

Quote

For the second...I was just using a nuclear exchange as an example. But I think a limited nuclear exchange would be possible, especially if we're talking about super-intelligent AIs. They would be able to calculate far more variables and moves ahead than we would. I might look at a chess grandmaster's move and find it completely perplexing given what's on the board, but ten moves later I see it unfold. Theoretically AIs would be far better at calculating and predicting and modeling than we ever could be, so they could see and take into account variables we never would. 

 

I offer a compromise: instead of letting the computer fire nukes and say "don't worry, it'll make more sense after I make 9 more moves", just print out the 10 moves and let the people decide whether to do it or not. 

 

I don't have a problem with AI augmenting decision making; that's what we're seeing in a lot of practical AI today, like education and medical research. I just can't get down with letting it make life-and-death decisions.  In the future when we are all driving autonomous cars, car crashes won't be impossible, they'll just be different; there will always be car crashes. Autonomous car crashes should not end the world.

Edited by Renegade7

9 minutes ago, twa said:

 

We have second strike capability, just the Boomers alone have a bunch

 

I wouldn't want to base our decision-making capability around the assumption that any element of our technology is invincible.  I don't believe there's any Divine Law declaring that subs are always immune.  But yeah, it's certainly a part.  And why I want to keep them.  

 

I also wonder: do our subs have the ability to launch without Washington giving them a code or something?  At least from what I've read, our Minutemen can't.  One command silo has the controls for 10 missiles, but there are also two other silos (and they don't know which two) which don't have the power to launch, but which do have the power to countermand a launch.  Even if both officers in silo D-7 decide to launch, silos D-2 and D-9 each have the power to countermand the launch.  
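
That arrangement is essentially a veto gate stacked on a two-person rule. A hypothetical sketch of the logic as described above (the silo labels, counts, and procedure are taken from this post's description, not from any authoritative source):

```c
#include <stdbool.h>

/* Hypothetical sketch of the countermand arrangement described above:
 * the firing silo needs both officers' keys, and any monitoring silo
 * can veto the launch. Not real launch-control logic. */
bool launch_proceeds(bool officer1_key, bool officer2_key,
                     const bool monitor_vetoes[], int n_monitors)
{
    if (!(officer1_key && officer2_key))
        return false;              /* two-person rule at the firing silo */

    for (int i = 0; i < n_monitors; i++) {
        if (monitor_vetoes[i])
            return false;          /* any countermand kills the launch */
    }
    return true;
}
```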

 

Obviously, we can't do that with subs.  We can't give one sub the power to countermand another sub's launch decision, because the subs can't talk to each other. As such, I could see why we might want a system where each individual sub can't launch unless Washington tells them what the password for the launch computer is.  (And nobody on the sub knows it.) 

 

Still, I'm certain that somebody who knows a lot more about it than me has spent a lot of effort on the problem of weighing the chance of a rogue sub performing an unauthorized launch against the possibility that we might want the sub to be able to launch after the US has been wiped out.  

 


  • 2 months later...

Artificial intelligence 'too dangerous' to be released to the world rolled out

 

An AI that was deemed "too dangerous" to be released to the world has been rolled out.

 

GPT-2 is an AI that can take text in any style and create convincing new material.

 

For example, when it was fed George Orwell's classic novel 1984 it produced a plausible science fiction story set in China.

 

While makers OpenAI, which was originally co-founded by Elon Musk, have released all of their other algorithms for public use, they said they would never release GPT-2 because of the dangers of it being misused.

 

It would be too easy for it to completely flood the internet with "fake news" content.

 

It would be impossible for anyone to tell what was real and what was fake any more.

 

Researcher Jeremy Howard says that if the code was released it could "totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".

 

OpenAI agreed with him. They said in February: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model,"

 

"As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

 

But they’ve now gone back on that decision and released a much more powerful version of the code.

 

However, they say this model of the AI is only slightly more convincing to humans than the previous version, even though it has already done some things they hadn't actually taught it to do, like translate from one language to another.

 

Click on the link for the full article


  • 2 weeks later...
  • 6 months later...

I don't need AI to tell me who has a punchable face.

 

Artificial intelligence can make personality judgments based on our photographs

 

Russian researchers from HSE University and Open University for the Humanities and Economics have demonstrated that artificial intelligence is able to infer people's personality from 'selfie' photographs better than human raters do. Conscientiousness emerged as more easily recognizable than the other four traits. Personality predictions based on female faces appeared to be more reliable than those for male faces. The technology can be used to find the 'best matches' in customer service, dating or online tutoring.

 

The article 'Assessing the Big Five personality traits using real-life static facial images' will be published on May 22 in Scientific Reports.

 

Click on the link for the full article


  • 9 months later...

Biden urged to back AI weapons to counter China and Russia threats

 

The US and its allies should reject calls for a global ban on AI-powered autonomous weapons systems, according to an official report commissioned for the American President and Congress.

 

It says that artificial intelligence will "compress decision time frames" and require military responses humans cannot make quickly enough alone.

 

And it warns Russia and China would be unlikely to keep to any such treaty.

 

But critics claim the proposals risk driving an "irresponsible" arms race.

 

"This is a shocking and frightening report that could lead to the proliferation of AI weapons making decisions about who to kill," said Prof Noel Sharkey, spokesman for the Campaign To Stop Killer Robots.

 

"The most senior AI scientists on the planet have warned them about the consequences, and yet they continue.

 

"This will lead to grave violations of international law."

 

The report counters that if autonomous weapons systems have been properly tested and are authorised for use by a human commander, then they should be consistent with International Humanitarian Law.

 

The recommendations were made by the National Security Commission on AI - a body headed by ex-Google chief Eric Schmidt and ex-Deputy Secretary of Defense Robert Work, who served under Presidents Obama and Trump.

 

Other members include Andy Jassy, Amazon's next chief executive, Google and Microsoft AI chiefs Dr Andrew Moore and Dr Eric Horvitz, and Oracle chief executive Safra Catz.

 

Click on the link for the full article


I can see the utility of having automated control of SOME weapons. 

Although the only cases I can think of are defensive weapons, like CIWS or Aegis. (And I wonder how thoroughly those systems have actually been tested.)


3 hours ago, Larry said:

I can see the utility of having automated control of SOME weapons. 

Although the only cases I can think of are defensive weapons, like CIWS or Aegis. (And I wonder how thoroughly those systems have actually been tested.)

 

CIWS gets tested pretty regularly.  And I can attest to the fact that it will take out a bird that flies into its arc.

