Extremeskins

The Artificial Intelligence (AI) Thread


China


Strangelove redux: US experts propose having AI control nuclear weapons

 

Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button.

 

In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title to their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “It may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”
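The "detects, decides, and directs" loop with "predetermined response decisions" that Lowther and McGiffin describe is essentially a rule engine. A toy sketch of the concept (purely hypothetical; the article gives no implementation details, and every condition and response name here is invented):

```python
# Toy illustration of a "dead hand" rule engine: predetermined
# conditions are checked against sensor inputs and a canned response
# is selected, with no human in the loop. Entirely hypothetical.

RULES = [
    # (condition function, predetermined response)
    (lambda s: s["leadership_contact_lost"] and s["detonations_confirmed"],
     "execute retaliatory launch plan"),
    (lambda s: s["inbound_tracks"] > 0 and s["warning_confidence"] > 0.99,
     "alert remaining commanders"),
]

def decide(sensors: dict) -> str:
    """Return the first predetermined response whose condition holds."""
    for condition, response in RULES:
        if condition(sensors):
            return response
    return "no action"

print(decide({"leadership_contact_lost": True,
              "detonations_confirmed": True,
              "inbound_tracks": 0,
              "warning_confidence": 0.0}))
# -> execute retaliatory launch plan
```

The whole debate below is about that `for` loop: it fires on whatever its sensors report, true or not, with nobody left to veto it.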

 

In case handing over the control of nuclear weapons to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea here being that such a move would bring countries to the arms control negotiating table.

 

Still, the authors clearly appear to favor an artificial intelligence-based solution.

 

Click on the link for the full article

Edited by China

I'm convinced one of the reasons we haven't heard from intelligent aliens is that intelligent life is hard to develop and even harder to keep alive through its inevitable mistakes.  Alien civilizations probably destroy themselves all the time without any clear way to warn others; we'd be no different if we killed ourselves, too.


I thought we already had something similar to this as part of the mutually assured destruction strategy.  I read long ago, like in the '90s, that in the event of an all-out nuclear strike the US had systems in place to launch a nuclear counterstrike even if the chain of command was destroyed. 


12 hours ago, Destino said:

I thought we already had something similar to this as part of the mutually assured destruction strategy.  I read long ago, like in the '90s, that in the event of an all-out nuclear strike the US had systems in place to launch a nuclear counterstrike even if the chain of command was destroyed. 

 

Oh, we had it long before then.  

 

 


15 hours ago, DogofWar1 said:

Oh for ****'s sake there is an entire sub-genre of fiction about why this is a bad idea.

 

Can we not?

Oh come on Negative Nancy. It’s not like anyone ever finds software security flaws or hacks into multiple systems on a daily basis. And especially not “Russiar”, “Jyna”, or “Little Rocket Man”. Those guys suck at the internets. 

Edited by The Sisko

36 minutes ago, Larry said:

I could get behind handing over our nuclear weapons to a computer  

 

For the remainder of the Trump administration.  

 

 

Ha I came to post this same thing. 

 

On one hand, this is an absolutely horrible idea. On the other hand, it’s a way better idea than letting Donald Trump make decisions about nukes. 


I actually don't think it's a terrible idea. With hypersonic gliders being developed in the US, China and Russia, the amount of time to process the decision for launching nuclear weapons in response to a threat is orders of magnitude lower than before. I am not entirely sure that humans are capable of making rational decisions in scenarios involving nukes and hypersonic gliders.


15 minutes ago, No Excuses said:

I actually don't think it's a terrible idea. With hypersonic gliders being developed in the US, China and Russia, the amount of time to process the decision for launching nuclear weapons in response to a threat is orders of magnitude lower than before. I am not entirely sure that humans are capable of making rational decisions in scenarios involving nukes and hypersonic gliders.

 

It isn't a terrible idea at all, especially considering the recent advancements in ML and AI and the speed/technology increases we're seeing in warfare. All of the doomsday stuff (aka fiction) rests on the AI spontaneously becoming self-aware and deciding "**** humans," which is exceptionally unlikely to happen. I mean, there are places working nonstop to try to figure out whether sentient AI can actually be a thing, and yet some guys coding an ML system to control and predict a specific set of variables (in this case military movements and missiles) are going to stumble upon it accidentally? That's like accidentally building a nuclear weapon when you meant to build a garbage disposal. The chances are probably a billion to one against. 


4 minutes ago, No Excuses said:

I actually don't think it's a terrible idea. With hypersonic gliders being developed in the US, China and Russia, the amount of time to process the decision for launching nuclear weapons in response to a threat is orders of magnitude lower than before. I am not entirely sure that humans are capable of making rational decisions in scenarios involving nukes and hypersonic gliders.

 

Was not expecting that. 

 

I'd support lasers in space before letting a computer decide the best time to fire nukes.  Keep in mind we're talking about striking back in case there's no one left to pull the trigger, not stopping the nukes from getting to us; if we get to that point it's already over for us all anyway.

 

That moment when the proposed solution to maintaining the aura of mutually assured destruction is ensuring mutual destruction.  What's to stop an adversary from attacking what is in essence a security bypass and causing the US to nuke itself without needing anyone's approval?  Someone building that code could build a backdoor just for that.


3 minutes ago, mistertim said:

 

All of the doomsday stuff (aka fiction) rests on the AI spontaneously becoming self-aware and deciding "**** humans," which is exceptionally unlikely to happen. I mean, there are places working nonstop to try to figure out whether sentient AI can actually be a thing, 

 

Oh, man, actually the concern is not an AI spontaneously becoming self-aware but one being designed to be self-aware from the start, and our not being able to control it once it decides what to do.  I mean, man, how much technology do we have today that previous generations couldn't have imagined, or maybe didn't think possible?  When you calculate that risk, do you factor in what being wrong would look like?  The odds of something happening are just part of the story; the amount of damage done if it happens changes the risk level.
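That last point, that the odds alone don't capture the risk, is just expected loss: risk = probability × impact. A back-of-the-envelope sketch, with entirely made-up numbers:

```python
# Risk as probability times impact: a one-in-a-billion failure with
# civilization-scale damage can dominate a common failure with small
# damage. All numbers here are invented for illustration.

def expected_loss(p_failure: float, damage: float) -> float:
    return p_failure * damage

# "Billion to one" odds, enormous damage (arbitrary damage units):
catastrophic = expected_loss(1e-9, 1e12)
# Everyday odds, minor damage:
mundane = expected_loss(0.1, 100)

print(catastrophic > mundane)  # True: the rare catastrophe dominates
```

Which is the poster's argument in one line: a tiny probability multiplied by an effectively unbounded loss is not a small risk.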


5 minutes ago, Renegade7 said:

 

Was not expecting that. 

 

I'd support lasers in space before letting a computer decide the best time to fire nukes.  Keep in mind we're talking about striking back in case there's no one left to pull the trigger, not stopping the nukes from getting to us; if we get to that point it's already over for us all anyway.

 

That moment when the proposed solution to maintaining the aura of mutually assured destruction is ensuring mutual destruction.  What's to stop an adversary from attacking what is in essence a security bypass and causing the US to nuke itself without needing anyone's approval?  Someone building that code could build a backdoor just for that.

 

My understanding is that you would put the nuclear launch protocols under the control of an AI only in very specialized situations. Incoming hypersonics are one of them. From what I've read and listened to in talks by experts about hypersonics, the reaction time from when you know several glide vehicles carrying nukes are headed your way to detonation is likely in the range of 10-30 minutes, with practically no missile defenses possible against them.

 

That's a very short period of time for humans to determine the scope of the attack and the response that should be triggered.


1 hour ago, Larry said:

I could get behind handing over our nuclear weapons to a computer  

 

For the remainder of the Trump administration.

This is a horrible idea. Any AI worth its salt would conclude in milliseconds that with a POTUS like Tя☭mp, the US has lost its collective mind and needs to be eradicated.


4 minutes ago, Renegade7 said:

 

Oh, man, actually the concern is not an AI spontaneously becoming self-aware but one being designed to be self-aware from the start, and our not being able to control it once it decides what to do.  I mean, man, how much technology do we have today that previous generations couldn't have imagined, or maybe didn't think possible?  When you calculate that risk, do you factor in what being wrong would look like?  The odds of something happening are just part of the story; the amount of damage done if it happens changes the risk level.

 

Again, that's only how it works in fiction. Even if we had discovered how to build a self-aware AI (we haven't), nobody would put one in charge of something like that; and the chances of a non-sentient AI program suddenly becoming self-aware are probably a billion to one...if that. There are legitimate concerns about the ML/AI algorithm making the correct decisions across many different scenarios, but that's where the whole "training" of an ML system comes into play. 

Edited by mistertim

1 minute ago, No Excuses said:

 

My understanding is that you would put the nuclear launch protocols under the control of an AI only in very specialized situations. Incoming hypersonics are one of them. From what I've read and listened to in talks by experts about hypersonics, the reaction time from when you know several glide vehicles carrying nukes are headed your way to detonation is likely in the range of 10-30 minutes, with practically no missile defenses possible against them.

 

That's a very short period of time for humans to determine the scope of the attack and the response that should be triggered.

 

Agreed. If we're going to have this conversation, we may have to accept that mitigating that kind of risk can't be done from the surface of the earth; it has to be done from space.  But the second we do it, someone else will as well.  Is it worth it?  Yeah, I wouldn't trust a computer to decide whether to end the world based on what it thinks is happening.  Software is made by people, so software can be wrong too. Imagine what a false positive on that would look like. Is that worth it?
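The false-positive worry can be made concrete with Bayes' rule: when real attacks are vanishingly rare, even a very accurate warning system produces mostly false alarms. Illustrative numbers only:

```python
# Bayes' rule: P(attack | alarm) =
#   P(alarm | attack) * P(attack) / P(alarm)
# Even a 99.9%-accurate sensor is mostly wrong when attacks are rare.

def p_attack_given_alarm(p_attack: float,
                         p_alarm_given_attack: float,
                         p_alarm_given_no_attack: float) -> float:
    p_alarm = (p_alarm_given_attack * p_attack
               + p_alarm_given_no_attack * (1 - p_attack))
    return p_alarm_given_attack * p_attack / p_alarm

# Assume: attack on any given day is 1 in a million; the detector has
# a 99.9% hit rate and a 0.1% false-alarm rate (all invented numbers).
posterior = p_attack_given_alarm(1e-6, 0.999, 0.001)
print(posterior)  # roughly 0.001: ~99.9% of alarms would be false
```

Under those assumptions, an automated system that launches on "alarm" would be wrong almost every time it fires, which is the whole problem with conditions instead of approval.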

1 minute ago, mistertim said:

 

Again, that's only how it works in fiction. Even if we had discovered how to build a self-aware AI (we haven't), nobody would put one in charge of something like that; and the chances of a non-sentient AI program suddenly becoming self-aware are probably a billion to one...if that. There are legitimate concerns about the ML/AI algorithm making the correct decisions across many different scenarios, but that's where the whole "training" of an ML system comes into play. 

 

What do you mean it only works in fiction? 

 

How often has science fiction become science fact?  Specifically, saying it won't just become self-aware doesn't change that building a system that is already self-aware, not one that becomes self-aware out of nowhere, is a goal for some people.

 

The current nuclear system has so many different requirements for more than one source of approval; why create a system that needs conditions instead of approval?  We decimated the Iranian nuclear program by getting the system to believe a condition that was not true via malware. This is not worth that kind of risk, self-aware or not.


11 minutes ago, Renegade7 said:

 

What do you mean it only works in fiction? 

 

How often has science fiction become science fact?  Specifically, saying it won't just become self-aware doesn't change that building a system that is already self-aware, not one that becomes self-aware out of nowhere, is a goal for some people.

 

The current nuclear system has so many different requirements for more than one source of approval; why create a system that needs conditions instead of approval?  We decimated the Iranian nuclear program by getting the system to believe a condition that was not true via malware. This is not worth that kind of risk, self-aware or not.

 

Creating an AI that is self aware is absolutely a goal of multiple researchers. But that has nothing at all to do with developing an AI program/platform to potentially control or partially control military systems; the two are completely separate and would be programmed in completely separate ways. "AI" isn't some all encompassing thing where as soon as a self aware one is created every AI becomes self aware. Nobody is going to put a sentient AI in command of anything at all, let alone missiles (again, that's if a sentient AI can be built, which we don't know yet). 

 

As far as science fiction, it's more often wrong than it is right, if you actually look at it objectively. It will get some parts a little right but some massively wrong. 


4 minutes ago, mistertim said:

 

Creating an AI that is self aware is absolutely a goal of multiple researchers. But that has nothing at all to do with developing an AI program/platform to potentially control or partially control military systems; the two are completely separate and would be programmed in completely separate ways. "AI" isn't some all encompassing thing where as soon as a self aware one is created every AI becomes self aware. Nobody is going to put a sentient AI in command of anything at all, let alone missiles (again, that's if a sentient AI can be built, which we don't know yet). 

 

As far as science fiction, it's more often wrong than it is right, if you actually look at it objectively. It will get some parts a little right but some massively wrong. 

 

Never said everything that has limited AI will become self-aware just because one program does. 

 

A machine that can pull a trigger without human intervention doesn't need a reason; it just needs the command to do it, regardless of where that command comes from.  The T-1000s weren't self-aware; Skynet was.  

 

This gets touched on in different ways as technology improves and becomes more connected via the internet.  Something malicious can go through a far more thorough process of exploitation than humans can; sci-fi has transitioned to the threat of a self-aware AI having internet access.

 

AI by itself isn't a threat; self-aware AI is.  My specific issue with this plan isn't the system spontaneously deciding to kill all humans. It's that it's designed to make decisions on its own when there's no one left to pull the trigger: a way around people making the final call when we might actually still be around, on the grounds that we could either be targeted or make mistakes.

