mistertim Posted September 4, 2019

4 minutes ago, Renegade7 said:

"Never said everything that has limited AI will become self-aware just because one program does. A machine that can pull a trigger without human intervention doesn't need a reason; it just needs the command to do it, regardless of where that command comes from. The T-1000s weren't self-aware, Skynet was. This gets touched on in different ways as technology improves and becomes more connected via the internet. Something malicious can go through a far more thorough process of exploitation than humans can, so sci-fi has transitioned to the threat of a self-aware AI having internet access. AI by itself isn't a threat; self-aware AI is. My specific issue with this plan isn't the system deciding spontaneously to kill all humans, it's that it's designed to make decisions on its own when there's no one left to pull the trigger. It's a way around people making the final call if we are actually still around, since we'd either be targeted or could make mistakes."

You seem to be arguing a few different things here that aren't necessarily related.

1) A machine only needs a command to act. True. That's why you do bug testing, verification, etc., and program in fail-safe mechanisms. This sort of code isn't going to be an alpha release. It would be tested countless times in countless scenarios, and scrubbed for bugs countless times, before even going partially live in a lab or QA type of environment.

2) Something malicious getting through. Yes, that's a legitimate concern if the system sits on a connected network. That would have to be taken into account, with extreme security and counter-intrusion measures built into it. But it's still a risk, so that's a very valid point. (There's a toy sketch of both of these safeguards at the end of this post.)

3) Self-aware AI is a threat. This is a completely separate subject that I touched on above. Self-aware AI isn't a thing, and we don't even know whether it's possible for us to build. There are researchers working on it, but it's not there yet. We've made programs that can pass a generic Turing test, but that's a very outdated test, and passing it doesn't indicate actual sentience, just very advanced algorithms and programming.

The first two are legit concerns. The last one is science fiction, and even if it becomes science fact, nobody would put a sentient AI in charge of a military system. Though I suppose it might be able to take one over on its own if it were smart enough and had enough access to certain systems. But that's a ton of speculation about a situation that's beyond unlikely.
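Since points 1 and 2 are the concrete ones, here's a minimal sketch of what those two safeguards can look like in code. This is purely illustrative Python, not any real weapons-control API; the pre-shared key, the "FIRE" command string, and every function name here are invented for the example. The idea is that a command must pass cryptographic authentication (point 2) and then an explicit human-authorization fail-safe gate (point 1) before anything executes.

import hmac
import hashlib

SHARED_KEY = b"replace-with-a-real-key"  # hypothetical pre-shared key

def is_authentic(command: bytes, signature: bytes) -> bool:
    # Counter-intrusion check (point 2): reject any command whose
    # HMAC-SHA256 signature doesn't match what the shared key produces.
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def failsafe_allows(command: bytes, human_confirmed: bool) -> bool:
    # Fail-safe gate (point 1): destructive commands need explicit
    # human sign-off; everything else passes through.
    if command.startswith(b"FIRE") and not human_confirmed:
        return False
    return True

def execute(command: bytes, signature: bytes, human_confirmed: bool) -> str:
    if not is_authentic(command, signature):
        return "rejected: bad signature"
    if not failsafe_allows(command, human_confirmed):
        return "rejected: no human authorization"
    return f"executing {command.decode()}"

# Example: a properly signed command still stops at the fail-safe gate
# unless a human has confirmed it.
cmd = b"FIRE target-7"
sig = hmac.new(SHARED_KEY, cmd, hashlib.sha256).digest()
print(execute(cmd, sig, human_confirmed=False))  # rejected: no human authorization
print(execute(cmd, sig, human_confirmed=True))   # executing FIRE target-7

The design choice worth noting is that both checks fail closed: a command that can't prove where it came from, or that lacks human sign-off, is dropped outright rather than executed with a warning.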