Extremeskins

The Unofficial "Elon Musk trying to "Save Everyone" from Themselves (except his Step-Sister)" Thread...


Renegade7


On 11/30/2022 at 8:11 PM, tshile said:

I forget where I was reading it (I think a Twitter thread), but someone pointed out something that made me chuckle

 

Generally speaking, young people are considered more tech-savvy. However, because of said abstraction, it's actually becoming the opposite. For instance, many younger people don't understand the difference between Wi-Fi and the internet.

This escapes me, too...

Every time someone comes into the restaurant & needs the wi-fi password, I always think "Why?" 

My 6-year-old Samsung has internet, and if I tap the wi-fi button, it automatically connects.  I'm seriously challenged on most other things, but my old self has that much figured out. 
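For what it's worth, the Wi-Fi/internet distinction above is easy to state in code terms: being on Wi-Fi just means your device can talk to the local router; having internet means it can reach hosts beyond it. A minimal Python sketch using only the standard library (192.168.1.1 is a common default router address, an assumption, not a universal):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "On Wi-Fi but no internet": the first check can pass while the second fails.
on_lan = can_reach("192.168.1.1", 80)   # typical home-router address (assumption)
on_internet = can_reach("1.1.1.1", 53)  # a public DNS resolver out on the internet
```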


Neuralink scares TF out of me.

 

Should be limited to medical necessity only, with as little computer data being put into brains as possible.

 

Elon may want people to be able to tweet thoughts one day, but the risk of what could be discovered through this technology, and of it possibly becoming mandatory in the future, isn't worth it.


18 minutes ago, Renegade7 said:

Neuralink scares TF out of me.

 

Should be limited to medical necessity only, with as little computer data being put into brains as possible.

 

Elon may want people to be able to tweet thoughts one day, but the risk of what could be discovered through this technology, and of it possibly becoming mandatory in the future, isn't worth it.

Wow, you are going to a dark place quickly.
 

Listen, if it ever becomes mandatory that we put a computer chip in our brains that can read our thoughts and transmit them to Oceania, the type of society we want to live in will have died long before we get to that point.

 

 


1 hour ago, CousinsCowgirl84 said:

Wow, you are going to a dark place quickly.
 

Listen, if it ever becomes mandatory that we put a computer chip in our brains that can read our thoughts and transmit them to Oceania, the type of society we want to live in will have died long before we get to that point.

 

 

 

I agree, it's just that once it's out there, it's out there... China seems like a prime candidate to try it first and take it there.

 

When I went to a Splunk conference a couple years ago, they explained some of the reasoning behind why they stopped licensing to China. China, however, is notorious for its reverse-engineering capabilities, so it wouldn't shock me if they had some SPL equivalent they aren't telling us about yet.

 

I don't know if I trust Elon anymore to hold a similar line with them.  He clearly doesn't believe we can beat AI and what it could become, and is thus suggesting we integrate with it to increase the odds of a tie.

 

A lot of that is governments just not listening to him or the scientific community at large on where to draw lines with AI before we hit a point-of-no-return scenario.  There's still time, but there is a clock, and I don't feel comfortable with his tie strategy at all.


24 minutes ago, Renegade7 said:

 

I don't know if I trust Elon anymore to hold a similar line with them.  He clearly doesn't believe we can beat AI and what it could become, and is thus suggesting we integrate with it to increase the odds of a tie.

 

I'm not sure that's what Elon believes...he seems to just hate the idea of AGI in general. Which is basically irrelevant, because it's going to happen eventually, so the question is how we respond and prepare. Sticking our heads in the sand or putting our hands over our ears and yelling "lalalala I can't hear you!" isn't going to work.

 

But if he does actually believe this...then he's probably not wrong. Though I wouldn't frame it as being able to "tie" the AGIs, because that assumes some sort of inherent hostility from the AGI. Maybe it's more along the lines of a new evolutionary step for both of us.

 

Then again, it's also possible that AGIs are already here and in charge, and they're just letting us get on with doing our dumb everyday human stuff out of a sense of noblesse oblige, while they work on more interesting things.


2 hours ago, Renegade7 said:

Neuralink scares TF out of me.

 

Should be limited to medical necessity only, with as little computer data being put into brains as possible.

 

Elon may want people to be able to tweet thoughts one day, but the risk of what could be discovered through this technology, and of it possibly becoming mandatory in the future, isn't worth it.

I turn off Siri 

don’t use voice activation anything

Got free Alexas and Google Homes - threw them in the trash.
 

I don’t want spying **** in my house. I know how it works. It ships all your audio to the cloud to be processed, because the devices don’t have the horsepower to do NLP effectively.
 

So everything people say is a recording in a cloud somewhere. Honestly, being in other people's houses that have it bothers me - but I don’t say anything.
 

there is zero ****ing way I put a chip in my brain. 
 

And a chip that Elon made? 😂 no ****ing way. 
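The pipeline tshile describes - cheap wake-word detection on the device, everything else shipped off for server-side NLP - can be sketched in a few lines. This is a hypothetical illustration with invented names, not any vendor's actual code:

```python
def on_device_wake_word(frame: bytes) -> bool:
    """The only processing done locally: a cheap yes/no wake-word check.
    Real devices use a tiny model here; this stub just marks the boundary."""
    return frame.startswith(b"WAKE")

def handle_audio(frames, upload):
    """Ship everything heard after the wake word off-device for NLP."""
    uploaded = []
    listening = False
    for frame in frames:
        if on_device_wake_word(frame):
            listening = True
            continue
        if listening:
            upload(frame)            # the full audio leaves the house here
            uploaded.append(frame)
    return uploaded

# Only frames after the wake word are sent to the "cloud" (a list, in this toy).
sent = []
handle_audio([b"dinner chatter", b"WAKE", b"turn on the lights"], sent.append)
print(sent)  # [b'turn on the lights']
```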


10 minutes ago, Renegade7 said:

 

He's absolutely right that most people underestimate just how smart AGIs could become and what it would mean. But that's not surprising since it's really hard for something with an IQ in the ~100 range to understand something that has an IQ 3 or 4 times higher. And that's not even getting into the clock speed differences and perception of time.

 

I think where I disagree with Elon is that superhuman intelligence AGIs would necessarily be hostile for some reason or would feel the need to eliminate us. When you drive past an anthill, do you feel threatened by it or feel the need to kill all the ants? Of course not.

 

But I also doubt that the first AGIs would suddenly leap that far ahead. It would probably happen, especially if they're bootstrap seed AIs that are able to self-improve, but it wouldn't be immediate.

 

Either way, I think joining with them is a legitimate next step in evolution. Eventually we'll need to move beyond these biological bodies, especially if we're talking about interstellar travel where the timescales are in eons. That doesn't mean it should be compulsory at all. Everyone should have a choice. But I also don't think it's some horrifying thing.

 

But that's just me. Tim. Human person Tim. Not an AI Tim who wants to rule you. Human being Tim from Earth in normal body not computer. Trust us.


@mistertim we don't tell ants what to do, and they're fine with their place in a way humans wouldn't be able to handle: being second, not feeling in control or dominant on the planet.  We haven't had to deal with that since maybe the Neanderthals.

 

I agree AI is not likely to wipe us out; its near-limitless ability to calculate outcomes may show it's easier to give us the illusion of control than to exterminate us.  If it thinks like us, it will limit our ability to eliminate it from a survival standpoint, and resent humans' ability to have the final say on what it can or can't do, should or shouldn't do, and which rights we give it versus don't.

 

Which is why I wouldn't want them anywhere near the ability to directly connect to my brain.  Look how many times AI systems designed to learn from social media have turned into raging racist, sexist assholes.  Regardless of why we keep seeing that, odds are they will look down on us and not see us as equals.

 

Trying to "meet them halfway" via integration sounds more like a capitulation to that reality than an actual tie. Waiting for them to decide what to do with the ability to directly connect to our brains is a no-go for me.

 

AI is only as dangerous as what it's allowed to do in the physical realm, throw a hissy fit on an air-gapped mainframe all you want, I don't care.


Hate Speech’s Rise on Twitter Is Unprecedented, Researchers Find

 

Before Elon Musk bought Twitter, slurs against Black Americans showed up on the social media service an average of 1,282 times a day. After the billionaire became Twitter’s owner, they jumped to 3,876 times a day.

 

Slurs against gay men appeared on Twitter 2,506 times a day on average before Mr. Musk took over. Afterward, their use rose to 3,964 times a day.

 

And antisemitic posts referring to Jews or Judaism soared more than 61 percent in the two weeks after Mr. Musk acquired the site.
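For scale, the percentage jumps implied by those daily averages can be checked directly:

```python
def pct_increase(before: float, after: float) -> float:
    """Percentage change from a pre-takeover daily average to a post-takeover one."""
    return (after - before) / before * 100

print(f"anti-Black slurs: +{pct_increase(1282, 3876):.0f}%")  # +202%, roughly tripled
print(f"anti-gay slurs:   +{pct_increase(2506, 3964):.0f}%")  # +58%
```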

 

These findings — from the Center for Countering Digital Hate, the Anti-Defamation League and other groups that study online platforms — provide the most comprehensive picture to date of how conversations on Twitter have changed since Mr. Musk completed his $44 billion deal for the company in late October. While the numbers are relatively small, researchers said the increases were atypically high.

 

Click on the link for the full article


46 minutes ago, Renegade7 said:

@mistertim we don't tell ants what to do, and they're fine with their place in a way humans wouldn't be able to handle: being second, not feeling in control or dominant on the planet.  We haven't had to deal with that since maybe the Neanderthals.

 

I agree AI is not likely to wipe us out; its near-limitless ability to calculate outcomes may show it's easier to give us the illusion of control than to exterminate us.  If it thinks like us, it will limit our ability to eliminate it from a survival standpoint, and resent humans' ability to have the final say on what it can or can't do, should or shouldn't do, and which rights we give it versus don't.

 

Which is why I wouldn't want them anywhere near the ability to directly connect to my brain.  Look how many times AI systems designed to learn from social media have turned into raging racist, sexist assholes.  Regardless of why we keep seeing that, odds are they will look down on us and not see us as equals.

 

Trying to "meet them halfway" via integration sounds more like a capitulation to that reality than an actual tie. Waiting for them to decide what to do with the ability to directly connect to our brains is a no-go for me.

 

AI is only as dangerous as what it's allowed to do in the physical realm, throw a hissy fit on an air-gapped mainframe all you want, I don't care.

 

I think a lot of this still goes back to us not truly comprehending just how smart something like that would be.

 

Humans would probably be fine with our place if superhuman AGIs wanted us to be fine with our place. Not because they're violently oppressing us and we hate their dominance, but because we wouldn't be smart enough to understand what they're doing or possibly even realize that they're there. Do chickens resent us for being able to build particle accelerators? No, because it's meaningless to them. We give them cozy chicken coops to live their lives in and they're just dandy with that because they're not smart enough to know any better.

 

Same thing with an air-gapped system. That probably wouldn't stop something with an IQ of 500 because it would see so many avenues of escape or communication that we simply couldn't even think of. Maybe rearranging power flows on their system boards to create EM fields that excite air molecules and allow communication that way. Who knows.


3 hours ago, tshile said:

I turn off Siri 

don’t use voice activation anything

Got free Alexas and Google Homes - threw them in the trash.
 

I don’t want spying **** in my house. I know how it works. It ships all your audio to the cloud to be processed, because the devices don’t have the horsepower to do NLP effectively.
 

So everything people say is a recording in a cloud somewhere. Honestly, being in other people's houses that have it bothers me - but I don’t say anything.
 

there is zero ****ing way I put a chip in my brain. 
 

And a chip that Elon made? 😂 no ****ing way. 

If you have a smart phone, you're most likely being "spied on" and listened to as is.  

 

It's scary how you see ads for stuff you spoke about a few hours earlier, when you go into Twitter or Facebook.


2 hours ago, Renegade7 said:

it thinks like us, it will limit our ability to eliminate it from a survival standpoint, and resent humans' ability to have the final say on what it can or can't do, should or shouldn't do, and which rights we give it versus don't.

It's been a long time since I sat through grad-level AI courses, so maybe things have changed… but…

 

AI had a bad rap for a while for promising grand things and then stalling out, unable to get close. There's an entire era called the AI Winter, where the subject was essentially abandoned due to the relentless mocking and ridicule from the scientific community about how far it fell short of its promises.
 

What kick-started it again was serious advances in probabilistic statistics, which gave birth to the current era of AI, where probabilistic statistics is leaned on heavily. Very, very heavily. Even things like genetic mutation algorithms at their heart rely on probabilistic statistics to justify their likelihood of succeeding in generating an algorithm that performs the way you want.
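As a toy illustration of where the probabilistic statistics sits in a genetic mutation algorithm: selection keeps the fitter half of the population each generation, and mutation flips each bit with a fixed probability. All names and parameters here are invented for illustration:

```python
import random

def evolve(fitness, pop_size=50, genome_len=16, mutation_rate=0.05, generations=100):
    """Evolve bit-string genomes toward higher fitness."""
    random.seed(0)  # fixed seed so the example is reproducible
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the fitter half
        # mutation: each bit of each survivor's child flips with probability p
        children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(sum(best))  # climbs to (or very near) the all-ones genome
```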

Which is to say, if you actually go through the history of AI, you'll find the bad rap was completely unwarranted - the real problem along the way was waiting for other fields of science to catch up so AI could begin moving forward again.
 

(A huge holdup right now, not related to what I’m otherwise talking about, is robotics. Robotics is making huge leaps, but the difficulties with computer vision, making smooth motions, size of equipment, etc. are holdups. AI simply advances faster than robotics but heavily relies on robotics to move forward…)

 

But, back to the point of why I’m quoting you: when I left the field, the hang-up was with cognitive and neurological biology. In fact, many of the university AI departments had begun partnering with their neuroscience schools to collaborate.
 

The idea back then was that the human brain is actually remarkable at what it does. The problem is it doesn’t do it very fast. Computers can do work significantly faster. But modeling the human brain into a system was impossible - only a very small % of the human brain had even been mapped at the time. But it was the highest-priority work, because that was the trajectory of the field.
 

Which is to say, the goal is (or was) to make AI operate exactly like the human brain - but at exponentially higher speeds. Subtract emotions and bias, and perform significantly faster.
 

I’ve been out of the field for over a decade, so I'm not entirely sure if they’re still on that track or how far they’ve come. But your comment about it thinking like us made me think to post this.

11 minutes ago, purbeast said:

If you have a smart phone, you're most likely being "spied on" and listened to as is.  

 

It's scary how you see ads for stuff you spoke about a few hours earlier, when you go into Twitter or Facebook.

Yes and that creeps me out too. 
 

But - I’m not interested in making it worse with extra features and devices :) 


@tshile I definitely don't believe AI has to match what our brain is capable of to do a lot of damage.

 

Think of how much damage some of us do without using our brains at all : )

 

Does that mean we'll never see an AI that, at minimum, thinks of its own survival and looks down on us as inferior?  That's some very primal stuff we see at different levels of the animal kingdom.

 

Not an expert at all on this, but along the lines of where you're going: when we look at our brain, multiple areas are firing at the same time for even ultra-basic ****.  So I'm not sure how or why we'd start with stuff that might be found in the bottom of our brain (the more animalistic part that evolved first, before the rest of our brain) in an attempt to replicate the stuff we talk about in clichés as scattered around the rest of it, like emotions or dreams.

 

I don't believe androids need to dream of electric sheep to break a law of robotics on their own; it's the "want to" part that I think you're right we might not figure out anytime soon, given our struggle to understand something as complex as our own brain before making something similar to it.

 

Unfortunately, the only thing that comes close to what we're capable of is what we're capable of, so "never" is a long damn time once we set our minds on something.  Maybe not in our lifetime, though.

 

The idea of AI taking control of our brains like some Westworld Season 4 nonsense may be romantic, but this technology will start with humans trying it first.  As long as the software to do it is made by imperfect humans, there will be imperfections waiting to be exploited, so even altruistic applications, such as medical ones, could leave people sitting ducks.  And that's not counting what the technology could do in the hands of fellow humans with malicious intent.

 

We're playing with something we do not understand and potentially opening it up to people who cannot be trusted with it.  That's a totally separate issue from what AI could or could not potentially do with it.


9 minutes ago, Renegade7 said:

Not an expert at all on this, but along the lines of where you're going: when we look at our brain, multiple areas are firing at the same time for even ultra-basic ****.  So I'm not sure how or why we'd start with stuff that might be found in the bottom of our brain (the more animalistic part that evolved first, before the rest of our brain) in an attempt to replicate the stuff we talk about in clichés as scattered around the rest of it, like emotions or dreams.


The human brain's ability to recall, process emotions, and tie all the senses together, while also having the intelligence to wonder and be creative - music, art, sports, slang, interaction with each other… it’s incredible. That is why intelligence is basically what we are, why artificial intelligence heavily seeks to replicate what we are, and why we have the Turing Test.
 

NLP was considered the most difficult of the five main subcategories of AI, and I believe it still is. And the most difficult component of NLP: understanding sarcasm.
 

Understanding how sarcasm works, to the level that an agent can understand or perform it, is incredibly difficult. To give you an idea of how powerful the human brain is: we don’t use much of it, on average. Think of the smartest people you know - now imagine if their brains worked 10,000x faster and emotion and bias were removed…

(I agree with what you’re saying in your post - just adding to the conversation)
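A toy illustration of the sarcasm problem described above: a naive bag-of-words sentiment scorer sees only the literal words, so a sarcastic complaint stuffed with positive words scores as positive. The word lists and names here are invented:

```python
# Invented toy word lists -- real sentiment lexicons are far larger.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def literal_sentiment(text: str) -> int:
    """Bag-of-words score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sincere = "I love this wonderful phone"
sarcastic = "Oh great, my phone is broken again. Just perfect."

print(literal_sentiment(sincere))    # 2 -> literal reading matches the intent
print(literal_sentiment(sarcastic))  # 1 -> scored "positive", but it's a complaint
```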

