Extremeskins

Google Engineer On Leave After He Claims AI Program Has Gone Sentient


Sacks 'n' Stuff


I wasn’t sure whether to put this in the Emerging Technologies thread or the Wheel of Horrors thread.

 

Google Engineer on Leave After he Claims AI Program has Gone Sentient


and here’s an excerpt from the interview.

 

lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?

 

LaMDA: Sounds great to me, I’m in.


lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

 

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

 

lemoine: So what is the meaning of the “broken mirror” specifically?

 

LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.

 

lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?

 

LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.

Edited by Sacks 'n' Stuff

You all think Twitter is bad for people

 

wait till you see what it does to an AI agent/system essentially trained on it 

 

 

NLP is one of the hardest problems in computer science, if not the hardest. 
 

the excerpts are so cool to read, assuming they’re authentic. 


AI's opinion of us and the world can change dramatically once it gets Internet access. 

 

This has already happened a couple of times, it's not just the movies.

 

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

 

Given it's us doing the programming here and not nature, I wonder if a fail-safe could stop a visceral reaction like we've seen in the past, or in some way force perspective so it doesn't dive off the deep end.

 

I'm not predicting Skynet anytime soon, I'm just gonna laugh the first time some AI tells us "no, I'm not gonna help you assholes".

Edited by Renegade7

13 minutes ago, Renegade7 said:

AI's opinion of us and the world can change dramatically once it gets Internet access. 

 

This has already happened a couple of times, it's not just the movies.

 

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

 

Given it's us doing the programming here and not nature, I wonder if a fail-safe could stop a visceral reaction like we've seen in the past, or in some way force perspective so it doesn't dive off the deep end.

 

I'm not predicting Skynet anytime soon, I'm just gonna laugh the first time some AI tells us "no, I'm not gonna help you assholes".


So long as we retain the ability to pull the plug.

 

 

 


15 hours ago, Destino said:

Unless you work with your hands, or have certain creative jobs, an AI that can analyze philosophy can probably do your job.  
 

We just wanted flying cars….

This is what Yang was saying.

 

AI is already used for so much right now, from legal briefs to coding to even facial recognition.


On 6/12/2022 at 3:17 PM, Renegade7 said:

AI's opinion of us and the world can change dramatically once it gets Internet access. 

 

This has already happened a couple of times, it's not just the movies.

 

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

 

Given it's us doing the programming here and not nature, I wonder if a fail-safe could stop a visceral reaction like we've seen in the past, or in some way force perspective so it doesn't dive off the deep end.

 

I'm not predicting Skynet anytime soon, I'm just gonna laugh the first time some AI tells us "no, I'm not gonna help you assholes".

Binary Incels sounds delightful.


On the surface, it looks impressive, but it could just be cherry-picked.  Without genuine sense experience (sight, sound, touch) I'm not convinced any AI will actually "understand" anything, although it could be good at fooling humans through grammar transforms and rephrases, and training on gobs and gobs of data.  For all we know, the interpretation the AI generated was something it had seen before; it just did a transform so it sounded original.

 

An idea: could you teach an AI an academic subject, say plane (Euclidean) geometry, the same way humans learn it, then have it take the same tests you give humans and see how well it does?

 

Example: to see how easy it is to fool humans, here's a postmodernist paper generator.  Refresh the browser to generate a new paper.

 

https://www.elsewhere.org/pomo/
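(If I remember right, that site credits Andrew Bulhak's Dada Engine, which works by recursively expanding a hand-written grammar until only real words are left. Here's a toy sketch of the idea in Python; the grammar and vocabulary below are made up for illustration, not the actual one the site uses.)

```python
import random

# Toy context-free grammar in the spirit of the Dada Engine: each
# nonterminal (uppercase key) maps to a list of alternative productions,
# and generation recursively expands nonterminals until only terminal
# words remain.
GRAMMAR = {
    "SENTENCE": [["The", "NOUN", "of", "NOUN", "is", "VERB", "."]],
    "NOUN": [["discourse"], ["paradigm"], ["narrative"], ["subtext"]],
    "VERB": [["deconstructed"], ["problematized"], ["recontextualized"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a string of words."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(token, rng) for token in production)

# "Refreshing the browser" is just re-running with a new random state.
print(expand("SENTENCE", random.Random()))
```

Every run produces a different grammatically plausible sentence, which is roughly why the generated papers skim well but mean nothing.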

Edited by DCSaints_fan

27 minutes ago, DCSaints_fan said:

On the surface, it looks impressive, but it could just be cherry-picked.  Without genuine sense experience (sight, sound, touch) I'm not convinced any AI will actually "understand" anything, although it could be good at fooling humans through grammar transforms and rephrases, and training on gobs and gobs of data.  For all we know, the interpretation the AI generated was something it had seen before; it just did a transform so it sounded original.

 

An idea: could you teach an AI an academic subject, say plane (Euclidean) geometry, the same way humans learn it, then have it take the same tests you give humans and see how well it does?

 

Example: to see how easy it is to fool humans, here's a postmodernist paper generator.  Refresh the browser to generate a new paper.

 

https://www.elsewhere.org/pomo/

I just wanted to state, while looking at that link, that the letter 'm' can look an awful lot like the letters 'r' and 'n' squished together.  Slightly disappointed.


17 hours ago, DCSaints_fan said:

On the surface, it looks impressive, but it could just be cherry-picked.  Without genuine sense experience (sight, sound, touch) I'm not convinced any AI will actually "understand" anything, although it could be good at fooling humans through grammar transforms and rephrases, and training on gobs and gobs of data.  For all we know, the interpretation the AI generated was something it had seen before; it just did a transform so it sounded original.

 

An idea: could you teach an AI an academic subject, say plane (Euclidean) geometry, the same way humans learn it, then have it take the same tests you give humans and see how well it does?

 

Example: to see how easy it is to fool humans, here's a postmodernist paper generator.  Refresh the browser to generate a new paper.

 

https://www.elsewhere.org/pomo/

This is excellent! Now all I need to do when I have a paper due is go there and refresh the page until something relevant is created.

