The Artificial Intelligence (AI) Thread



Elon Musk joins hundreds calling for a pause on AI development https://www.cbsnews.com/news/elon-musk-open-letter-ai/

Billionaire Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang joined hundreds of others in signing an open letter calling for a six-month pause on AI experiments, warning that otherwise we could face "profound risks to society and humanity."

 

"Contemporary AI systems are now becoming human-competitive at general tasks," reads the open letter, posted on the website of Future of Life Institute, a non-profit. "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?"



https://www.vice.com/en/article/93kw7p/someone-asked-an-autonomous-ai-to-destroy-humanity-this-is-what-happened

 

Quote

A user of the new open-source autonomous AI project Auto-GPT asked it to try to “destroy humanity,” “establish global dominance,” and “attain immortality.” The AI, called ChaosGPT, complied and tried to research nuclear weapons, recruit other AI agents to help it do research, and send tweets trying to influence others.

 



Nuke-launching AI would be illegal under proposed US law

 

On Wednesday, US Senator Edward Markey (D-Mass.) and Representatives Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.) announced bipartisan legislation that seeks to prevent an artificial intelligence system from making nuclear launch decisions. The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for launching any nuclear weapon by an automated system without "meaningful human control."

 

“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” Markey said in a news release. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”

 

The new bill builds on existing US Department of Defense policy, which states that in all cases, "the United States will maintain a human 'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment."

 

The new bill aims to codify the Defense Department principle into law, and it also follows the recommendation of the National Security Commission on Artificial Intelligence, which called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons.

 

“While US military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” Buck said in a statement. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”

 

The new bill comes as anxiety grows over the future potential of rapidly advancing (and sometimes poorly understood and overhyped) generative AI technology, which prompted a group of researchers to call for a pause in the development of systems "more powerful" than GPT-4 in March.

 

Click on the link for the full article

 

[image: shall-we-play-a-game.jpg]

 



I tried my hand at AI image generation again...

 

I told the AI generator to create an image of "sexy cheerleaders in lingerie all sitting on a bed together," because I'm a guy and why not.

 

I also gave it an image prompt to use for reference.

 

This is what it created (might be NSFW):

[image attachment]

This is the image I gave it to use as a reference:

[image attachment]

I think I'm getting the hang of it 👍...

Google Co-founder Wants to Build AI as a “Digital God”

 

According to tech mogul Elon Musk, for some, such as Google co-founder Larry Page, the ultimate goal of the race to build artificial intelligence is to create a “digital god,” a silicon-based lifeform that “would understand everything in the world. . . . and give you back the exact right thing instantly.” Who would have even thought about discussions concerning a “digital god” just a few years ago? Will this dream become a reality?

 

Well, since the events of Genesis chapter 3, humans have been trying to create their own gods based on their own wisdom. This was the very temptation that Eve and then Adam fell for—attempting to become their own gods. So, it’s no surprise that their descendants want to do the same thing by crafting a god that, in their view, will solve their problems, answer their questions, and usher in utopia.

 

But digital AI is developed and programmed by . . . sinful human beings! So, their “god” will reflect that! As we’ve pointed out before, today’s AI, such as ChatGPT, is incredibly biased with leftist ideologies because it reflects the viewpoints of both those who program it and the voices it pulls from to answer questions.

 

Click on the link for more

 

About the Author (Ken Ham)


FTC Chair Lina Khan says she’s on alert for abusive A.I. use

 

The Federal Trade Commission is on alert for the ways that rapidly advancing artificial intelligence could be used to violate antitrust and consumer protection laws it’s charged with enforcing, Chair Lina Khan wrote in a New York Times op-ed on Wednesday.

 

“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote, echoing a theme the agency shared in a joint statement with three other enforcers last week.

 

In the op-ed, Khan detailed several ways AI might be used to harm consumers or the market, effects she believes federal enforcers should be looking for. She also compared the current inflection point around AI to the mid-2000s era in tech, when companies like Facebook and Google came to forever change communications, with substantial implications for data privacy that weren't fully realized until years later.

 

“What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” Khan wrote.

 

But, she said, “The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”

 

One possible effect enforcers should look out for, according to Khan, is the impact of only a few firms controlling the raw materials needed to deploy AI tools. That’s because that type of control could enable dominant companies to leverage their power to exclude rivals, “picking winners and losers in ways that further entrench their dominance.”

 

Khan also warned that AI tools used to set prices “can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination.”

 

“The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition,” she wrote.

 

Click on the link for the full article


My son watches AI content all the time. It sounds like AI reading Reddit stories, but it could be AI-generated to sound like Reddit stories.

 

I am still not convinced there isn't strong human intervention... there was a recent article on Ars about contractors making $15 an hour to validate AI. I am also pissed that the AI language models scraped the Internet for the content they trained themselves on.

 

Really want to show off AI? Start up a C-SPAN for a fake AI Congress...


Will Universal Basic Income Save Us from AI?

 

SAM ALTMAN, CEO of OpenAI, has ideas about the future. One of them is about how you’ll make money. In short, you won’t necessarily have to, even if your job has been replaced by a powerful artificial intelligence tool. But what will be required for that purported freedom from the drudgery of work is living in a turbo-charged capitalist technocracy. “In the next five years, computer programs that can think will read legal documents and give medical advice,” Altman wrote in a 2021 post called “Moore’s Law for Everything.” In another ten, “they will do assembly-line work and maybe even become companions.” Beyond that time frame, he wrote, “they will do almost everything.” In a world where computers do almost everything, what will humans be up to?

 

Looking for work, maybe. A recent report from Goldman Sachs estimates that generative AI “could expose the equivalent of 300 million full-time jobs to automation.” And while both Goldman and Altman believe that a lot of new jobs will be created along the way, it’s uncertain how that will look. “With every great technological revolution in human history . . . it has been true that the jobs change a lot, some jobs even go away—and I’m sure we’ll see a lot of that here,” Altman told ABC News in March. Altman has imagined a solution to that problem for good reason: his company might create it.

 

In November, OpenAI released ChatGPT, a large language model chatbot that can mimic human conversations and written work. This spring, the company unveiled GPT-4, an even more powerful AI program that can do things like explain why a joke is funny or plan a meal by scanning a photo of the inside of someone’s fridge. Meanwhile, other major technology companies like Google and Meta are racing to catch up, sparking a so-called “AI arms race” and, with it, the terror that many of us humans will very quickly be deemed too inefficient to keep around—at work anyway.

 

Altman’s solution to that problem is universal basic income, or UBI—giving people a guaranteed amount of money on a regular basis to either supplement their wages or to simply live off. “. . . a society that does not offer sufficient equality of opportunity for everyone to advance is not a society that will last,” Altman wrote in his 2021 blog post. Tax policy as we’ve known it will be even less capable of addressing inequalities in the future, he continued. “While people will still have jobs, many of those jobs won’t be ones that create a lot of economic value in the way we think of value today.” He proposed that, in the future—once AI “produces most of the world’s basic goods and services”—a fund could be created by taxing land and capital rather than labour. The dividends from that fund could be distributed to every individual to use as they please—“for better education, healthcare, housing, starting a company, whatever,” Altman wrote.

 

Click on the link for the full article


Congress wants to regulate AI, but it has a lot of catching up to do

 

For the past several weeks, Senate Majority Leader Chuck Schumer has met with at least 100 experts in artificial intelligence to craft groundbreaking legislation to install safeguards.

The New York Democrat is in the earliest stages of talking to members of his own party and Republicans to gauge their interest in getting behind a new proposed AI law.

 

"Our goal is to maximize the good that can come of [artificial intelligence]," Schumer said. "And there can be tremendous good, but minimize the bad that can come of it. ... But to do it is more easier said than done."

 

It's all part of a congressional race to try to catch up legislatively to exploding advances in AI.

 

Click on the link for the full article


Just now, Jabbyrwock said:

 

[meme image]

 

You think anyone in Congress actually writes legislation or regulations? Ha. That's what all the lobbyists do.


This AI is overblown bull****. Those memes aren't really "AI generated," just like the continuous Seinfeld clone, "Nothing, Forever."

 

Wake me up when AI independently joins sports-talk radio or calls into sports talk....
