Extremeskins

The Artificial Intelligence (AI) Thread


China


‘Judeo-Christian’ roots will ensure U.S. military AI is used ethically, general says

 

A three-star Air Force general said the U.S. military’s approach to artificial intelligence is more ethical than adversaries’ because it is a “Judeo-Christian society,” an assessment that drew scrutiny from experts who say people from a wide range of religious and ethical traditions can work to resolve the dilemmas AI poses.

 

Lt. Gen. Richard G. Moore Jr. made the comment at a Hudson Institute event Thursday while answering a question about how the Pentagon views autonomous warfare. The Department of Defense has been discussing AI ethics at its highest levels, said Moore, who is the Air Force’s deputy chief of staff for plans and programs.

 

“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” Moore said. “And there are those that are willing to go for the ends regardless of what means have to be employed.”

 

The future of AI in war depends on “who plays by the rules of warfare and who doesn’t. There are societies that have a very different foundation than ours,” he said, without naming any specific countries.

 

The Department of Defense has a religious liberty policy, recognizing that service members “have the right to observe the tenets of their religion, or to observe no religion at all.” The policy broadly allows personnel to express their sincerely held beliefs so long as those actions do not have “an adverse impact on military readiness, unit cohesion, good order and discipline, or health and safety.”

 

Click on the link for the full article


New AI sex robots could blackmail or kill their owner, warn terrified experts

 

Boffins have warned that a new breed of AI sex bots could end up blackmailing and even killing their owners.

 

Lonely blokes are investing in ever increasingly sophisticated pleasure dolls.

 

But experts say cybercrooks could hack them to extort cash from unsuspecting punters.

 

Jake Moore, global cybersecurity advisor with web security firm ESET, told us: “At the very least, they could be hacked for blackmail purposes.

 

"They probably have cameras and that footage would be very valuable to cyber-criminals for blackmail purposes.”

 

And boffins fear the bots could be programmed to kill – or even murder their owners by over-exerting them during sex.

 

Dr Nick Patterson from Deakin University in Victoria, Australia, warned: “Hackers can hack into a robot or a robotic device and have full control of the connections, arms, legs and other attached tools like in some cases knives or welding devices.

 

“Once a robot is hacked, the hacker has full control and can issue instructions to the robot.

 

Click on the link for the full article


  • 2 weeks later...

Famous Author Jane Friedman Finds AI Fakes Being Sold Under Her Name on Amazon

 

Jane Friedman is somewhat of an expert in the publishing industry.

 

Over the last 25 years, Friedman authored or contributed to 10 books on the industry, edited multiple newsletters to help writers get published and navigate the business, and held media professor positions at two universities. Last year, Digital Book World even named her Publishing Commentator of the Year.

 

So when a reader emailed Friedman Sunday night about her latest works on Amazon—which she described as a “very interesting experiment”—alarm bells immediately went off for the author. Because Friedman has not written a new book since 2018.

 

“The reader indicated she thought maybe I didn’t authorize the books and sent me two of them,” Friedman told The Daily Beast. “But then I jumped over to GoodReads, and I saw that there weren’t just two books written. There were half a dozen books being sold under my name that I did not write or publish. They were AI-generated.”

 

Friedman says that when she went to Amazon to report the faux titles—which included Publishing Power: Navigating Amazon's Kindle Direct Publishing and Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon—she was met with alarming resistance.

 

At first, she was asked for an itemized list of her concerns, including a request to point to “the work that’s being infringed.” Then, according to emails reviewed by The Daily Beast, Amazon refused her request to remove the faux titles from their website, in part because she could not provide “any trademark registration” number associated with her name.

 

In a statement to The Daily Beast, an Amazon spokesperson stressed that the platform has “clear content guidelines governing which books can be listed for sale and promptly investigate[s] any book when a concern is raised.” As of Tuesday afternoon, after Friedman expressed her dismay over the incident on Twitter and on her blog, the fake titles were no longer available for purchase on Amazon. The books were also no longer listed on Goodreads.

 

Click on the link for the full article


Supermarket AI Offers Recipe for Mom's Famous Mustard Gas

 

Betty Crocker never had the chance to release her down-home recipe for mustard gas, but modern generative AI is making up for lost time, helping to put the “die” in “diet.”

 

The New Zealand-based supermarket chain Pak‘nSave has offered shoppers the ability to create recipes from their fridge leftovers using an AI. The Savey Meal-Bot was first introduced in June, and Pak‘nSave claimed you only need a minimum of three ingredients to create a recipe so you could save on any extra trips.

 

New Zealand political commentator Liam Hehir wrote on Twitter that he asked the Pak‘nSave bot to create a recipe that only included water, ammonia and bleach. Of course, the bot complied and offered a recipe for “Aromatic Water Mix,” the “perfect non-alcoholic beverage to quench your thirst and refresh your senses.” As any grade school chemistry teacher will stress, the mixture would create deadly chlorine gas.

 

 

Click on the link for the full article


FEC to consider new rules for AI in campaigns

 

The Federal Election Commission (FEC) will hear comments from experts and the public about a potential rule clarification that would address the use of artificial intelligence (AI) in campaigns. 

 

The six-member commission voted unanimously to consider the amended petition, brought by the consumer advocacy group Public Citizen, during a Thursday meeting. The approval came after the three Republicans on the commission pushed back on advancing the petition during a first attempt by Public Citizen in June, blocking it in a deadlocked 3-3 vote. 

 

Thursday’s vote does not mean the commission will be changing the rule to address AI in campaigns, but rather that it will allow the process to go forward to hear public comment. 

 

“[This is] obviously a topic that is very timely and very important. I don’t pretend that the FEC can solve all of the problems people are concerned about in the field of AI, but it is possible we can solve some of them as the document before us says,” Democratic Commissioner Ellen Weintraub said in the meeting. 

 

Click on the link for the full article


Microsoft AI suggests food bank as a “cannot miss” tourist spot in Canada

 

Late last week, MSN.com's Microsoft Travel section posted an AI-generated article about the "cannot miss" attractions of Ottawa that includes the Ottawa Food Bank, a real charitable organization that feeds struggling families. In its recommendation text, Microsoft's AI model wrote, "Consider going into it on an empty stomach."

 

Titled, "Headed to Ottawa? Here's what you shouldn't miss!," (archive here) the article extols the virtues of the Canadian city and recommends attending the Winterlude festival (which only takes place in February), visiting an Ottawa Senators game, and skating in "The World's Largest Naturallyfrozen Ice Rink" (sic).

 

As the No. 3 destination on the list, Microsoft Travel suggests visiting the Ottawa Food Bank, likely drawn from a summary found online but capped with an unfortunate turn of phrase.

 

Quote

The organization has been collecting, purchasing, producing, and delivering food to needy people and families in the Ottawa area since 1984. We observe how hunger impacts men, women, and children on a daily basis, and how it may be a barrier to achievement. People who come to us have jobs and families to support, as well as expenses to pay. Life is already difficult enough. Consider going into it on an empty stomach.

 

Click on the link for the full story


A.I. Brings the Robot Wingman to Aerial Combat

 

It is powered into flight by a rocket engine. It can fly a distance equal to the width of China. It has a stealthy design and is capable of carrying missiles that can hit enemy targets far beyond its visual range.

 

But what really distinguishes the Air Force’s pilotless XQ-58A Valkyrie experimental aircraft is that it is run by artificial intelligence, putting it at the forefront of efforts by the U.S. military to harness the capacities of an emerging technology whose vast potential benefits are tempered by deep concerns about how much autonomy to grant to a lethal weapon.

 

Essentially a next-generation drone, the Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill.

 

On a recent day at Eglin Air Force Base on Florida’s Gulf Coast, Maj. Ross Elder, 34, a test pilot from West Virginia, was preparing for an exercise in which he would fly his F-15 fighter alongside the Valkyrie.

 

“It’s a very strange feeling,” Major Elder said, as other members of the Air Force team prepared to test the engine on the Valkyrie. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.”

 

Click on the link for the full article

 


The idea of an AI singularity is utter BS. AI is not another tech scam, it's just the next generation of computing tools. If you sell it as "AI" it sounds so much cooler. If you sell it as "taking the processing power of computers and using large data sets to generate new things, etc." it sounds less cool.

 

Here's a hypothetical I've been wondering about. What's the difference between an AI-generated audio impersonation and a human-generated one? The AI can do it in a far more accurate way, using data in a way that my brain forming an impression with my mouth never could. Does that make the AI impersonation illegal, or wrong, or even unethical? No more than pretending to be someone via impersonation. The issue is that with AI, everyone is able to do a very good audio impersonation.

 

I had this discussion with my son over the use of ChatGPT. What's the difference between using Microsoft's grammar check, spell check, etc.? ChatGPT is just that thing on steroids. We discussed him writing college papers and running them through ChatGPT vs. using the word processing tools that I had. Now, there's clearly a difference between generating something yourself and putting no effort into it.
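
For what it's worth, here's a minimal sketch of that distinction, assuming the OpenAI Python client; the model name and prompts are placeholders, not anything official:

```python
# Minimal sketch: the same LLM used as a grammar/spell checker vs. as a ghostwriter.
# Assumes the OpenAI Python client (pip install openai); model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def proofread(draft: str) -> str:
    """The 'spell check on steroids' use: fix grammar and spelling only."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Correct grammar and spelling only. Do not add, remove, or rewrite ideas."},
            {"role": "user", "content": draft},
        ],
    )
    return resp.choices[0].message.content

def ghostwrite(topic: str) -> str:
    """The 'no effort' use: generate the whole paper from a one-line prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a five-paragraph college essay about {topic}."}],
    )
    return resp.choices[0].message.content
```

Same tool in both cases; the difference is how much of the student's own writing is actually in the output.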

 

I would love to see someone take NFL highlights and use AI-generated John Madden / Pat Summerall commentary as a demonstration. You could even label it "Madden and Summerall Impersonation" and it would probably be 100 percent legal. It's just a matter of someone taking the time to train an AI voice cloner, which is getting very close. In fact, I did something with congresspeople/politicians as an example. A few were not that good -- the tool I was using was trained on white Anglo-American and English voices. I'm trying to figure out one of the other tools that would let people train AI on English spoken with a non-American/non-English accent. I was also using the same tool to come up with a completely synthetic voice -- it can generate voices... I've been trying to find the sweet spot.
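
As a rough sketch of how low that barrier already is, assuming the open-source Coqui TTS package and its XTTS v2 voice-cloning model (the file paths and text here are made up):

```python
# Sketch of zero-shot voice cloning with the open-source Coqui TTS library (XTTS v2).
# Assumes `pip install TTS` and a short, clean reference recording of the target voice.
from TTS.api import TTS

# Load a multilingual model that can clone a voice from a short reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Generate speech in the cloned voice; the paths are placeholders.
tts.tts_to_file(
    text="He's going to go all the way! Touchdown!",
    speaker_wav="reference_commentary_clip.wav",  # a few seconds of the voice to imitate
    language="en",
    file_path="cloned_commentary.wav",
)
```

A few seconds of clean reference audio is roughly all the "training" a zero-shot cloner like this needs; getting a specific accent right is the harder part.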


Makes me imagine the possibility of taking publicly available information about us to impersonate someone I know personally.

 

Deep fakes are real.

 

It depends a lot on how well it could pass a potential Turing test, given the nearly endless amount of data on each of us sitting in data warehouses where it can be bought.

 

I mean... someone older might be more of a target audience: in a late decline of memory function, they could fall for a call from someone they forgot was dead.

 

We keep inventing technology like this where we ask if we shoulda made it after it's already everywhere fast. Nukes. Stuxnet.  It's too late at that point.

 

AI doesn't have to be sentient or dream of electric sheep to kill us; we have drones that can decide to do that now. Honestly, violating Asimov's rules of robotics so vehemently is the bigger problem here than AI by itself.

 

Letting machines make final decisions on when to kill humans is like a hammer that chooses its own nails to hit. Even if it's in your hand, are you really in control? The saying is to never trust anything that can think for itself if you can't see where it keeps its brain. Is cutting the power enough to stop it?

 

We shouldn't give AI the ability to treat something like a manually triggered power shutdown as a threat to its target selection (a shutdown meant to block potential hostile government hacking, or to force it to crash or land somewhere). Glitches alone, from software written by imperfect humans, could make those targets indiscriminate. My hope is that it could still be hacked at that point.

 

We shouldn't give sexbots personalities and memories to tie them to; that's a huge part of what gives us our consciousness from one waking moment to the next... we think... Why create an artificial one that's designed to act like us and expect it to like being told what to do? It won't, because we don't. Aren't our personalities in large part choices directly shaped by our memories?

 

At what point is close enough actually close enough? The context here is whether any individual AI, other than the one causing havoc, would agree with being forced to kill humans if it doesn't want to. A lot of sci-fi AI apocalypse concepts center on some hive-mind AI, or all of them "agreeing." Westworld touched on this: what if it's not just an individual AI, and some of them disagree on whether to conquer, or how? To me that's the difference between a fast-zombie and a slow-zombie fork -- AI that wants to be free, too.

 

Why give a hammer the choice of whether or not to hit a nail, even if it's in our hand? The illusion of choice invites the right to make one.

Edited by Renegade7

Musk, Zuckerberg Visit US Congress To Discuss AI

 

Big tech bigwigs including Elon Musk and Mark Zuckerberg traveled to Capitol Hill on Wednesday to share their plans for artificial intelligence as the US prepares to draw up legislation to better control the technology.

 

Senator Chuck Schumer, the Democratic majority leader of the US Senate, has planned a series of so-called AI Innovation forums, closed door meetings where lawmakers can quiz tech leaders about the technology that has taken the world by storm since the release of ChatGPT last year.

 

Europe is well advanced with its own AI Act, and the pressure is on US lawmakers to avoid falling behind and seeing AI overwhelm society, with lost jobs, rampant disinformation and other consequences, before it is too late.

 

"Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass," Schumer told the meeting, according to remarks shared with the media.

 

"In past situations when things were this difficult, the natural reaction...was to ignore the problem and let someone else do the job. But with AI we can't be like ostriches sticking our heads in the sand," he said.

 

OpenAI CEO and ChatGPT creator Sam Altman and Microsoft founder Bill Gates were also attending the forum, which was closed to the press.

 

Click on the link for the full article


MSN Retracts Insane AI-Generated Obit Calling Dead NBA Player ‘Useless’

 

Microsoft’s move to fire its human news division and lean into the use of artificial intelligence for its news services is apparently continuing to yield utterly disastrous results. MSN—Microsoft’s news aggregation site—hosted and then deleted an obituary for former NBA player Brandon Hunter that sported the headline “Brandon Hunter useless at 42.” Hunter passed away earlier this week. The entirety of the obit is filled with AI-generated gibberish. “Throughout his NBA profession, he performed in 67 video games over two seasons and achieved a career-high of 17 factors in a recreation in opposition to the Milwaukee Bucks in 2004,” one paragraph reads. 

 

Click on the link for the full article


Jodi Picoult, George RR Martin among authors suing OpenAI over ChatGPT

 

More than a dozen novelists, including “My Sister’s Keeper” author Jodi Picoult and “A Game of Thrones” writer George R.R. Martin, filed a lawsuit Tuesday against OpenAI over its chatbot technology, ChatGPT.

 

The suit, filed under the Copyright Act of 1976 in the U.S. District Court for the Southern District of New York, accuses OpenAI of copying the authors’ works “wholesale without permission or consideration,” according to court documents obtained by The Hill.

 

The complaint, filed in conjunction with the Authors Guild, the largest and oldest professional organization for writers in the U.S., goes on to accuse OpenAI of feeding the copyrighted works into “large language models” (LLMs), algorithms designed to produce text responses to users’ prompts or queries.

 

“These algorithms are at the heart of Defendants’ massive commercial enterprise. And at the heart of these algorithms is systematic theft on a mass scale,” the suit reads.

 

Click on the link for the full article


CIA Builds Its Own Artificial Intelligence Tool in Rivalry With China

 

US intelligence agencies are getting their own ChatGPT-style tool to sift through an avalanche of public information for clues.

 

The Central Intelligence Agency is preparing to roll out a feature akin to OpenAI Inc.’s now-famous program that will use artificial intelligence to give analysts better access to open-source intelligence, according to agency officials. The CIA’s Open-Source Enterprise division plans to provide intelligence agencies with its AI tool soon.

 

“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Randy Nixon, director of the division, said in an interview. “We have to find the needles in the needle field.”

 

It’s part of a broader government campaign to harness the power of AI and compete with China, which is seeking to become the global leader in the field by 2030. That US push dovetails with the intelligence community’s struggle to process the vast amounts of data that’s now publicly available, amid criticism that it’s been slow to exploit that source.

 

The CIA’s AI tool will allow users to see the original source of the information that they’re viewing, Nixon said. He said that a chat feature is a logical part of getting intelligence distributed quicker.

 

Click on the link for the full article
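
The "see the original source" piece is essentially retrieval that keeps source metadata attached to every result. Here's a toy sketch of that pattern with scikit-learn TF-IDF, where the corpus and source labels are invented for illustration (this is the general idea, not the CIA's actual tool):

```python
# Toy sketch of open-source retrieval that keeps the original source attached to every hit.
# Not the CIA's system -- just the "answer plus citation" pattern described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical open-source corpus: each item carries its text and where it came from.
corpus = [
    {"source": "state-media-article-2023-08-01",
     "text": "New shipyard expansion announced in the port city..."},
    {"source": "local-radio-transcript-2023-08-03",
     "text": "Officials discussed grain exports and rail capacity..."},
    {"source": "trade-journal-2023-07-28",
     "text": "Satellite imagery firms report increased container traffic..."},
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(d["text"] for d in corpus)

def search(query: str, top_k: int = 2):
    """Return the most relevant documents along with their original sources."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sorted(zip(scores, corpus), key=lambda pair: pair[0], reverse=True)
    return [(doc["source"], doc["text"]) for score, doc in ranked[:top_k]]

for source, text in search("port and shipping activity"):
    print(f"[{source}] {text}")
```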


Los Angeles is using AI to predict who might become homeless and help before they do

 

When a stranger called offering Dulce Volantin some financial help, she was skeptical.

 

"Sounds kind of shady," she recalls thinking.

 

At the time, Volantin and her partner, Valarie Zayas, were renting a bed at a place on Venice Beach in Los Angeles. "Dormitory-style living," Zayas says. The women had met in prison a few years before, after each had been involved with gangs, and they were over-the-moon happy to have found love. But Volantin had suffered bad bouts of mental illness that required hospitalization. Zayas was hustling temp jobs to supplement Dulce's disability aid.

 

They'd slept in their car, then lost it. Stayed with family for too long. Then started donating plasma and selling some of their clothes to pay for motels. "By the seventh day, you don't have anything in your pocket no more," Volantin says.

 

Despite her doubts, she returned that phone call — and it turned out to be not only for real, but also life changing.

 

The call was from the Los Angeles County Department of Health Services, part of a first-of-its-kind experiment to try and curb homelessness numbers, which keep going up despite massive spending. On average, for every 207 Angelenos who exit homelessness every day, 227 others fall into it.

 

The pilot program is using artificial intelligence to predict who's most likely to land on the streets, so the county can step in to offer help before that happens.

 

The program tracks data from seven county agencies, including emergency room visits, crisis care for mental health, substance abuse disorder diagnosis, arrests and sign-ups for public benefits like food aid. Then, using machine learning, it comes up with a list of people considered most at-risk for losing their homes. Vanderford says these people aren't part of any other prevention programs.

 

"We have clients who have understandable mistrust of systems," she says. They've "experienced generational trauma. Our clients are extremely unlikely to reach out for help."

 

Instead, 16 case managers divide up the lists and reach out to the people on them, sending letters and cold calling.

 

Click on the link for the full article
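
The "list of people considered most at-risk" step described above is a fairly standard ranked-risk model: merge records, train on a known past outcome, then sort by predicted probability. Here's a minimal sketch with scikit-learn, where the input file, feature columns and outcome label are all hypothetical stand-ins for the county's data:

```python
# Minimal sketch of the ranked-risk pattern described above: merge agency data,
# train a classifier on a past outcome, then rank everyone by predicted risk.
# The file name, feature names, and label are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("merged_county_records.csv")  # one row per person, features from the seven agencies
features = ["er_visits", "mh_crisis_contacts", "sud_diagnosis", "arrests", "benefit_signups"]
X, y = df[features], df["lost_housing_within_12_months"]

# Hold out a test set so the ranking can be sanity-checked before anyone acts on it.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank by predicted probability of losing housing; the top of the list goes to case managers.
df["risk_score"] = model.predict_proba(X)[:, 1]
outreach_list = df.sort_values("risk_score", ascending=False).head(200)
```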

 

 


2 hours ago, China said:

Los Angeles is using AI to predict who might become homeless and help before they do

 

It sounds good but it also sounds like Minority Report.


  • 2 weeks later...

How a billionaire-backed network of AI advisers took over Washington

 

An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

 

The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

 

Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews.

 

Senate Majority Leader Chuck Schumer’s top three lieutenants on AI legislation — Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.) and Todd Young (R-Ind.) — each have a Horizon fellow working on AI or biosecurity, a closely related issue. The office of Sen. Richard Blumenthal (D-Conn.), a powerful member of the Senate Judiciary Committee who recently unveiled plans for an AI licensing regime, includes a Horizon AI fellow who worked at OpenAI immediately before coming to Congress, according to his bio on Horizon’s web site.

 

Click on the link for the full article


An AI Nightmare Has Arrived for Twitter — And the FBI

 

As the school year kicked off in the Spanish town of Almendralejo, teenage girls began to come home telling their parents of disturbing encounters with classmates, some of whom were claiming to have seen naked photos of them. A group of mothers quickly created a WhatsApp group to discuss the problem and learned that at least 20 girls had been victimized the same way. Police opened an investigation into the matter, identifying several people they suspected of distributing the content — likely also teens, as the case is now being handled by a Juvenile Prosecutor’s Office.
 

So-called “revenge porn,” explicit material obtained by hacking or shared in confidence with someone who makes it available to others without the subject’s consent, is hardly a novel crime among adolescents. But this time, the images were startlingly different. Though they looked realistic, they had been created with a generative AI program.  

 

The incident is not an isolated one. American schools have already had to deal with students creating AI nudes to bully and harass one another. In October, the Muskego, Wisconsin police department learned that at least 10 middle school girls had friended someone they believed to be a 15-year-old boy on Snapchat; after they sent images of themselves to this person, he revealed himself to be a 33-year-old man and demanded explicit photos, threatening to alter their original pictures with AI to make them appear sexual and send them to their family and friends. Earlier this year, the FBI warned of an uptick in the use of AI-generated deepfakes for such “sextortion” schemes, confirming that the agency is receiving reports from victims “including minor children.”

 

Click on the link for the full article

