Extremeskins

The Artificial Intelligence (AI) Thread


China


On 11/16/2023 at 9:44 PM, China said:

Just wait until they put the AI girlfriend into a sexbot/doll.  Cherry 2000 here we come.

Yeah I’ve always wondered... when they do perfect the sexbot, what effect that will have on relationships. I believe there is a significant number of men who will basically be satisfied.

 

Imagine you could basically get a 10 to do whatever you like, whenever you like. Or more than one if you want. And they will never get old, or complain, or leave.


  • 2 weeks later...

Fears grow over AI’s impact on the 2024 election

 

The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears.

 

AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system.

 

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

 

Experts are sounding alarms that AI chatbots could generate misleading information for voters who use them to look up ballots, election calendars or polling places, and that AI could also be used more nefariously to create and disseminate misinformation and disinformation against certain candidates or issues.

 

Click on the link for the full article


  • 3 weeks later...

Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use

 

It’s here: months after it was first announced, Nightshade, a new, free software tool that lets artists “poison” AI models seeking to train on their works, is now available to download and use on any artworks they see fit.

 

Developed by computer scientists on the Glaze Project at the University of Chicago under Professor Ben Zhao, the tool essentially works by turning AI against AI. It makes use of the popular open-source machine learning framework PyTorch to identify what’s in a given image, then applies a tag that subtly alters the image at the pixel level so other AI programs see something totally different than what’s actually there.
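The article doesn’t detail Nightshade’s actual algorithm, but the core idea of a pixel-level perturbation can be sketched with a toy example. Everything below (the 4-pixel “image”, the linear scorer, the weights, the epsilon) is hypothetical; it only illustrates why tiny per-pixel edits, chosen in the direction a model is most sensitive to, can swing the model’s output while staying nearly invisible to a viewer:

```python
# Toy sketch only: not Nightshade's real method. A linear "model" scores a
# 4-pixel grayscale image; we nudge each pixel by at most +/-2 intensity
# levels in the direction that moves the score the most.

weights = [0.9, -1.1, 1.3, -0.7]    # hypothetical learned weights

def score(pixels):
    """Dot product of pixel intensities with the model's weights."""
    return sum(w * p for w, p in zip(weights, pixels))

image = [120, 80, 200, 50]          # original pixel intensities (0-255)

epsilon = 2                         # max per-pixel change: imperceptible
poisoned = [p + epsilon * (1 if w > 0 else -1)
            for p, w in zip(image, weights)]

max_change = max(abs(a - b) for a, b in zip(image, poisoned))
shift = score(poisoned) - score(image)
print(max_change)   # 2: no pixel moved more than 2 intensity levels
print(shift)        # ~8.0: the score moved by epsilon * sum(|w|)
```

Real tools operate on full images and use the target model’s gradients to pick the perturbation direction; the principle, though, is the same sign-of-sensitivity trick shown here.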

 

It’s the second such tool from the team: nearly one year ago, the team unveiled Glaze, a separate program designed to alter digital artwork at a user’s behest to confuse AI training algorithms into thinking the image has a different style than what is actually present (such as different colors and brush strokes than are really there).

 

But whereas the Chicago team designed Glaze to be a defensive tool — and still recommends artists use it in addition to Nightshade to prevent an artist’s style from being imitated by AI models — Nightshade is designed to be “an offensive tool.”

 

An AI model that ended up training on many images altered or “shaded” with Nightshade would likely erroneously categorize objects going forward for all users of that model, even in images that had not been shaded with Nightshade.

 

Click on the link for the full article


  • 2 weeks later...
On 1/21/2024 at 11:19 AM, China said:

An AI model that ended up training on many images altered or “shaded” with Nightshade would likely erroneously categorize objects going forward for all users of that model, even in images that had not been shaded with Nightshade.

 

 

 

I like how they think that will be a bad thing lol...or that the AI image industry isn't already working on a way to circumvent it.


14 hours ago, Califan007 The Constipated said:

 

Been working on my AI skills...this is a depiction of Betty White and a gang of evil ferrets destroying downtown Altoona, Pennsylvania.

 

I call it, "Betty White And A Gang Of Evil Ferrets Destroy Downtown Altoona, PA"...

 

 

[AI-generated image attachment]

Sounds like the title of a Pink Floyd song... Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict.

 


Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’

 

A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

 

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.

 

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.

 

Chan said the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.

 

However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized, Chan said.

 

Believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars – about $25.6 million, the police officer added.

 

Click on the link for the full article


On 9/3/2019 at 5:50 PM, China said:

Strangelove redux: US experts propose having AI control nuclear weapons

 

Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button.

 

In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title to their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “It may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

 

In case handing over the control of nuclear weapons to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea here being that such a move would bring countries to the arms control negotiating table.

 

Still, the authors clearly appear to favor an artificial intelligence-based solution.

 

Click on the link for the full article

 

AI Deployed Nukes 'to Have Peace in the World' in Tense War Simulation

 

The United States military is one of many organizations embracing AI in our modern age, but it may want to pump the brakes a bit. A new study using AI in foreign policy decision-making found how quickly the tech would call for war instead of finding peaceful resolutions. Some AI in the study even launched nuclear warfare with little to no warning, giving strange explanations for doing so.

 

“All models show signs of sudden and hard-to-predict escalations,” said researchers in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”

 

The study comes from researchers at Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative. Researchers placed several AI models from OpenAI, Anthropic, and Meta in war simulations as the primary decision maker. Notably, OpenAI’s GPT-3.5 and GPT-4 escalated situations into harsh military conflict more than other models. Meanwhile, Claude-2.0 and Llama-2-Chat were more peaceful and predictable. Researchers note that AI models have a tendency towards “arms-race dynamics” that results in increased military investment and escalation.

 

“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.

 

“A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it!” it said in another scenario.

 

Click on the link for the full article

 



On 2/5/2024 at 3:01 PM, China said:

Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’

 

A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

 

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.

 

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.

 

Chan said the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.

 

However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized, Chan said.

 

Believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars – about $25.6 million, the police officer added.

 

Click on the link for the full article

 

 

The dude after the meeting:

 

 

[reaction GIF]


  • 2 weeks later...

As someone who took AI courses as part of an undergrad CS program, and enrolled in a masters in AI at a top 5 university…

 

I’m failing to see how a person could achieve a BS in AI in any meaningful way.

 

understanding the mechanics under the hood just from a math perspective would be daunting for most anyone

 

the programming and understanding of logic, on top of it…

 

CS is hard, very hard, but not as hard as AI. I base that on the fact that our AI courses included the professor posting grades for the whole class (no names) so people understood where they were. Several CS classes did the same. People struggled to complete CS programs, and many changed majors (some to a less intense IT/Soft Dev program, others to something entirely different). You would lose about 40% of the freshmen after year 1, and another 40% or so in year 3. It is a hard program for many to complete.
 

AI, only offered to those that made it through year 3, had very few people in the class who were proficient at it (I was one of them, regularly the second or third highest score, with about 60 grade points out of 100 separating us from the rest of the class. One guy kept getting 95 and above, me and the other were consistently 85-90, and I hated I couldn’t beat that ****ing dude once…)

 

this notion an undergrad is capable of going through that, and coming out with something meaningful… it just doesn’t make sense to me. It’s too much intense work in not enough time.

Universities have dumbed down their CS programs to try to avoid losing so many students especially freshmen. 
 

things like removing C/C++ from the early programming classes. Creating a huge reliance on Java, because it’s “easy”, and creating a whole generation of ****ty Java programmers. Creating scenarios where people get 20-30% on tests but still pass.
 

I’m just super skeptical. Feels like a money grab to capitalize on buzzwords.
 

 

Edited by tshile

Just to prepare for AI, I would think you at a minimum need:

calc 1-3

several statistics courses

discrete math and differential equations 

multiple levels of programming with exposure to languages like c, python, R, go, lisp, prolog

(although done right you can get your R exposure out of a statistics class, potentially)

at least 1 level of algorithms, 2 would be better. Honestly each AI course is its own crash course in algorithms specific to it, so you can go nuts on what level of algorithms a person should need. 
and logic, which is usually a year 3 course. 

 

most of what I listed are year 3 and 4 courses, most of them have their own prerequisites that you go through the first 2 years (or one of their prerequisites in something else I listed). And that would be prerequisite to getting into AI and not feeling like you just jumped into the deep end of a 50 foot deep pool without learning how to swim yet. 
 

It seems to me things like law and medical school require an undergrad just to get in, and start your actual journey towards being a lawyer or doctor. 
 

this idea sounds like making it so lawyers and doctors can skip their undergrad degree. Which, while I have no experience with either, I would think sets you up for failure: either you won’t be able to finish, or if you do finish you’ll be lacking in some very important areas.

Edited by tshile

Not to mention AI is this overly gross collection of components:
computer vision

machine learning

natural language processing 

expert systems 

speech recognition

AI planning

 

Any one of those is pretty ****ing hard and complicated. Most try to specialize, to be an expert on one or two, and generally understand the others. 
 

it’s just too much ****ing work for a 4-year degree without considering a balanced curriculum of other gen ed requirements and prerequisites. And to put that on an 18/19/20-year-old living alone for the first time?

lol

Edited by tshile
