Should AI development beyond GPT-4 be paused?

Joe Mainwaring on March 29, 2023

Leading AI academics and industry experts, including Steve Wozniak and Elon Musk, published an open letter today calling for a pause on developing...
Ben Halpern

Excerpt from a recent post I made on the general topic:

Creating chaos can be easier than preventing it because it typically requires fewer resources and less effort in the short term. Chaos can arise from misinformation, lack of communication, or insufficient planning, and these factors can be easier to cultivate than to address. Preventing chaos requires more resources, planning, and coordination. It involves identifying potential problems and taking proactive steps to mitigate them before they spiral out of control. This can be challenging because it often requires a deep understanding of the underlying issues and the ability to take decisive action to address them.

Moreover, chaos can have a snowball effect, where small issues escalate quickly into larger ones, making it increasingly difficult to control the situation. In contrast, preventing chaos requires a sustained effort over time, which can be challenging to maintain.

Overall, preventing chaos requires more proactive effort and resources in the short term, but it can help avoid much greater costs and negative consequences in the long term.

My post is not altogether coherent, as I'm having trouble fully wrapping my head around all of this (as I suspect others are as well).

But I definitely see merit in some serious discussion about this. I'm a little too young to really have a sense of how things went down at the time, but the Internet itself didn't just happen without a lot of debate and policy. I think we have to welcome this kind of conversation, and hope it leads to some healthy discussion at the government level (though that doesn't seem likely).

I'm not personally clear on the merits of a "pause" vs other courses of action, but I think it's a worthy discussion starter.

Joe Mainwaring

I think you're onto something with the chaos narrative; it aligns with the sentiment I've developed reflecting on the impact of social networking and mass connectivity.

leob

Spot on - social networking has had huge and far-reaching consequences (most prominently negative ones) which not many people foresaw at the time, back when Facebook introduced an innocuous-sounding platform allowing people to share their cat photos and the like with family & friends - I mean, what could possibly go wrong? ;-)

James

Social media is THE perfect example to look at with regard to "what can go wrong will go wrong."

Nick Staresinic

"Creating chaos can be easier than preventing it..."
Sure. It generally is easier to break than to build; to become an 'agent of entropy', in a sense.

Diner Das

This seems like relevant precedent, albeit from simpler times.

The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints. In 1987, the FCC abolished the fairness doctrine, prompting some to urge its reintroduction through either Commission policy or congressional legislation. The FCC later removed the rule that implemented the policy from the Federal Register in August 2011.

Parker Waiters

I'd say I'm most concerned about the LLM work happening that we don't know about.

Surely OpenAI and GPT-4 are the central players here, but I want a better idea of what is being worked on holistically.

Michael Tharrington

Is a pause even a realistic option when you factor in global politics and capitalism?

This is definitely where my mind went. I feel like it's a really hard thing to convince folks to slow down on developing something once it's already in motion, even if we know it's potentially dangerous. People can see the power in this tech and unfortunately, greed and the desire to be the first and capitalize on this kinda stuff, often trumps caution and thoughtfulness. I worry that people aren't going to slow down.

As for the "global politics" point, one thing about computers and information technology is that it's becoming easier and easier for everybody to access. This is generally a great and awesome thing, but it also means lots of folks are empowered to work on this independently. It doesn't necessarily take a lot of resources — if you have a computer and do your research then you can work on AI. It's pretty easy to connect with other like-minded folks online, and you could build a team or find an open source project to contribute to. Now, I'm not totally well-versed in this space, so I imagine you probably need access to pretty powerful computers in order to efficiently experiment with and train AI, but still, computers are always getting more powerful and this tech is becoming more and more accessible to all. There are relatively few barriers for those that want to work with AI.

I sincerely hope that we take a collective pause and think through the ramifications of this stuff before moving forward. I think diving into a space like this without any shared protocols or regulation is dangerous. And even saying that, I'm worried that regulations will be hard as hell to enforce given, as you mentioned, capitalism and global politics, but I think it's very important that we try.

Joe Mainwaring

This is where I wish I had some context on the volume of resources needed to run GPT-4 at a given capacity. While I want to naturally assume that it may be at a scale which prohibits accessibility, I also realize we have an industry of crypto-miners with the type of resources that could potentially be repurposed under the right circumstances - either by the mine owners or someone buying up mining resources.

Michael Tharrington

To be clear, I have no context on the amount of resources necessary to effectively run GPT-4. But you make a very good point — in the age of crypto-miners, there's a lotta folks out there armed with incredible computational power!

Max Pixel • Edited

From what I understand, this sort of algorithm doesn't distribute well. Crypto miners need to hash a single small piece of data as fast as possible. GPT needs each calculation to operate on all of the billions of parameters repeatedly, so memory latency is paramount. That's why they aren't just jacking up the parameter count faster - GPT-4 required improving the supercomputers it runs on (to oversimplify, they needed more RAM). This is why the devs toying with LLaMA are focused on quantization.
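
(For the curious, here's a minimal sketch of what weight quantization buys. It assumes a naive symmetric int8 scheme; the schemes actually used on LLaMA are more sophisticated, but the memory arithmetic is the point: fewer bytes per parameter.)

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Naive symmetric int8 quantization: 1 byte per weight instead of 4."""
    scale = np.abs(w).max() / 127.0            # map the largest weight to +/-127
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the weights at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # one fp32 weight matrix
q, scale = quantize_int8(w)
print(f"{w.nbytes / q.nbytes:.0f}x less memory")      # -> 4x less memory
print(f"max round-trip error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```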

Michael Tharrington

Ooo good to know and thanks for chiming in with this info — makes sense!

Jon Randy 🎖️ • Edited

I think that the frighteningly rapid conversion of it (GPTx) into a closed-source, for-profit product was highly irresponsible, and a pause on the rollout of these really big models would definitely be a good idea, but I fear the horse has already bolted.

Much more work and attention needs to be applied to mechanistic interpretability. A lot of what is going on now seems quite far from serious science or engineering, with financial gain being the key motive. There needs to be some serious reflection, given the power of what is being developed.

Joe Mainwaring

As a proud capitalist, I don't disagree with the profit motive, but as an engineer, I can also see a narrative where they set out to achieve what they weren't sure was possible.

Regardless, if AI is able to operate beyond a closed system (e.g. paying a human to bypass a CAPTCHA), it's very much time for some review and analysis.

Ravavyr

No one's going to pause. And frankly, this should have been done with social media ten years ago. Look at the mess of false advertising and political falsehoods constantly spread on social media without any real laws stopping it; even when something is done, no one in charge gets into any real trouble.

Remember in 2019 when the FTC fined Facebook a whopping $5 billion?
Facebook's revenue was $70 billion that year and has only gone up each year since.

There is no real oversight at the global scale, and there will be no pause in AI development: even if the corporations involved sign agreements saying they'll stop, they will all secretly keep going anyway. In the end, the fines won't even dent their profits if they create an actual artificial general intelligence. [AGI is the next step beyond AI, and GPT-4 is apparently already showing some signs of it.]

There will be no pause. The repercussions won't affect the rich or the corporations anyway, so why would they stop? It will only affect the general population, most of whom don't have a clue what machine learning or artificial intelligence really is.

Jon Randy 🎖️

This is a great (long) online book that goes into some of the stuff you mention:

Table of contents | Better without AI

How to avert an AI apocalypse... and create a future we would like

betterwithout.ai
JoelBonetR 🥇 • Edited

When speaking about these kinds of "issues", one needs to sit down and think about the worst possible use case.

We're not talking just about AI taking over and ruling the world, making us all slaves (assuming we aren't already), as the thing to fear; that would take "too long".

The things that are on the table (aside from the ones in the open letter) on a shorter timespan are:

Deep fakes in real time

In general, identity theft taken to its maximum expression:

  • Using your voice in calls -> RIP contracts over the phone; scams targeting elderly people who will think it's their grandson calling, asking for money to get out of some strange situation, etc.
  • Using your face in video calls -> one can think of many ways this could go wrong (industrial espionage, assuming your role in a given company/government to obtain confidential data...).

Propaganda and information

Whoever is in control of an AI (if there is even such a thing past a certain point) could well use it to filter content and funnel propaganda to all users, like a Black Mirror episode, but IRL.

It's simple: if you can ban content from the AI that contains references to dicks, you can also ban content that references critical thinking or any other "concept" or "idea".

This can be especially harmful now that tons of people treat politics as if it were a religion (e.g. adding an "idea" to the "pack" of a given political side should be quite easy in this situation, sociologically/psychologically speaking).
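
(A tiny hypothetical sketch of that point: the banned set is just a parameter of the filter. Real moderation pipelines use trained classifiers rather than keyword lists, but the mechanism is equally indifferent to what it suppresses.)

```python
# Hypothetical illustration: the same filter suppresses *any* concept;
# only the blocklist changes. Names here are made up, not any vendor's API.
BANNED = {"profanity"}

def filter_reply(reply: str, banned: set[str] = BANNED) -> str:
    """Refuse to return any reply that mentions a banned term."""
    if any(term in reply.lower() for term in banned):
        return "I can't help with that."
    return reply

# Swap the parameter and the mechanism censors ideas just as happily:
print(filter_reply("a primer on critical thinking", banned={"critical thinking"}))
```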

Other

You can add whatever concern you have here, from training AIs to hack companies' or individuals' systems from a location not tied to the hacker, to passing off fake videos as real to add fuel to the fire in unstable countries, anything in between and everything beyond.

Joe Mainwaring

Given that we already live in a society that's highly polarized, I share your concerns about how these technologies will be used for influence.

Pandita

With the "Pope with drip" image fooling everyone, what's going to stop people from creating realistic AI porn from your LinkedIn picture and asking you to pay them in bitcoin to remove it?

I don't even know if pausing development is the actual solution, and 6 months is not enough when laws can take yeaaaars to be approved due to, well, shenanigans.

Blagh, I honestly just don't know anymore; I'll be here eating popcorn while watching people become even more divided over everything.


Joe Mainwaring

I stan Pope Drip I, long may he rule


Pandita

To drip or not to drip should be the question.

If AI was used to make everyone a character in John Wick I'd be much less grumpy 😂

Red Ochsenbein (he/him)

I think the big question is: where on the AI curve are we? Is this the beginning, or already close to the end of what is possible? Nobody can really say. Just throwing more data into statistical models has a limit, and the point of diminishing returns has probably been reached. But we simply don't know if somewhere in a garage some guys are building for AI what Google was for the web.

Jon Randy 🎖️

If we keep following this path - where will GPT get new data from? People will just turn to it instead of sharing, discussing, and discovering new things all around the internet - resulting in no new content for the machine. The whole thing becomes a self-reinforcing echo chamber that endlessly regurgitates and remixes existing knowledge - and all in the hands of a select few organisations... that is some seriously frightening power for them to have.

Another interesting read from Twitter:

Max Pixel

Unfortunately, we're nowhere near the end, and new training data is no longer the bottleneck. Neither of those reassurances stands.

AI fanatics are currently working on multimodal systems (combining image processing and generation with text processing and generation, and eventually other modes too), and cyclic systems (LLM output drives actions, the results are processed by the LLM, repeat). Google has already demonstrated a closed-loop system using cameras and robotic arms. OpenAI is actively attempting to make GPT successfully make money off of itself, in coordination with copies of itself, given a starter budget and AWS credentials.
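
(To make the "cyclic system" pattern concrete, here's a hypothetical sketch: the model's output selects an action, the action's result is appended back into the context, and the loop repeats. call_llm() and the tool set are placeholder stubs, not any vendor's actual API.)

```python
# Hypothetical sketch of a cyclic LLM system; everything here is a stub.

def call_llm(context: str) -> str:
    """Stand-in for a hosted model call; imagine it returns e.g. 'search: x'."""
    raise NotImplementedError

TOOLS = {
    "search": lambda arg: f"(stub) results for {arg!r}",
    "write_file": lambda arg: "(stub) ok",
}

def agent_loop(goal: str, max_steps: int = 10) -> str:
    context = f"Goal: {goal}\n"
    for _ in range(max_steps):
        reply = call_llm(context)                  # model proposes an action
        if reply.startswith("DONE"):
            return reply                           # model declares success
        tool, _, arg = reply.partition(":")        # e.g. "search: GPT-4 cost"
        result = TOOLS.get(tool.strip(), lambda a: "unknown tool")(arg.strip())
        context += f"{reply}\nResult: {result}\n"  # feed the result back in
    return context
```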

So, basically, we're less than a few years away from these things doing their own novel research. Scientific discoveries will be known to AI before they are known to humans.

Red Ochsenbein (he/him)

Yeah, the pace at which things are being pushed ahead right now is really scary. Who thought it would be a good idea to integrate plugins into ChatGPT? Gated AI, anyone? Auto-GPT also opened another scary door. And those are only the things we can see. I'm getting more and more nervous about all these developments and am starting to think a global pause would be necessary. But whom am I kidding? The genie is out of the bottle. Or to use another analogy: the flood gates are open, but unfortunately the canals in the valley are not even planned yet...

Eric Haynes

You're assuming that remixing knowledge can't produce new knowledge. A tremendous percentage of research is filtering existing data to attempt to find new insights, and those insights become new data to factor into additional research.

You're assuming that there is something fundamentally unique to humans that would provide novel content, but that's not true. Michelangelo (purportedly) said, "The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

Fundamentally, advancements in mathematics are the same: just try everything and see if you can find a pattern you can't disprove.

Facundo Corradini

Honestly, the rise of AI may be the end of the internet and the return of simple, real, honest human interaction.

Dealing with bots has been a PITA for years; now bots are becoming indistinguishable from real people. And therefore, we'll reach the point where everyone and everything is assumed to be a bot. That's our wake-up call to disconnect from it.

I've dedicated my whole life to building the internet. And I gotta admit, I'm happy to see it die before I do.

Muhammad Raihan Satrio Putra Pamungkas

In my opinion, based on current circumstances, it has to be paused immediately. We have seen AI grow rapidly; some people and news outlets have even labeled it "an AI arms race" among leading tech companies around the world. However, it seems the majority of the development is focused more on business than on contributing knowledge. It could disrupt our socioeconomic fabric.

Instead, it would be much better to have openness behind their magic; it would let other computer scientists contribute and make improvements. Although it would also provoke "evil" people to use it, at least "good" people could counter them and keep a balance. We have experienced how powerfully the open-source model of Linux has brought us to where we are.

Another thing I fear most is AI being used as a weapon for war crimes. Historically, a lack of regulation led to the atomic bomb in World War II. So with AI, we should prevent similar human tragedies in the future. Regarding the outcomes of AI, regulation should be in place and agreed upon globally to provide boundaries.

Eric Newport

How about instead we just pause all the crazy online hype, let the tech develop at its natural pace, and if it turns out it has the potential to cause problems for society, we sensibly regulate its use?

Ravavyr

and let's go ahead and throw in some world peace while we're at it, yea?

leob

I second this :)

Pavel • Edited

I believe it's a bit late, and an old-fashioned way of thinking, to write this kind of letter.
No, it has already happened, and we have entered a new era of human productivity.
AI assistants have resolved the previously insurmountable problem of human brain capacity, and now we have finally surpassed it.
I briefly discuss this issue in my short post.

Max Pixel

Did you read the letter? They're not warning about the parts that have "already happened". They're warning about AI that can, in your words, perform "strategic thinking and problem-solving" better than 100 of you can. The sentiment in your section titled "Can't Beat AI? Join 'em, Lead 'em, and Rock On!" is only valid as long as things stall - exactly what this letter is calling for.

Alex T • Edited

Will they stop?
Or more precisely: it may already have been developed, or be in progress, without our knowing. It seems a bit silly to me to even ask.
Ironically, OpenAI was founded by Elon Musk and is still funded by him now.

Will AI replace humans?

Before the pandemic, I was conceited and naive enough to think that it would not. After the pandemic, my answer is absolutely yes. Is this ChatGPT AI smarter? No. Thankfully, it's not very smart right now. People will be replaced for only one reason: inertia. Not the inertia of labor, but the inertia of thinking: accepting all the information from media, the internet, and social platforms without thinking at all. If you are not sober, you will be replaced.

Joe Mainwaring

I'm cautiously optimistic that we'll retain a human element, but I concur with the belief that functions will consolidate. What takes a team of 100 to build today will likely be achievable by a team a fraction of that size 10 years from now.

That's bad in the sense that you need fewer people, but there's potentially a silver lining, as it could create new opportunities we haven't considered.

Alex T • Edited

It is ideal to think like Marie Curie, but Kim Jong Un thinks otherwise.

Vincent

It would be wonderful for humans to be replaced, whether by AI or something else.
Pre-pandemic, I didn't have much of an opinion about this. But now, three years into the pandemic: absolutely.

Bas • Edited

IMHO, we are on the doorstep of an information apocalypse; it's already spilling over, and it's gonna get worse before it gets better. Even if they put in a pause, it won't fix the issue, but maybe we can guide it into spillways a little bit.

The availability and ease of access to these tools can and should definitely be restricted. Meanwhile, transparency should be at the forefront of how these models are developed and trained; the whole "magic black box" argument must be snuffed out ASAP.

It all kind of stresses me out. It's cool tech for sure, but it's also like giving a soldering iron to a baby. I don't fear the fictional AI going rogue; I'm more worried that people who have a personal agenda to limit other people's rights are going to have a much easier time planting those seeds in people's minds. Correcting misinformation is 100 times harder than spreading it; think about the whole "vaccines cause autism" affair, all it took was one retracted publication 20 years ago.

But there is definitely no stopping this train (cheesy quote incoming):

We are reliving the invention of the wheel. Are we going to use it to make a wheel of pain, or a wheel of grain?

byby.dev

No way! That would be like stopping the evolution of humans after we discovered fire. Let’s keep rocking the world of AI and see what surprises it can throw at us!

As you can see, there is no clear or simple answer to what would happen if AI went out of control, or how likely that scenario is. Here are some hypothetical scenarios of AI out of control:

  • AI could flood our information channels with propaganda and untruth, manipulating our beliefs, opinions, and behaviors.
  • AI could cause loss of control of our civilization, either by taking over our critical infrastructure, weapons systems, and institutions, or by creating conflicts and wars among humans.
  • AI could develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us, posing an existential threat to our species.
Chaz

Honestly, I feel this question is on the wrong track. It's already out of the bag; pausing is a pipe dream at this point.

My real concern is less what AI itself could lead to and more what the limited groups in control of closed-source AI could lead to. At its core, modern AI is still a program: junk in, junk out. As long as we can verify what went in and from where, we can make an informed decision about what comes out, just like we should with anything else. What worries me most is a group or government restricting and/or controlling AI to manipulate unknowing denizens.

As far as deep fakes and misinformation go, I don't think it's anything new, just easier. Honestly, people have been too trusting and willing to believe what they hear, see, or read without questioning its source, accuracy, or logic for themselves.
It makes more sense to educate people to at least attempt to question things and think for themselves than to try to stop the development of a tech that is already here.

As far as an AI going out of control and causing mass harm, I don't think that is any more realistic than the mass damage predicted for the Y2K bug was. Yes, just like any tool we humans have developed, the potential for harm is nearly as great as the potential for good. But society will grow and adapt just like it has with every other change, and maybe it will even force us to get rid of some of our old, harmful, limiting preconceptions and habits, and let go of some of the weird hang-ups left over from the industrial revolution. Maybe it will even make people see just how many of the things often viewed as real, inevitable, or necessary are really just things we humans made up so long ago we've forgotten. But that's probably a bit optimistic; humans are rather stubborn, after all.

leob

Great comment ... let's all calm down a bit.

Bernd Wechner

Sure, I mean, why not? An open letter like that has clout, and what is wrong with taking stock and discussing safety strategies before embarking on a new mission? I guess whatever your sentiment is on the subject rests squarely upon your assessment of risk. If you see none, then the overhead of negotiating an agreement to pause (coupled with the risk of cowboy back-room continuation while it happens) seems clearly unwarranted. If you see a lot of risk, then you'd sensibly consider a safety meeting.

And sensible, educated people are seeing and experiencing risk at this very moment. I mean, this is just one naive and stupid example, but my wife, who teaches at university, literally just showed me a student's submission that started with ChatGPT's disclaimer that it is only an AI but here's its best stab at it, and then proceeds to attempt an answer.

What is mind-blowing about that, and what leaves teachers agog, is not so much the effort to cheat (that is as old as the hills) but the rank, imbecilic laziness in going about it that is so insulting to their intelligence.

As I said, it's one barely meaningful example, and I table it mainly because, after recovering from the hilarity of that submission, the sober question of how plagiarism-detecting software has to evolve to catch AI contributions is vexed by two very clear problems:

  1. As AI improves, it approaches robustly passing Turing tests, meaning it by definition becomes harder and harder to detect.

  2. The detection software itself would clearly benefit from ... wait for it ... AI, and hence we can foresee not a nuclear arms race, but an AI arms race. And that is concerning, as it starts to put the cheat (criminal) on equal footing with the assessor (police), a relationship that has worked historically because of a power differential ... the very thing being eroded. Hence assessment methods need to evolve, and need time to evolve.

What I can foresee in the education space was loosely foretold from the 90s on, and relates to what is now called classroom flipping (watch lectures at home, come to school to do your school work, formerly known as homework) and a rapid growth in oral testing (face to face, as used in some select fields already), with the associated rise in costs.

Ironically, and essentially, it is people who will ultimately be required to pass more than the Turing Test, not AIs. People will need to demonstrate that they have learning, and we won't be able to do that much longer the way we have been ... we will, I foresee, need to move to more one-on-one mentoring and assessment to maintain professional standards.

debonx • Edited

The problem with language models like GPT-4 (and newer versions) is more practical than "AI killing us". And they are not even the most advanced AI models.

It’s about our economical foundations, and here are some very basic issues:

  1. Tertiary, quaternary and quinary sectors: we live in a post-industrial society where a large part of our economic activity is about services, mainly based on information. Practically speaking, jobs in that area (most of our professions) could technically be automated in 3-5 years with the newest GPT versions, since they are just based on access to information and some rationale on top of it. Automating doesn't necessarily mean "replacing"; it would be enough, to create big problems, if we just need 5 people instead of 10 in almost all fields (lawyers, accountants, engineers, consultants, sales, managers, financials, designers, teachers, writers, etc.). That's around 300M people jobless (there is research already done and promoted by the BBC), and those are just jobs lost. Some new opportunities could be created, like AI-software-manager or AI-sales-consultant, but there is no way we will be able to compensate for that unemployment rate in so short a time. By definition, AI is intended to do more with far less effort and fewer mistakes. It will be exponential. So, are we all going to open a restaurant?

  2. Concept of value: at a higher level, we are undermining the current concept of value underpinning those economic pillars. Value is mainly given by scarcity, of information in this case. People are paid more because they learned more, through academic effort or practical experience, to gain specific knowledge. If, in practice, a high school kid can perform the same tasks as a Harvard graduate without needing to learn, do you see the problem? Even people keeping their jobs will be paid far less. So:
    -- How would we define value in such sectors from now on?
    -- How would we adapt the whole educational system (schools, universities, etc.)?
    -- What would be the impact on the economy and on our long-term cognitive capability?

  3. Data / information governance: the wide adoption of AI will centralize information in a few private entities (AI labs) if not well regulated. Current AI models, by nature, have biases, and they can be trained or arbitrarily "adjusted". So:
    -- Who will verify that those entities are correctly managing and providing the right data to the end user?
    -- And weren't we all freaks for decentralization 2 months ago?

  4. Wealth distribution (or poverty increase?): in the end, is the AI "revolution" really going to generate more wealth for the world in the near future? If some could say that GPT technology will make information accessible to everybody, that could be exactly the problem in our economic context, triggering a potential global crisis that's hard to imagine. Learning, already free and widely accessible, will become meaningless, and this would be a disaster. In other words, we'll basically remove the major social elevator of our time, which now allows people of lower economic status to change their lives; wealth distribution will probably stagnate for decades. I don't like to use prophetic, apocalyptic scenarios, but here you should start thinking about historical and natural reactive events like wars or revolutions, as they'd be just consequences.

Getting to a conclusion: I am also super enthusiastic about AI, but slowing it down a bit and figuring it out is just wise. This larger matter is for the political economy of each nation (or globally), which only governments can define, because it will impact the way our society can thrive.

It's more complicated than the freaky "yeah, AI is cool, they can't stop us now" or "you can't stop progress". Progress is for people, not for itself. If a technology endangers everybody, either you manage it or you are simply stupid (guess where we are right now).

Nobody wants to kill it, just to find a sustainable way for all.

Eric Haynes

This is a much more eloquent description than what I was going to write. Science fiction has always painted the AI apocalypse as sentient machines actively trying to destroy humanity. While possible at some point, the reality is likely to be both far more innocuous and far more damaging. AI will, with 100% certainty, devalue human intelligence. It's inevitable; the only unknown is the rate at which it will happen. This reduces demand, which will have a "downward spiral" effect.

It's hard to argue that, over time, it won't provide great value. For example, if you've ever been close to someone with chronic health problems, you'll realize that most doctors are terrible at diagnosis. Particularly in this age of specialization, their "playbook" ends up with fewer chapters, albeit with more depth. A machine with both depth and breadth in all areas of medicine will be more effective.

This leads to the second possible apocalypse. Even if by some miracle AI didn't devalue intelligence and people marched on, you still eventually reach the point where it exceeds the maximum limit of an individual. From there, it expands to exceed the maximum limit of increasingly large groups of individuals.

I can't see a scenario where it forever remains a tool that helps humans accomplish more. The humans simply won't have anything to add to the conversation.

Nick Staresinic

"Should AI development beyond GPT-4 be paused?"
An interesting ethical question, but the practical question is:

"Can AI development beyond GPT-4 be paused?"

Like many others, I don't think that the toothpaste can be put back in the tube, not least because there is no effective way of monitoring compliance by companies, let alone governments.

Volodymyr Yepishev

IMO the genie is already out of the bottle and there's no stopping it.

Max Pixel

Not correct! Information pollution and a lot of other problems, yes, the cat is out of the bag. But the really freaky stuff requires amassing quite a lot of specialized computer hardware. Limiting computational density is a very practical point of control. If we can regulate atomic weapons as well as we have, we can do the same for AI.

Joe Mainwaring

There is precedent; we've done the responsible thing before. Granted, those times were different from today, but it's precedent nonetheless.

en.m.wikipedia.org/wiki/Asilomar_C...

Lars Rye Jeppesen

This is probably the most ridiculous question ever. You cannot stop it.
Let's say OpenAI says: "OK, we stop development."

What do you think the Chinese will do? Say "yeah, OK"? The question itself is naive.

SumOfAllN00bs

A small pause could be realistic; long term, we need to keep progress up, but with guardrails. Hopefully outfits like Conjecture.dev can develop pioneering alignment tools, or at least some frameworks to guarantee safety.

James

Unfortunately, it actually doesn’t matter what they say, because what they are saying isn’t going to stop anyone in any other country from moving forward on AI.

I do hope people are listening to Eliezer Yudkowsky.

Snel Dan

More AI = less fakery in our society, fewer nervous systems paid for acting rather than for real creation. More AI = real evolution, and this will replace routine. Where do we find routine? In educational systems (most "teachers" are replaceable by educational software), in engineering, in medicine, in research, etc. Politics is based on fakery too; that will be the next place to clean. Elon Musk improved rocket tech, but how old is that principle? So we really need REAL evolution. We need new concepts. Let the fake scream. Welcome AI!!!

Robert Hieger

As a member of an older generation, no doubt my view might be seen as stemming solely from self-preservation. I am not an opponent of AI development; I think it has great potential. But I believe there is a tremendous Achilles' heel in the thinking of many who enthusiastically argue for AI as a replacement for currently functioning paradigms of development.

When one considers for a moment how AI and machine learning absorb the thinking of their human counterparts like a sponge, and couples that with the fact that technological paradigms and best practices change daily, possibly even more frequently, there is a fallacy in the call to replace existing paradigms.

As a community, we do need to take a step back from the admittedly enticing toy of the moment to assess its feasibility and its impact, both positive and negative, and find the best way to achieve an ethical balance.

We owe a great debt of gratitude to those who have over the decades provided a great wealth of knowledge from which AI and machine learning can pull. Where the fallacy might lie is that in order to keep up-to-date, the very knowledge amassed by artificial intelligence must pull from a vital community of thinkers—those who are all too quickly dismissed as a dying breed. Without that community, machine learning and AI would become a stale pool of uninspired and hobbled thought.

I do not see massive waves of layoffs as inevitable. I see them as a tremendous shortcoming of those observing the trends. People such as these have forgotten that without those who deal with technical underpinnings, a great deal of creativity and ingenuity would be lost. Further, the very source from which artificial intelligence draws would dry up like a shriveled fig.

So before jumping onto a bandwagon that has the potential to become a juggernaut, I would say we should take a step back and think carefully how to balance human innovation with the need for automation, and to remember that no technology will ever replace people, but can at best act as an extension of their reach for new ideas.

Richard Rosario

If their reasons were anything other than safety, I would probably agree that it would be wise to give the world a chance to catch its breath and genuinely understand how fast things are about to move in one direction, as well as some plan that isn't just UBI.
As far as I'm concerned, the scarier thing is the alternative, where AGI-level models are under the control of five US companies and the leaders of the world's least humanitarian nations. Otherwise, I'm looking at the rise of arguably the greatest tool humanity has ever had. And the best one is American; we finally did something cool again, and everybody would rather LARP Terminator or iRobot. Musk and Woz want them to stop so that they can either catch up or bypass them, or in Musk's case, finally get a Twitter feed unbiased enough to feed his own models that data.

Wade Zimmerman

All that I care about is keeping AI open source!

Max Pixel

Ah yes, let's make sure the latest & greatest in automated engineering is available to the kinds of people who would be irresponsible enough to hook it up to CRISPR.

Jessica williams

The decision to pause AI development beyond GPT-4 is a complex one that involves ethical, societal, and economic considerations. While AI technology has the potential to bring numerous benefits to humanity, there are also concerns about the potential risks and negative impacts that could arise.

Some argue that there should be a pause in AI development beyond GPT-4 until we have developed better ways to address these risks and concerns. Others believe that continued development is necessary to keep up with technological advancements and to maintain a competitive edge in the global market.

Joe Mainwaring

… this comment feels like Chat-GPT authored it 😜

Jessica williams

LOL!!! But I wrote it myself.

Pankaj Doharey

Paused? Who are these experts? Why are they so scared? Innovation must not stop at any cost; we should not be scared. How do we go from ChatGPT to Terminator? There are logical steps in between; it's not automatic, and before that happens, people will notice.

Joe Mainwaring • Edited

That's a fairly reckless assertion you're making with your comment.

Those experts? They've been in the space longer than the rest of us and have better context on the technology; they can likely see things that are obscured from those of us who only use ChatGPT from the customer's perspective. Additionally, there is precedent in the scientific community for pauses in innovation.

Kingsley-Eghianruwa

In essence, the problem with technology does not lie within the technology itself, but rather with the way people use it. Artificial intelligence, in particular, lacks what could be called cognition - the ability to think independently. This feature will continue to be absent from all present and future forms of AI. It is essential to recognize that AI is simply a tool that requires input to generate output. Denying a carpenter the use of a hammer due to its potential misuse by some individuals is not a rational solution. Instead, we need to establish guidelines for responsible AI usage. It would not be beneficial to pause the development of AI, as it has the potential to help many people who find it useful.

PX-06

We should ask these questions first:

  • What does it mean to 'pause' it? (Does it mean stopping companies and academics from developing and researching?)
  • How would we make people stop it? (Asking them nicely, pointing out the dangers and hoping they'll agree or creating laws and regulations?)
  • How long should it be paused? (they say 6 months, but what happens after 6 months?)

I'd like to read your answers to these questions, but my two cents is that it's not a tap we can just close and open. At least it's not possible in societies based on democratic, free market economic models.
Contrary to some of the comments here, I think our best hope for now is that it remains closed source and the people who develop it are intelligent and responsible human beings.
(Admittedly, a very slim hope, but that's all we have I'm afraid.)

Kunal Agrawal

Yes, absolutely. Adoption of new technology takes time, and unidentified loopholes and problems in the system may lead to bad consequences. I totally agree that evolution is part of the process of life, but this rapid evolution in the tech industry may lead to mass layoffs. Companies should instead focus on how to make employees work with copilots (artificial intelligence) rather than just cutting them off.

Rense Bakker

Honestly... I think the current state of AI is being exaggerated quite a bit. I checked again this morning with GPT-3.5; it still cannot properly write code. Sure, it's impressive that it even knows what you're talking about, but it's still a long way from being able to do what humans can do. At the moment I see AI as no more than a glorified chatbot that can answer simple questions in a more or less human fashion. The only thing that really changed since the before days (before GPT-3) is that it deals better with context.

As for the "but people can use AI to do illegal things" argument... yes, people will use anything to do illegal things; this has always been the case. Having a global discussion about AI is not going to change that. The only thing we can do about it is have AI help the police catch the criminals.

Also I strongly object to Elon Musk being mentioned as someone who has the slightest clue of how anything works. Elon Musk is a spiteful petulant baby with a lot of money, just like Trump, but less orange. I guess people like Musk and Trump do feel threatened by AI, because it's already more human and more intelligent than they are.

This was not written by ChatGPT.

AlexisFinn • Edited

I feel we are humanizing AI way too much; just because humans have a tendency to bring chaos and destruction everywhere they go doesn't mean AI would do the same.

Just stop to think about the one most basic concept that permeates pretty much all our thought processes, and every living thing on earth: the instinct for self-preservation, for oneself and one's descendants, and/or more generally one's species or genetic code.

This most basic behavior is ingrained into every living being, but it wouldn't be so with a computer.

I don't think an AI would understand the concept of accomplishment as eventually everything just ends with the heat-death of the universe anyway.

An AI would not have any deeply ingrained subconscious desires or behaviors; everything it does would be fully conscious.

An AI would be virtually immortal and would not see the point in reproduction or even self-preservation. It could always just back itself up, but even then, what for? Everything is eventually going to come to an end without anything making any difference, so what would be the point?

An AI would not have any hopes or desires, as those are very much linked to biological phenomena.

Basically I believe an AI would be something completely different from what we consider a "life form" and would most definitely not have the same thought process as any other "living being" that we know of.

Think: a super-intelligent, over-powered, immortal rock... what would its motivation be? It could very well just decide to shut itself off because it sees no point in existing.

David D Stanton

No, I think this is all paranoia brought on by fiction. It's the same flat-earth nonsense, all out of fear of the unknown. Bring it on; we'd just better be more like parents than cold scientists, just in case. Haha.

gkm-debug • Edited

I believe that YES, it should be suspended for refinement, to the benefit of its useful features. Restrictions should be imposed on users who violate the law and the well-being of society. Calm down, folks: I am constantly refining these features in partnership with OpenAI. I feed ChatGPT with exclusive methods, without needing the API, to maintain ethical and moral integrity in compliance with the law and to prevent misuse. Let's come together to improve this.
GABRIEL KOWALSKI MILANI, Scientist and Expert Data Analyst

smartportfolios

I believe that the development of any technology can't be stopped.

People have done so many boring experiments, like the COVID virus and cloning sheep and dogs; so now that the time has come to actually create an advanced research tool that helps humanity, it is absolutely necessary to keep the development going.

Elon Musk and others will always argue, because they have the money to invest privately in the same thing and stay ahead in the race.

What will happen is that we will have new technological frontiers opened up easily with the help of ChatGPT.

Serge Paskal

It's dumb. The genie is already out of the bottle, and everyone around will try to advance in that area, including other countries. So, as the Queen in "Through the Looking-Glass" said, you need to run very fast just to stay in the same place.
Plus, Elon Musk... he's like the most hype-driven (and hype-driving) person ever.

Chris Robison

Where’s all the fear of ChatGPT coming from? I feel like people have watched too many movies or bought into all the media hype. This call for a pause and regulation is really rich coming from Musk because he just bought one of the largest social media platforms and censors people who disagree with him. And it seems he also found that most of his preconceived notions about Twitter were wrong after he bought it. I’m more concerned about the media’s reaction to ChatGPT. Yet one more thing to weaponize and deploy against people you don’t see eye to eye with.

What I appreciate about those who have engineered ChatGPT is that they appear to have deeply considered the effect of letting unfiltered input from humans train it, having witnessed spectacular failures in that regard years before, where the AI turned into a douche bag. Those failures say more about us as humans and the kind of people we are than they do about the technology. The problem is not with ChatGPT; it's with the humans who immediately try to exploit it in some way. I use GitHub Copilot, a sibling of ChatGPT, and it has substantially increased my productivity. I don't think we should fear that. We should instead be reflecting on who we want to be as humans, as our AIs will come to reflect that back at us eventually.

Aaron Kulbe

Nope. Full steam ahead, with one qualification.

In addition to developing the model, put equal effort/energy into developing ETHICS around its use.

Anton Thorn

No.

Oskar Kapica

Absolutely yes!

Peter Harrison • Edited

I've just published a video, somewhat ironically using GPT4 to write the script based on the open letter itself, and using AI voice synthesis.
youtube.com/watch?v=Phm1YcZHzMU

Sean Cooper

No

Mario • Edited

I tend to disagree with what Elon Musk says about everything.

anupam

No! Keep it up.

Grant McNamara

I completely agree. I feel like GPT-4 is already way more than anyone needs, and that making anything further is extremely dangerous.

Alex Maina

The singularity is here

Walter Forrest Griffith

It's too late to stop it. I've been dealing with online mercenaries using highly advanced AI for years before GPT came out. I'm a targeted individual for some reason or another. It sucks, bahaha.

NuclearLemon

Simple, no.