OpenAI Japan Exec Teases GPT-Next - Slashdot


According to a presentation leaked by OpenAI Japan executives, OpenAI plans to announce a new AI model, "GPT-Next," this year, promising a 100-fold improvement over GPT-4. The model, reportedly code-named "Strawberry," incorporates "System 2 thinking" that enables deliberate reasoning rather than simple prediction, according to earlier reports. GPT-Next can also generate high-quality synthetic training data, an important issue in AI development. OpenAI Japan's Tadao Nagasaki revealed the plans and cited streamlined architecture and training efficiency, rather than raw compute, as the main drivers of the performance gain.



The following comments are owned by whoever posted them. We are not responsible for them in any way.

Ultra Hype ( Score: 2)

by gabebear ( 251933 ) writes: Yes, exponential growth. Every time. Yes.

Re: ( Score: 2)

by joh ( 27088 ) writes: "Yes, exponential growth. Every time. Yes." Nobody said it lasts forever.

Re: ( Score: 2)

by fleeped ( 1945926 ) writes: Look at the plot in the article.

Re: ( Score: 2)

by joh ( 27088 ) writes: Right. 202X: "no" future models, "it's over." Forever.

Re: ( Score: 1)

by znrt ( 2424692 ) writes:

It also shows a 100x performance improvement from GPT-3 to GPT-4. In that framing, words like "efficiency" and "power" are meaningless: no metric improved 100x from GPT-3 to GPT-4, or even came close. This pitch is so shameless that it must be scraping the bottom of the barrel for the most ignorant investors left. It hurts. That something this stupid is actually happening suggests it may not be far from reality.

Re: Ultra Hype ( Score: 2)

by Bodolius ( 191265 ) writes:

The metric generally applied when comparing GPTs is the parameter count. That's like estimating the power of a car engine from its fuel consumption: there's clearly a relationship, but if that's the metric you optimize for, you're doing something wrong. That doesn't necessarily make the claims false, though. Most marketing is like applied statistics: not lying, just curating the facts carefully.

Re: ( Score: 2)

by Shaitan ( 22585 ) writes:

"That doesn't necessarily make the claims false, though. Most marketing is like applied statistics: not lying, just curating the facts carefully."

Re: Ultra Hype ( Score: 2)

by Bodolius ( 191265 ) writes:

There are many ways to curate facts, but fact-checking sites are a strange example. What exactly is being curated there? Which facts are "false" and which are "true"? Are you saying there are other facts that simply don't get checked?

Re: ( Score: 2)

by Shaitan ( 22585 ) writes:

Then you probably haven't looked closely at the so-called "fact-checker" sites. They are used as part of a loosely organized system of political propaganda and misinformation, employing a variety of rhetorical tricks to undermine critical thinking. One of those tricks is neatly embedded in your assumption that facts are "fact-checked" as false or true. Facts are facts; they can be accurate or inaccurate. Truth, on the other hand, is subjective and relative.

Re: ( Score: 2)

by memory_register ( 6248354 ) writes: It only has to hold for as long as this hype cycle lasts; it doesn't have to last forever.

Re: ( Score: 2)

by omnihad ( 1198475 ) writes:

But then you'll basically be paying rent forever. Suppose all your employees were hired through a centralized agency, and that agency made cuts. It would be the same if AI replaced your employees, except this time you might not even get access to the trained model itself. They could just sell you access to the "employee" work units and monopolize the rest.

Re: ( Score: 2)

by Rick Schumann ( 4662797 ) writes: "Yes, exponential growth. Every time. Yes."

The scam game that is AI marketing will grow exponentially, and so will the number of fools who part with their money and get shit in return.

A "leaked" presentation ( Score: 3)

by mukundajohnson ( 10427278 ) writes: on Friday, September 06, 2024 @ 10:12 am ( #64767798 ) I'll wait until the end of the year to see the results before jumping to any conclusions.

It'll be 2,415 times smarter by then. ( Score: 2)

by mmell ( 832646 ) writes: Team up with Dillinger!

Wrong answers. Faster! ( Score: 1)

by tdsknr ( 6415278 ) writes: Wrong. Faster!

Japan? ( Score: 2)

by Pinky's Brain ( 1158667 ) writes: The only country that allows pirated content for AI training. I think politics got Japan the big fish.

Are these the same guys as the $2,000 subscription ( Score: 2)

by mmell ( 832646 ) writes:

Re: ( Score: 2)

by gweihir ( 88907 ) writes:

It's $2,000 a month. And I don't think it works; they may not even realize that themselves and are just keeping up the hype. Overall, this matches more and more of the stages of a typical large-scale fraud.

Excuse me? ( Score: 2)

by mmell ( 832646 ) writes:

Looks like obvious bullshit to me ( Score: 3)

by Baron_yam ( 643147 ) writes: on Friday, September 06, 2024 @ 10:38 am ( #64767876 )

The moment I read "intentional inference", I knew it was bullshit. And if someone really had discovered a way to make AI genuinely intelligent, that would be a much bigger deal than this.

Artificial intelligence is not impossible. ( Score: 2)

by mmell ( 832646 ) writes:

Re: ( Score: 2)

by Baron_yam ( 643147 ) writes: I didn't say it was impossible.

Re: ( Score: 2)

by Nightflameauto ( 6607976 ) writes:

Contrary to your denial, I believe there is no fundamental obstacle to truly creating artificial intelligence. Once that obstacle is overcome (which may be in the very near future), "artificial" intelligence is all but certain, and it will probably then improve at an unpredictable rate. You say otherwise, but there are plenty of genuinely smart people who say you're wrong, and a lot of money backing them up.

Re: ( Score: 1)

by Guruevi ( 827432 ) writes:

No, we (the scientific community in general) are not completely wrong, but there is a lot of marketing claiming to have found a shortcut. Preliminary tests of their latest model show that quality has regressed, because their previous models produced so much garbage: garbage in, garbage out, absent a fair bit of human filtering. Reading between the lines, this new model requires an artificially generated training set.

Re: ( Score: 2)

by gweihir ( 88907 ) writes:

That is because general natural intelligence is in short supply. You are simply ignoring observed facts here.

Re: Looks like obvious bullshit to me ( Score: 2)

by Superdre ( 982372 ) writes:

Actually, it's not as far-fetched as you think. We humans are just biological computers/robots, and our brain is just a bundle of bioelectrical connections. If you think we can never reproduce our thinking on a computer, you're being naive. Sure, it will take some time to imitate the way we think, but as computing power grows, it's only a matter of time before we reach that level.


not bullshit ( Score: 1)

by D. Dawg. Fresh ( 992280 ) writes:

@Baron_yam, if Strawberry is based on Quiet-STaR (Stanford), your observation has a better explanation than "it's bullshit": the new tricks for improving reasoning are still very expensive. In Quiet-STaR, the model learns to use a scratchpad via gradient descent (not in-context learning, not RL, not "think step by step" prompting). The big drawback is that it takes on the order of k scratchpad tokens to generate each output token. Perhaps they found a way to improve on this.
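For a rough sense of the overhead the comment describes, here is a minimal sketch (my own illustrative arithmetic, not Quiet-STaR's actual implementation) of how per-token scratchpad reasoning multiplies decoding cost:

```python
# Illustrative cost model: if a model emits k hidden "thought" (scratchpad)
# tokens before every visible output token, decoding work grows roughly
# linearly with the total number of tokens generated.

def decoding_cost(output_tokens: int, scratchpad_per_token: int) -> int:
    """Total tokens the model must generate: visible plus scratchpad."""
    return output_tokens * (1 + scratchpad_per_token)

plain = decoding_cost(1000, 0)   # ordinary decoding: 1000 tokens
padded = decoding_cost(1000, 8)  # 8 scratchpad tokens per output token: 9000
print(padded / plain)            # 9.0 -> ninefold decoding cost
```

Under this toy model, any "way to improve on this" would have to shrink k, share scratchpad computation across output tokens, or distill the behavior into the base model.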


Re: ( Score: 2)

by Mesterha ( 110796 ) writes:

"The moment I read "intentional inference", I knew it was bullshit. If someone had found a way to give AI true intelligence, that would be a much bigger deal than this."


BS heavy ( Score: 4, Insightful)

by dfghjk ( 711126 ) writes: on Friday, September 06, 2024 @ 10:38 am ( #64767878 )

The summary says "100 times more powerful", but the article says "100 times better", "100 times stronger", and an "orders-of-magnitude (OOMs) jump". Setting aside the obvious silliness of these claims, and the fact that at least some of the authors don't know what they mean: what are the metrics for "power" and "better"? How is this improvement measured, or is nothing concrete actually being claimed? Furthermore, the "100x improvement" supposedly comes "without significant additional computational resources". Granting a sane definition of resources, take that claim together with the statement that "the improvement will come from better architectures and learning efficiency": how terrible must the current implementation be for a 100x performance improvement to be possible in a single generation? It's clear that OpenAI is mostly hype at this point, and very little that isn't hype seems to come with it. Must be nice to get paid that much for phoning it in.
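For reference, the "OOMs" jargon and the "100x" figure the comment complains about are at least mutually consistent; an order of magnitude is a factor of ten, so 100x is two OOMs:

```python
import math

# An order of magnitude (OOM) is a factor of 10, so a claimed 100x
# improvement corresponds to log10(100) = 2 OOMs.
print(math.log10(100))   # 2.0
print(10 ** 2)           # 100
```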
