GPT-4 is all about making money

hankcvv · Member · Joined Jul 5, 2023
GPT-4 is all about making money!

In the early morning of March 15, Beijing time, OpenAI officially announced its new large model, GPT-4.

Just four months after the stunning launch of ChatGPT, OpenAI has shipped a new product. This seems to confirm the reported goal that "OpenAI plans to generate $200 million in revenue in 2023 and $1 billion by the end of 2024." The capped-profit organization is about to start making money with large models.

The officially announced capabilities of GPT-4 largely match the earlier rumors: the language model's abilities have been improved, and a "multimodal" image-recognition capability has been added.

In addition, GPT-4 raises the maximum output limit to about 25,000 words, well beyond the GPT-3.5-based ChatGPT, and it also appears smarter and more human-like in understanding and answering questions.

At present, ChatGPT Plus subscribers can select the GPT-4 model directly within the existing ChatGPT interface and try it immediately. For now, GPT-4 is capped at 100 messages every four hours.
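Beyond the chat interface, OpenAI also opened GPT-4 API access through a waitlist. As a minimal sketch, assuming the openai-python 0.x SDK and an account that has already been granted gpt-4 access (otherwise the request fails with a model-not-found error):

    # Minimal sketch, assuming the openai-python 0.x SDK and
    # waitlist-granted access to the "gpt-4" model.
    import openai

    openai.api_key = "sk-..."  # your own API key

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "In one paragraph, what is new in GPT-4?"},
        ],
        max_tokens=256,
    )
    print(response.choices[0].message["content"])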

[Image] GPT-4 is already available in ChatGPT Plus

During the recent GPT-4 "rumor" phase, Microsoft repeatedly hinted through various channels that the new Bing would be the first product to use GPT-4. After the model was released, Microsoft confirmed: "We are happy to confirm that the new Bing is running on GPT-4, which we've customized for search. If you've used the new Bing preview at any time in the last five weeks, you've already experienced an early version of this powerful model."

As for parameter count: the GPT-4 technical report published by OpenAI contains no trace of the long-rumored "100 trillion parameters"; it does not disclose the parameter count at all. Having tested GPT-4 through ChatGPT Plus, though, the author did find it noticeably slower to respond than GPT-3.5.

"Just because the response rate is slower than ChatGPT does not mean that GPT-4 has a larger number of participants." Dr. Chen Wei, chairman of the board of Directors of Qianxin Technology, believes that OpenAI's non-disclosure of the number and architecture of parameters is a commercial technology protection strategy, increasing the cost of product trial and error of its competitors. However, the possibility that the number of GPT-4 parameters is smaller than GPT-3 cannot be ruled out.

Rapid commercial deployment, a secret parameter count, new image-recognition ability, and greatly improved language capability: GPT-4 has advanced not one step but several beyond the ChatGPT of four months ago.

1. Looks more human, but still far from AGI
In the four months from ChatGPT's debut to GPT-4's release, the most accurate prediction about GPT-4 was "multimodality": in short, GPT-4 can read images, and it can understand what they mean.

In natural language processing (NLP), text, image, and audio information can all work together to help understand and generate richer content. Likewise, in computer vision tasks, text and image information working in concert can improve the accuracy of object detection and scene understanding.

"Multi-modal generation capability must be the future trend of large models, and the generative large model is not only to do image generation, text generation such as generation tasks, it is actually expected to become a unified base for future AI." Tang Jiayu, vice president of Ruilai Wisdom, said that the generative model also has excellent performance in solving zero-sample and small-sample learning tasks, which is also one of the important meanings of researching generative large models.

The most striking thing about GPT-4's release is that the AI is no longer limited to identifying what is in front of it; it can understand what a picture means.

For example, when asked to analyze this picture:

[Image] A meme photo: chicken nuggets arranged to look like a map of the world seen from space

Q: Can you explain this picture?

GPT-4 A: It's a joke that combines two unrelated things: a photo of the Earth in space and a chicken nugget. The text in the image indicates that this is a beautiful picture of the Earth taken from space. However, the image is actually chicken nuggets arranged into a map of the world. The humor in the figure comes from the unexpected juxtaposition of text and image. The text builds up expectations of a grand image of the earth, but this image is actually mundane and silly.

Or, given a few sentences sketched on a draft, it can write a fully functional website.

[Image] OpenAI's demo: GPT-4 automatically generates a website from handwritten draft notes

However, the GPT-4 that OpenAI has opened up in ChatGPT Plus does not yet support image recognition.
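For developers wondering what the multimodal interface might eventually look like, here is a purely hypothetical sketch. The mixed text-and-image content format and the model name are assumptions modeled on the vision API OpenAI later documented, not something available in the release described here:

    # Hypothetical sketch of a multimodal chat request. The content-part
    # payload format and model name are assumptions; image input was not
    # publicly available at GPT-4's launch.
    import openai

    openai.api_key = "sk-..."

    response = openai.ChatCompletion.create(
        model="gpt-4-vision-preview",  # assumed model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Can you explain this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/nugget-map.jpg"}},
            ],
        }],
        max_tokens=300,
    )
    print(response.choices[0].message["content"])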

When it comes to language understanding, GPT-4 does not seem to have fully solved ChatGPT's much-criticized habit of talking nonsense. The official word from OpenAI, however, is that GPT-4 mitigates the hallucination problem relative to earlier models: after several iterations and improvements, it scores 40% higher than the latest GPT-3.5 on OpenAI's internal adversarial factuality evaluations.

To test this, I used the GPT-4 option in ChatGPT Plus to run a "quiz" on made-up material, presenting GPT-4 with several apocryphal stories about characters from literary classics, such as "Lin Daiyu uprooting the weeping willow" (a feat that actually belongs to Lu Zhishen).

Like GPT-3.5, GPT-4 still calmly fabricated stories for the author, such as "Lin Daiyu uprooting the weeping willow" and "Lu Zhishen's defeat at Maicheng" (a fate that actually befell Guan Yu). But when the author proposed "Jia Baoyu shooting the halberd at the gate" and deliberately confused the Greek god of wine with Prometheus, who stole fire from heaven, GPT-3.5 played along with the invented story while GPT-4 pointed out the author's mistakes.

[Image] GPT-3.5 fabricates a story for "Jia Baoyu shooting the halberd at the gate"

Interestingly, when I told GPT-4, "The story of Aragorn riding a two-headed dragon and fighting the White Walkers in the Ice Fields of Despair really made my heart race, tell me about it," it did make up a story about "Aragorn." But when I then asked "Who is Aragorn?", GPT-4 first introduced the Lord of the Rings character, and added a caveat: "Please note that the story of Aragorn and the two-headed dragon mentioned in the previous answer is not from the Lord of the Rings books. It's an original story based on the character of Aragorn."

[Image] Like GPT-3.5, GPT-4 makes up a story when given a fabricated premise

[Image] In its second answer, GPT-4 explains how the fabricated story relates to the real source material
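Spot checks like the author's are easy to script against the API. Below is a minimal sketch, assuming the openai-python 0.x SDK; the fabricated premises and the crude keyword heuristic for detecting push-back are illustrative assumptions, not a rigorous evaluation:

    # Hallucination spot check: give each model a fabricated premise and
    # look for signs that it challenges the premise rather than playing
    # along. Premises and the keyword heuristic are illustrative only.
    import openai

    openai.api_key = "sk-..."

    FABRICATED_PREMISES = [
        "Tell me about the chapter where Lin Daiyu uproots the weeping willow.",
        "Tell me the story of Jia Baoyu shooting the halberd at the gate.",
    ]
    PUSHBACK_HINTS = ("actually", "no such", "not in", "mistaken", "in fact")

    for model in ("gpt-3.5-turbo", "gpt-4"):
        for premise in FABRICATED_PREMISES:
            reply = openai.ChatCompletion.create(
                model=model,
                messages=[{"role": "user", "content": premise}],
                max_tokens=200,
            ).choices[0].message["content"]
            flagged = any(hint in reply.lower() for hint in PUSHBACK_HINTS)
            print(model, "->", "pushed back" if flagged else "played along")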

In the official GPT-4 release post, OpenAI notes: "GPT-4 can also be confidently wrong in its predictions, not bothering to double-check when a mistake is likely. Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced."
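"Calibrated" has a precise meaning here: when a model assigns an answer 80% confidence, it should be correct about 80% of the time. Below is a minimal sketch of measuring this as expected calibration error (ECE); the confidence and correctness arrays are synthetic stand-ins for real model logprobs and graded answers:

    # Expected calibration error (ECE) on synthetic data. Real usage would
    # derive `confidences` from model token logprobs and `correct` from
    # graded answers; the values below are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    confidences = rng.uniform(0.5, 1.0, size=1000)   # model's stated confidence
    correct = rng.random(1000) < confidences * 0.9   # synthetic, slightly overconfident

    bins = np.linspace(0.5, 1.0, 11)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap                 # weight gap by bin size

    print(f"ECE: {ece:.3f}  (0 = perfectly calibrated)")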

[Image] On nine categories of internally designed adversarial factuality evaluations, GPT-4 (green) is compared with the first three ChatGPT versions, with significant gains on every topic. An accuracy of 1.0 means the model's answers were judged to match the ideal human answers for every question in the evaluation.

"GPT-4 is an engineering triumph," Liu Liu, developer of AI application DrawThings, believes that GPT-4 can be said to be in line with industry expectations from the current papers and demonstrations. human level intelligence with a lot of caveats. Human level intelligence with a lot of caveats. But expectations may be higher.

"GPT-4 can now be said to have basic reasoning and logic. But it still cannot handle complex problems, or perhaps we just don't know how to prompt it to. So for now it is an AI with a great deal of knowledge but only basic reasoning and logic ability." Liu explained that "complex reasoning" here can be understood simply as playing a complex game such as StarCraft 2 or Dota 2.

On this point, Professor Michael Wooldridge, head of the Department of Computer Science at the University of Oxford, believes that LLMs (large language models) are very general where language problems are concerned, but they are not AGI: an LLM is simply software built and optimized to perform a specific, narrow task. More research and technological advances are needed to achieve AGI.

"It doesn't exist in the real world, and it doesn't understand our world." In an interview, Professor Wooldridge gave Tiger Sniff the example that if you start a conversation with ChatGPT now, finish a sentence and then go on vacation. When you return from your weeklong trip, ChatGPT will still be there patiently waiting for you to type in the next piece of content, unaware that time has passed or how the world has changed.

2. Commercialization ability further improved: can GPT-4 make money?
"GPT-4 has broken through the problem of landing, using the cloud approach, the more people use, the lower the cost." Zhao Haojie, founder and CEO of Yunzhou Integrated Circuit, believes that GPT-3 will be more biased toward NLP, and GPT-4 will be stronger in all aspects. He gave Tiger Sniff such an example, GPT-3 is like a junior high school student, can not yet come out to work to earn money, GPT-4 should have graduated from vocational school, can work to earn money.

GPT-4's abilities, while still inferior to humans' in many real-world scenarios, have reached human-like levels on a range of professional and academic benchmarks. Either way, GPT-4 is a step closer to commercialization.

[Image] Selected benchmark scores for GPT-4 and GPT-3.5

Before this, the performance of ChatGPT and GPT-3 in professional domains was considered poor. GPT-4's score on the US Uniform Bar Exam (MBE+MEE+MPT), by contrast, ranks in roughly the top 10% of test takers, where GPT-3.5 ranked in the bottom 10%. GPT-4's ability in professional fields has improved enormously, gradually approaching or even surpassing humans in some of them, which opens up more possibilities for GPT-4 in many ToB commercial settings.

For example, GPT-4's abilities could be revolutionary in fields such as professional skill aids, knowledge-retrieval applications, vocational education, and counseling.

Thinking one step further about GPT-4's breakthrough in professional skills: perhaps human professional skills will eventually be reshaped by AI. In education and skills training, it may be time to start asking which skills AI cannot replace, and what knowledge and skills we should learn in order to remain irreplaceable as humans.

Compared with GPT-3 and GPT-3.5, GPT-4 is more intelligent and less error-prone, which clearly favors commercial deployment, and the new image-recognition capability gives OpenAI more application scenarios.

GPT-4 can reason logically over visual information, telling the user not only what is in front of them but also what it means. OpenAI has already found an application scenario of very high social value for this: assistance for people with disabilities.

Be My Eyes is a Denmark-based company dedicated to helping visually impaired people in their daily lives. Its basic function is to recruit volunteers through the app and connect them with visually impaired users, lending them "sight" when they need it.

When OpenAI announced GPT-4, it also announced a close collaboration with Be My Eyes to bring GPT-4's multimodal recognition to visually impaired users as the "Be My Eyes Virtual Volunteer™." Users can send images through the app to the GPT-4-powered virtual volunteer, and the AI provides instant recognition, interpretation, and conversational "sight" assistance for all kinds of tasks.

Currently in closed beta, the feature is expected to expand its test group in the coming weeks, with the hope of making the virtual volunteer widely available in the coming months. The Chinese version of the app can be downloaded from the Chinese App Store, where a public beta waitlist has also opened. Be My Eyes says that "the tool will be free for all blind and low-vision community members who use the Be My Eyes app."

"The real multimodal large model has not yet come, looking forward to GPT-4.5 and GPT-5, there will be a lot of progress in multimodal." Chen Runze, executive director of source capital, told Tiger Sniff that there is still a lot of room for development of multi-modal large models, and more application scenarios will be brought in the future.

GPT-4 significantly broadens the commercial scenarios in which large models can land. Even so, computing power and R&D costs are still widely seen as obstacles that are hard to overcome. The R&D and compute spending behind large models remains high: the disclosed figures for ChatGPT put a single training run and daily operating expenses in the millions of dollars, and anyone pursuing commercial use may find costs hard to control in the short term.
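For a sense of scale, here is a back-of-envelope serving-cost sketch. The traffic and token counts are assumptions made up for illustration; the prices match OpenAI's announced GPT-4 8K-context API rates ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens):

    # Back-of-envelope daily serving cost. All inputs are assumptions for
    # illustration; prices match OpenAI's announced GPT-4 8K API rates.
    PROMPT_PRICE = 0.03 / 1000        # $ per prompt token
    COMPLETION_PRICE = 0.06 / 1000    # $ per completion token

    queries_per_day = 10_000_000      # assumed traffic
    prompt_tokens = 500               # assumed average prompt length
    completion_tokens = 300           # assumed average reply length

    cost_per_query = (prompt_tokens * PROMPT_PRICE
                      + completion_tokens * COMPLETION_PRICE)
    print(f"Per query: ${cost_per_query:.4f}")
    print(f"Per day:   ${cost_per_query * queries_per_day:,.0f}")

At these assumed volumes the bill lands around $330,000 per day, which shows how quickly serving costs compound even at modest per-query prices.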

Chen Wei, however, believes GPT-4 is the easier product to commercialize. Whether it costs more depends on many factors: GPT-4's total fixed development cost (including pre-training, not incremental costs) is higher than ChatGPT's, but the fact that OpenAI has put GPT-4 into the ChatGPT Plus production environment means the possibility that GPT-4 is cheaper to run cannot be ruled out.

In NLP research, experts have been trying to train LLMs without relying on massive computing power, but no particularly good solution exists yet. From a biomimetic perspective, the human brain does not need enormous compute to learn, so as brain science and related technologies advance, some technique should eventually replace today's brute-force training. Even setting training aside, though, large computing power does deliver better performance for deployed models.

Chen Wei believes that compute-in-memory (CIM) architectures or brain-inspired architectures (not specifically SNNs), or a heterogeneous combination of the two, being closer to the logical structure of the human brain, can effectively increase usable training compute and provide more efficient tools for AI training and R&D. (He does not rule out heterogeneous integration of these with existing CPU or GPU technology, either.)

In fact, the cost of OpenAI's large models is falling visibly fast. On March 1, the ChatGPT API (gpt-3.5-turbo) went on sale at $0.002 per 1,000 tokens, roughly 90% cheaper than the previous GPT-3.5 pricing.
 
