ChatGPT Update

Jovialis


This one seems pretty significant, because it integrates all of the functions that are currently separate. Now you can use Code Interpreter, web browsing, DALL-E 3, and plug-ins all at once, instead of having to switch between chat threads. This is only for paying subscribers, though it has not been rolled out to me yet.
 
For me, it has become an indispensable assistant when coding. I use it quite a lot, but it's not a magic tool that does everything by itself. As Steve Jobs said, computers are a bicycle for the mind. I now think ChatGPT fits that description quite well; it works like a bicycle for the mind. I use it for brainstorming quite a lot.

But now we have open-source LLM tools as well, so you can run them locally on your machine; pretty soon they will be nearly as powerful as ChatGPT.
 
I agree, it has been very useful with coding. It was indispensable in helping me figure out all of the commands in PowerShell and Ubuntu to facilitate merging my DNA with the Reich lab dataset, as well as getting the right prompts and cleaning up scripts in R for RStudio. I think bikes are a good analogy.
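The core of the merging step described above is intersecting two datasets on shared SNP IDs. Here is a minimal sketch of that idea; all SNP IDs and genotypes are invented for illustration, and a real merge with the Reich lab dataset would use dedicated tools (e.g. plink) on their published file formats, not plain Python dicts.

```python
# Hypothetical personal genotypes and a hypothetical reference panel,
# both keyed by SNP ID (the identifiers both datasets share)
mine = {"rs0001": "AA", "rs0002": "AG", "rs0003": "GG"}
reference = {"rs0002": "AG", "rs0003": "GA", "rs0004": "CC"}

# Keep only the SNPs present in both datasets (the overlap step)
shared = sorted(set(mine) & set(reference))
merged = {snp: (mine[snp], reference[snp]) for snp in shared}

for snp, (g1, g2) in merged.items():
    print(snp, g1, g2)
```

The same intersect-on-key logic underlies the real tools; they just add strand checks, allele flipping, and binary file formats on top.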
 
I am really worried about the involvement of the U.S. government regarding regulation of AI. I am afraid they are going to hobble it.
 

I just won't use it for architecting a whole software project; I almost fell short because of ChatGPT, but when I keep it under control it's a very helpful tool, a magical tool that needs context. It's especially good for naming functions, variables, and files, for conventions, and for debugging.

Somebody on medium.com said the age of mathematicians has come, and indeed it has; it's striking how much ChatGPT is built on principles of statistics and probability.
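That statistics-and-probability core can be seen in miniature in how a language model picks its next token: raw scores (logits) are turned into a probability distribution with a softmax, and a token is sampled from it. The vocabulary and logit values below are made up purely for illustration.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["bicycle", "car", "horse"]   # hypothetical next-token candidates
logits = [2.0, 1.0, 0.1]              # hypothetical model scores

probs = softmax(logits)
choice = random.choices(vocab, weights=probs, k=1)[0]

print(dict(zip(vocab, [round(p, 3) for p in probs])))
print("sampled:", choice)
```

The whole generation process is just this step repeated: probability distribution, sample, append, repeat.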

I actually want to get really into it, but it's easier said than done.
 
I love the analogy of ChatGPT being like a "bicycle for the mind" - that perfectly captures how it can accelerate and augment our own thinking. As a fellow coder, I can totally see how invaluable ChatGPT is for rapid ideation and brainstorming. It really becomes an extension of your own problem-solving abilities.

 
I saw the ChatGPT update with DALL-E, for example; honestly, it's subpar compared with Midjourney.

Also, the custom bots are just over-hyped crap. It's still ChatGPT. Not to undermine the actual product, which is awesome and magical, but this trend of shipping no matter what is poisonous.
 
Sam Altman has been fired. His replacement is Mira Murati, an Albanian woman.


 
Mira is an interim CEO; she was the CTO before, and it looks like she has been quite successful.

It looks like the board of OpenAI had their differences. Nobody really knows what happened, but it seems Sam Altman was pushing too hard for new features that part of the board disagreed with.

I've noticed the performance of GPT-4 has decreased quite a lot; it tries to skip a lot of things, especially in coding. It probably requires a lot of computation and they are trying to reduce costs. This is not real AI in the truest sense, more statistics and probability tricks on steroids, but very impressive nevertheless.
 
So Emmett Shear is the new CEO of OpenAI instead, and Sam Altman is joining Microsoft. Wow, what a turnover; unexpected.

At first sight it looks like a coup d'état by Microsoft.
 
OpenAI employees express concerns about Q*, a potential Artificial General Intelligence (AGI) seen as an existential threat to humanity.

Allegedly concealed by CEO Sam Altman, this revelation may have led to his departure and triggered an internal investigation. Q* is designed to reason like humans, showing 'almost' human-like reasoning, especially in mathematics, marking a significant leap in AI complexity.

OpenAI scientists, alarmed by Q*'s development, signed a letter expressing serious concerns about its security and ethical implications. The fear is that Q*, with advanced capabilities, could act independently, posing unpredictable and dangerous consequences if its decision-making process is not fully understood and safety measures are lacking.

The rapid evolution of AI raises questions about security, privacy, employment, and social impact. AGI, with its potential cognitive superiority, brings possibilities and challenges across multiple fields. But despite the enormous potential there are risks, including AGI acting independently, which raises ethical and security dilemmas.

Some say that with AGI we risk having a Skynet moment. It's an extreme opinion, but the potential escape of an AGI from human control is a real concern.

This is why the US wants to keep AI out of nuclear weapons control systems.​
 
Whatever we see in ChatGPT, a lot of it comes from this man's mind and contributions, a German-born computer scientist.

It's absolutely bizarre that OpenAI engineers don't fully understand the system they built (acknowledged by Karpathy), yet this man invented those principles and fully understands the whole architecture.

 
Idk, frankly I think these companies are "fluffing" their products. But I do look forward to owning robot slaves as a geriatric, I hope.
 
The problem with AI, with artificial neural networks, is not the lack of understanding of its internal architecture.

Artificial neural networks are not programmed in the traditional way, in which an explicit solution process, a sequence of well-defined steps expressible in code, is given as a program.

They emulate the human brain, the functioning of its neurons and synapses. We know the architecture of the system, but no one knows how its decision process works. We know the inputs and outputs, but we do not know how things are processed internally: which nodes (artificial neurons) and layers the decision-making process passed through, or how the system arrived at that specific decision or conclusion. Nobody knows how to explain it; this is the black box problem, and it is the main risk: AI making decisions that are completely unexpected and potentially harmful to humans.
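The black-box point can be made concrete with a toy network: we can compute the output for any input, yet nothing in the weights says why the decision came out that way. The weights below are arbitrary numbers chosen purely for illustration, not from any trained model.

```python
import math

def sigmoid(x):
    """Squash a raw value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Arbitrary weights of a tiny 2-input, 2-hidden-unit, 1-output network
W_hidden = [[0.9, -1.2], [0.4, 2.0]]
W_out = [1.5, -0.7]

def forward(x1, x2):
    """Run the network: inputs and output are fully observable."""
    hidden = [sigmoid(x1 * w[0] + x2 * w[1]) for w in W_hidden]
    return sigmoid(sum(h * w for h, w in zip(hidden, W_out)))

score = forward(0.5, 0.8)
print(round(score, 3))
# The computation is fully mechanical, but what concept each hidden
# unit encodes is not stated anywhere in the numbers; interpreting
# them is exactly the open "explainability" problem.
```

Scale this from two hidden units to hundreds of billions of weights and the interpretability gap the post describes becomes clear.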


That is why there is so much emphasis on the ethical and security aspects of AI development, and particularly of Artificial General Intelligence (AGI), the holy grail of AI: on the accountability, transparency, and explainability (reports that explain how a certain conclusion was reached) of the system.

With a technology as powerful as AI, better safe than sorry.

Even someone like Elon Musk thinks there are dangers involved and the need for regulation.​


 
Understanding how it makes decisions is part of the architecture they don't fully understand. IMO, this is very scary for further progress indeed.
 
Frankly, I think it is just a marketing scam to inflate the threat of AI in order to get people chattering about it. I do not think AI will be an existential threat any time soon. I posted an article from the Google Brain co-founder who holds this same position.

Google Brain founder says big tech is lying about AI extinction danger


The motivation for government control of AI is also fairly obvious: they want to maintain control of the narrative, just like they already do with most of the media.

If you have AI producing facts that violate the orthodox sophistry and propaganda, that's bad for the government's grip on power.
 