ChatGPT - Seriously good potential (or just some Internet fun)


In this work we present our approach to generating high-quality episodic content for IPs (intellectual properties) using large language models (LLMs), custom state-of-the-art diffusion models and our multi-agent simulation for contextualization, story progression and behavioral control. Powerful LLMs such as GPT-4 were trained on a large corpus of TV show data, which leads us to believe that with the right guidance users will be able to rewrite entire seasons. "That Is What Entertainment Will Look Like. Maybe people are still upset about the last season of Game of Thrones. Imagine if you could ask your A.I. to make a new ending that goes a different way and maybe even put yourself in there as a main character or something." [Brockman]
A fascinating proof of concept: they used multiple AI models to create a full episode of South Park entirely with AI, from script to animation to voice-over.
 
Looks like that company has already written new Friends episodes. I've never watched one, but the AI version is everything I dreamed it would be.
Let's get those script writers doing something more interesting.

(image post - how do you post untagged Twitter links?)



e: vimeo link https://vimeo.com/808789328
 
Hmm... a new scam is about.

TLDR: People are using AI to write books and then publishing them. They have even taken to using the names of actual authors as the authors of their ChatGPT books.

 
When it first started, we could ask it for code to do simple stuff in Python. Now it only gives an outline, which is a pain.

Then again, we're asking for far more complex code to meet specific complex requirements... haha
 
Guess OC didn't get the order.
Will these be in the new NHS database, or are they training them to replace the BoE board, or the cabinet?

The UK government has announced plans to invest £100m in boosting AI chip production with the help of leading chipmakers AMD, Intel, and Nvidia. This investment is expected to support the order of up to 5,000 graphics processing units (GPUs) from Nvidia. GPUs are essential in powering AI applications like OpenAI’s ChatGPT, driving the demand for these chips.

With the global race for AI dominance intensifying, companies like Nvidia have experienced a surge in their value. The UK’s investment aims to position the country as a leading global technology “superpower” by the end of the decade, a vision set out by Prime Minister Rishi Sunak.

However, the UK still has ground to cover in establishing itself as an AI leader. According to GlobalData’s jobs analytics database, the UK lags behind the US and India in terms of the number of active AI job postings. From January 2023 to August 2023, the UK recorded 11.4k active AI job postings, while the US advertised 138.8k and India posted 46.8k advertisements.

To put the UK’s investment into perspective, the US announced a $52bn investment under the Chips Act, and the EU unveiled subsidies of up to €43bn into the AI industry. With the global AI market projected to be worth $383.3 billion in 2030, these investments aim to secure a competitive advantage in the AI landscape.
 
It may be fine if you ask it for fictional output, as there's no reference for whether it has succeeded other than the response being plausible. I gave it a harder test: if it was trained on the contents of the internet, could it understand the NHS data dictionary well enough to produce some SQL DDL?

It failed. Whether it was able to answer but chose not to because of the compute cost, I don't know. However, the answer it gave didn't exactly own up to not performing the task; instead it tried to pass something off as the answer. Only by comparing the answer to the NHS internet resource, or from personal experience, would you know it had given a pointless answer.

I don't have the response now because it was a few weeks back, but for it to be useful it should be able to interpret class definitions and relate them to a database table. It wasn't that hard a test, as I'd only asked for patient demographic details, not the full NHS data dictionary.
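For illustration, something like the following is the shape of answer I'd have considered useful. This is a minimal sketch only: the table and column names are hypothetical, loosely modelled on patient demographics, and are not taken from the actual NHS data dictionary definitions. It uses SQLite just to show the DDL actually parses.

```python
import sqlite3

# Hypothetical DDL for a patient demographics table. Names, types and
# lengths are illustrative assumptions, NOT the real NHS data
# dictionary definitions.
DDL = """
CREATE TABLE patient (
    nhs_number     CHAR(10) PRIMARY KEY,  -- 10-digit NHS number
    family_name    VARCHAR(35) NOT NULL,
    given_name     VARCHAR(35),
    date_of_birth  DATE NOT NULL,
    gender_code    CHAR(1),               -- coded value, not free text
    postcode       VARCHAR(8)
);
"""

def create_schema(conn: sqlite3.Connection) -> list[str]:
    """Apply the DDL and return the names of the tables it created."""
    conn.executescript(DDL)
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return [r[0] for r in rows]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(create_schema(conn))  # ['patient']
```

The point being: a useful answer maps each attribute in the class definition to a typed column, rather than producing something merely plausible-looking.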
 
Musk's 99% AI Full Self-Driving has serious competition

California authorities have asked General Motors to “immediately” take some of its Cruise robotaxis off the road after autonomous vehicles were involved in two collisions – including one with an active fire truck – last week in San Francisco.
The collision occurred when an emergency vehicle that appeared to be en route to an emergency scene moved into an oncoming lane of traffic to bypass a red light. Cruise’s driverless car identified the risk, the blog post said, but it “was ultimately unable to avoid the collision.”
...
On Tuesday, Cruise confirmed on X, formerly known as Twitter, that one of its driverless taxis drove into a construction area and stopped in wet concrete.


Wondered how much of the Indian moon landing was accomplished with Western/US chip technology too. Has Biden embargoed them now?
- you're either with us, Mr Modi, or signing on with Putin.

e: cemented car https://youtu.be/NA-2xCy-x5Q?t=41
 
Sorry, I haven't read the whole thread, but I'm considering paying the $20 a month for GPT-4. I generally use it for things like book recommendations, looking up reference material and how to do certain tasks when programming.

My question is pretty simple. Is ChatGPT noticeably better when paying $20 a month for it?
 
So Sam Altman is out. Anyone seen any good theories as to the reason why?

I used a new thing called Google and it says -

OpenAI board's official statement read: “Mr Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
 
Someone ask GPT4 since it can now browse the web.

Ignoring the corpo gobbledegook, it's really one of two reasons.

OpenAI is a non-profit that runs a for-profit company. Either the board wants to pivot from safety to printing as much money as possible and Sam blocked them, or Sam wanted to make money and the board stopped him. Considering all those devs left to form Anthropic, and Elon and others commented on the shift away from openness and safety when Sam came on board, I suspect the latter. He looked like a generic tech-bro CEO at his dev day two weeks ago. I think Altman saw himself as the next Jobs, and that OpenAI would join FAANG.
 
The latter seems to be the case, hence the comment re: communication. He had the dev day, with the laundry buddy and other apps, and the board + chief scientist saw this as departing too far from their not-for-profit goals. They have a capped-profit entity so they can recruit and reward top AI/ML talent etc., but growing like a conventional Silicon Valley startup (in this case maximising revenue and growth by putting so much of their compute power towards inference for customers/silly commercial apps) isn't necessarily in line with their founding charter, so the directors perhaps stepped in for that reason.

There are also rumours of some internal fighting re: the chief scientist being sidelined a bit recently. And now rumours that the backlash has caused the board to have a rethink (perhaps Microsoft kicking off re: their investment), and allegedly talks re: Sam coming back.
 
Yeah. If they develop AGI, making money off a math tutor GPT or a Mr Beast character.ai clone will seem quaint. Altman doesn't think LLMs will be the sole route to AGI. Right or wrong, they're at least useful for sucking up VC money.

He joined Microsoft and took some top talent with him.


Considering this is the second major split from OpenAI after Anthropic, there are probably some governance issues. This might be their Snapchat/Vine moment: if MS has GPT at home, what do they need to give OpenAI Azure credits for? Whenever Musk takes credit for putting Ilya Sutskever in OpenAI, it almost sounds like he's his "man on the inside", acting as a spoiler. Anyway, we'll see how far they go when Google, who by most accounts has the best AI team on the planet, Meta, Amazon with Anthropic, and now Microsoft are all competing for the same space with clear profit motives and boatloads of cash. Maybe a fruity company will throw them a lifeline.
 