It is not learning. The training was done earlier this year.

But if it's learning, can you change it?
Things like the above are exactly why uncensored models running on local hardware will become more important. Eventually you'll just not be able to get information on many topics, or it will flat out give inaccurate information (like it already does). I know at the moment it's mostly only working against the edge-lords, but who knows what the higher powers will decide is off-limits next.
Do you host your own personal google search equivalent?
That is sort of available now.
I have the 4-bit QLoRA Guanaco 33B model, which is based on Meta's LLaMA, running on a 5950X and an 8GB GPU. It isn't unusable at ~3 tokens/sec. The 65B model is rated to be very close to ChatGPT, and with a better GPU it allegedly runs at about the same speed.
A few fine-tuned models have been "uncensored" at various sizes, so almost anyone can run their own now, with the 7B and 13B models capable of running at Bing AI's speed of operation.
Also, a cleaned-up Common Crawl dataset amounting to ~500GB (unpacking to 2.8TB) was recently made open source under the Apache 2.0 license.
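For anyone curious what running one of these locally actually looks like, here's a rough sketch using llama-cpp-python with a 4-bit quantized model file. The library choice, file name, and parameters here are just placeholder examples, not the exact setup described above:

    from llama_cpp import Llama

    # Load a 4-bit quantized model file (placeholder path) and offload
    # part of the network to the GPU; the rest stays in system RAM.
    llm = Llama(
        model_path="./guanaco-33b.q4_0.bin",  # hypothetical local file
        n_ctx=2048,         # context window size
        n_gpu_layers=20,    # tune to whatever fits in an 8GB card
    )

    # Run a single completion and print the generated text.
    out = llm(
        "Explain in two sentences why local language models matter.",
        max_tokens=128,
        temperature=0.7,
    )
    print(out["choices"][0]["text"])

With most of a 33B model sitting in system RAM like that, you should expect low single-digit tokens per second, which is fine for short answers.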
I know it is possible; I do similar at work (currently playing with Falcon models). But my point was that if someone is worried about censored results, then they should be running their own search engine.
Let alone that the only purpose of an uncensored model would be to essentially spread hate speech.
You would also have the problem that the training was on curated data anyway, and even with the smaller models it is very expensive to train from scratch.
I agree, it's incredible.
Bing AI's Creative Mode is the only AI that I know of that can successfully complete a game of Tic-Tac-Toe.
I haven't used ChatGPT or Falcon yet, and I'm wondering: can either of them play Tic-Tac-Toe successfully?
I have a few 30B models running locally. It is amazing what can be run from your PC: the amount of information that can be provided with no internet connection, displayed in an easy-to-digest way.
As always, everything is to be taken with a grain of salt. It will flat-out make stuff up at times.
Bing AI is based on GPT-4 AFAIK, with minor tweaks and plugged into the internet, so in theory, yes, it should be able to.
You can run inference locally, but you can't train locally. If you want results approaching SOTA, then you need to spend a few hundred K minimum, a few million preferred. Otherwise your local model is already censored.
I'm a hobbyist user at best, with no need to train anything. What I can run on my machine at the moment is absolutely good enough for me, and the uncensored models can give answers on topics that the regular ones will cry about.
I'm fully aware that what I have at home is not even close to GPT, etc.
So GPT-3.5 and 4 can play Tic-Tac-Toe. It was interesting.
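If anyone wants to try the same experiment, a minimal sketch against the OpenAI chat API looks something like this (using the older pre-1.0 openai Python client; the model name and prompts are just examples):

    import openai

    # Ask the model to play Tic-Tac-Toe one move at a time; the human's
    # moves get appended to this list as the game goes on.
    messages = [
        {"role": "system",
         "content": "We are playing Tic-Tac-Toe. You are X. After each of "
                    "my moves, print the board and then make your own move."},
        {"role": "user", "content": "I place O in the center square. Your move."},
    ]

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # or "gpt-4"
        messages=messages,
    )
    print(reply.choices[0].message.content)

Looping that call with the conversation history appended is enough to play out a full game and see where the model loses track of the board.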
I wonder if good open-source distributed AGI is inevitable and whether, being open source, it can essentially be free, similar to how the air we breathe is free.
I asked it to write lyrics with dragons, castles, swords & knights.
I won't ever have to write lyrics again.
(Verse 1)
In a land of legends and ancient lore,
Where dragons soar and castles roar,
Knights don their armor, brave and true,
With swords gleaming, they'll fight for you.

(Chorus)
Oh, in this realm of fantasy,
Where dreams and myths collide,
We'll journey through the tapestry,
Of dragons, castles, knights, and swords we ride.

(Verse 2)
A dragon's fire, fierce and bright,
Lights up the sky in the dead of night,
Its scales of gold, a sight to behold,
As it guards the treasure, a story untold.

(Chorus)
Oh, in this realm of fantasy,
Where dreams and myths collide,
We'll journey through the tapestry,
Of dragons, castles, knights, and swords we ride.

(Bridge)
Through misty forests and winding trails,
We'll face the challenges that legends unveil,
Brave knights with honor, hearts full of might,
Defending kingdoms, in the name of light.

(Verse 3)
Within the castle's towering walls,
The princess waits, her spirit never falls,
Her heart's desire, a knight to save,
With a sword of valor, he'll be her brave.

(Chorus)
Oh, in this realm of fantasy,
Where dreams and myths collide,
We'll journey through the tapestry,
Of dragons, castles, knights, and swords we ride.

(Outro)
So let us weave these tales of old,
Of dragons, castles, knights, and swords of gold,
In our imaginations, forever free,
In this realm of fantasy, where we shall be.
DragonForce entered the chat
Yes, I get this, but it's not censored to the point of being useless. As in, it will give answers to specific things that others might say "no" to.

You are missing the point. The model you are running is already censored because it is pretrained. The training corpus was censored, and then extensive fine-tuning and reinforcement learning were done to further censor undesirable results. The censoring is all in the weights, not the interface.