Looks interesting, will definitely check it out later. Did this come out of the blue or had NVIDIA announced they were releasing it?
I haven’t looked at it yet but what data are you providing to them? It looks like their main point in the video is it runs locally.

You can now use your RTX card to give your data to Nvidia. Aren't you happy to pay to become a data provider?
Since Chat with RTX runs locally on Windows RTX PCs and workstations, the provided results are fast — and the user’s data stays on the device. Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.
Exactly....
Say What? Chat With RTX Brings Custom Chatbot to NVIDIA RTX AI PCs
New tech demo gives anyone with an NVIDIA RTX GPU the power of a personalized GPT chatbot, running locally on their Windows PC. (blogs.nvidia.com)
What you're seeing here is just a proprietary implementation of open-source LLMs, which, by Nvidia's own marketing, are meant to be used with online-accessible material; without large amounts of data, no model is useful.
If you truly think there won't be a metric ton of telemetry embedded, then I've got a great deal on a fountain in Rome to sell you...
Based on the context information provided, it seems that the forum discussion is about a new tech demo called "Chat with RTX" that allows users to have a personalized GPT chatbot running locally on their Windows PC with an NVIDIA RTX GPU. The chatbot is powered by a proprietary implementation of open source LLMs, which are intended to be used with online accessible material. The discussion highlights the benefits of using a local app that never sends or receives internet data, as well as the importance of accuracy in the results provided by the chatbot. Additionally, there is a mention of telemetry being embedded in the tool, and the importance of keeping data private.
*simping intensifies*

Does the chatbot wear a leather jacket?