The forum discussion concerns "Chat with RTX," a new tech demo that lets users run a personalized GPT-style chatbot locally on a Windows PC with an NVIDIA RTX GPU. The chatbot is powered by a proprietary implementation of open-source LLMs, intended to be used with online-accessible material. Posters highlight the benefits of a local app that never sends or receives data over the internet, and stress the importance of accuracy in the chatbot's results. The discussion also raises concerns about telemetry embedded in the tool and the importance of keeping user data private.