ChatGPT - Seriously good potential (or just some Internet fun)

Soldato
Joined
15 Mar 2010
Posts
11,135
Location
Bucks
Yeah, same, and then edit out all the rubbish it also spits out. It's fine, it works, but it's not perfect and that's also OK.

Honestly the image stuff is more impressive than the text AI, but both have a long way to go.
 
Associate
Joined
5 May 2017
Posts
974
Location
London
I use ChatGPT on an almost daily basis and I'm a Plus user.

I've had to re-write a Technical Requirements document recently which is almost 90 pages long.

ChatGPT pretty much wrote most of the business requirements and acceptance criteria which saved me weeks of work.

With it saving you weeks of work, what did you move on to in the time that would have been allocated to that task? Were you rewarded for doing the job quicker?
 
Soldato
Joined
14 May 2009
Posts
4,201
Location
Hampshire
With it saving you weeks of work, what did you move on to in the time that would have been allocated to that task? Were you rewarded for doing the job quicker?
It meant that I could hit a deadline that otherwise I would have most certainly missed unless I worked late into the evenings/weekends to complete it.

It freed up more time for me to work on the many other projects that I've got on.
 
Associate
Joined
5 May 2017
Posts
974
Location
London
It meant that I could hit a deadline that otherwise I would have most certainly missed unless I worked late into the evenings/weekends to complete it.

It freed up more time for me to work on the many other projects that I've got on.

If a deadline meant working evenings and weekends, was it originally a feasible deadline? Was it under-resourced, with your original timeframes not taken into consideration?
Just seeing how scenarios like this usually benefit the employer over the employee, as it becomes the norm to deliver quickly with ever shorter deadlines.
 
Caporegime
Joined
23 Apr 2014
Posts
29,774
Location
Chadsville
If a deadline meant working evenings and weekends, was it originally a feasible deadline? Was it under-resourced, with your original timeframes not taken into consideration?
Just seeing how scenarios like this usually benefit the employer over the employee, as it becomes the norm to deliver quickly with ever shorter deadlines.

That's actually a good point because you're potentially setting a precedent for efficiency for certain tasks.

Also, this is just the beginning; these models (or the next versions of them) will be so much better at your typical knowledge-worker tasks within a year or two.
 
Soldato
Joined
14 May 2009
Posts
4,201
Location
Hampshire
If a deadline meant working evenings and weekends, was it originally a feasible deadline? Was it under-resourced, with your original timeframes not taken into consideration?
Just seeing how scenarios like this usually benefit the employer over the employee, as it becomes the norm to deliver quickly with ever shorter deadlines.
All very valid points.

Anyone who's worked in a large global organisation knows that sometimes deadlines are imposed which are not realistic, but we do what we have to in order to ensure the work is completed.

Your point around setting the norm for quick turnarounds is certainly something that will happen, and if they can't get it from 'us', they'll turn to AI to get what they need.
 
Soldato
Joined
12 May 2014
Posts
5,270
I use ChatGPT on an almost daily basis and I'm a Plus user.

I've had to re-write a Technical Requirements document recently which is almost 90 pages long.

ChatGPT pretty much wrote most of the business requirements and acceptance criteria which saved me weeks of work.
Your post is light on details, so I'm having to assume stuff here with these questions.

I'm assuming that you read both the original document and the outputs from ChatGPT. How much of the output was accurate? Did you need to make any changes to the output, and how substantial were those changes? Did it ignore stuff that you felt should have been added, and vice versa?
 
Soldato
Joined
14 May 2009
Posts
4,201
Location
Hampshire
Your post is light on details, so I'm having to assume stuff here with these questions.

I'm assuming that you read both the original document and the outputs from ChatGPT. How much of the output was accurate? Did you need to make any changes to the output, and how substantial were those changes? Did it ignore stuff that you felt should have been added, and vice versa?

That is correct.

I knew the original requirements doc inside and out, but it was not in a great format, and there were a number of new functional and non-functional requirements which needed to be added. I also wanted to include acceptance criteria to make it clear to any vendor what we would accept for each particular requirement.

ChatGPT helped me break down the original document into specific areas and categories, extract the individual requirements, and then provide very simple acceptance criteria so it could be understood by technical and non-technical folk.

I fact-checked every output to ensure its accuracy and made several edits to ensure the format, tone and language were consistent throughout the document. I had to correct GPT many times because it decided to change the format of its responses.

It did gloss over some areas, but I just had to prompt it to include them.

ChatGPT / any GenAI still requires the end user to have an understanding of the subject they are seeking help with. You cannot fully trust any of the GenAIs to provide consistent and accurate responses... not yet anyway :D
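For anyone wondering what that kind of breakdown looks like if you script it rather than pasting sections into the chat window by hand, here's a rough sketch using the OpenAI Python client. The model name, prompt wording and function name are illustrative assumptions only, not a record of what was actually used above.

```python
# Illustrative sketch only: extract requirements and draft acceptance criteria
# for one section of a document via the OpenAI Python client.
# Assumes the "openai" package is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

PROMPT = (
    "You are helping rewrite a technical requirements document.\n"
    "From the section below, extract each individual requirement, label it as "
    "functional or non-functional, and write one plain-English acceptance "
    "criterion per requirement that both technical and non-technical readers "
    "can understand.\n\nSection:\n{section}"
)

def draft_requirements(section_text: str) -> str:
    """Return a drafted rewrite for one section of the original document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(section=section_text)}],
    )
    return response.choices[0].message.content
```

Every draft still needs the same fact-checking and formatting pass described above, since the model tends to drift in format from section to section.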
 
Soldato
Joined
1 Mar 2010
Posts
22,236
I'm not one to casually toss around the H-word, but, sure, you can say it: I'm a hero.
On Thursday, Google announced in a blog post that it would be scaling back the AI search results it had rolled out the week prior to somewhat disastrous and hilarious ends. For example, its Google AI Overviews results suggested that it was good to eat one rock per day (which was presented seriously but was actually based on an article from The Onion), that Barack Obama was the first gay president, and that the way to keep the cheese from sliding off your pizza was to add 1/8 of a cup of glue to the sauce.

I don't think that's true - coding assistant LLMs certainly aren't a scam; Microsoft Copilot is a useful product. Claims by startups that they're replacing software engineers are overhyped (especially when said startups are still hiring engineers themselves :D), but making engineers more productive is certainly useful and this is only going to improve.
if you read the latter, the principal utility is providing an indexing of research papers ... like 'traditional' Google


 
Caporegime
Joined
29 Jan 2008
Posts
58,931
if you read the latter, the principal utility is providing an indexing of research papers ... like 'traditional' Google

No, that's a bit muddled; it's mostly used as a coding assistant. You're just referring to one company using it for something else.
 
Man of Honour
Joined
17 Oct 2002
Posts
29,124
Location
Ottakring, Vienna.
Yeah, I don't use it enough to have noticed this, but I can't imagine how frustrating it must be for it to fight you on generating code that it happily spat out 6 months ago. I think that's why, for code, a lot of people have moved to Claude and other models.
I can tell you how frustrating it is - it's incredibly frustrating.
 
Caporegime
Joined
29 Jan 2008
Posts
58,931
Bayer are one of the principal licensors/co-financiers and the usage is as described.

Where are you reading that they financed this? AFAIK they're just a customer.

Microsoft's first Copilot was the GitHub one, based on OpenAI's Codex model, in turn based on GPT-3. It's purely a coding assistant LLM, and that's certainly not a scam; it's a very useful product.

They next used GPT-4 to make Bing Chat and have extended it into a much broader set of Copilot productivity tools - Bing Chat and Bing Chat Enterprise are now Microsoft Copilot (it's basically still GPT-4 at its core, with options for data privacy for corporates etc.).

Indexing research papers isn't the "principal utility" of the product, nor does the article claim that; they've explained that Bayer is using it for a variety of tasks, including summarising documents and, unsurprisingly, coding!
 
Associate
Joined
13 Jun 2013
Posts
1,815
I can tell you how frustrating it is - it's incredibly frustrating.
Are you talking about ChatGPT? If so, the new 4o model always gives out the full code solution; in fact I've seen people complain it churns out too much lol, can't win!
 
Soldato
Joined
1 Nov 2008
Posts
4,432
I find it hilarious how so many people are just asking ChatGPT stuff and regurgitating it into emails and text messages now. (And forum posts and Facebook posts...)

The first 6 minutes of this bit is about a person at the show who used ChatGPT to break up with someone. They didn't even bother to edit the text, lol.

 
Soldato
Joined
1 Mar 2010
Posts
22,236
So the new 'Apple Intelligence' sounds like rather insidious functionality.
Despite Apple pleading that it respects the security of your data, would you want all of your data - emails/photos/searches/purchases/(calls?) - 'crunched' by an LLM to allow it to subsequently help you,
like Clippy the paperclip, with your data also potentially being processed by the 3rd-party OpenAI?

See https://appleinsider.com/inside/apple-intelligence etc.
 
Man of Honour
Joined
5 Jun 2003
Posts
91,387
Location
Falling...
We've had to roll out an AI policy at work, as people are blindly using Google (which now serves AI-generated responses), ChatGPT and others to answer questions / solve issues.

Anyone with any critical thinking capability or a few decades of work experience in the engineering space is now getting quite cautious about the work from some people. We really don't want people to just copy and paste and/or not challenge the information they get.

Creating a more local (i.e. sandboxed) capability within our own ecosystem is where we're heading at the moment, especially with sensitive information and data. I worry that people are going to start copying and pasting commercially sensitive documents into them.
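A minimal sketch of that general idea - staff query a model hosted inside your own network instead of a public chatbot, so sensitive text never leaves it. This assumes an Ollama-style server running on an internal host; the hostname and model name below are placeholders, not a description of any particular setup.

```python
# Minimal sketch: query an internally hosted model (Ollama-style HTTP API)
# instead of a public chatbot, so sensitive text never leaves the network.
# The hostname and model name are placeholder assumptions.
import requests

INTERNAL_LLM = "http://llm.internal.example:11434/api/generate"  # hypothetical host

def ask_internal_llm(prompt: str) -> str:
    """Send a prompt to the in-house model and return its reply."""
    resp = requests.post(
        INTERNAL_LLM,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_internal_llm("Explain the difference between functional and non-functional requirements."))
```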
 
Soldato
Joined
1 Nov 2008
Posts
4,432
We've had to roll out an AI policy at work, as people are blindly using Google (which now serves AI-generated responses), ChatGPT and others to answer questions / solve issues.

Anyone with any critical thinking capability or a few decades of work experience in the engineering space is now getting quite cautious about the work from some people. We really don't want people to just copy and paste and/or not challenge the information they get.

Creating a more local (i.e. sandboxed) capability within our own ecosystem is where we're heading at the moment, especially with sensitive information and data. I worry that people are going to start copying and pasting commercially sensitive documents into them.

That's why responses seriously need to be able to cite sources, so you can then go and verify various aspects of the answer.
 