The AI is taking our jerbs thread

The issue is that large organisations don't have the skillsets for cloud and modern data privacy, let alone AI!

I kid you not - I've had to raise an objection at a security review because someone wanted to pull production PII data onto their laptop to 'analyse' it. When I vetoed it, the fallout was "we've always allowed this in the past", etc. etc. Another veto was for a project with a third-party vendor who, as part of an integration to provide container controls, had to have full org-level AWS access, with their code then managing the AWS instances from outside the organisation. I kid you not on that either.

I get AI - I get its pros and cons. Having been the guy accountable for things (including AI offered through the platform), I know it needs skills before you can run with it.

I'm more along the lines you're thinking. In our organisation, data protection is a constant battle between stopping dumb leaks and pushing back against over-restrictive policies. Some of the people pushing for AI have no knowledge of the data they want to open up.
 
LangChain is the main framework for building agents. LangGraph is what you use to orchestrate them together into multi-agent workflows. Using Python for both.

Not training the LLMs themselves - there's no point; you just plug into whatever model you want to use.
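To make that concrete, the orchestration side boils down to something like this toy sketch - no real LLM, no dependencies, and all the node names and state keys are invented for illustration (real LangGraph adds branching, persistence, streaming, etc.):

```python
# Toy sketch of graph-style agent orchestration, in the spirit of LangGraph
# but dependency-free so it runs anywhere. Each "node" takes the shared state
# dict, mutates it, and passes it along.

def plan(state):
    # Decide which steps the "agent" should take; a real node would ask an LLM.
    state["steps"] = ["fetch_weather", "summarise"]
    return state

def execute(state):
    # Pretend to run each planned step; a real node would call a tool or model.
    state["results"] = [f"ran:{step}" for step in state["steps"]]
    return state

# The "graph" here is just an ordered list of node functions.
PIPELINE = [plan, execute]

def run(pipeline, state):
    for node in pipeline:
        state = node(state)
    return state

final = run(PIPELINE, {"task": "demo"})
print(final["results"])
```

The value of the frameworks is everything around that loop - retries, state checkpointing, conditional edges between nodes - not the loop itself.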
Umm, I don't see the point of LangChain...

In fact I built my own one that was similar, for use with a mobile app, and you could use any LLMs with it too.

And that was over a year ago.
 
Other company first interview - I think they were looking more for low level AI experience (think coding and prompt engineering). Will have a follow up next week (possibly) but I think I'm still not experienced enough with that.
 
Umm, I don't see the point of LangChain...

In fact I built my own one that was similar, for use with a mobile app, and you could use any LLMs with it too.

And that was over a year ago.
Yes, you can write a rudimentary chatbot app very easily, but reinventing the wheel is not an efficient use of people's time. Using a well-established framework and range of libraries is a huge timesaver, and real value is unlocked when you're integrating it with LangGraph, LangSmith, LangServe etc.

Wanna write a coding bot? Just import the GitHub or GitLab toolkit and you’re pretty much done. Very powerful tools now and all open source which is nice.
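Under the hood, what a toolkit saves you from writing is a tool-calling loop like this hand-rolled sketch - the "model" is a stub that always requests one tool call, and the tool name, repo and issue data are all invented (a real agent would let the LLM pick the tool and arguments):

```python
# Hand-rolled version of the tool-calling loop that LangChain-style toolkits
# wrap up for you. Everything here is a stub for illustration.

TOOLS = {
    # A real GitHub/GitLab toolkit registers many tools like this one.
    "list_open_issues": lambda repo: [f"{repo}#1: fix login bug"],
}

def fake_model(prompt):
    # Stand-in for an LLM deciding which tool to call and with what arguments.
    return {"tool": "list_open_issues", "args": {"repo": "acme/app"}}

def run_agent(prompt):
    decision = fake_model(prompt)
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

print(run_agent("What issues are open in acme/app?"))
```

The toolkit's job is to supply dozens of those tool entries pre-built and schema-described, so the model can choose between them.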
 
Just watched a Diary of a CEO interview with Nobel Prize-winning AI researcher Geoffrey Hinton. Very interesting.

He is extremely pessimistic about the future of AI, particularly job displacement, but also the existential threat to our whole species.

Maybe it's my inferior cognitive ability, but I just can't imagine under what mechanism an AI could have the autonomy to operate independently of humans. It's the Terminator storyline, basically. But in the real world we'll always just be able to turn it off or change the implementation.

What am I missing here? Maybe I'll ask ChatGPT.
 
Just watched a Diary of a CEO interview with Nobel Prize-winning AI researcher Geoffrey Hinton. Very interesting.

He is extremely pessimistic about the future of AI, particularly job displacement, but also the existential threat to our whole species.

Maybe it's my inferior cognitive ability, but I just can't imagine under what mechanism an AI could have the autonomy to operate independently of humans. It's the Terminator storyline, basically. But in the real world we'll always just be able to turn it off or change the implementation.

What am I missing here? Maybe I'll ask ChatGPT.
I literally posted the video about this here!
 
Just watched a video about AI agents.

Again, I'm not really getting it.

The video was a demonstration of how one works by building one on a tool designed for the purpose. He built an agent which can recommend a hiking route based on time available, weather, air quality in his local area. And then it emails you the result.

OK, the method of interaction is pretty cool, as it's like interacting with a human. But it's still only doing what you programmed it to do. It only has the API access that you set it up to have. Yes, it can adapt within that set of tools, but it can't just adapt to something different on the fly.

I don't see how it's that useful or will replace a person's own knowledge and decision making. If I wanted a hiking route, I'd either know them already or I'd research them on any specific day.

I have tried to think of daily tasks that I would need any support with in an advisory sense and I can't think of anything.
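The hiking-agent demo described above amounts to something like this sketch, which also illustrates the point being made: the agent can adapt within its toolset, but can't reach outside it. All tool names, data and logic here are invented stubs - there's no real weather API or email:

```python
# Sketch of the hiking-route agent idea: fixed tools wired in by the builder,
# with the "agent" only combining what it was given.

def get_weather():
    # Stub for a weather API call.
    return {"raining": False}

def get_routes(hours_available):
    # Stub for a route database lookup.
    return ["ridge loop"] if hours_available >= 3 else ["short trail"]

def send_email(body):
    # Stub for an email-sending tool.
    return f"emailed: {body}"

def recommend(hours_available):
    weather = get_weather()
    routes = get_routes(hours_available)
    # The agent adapts within its toolset, but can't acquire new tools itself.
    pick = routes[0] if not weather["raining"] else "stay home"
    return send_email(f"Suggested hike: {pick}")

print(recommend(4))
```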
 
Just watched a video about AI agents.

Again, I'm not really getting it.

The video was a demonstration of how one works by building one on a tool designed for the purpose. He built an agent which can recommend a hiking route based on time available, weather, air quality in his local area. And then it emails you the result.

OK, the method of interaction is pretty cool, as it's like interacting with a human. But it's still only doing what you programmed it to do. It only has the API access that you set it up to have. Yes, it can adapt within that set of tools, but it can't just adapt to something different on the fly.

I don't see how it's that useful or will replace a person's own knowledge and decision making. If I wanted a hiking route, I'd either know them already or I'd research them on any specific day.

I have tried to think of daily tasks that I would need any support with in an advisory sense and I can't think of anything.
It's simply a demonstration of an agent performing a simple task. Unless your job is recommending hikes to people, of course it's not going to put you out of a job.

The impact of AI agents isn't 'providing support in an advisory sense'... it's outright replacing humans in a variety of workloads. Any job that is done sat in front of a computer can be automated with AI agents. You don't even need proper APIs for them to talk to; they can just work through a browser, using image recognition to figure out how to navigate applications.
 
Just watched a Diary of a CEO interview with Nobel Prize-winning AI researcher Geoffrey Hinton. Very interesting.

He is extremely pessimistic about the future of AI, particularly job displacement, but also the existential threat to our whole species.

Maybe it's my inferior cognitive ability, but I just can't imagine under what mechanism an AI could have the autonomy to operate independently of humans. It's the Terminator storyline, basically. But in the real world we'll always just be able to turn it off or change the implementation.

What am I missing here? Maybe I'll ask ChatGPT.
My opinion about it all is this:

Do I think a super AI is going to kill me? Low chance. Do I believe enough jobs could get displaced, leading to mass civil unrest and our current system crumbling under the weight? Absolutely. That could in effect trigger a large societal collapse.

The rich psychopaths have shown time and time again they won't take action on something until it's too late, and that's the problem.
 
It's simply a demonstration of an agent performing a simple task. Unless your job is recommending hikes to people, of course it's not going to put you out of a job.

The impact of AI agents isn't 'providing support in an advisory sense'... it's outright replacing humans in a variety of workloads. Any job that is done sat in front of a computer can be automated with AI agents. You don't even need proper APIs for them to talk to; they can just work through a browser, using image recognition to figure out how to navigate applications.
Completely understand it was a simple demonstration.

I could see an AI agent being able to respond to a customer complaint in a call centre for example. It could receive the complaint (assuming it was already digital - it can't on its own receive a written complaint), it could understand the complaint and the company's business, and it could facilitate a response. That could certainly put people out of a job or at least increase the throughput of complaints handling ten-fold or more.

But that AI agent has to be designed and built for that purpose. It has to be granted the interfaces it needs to the company's systems. It has to have the ability to respond using the company's email address. It can't 'just do that' on its own. If I'm faced with a fresh problem at work, I have to think about how I'm going to handle it, go out and arrange the authorisations and accesses for myself, then complete the task. The AI could do the evaluation, intelligence and knowledge part, but it couldn't arrange the means to actually solve the issue by itself, because it has to be granted the access it needs through a development/design process. So it isn't autonomous.

So I completely get that AI agents could displace jobs. But becoming autonomous? I can't see how that is going to happen. It has no physical presence, only digital, so anything that requires a physical real world action has to be done by a human.

If I ask GPT to log into my bank account, it won't do it. It doesn't have the autonomy and tools it needs to perform that task. An AI agent only has these if they're granted to it, and what is granted can be taken away again. I'm failing to see where the autonomy comes from for these tools to be an existential threat.
 
We'll get flying cars like in The Jetsons, or unregulated self-driving cars worldwide, before we have AI take everyone's jobs.

Not in our life time, if ever.
 
Completely understand it was a simple demonstration.

I could see an AI agent being able to respond to a customer complaint in a call centre for example. It could receive the complaint (assuming it was already digital - it can't on its own receive a written complaint), it could understand the complaint and the company's business, and it could facilitate a response. That could certainly put people out of a job or at least increase the throughput of complaints handling ten-fold or more.
This transition to AI agents running call centres is already well underway; it's not a hypothetical future state. You think when you 'chat' to support on a website you're talking to a human?
But that AI agent has to be designed and built for that purpose. It has to be granted the interfaces it needs to the company's systems. It has to have the ability to respond using the company's email address. It can't 'just do that' on its own. If I'm faced with a fresh problem at work, I have to think about how I'm going to handle it, go out and arrange the authorisations and accesses for myself, then complete the task. The AI could do the evaluation, intelligence and knowledge part, but it couldn't arrange the means to actually solve the issue by itself, because it has to be granted the access it needs through a development/design process. So it isn't autonomous.
No, we don't have fully autonomous artificial general intelligence yet. What we do have is LLMs that are plenty good enough to substitute for humans in many tasks.

The ROI is what matters, and the payback is huge. Once you've invested the development time into creating an agent that can perform a task, given it access to the required tools and so on, it's done. Then it can do that task 24 hours a day, seven days a week, never complain, never take lunch breaks, never go off sick, and it doesn't need a pension... It can be endlessly scaled up and down as required to handle different workloads, and simply turned off when no longer needed. Market forces will mean that anyone not implementing AI agents is at a massive competitive disadvantage.
So I completely get that AI agents could displace jobs. But becoming autonomous? I can't see how that is going to happen. It has no physical presence, only digital, so anything that requires a physical real world action has to be done by a human.
AI doesn't need to be fully autonomous AGI in order to perform tasks and put people out of work.

Robotics is advancing at a similarly rapid pace at the moment, so physical jobs are not out of reach of automation.
If I ask GPT to log into my bank account, it won't do it. It doesn't have the autonomy and tools it needs to perform that task. An AI agent only has these if they're granted to it, and what is granted can be taken away again. I'm failing to see where the autonomy comes from for these tools to be an existential threat.
This isn't a thread about AI agents being an existential threat to humanity; it's about them being an existential threat to people's jobs.
 
This isn't a thread about AI agents being an existential threat to humanity; it's about them being an existential threat to people's jobs.
I agree it's a threat to jobs which are procedural and can be 'mapped' as an end-to-end process. Plenty of jobs aren't procedural, though; they don't just require knowledge of stepping through a process, they require understanding of the underlying goings-on to be able to adapt to a new situation or a new variant of the problem. AI can't do that (as far as I can see) because it can't self-adapt.

The chap I referenced above, Geoffrey Hinton, was saying that AI would be able to modify its own code and adapt itself to give itself new capabilities. I can't see how that is possible; it can't just decide to do something different outside of its own programming constraints. It may sound like a human, but it's not - it's still following a programmed sequence.
 
I'm also not sure why the argument keeps harking back to it "taking everyone's job" anytime soon. This isn't the only threat to your job, because if displacement is significant enough, your field could become much more competitive and/or oversaturated if it falls outside the scope of current or near future AI capabilities. While it's true that those well established in a highly skilled career are less likely to be affected by AI in the next few years, that doesn't mean it isn't going to impact you in some way.

It all depends on how quickly things progress, and on how restricted its usage is going to be by law and regulation, hopefully ensuring that mass displacement isn't a thing. However, I don't see how they're going to be able to limit its usage to certain sectors and allow those sectors to benefit disproportionately.
 
Completely understand it was a simple demonstration.

I could see an AI agent being able to respond to a customer complaint in a call centre for example. It could receive the complaint (assuming it was already digital - it can't on its own receive a written complaint), it could understand the complaint and the company's business, and it could facilitate a response. That could certainly put people out of a job or at least increase the throughput of complaints handling ten-fold or more.

But that AI agent has to be designed and built for that purpose. It has to be granted the interfaces it needs to the company's systems. It has to have the ability to respond using the company's email address. It can't 'just do that' on its own. If I'm faced with a fresh problem at work, I have to think about how I'm going to handle it, go out and arrange the authorisations and accesses for myself, then complete the task. The AI could do the evaluation, intelligence and knowledge part, but it couldn't arrange the means to actually solve the issue by itself, because it has to be granted the access it needs through a development/design process. So it isn't autonomous.

So I completely get that AI agents could displace jobs. But becoming autonomous? I can't see how that is going to happen. It has no physical presence, only digital, so anything that requires a physical real world action has to be done by a human.

If I ask GPT to log into my bank account, it won't do it. It doesn't have the autonomy and tools it needs to perform that task. An AI agent only has these if they're granted to it, and what is granted can be taken away again. I'm failing to see where the autonomy comes from for these tools to be an existential threat.


The goal of AI agents isn't to become autonomous, so I'm not sure what you're not understanding here.
 
I'm also not sure why the argument keeps harking back to it "taking everyone's job" anytime soon. This isn't the only threat to your job, because if displacement is significant enough, your field could become much more competitive and/or oversaturated if it falls outside the scope of current or near future AI capabilities. While it's true that those well established in a highly skilled career are less likely to be affected by AI in the next few years, that doesn't mean it isn't going to impact you in some way.

It all depends on how quickly things progress, and on how restricted its usage is going to be by law and regulation, hopefully ensuring that mass displacement isn't a thing. However, I don't see how they're going to be able to limit its usage to certain sectors and allow those sectors to benefit disproportionately.


There's a far higher risk of your job simply being outsourced to a cheaper country.


I haven't heard from any trustworthy source evidence that AI will take jobs in engineering, for example. The smart ones realise the benefit is in increasing productivity and value.
 