AI Coding Tips Thread

There are a few AI threads knocking about now, but I thought one focused just on coding tools would be good.

The state of the art is moving so rapidly at the moment, and new tools and tech are appearing so fast, that it's hard to stay on top of it all. Share your best tips here!

Today I've been configuring a couple of Sub Agents in Claude. One is a miserable old-school C++ programmer like me who insists on clean, well-engineered code with minimal dependencies. The other is a fintech-specific reviewer that ensures certain domain-specific rules are adhered to.

I've added a pre-commit hook to Claude to make sure any changes are run past both of these sub agents. Works really well; it now addresses all the stuff that I was having to nag Claude about before.
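For anyone wanting to try this: in Claude Code, sub agents are defined as markdown files with a YAML header, kept under `.claude/agents/` in your project. A minimal sketch of the grumpy C++ reviewer - the name and wording here are purely illustrative, and check the current docs for the exact supported fields:

```markdown
---
name: cpp-reviewer
description: Old-school C++ code reviewer. Use after any C++ change.
---
You are a veteran C++ programmer. Insist on clean, well-engineered code
with minimal dependencies. Flag unnecessary abstractions, hidden
allocations, and any new third-party dependency that isn't justified.
```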

Second thing today, I've added the Playwright MCP server, which is a web automation system (that we also use for E2E testing). What this means is that Claude can inspect the debug browser window (web app I'm developing), and instead of guessing it's got things right, it can now capture screenshots and inspect the images, click through the app, and make sure it actually works as expected. Pretty cool stuff.
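If anyone wants to replicate the Playwright setup, the MCP server can be registered from the command line. Treat this as a sketch - the exact flags may differ by Claude Code version, so check the current MCP docs:

```shell
# Register the Playwright MCP server with Claude Code (flags may vary by version)
claude mcp add playwright -- npx @playwright/mcp@latest
```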
 
I haven't really been making full use of AI when coding. I have GitHub Copilot, which I use in the JetBrains IDEs, Visual Studio Code and Xcode. I basically just have the chat screen open and ask it questions about my code. For more detailed answers I have Google Gemini Pro through my Google Workspace account, which is very good for detailed replies.

I should look into optimising my usage though or look at all the available alternatives. Any hints and tips gratefully received :D.
I don't type code anymore, full stop.

My workflow revolves entirely around directing Claude to do the typing for me, it's much faster than I could hope to be. Lots of tips and tricks involved to get things to work efficiently though.

My first pro tip, though, is to use Claude rather than Copilot.

Anthropic are the leaders in practical implementations of LLM tools; they are leading the way with things like MCP, and I think their Sub Agent architecture is so much simpler and easier to work with than LangGraph.
 
Been refining my workflow a bit yesterday and this morning. Getting to the point now with this Playwright MCP, sub agent tester and reviewer, that I can pretty reliably just give Claude a ticket ID, and then set it off working for a while, and come back to review the change after it's been properly E2E tested for me.
 
I've been evaluating Codex versus Claude Code for a bit now, have max subs for both. I'm doing full stack work at the moment so have several VS code instances going with different agents working on different parts of the system simultaneously.

Codex with GPT 5.1 I quite like in terms of the quality of its code and analysis.....but it's glacially slow. It's ok if I kick it off with something and then go and do something else while it works....needs workflows building around that.

Claude Code is still my preference, although its quality varies as they upgrade and tinker with it. Still the best developer experience for me, and much, much faster than Codex. Maybe it's just familiarity, but I know how to talk to Claude, when to plan more in depth, when it can ad-hoc things. If I was buying my own sub I'd go for Claude, no questions.

I just prefer the CLI tools like Claude Code and Codex, over the integrated IDEs. I just open a terminal window inside Visual Studio 2026 and launch Claude in there for my game projects.

Being able to SSH into AWS services and launch Claude/codex over the terminal is v helpful at times too.
 
I'm finding that Claude is getting a bit stale in terms of up-to-date knowledge, so hopefully v5 is on the horizon.

Anthropic are definitely in the lead in terms of agent quality and tooling but others are improving. Sub-agents are decent, have added some extra config for organisation-specific knowledge to stop it from doing weird things. Adds a lot of value but I still review every little bit of work it does - unless you have a really simple project, it still needs a lot of hand-holding.
Oh for sure, they aren't a replacement for experienced programmers, but they're amazing productivity tools at the moment.

We did evaluate a tool from a startup under the same VC as us. It was pretty impressive, basically it was a number of agents orchestrated adversarially, and deliberately made use of various different models, Anthropic, OpenAI, Mistral etc.....it just batted the work back and forth between the agents repeatedly reviewing each other's work. My boss spent a long time hammering out a spec document for a new system....fed it to this system and left it running. Did a pretty good job.

But....it does need to run for like 8+ hours and it absolutely chews through credits. The developers had it rewrite Redis in Rust as a demo....used a 5 figure GBP sum of credits apparently :)
 
Oh yeah I barely write any code directly now, only if it's a one liner or something.

Using Claude/Codex/etc as an interface into your code is the way forward, the LLMs are way, way faster at navigating and reading and updating files than a human ever could be.

Just don't have it making the architectural decisions.....tell it explicitly what you want it to do, get it done. Also just put it in plan mode and tell Claude to ultrathink gnarly bugs and stuff, pretty good at investigating and coming up with ideas, but you need to know what it's coming up with because some of it will be nonsense. But often it will spot the right issue and you can design a fix from that.
 
Been playing around with this and I'm shocked at how good it is. My only real concern is that it would be easy to introduce security issues when large amounts of code are generated at once. In terms of models I'll try GPT 5.1 Codex next, I think, even though it is in preview.
Answer: have agents that are specifically prompted to search for security issues, and that operate independently of the code generation.

LLMs aren't great at considering lots of things at once....but they are very good at delivering within fairly narrow guardrails. If you tell an LLM to create a system and make sure it's secure......it's not going to be very reliable. Instead you have one agent that is tasked with creating the system. Its work is then passed to the security agent, which reviews it for security issues and then passes it back to the coding agent to implement the fixes, and repeat until they're both happy.

It kinda is that simple, but it's also a lot more complex in practice....but multiple agents orchestrated to do discrete tasks is the way to deal with complex problems. Same as any software engineering problem.
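The back-and-forth described above boils down to a simple loop. This is a hypothetical sketch, not any vendor's API - `call_coding_agent` and `call_security_agent` are stubs standing in for real model calls, so only the control flow of the adversarial review cycle is shown:

```python
# Sketch of the generate -> review -> fix loop described above.
# Both agent functions are stubs; the stub reviewer stops raising
# issues after two rounds of fixes, so the loop can run end to end.

def call_coding_agent(task, feedback=None):
    # A real version would prompt the coding model with the task and
    # any outstanding review feedback.
    revision = 0 if feedback is None else feedback["revision"] + 1
    return {"task": task, "revision": revision}

def call_security_agent(work):
    # A real version would prompt an independently-instructed reviewer
    # whose only job is to find security issues.
    issues = [] if work["revision"] >= 2 else ["placeholder issue"]
    return {"revision": work["revision"], "issues": issues}

def generate_with_review(task, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        work = call_coding_agent(task, feedback)
        review = call_security_agent(work)
        if not review["issues"]:      # both agents are happy
            return work
        feedback = review             # bat it back for fixes
    raise RuntimeError("review did not converge within budget")
```

The structural point is that the security agent never generates code and the coding agent never judges its own security - each has one narrow job.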
 
Well, I gave Mistral Vibe a decent go, but it just can't compete with Claude. Probably fine if you're just tinkering with a few scripts or fairly basic stuff, but falls down a bit with more complex stuff.

Claude Opus 4.6, which came out the other day, continues to really impress in Claude Code. It's really getting better at sticking to plans and conventions, and the flow of discussing a plan with it and having CC deploy subagents intelligently to get the job done is really moving things forward...so much so I've bitten the bullet and bought my own £90 a month Max sub in addition to the Max sub I use through work.

Particularly impressed at what Opus can do when it's connected to the Unreal editor Python API via my own MCP server. Really unlocks a lot more potential.

I really wanted Mistral to work....in fact I do like Le Chat, the main chat model, and Mistral 3 Large is doing a good job on some workflow tasks I'm using it for with a client....but Anthropic are really leading the way with the coding side.
 
Opus is the best coding model by a country mile, but it comes into its own when used with Claude Code.

I’ve tried a lot of the agentic IDEs, they’re all just VS Code in a frock. I tend to just use a plain terminal in VSCode now…for play programming like Python, JS etc. For real programming (C++) a plain terminal alongside Visual Studio 2026 works well.

I built out my MCP server for Unreal a bit more. It’s pretty useful, will tidy it up and make the repo public in the week.
 
Got a 365 Copilot licence at work last week - I haven't coded anything in my life, never touched Python / C etc. - and in 3 days of messing about I've built a little Windows app in Python, with fancy graphs, that monitors the system's vital statistics. Added all the little tweaks I wanted other apps to have, for myself.

The AI suggested the structure, and each component is modular - I can tweak the disk stats panel without breaking other things, for example.

I'm honestly astonished at how easy it was, and how good the end result was. The only difficulty now is coming up with useful ideas.
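That modular-panels structure can be sketched with just the standard library. This is illustrative, not the poster's actual app - `disk_panel` uses `shutil.disk_usage`, and each panel is an independent function, so one can change without breaking the others:

```python
# Illustrative sketch of a modular stats app: one small function per
# panel, plus a renderer that knows nothing about individual panels.
import shutil

def disk_panel(path="/"):
    # shutil.disk_usage returns total/used/free in bytes for the
    # filesystem containing `path`.
    usage = shutil.disk_usage(path)
    return {
        "total_gb": round(usage.total / 1e9, 1),
        "used_pct": round(usage.used / usage.total * 100, 1),
    }

def render(panels):
    # Each panel contributes one line; tweaking or removing one panel
    # can't break the rest.
    return "\n".join(f"{name}: {fn()}" for name, fn in panels.items())

print(render({"disk": disk_panel}))
```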

One thing I would really really like is eliminating the copy and paste back and forth with vscode. It gets tedious, especially when the indenting is wrong on paste.

Nate
You really shouldn't be copy-pasting. Use a coding agent: Codex, Claude Code, Copilot….
 
He's using VSCode - he just needs the extension.
Yes, but there's a fundamental difference that I think a lot of people don't grasp fully.

At the most basic level, you have ChatGPT, Claude, Mistral, whatever.....it's a very basic chat completion interface. You type some text, it returns text. You can ask it to make code and then copy paste it into your code files.....but that's a *really* crappy way to work and incredibly slow for programming work.

Then you have coding agents. Claude Code is the best (still). This is MORE than a chat interface. It is a chat interface with a suite of tools it can use, to read your filesystem, search for files, read files, edit files, run commands etc. The ChatGPT version is called Codex. These agents not only do work, but they can spawn subagents to delegate tasks to.

You can use coding agents like Claude Code and Codex in two ways:

1) Just through a terminal, literally just open a command line, type 'claude', and off you go.

2) Through an IDE plugin, Visual Studio Code and Jetbrains have the best plugins. This basically gives you the same experience as the command line, with a prettier interface, and a bit more context for the agent (like what file you have open).

Chat interfaces and coding agents are very different things.

Don't use chat interfaces (Claude, ChatGPT, Copilot) to write code. Use coding agents (Claude Code, Codex, GitHub Copilot). They are different products for different tasks.

(I'm not trying to be patronising it's just a misunderstanding that a lot of people have, not helped by the new tools coming out every few months :P)
 
I think you're missing the fact that the Copilot extension in VSCode has an agent mode.

He has a Copilot license (I assume that's what the 365 offer is), so his options are the Github CLI agent (I don't know if this is out of preview yet), or the Copilot extension in VSCode, which has an agent mode, not just a chat mode, so he can use his license to log in to the extension, put it in agent mode and toggle between whatever models he has access to.
See this is demonstrating exactly what the problem is :)

There is not a Copilot extension in VS Code. There is a GitHub Copilot extension.

Copilot = Chat Interface
GitHub Copilot = Coding Agent

ChatGPT = Chat Interface
Codex = Coding Agent

Claude = Chat Interface
Claude Code = Coding Agent

They've all made an absolute hash of the naming and communication of these products.
 
That's wrong because it's an all-in-one. The extension is called "GitHub Copilot Chat" and it's the only official thing that resembles Copilot when you search for the word "copilot" in the extensions.

I'm not arguing about the distinction between an agent and a chat interface; I'm very familiar with the landscape, but the extension I've told him to download is the right one. Hell, even the chat function is fine because it'll switch to agent mode if needed.
I'm not really talking to you with this btw, more a general FYI for people that are dipping their toes into AI assisted coding, and probably aren't up to speed with the differences in the products!

I think anyone that has been impressed with what they can do when they're copy and pasting stuff from chatGPT, is going to have their mind blown when they get set loose on Claude Code...
 
Remembered I get some passes for a free week's Claude Pro. Don't want to put the link here as it'll just get scraped, but if anyone wants one, drop me a PM. It does need you to put payment details in, I believe, so it'll be up to you to cancel.

No kickbacks for me with this btw...
 
The little dev house I work at has been tinkering with AI for a while now but yes, we're all copy-pasting back & forth using chat agents - mostly Claude but we'll skip around when we run out of free credits :cry:
We're old boys so it's taken a minute for us to cotton on to what coding with AI really means. Boss man is now looking at organising a proper licence for us - likely Claude Code.

Can Claude Code be set up to "see" only the solution you're currently working on? ie could we have a distinct agent PER solution?
We specifically don't want to give it access to our devops repo and don't want it pulling from customer A's code to solve a problem for customer B.
Yes - you open your terminal, navigate to your repository, and run 'claude'; it then starts up with that directory as its working directory. You *can* tell it to access stuff outside that working directory, but it'll need explicit permission.

Tis very useful for full stack applications, as it can explore all the different services at the same time to investigate and fix stuff.

I typically have 4 Visual Studio Code instances running, each with a separate Claude Code window, all on different repos. I also have a couple of terminals open, also running Claude, for general stuff.
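To make the per-solution scoping concrete (paths here are illustrative):

```shell
# One agent per solution: launch claude from inside that repo only.
cd ~/src/customer-a      # repo for customer A
claude                   # this directory becomes the working directory
# It won't touch ~/src/customer-b or the devops repo without
# explicit permission.
```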
 
I've been knocking stuff up with Claude in my browser using Opus 4.5/4.6.

Primarily things that just have one python script or html file that I can copy into the server rather than give it any form of direct access / git.

Am I really gimping the performance/efficacy of the code it generates by doing it in the browser as opposed to some terminal/plugin? I'm seeing mixed opinions online.
The actual quality of the code it generates won’t be any different.

The benefits are in productivity. The coding agents can do stuff like search files for you, edit any file, make new files, create test frameworks, execute tests, run builds, read the build output, and automatically fix any errors. They can access APIs directly through cURL and test interfaces.

Basically it does all the driving for you. Let it take care of all the donkey work.
 
The fun is solving a real world problem.

I'd configure a group of agents as a team that all peer-review each other, focused on the problem domain and on the code actually solving the problem:
1 domain expert analyst - answers domain questions and confirms domain solutions are at least viable
1 security expert analyst - ensures that the problem and the proposed solutions are secure (not just the code)
1 operational expert analyst - ensures that the final solution is operationally viable - that includes business continuity, target operating model, operational cost and integration
5 coding sub agents
1 coding standards agent

1 product leader - reviews and approves the suggestions and sequence/timeline to maximise return and ensure both build and operation meet both financial and risk targets.

The pipeline would then have a separate set of analysts that are reviewing the solution and code. It would also have security scanning.

I would also have a cycle of brainstorming - new ideas and functionality, plus changes to improve the solution (rather than the code - you're prompt-to-opcode at this point anyway).
You can do this pretty easily in Claude Code. I just have one custom agent for my Unreal stuff right now (which is prompted to be an Unreal expert and enforce good Unreal patterns, look for performance code smells etc).

Just make the custom agent in Claude, save it down to your project config....then add it into your CLAUDE.md and the orchestrator will take care of it.

You can go much deeper than that obviously, but CC makes these things pretty trivial
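As a concrete (hypothetical) example of that wiring, the CLAUDE.md entry can be as simple as a standing instruction that points at the agent file - the names here are illustrative:

```markdown
## Reviews
After any change to Unreal C++ code, run the `unreal-reviewer` sub agent
(defined in .claude/agents/unreal-reviewer.md) and apply its feedback
before finishing the task.
```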
 
Claude Code really is something else. I'd tried other AI coding things before and found them to be somewhere between useless and actively harmful, but this? I gave it a couple of tests this morning, the first doing a refactor on a PHP/JS hobby project, and the second against the million line C++ codebase that I spend most of my work time working on, giving it a ticket I had from last week to solve. In both cases, it needed to be given further instruction to get it right but in both cases it did it substantially faster than I could.

I think this stuff is going to be mandatory for professional programmers from here on.
No way I could go back to not using coding agents and actually typing every character of code in like a caveman. It'd be like going back to using punch cards.

Anyone not learning how to use these tools is going to get left behind...well, at some point you'll have to come round.

Absolutely no way I could do what I'm doing at the moment building my own game studio while working full time on client work without Claude Code. Not just making the actual coding of the games quicker (because I've done that for a very long time professionally).....but just filling the gaps in areas outside my expertise. Building CI workflows, writing shaders, building website stuff, various automations and plugins. Completely transformational, especially for small nimble businesses that can suddenly get so much more done so quickly.
 
I spent a few hours learning how to use CC properly the other day, going through setting up the MD files, setting it up with 'skills', a screenshot loop (was using it for frontend) etc. There was way more to it than I initially thought.
If you're doing web work, it's worth installing the Chrome extension. It lets CC connect straight to the browser; it can test-drive stuff itself and access all the dev tools. Playwright MCP (can be installed from the plugin marketplace with /plugin) is also very powerful for that kind of thing.
 