Moving into IT/Programming

Well I do; I only recruit people who know how a computer works, how an operating system might or might not work, and other various 'boring system' things that are "totally irrelevant" for a programming role...

Otherwise, you can end up with capable programmers who, for example, think walking a 2D table by columns is the same as walking it by rows.
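If you're wondering why that matters, here's a minimal sketch (the sizes and names are purely illustrative): the same 2D table summed by rows touches memory sequentially, while summing it by columns strides across it and is usually several times slower on a typical machine.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Illustrative only: a 5000 x 5000 table stored row-major in one block.
    const std::size_t N = 5000;
    std::vector<int> table(N * N, 1);

    // Sum the table in the given order and report how long it took.
    auto time_sum = [&](bool by_rows) {
        const auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                sum += by_rows ? table[i * N + j]   // contiguous, cache friendly
                               : table[j * N + i];  // strided, cache hostile
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s: sum=%lld, %lld ms\n",
                    by_rows ? "by rows" : "by cols", sum,
                    static_cast<long long>(ms));
    };

    time_sum(true);   // walk by rows
    time_sum(false);  // walk by columns
}
```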

Okay, not exactly seeing how this is counter to the snobbishness.

There is far more to development than this guff ^

I mean do people really care about this any more? The majority of devs in my area have seen the light and moved into enterprise & cloud

All that low-level OS stuff doesn't matter one bit when you're working with autoscaling, disposable VM nodes

No, because obviously people need to outgeek each other by insisting everything is written in assembler.


...

Seriously, given that stuff like JavaScript is one of the most popular programming languages, insisting that OS scheduling is critical knowledge to be a "capable" programmer is ludicrous. A capable programmer is anyone who understands the problem and codes up a solution that isn't too difficult to maintain, not someone who insists on writing everything in assembler because your end-of-day report now runs in 2 seconds rather than 10.
 
In fairness, modern JavaScript environments tend to be at least not appalling performance-wise. That said, if you're that blasé about optimisation on a large-scale system, isn't it basically the case that your organisation has to pay for more server resources whenever your application consumes more than it needs to meet a given demand? Assuming I'm barking up the right tree, surely if management realised the implications of that they'd **** themselves.
 
Well, so here's my point.

I doubt most programming jobs are performance critical, and suppose you had a programmer whose code used twice as much RAM as needed. 8GB of RAM is 50 quid and the developer costs 600 a day, so how much time do you want them to spend optimising the code?
 
The counter argument is scale: say you are always going to charge some fixed amount, because that's the market value of the system; charge any more and you'll lose out to competitors. Then you have to build the system yourself. You think you'll sell about 100,000 units a year because you're a medium to large sized business. Your developer costs £600 a day and works ~250 days a year, so he costs £150,000 a year to keep around (ps: trust for CV ;) ). You've just spent another £25 per unit because your developer didn't realise he could save some memory with design pattern x. You've potentially just shaved 100,000 × £25 = £2,500,000 off your profit.
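Just to make that arithmetic explicit, a quick back-of-envelope sketch; every figure is the illustrative number from the posts above, nothing more:

```cpp
#include <cstdio>

// Back-of-envelope only: day rate, working days, unit volume and
// per-unit waste are the illustrative figures from the thread.
int main() {
    const double dev_day_rate   = 600.0;     // GBP per day
    const double dev_days_year  = 250.0;     // working days per year
    const double dev_cost_year  = dev_day_rate * dev_days_year;     // 150,000

    const double units_per_year = 100000.0;
    const double extra_per_unit = 25.0;      // GBP of avoidable cost per unit
    const double waste_per_year = units_per_year * extra_per_unit;  // 2,500,000

    std::printf("Developer:      GBP %.0f per year\n", dev_cost_year);
    std::printf("Avoidable cost: GBP %.0f per year\n", waste_per_year);
    std::printf("That waste would fund ~%.1f developer-years of optimisation\n",
                waste_per_year / dev_cost_year);
}
```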
 
Yes, sure, and if you're planning on building Google Maps, by all means, look for scale. But the majority of people aren't building Google Maps.

As a side note, £600 a day isn't as much as you think it is once you account for the workspace, NI, various insurances, pension, holiday, training, travel budgets, etc.

Edit: I'm not saying that it's bad to have technical experts, but I'm not the one saying that technical experts are the only capable programmers.
 
Sometimes the right IT person isn't the most technical one. It may be the person with a wider skill set. There's a wide range of skillsets required in IT, not all of them technical.
 
Well, so here's my point.

I doubt most programming jobs are performance critical, and suppose you had a programmer whose code used twice as much RAM as needed. 8GB of RAM is 50 quid and the developer costs 600 a day, so how much time do you want them to spend optimising the code?

That completely misses the point.

What if you could reduce the computational load by 20% by using an improved algorithm? If you are spending $10 million a year in CPU costs on AWS, having an engineer at $100k a year make measurable improvements is just basic economics. The same can be applied to bandwidth, storage, or memory. It all costs serious money when you scale up.

Sometimes it isn't even raw CPU cost that matters but latency. We have a service that is very time dependent: a real-time forecast of events. Ideally we would have circa 1 second of latency, but you have to subtract several layers of networking, so you get a few hundred ms at most to work with. We get a raw data feed and must turn out predictions using state-of-the-art machine learning, millions of times a second. Every CPU cycle starts to count and hits the bottom line. If we take an extra second to produce each prediction, the value to customers is greatly reduced, and so is our profitability.


It's also very easy to hit IO limits, so bleeding-edge compression and taking care of the bytes (and bits, actually) add up to real savings that far outweigh the developer cost.


Something that uses 8GB of memory is very 1990s!
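To give a flavour of what "taking care of the bytes (and bits)" can mean, here's a toy sketch; the record layout and field widths are invented for illustration, not taken from any real system:

```cpp
#include <cstdint>
#include <cstdio>

// Toy illustration of byte/bit economy: a naive record vs a packed one.
struct Naive {             // usually 24 bytes once padding is included
    double   value;        // 8 bytes
    uint64_t timestamp_ns; // 8 bytes
    int      flags;        // 4 bytes + 4 bytes padding
};

struct Packed {            // 8 bytes total
    uint32_t value_ticks;  // value quantised to integer ticks
    uint32_t ts_and_flags; // 28-bit relative timestamp (ms) + 4 flag bits
};

// Squeeze a relative timestamp and four boolean flags into one 32-bit word.
inline uint32_t pack_ts_flags(uint32_t rel_ms, uint8_t flags) {
    return (rel_ms & 0x0FFFFFFFu) | (static_cast<uint32_t>(flags & 0x0Fu) << 28);
}

int main() {
    std::printf("naive record:  %zu bytes\n", sizeof(Naive));
    std::printf("packed record: %zu bytes\n", sizeof(Packed));
    // At a million records a second that is roughly 24 MB/s vs 8 MB/s of IO
    // before any compression is applied.
    std::printf("example word:  0x%08X\n",
                static_cast<unsigned>(pack_ts_flags(123456u, 0x0Au)));
}
```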
 
You've run into the same fallacy as BusError: using large corporates and highly scale- and latency-sensitive applications as your benchmark, as if those were the only programming jobs.

RedMonk is apparently reporting JavaScript as the most popular language this year, followed by Java and PHP. Do you imagine that most of those developers are doing the latency optimisation you're talking about?

Edit: From "The sorry state of server utilization and the impending post-hypervisor era":
A McKinsey study in 2008 pegged data-center utilization at roughly 6 percent.
A Gartner report from 2012 put the industry-wide utilization rate at 12 percent.
Resource limits are, on average, very far from being a problem.
 
That completely misses the point.

What if you could reduce the computational load by 20% by using an improved algorithm? If you are spending $10 million a year in CPU costs on AWS, having an engineer at $100k a year make measurable improvements is just basic economics. The same can be applied to bandwidth, storage, or memory. It all costs serious money when you scale up.

Sometimes it isn't even raw CPU cost that matters but latency. We have a service that is very time dependent: a real-time forecast of events. Ideally we would have circa 1 second of latency, but you have to subtract several layers of networking, so you get a few hundred ms at most to work with. We get a raw data feed and must turn out predictions using state-of-the-art machine learning, millions of times a second. Every CPU cycle starts to count and hits the bottom line. If we take an extra second to produce each prediction, the value to customers is greatly reduced, and so is our profitability.
Automated trading I'm guessing?
 
Gone more off topic than I intended.

OP: if you know some programming already and you are willing to learn, you'll probably do fine. If you're upfront about your skill level and they're willing to hire you then there'll probably be training to get you up to speed with whatever they need you to do.
 
I actually have done quite a bit of that; I've ploughed through huge code bases many times trying to work out what is going on: the Nasm source code, the Unreal source, the GCC source, many emulators, the Qemu source, tons of game engines, Linux drivers, etc. But as I said, it's hard to gauge the gap between me and a professional developer.

Maybe I should just go freelance or start my own company as most businesses would not have a clue if you wrote hacky code, as long as it works.

The other thing is I know there is a whole techie language as well.

I only meant the parser was tedious because it's just string manipulation, and one function is much the same as the next. There's also no need for C++ to write one; it really should be written in C. I did it to improve my C++ skills, but as I said, it's a task for C, so I wasn't learning anything new.
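To give a flavour of what I mean by "one function is much the same as the next", a hand-rolled parser is mostly small routines of this shape (a simplified sketch, not my actual code):

```cpp
#include <cctype>
#include <string>

// Simplified flavour of hand-rolled parsing: every routine has the same
// shape: peek at characters, consume them, build a small result.
struct Cursor { const char* p; };

static void skip_spaces(Cursor& c) {
    while (std::isspace(static_cast<unsigned char>(*c.p))) ++c.p;
}

static std::string read_identifier(Cursor& c) {
    skip_spaces(c);
    std::string out;
    while (std::isalnum(static_cast<unsigned char>(*c.p)) || *c.p == '_')
        out.push_back(*c.p++);
    return out;
}

static long read_number(Cursor& c) {
    skip_spaces(c);
    long value = 0;
    while (std::isdigit(static_cast<unsigned char>(*c.p)))
        value = value * 10 + (*c.p++ - '0');
    return value;
}

int main() {
    Cursor c{"  mov 42"};
    const std::string op = read_identifier(c); // "mov"
    const long arg = read_number(c);           // 42
    return (op == "mov" && arg == 42) ? 0 : 1;
}
```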

The problem you have is not the work, it's getting past the recruitment process, because often the initial stages are not run by people from IT. Once you get past that, you'd probably be fine with a face-to-face with technical people, or indeed a coding test or a review of your past code. But to get to them, the HR people might eliminate anyone without experience or without IT qualifications to a specific level.

I'm not a programmer and I've no IT qualifications, but I have worked in most areas of IT for many years, including many years in programming roles, both contracting and permanent. I get hired not because of having better technical skills (because obviously I don't, though I have passed code reviews and such when contracting) but because of the depth and breadth of my experience, and also because it encompasses non-IT areas, especially people skills. I don't think I've ever worked anywhere where computational load was a significant issue. It's mainly been about the speed of the front end, especially in recent years with the move to the web, but also optimizing the back-end database queries. Optimizing computational load was really something done to make the code concise or more maintainable, and there's also a level of personal/professional pride in writing the most elegant code you can.

I would suggest that if you don't like the mundane stuff like string manipulation, you steer away from regular programming jobs in big corporations and business in general, as there's a lot of that kind of work and you'd be very bored. You should probably steer towards more specialist areas, but that will require a portfolio of work they can review that demonstrates an aptitude for that kind of work. Having qualifications, though, is becoming more essential. It's something I'll probably have to give more attention to myself, though I like doing more than just coding.
 
You've run into the same fallacy as BusError: using large corporates and highly scale- and latency-sensitive applications as your benchmark, as if those were the only programming jobs.
....


Edit: From "The sorry state of server utilization and the impending post-hypervisor era"

Resource limits are, on average, very far from being a problem.

That link is very much the story of where I am at the moment. We run a huge number of batch jobs, mainly data processing at night. The computational load is relatively light.

You've just reminded me, I've a database job to kick off this weekend, I'll have to log on remotely and start it off.
 
What if you could reduce the computational load by 20% by using an improved algorithm? If you are spending $10 million a year in CPU costs on AWS, having an engineer at $100k a year make measurable improvements is just basic economics. The same can be applied to bandwidth, storage, or memory. It all costs serious money when you scale up.

Sometimes it isn't even raw CPU cost that matters but latency. We have a service that is very time dependent: a real-time forecast of events. Ideally we would have circa 1 second of latency, but you have to subtract several layers of networking, so you get a few hundred ms at most to work with. We get a raw data feed and must turn out predictions using state-of-the-art machine learning, millions of times a second. Every CPU cycle starts to count and hits the bottom line. If we take an extra second to produce each prediction, the value to customers is greatly reduced, and so is our profitability.

Automated trading I'm guessing?

Nah, with 1 second of latency and hosting on AWS it wouldn't seem likely.

Servers are generally co-located at exchanges for most automated trading, and latency needs to be much, much lower. Granted, not all systematic trading is high-frequency/low-latency stuff, but even then I'm not sure AWS is required, and you'd presumably still want your execution to be fairly efficient even if you're a big CTA/hedge fund implementing some trend-following system.
 
Googling automated trading and AWS yields enough results to suggest that it's not unheard of to use AWS to do automated trading.

I can't personally think of any other applications of 1 second predictions... Maybe a betting shop, but that's still automated trading really
 
Googling automated trading and AWS yields enough results to suggest that it's not unheard of to use AWS to do automated trading.

I can't personally think of any other applications of 1 second predictions... Maybe a betting shop, but that's still automated trading really

Machine learning is used in plenty of fields... how do companies predict the music you might want to listen to on your streaming service? Would the suggestions be any good if they didn't pop up right away? I'm sure there are various web-based services with lots of users that would want to keep latency relatively low... down to 1 second or so. "$10 million a year in CPU costs on AWS" sounds more like a tech firm dealing with lots of users.

The sort of people you're reading about using AWS for trading are more likely to be people with a few thousand in an account at Interactive Brokers which they play around with in their spare time.
 