Maths

Pretty sure I used a Stroud book at uni - Advanced Engineering Mathematics, IIRC - and I found it good. On the flip side, I didn't like Stewart's calculus book, and that was 'the' recommended one.
100% get them out of a library if you can - saves you some dosh, supports your library, and it's why they're there.
 
Thank you. Very useful.

I have this book for calculus:

 
Or be like me... Get indoctrinated into regexes for work and then become the person who can only think in regex and gets frustrated with others who can't do it properly.

I have entire days where I'm angry at some of my peers' bad regex skills. Grrrr
Yes, it's something I only need to dip into once in a while, and each time I have to get my head around it again.

Same with XSLT but, thankfully, that seems to have died a death.
 
I'm mainly wanting to learn for four reasons: electronics, game programming, quantum computing and machine learning. I know it'll be a long journey and I'm not expecting to be able to do it overnight, but I'm keen and have a lot of spare time.

I have A-level maths. As part of a 4-year software engineering degree specialising in parallel & distributed systems (supercomputing for science), I did formal methods (putting English into set theory and formally proving the maths). I've also professionally done jobs involving quantum and quantum cryptography.

The width and depth of maths you want to do is going to be a large undertaking. As others have suggested, you may want to set a goal and then work through examples.

Games maths is now heavily reliant on optimised libraries and frameworks.

I did an AI course from Oxford last year and, like quantum, you'll probably target A-level maths in calculus and probability and then progress from there.

For AI you will need statistics and calculus. Quantum is a different kettle of fish. I had access to 135 quantum PhD staff.
 

Stats should be the highest priority and the most important one to learn; it's normally used in preprocessing and aspects of feature engineering.

Calculus, including abstract calculus, is only used if you dig deeper, i.e. very deep into AI and machine learning.
 

True, but the calculus will help with the data science POV. Not everything is a neural network.
 
You don't really need calculus, as libraries like sklearn and TensorFlow will deal with that; you just need to know what the output means. You only use calculus to modify models, which is at a higher level and really not needed for most jobs. The best place to start any data science learning is stats: understand the output, especially the metrics, and know when to use the algos or hybrid algos.
As someone said, matrices are good to understand.

Cromulent, don't look at everything - it would only confuse you. Start off with basic stats; once you understand when to use them, then move onto model algos like logistic regression and SVMs combined with kernels (when the relationship is non-linear), and when to use them.

Then move onto regularisation for the algos. One important thing you really need to understand is the bias and variance trade-off. Then move onto cross-validation methods and how to use them, then look at the metrics like F1 score, accuracy, etc. - very important.
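
As a rough illustration of that last bit (a regularised model, cross-validation and the F1 metric), here's a minimal scikit-learn sketch - the toy dataset and the parameter values are placeholders of my own, not something specific from this thread:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up toy data: 500 samples, 10 features, binary target
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# L2 regularisation via C: smaller C = stronger regularisation (more bias, less variance)
model = LogisticRegression(C=0.1, penalty="l2", max_iter=1000)

# 5-fold cross-validation, scored with F1 rather than plain accuracy
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(scores.mean(), scores.std())
```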

Start with a simple pipeline, i.e. a sequence of methods used: 1) preprocessing, 2) feature engineering, 3) model training, 4) prediction.
I recommend you jump from 1 to 3 and forget about 2 at first; once you understand that, then go 1, 2, 3, 4.
Machine learning isn't that hard - it's learning the concepts.
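
And a minimal sketch of that 1 -> 3 -> 4 pipeline (skipping feature engineering to start with), again with scikit-learn and made-up data; the scaler and the kernel SVM are just illustrative choices, not the only way to do it:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data stands in for whatever dataset you're working on
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) preprocessing (scaling), 3) model training (SVM with an RBF kernel for
#    non-linear relationships), 4) prediction.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf")),
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))  # step 4: predict on held-out data and check accuracy
```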
 
No problem. I'd add that I think the Stroud book has calculus in it, but I'm not 100% sure and I can't check as I've lent it to someone and not had it back :(
 
Another +1 for regex here. For pattern matching its applicability is simply the best. I used to use it a lot back in my coding days.

If anyone is interested in learning, regex101.com is a good place to have a stab at it.
 
Import re, in Python.
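
For anyone who wants a concrete starting point in Python, a tiny sketch with the standard-library re module - the log line and the pattern are made up for illustration:

```python
import re

log_line = "2024-05-01 12:34:56 ERROR disk almost full (93%)"  # made-up example line

# Capture the timestamp, the level and the message in named groups
pattern = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>\w+) (?P<msg>.*)$")
match = pattern.match(log_line)
if match:
    print(match.group("level"), "->", match.group("msg"))  # ERROR -> disk almost full (93%)
```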
 

For quantum:
* statistics - specifically distributions and the use of biased distributions (for example in QFFT encoding/decoding) for encoding forms of data, as well as understanding how NOT to bias.
* complex number maths - you'll still be coding at a lower level than most computational maths, so understanding the basics of the Bloch sphere will help you understand the concepts (see the small sketch after this list).
* algebra and calculus - this is more important: understanding how to translate the problem into maths that then works on a quantum computer.
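
A small sketch of the complex-number point above (my own illustration, just numpy): a single-qubit state is a 2-vector of complex amplitudes, and the measurement probabilities are the squared magnitudes of those amplitudes.

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, here an equal superposition with a relative phase
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
state = np.array([alpha, beta])

probs = np.abs(state) ** 2   # probability of measuring 0 or 1
print(probs, probs.sum())    # [0.5 0.5] 1.0 (the amplitudes must normalise to 1)
```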

Quantum - the basic ideas are relatively simple; actually getting a mathematically sound output, with characterisations etc., is where the demons lie. Things like noise, confidence levels, how the different systems fail (because we're not at the standard of normal computer chips at the moment), error detection and correction, and how to break the algorithm down to gain the best speed vs the decoherence time of the qubits during the calculations.

Most quantum folks that are taken seriously have a physics degree with quantum mechanics (typically a PhD). That background gives the correct level of rigorous attention to the process and the results. Physics at that point heavily uses probability and statistics.

You can use things like TKet or vendor compilers, but in reality, over time, libraries for numerical computation will either be written to make use of quantum, or quantum inspiration will be used to improve their classical computational performance.

Bear in mind that the way quantum computers work is completely different to the common von Neumann machines that are used today.

A simple, high-level example of the difference with quantum:

Von Neumann CPUs - you program instructions to fetch, process and store data. Simple 'vector' operations were added where small matrices can have the same instruction performed on them, but they're quite restricted to the current instruction being executed. Think of it as instruction first.

GPUs - are von Neumann but operate massively in parallel. Imagine pixels on your monitor, each with a simple von Neumann CPU behind it. Each may gather (fetch) and scatter (store) data without being in step with the others (normally), as they work independently. The key here is recasting the maths of the algorithm into matrix operations that minimise the data dependencies, so each pixel can operate independently and not hold up the others or corrupt the data processing. This is what libraries like TensorFlow, and AI more generally, make use of to accelerate the simple maths used in neural networks. Think of it as data first...

Quantum - works completely differently. Different quantum tech operates differently, but this is a rough common approximation:
You build a 'circuit' that has input qubits (set to values) and output qubits (set to 'superposition'). You then run through the steps of the quantum 'program', where the quantum computer applies the different transforms you have programmed into the circuit. Then you collapse the circuit (collapse the waveform), which results in the output qubits, currently in superposition, being set. You then use something like QFFT to reassemble the qubits into a digital number for the output parameter.

The simple view is that each time you run the circuit you get a result (a massive performance increase over von Neumann, as you don't have to loop over every possible input value to search for the result), but you may need a couple of runs to confirm statistically, and you will need to alter the circuit to fetch the next result. This is why you need the maths skills to correctly break the problem down into this form.
Each run can take a few seconds to load the circuit (with the new input data and reset qubits) and execute - this is why each run is slow to execute - but the output value of each run could take a von Neumann CPU/GPU 100+ years of classical computation.
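
A hedged sketch of that build-circuit / run / measure / repeat loop, using Qiskit as the library (an assumption on my part - the thread mentions TKet, which works along similar lines); the circuit is just a two-qubit Bell state, not a useful algorithm:

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Build the 'circuit': put qubit 0 into superposition (H), entangle it with
# qubit 1 (CNOT), then measure both into classical bits.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Each run collapses the superposition to one bit string, so you repeat
# ("shots") and read the answer off the statistics of the counts.
sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1000).result().get_counts()
print(counts)  # roughly {'00': ~500, '11': ~500} for a Bell state
```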

So the libraries you use may simplify this, but the reality is the QC works differently and is used for a different purpose (often lowest-gradient search or factorisation searches). The limit on the qubits and the time you have to process also limits their use.


Moving back to AI: the more modern uses of AI combine many areas of maths with neural networks and machine learning. Training networks in a non-LLM capacity, for example, where the network is trained directly on the physics, has given rise to physics-informed neural networks. Then you have graph neural networks. Again, assessing the applicability of the different forms requires some degree of maths understanding to say "actually, this doesn't require a costly neural network, as k-means or Gaussian mixture models (clustering) or a simple linear regression is all that's required".
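
To make that "do you even need a neural network?" check concrete, a tiny scikit-learn sketch (the data and the interpretation threshold are invented for illustration) that fits a cheap clustering model first and looks at how well it separates the data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Made-up data with three obvious groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(silhouette_score(X, km.labels_))  # a high score suggests simple clustering may be enough
```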

There's a lot of BS out there from people that have no real basis of understanding, or that is pushed for commercial gain. This is where a bit more formal structure, such as a study course, would help the OP to move forward.

Anyone can take a library and throw the library functions at data, but can you say why it's the best tool for the job? I don't disagree with your advice to start with probability and statistics (because assessing the dataset will use these). However, an understanding of wider maths will quickly become important in translating problems.
 
When I went to college I did computer programming at BTEC Level 3 over two years.

I also did an A-level in maths on the side. Surely that is something the OP could do? It has been a long time since I did it, but it was 1 or 2 lessons a week. I am sure that 20 years later some colleges might even offer part-time courses in exactly what you are looking for.

Get some GCSE books to brush up on your maths beforehand.
 

He said he has GCSE maths; he wants to learn ML. There's no point learning the maths if you don't know what to learn it for.

So the best option is to learn basic stats and how to use the libraries; once he is confident, he can build on that and learn the other maths afterwards. Otherwise he will not learn or understand the structure or methods in ML.

For example, Lagrange multipliers to optimise the hyperplanes and optimal margins for SVMs, or for inference, RL, regularisation, etc. - he would get stuck for weeks if not months. I can go into stochastic calculus for other complicated areas of ML; he would get completely lost.

There's no point at this time, or in the near future. Why make something complicated from the start?
The best solutions in ML are the least complex ones. I have given him areas to look at when he does his research.

There is no need to do a course; everything is online and there are some good websites.
 