I'm not sure how to quantify the risk. I view A.I. as a certainty - there's nothing in human consciousness that I believe is intractable. It's going to take a while - sadly our generation will probably miss all the excitement - but mankind will get there.
Once A.I. is with us as a daily reality, a whole lot of engineering which is presently infeasible becomes rather easier. The risk of the resulting rapid progress having catastrophic consequences is unknown, but I don't think it could be categorised as "incredibly small".
Time will tell. You made an interesting choice for a PhD.
You view AI as some binary thing that will "some day be achieved". That is a flawed view. Intelligence is a spectrum, as is AI. AI systems are already widely used in industry and have been for some time, far outperforming humans in specialized domains. AI is used in finance, stock trading, networking, logistics, routing, optimization, clinical diagnosis, operations research, robotic surgery, industrial robotics, natural language processing/voice recognition, call centers, data mining, spam blockers, signal processing, workforce optimization, automated warehouses, dispatching, engine and turbine design optimization, security, automated CCTV monitoring, bioinformatics, OCR...
In the 1960s the US introduced numerical ZIP codes to facilitate the first digital scanners and optical character recognition systems - AI in production use half a century ago.
AI doesn't mean some kind of holy grail supercomputer that is better than humans at all tasks. AI can be very simple; it doesn't even have to involve a computer (emergent intelligence, physics-based intelligence, mechanical/chemical/genetic/optical computation). AI, like natural intelligence, is a multifaceted spectrum of individual behaviors and processes. Increasingly complex aggregation and integration of such behaviors leads to higher levels of intelligent systems. This is a slow and gradual process. It is highly unlikely there will be any large jumps, for a variety of reasons - certainly not unknowingly.
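To make the "AI can be very simple" point concrete, here is a minimal sketch (plain Python, everything in it illustrative) of a perceptron - the single-neuron learning algorithm from the 1950s, a complete learning machine in a dozen lines:

    # Minimal perceptron: a single artificial neuron (Rosenblatt, 1958).
    # It learns to separate two classes from labelled examples - "AI"
    # in a handful of lines, no framework required.

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        """samples: list of feature tuples, labels: 0/1 targets."""
        w = [0.0] * len(samples[0])  # weights
        b = 0.0                      # bias
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred       # -1, 0 or +1
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Learn the logical AND function from its truth table.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 0, 0, 1]
    w, b = train_perceptron(X, y)
    for x in X:
        print(x, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)

Nothing mysterious, yet by any sensible definition it sits somewhere on the intelligence spectrum: it generalises a rule from data.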
Catastrophic Terminator-style scenarios are incredibly remote. Much more likely are viruses and worms that spread very rapidly, adapt and mutate, cause significant damage (DoS attacks etc.), and are impossible for security experts to defeat by hand - instead requiring intelligent virus killers that can themselves adapt and modify to attack the virus. Artificial immune systems are a very active research area; see the sketch below. This might render networked computers infeasible, very expensive, or more likely unreliable, which could have severe consequences - but they will never be able to launch nuclear weapons. These things just aren't connected to the internet, for a reason: they sit on private networks with many fail-safes.
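For a flavour of how artificial immune systems work, here is a toy sketch of the classic negative-selection idea (illustrative Python only; real systems are far more elaborate, and the "traffic" patterns here are made up): generate random detectors, discard any that match "self" (normal behaviour), and let the survivors flag anomalies.

    import random

    # Toy negative-selection algorithm, the classic artificial-immune-system
    # approach: detectors are random bit patterns; any detector that matches
    # "self" (normal behaviour) is discarded, so survivors only fire on
    # anomalous patterns. Matching rule: Hamming distance <= RADIUS.

    PATTERN_LEN, RADIUS, N_DETECTORS = 16, 2, 500

    def rand_pattern():
        return tuple(random.randint(0, 1) for _ in range(PATTERN_LEN))

    def matches(detector, pattern):
        return sum(d != p for d, p in zip(detector, pattern)) <= RADIUS

    # "Self" set: patterns standing in for normal activity (demo data).
    self_set = [rand_pattern() for _ in range(50)]

    # Keep only detectors that match nothing in the self set.
    detectors = []
    while len(detectors) < N_DETECTORS:
        d = rand_pattern()
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)

    def is_anomalous(pattern):
        return any(matches(d, pattern) for d in detectors)

    # Normal traffic never triggers (by construction); novel patterns often do.
    print(sum(is_anomalous(s) for s in self_set), "false alarms on self set")
    print(sum(is_anomalous(rand_pattern()) for _ in range(50)), "of 50 novel patterns flagged")

The appeal is the same as the biological original: you don't need to know what the attack looks like in advance, only what "healthy" looks like.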
Anyway, things like global warming are much more dangerous. Dangerous warming is pretty much guaranteed now whatever we do, because of how long CO2 persists in the atmosphere. I am also far more worried about large Western outbreaks of Ebola or new-variant bird flus. The Spanish flu rapidly spread across the whole planet, including remote Pacific islands, quickly killing up to 100 million people - around 5% of the planet's population at the time. With today's connected world there are some truly nightmare scenarios. Something serious could eradicate a billion people.
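Quick sanity check on that 5% figure, assuming the commonly cited 1918 world population of roughly 1.8 billion:

    # Rough check of the Spanish flu mortality share, assuming a 1918
    # world population of about 1.8 billion (commonly cited estimate).
    deaths_low, deaths_high = 50e6, 100e6
    population_1918 = 1.8e9
    print(f"{deaths_low / population_1918:.1%} to {deaths_high / population_1918:.1%}")
    # -> 2.8% to 5.6%: "around 5%" holds at the upper death-toll estimate.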