Advanced software

I was thinking the other day about how advanced technology could become, and then wondered the same about software: how advanced can a given piece of software get, given enough updates?

Imagine we put our cleverest programmers, scientists and engineers to work on creating software for a given task, like AI. What's possible on current hardware?
 
Well, hardware is not 'smart', it has just been made smaller over time; good software is what has always made it 'intelligent' :)
 
Google's self-driving car is a great example that matches your criteria.

Edit: It's tempting to think of this video as some kind of carefully controlled concept piece, but it really isn't. I've been taking the Udacity.net class that teaches you the basics behind the algorithms used by this car, and it is really eye-opening. The car can drive itself - really. And not stupidly either: it goes fast, is accurate, and can make lane changes and overtake. It can perform emergency stops in case a child runs in front of the car, recognise traffic lights, pathfind its way through towns, park itself, etc. It can even handbrake-turn into a parking space. It really is amazing, and we could be seeing this kind of tech driving us around one day.
 
Software is not actually the weak point in most advanced AI/robotics systems. Yes, there is a huge amount of progress still to be made, but few experts really believe the software will ever be the limiting factor. Hardware is the big problem to overcome. I don't just mean computational power, which should eventually become sufficiently powerful and small, but mechanical design, materials and sensory systems.

A good example is a bee or a simple housefly. The computational requirements are pretty minimal (a Drosophila fruit fly has only on the order of 100,000 neurons), but the mechanical complexity of a simple housefly is far beyond mankind's ability to replicate with current technology.

Another example: the sensory systems of almost all animals far exceed any man-made sensors. A moth can smell a few molecules of pheromone drifting from miles away, a dog can smell trace amounts of odours, and a bat can fly in complete darkness thanks to an ultrasound system that weighs mere grams yet far exceeds the capabilities of a $100k scanning laser rangefinder weighing 10kg.


The biggest limitation in humanoid robots is mechanical complexity; control systems for natural walking gaits are now very advanced.

Energy is perhaps the biggest killer. Our best small portable energy sources have orders of magnitude less energy density than natural energy sources like food.
Even with the best commercial battery technology, lithium polymer cells, an equivalent weight of a Mars bar holds tens of times more energy.
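
As a rough back-of-envelope sketch (the energy densities below are my own ballpark assumptions, not figures from the post, and the exact ratio depends on which figures you pick):

```python
# Approximate specific energies in MJ/kg (assumed, ballpark figures).
lipo_mj_per_kg = 0.6        # lithium polymer cell, roughly 150-170 Wh/kg
chocolate_mj_per_kg = 20.0  # chocolate bar, roughly 2,000 kJ per 100 g

ratio = chocolate_mj_per_kg / lipo_mj_per_kg
print(f"Equal weights: the chocolate bar stores ~{ratio:.0f}x more energy than the battery")
# With these assumptions the gap is ~30x; whatever figures you use, it is well
# over an order of magnitude in favour of food.
```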

And it is not only our really limited energy sources, but also the massive inefficiencies of our current mechanical systems. I was at a seminar last year given by a professor researching penguins. By his calculations, if you gave a penguin the energy equivalent of one litre of petrol (in the form of fish), it could swim halfway around the world or more; a car would just go a few miles down the road.
 
DP, what kind of computational equivalent do you think would be comparative to the processing and computational power of the Human Brain?
 
Biological batteries seem a possibility; it's just hard to create a cellular system that can 'heal' itself through cellular replacement.
Mitochondrial DNA might offer some clues, as mitochondria are essentially cellular inclusions.
 
DP, what kind of computational equivalent do you think would be comparative to the processing and computational power of the Human Brain?

It is hard to answer, because the processing power of the human brain comes from massive parallelisation of simple computing units running at relatively slow speeds, while even supposedly parallel supercomputers have a relatively small number of very fast, very complex computing units.

A neuron takes about 10ms to fire, so an individual neuron is effectively only a ~100Hz processor, and the brain's end-to-end response rate is closer to 10Hz: signals are generally fed through 5-10 neural layers, giving a processing latency of 50-100ms. Hence ~10Hz is about the limit of human perception, and a TV at 25Hz is perceived as fluid motion.
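
A trivial sketch of that arithmetic (using the 10ms firing time and 5-10 layers quoted above):

```python
# Rough back-of-envelope arithmetic from the figures above.
neuron_fire_time = 0.010          # ~10 ms per neuron "operation"
layer_depths = (5, 10)            # typical depth of a neural pathway

for depth in layer_depths:
    latency = depth * neuron_fire_time           # end-to-end latency in seconds
    print(f"{depth} layers -> {latency*1000:.0f} ms latency, "
          f"~{1/latency:.0f} Hz effective response rate")
# 5 layers  -> 50 ms latency,  ~20 Hz
# 10 layers -> 100 ms latency, ~10 Hz
```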

Silicon transistors switch at gigahertz rates, so they are massively quicker at passing a simple signal through, but millions of transistors and clock cycles are needed to compute even basic information. The biggest supercomputers have thousands of processors; the human brain has billions of neurons, so there are several orders of magnitude of difference.
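
A crude comparison of that gap (all figures here are my own rough assumptions, not from the post):

```python
# Crude comparison of unit counts and unit speeds (rough, assumed figures).
brain_units, brain_speed_hz = 100e9, 100    # ~10^11 neurons at ~100 Hz each
super_units, super_speed_hz = 10_000, 3e9   # ~10^4 cores at ~3 GHz each

print(f"unit count gap: {brain_units / super_units:.0e}x more neurons than cores")
print(f"unit speed gap: {super_speed_hz / brain_speed_hz:.0e}x faster cores than neurons")
# The totals are organised in completely different ways: many slow, simple
# units versus few fast, complex ones.
```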

IBM have a project where they try to simulate a human brain, http://www.scientificamerican.com/a...ulates-4-percent-human-brain-all-of-cat-brain
But that is fundamentally much slower than a real brain, because it simulates the chemical and physical processes within each neuron and synapse, which takes far more computation than processing the information itself. The chemical processes within a neuron and synapse don't require any 'computation' in the brain; they are just chemical reactions, so modelling them explicitly is quite inefficient. As an aside, the speed of chemical reactions has sparked great interest in building chemical processors.
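
To illustrate the level-of-abstraction point (this is just a toy sketch of my own, nothing to do with IBM's simulator): a "leaky integrate-and-fire" neuron captures the information-level behaviour, accumulate input and fire past a threshold, in a few lines, whereas a biophysically detailed model has to integrate the chemistry of every ion channel.

```python
import random

# Minimal leaky integrate-and-fire neuron: models only the information-level
# behaviour (accumulate input, fire past a threshold), not the underlying
# ion-channel chemistry that a detailed biophysical simulation would compute.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained each time step

    def step(self, input_current):
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after a spike
            return 1               # spike
        return 0                   # no spike

neuron = LIFNeuron()
spikes = [neuron.step(random.uniform(0.0, 0.4)) for _ in range(100)]
print(f"{sum(spikes)} spikes in 100 steps")
```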


I think the only real way to judge the computational power of the human brain is to build computational systems that can surpass human abilities in a range of tasks normally associated with human intelligence. Biological brains are fundamentally different to artificial processors, so trying to get them both to do the same thing with the same method is flawed. Trying to get them both to do the same tasks using whichever method is appropriate has some merit.

IBM's Deep Blue chess computer and the newer Watson Jeopardy-playing system show how it is possible to reach human-level performance within a single domain. My manager created the world's first bridge-playing software, which succeeded in beating the best human experts in the world, and he recently developed a crossword solver that ranked highly at the world championships; his software runs on a standard laptop.

The coming decades will see a massive rise in the ability of computer software to beat human experts. These systems will also become smaller and simpler, and will run on future iPhones and the like rather than desktops or large servers.

These systems may not be intelligent in a natural sense and will solve these tasks by alternative means, but that is to be expected given the differences in hardware.

New hardware designs may alleviate many of the limitations of current ones. Chemical or genetic computers would allow massive parallelisation.
Massive parallelisation lets computing systems get around some of the hard limits of classical computational theory. For example, it is provable that comparison-based sorting on a conventional Turing-style computer can be no faster than N*log(N), where N is the size of the set of numbers to sort; alternative computing technologies can in principle do better, perhaps in log(N) time. Another example is the Travelling Salesman Problem (given a set of cities, find the shortest path that connects all the cities). This is an extremely hard problem for computers, NP-hard to be exact: a brute-force search over N cities takes on the order of N! (N factorial) time, which is impractical for large values of N. Slime moulds have been shown to find good solutions to TSP-like problems in roughly constant time irrespective of the number of cities, thanks to the massively parallel way the organism explores the space.
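
To make those growth rates concrete, here is a small sketch of my own (not from the post) that prints N*log(N) against N! and brute-forces a tiny TSP instance with made-up city coordinates:

```python
import math
from itertools import permutations

# Growth comparison: N*log2(N) (sorting lower bound) vs N! (brute-force TSP).
for n in (5, 10, 15, 20):
    print(f"N={n:2d}  N*log2(N)={n * math.log2(n):8.1f}   N!={math.factorial(n):,}")

# Brute-force TSP on a handful of cities: check every permutation of the tour.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6), "E": (3, 7)}

def tour_length(order):
    # Total distance visiting the cities in 'order' and returning to the start.
    pts = [cities[c] for c in order] + [cities[order[0]]]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

best = min(permutations(cities), key=tour_length)
print("shortest tour:", " -> ".join(best), f"(length {tour_length(best):.2f})")
```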
 
Thanks, it is pretty amazing just what the human brain is capable of when you compare it to the likes of Deep Blue, which for all their power can only effectively compete in a single specialised field.

I am also fascinated by the thought of biomechanical or chemical computing.
 
It's interesting how bio computing could get to a point where it is as controversial as animal or stem cell research.

Certainly the best way to simulate a brain is just to take a brain in the first place, and some research goes along these lines. There is work taking rat brains and feeding them signals; others have achieved basic arithmetic using extracted slug neurons; others have hooked robots up to the pheromone sensors of a moth.


I read a paper which argued that if you really want a humanoid worker with the looks and capabilities of a human, by far the easiest path is to genetically engineer a human child and graft on wireless interface hardware so you can effectively remote-control the human. Ethics aside...

In the 1960s the CIA had a project that messed around with cats' brains, inserting hardware to turn the cat into a remote-controlled surveillance device.

There is a certain merit in exploiting and reusing what nature has evolved rather than trying to engineer millions of years of evolution from scratch.
 
Certainly the best way to simulate a brain is just to take a brain in the first place, and some research goes along these lines.

I read a paper which argued that if you really want a humanoid worker with the looks and capabilities of a human, by far the easiest path is to genetically engineer a human child and graft on wireless interface hardware so you can effectively remote-control the human.

There is a certain merit in exploiting and reusing what nature has evolved rather than trying to engineer millions of years of evolution from scratch.

I once read a sci-fi short story (I cannot recall the title or author) about humans being assessed for capabilities at about age 12 and then basically being programmed for one task (the one task they were best suited for) at age 16-18.

The story centred on a person who could not be pigeon-holed and programmed, which effectively made him an outcast, although it turned out that those capable of creative and original thinking were those very same outcasts.

They needed to learn that for themselves though, as it couldn't be programmed, or taught.
 
I think within the next 20 years we may have 'virtual' intelligence - something that appears to be an AI, but is still limited by the data originally programmed into it. IMHO, when people refer to an 'AI' (for example, a game 'AI') they are really talking about a 'VI' instead.

On the other hand, creating a 'true' AI will take longer, as it needs a program that can adapt on the fly and is capable of learning and making intelligent decisions. Maybe by that time we'll have optical, biological or even quantum computers. I'd imagine this on a timescale of 50-80 years from now.
 