A Major Advance in Computing Solves a Complex Math Problem 1 Million Times Faster

Reservoir computing is already one of the most advanced and most powerful types of artificial intelligence that scientists have at their disposal – and now a new study outlines how to make it up to a million times faster on certain tasks.

That's an exciting development for tackling some of the most complex computational challenges, from predicting how the weather will change to modeling the flow of fluids through a particular space.

Such problems are what this type of resource-intensive computing was developed to take on; now, the latest innovations are going to make it even more useful. The team behind this new study is calling it the next generation of reservoir computing.

"We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do," says physicist Daniel Gauthier, from The Ohio State University.

"And reservoir computing was already a significant improvement on what was previously possible."

Reservoir computing builds on the idea of neural networks – machine learning systems based on the way living brains function – that are trained to spot patterns in a vast amount of data. Show a neural network a thousand pictures of a dog, for example, and it should be pretty accurate at recognizing a dog the next time one appears.
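
To make that train-on-examples idea concrete, here is a minimal sketch – not from the study – of a single artificial neuron learning to tell two clusters of points apart. The dog-photo example is the same loop with far more neurons and far more data:

```python
import numpy as np

# Toy illustration (not from the paper): one artificial neuron learning
# to separate two clusters of 2D points by being shown labeled examples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)),   # 50 examples of class 0
               rng.normal(+1, 0.5, (50, 2))])  # 50 examples of class 1
y = np.repeat([0.0, 1.0], 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                      # show the examples repeatedly
    p = 1 / (1 + np.exp(-(X @ w + b)))    # neuron's current guesses
    err = p - y                           # how wrong each guess is
    w -= 0.1 * X.T @ err / len(y)         # nudge weights to reduce error
    b -= 0.1 * err.mean()

print("training accuracy:", ((p > 0.5) == y).mean())
```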

The details of the extra power that reservoir computing brings are quite technical. Essentially, the process feeds information into a 'reservoir': a network in which data points are linked together in fixed, random ways. Information is then read out of the reservoir, analyzed, and fed back into the learning process.
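
A rough sketch of that loop in code, written as a classic echo state network (one common form of reservoir computing). The reservoir size, scalings, and toy signal are illustrative assumptions, not values from the study:

```python
import numpy as np

# A minimal echo-state-network sketch of classic reservoir computing.
rng = np.random.default_rng(1)
N = 300                                    # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # random input weights
W = rng.normal(0, 1, (N, N))               # random internal links (the 'black box')
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale for stability

u = np.sin(0.1 * np.arange(2000))[:, None] # toy signal to learn
states = np.zeros((len(u), N))
r = np.zeros(N)
for t in range(len(u) - 1):
    # step 1: send the input into the randomly linked reservoir
    r = np.tanh(W @ r + W_in @ u[t])
    states[t + 1] = r

# step 2: read information back out with a simple trained layer
# (ridge regression), predicting the next input value
warmup = 200                               # discard transient 'warm-up' states
S, target = states[warmup:], u[warmup:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)
print("one-step prediction error:", np.abs(S @ W_out - target).mean())
```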

This reservoir approach makes the whole process quicker in some ways, and more adaptable to learning sequences. But it also relies heavily on random processing, meaning what happens inside the reservoir isn't crystal clear. To use an engineering term, it's a 'black box' – it usually works, but nobody really knows how or why.

With the newly published research, reservoir computers can be made far more efficient by removing that randomization. The team used a mathematical analysis to work out which parts of a reservoir computer are actually crucial to making it work, and which aren't; stripping out the redundant parts speeds up processing.
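
In the paper, the next-generation design boils down to what's known as a nonlinear vector autoregression: instead of a random reservoir, the feature vector is built deterministically from a few time-delayed copies of the signal plus simple polynomial combinations of them, and only a linear readout is trained. A minimal sketch, where the delay count k and the ridge strength are illustrative assumptions rather than the study's settings:

```python
import numpy as np

def ngrc_features(u, k=3):
    """Stack k delayed copies of u, then append their pairwise products."""
    T = len(u) - k + 1
    lin = np.column_stack([u[i:i + T] for i in range(k)])    # linear part
    quad = np.column_stack([lin[:, i] * lin[:, j]            # quadratic part
                            for i in range(k) for j in range(i, k)])
    return np.hstack([lin, quad])

u = np.sin(0.1 * np.arange(2000))          # same toy signal as above
k = 3                                      # only k samples needed to start
F = ngrc_features(u, k)                    # deterministic features, no randomness
target = u[k:]                             # predict one step ahead
F = F[:-1]                                 # align features with targets
W_out = np.linalg.solve(F.T @ F + 1e-6 * np.eye(F.shape[1]), F.T @ target)
print("one-step prediction error:", np.abs(F @ W_out - target).mean())
```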

One of the end results is that less of a 'warm-up' period is needed: that's the phase where the neural network is fed training data to prepare it for the task it's supposed to perform. The research team made significant improvements here.

"For our next-generation reservoir computing, there is almost no warming time needed," says Gauthier.

"Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that's all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points."

One particularly difficult forecasting task was completed in less than a second on a standard desktop computer using the new system. With current reservoir computing technology, the same task takes significantly longer, even on a supercomputer.

Depending on the data involved, the new system proved to be between 33 and 163 times faster. When the task was reframed to prioritize accuracy, however, the updated model was a whopping 1 million times faster.

This is just the start for this super-efficient type of neural network, and the researchers behind it are hoping to pit it against more challenging tasks in the future.

"What's exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient," says Gauthier.

The research has been published in Nature Communications.

Source: https://www.sciencealert.com/a-new-neural-network-solved-the-hardest-of-maths-problems-a-million-times-faster
