How bad is ternary signalling with clocked logic, where one might use 0 and ±V? While binary purists might puke, 3-level "trits" might occupy a unique position in terms of minimum power.

Why would SIMD save power? (I've heard people say this for many years, but have never been able to track down any evidence for the assertion.) Wouldn't a SIMD array have to distribute an enormous clock network, as well as an enormous opcode network, even when only a small fraction of the receiving circuitry actually does anything?

My problem with parallel processing has always been that *packing* a computation into 2D or 3D plus time is ridiculously difficult -- in general -- so only a small fraction of the circuitry (a fractal dimension smaller than the actual physical dimension) can be active at any one time. On the other hand, this is good, since if every circuit were constantly switching, the entire chip would melt.

This problem of distributing computations across multiple processor elements has recently become extremely important, because the so-called Spectre vulnerability seems to affect *every* implementation of shared computation that utilizes shared caches. So perhaps the only safe computation is one which is "real time", with *zero* degradation in the presence of additional processes.

At 12:08 PM 2/16/2018, Tom Knight wrote:
The power savings there come from locating the processing next to the memory, not from higher-radix arithmetic. This avoids the power spent driving massive amounts of data from memory to processor and back. The fact that they use analog arithmetic is an artifact of the need to make it small, not a sign that analog is inherently lower power. Also, high accuracy is not required. I'm not arguing that this is a bad design, but rather pointing out that the "analog is better" idea is a vast oversimplification of a complicated set of tradeoffs. A SIMD digital array would likely work as well or better, and would be my first approach.
On Feb 16, 2018, at 2:53 PM, Henry Baker <hbaker1@pipeline.com> wrote:
Perhaps I'm being dense, but how does your criticism affect this new MIT chip which claims to save an immense amount of power?
At 11:18 AM 2/16/2018, Tom Knight wrote:
For a given signal to noise ratio, binary signalling is optimal. More states need to be encoded further apart, and the power scales with the square of the voltage, so higher bits/baud requires more power. Flash wins not because of this, but because the limiting factor is cell density.
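[A quick back-of-the-envelope sketch of that argument, in Python. Assumptions are mine, not Tom's: equally spaced levels at a fixed noise margin d, energy proportional to V^2, equiprobable symbols, log2(m) bits per symbol.]

# Rough sketch: relative signal energy per bit for m-level signalling,
# assuming levels 0, d, 2d, ..., (m-1)*d at a fixed noise margin d.
from math import log2

def energy_per_bit(m, d=1.0):
    levels = [k * d for k in range(m)]
    avg_energy = sum(v * v for v in levels) / m   # mean of V^2 over equiprobable symbols
    return avg_energy / log2(m)                   # log2(m) bits per symbol

for m in (2, 3, 4, 8):
    print(f"{m}-level: {energy_per_bit(m):.3f}")
# 2-level: 0.500, 3-level: 1.052, 4-level: 1.750, 8-level: 5.833
# Under these assumptions binary has the lowest energy per bit; even the
# 3-level "trit" case from the question above already costs about 2x.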
On Feb 16, 2018, at 12:20 PM, Henry Baker <hbaker1@pipeline.com> wrote:
Flash memories have been storing 2-3 bits/cell for several years now, which requires distinguishing 4-8 different levels when reading.
Perhaps it's time to reconsider using *quaternary* (base 4) or *octal* (base 8) arithmetic in computer arithmetic circuits?
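[For concreteness, a toy sketch of what a 2-bit/cell flash read amounts to -- quantize the cell voltage into one of four windows and map it back to two bits. The threshold voltages and the Gray-coded bit mapping below are hypothetical, not taken from any datasheet.]

# Toy sketch: reading a 2-bit/cell ("MLC") flash cell means distinguishing
# 4 voltage windows and mapping each window back to 2 bits.
THRESHOLDS = [1.0, 2.0, 3.0]              # hypothetical read-reference voltages
LEVEL_TO_BITS = ["11", "10", "00", "01"]  # one common Gray-coded mapping (assumed)

def read_cell(v_cell):
    level = sum(v_cell > t for t in THRESHOLDS)   # which window the voltage falls in
    return LEVEL_TO_BITS[level]

print(read_cell(0.4), read_cell(1.7), read_cell(2.5), read_cell(3.6))
# -> 11 10 00 01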
I hadn't realized that A/D converters were so simple and might save so much power (and perhaps even chip space?).
https://news.mit.edu/2018/chip-neural-networks-battery-powered-devices-0214
New chip reduces neural networks' power consumption by up to 95 percent, making them practical for battery-powered devices.
Larry Hardesty | MIT News Office February 13, 2018
Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.
But neural nets are large, and their computations are energy intensive, so they're not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.
Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.
"The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations," says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip's development.
"Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don't need to transfer this data back and forth?"
Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT's School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas is presenting this week at the International Solid State Circuits Conference.
Back to analog
Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own "weight," which indicates how large a role the output of one node will play in the computation performed by the next. Training the network is a matter of setting those weights.
A node receiving data from multiple nodes in the layer below will multiply each input by the weight of the corresponding connection and sum the results. That operation -- the summation of multiplications -- is the definition of a dot product. If the dot product exceeds some threshold value, the node will transmit it to nodes in the next layer, over connections with their own weights.
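[In code, the per-node operation the article is describing is just the following. This is a plain Python sketch of the general idea; real networks typically apply a smooth activation function rather than a hard threshold.]

# Sketch of the per-node computation: multiply each input by its connection
# weight, sum the products (the dot product), then apply a threshold.
def node_output(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))  # the dot product
    return total if total > threshold else 0.0           # simple hard threshold

print(node_output([0.5, -1.0, 2.0], [0.2, 0.4, 0.1]))
# 0.5*0.2 - 1.0*0.4 + 2.0*0.1 = -0.1, below threshold -> 0.0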
A neural net is an abstraction: The "nodes" are just weights stored in a computer's memory. Calculating a dot product usually involves fetching a weight from memory, fetching the associated data item, multiplying the two, storing the result somewhere, and then repeating the operation for every input to a node. Given that a neural net will have thousands or even millions of nodes, that's a lot of data to move around.
But that sequence of operations is just a digital approximation of what happens in the brain, where signals traveling along multiple neurons meet at a "synapse," a junction between neurons. The neurons' firing rates and the electrochemical signals that cross the synapse correspond to the data values and weights. The MIT researchers' new chip improves efficiency by replicating the brain more faithfully.
In the chip, a node's input values are converted into electrical voltages and then multiplied by the appropriate weights. Summing the products is simply a matter of combining the voltages. Only the combined voltages are converted back into a digital representation and stored for further processing.
The chip can thus calculate dot products for multiple nodes -- 16 at a time, in the prototype -- in a single step, instead of shuttling between a processor and memory for every computation.
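[A behavioral sketch of that idea -- my own simplification, not the paper's circuit: drive each input as an analog value, let a ±1 switch per memory cell add or subtract it on a per-node summing line, and only quantize the 16 accumulated sums with an ADC. All sizes and the ADC resolution below are made up.]

# Behavioral sketch (not the actual circuit): in-memory analog
# multiply-accumulate with +/-1 weights, 16 output nodes at a time.
import random

N_INPUTS, N_NODES, ADC_BITS = 64, 16, 6

inputs  = [random.uniform(0.0, 1.0) for _ in range(N_INPUTS)]   # input "voltages"
weights = [[random.choice((-1, 1)) for _ in range(N_INPUTS)]    # +/-1 switch per cell
           for _ in range(N_NODES)]

def adc(value, bits, full_scale):
    # Quantize an accumulated analog value into a digital code.
    step = 2 * full_scale / (2 ** bits)
    code = round((value + full_scale) / step)
    return max(0, min(2 ** bits - 1, code))

# All 16 dot products "happen at once": each input either adds to or
# subtracts from a node's summing line, depending on its switch setting;
# only the final sums go through the ADC back to the digital domain.
sums = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
codes = [adc(s, ADC_BITS, full_scale=N_INPUTS) for s in sums]
print(codes)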
All or nothing
One of the keys to the system is that all the weights are either 1 or -1. That means that they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only those two weight values should lose little accuracy -- somewhere between 1 and 2 percent.
Biswas and Chandrakasan's research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip's results were generally within 2 to 3 percent of the conventional network's.
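[A quick way to get a feel for why ±1 weights lose so little -- a toy experiment of my own, not the authors': binarize real-valued weights with a sign function, keep a per-node scale equal to the mean weight magnitude, and compare the resulting dot product to the full-precision one.]

# Toy check (not the paper's experiment): compare a full-precision dot
# product against its binary-weight approximation.
import random

def binarize(weights):
    scale = sum(abs(w) for w in weights) / len(weights)   # mean magnitude as per-node scale
    return [1 if w >= 0 else -1 for w in weights], scale

random.seed(0)
x = [random.gauss(0, 1) for _ in range(256)]
w = [random.gauss(0, 1) for _ in range(256)]

full = sum(wi * xi for wi, xi in zip(w, x))
b, s = binarize(w)
approx = s * sum(bi * xi for bi, xi in zip(b, x))
print(f"full-precision: {full:.3f}   binary-weight approx: {approx:.3f}")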
"This is a promising real-world demonstration of SRAM-based in-memory analog computing for deep-learning applications," says Dario Gil, vice president of artificial intelligence at IBM. "The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future."