In this day and age, most of the data we process is what is known as “big data.” This data is usually very sparse: it involves a huge number of variables, most of whose values are zero, which makes it confusing and complicated to handle without computer software to speed up the process. An example of big data would be a table mapping every customer who shops at a certain superstore (such as Walmart) to every product the store sells. If a customer has bought a product, the table says “1”; if that customer has not bought the product, it says “0.” The table would naturally be mostly zeroes, since it records not only what customers bought but also everything they did not buy.
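The purchase table can be sketched in a few lines of Python; the customers, products, and purchases below are made up purely for illustration:

```python
# Hypothetical toy data: a customer-by-product purchase table.
# A dense table stores every cell, including all the zeroes.
customers = ["alice", "bob", "carol"]
products = ["milk", "bread", "eggs", "soap"]

dense = [
    [1, 0, 0, 0],  # alice bought only milk
    [0, 0, 1, 0],  # bob bought only eggs
    [0, 0, 0, 0],  # carol bought nothing
]

# A sparse representation keeps only the nonzero cells.
sparse = {(i, j): v for i, row in enumerate(dense)
          for j, v in enumerate(row) if v != 0}

print(sparse)  # {(0, 0): 1, (1, 2): 1}
print(len(sparse), "nonzeroes out of", len(customers) * len(products), "cells")
```

Even in this tiny example, 10 of the 12 cells are zeroes; at Walmart scale, the imbalance is far more extreme.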

With sparse data, algorithms usually perform a great many addition and multiplication operations involving zeroes, but these operations are a waste of time and money, since 5 + 0 = 5 and 5 × 0 = 0: no matter what, they always produce the same result. While each such calculation may take only a fraction of a millisecond, those fractions add up across the vast numbers of zeroes in the countless pages of data. Instead of having algorithms perform these monotonous calculations over and over, programmers write their own code to skip the zero entries in the data. That code, however, is very complicated, and it generally applies only to a narrow range of problems.
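The waste, and the fix, can be seen in a toy dot product. This is a minimal sketch of the idea, not how production libraries implement it:

```python
def dense_dot(x, y):
    # Multiplies every pair of entries, including pairs involving zero.
    return sum(a * b for a, b in zip(x, y))

def sparse_dot(x, y):
    # Skips any pair where either entry is zero, since that product
    # contributes nothing to the sum.
    return sum(a * b for a, b in zip(x, y) if a != 0 and b != 0)

x = [5, 0, 0, 2, 0]
y = [3, 7, 0, 0, 1]
print(dense_dot(x, y), sparse_dot(x, y))  # both print 15
```

Both functions return the same answer, but the second performs one multiplication instead of five; on a vector that is 99.99 percent zeroes, that difference dominates the running time.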

At the Association for Computing Machinery's Conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH), computer scientists from MIT, the French Alternative Energies and Atomic Energy Commission, and Adobe Research recently unveiled a new system that automatically produces code optimized for sparse data, one with the potential to make life much easier for programmers and big-data analysts.

The automatically generated code provides a 100-fold speedup over software packages that do not generate their own optimized code, and its performance is comparable to that of code painstakingly hand-tuned by programmers to avoid operations on zero entries.

The system has been named “Taco,” for “tensor algebra compiler.” What makes it superior to other software is that it understands the structure of the matrices it operates on (big-data tables such as the Walmart customer table described above), so it can analyze them faster and more easily. The system is also useful because it can work not only with three-dimensional tensors but with four-dimensional ones as well. An example of such a higher-dimensional tensor would be mapping Walmart's customers not only to the products they do and don't buy, but also to their ratings of each product the store sells.
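One common way to represent such tensors is a coordinate (“COO”) layout, sketched below; the index values are invented for illustration:

```python
# Hypothetical sparse tensors in coordinate form: each key is a tuple of
# indices, and any coordinate not listed is implicitly zero.

# Two dimensions (a matrix): (customer, product) -> bought?
purchases = {(0, 2): 1, (3, 1): 1}

# Three dimensions: (customer, product, star rating) -> gave that rating?
ratings = {(0, 2, 4): 1, (3, 1, 2): 1}

# The same scheme extends to four or more dimensions unchanged;
# only the length of the key tuples grows.
print(len(next(iter(ratings))))  # 3 indices per entry
```

The appeal of this layout is that adding a dimension to the data just means adding one more index to each key, rather than redesigning the whole storage scheme.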

"Sparse representations have been there for more than 60 years," says Saman Amarasinghe, an MIT professor of electrical engineering and computer science (EECS). "But nobody knew how to generate code for them automatically. People figured out a few very specific operations -- sparse matrix-vector multiply, sparse matrix-vector multiply plus a vector, sparse matrix-matrix multiply, sparse matrix-matrix-matrix multiply. The biggest contribution we make is the ability to generate code for any tensor-algebra expression when the matrices are sparse."

In the last few years, the mathematical manipulation of tensors, known as tensor algebra, has become vital not only to big-data analysis but also to machine learning. Tensor algebra has been a fixture of scientific research since Einstein's time, but it has never been used more widely than it is today.

Traditionally, handling tensor algebra has required mathematical software to break a compound expression into its constituent operations. For example, if an operation required four tensors to be multiplied together and the result added to a fifth, the software would run its standard routine for tensor multiplication, store the intermediate result away, and only then perform the tensor addition.
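The step-by-step approach can be sketched with a tiny elementwise expression, D = A × B + C, on toy dense matrices (a deliberately simplified stand-in for the larger expressions described above, not Taco's actual code):

```python
# Sketch of the traditional "one operation at a time" approach for the
# elementwise expression D = A * B + C on small dense matrices.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [1, 1]]

# Step 1: run the multiplication routine and store the result in memory.
T = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]

# Step 2: read the temporary back and run the addition routine.
D = [[T[i][j] + C[i][j] for j in range(2)] for i in range(2)]

print(D)  # [[6, 13], [22, 33]]
```

The temporary matrix T is written out in full, only to be read straight back in by the next step.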

In this age of extensive big data, however, this approach is much too time-consuming. For maximum efficiency, explains Fredrik Kjolstad, an MIT graduate student in electrical engineering and computer science and first author on the new paper, each sequence of tensor operations requires its own "kernel," or computational template.

"If you do it in one kernel, you can do it all at once, and you can make it go faster, instead of having to put the output in memory and then read it back in so that you can add it to something else," Kjolstad says. "You can just do it in the same loop."
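The fused alternative, for the same toy expression D = A × B + C, looks like this (again a sketch, not Taco's generated code):

```python
# The same expression, D = A * B + C, fused into a single loop ("kernel"):
# no temporary result is written to memory and read back.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [1, 1]]

D = [[A[i][j] * B[i][j] + C[i][j] for j in range(2)] for i in range(2)]
print(D)  # [[6, 13], [22, 33]]
```

The answer is identical, but each entry is produced in one pass, with no intermediate matrix sitting in memory.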

Computer scientists and programmers have developed kernels for many of the tensor operations involved in machine learning and big-data analytics. Creating a template for each and every operation by hand, however, is nearly impossible without software to generate them, since the number of possible kernels is infinite: the kernel for adding three tensors, for instance, is different from the kernel for adding four, and the kernel for adding three three-dimensional tensors is different from the kernel for adding three four-dimensional ones.

Many tensor operations involve multiplying an entry from one tensor by an entry from another. If either entry is zero, their product is zero as well, so the problem returns: software wastes large amounts of time on calculations whose results the person analyzing the big data could already infer.

Another option for saving time is to hand-write code that leaves zero entries out of the calculations: carrying the nonzero entries forward when adding, or skipping multiplications involving zero altogether. This makes analyzing the data faster, but it requires an immense amount of preparation by the programmer, who must optimize the code by hand rather than having the software do it.
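A classic example of such hand-written code is sparse matrix-vector multiplication over a compressed sparse row (CSR) layout. The sketch below is illustrative, not code from any particular library:

```python
# A hand-written sparse matrix-vector multiply in compressed sparse row
# (CSR) form: only nonzero entries are stored and touched.
# Matrix being represented:
#   [[5, 0, 0],
#    [0, 0, 2],
#    [0, 3, 0]]
values = [5, 2, 3]      # the nonzero entries, row by row
cols = [0, 2, 1]        # column index of each nonzero
row_ptr = [0, 1, 2, 3]  # where each row's nonzeroes begin in `values`

def csr_matvec(values, cols, row_ptr, x):
    y = []
    for i in range(len(row_ptr) - 1):
        total = 0
        # Loop over only this row's nonzero entries.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            total += values[k] * x[cols[k]]
        y.append(total)
    return y

print(csr_matvec(values, cols, row_ptr, [1, 1, 1]))  # [5, 2, 3]
```

Note how much bookkeeping (three parallel arrays and an indirection through `cols`) is needed even for this single, simple operation; every new operation and every new storage format needs its own variant.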

The code for multiplying two matrices, a simple example since a matrix is a tensor with only two dimensions and the operation can be done by hand in a small amount of time, might take 12 lines if the matrices contain no zeroes. But if the matrices are sparse, the same operation may require 200 lines of code or more, just to incorporate the omissions and bookkeeping.
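For comparison, a dense matrix multiply really can fit in about a dozen lines; this is the textbook triple loop, not an optimized routine:

```python
# A dense matrix multiply: roughly a dozen lines of straightforward code.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The sparse version of the same operation must additionally track which entries exist at all, merge index lists, and handle every combination of sparse and dense inputs, which is where the extra lines come from.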

Taco adds all of that extra code automatically. The programmer specifies the dimensions of a tensor, states whether it is full or sparse (to clarify whether zero omissions are needed), and gives the location of the file from which the values should be imported and to which the results should be exported after the calculations. Taco then builds a hierarchical map that shows which entries from the tensors are nonzero and which pairings involve zeroes, and it simply discards the operations on zero entries.

Lastly, Taco uses an extremely efficient indexing scheme to store only the nonzero values of the sparse data sets. In testing, a sparse tensor stored with all of its zero entries included could take up more than 100 exabytes (an exabyte is a billion gigabytes), roughly ten times the estimated storage capacity of all of Google's servers. With Taco's compression, the same data takes up only about 10 gigabytes, small enough to fit on a smartphone or laptop.
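The scale of such savings can be sketched with back-of-the-envelope arithmetic; all sizes below are invented for illustration and are not Taco's measured numbers:

```python
# Illustrative storage comparison: a sparse layout stores each nonzero
# as (value + its indices) instead of storing every cell of the tensor.
rows, cols = 1_000_000, 1_000_000
nonzeroes = 10_000_000   # hypothetical: about 1 in 100,000 cells is nonzero

bytes_per_value = 8      # one 64-bit float
bytes_per_index = 8      # one 64-bit integer per dimension

dense_bytes = rows * cols * bytes_per_value
sparse_bytes = nonzeroes * (bytes_per_value + 2 * bytes_per_index)

print(f"dense:  {dense_bytes / 1e12:.0f} TB")
print(f"sparse: {sparse_bytes / 1e9:.2f} GB")
```

Even in this made-up case, dropping the zeroes shrinks the data by a factor of tens of thousands, which is why a tensor that would swamp a data center when stored densely can fit on a laptop when stored sparsely.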