Primeur weekly 2016-12-19

Crowd computing

Quake-detection app captured nearly 400 temblors worldwide ...

Quantum computing

Microsoft intensifies quantum cooperation with QuTech ...

Fast track control accelerates switching of quantum bits ...

Two electrons go on a quantum walk and end up in a qudit ...

Researchers discovered elusive half-quantum vortices in a superfluid ...

Focus on Europe

New e-IRGSP5 support project held its kick-off meeting in Barcelona ...

European Commission to organize eInfrastructure Proposers' Day on January 19, 2017 ...

ARM extends HPC offering with acquisition of software tools provider Allinea Software ...

Martin Kersten appointed ACM Fellow ...

PRACE Preparatory Access Type D expected for early 2017 ...

PRACE SHAPE 4th Call awards 4 new innovative European SME projects ...

Middleware

DDN collaborates with Synergy Solutions Management to offer video surveillance and HPC design, test and training at new Innovations Lab ...

GridGain Professional Edition 1.8 adds in-memory SQL Grid to industry-leading in-memory computing platform ...

Technique shrinks data sets for easier analysis ...

Hardware

Intersect360 Research to launch 9th Annual HPC Budget Map survey ...

DDN named a global leader in object storage by IDC ...

Mellanox 25G/100G Ethernet solutions enable Artificial Intelligence Speech Recognition Technology at iFLYTEK ...

Applications

BMW Group to start research with IBM Watson ...

Fincantieri selects IBM Cloud to meet growing international demand for more efficient shipbuilding ...

Better ranking for Big Data using seriation solution ...

Eye-popping view of CO2, critical step for carbon-cycle science ...

Supercomputer simulation reveals 2D glass can go infinitely soft ...

Extraordinary animation reveals ocean's role in El Niños ...

Pitt engineers receive $500,000 award from NASA to advance additive manufacturing ...

Rice and Baylor team sets new mark for 'deep learning' ...

Method enables machine learning from unwieldy data sets ...

Barrow identifies new genes responsible for ALS using IBM Watson Health ...

Global brain initiatives generate tsunami of neuroscience data ...

The Cloud

Q2 SaaS and PaaS Cloud revenues for Oracle up 81%, and up 89% in non-GAAP constant currency ...

Method enables machine learning from unwieldy data sets

16 Dec 2016 Cambridge - When data sets get too big, sometimes the only way to do anything useful with them is to extract much smaller subsets and analyze those instead.

Those subsets have to preserve certain properties of the full sets, however, and one property that's useful in a wide range of applications is diversity. If, for instance, you're using your data to train a machine-learning system, you want to make sure that the subset you select represents the full range of cases that the system will have to confront.

At the Conference on Neural Information Processing Systems, researchers from MIT's Computer Science and Artificial Intelligence Laboratory and its Laboratory for Information and Decision Systems presented a new algorithm that makes the selection of diverse subsets much more practical.

Whereas the running times of earlier subset-selection algorithms depended on the number of data points in the complete data set, the running time of the new algorithm depends on the number of data points in the subset. That means that if the goal is to winnow a data set with 1 million points down to one with 1,000, the new algorithm is 1 billion times faster than its predecessors.
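
To see where the quoted factor of one billion could come from - under our illustrative assumption that the dominant cost scales cubically, in the full set size N for the older methods and in the subset size k for the new one - compare N = 10^6 against k = 10^3:

```latex
\frac{N^3}{k^3} = \left(\frac{10^{6}}{10^{3}}\right)^{3} = \left(10^{3}\right)^{3} = 10^{9}
```

The paper's precise running-time bounds may scale differently; the essential point is that the cost now tracks the small subset rather than the full data set.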

"We want to pick sets that are diverse", stated Stefanie Jegelka, the X-Window Consortium Career Development Assistant Professor in MIT's Department of Electrical Engineering and Computer Science and senior author on the new paper. "Why is this useful? One example is recommendation. If you recommend books or movies to someone, you maybe want to have a diverse set of items, rather than 10 little variations on the same thing. Or if you search for, say, the word 'Washington'. There's many different meanings that this word can have, and you maybe want to show a few different ones. Or if you have a large data set and you want to explore - say, a large collection of images or health records - and you want a brief synopsis of your data, you want something that is diverse, that captures all the directions of variation of the data.

"The other application where we actually use this thing is in large-scale learning. You have a large data set again, and you want to pick a small part of it from which you can learn very well."

Joining Stefanie Jegelka on the paper are first author Chengtao Li, a graduate student in electrical engineering and computer science; and Suvrit Sra, a principal research scientist at MIT's Laboratory for Information and Decision Systems.

Traditionally, if you want to extract a diverse subset from a large data set, the first step is to create a similarity matrix - a huge table that maps every point in the data set against every other point. The intersection of the row representing one data item and the column representing another contains the points' similarity score on some standard measure.
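
As a concrete illustration, here is a minimal sketch of that first step in Python. The Gaussian (RBF) kernel and the bandwidth parameter `sigma` are our assumptions standing in for "some standard measure"; the article does not prescribe a particular similarity function.

```python
import numpy as np

def similarity_matrix(X, sigma=1.0):
    """Dense similarity table for X (n points, d features each).

    Uses a Gaussian (RBF) kernel as an illustrative similarity
    measure. Entry (i, j) is the similarity between points i and j.
    Both time and memory are O(n^2), which is exactly what becomes
    prohibitive once n reaches a million.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Example: 500 random 2-D points yield a 500 x 500 similarity matrix.
X = np.random.default_rng(0).normal(size=(500, 2))
S = similarity_matrix(X)
```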

There are several standard methods for extracting diverse subsets, but they all involve operations performed on the matrix as a whole. For a data set with a million points - and thus a million-by-million similarity matrix - that is prohibitively time consuming.

The MIT researchers' algorithm begins, instead, with a small subset of the data, chosen at random. Then it picks one point inside the subset and one point outside it and randomly selects one of three simple operations: swapping the points, adding the point outside the subset to the subset, or deleting the point inside the subset.

The probability with which the algorithm selects one of those operations depends on both the size of the full data set and the size of the subset, so it changes slightly with every addition or deletion. But the algorithm doesn't necessarily perform the operation it selects.

Again, the decision to perform the operation or not is probabilistic, but here the probability depends on the improvement in diversity that the operation affords. For additions and deletions, the decision also depends on the size of the subset relative to that of the original data set. That is, as the subset grows, it becomes harder to add new points unless they improve diversity dramatically.
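
Putting those three paragraphs together, here is a minimal, hypothetical sketch of such a chain in Python. It scores diversity as the log-determinant of the similarity submatrix - the determinantal (DPP) notion of diversity that the paper's title refers to - and uses a plain Metropolis acceptance rule. The size-dependent move probabilities described above are simplified to a uniform choice here, so this illustrates the idea rather than reproduces the paper's algorithm.

```python
import numpy as np

def logdet_diversity(S, subset):
    """Diversity score of a subset: the log-determinant of the
    similarity submatrix S[subset, subset]. This is the determinantal
    (DPP) notion of diversity; sets of mutually similar points score
    low, spread-out sets score high."""
    idx = sorted(subset)
    sign, logdet = np.linalg.slogdet(S[np.ix_(idx, idx)])
    return logdet if sign > 0 else -np.inf

def diverse_subset(S, start_size=5, n_steps=20000, seed=0):
    """Illustrative add/delete/swap Markov chain.

    Starts from a small random subset, then repeatedly proposes one of
    the three simple moves and accepts each proposal with probability
    min(1, exp(diversity gain)) - a Metropolis rule, so diversity-
    improving moves are always taken and diversity-reducing ones only
    sometimes. Each step needs similarities among at most
    len(subset) + 1 points, never the full n x n matrix; S is passed
    in precomputed here purely for brevity.
    """
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    subset = set(rng.choice(n, size=start_size, replace=False).tolist())
    current = logdet_diversity(S, subset)
    for _ in range(n_steps):
        outside_pool = sorted(set(range(n)) - subset)
        inside = int(rng.choice(sorted(subset)))
        legal = ["swap", "add"] if outside_pool else []
        if len(subset) > 1:
            legal.append("delete")
        move = rng.choice(legal)
        proposal = set(subset)
        if move in ("swap", "add"):
            proposal.add(int(rng.choice(outside_pool)))
        if move in ("swap", "delete"):
            proposal.remove(inside)
        candidate = logdet_diversity(S, proposal)
        # Accept improvements always, diversity-reducing moves sometimes.
        if np.log(rng.random()) < candidate - current:
            subset, current = proposal, candidate
    return sorted(subset)

# Example, reusing S from the previous sketch:
# picked = diverse_subset(S)  # indices of a diverse subset
```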

This process repeats until the diversity of the subset reflects that of the full set. Since the diversity of the full set is never calculated, however, the question is how many repetitions are enough. The researchers' chief results are a way to answer that question and a proof that the required number of repetitions stays manageable - the "fast mixing" of the paper's title.

The paper is titled "Fast mixing Markov chains for strongly Rayleigh measures, DPPs, and constrained sampling".

Source: Massachusetts Institute of Technology
