In the area of performance analysis, Jesús Labarta explained, we are often not aware of the actual issues, which makes quantitative measurement essential. The divergence between our "mental models" and the actual behaviour of a machine can be quite substantial, so extremely detailed analysis and visualization are needed.
Performance tools themselves constitute a Big Data application, so performance analytics is a subject of extensive research.
There are different types of analysis, Jesús Labarta explained, such as a precise and complete per-region hardware-counter (HWC) characterization, with all counters taken from a single run.
Tracking also reveals structural evolution: the sequence of frames is analysed with clustered scatterplots as the core counts increase.
There are also instantaneous metrics available at no extra cost, according to Jesús Labarta; the evolution of the instruction count in the copy_faces routine of NAS MPI BT.B is one example.
He also mentioned Dimemas with its powerful 'what if' scenarios.
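The core idea behind such what-if scenarios can be sketched as replaying a communication log under a simple abstract network model. The following is a minimal illustration, not Dimemas itself: the linear latency-plus-bandwidth cost model and all parameter values are assumptions for the sake of the example.

```python
def comm_time(size_bytes, latency_s, bandwidth_bps):
    """Cost of one transfer under a linear model: T = latency + size / bandwidth."""
    return latency_s + size_bytes / bandwidth_bps

def what_if(message_sizes, latency_s, bandwidth_bps):
    """Total communication time for a message log replayed on a
    hypothetical target platform."""
    return sum(comm_time(s, latency_s, bandwidth_bps) for s in message_sizes)

# Hypothetical message log (bytes), replayed on two imagined platforms
msgs = [1_000, 1_000_000, 64_000]
baseline = what_if(msgs, latency_s=5e-6, bandwidth_bps=1e9)
faster = what_if(msgs, latency_s=1e-6, bandwidth_bps=10e9)
print(f"baseline {baseline:.6f}s -> faster network {faster:.6f}s")
```

Comparing the two totals shows whether an application is latency-bound or bandwidth-bound on the imagined machine, which is the spirit of a what-if analysis.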
The research community also has great expectations for hybrid parallelization.
Jésus Labarta went on to talk about the power of integration and asked how the application could be improved: there is a 13% gain if the IPC of Cluster 1 is increased, and a 19% gain if Clusters 1 and 2 are balanced.
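The kind of projection behind such gain figures can be sketched as simple arithmetic over per-cluster compute times. The cluster durations and the IPC improvement factor below are hypothetical stand-ins, not the measured data from the talk.

```python
def projected_gain(cluster_times, cluster_id, ipc_factor):
    """Projected fractional runtime reduction if one cluster's IPC
    improves: that cluster's compute time shrinks by ipc_factor."""
    total = sum(cluster_times.values())
    improved = dict(cluster_times)
    improved[cluster_id] = cluster_times[cluster_id] / ipc_factor
    return 1.0 - sum(improved.values()) / total

# Hypothetical time spent in each computation cluster (seconds)
times = {"Cluster1": 40.0, "Cluster2": 35.0, "Cluster3": 25.0}
gain = projected_gain(times, "Cluster1", ipc_factor=1.5)
print(f"projected gain: {gain:.0%}")
```

This treats the clusters as independent, which is a simplification; a real projection would also account for synchronization between them.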
The basic performance model captures the fundamental behaviour of a parallel application in terms of load balance, serialization and transfer.
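A model of this shape can be sketched as a product of efficiency factors. The function names and the sample numbers are illustrative assumptions, not the exact formulation from the talk.

```python
def load_balance(useful_times):
    """Load-balance efficiency: average useful compute time across
    processes divided by the maximum (1.0 = perfectly balanced)."""
    return sum(useful_times) / (len(useful_times) * max(useful_times))

def parallel_efficiency(lb, serialization, transfer):
    """Multiplicative model: overall efficiency as the product of the
    load-balance, serialization and transfer factors."""
    return lb * serialization * transfer

# Hypothetical per-process useful compute times (seconds)
useful = [9.0, 10.0, 8.0, 9.0]
lb = load_balance(useful)  # 36 / 40 = 0.9
print(round(parallel_efficiency(lb, 0.95, 0.98), 3))
```

The value of such a decomposition is diagnostic: a low product immediately points to which of the three factors deserves the optimization effort.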
Jésus Labarta continued with the issue of semantics, used to eliminate network/MPI noise, and noted that HPC researchers are trying to model hypothetical target platforms.
Jésus Labarta made a comparison with the oil industry. What is important in the oil industry? Knowing where to drill, he stated.
Why do we do materials simulation and docking? We do it to discover the microscopic structure.
Jésus Labarta ended by saying that we really need good tools for performance analytics in order to gain the deep insight that makes optimization efforts productive.