Table 5. Summary of evaluation indicators

Paper   Experimental evaluation indicators
[50] Amount of successful reverse
[51] Accuracy, Precision, Recall, F1 score
[52] CFG Soundness, CFG Completeness, Analysis Time
[53] Match Accuracy, Template Accuracy, Transfer Accuracy
[54] Accuracy of Extraction, Time Overhead, Memory Overhead
[57] The highest rate of precise clusters, Correctness, Conciseness
[58] Conciseness, Coverage
[59] Purity, F1 score, recall
[60] Homogeneity Score, Completeness Score
[61] Correctness, Conciseness
[62] Correctness, Perfection
[64] Precision, Recall, F1 score, Coverage
[65] Sub-message alignment matching score
[67] Keyword Extraction Correctness
[68] The total average match rate, Exact match ratio
[69] Match ratio, DTA time, Offline time
[71] Precision, Recall, F1 score
[72] Coverage (Cov), Accuracy over Coverage (AoC)
[73] Correctness, Effectiveness
[75] Homogeneity, Completeness, V-measure
[76] Correctness, Perfection, F1 Score
[77] Field split accuracy, Semantic recognition accuracy
[78] Coverage, Conciseness, Perfection
[79] Calculation Amount, Accuracy
[80] Precision, Recall, F1 score
[81] Precision, Recall, F1 score, Conciseness, Efficiency
[82] Number of nodes, Number of edges, Graph density, Number of recurring patterns
[83] Classification Accuracy
[84] States, Membership queries, Equivalence queries, Learning time
[46] Detection accuracy, Detection time
[48] Scan cycle time
[49] Accuracy, Sensitivity, Specificity
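Several indicators recur across the surveyed papers: precision, recall, F1 score, and the clustering-oriented V-measure (the harmonic mean of homogeneity and completeness, as in [75]). The following sketch shows how these are conventionally computed; the function names and the example counts are illustrative and not taken from any of the cited works.

```python
# Illustrative sketch of metrics that recur in Table 5.
# Function names and example counts are hypothetical.

def precision(tp: int, fp: int) -> float:
    """Fraction of reported items that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth items that were found."""
    return tp / (tp + fn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def v_measure(homogeneity: float, completeness: float) -> float:
    """Harmonic mean of homogeneity and completeness scores."""
    return 2 * homogeneity * completeness / (homogeneity + completeness)

# A hypothetical field-inference run with 8 true positives,
# 2 false positives, and 4 false negatives:
# precision(8, 2) -> 0.8
# recall(8, 4)    -> 0.666...
# f1_score(8, 2, 4) -> 0.727...
```

Note that F1 equals 2·TP / (2·TP + FP + FN), so a method can trade precision against recall without changing F1 only along that curve.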
