A network-based response feature matrix as a brain injury metric

  • Original Paper
  • Published in Biomechanics and Modeling in Mechanobiology

Abstract

Conventional brain injury metrics are scalars that treat the whole head/brain as a single unit but do not characterize the distribution of brain responses. Here, we establish a network-based “response feature matrix” to characterize the magnitude and distribution of impact-induced brain strains. The network nodes and edges encode injury risks to the gray matter regions and their white matter interconnections, respectively. The utility of the metric is illustrated in injury prediction using three independent, real-world datasets: two reconstructed impact datasets from the National Football League (NFL) and Virginia Tech, respectively, and measured concussive and non-injury impacts from Stanford University. Injury predictions with leave-one-out cross-validation are conducted using the two reconstructed datasets separately, and then using all datasets combined into one. Using a support vector machine, the network-based injury predictor consistently outperforms four baseline scalar metrics, including peak maximum principal strain of the whole brain (MPS), peak linear/rotational acceleration, and peak rotational velocity, across all five selected performance measures (e.g., a maximized accuracy of 0.887 vs. 0.774 and 0.849 for MPS and rotational acceleration, with corresponding positive predictive values of 0.938, 0.772, and 0.800, respectively, using the reconstructed NFL dataset). With sufficient training data, real-world injury prediction is similar to leave-one-out in-sample evaluation, suggesting a potential advantage of the network-based injury metric over conventional scalar metrics. The network-based response feature matrix significantly extends scalar metrics by sampling the brain strains more completely, and it may serve as a useful framework for other applications such as characterizing injury patterns or facilitating targeted multi-scale modeling in the future.
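To make the idea concrete, the sketch below shows one possible realization of the abstract's workflow in Python. It is not the authors' implementation: the node/edge strain inputs, feature dimensions, and labels are synthetic placeholders, and scikit-learn's SVC and LeaveOneOut stand in for whatever SVM and cross-validation machinery the study actually used.

```python
# A minimal sketch (not the authors' code): build a per-impact feature vector from
# network node strains (gray matter regions) and edge strains (white matter
# interconnections), then evaluate an SVM classifier with leave-one-out
# cross-validation. All data below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

def feature_vector(node_strains, edge_strains):
    """Concatenate node features and edge features into one predictor vector."""
    return np.concatenate([node_strains, edge_strains])

def loo_accuracy(X, y):
    """Leave-one-out cross-validation accuracy for a linear-kernel SVM."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = SVC(kernel="linear", class_weight="balanced")
        clf.fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

# Synthetic example: 58 impacts, 50 node strains and 100 edge strains each.
rng = np.random.default_rng(0)
X = np.array([feature_vector(rng.random(50), rng.random(100)) for _ in range(58)])
y = rng.integers(0, 2, size=58)  # 1 = concussion, 0 = non-injury
print(f"Leave-one-out accuracy: {loo_accuracy(X, y):.3f}")
```

In practice, the node and edge features would come from region-wise peak strains computed by a finite element head model, and class weighting or feature selection would likely be needed given the small, imbalanced impact datasets.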


Acknowledgements

Funding was provided by NIH Grant R01 NS092853. The authors are grateful to Dr. David B. Camarillo at Stanford University for data sharing. They also thank Dr. Zheyang Wu at Worcester Polytechnic Institute for help with statistical analysis.

Author information

Corresponding author

Correspondence to Songbai Ji.

Ethics declarations

Conflict of interest

We have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Out-of-sample prediction performance

The network-based injury metric was retrained using the entire NFL or VT dataset. Tables 4 and 5 report the out-of-sample performances when predicting injury occurrences for impacts in the other reconstructed dataset and in the SF dataset. For each injury predictor trained on the same training dataset, the injury prediction performance depended on the testing dataset used. For example, the network-based metric trained on the NFL dataset to maximize accuracy achieved a rather poor sensitivity with the VT dataset (0.091, i.e., only one of the 11 concussion cases was correctly predicted) but a perfect score of 1.000 for the SF dataset (both concussions correctly predicted).
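The out-of-sample protocol described above can be sketched as follows. This is an assumed workflow for illustration only, not the study's actual pipeline: dataset sizes, feature dimensions, and the linear-kernel SVC are placeholders.

```python
# Illustrative out-of-sample evaluation (assumed workflow, not the authors' code):
# retrain a predictor on one entire dataset, then predict impacts from another.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score, precision_score

def out_of_sample(X_train, y_train, X_test, y_test):
    clf = SVC(kernel="linear", class_weight="balanced")
    clf.fit(X_train, y_train)        # train on the entire training dataset
    y_pred = clf.predict(X_test)     # predict the independent testing dataset
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "sensitivity": recall_score(y_test, y_pred),              # concussions caught
        "ppv": precision_score(y_test, y_pred, zero_division=0),  # positive predictive value
    }

# Placeholder arrays standing in for, e.g., NFL-trained features and VT test impacts;
# the sizes here are arbitrary and not the actual dataset sizes.
rng = np.random.default_rng(1)
X_nfl, y_nfl = rng.random((58, 150)), rng.integers(0, 2, size=58)
X_vt, y_vt = rng.random((53, 150)), rng.integers(0, 2, size=53)
print(out_of_sample(X_nfl, y_nfl, X_vt, y_vt))
```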

Table 4 Out-of-sample prediction performances using NFL-trained predictors to predict injury in the VT and SF datasets
Table 5 Out-of-sample prediction performances using VT-trained predictors to predict injury in the NFL and SF datasets

On the other hand, when using the same SF dataset for testing, the performance of each injury metric in most categories also depended on the training dataset used. For example, the best performer in terms of out-of-sample prediction accuracy was \(v_{\text{rot}}\) when trained on the NFL dataset, but it was \(a_{\text{lin}}\) when trained on the VT dataset (accuracy of 0.936 vs. 0.927; Tables 4 and 5). The dependency on the training dataset was also evident when comparing the injury thresholds established. Using those from the NFL dataset as baselines, the injury thresholds for MPS, \(a_{\text{lin}}\), \(a_{\text{rot}}\), and \(v_{\text{rot}}\) differed by −24.2%, +28.8%, −9.9%, and −18.3%, respectively.
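The threshold comparison above is a simple relative-difference calculation; a brief worked example is given below with hypothetical threshold values, since the actual NFL- and VT-trained thresholds are not listed in this excerpt.

```python
# Worked example of the relative-difference calculation used above.
# The threshold values are hypothetical placeholders.
def relative_difference(threshold_vt, threshold_nfl):
    """Percentage difference of a VT-trained threshold relative to the NFL baseline."""
    return 100.0 * (threshold_vt - threshold_nfl) / threshold_nfl

# e.g., a hypothetical rotational-velocity threshold of 22.0 rad/s (VT) vs. 28.0 rad/s (NFL)
print(f"{relative_difference(22.0, 28.0):+.1f}%")  # -> -21.4%
```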

Interestingly, the majority of “best performers” were baseline kinematic variables across many categories. For example, \(a_{\text{rot}}\) trained on the NFL dataset achieved the best prediction accuracy and testing AUC for the VT dataset (Table 4), and the same variable trained on the VT dataset also achieved the best accuracy and AUC when predicting the NFL impacts (Table 5). However, the same predictor did not achieve the best accuracy or AUC when predicting impacts in the SF dataset, regardless of which training dataset was used. This observation confirmed that injury prediction performance depended on the testing dataset. Further, kinematic variables could also yield the worst performances. For example, training \(a_{\text{lin}}\) on the VT dataset led to a sensitivity of zero when predicting injury in the SF dataset (i.e., neither of the two concussions was correctly predicted).

To summarize, these findings suggest that the performance of a specific injury predictor depends not only on the injury metric itself but also on the training and testing datasets. Therefore, out-of-sample injury prediction is effectively a “three-way” fitting process.


About this article

Cite this article

Wu, S., Zhao, W., Rowson, B. et al. A network-based response feature matrix as a brain injury metric. Biomech Model Mechanobiol 19, 927–942 (2020). https://doi.org/10.1007/s10237-019-01261-y
