§ Research

Research notes.

Current research on non-invasive intracranial pressure estimation, the senior thesis it grew out of, and earlier work on clinical-data ML.

§ I. Current research
№ 01

Non-invasive ICP estimation in pediatric neurocritical care

2025 – Present · ongoing research · PyTorch · LSTM, TCN, Transformer, Hybrid · LOPO cross-validation

Intracranial pressure is one of the more consequential numbers in a neurocritical-care unit, and getting it usually means a catheter through the skull. The work asks a simple question: how close can we get to that number from waveforms a bedside monitor already records?

Two channels, arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV), resampled to 125 Hz and cut into 10-second windows with a 2-second stride. Trained on PhysioNet's pediatric neurocritical-care waveform release (12 patients, ~18,500 windows), with PhysioNet CHARIS as the adult cross-dataset validation set. Validation is leave-one-patient-out.
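The slicing itself is mechanical enough to sketch. The function name and array layout below are illustrative, assuming both signals are already synchronized and resampled to 125 Hz:

```python
import numpy as np

def make_windows(abp, cbfv, fs=125, win_s=10, stride_s=2):
    """Slice two synchronized signals (already at fs Hz) into
    overlapping windows: win_s-second windows, stride_s-second stride.
    Returns an array of shape (n_windows, 2, fs * win_s)."""
    win, stride = fs * win_s, fs * stride_s
    n = min(len(abp), len(cbfv))
    starts = range(0, n - win + 1, stride)
    return np.stack([[abp[s:s + win], cbfv[s:s + win]] for s in starts])
```

At these settings a minute of signal (7,500 samples) yields 26 windows of 1,250 samples per channel, each overlapping its neighbor by 8 seconds.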

Best LSTM (LOPO): MAE 2.92 mmHg, median AE 1.97 mmHg, RMSE 4.03 mmHg, bias −1.46 mmHg.
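The four numbers above are all summaries of the same per-window errors. A minimal sketch of that summary, where bias is mean signed error (prediction minus reference, in mmHg):

```python
import numpy as np

def icp_metrics(y_true, y_pred):
    """Summarize window-level ICP errors: MAE, median absolute error,
    RMSE, and bias (mean signed error, prediction minus reference)."""
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return {
        "mae": float(np.mean(np.abs(err))),
        "median_ae": float(np.median(np.abs(err))),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "bias": float(np.mean(err)),
    }
```

Under LOPO, these are computed on the held-out patient's windows only, then aggregated across held-out patients.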

Supervisor: Prof. Zubaer Ibna Mannan, Smart Computing, Kyungdong University

§ II. Senior thesis
№ 01

Temporal heart-defect prediction (CheXchoNet)

2023 – 2024 · senior thesis · PyTorch · CNN with temporal metadata · CheXchoNet

Predicting heart defects from chest X-rays alongside the temporal metadata that comes with them, rather than treating each frame in isolation. A CNN trained on the CheXchoNet release reached AUCs between 0.79 and 0.84 in cross-validation. The approach pointed me toward waveform-level signals, which is where the ICP work picks up.
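One plausible shape for that kind of model, sketched small: a CNN embeds the X-ray, and the temporal metadata is concatenated with the embedding before the classifier head. The layer sizes and metadata fields here are assumptions for illustration, not the thesis architecture:

```python
import torch
import torch.nn as nn

class ImageMetaNet(nn.Module):
    """Toy image + metadata fusion: a small CNN encodes the X-ray,
    then n_meta scalar metadata features (e.g. time since prior study)
    are concatenated with the image embedding before the final head."""
    def __init__(self, n_meta=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 16)
        )
        self.head = nn.Sequential(
            nn.Linear(16 + n_meta, 32), nn.ReLU(),
            nn.Linear(32, 1),  # one logit per study
        )

    def forward(self, image, meta):
        return self.head(torch.cat([self.cnn(image), meta], dim=1))
```

The design point is just that the metadata enters late, after the image is summarized, so the CNN stays a standard image encoder.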

Supervisor: Prof. Zubaer Ibna Mannan, Smart Computing, Kyungdong University

§ III. Earlier work
№ 01

Cardiovascular disease prediction

2023 · Scikit-learn · PyTorch · Cleveland Clinic dataset

First real ML research project. A binary classifier over the Cleveland Clinic dataset, predicting cardiovascular disease from a small set of patient features. The interesting work was less the model and more the surrounding craft: train/test splitting, feature engineering, calibration, and learning to write evaluation that doesn't lie to you. Final model reached 96% test accuracy.
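The split-first discipline that paragraph is about can be sketched with scikit-learn. The features below are synthetic stand-ins (the processed Cleveland data has 13 clinical features and a binary label); everything else is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 13 Cleveland features and a binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Split before touching the data; stratify to keep class balance.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Scaling lives inside the pipeline, so it is fit on the training
# fold only -- no statistics leak from the test set.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Putting the scaler inside the pipeline is the small habit that keeps the evaluation honest: the same code path is correct under a plain split or under cross-validation.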