Hi, my name is

Azad Md Abulkalam

I build AI for what matters — cardiac imaging, robotics, 3D reconstruction — and I'm ready to go beyond.

PhD Researcher at NTNU working at the intersection of computer vision, deep learning, and ultrasound medical imaging — developing novel methods for myocardial motion tracking and strain analysis in echocardiography.

01.

About Me

I am a PhD Candidate in the Department of Circulation and Medical Imaging (ISB) at NTNU, Norway, developing AI-driven methods for echocardiographic analysis. My work sits at the intersection of computer vision, deep learning, and clinical cardiology.

Before my PhD I completed an Erasmus Mundus M.Sc. in Marine and Maritime Intelligent Robotics (jointly at UTLN and NTNU), interned at SINTEF Digital on 3D reconstruction with Neural Radiance Fields (NeRF), and worked for 1.5 years as a Software Engineer at Samsung R&D on Augmented Reality and various SDKs.

I hold a B.Sc. in Computer Science and Engineering from United International University, Bangladesh, with an Erasmus Mundus exchange and bachelor's thesis at Universität Bremen, Germany.

Research

Computer Vision Deep Learning Medical Imaging Point Tracking Vision Transformers 3D Reconstruction NeRF Video Classification Underwater Imaging

AI/DL/CV/MLOps/Programming/Robotics

Python PyTorch OpenCV GitHub Scikit-learn CUDA C/C++ MLFlow W&B ROS Gradio Control Systems Docker FastAPI CI/CD Rust Kubernetes DVC
02.

Projects (Highlights)

🫀

EchoTracker

Designed a novel coarse-to-fine deep learning architecture for long-range myocardial point tracking in 2D echocardiography—enabling automated cardiac motion analysis and enhancing clinical strain assessment. Try the Hugging Face demo to see it in action.

Python·PyTorch·Deep Learning·Computer Vision·Medical Imaging
📍

PTEcho

Identified directional bias in cardiac motion estimation, developed effective mitigation strategies, and fine-tuned state-of-the-art models—resulting in improved robustness and generalization across diverse echocardiography scenarios.

Python·PyTorch·CoTracker3·PIPs++·SpeckNet·EchoTracker·Benchmarking
🌊

MViST

Developed a multi-label video classification framework for automated underwater ship hull inspection—enabling simultaneous detection of multiple conditions in underwater video data collected by an ROV and advancing automation in subsea robotics and industrial inspection workflows.

Computer Vision·Underwater Robotics·ViT·Transformer·Video Understanding
🧊

NeRF 3D Reconstruction

Adapted Neural Radiance Fields to reconstruct detailed 3D models of challenging objects from monocular image sequences captured by a robotic camera. Explored scene representation, volumetric rendering, and novel-view synthesis — work conducted at SINTEF Digital that led directly to the master's thesis.

Python·PyTorch·NeRF·3D MoMa·3D Vision·Volume Rendering
🎭

Galaxy AR Emoji SDK

Built and documented an end-to-end Unity integration for Samsung's Galaxy AR Emoji SDK — enabling developers to drive custom avatar animations at runtime using Android intents, Animator controllers, and external rigs from tools like Mixamo. Published as an official blog post on Samsung Developers.

Unity·C#·AR Emoji SDK·Android·Mixamo·Samsung R&D
🛒

Samsung IAP for Unity

Designed and developed a reference Unity game demonstrating full Samsung In-App Purchase integration — covering plugin setup, consumable and non-consumable purchase flows, receipt validation, and item consumption via the Galaxy Store backend. Published as an official blog post on Samsung Developers.

Unity·C#·Samsung IAP SDK·Galaxy Store·Android·Samsung R&D
🎮

Galaxy Store via Unity UDP

Built a Unity game pipeline demonstrating end-to-end publishing to Galaxy Store through the Unity Distribution Portal — integrating IAP, configuring UDP repacking, and producing store-ready APKs without per-store SDK rewrites. Published as an official blog post on Samsung Developers.

Unity·UDP·Galaxy Store·Samsung IAP·Android·Samsung R&D

Many more academic & toy projects from my M.Sc. and B.Sc. years live on GitHub ↗

03.

Research & Development

NTNU — PhD Research (2023 — Present)

Advancing Myocardial Function Imaging in Echocardiography using Vision Intelligence

Task 01
Done

Long-range Myocardial Point Tracking

Designed a coarse-to-fine deep learning model for long-range tissue motion estimation across full cardiac cycles in echocardiographic sequences. Achieved 67% average position accuracy and a 2.86 px median trajectory error, with a 25% relative improvement in GLS computation. Published at MICCAI 2024.

Paper arXiv Code
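As background on the GLS metric above: global longitudinal strain measures the relative shortening of the myocardial contour between end-diastole and end-systole. A minimal sketch of that computation from tracked contour points, with illustrative function and variable names that are not from the published code:

```python
import numpy as np

def contour_length(points):
    """Arc length of a polyline given as an (N, 2) array of (x, y) points."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def gls_percent(ed_points, es_points):
    """Global longitudinal strain: relative length change from
    end-diastole (L0) to end-systole (L), in percent."""
    l0 = contour_length(ed_points)
    l = contour_length(es_points)
    return 100.0 * (l - l0) / l0

# Toy example: a straight 10-unit contour shortening to 8 units.
ed = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
es = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0]])
print(gls_percent(ed, es))  # → -20.0
```

Healthy hearts shorten along the long axis, so GLS is conventionally negative; point tracking supplies the per-frame contour positions that make this fully automatic.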
Task 02
Done

Bias-Aware Multi-view Speckle Tracking

Identified systematic directional motion bias in modern point tracking models across cardiac views, and proposed impartial motion training with tailored augmentations to correct it. Achieved a 60.7% boost in position accuracy and 61.5% reduction in trajectory error, with improved GLS alignment to expert tools. Published at ICCV 2025 Workshop CVAMD.

arXiv
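The exact "impartial motion" training scheme is described in the paper; purely as an illustration of the general idea, one generic way to strip a preferred motion direction from training data is to randomly reverse time and mirror clips together with their point trajectories. All names below are hypothetical:

```python
import torch

def impartial_flip(video, tracks, p=0.5):
    """Randomly reverse time and mirror horizontally so the training
    distribution contains no preferred motion direction.

    video:  (T, C, H, W) clip
    tracks: (T, N, 2) point trajectories in (x, y) pixel coordinates
    """
    T, C, H, W = video.shape
    if torch.rand(()) < p:                      # temporal reversal
        video = video.flip(0)
        tracks = tracks.flip(0)
    if torch.rand(()) < p:                      # horizontal mirror
        video = video.flip(-1)
        tracks = tracks.clone()
        tracks[..., 0] = (W - 1) - tracks[..., 0]
    return video, tracks
```

Flipping the trajectories together with the frames keeps ground truth consistent, so the model sees every motion direction with equal frequency.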
Task 03
Plan

AI-Assisted Human-in-the-Loop

Explore an AI-assisted, human-in-the-loop system that enhances strain measurement accuracy and supports efficient clinical data annotation workflows.

Task 04
Ongoing

Clinical Validation

Validate the fully automated strain estimation pipeline across relevant patient cohorts to establish clinical applicability and reliability for cardiac diagnosis.

SINTEF Digital & NTNU — Master's Thesis (2022 — 2023)

Multi-label Video Classification for Underwater Ship Inspection

Part of LIACI — Label Inspection of Autonomous and Cognitive ships · SINTEF Digital & NTNU

Task 01
Done

Dataset & Annotation Framework

Curated and annotated an ROV-collected underwater ship hull video dataset with multi-label ground truth covering concurrent defect categories — corrosion, marine growth, and coating degradation — establishing the benchmark for the full study.

Task 02
Done

Spatiotemporal Transformer Architecture

Designed MViST — a Vision Spatiotemporal Transformer that fuses spatial per-frame features with temporal self-attention across consecutive video frames, enabling simultaneous detection of multiple hull conditions from ROV footage.
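The full MViST architecture is specified in the OCEANS 2023 paper; the general pattern it follows — per-frame spatial features, temporal self-attention across frames, and a multi-label head with one sigmoid per hull condition — can be sketched roughly as below, with a toy CNN standing in for the real backbone and all dimensions chosen arbitrarily:

```python
import torch
import torch.nn as nn

class SpatioTemporalClassifier(nn.Module):
    """Generic sketch: per-frame spatial features, temporal self-attention,
    and a multi-label head (one logit per hull condition)."""

    def __init__(self, feat_dim=128, num_labels=4, num_heads=4):
        super().__init__()
        # Tiny per-frame CNN stands in for a ViT/CNN backbone.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # (B*T, feat_dim)
        )
        self.temporal = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, num_labels)

    def forward(self, clip):                  # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        f = self.spatial(clip.flatten(0, 1)).view(B, T, -1)
        f = self.temporal(f)                  # self-attention across frames
        return self.head(f.mean(dim=1))       # logits; sigmoid for probs

logits = SpatioTemporalClassifier()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # → torch.Size([2, 4])
```

Because the head emits independent logits rather than a softmax, several conditions (e.g. corrosion and marine growth) can be detected in the same clip.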

Task 03
Done

Model Training & Benchmarking

Trained and evaluated MViST against CNN and single-frame ViT baselines, demonstrating superior multi-label classification performance and robustness under challenging underwater visibility conditions.

Task 04
Done

Publication & Dissemination

Published findings at OCEANS 2023, Limerick (IEEE) and presented at the NORA Annual Conference 2023, Tromsø — in collaboration with SINTEF Digital, NTNU, and the LIACI project team.

SINTEF Digital — Summer Research (2022)

Neural Radiance Field (NeRF) for 3D Object Reconstruction

Part of DeepStruct — 3D imaging of transparent & challenging objects for logistics automation · Client: Zivid

SINTEF Digital — NeRF Summer Research 2022
▶ Watch on YouTube — Recorded by SINTEF Digital
Task 01
Done

NeRF Theory & Instant NGP

Studied the Neural Radiance Field framework in depth: 5D coordinate-based continuous scene representation, volume rendering, positional encoding, and hierarchical volume sampling. Evaluated NVIDIA's Instant NGP with multiresolution hash encoding, which reduces training from hours to seconds while maintaining reconstruction quality.
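The two core NeRF ingredients mentioned above are compact enough to write down. A minimal NumPy sketch of the sinusoidal positional encoding and the quadrature volume-rendering weights, with illustrative function names:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """NeRF-style gamma(x): (sin(2^k * pi * x), cos(2^k * pi * x)) for k < L,
    applied per input coordinate."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # (L,)
    angles = x[..., None] * freqs                   # (..., L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def render_weights(sigma, deltas):
    """Quadrature weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha

# A single opaque sample absorbs nearly all remaining light.
w = render_weights(np.array([0.0, 50.0, 0.0]), np.array([0.1, 0.1, 0.1]))
# nearly all weight lands on the opaque middle sample (≈ 0.993)
```

The rendered pixel colour is the weight-sum of the per-sample colours; Instant NGP keeps these equations and replaces the sinusoidal encoding with a learned multiresolution hash encoding.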

Task 02
Done

Custom Dataset & Training Pipeline

Built a multi-view capture and processing pipeline for custom industrial objects. Used COLMAP structure-from-motion for camera pose estimation from high-resolution images, and deployed Instant NeRF training on both a Windows workstation and a Linux HPC cluster.

Task 03
Done

3D Reconstruction of Challenging Objects

Reconstructed 3D meshes of industrial objects including transparent and reflective surfaces — a core challenge for logistics automation. Studied the impact of image resolution, capture sparsity and direction, background removal quality, and marching cube resolution on the final mesh accuracy.

Task 04
Done

Benchmarking vs NVIDIA 3D MoMa

Compared Instant NeRF against NVIDIA's 3D MoMa (CVPR 2022) — which simultaneously extracts mesh, textures, and lighting — on the same object. NeRF achieved superior reconstruction quality in minutes with minimal GPU memory, versus hours for 3D MoMa, establishing it as the preferred approach for the project.

Samsung R&D Bangladesh — R&D Ops (2020 — 2021)

SDK Developer Evangelism & Multi-platform Engineering

Products: Galaxy Watch Face · Samsung IAP · AR Emoji SDK · Samsung DeX · Galaxy Store · Unity UDP

60%

Developer Evangelism, Cross-platform SDK Support & Pre-release QA

Served as the primary technical contact for third-party developers integrating Samsung SDKs — covering Galaxy Watch Face, Samsung IAP, AR Emoji SDK for Unity, Samsung DeX, Galaxy Store, and Unity Distribution Portal (UDP). Diagnosed and resolved integration issues across Android, Unity, Windows, and Tizen platforms, rapidly switching between C#, Java, Kotlin, and C++ as needed. Also executed structured pre-release validation cycles before every new SDK version — verifying previously reported issues were resolved, confirming backward compatibility, and stress-testing new features across target platforms.

20%

Technical Content & Developer Education

Authored integration guides and feature spotlights published on Samsung Developers — translating complex SDK internals into clear, step-by-step tutorials for external developers. Writing spanned Unity plugin setup, in-app purchase flows, AR Emoji customisation pipelines, and DeX multi-window adaptation patterns, directly reducing support ticket volume for new releases.

20%

Internal R&D & Engineering Excellence

Dedicated a portion of work time to Samsung-internal research initiatives and participated in internal professional programming competitions — maintaining high engineering standards and staying current with emerging platform capabilities. This cadence of structured research and competitive programming sharpened algorithmic problem-solving skills applied directly to SDK architecture decisions.

04.

Publications

Also on Google Scholar · ResearchGate · ORCID

2025
Taming Modern Point Tracking for Speckle Tracking Echocardiography via Impartial Motion

Azad M.A., Nyberg J., Dalen H., Grenne B., Lovstakken L., Østvik A.

ICCV 2025 Workshop CVAMD · Computer Vision for Automated Medical Diagnosis
2025
Low Complexity Point Tracking of the Myocardium in 2D Echocardiography

Chernyshov A., Nyberg J., Holmstrøm V., Azad M.A., Grenne B., Dalen H., Aase S.A., Lovstakken L., Østvik A.

IEEE Access 2025
2024
EchoTracker: Advancing Myocardial Point Tracking in Echocardiography

Azad M.A., Chernyshov A., Nyberg J., Tveten I., Lovstakken L., Dalen H., Grenne B., Østvik A.

MICCAI 2024 · Early Accept (Top 11%)
2023
Multi-label Video Classification for Underwater Ship Inspection

Azad M.A., Mohammed A., Waszak M., Elvesæter B., Ludvigsen M.

OCEANS 2023 Limerick, Ireland
2021
Layered Ensemble Learning for Effective Binary Classification

Azad M.A., Islam S., Farid D.M., Shatabda S.

IEMIS 2020 Springer Singapore
2019
Big Data with Decision Tree Induction

Sabah S., Anwar S.Z.B., Afroze S., Azad M.A., Shatabda S., Farid D.M.

SKIMA 2019 Maldives
05.

Experience & Education

Work Experience

2023 — Present
PhD Researcher
NTNU · Trondheim, Norway
  • Deep learning for myocardial point tracking and strain estimation in echocardiography
  • Published at MICCAI 2024 (top 11% early accept)
  • Open-source implementation with 56+ GitHub stars
Aug 2022 — Jun 2023
Master's Thesis Intern — Computer Vision
SINTEF Digital · Oslo, Norway
  • Multi-label video classification for underwater ship hull inspection
  • Designed MViST — a Vision Spatiotemporal Transformer for concurrent defect detection
  • Published at OCEANS 2023, Limerick (IEEE)
Jun 2022 — Aug 2022
Summer Research Intern — Computer Vision
SINTEF Digital · Oslo, Norway
  • 3D reconstruction of challenging objects using Neural Radiance Fields (NeRF)
  • Part of the DeepStruct project — client: Zivid
2019 — 2021
Software Engineer
Samsung R&D Bangladesh · Dhaka, Bangladesh
  • AR Emoji, Galaxy Watch Face, Samsung IAP, Samsung DeX SDK development
  • Augmented reality and Samsung ecosystem features

Education

Sep 2023 — 2026
Ph.D. in AI for Ultrasound Medical Imaging
NTNU · Trondheim, Norway
Thesis: Advancing Myocardial Function Imaging in Echocardiography using Vision Intelligence
Aug 2022 — Jun 2023
M.Sc. Marine & Maritime Intelligent Robotics
NTNU · Trondheim, Norway  Erasmus Mundus · M2
Specialization: Deep Learning & Computer Vision · Thesis: Multi-label Video Classification for Underwater Ship Inspection (SINTEF Digital)
Sep 2021 — Jun 2022
M.Sc. Engineering of Complex Systems
Université de Toulon · Toulon, France  Erasmus Mundus · M1
Specialization: AI — Deep / Machine / Reinforcement Learning
Feb 2014 — May 2019
B.Sc. Computer Science & Engineering
United International University · Dhaka, Bangladesh
Major: Machine Learning & Pattern Recognition
Sep 2016 — Jul 2017
Erasmus Mundus Exchange & Bachelor's Thesis
Universität Bremen · Bremen, Germany  Erasmus Mundus
06.

Talks

Mar 2024
EchoTracker: Advancing Myocardial Point Tracking in Echocardiography
CIUS Final Conference 2024 · Trondheim, Norway
Feb 2024
Speckle Tracking Echocardiography using Point Tracking: A Paradigm Shift?
Leuven Meeting on Myocardial Function Imaging 2024 · KU Leuven, Belgium
Aug 2023
From Scholar to Explorer: A Tale of Underwater Robotics and Erasmus Mundus Scholarship
United International University · Dhaka, Bangladesh
Jun 2023
Multi-label Video Classification for Underwater Ship Inspection
OCEANS 2023 Conference & Exposition · Limerick, Ireland
Jun 2023
MViST: A Multi-label Vision Spatiotemporal Transformer for Underwater Ship Inspection
NORA Annual Conference 2023 · Tromsø, Norway
07.

Get In Touch

I'm always open to research collaborations, new opportunities, and conversations about applying AI and computer vision to real-world industry challenges. Whether you're building something exciting or just want to connect — feel free to reach out!

Say Hello →