Welcome to CredenceX AI Research Lab

Building Intelligent Systems That Sense, Reason, and Assist

CredenceX AI Research Lab develops data-driven solutions for medical imaging to support early warning, clinical decision support, and risk-aware deployment in high-stakes environments. We aim to reduce disparities by ensuring our models perform reliably across devices and diverse patient populations. We also provide clear, clinically meaningful explanations and uncertainty estimates, helping clinicians and patients know when to trust an output and when to be cautious.

Core Research Areas at CredenceX

Exploring cutting-edge technologies to build safer, smarter, and more trustworthy AI systems for tomorrow

01

Vision + Language for Healthcare

Link images, reports, and clinical data to power smarter image understanding and decision support.

02

Explainable Imaging AI

Make predictions easier to understand—with outputs designed for real clinical workflows.

03

Trustworthy & Risk-Aware AI

Safer AI through reliability testing, uncertainty estimates, and auditable explanations.

04

Efficient AI at the Edge

Lightweight models built for real-time use on mobile, web, and resource-limited devices.

05

Human-in-the-Loop Decision Support

Risk-aware outputs and clear evidence—so clinicians stay in control of the final call.

06

Robust Across Sites & Scanners

Built to generalize across hospitals, devices, and populations—reducing real-world failure.

Latest News from CredenceX

Stay updated with our recent achievements, announcements, and research breakthroughs

Announcement
2 min read

Conference Service: Organizer Role at IIMCSE 2026

By CredenceX Research Team

Contributing to the research community through conference organization and technical coordination.

Event
3 min read

Invited Talk: ROBOTICS-2026 (Rome) — Multimodal AI and Real-World Deployment

By CredenceX Research Team

Our team will present ongoing research on multimodal learning, explainability, and deployment-ready AI systems.

Event
3 min read

Invited Talk: BEACONGRESS2026 (Portugal) — Trustworthy Clinical AI

By CredenceX Research Team

Sharing progress on explainable decision support and calibration-aware medical AI workflows.

Award
3 min read

Best Paper Award: IEEE PEEIACON 2025

By CredenceX Research Team

Awarded for research excellence at the 2025 IEEE International Conference on Power, Electrical, Electronics and Industrial Applications (PEEIACON).

Award
3 min read

Best Paper Award: IEEE BECITHCON 2025

By CredenceX Research Team

Honored for impactful applied AI research at the 2025 IEEE BECITHCON conference.

Featured Projects at CredenceX

Showcasing our cutting-edge research projects that push the boundaries of AI innovation and deliver real-world impact

Trustworthy & Calibrated AI
completed

DepTformer-XAI-SV

2025

A reproducible, explainable transformer pipeline for depression emotion/severity experiments, including ablations, XAI faithfulness checks, and a minimal Flask demo (research use only).

PyTorch · Python · Flask · Docker
Explainable Medical Image Intelligence
completed

Explainable Lung Cancer Diagnosis

2025

Lightweight hybrid CNN–Transformer (MobileViT + attention + texture cues) for efficient and explainable lung cancer diagnosis on CT/histopathology with Grad-CAM and robust evaluation support.

PyTorch · MobileViT · CBAM · Grad-CAM
Multimodal Vision–Language Foundation Models
completed

Multimodal Information Fusion

2025

A modular pipeline for audio-visual object recognition using hybrid, tensor, and FiLM-style fusion with flexible feature extraction and noise-robust training options.

Python · FiLM Fusion · Xception · xLSTM
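The FiLM-style fusion mentioned above can be sketched in a few lines. This is an illustrative toy example, not the project's actual code: one modality (here, audio) predicts a per-feature scale (gamma) and shift (beta) that modulate the other modality's (here, visual) features. The `conditioner` "network" below uses fixed arithmetic where a real model would learn weights.

```python
# Minimal FiLM-style fusion sketch (illustrative; not the project's code).
# One modality predicts per-feature scale (gamma) and shift (beta) that
# modulate the other modality's feature vector.

def film_fuse(visual_feats, gamma, beta):
    """Feature-wise linear modulation: out_i = gamma_i * x_i + beta_i."""
    assert len(visual_feats) == len(gamma) == len(beta)
    return [g * x + b for x, g, b in zip(visual_feats, gamma, beta)]

def conditioner(audio_feats):
    """Toy conditioning: derive gamma/beta from an audio embedding.
    A real model would learn this mapping; here it is fixed arithmetic."""
    s = sum(audio_feats)
    gamma = [1.0 + 0.1 * s for _ in audio_feats]  # scale near identity
    beta = [0.01 * s for _ in audio_feats]        # small additive shift
    return gamma, beta

visual = [0.5, -1.0, 2.0]
gamma, beta = conditioner([0.2, 0.1, 0.1])
fused = film_fuse(visual, gamma, beta)
```

Because gamma stays near 1 and beta near 0, the fused features remain close to the visual features, with the audio signal nudging each dimension; in a trained model this lets one modality gate what the other emphasizes.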
Efficient Hybrid Transformers for Edge Deployment
completed

CottonVerse

2025

Flask-based web application for cotton leaf disease detection, fabric stain defect detection, and fabric composition classification with probability charts and Grad-CAM explanations.

Flask · PyTorch · timm · pytorch-grad-cam
Decision Support & Human-in-the-Loop AI
completed

SoyScan

2025

MaxViT-based soybean leaf/seed disease classification web app with Grad-CAM heatmaps, probability visualization, and a clean UI for practical screening workflows.

MaxViT · Flask · PyTorch · Grad-CAM
Clinical Decision Support & Human-in-the-Loop AI
in progress

Calibrated Multimodal Radiology Copilot

2026

A risk-aware clinical decision support pipeline that fuses medical images with radiology notes and structured signals to produce calibrated predictions, uncertainty flags, and evidence-grounded outputs for safer triage and reporting assistance.

PyTorch · Transformers · LLM Workflows · Uncertainty Calibration
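One common way to produce calibrated predictions, as the copilot project above aims to, is temperature scaling. The sketch below shows the general technique under that assumption (the project's actual calibration method is not specified here): dividing logits by a temperature T > 1, fit on a validation set, softens overconfident softmax probabilities without changing the predicted class.

```python
import math

# Illustrative temperature-scaling sketch (a standard calibration technique;
# not necessarily the pipeline's exact method).

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def calibrated_probs(logits, temperature):
    """Scale logits by 1/T before the softmax; T is fit on held-out data."""
    return softmax([z / temperature for z in logits])

logits = [4.0, 1.0, 0.5]
raw = softmax(logits)                               # overconfident
cooled = calibrated_probs(logits, temperature=2.0)  # softened
```

Note that the ranking of classes is preserved (the argmax is unchanged); only the confidence shrinks, which is what lets downstream uncertainty flags be meaningful.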
Robust Learning Under Domain Shift
in progress

Cross-Hospital Generalization

2026

A standardized evaluation suite to measure and improve model performance across hospitals, scanners, and patient subgroups—supporting domain-shift testing, fairness slices, and reproducible reporting for deployment-ready medical AI.

Python · PyTorch · Benchmarking · Fairness & Robustness
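The fairness-slice idea behind the evaluation suite above can be illustrated with a minimal sketch (site names and the tuple layout are hypothetical, not the suite's actual API): group predictions by site, compute accuracy per slice, and report the worst-case gap that a domain-shift audit would flag.

```python
from collections import defaultdict

# Hypothetical per-site performance slicing sketch; names are illustrative.

def accuracy_by_site(records):
    """records: iterable of (site, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for site, y_true, y_pred in records:
        totals[site] += 1
        hits[site] += int(y_true == y_pred)
    return {site: hits[site] / totals[site] for site in totals}

records = [
    ("hospital_a", 1, 1), ("hospital_a", 0, 0), ("hospital_a", 1, 1),
    ("hospital_b", 1, 0), ("hospital_b", 0, 0),
]
per_site = accuracy_by_site(records)
# Worst-case gap across slices: a large gap signals a model that looks
# fine on average but fails at a specific site, scanner, or subgroup.
gap = max(per_site.values()) - min(per_site.values())
```

The same grouping works unchanged for scanner models or patient subgroups by swapping the slice key.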
Trustworthy & Calibrated AI
in progress

Safe-to-Use Gatekeeper

2026

A safety layer that detects uncertain, out-of-distribution, or artifact-corrupted cases and defers them for human review. Includes coverage–risk analysis, abstention policies, and audit-friendly logs for high-stakes clinical deployment.

PyTorch · OOD Detection · Selective Prediction · Calibration
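The coverage–risk analysis named above can be sketched with the simplest abstention policy (an assumption for illustration, not the gatekeeper's actual detector): abstain whenever the model's confidence falls below a threshold, then report coverage (the fraction of cases answered) and risk (the error rate on the answered cases).

```python
# Illustrative selective-prediction sketch: confidence-threshold abstention
# with a coverage-risk summary. Not the actual gatekeeper's detector.

def selective_report(samples, threshold):
    """samples: list of (confidence, correct) pairs for model predictions.
    Returns (coverage, risk): fraction answered, error rate on answered."""
    answered = [(c, ok) for c, ok in samples if c >= threshold]
    coverage = len(answered) / len(samples)
    risk = (sum(1 for _, ok in answered if not ok) / len(answered)
            if answered else 0.0)
    return coverage, risk

# Toy predictions: confidence and whether the model was correct.
samples = [(0.99, True), (0.95, True), (0.62, False),
           (0.55, False), (0.91, True)]
coverage, risk = selective_report(samples, threshold=0.9)
```

Sweeping the threshold traces a coverage-risk curve: lowering it answers more cases but admits more errors, which is exactly the trade-off an abstention policy makes explicit for human review.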
Explainable Medical Image Intelligence
in progress

Clinically Meaningful Explainability Suite

2026

A clinician-oriented explainability toolkit that goes beyond heatmaps—providing concept-based explanations, counterfactual evidence, faithfulness checks, and concise explanation report cards to support transparent and auditable medical AI.

Grad-CAM · Attention Analysis · Concept Explanations · Faithfulness Metrics
Vision + Language for Healthcare
in progress

Evidence-Grounded LLM Assistant

2026

A safety-first LLM workflow that drafts structured clinical summaries using only verified evidence (model outputs, metadata, and approved templates). Includes confidence-aware refusal, traceable citations, and guardrails for responsible use.

LLM Workflows · RAG · Structured Reporting · Safety Guardrails
Trustworthy & Calibrated AI
planned

Federated Medical Foundation Model

2026

A privacy-preserving foundation model trained across institutions without centralizing patient data. Focuses on federated optimization, calibration under client shift, and robust performance across sites and scanners.

PyTorch · Federated Learning · Differential Privacy · Secure Aggregation
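The core of training without centralizing patient data is federated averaging (FedAvg). The minimal sketch below shows the general technique only; the project's actual optimizer, privacy mechanism, and secure-aggregation details are not represented. The server averages client model weights, weighted by each site's local dataset size, so raw data never leaves an institution.

```python
# Minimal FedAvg sketch (illustrative of the general technique).
# Clients train locally; the server averages their weights, weighted
# by local dataset size. Raw patient data never moves.

def fed_avg(client_weights, client_sizes):
    """client_weights: one weight vector (list of floats) per site;
    client_sizes: number of local training examples per site."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two sites with different amounts of local data: the larger site
# pulls the global model further toward its local weights.
site_a = [0.0, 2.0]   # weights after local training at site A (n=100)
site_b = [1.0, 0.0]   # weights after local training at site B (n=300)
global_weights = fed_avg([site_a, site_b], [100, 300])
```

Calibration under client shift, one of the project's stated foci, matters here because a weight average that performs well on the pooled distribution can still be miscalibrated at any single site.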
Clinical Decision Support & Human-in-the-Loop AI
planned

Longitudinal Disease Progression Forecasting

2026

Risk forecasting from serial scans to predict progression and time-to-event outcomes (e.g., glaucoma progression). Produces calibrated risk curves, uncertainty, and clinician-friendly timelines for follow-up planning.

Transformers · Time-Series Modeling · Survival Analysis · Uncertainty Estimation

Research Publications at CredenceX

Cutting-edge research contributions advancing the field of artificial intelligence

Journal
Published

Vision-audio multimodal object recognition using hybrid and tensor fusion techniques

Md Redwan Ahmed, Rezaul Haque, SM Arafat Rahman, Ahmed Wasif Reza, Nazmul Siddique, Hui Wang

Information Fusion (Elsevier)

Audio–visual multimodal object recognition with hybrid + tensor fusion strategies designed for robust real-world performance.

Journal
Published

Ensemble Transformer with Post-hoc Explanations for Depression Emotion and Severity Detection

Sazzadul Islam, Rezaul Haque, Mahbub Alam Khan, Arafath Bin Mohiuddin, Md Ismail Hossain Siddiqui, Zishad Hossain Limon, Katura Gania Khushbu, SM Masfequier Rahman Swapno, Md Redwan Ahmed, Abhishek Appaji

iScience (Cell Press / Elsevier)

DepTformer-XAI-SV: ensemble transformers for depression emotion/severity detection with LIME explanations and a web app.

Journal
Published

Explainable Transformer Framework for Fast Cotton Leaf Diagnostics and Fabric Defect Detection

SM Masfequier Rahman Swapno, Anamul Sakib, Amira Hossain, Jesika Debnath, Abdullah Al Noman, Abdullah Al Sakib, Md Redwan Ahmed, Rezaul Haque, Abhishek Appaji

iScience (Cell Press / Elsevier)

Explainable transformer framework spanning agriculture (cotton leaf) and textile inspection (fabric defect) with practical interpretability.

Journal
Published

LMVT: A hybrid vision transformer with attention mechanisms for efficient and explainable lung cancer diagnosis

Jesika Debnath, Amira Hossain, Anamul Sakib, Hamdadur Rahman, Rezaul Haque, Md Redwan Ahmed, Ahmed Wasif Reza, SM Masfequier Rahman Swapno, Abhishek Appaji

Informatics in Medicine Unlocked (Elsevier)

Hybrid ViT with attention and XAI for efficient and explainable lung cancer diagnosis (deployment-oriented).

Journal
Published

Accelerated and accurate cervical cancer diagnosis using a novel stacking ensemble method with explainable AI

Md Ismail Hossain Siddiqui, Shakil Khan, Zishad Hossain Limon, Hamdadur Rahman, Mahbub Alam Khan, Abdullah Al Sakib, SM Masfequier Rahman Swapno, Rezaul Haque, Ahmed Wasif Reza, Abhishek Appaji

Informatics in Medicine Unlocked (Elsevier)

Stacking ensemble + explainability for reliable cervical cancer diagnosis using Pap smear imaging.

Journal
Published

Hierarchical Swin Transformer Ensemble with Explainable AI for Robust and Decentralized Breast Cancer Diagnosis

Md Redwan Ahmed, Hamdadur Rahman, Zishad Hossain Limon, Md Ismail Hossain Siddiqui, Mahbub Alam Khan, Al Shahriar Uddin Khondakar Pranta, Rezaul Haque, SM Masfequier Rahman Swapno, Young-Im Cho, Mohamed S Abdallah

Bioengineering (MDPI)

Federation-ready Swin-Transformer ensemble with post-hoc explainability for robust breast cancer diagnosis.

Journal
Published

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis

Rezaul Haque, Mahbub Alam Khan, Hamdadur Rahman, Shakil Khan, Md Ismail Hossain Siddiqui, Zishad Hossain Limon, SM Masfequier Rahman Swapno, Abhishek Appaji

Computers in Biology and Medicine (Elsevier)

Explainable deep stacking ensemble for brain tumor diagnosis with transparency-focused reporting.

Journal
In Review

Hallucination-Resistant Tri-Modal Information Fusion with a Multi-Granularity Text-Aware Multimodal LLM

CredenceX AI Team

Information Fusion (Elsevier)

Tri-modal fusion with multi-granularity alignment for hallucination-resistant multimodal LLM decision-making.

Journal
Accepted

LightVTD: Lightweight Explainable Vision Transformer with Multi-Path Token Fusion for Drowsiness Detection

AL RAFY, Md Najmul Gony, Md Mashfiquer Rahman, Mohammad Shahadat Hossain, Sd Maria Khatun Shuvra, Rezaul Haque, Md. Redwan Ahmed, S M Masfequier Rahman Swapno, Tahani Jaser Alahmadi, Mohammad Ali Moni

Scientific Reports (Nature Portfolio)

Lightweight explainable ViT with multi-path token fusion for early drowsiness detection in safety-critical scenarios.

Stay Updated with CredenceX

Subscribe to receive the latest research updates, breakthrough discoveries, and exclusive insights from our AI research lab.

We respect your privacy. Unsubscribe at any time.