Conference Service: Organizer Role at IIMCSE 2026
By CredenceX Research Team
Contributing to the research community through conference organization and technical coordination.
CredenceX AI Research Lab develops data-driven solutions for medical imaging that support early warning, clinical decision support, and risk-aware deployment in high-stakes environments. We aim to reduce disparities by ensuring our models perform reliably across devices and diverse patient populations. We also provide clear, clinically meaningful explanations and uncertainty estimates, so clinicians and patients know when to trust an output and when to be cautious.
Exploring cutting-edge technologies to build safer, smarter, and more trustworthy AI systems for tomorrow
Link images, reports, and clinical data to power smarter image understanding and decision support.
Make predictions easier to understand—with outputs designed for real clinical workflows.
Safer AI through reliability testing, uncertainty estimates, and auditable explanations.
Lightweight models built for real-time use on mobile, web, and resource-limited devices.
Risk-aware outputs and clear evidence—so clinicians stay in control of the final call.
Built to generalize across hospitals, devices, and populations—reducing real-world failure.
Stay updated with our recent achievements, announcements, and research breakthroughs
By CredenceX Research Team
Our team will present ongoing research on multimodal learning, explainability, and deployment-ready AI systems.
By CredenceX Research Team
Sharing progress on explainable decision support and calibration-aware medical AI workflows.
By CredenceX Research Team
Awarded for research excellence at the 2025 IEEE International Conference on Power, Electrical, Electronics and Industrial Applications (PEEIACON).
By CredenceX Research Team
Honored for impactful applied AI research at the 2025 IEEE BECITHCON conference.
Showcasing our cutting-edge research projects that push the boundaries of AI innovation and real-world impact
A reproducible, explainable transformer pipeline for depression emotion/severity experiments, including ablations, XAI faithfulness checks, and a minimal Flask demo (research use only).
Lightweight hybrid CNN–Transformer (MobileViT + attention + texture cues) for efficient and explainable lung cancer diagnosis on CT/histopathology with Grad-CAM and robust evaluation support.
A modular pipeline for audio-visual object recognition using hybrid, tensor, and FiLM-style fusion with flexible feature extraction and noise-robust training options.
Flask-based web application for cotton leaf disease, fabric stain defect detection, and fabric composition classification with probability charts and Grad-CAM explanations.
MaxViT-based soybean leaf/seed disease classification web app with Grad-CAM heatmaps, probability visualization, and a clean UI for practical screening workflows.
A risk-aware clinical decision support pipeline that fuses medical images with radiology notes and structured signals to produce calibrated predictions, uncertainty flags, and evidence-grounded outputs for safer triage and reporting assistance.
A standardized evaluation suite to measure and improve model performance across hospitals, scanners, and patient subgroups—supporting domain-shift testing, fairness slices, and reproducible reporting for deployment-ready medical AI.
A safety layer that detects uncertain, out-of-distribution, or artifact-corrupted cases and defers them for human review. Includes coverage–risk analysis, abstention policies, and audit-friendly logs for high-stakes clinical deployment.
A clinician-oriented explainability toolkit that goes beyond heatmaps—providing concept-based explanations, counterfactual evidence, faithfulness checks, and concise explanation report cards to support transparent and auditable medical AI.
A safety-first LLM workflow that drafts structured clinical summaries using only verified evidence (model outputs, metadata, and approved templates). Includes confidence-aware refusal, traceable citations, and guardrails for responsible use.
A privacy-preserving foundation model trained across institutions without centralizing patient data. Focuses on federated optimization, calibration under client shift, and robust performance across sites and scanners.
Risk forecasting from serial scans to predict progression and time-to-event outcomes (e.g., glaucoma progression). Produces calibrated risk curves, uncertainty, and clinician-friendly timelines for follow-up planning.
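The safe-deferral project above centers on a concrete mechanism: abstain on uncertain cases and measure the resulting coverage–risk trade-off. As a minimal sketch of how such a policy can be evaluated (hypothetical names and toy data, assuming max-softmax confidence as the uncertainty signal; not the lab's actual implementation):

```python
import numpy as np

def coverage_risk(confidences, correct, threshold):
    """Abstain on predictions below `threshold`; return coverage
    (fraction of cases answered) and selective risk (error rate
    on the answered cases only)."""
    accepted = confidences >= threshold
    coverage = accepted.mean()
    if accepted.sum() == 0:
        return 0.0, 0.0  # nothing answered, so no selective risk
    risk = 1.0 - correct[accepted].mean()
    return coverage, risk

# Toy example: six predictions with max-softmax confidences
# and whether each prediction was correct.
conf = np.array([0.99, 0.95, 0.90, 0.70, 0.60, 0.55])
correct = np.array([1, 1, 1, 0, 1, 0], dtype=float)

# Sweeping the threshold traces out the coverage–risk curve:
# stricter thresholds answer fewer cases but make fewer errors.
for t in (0.5, 0.8, 0.92):
    cov, risk = coverage_risk(conf, correct, t)
    print(f"threshold={t:.2f}  coverage={cov:.2f}  risk={risk:.2f}")
```

In a deployed system the deferred cases would be routed to human review and the accept/defer decisions logged, which is what makes the coverage–risk curve auditable after the fact.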
Cutting-edge research contributions advancing the field of artificial intelligence
Md Redwan Ahmed, Rezaul Haque, SM Arafat Rahman, Ahmed Wasif Reza, Nazmul Siddique, Hui Wang
Information Fusion (Elsevier)
Audio–visual multimodal object recognition with hybrid + tensor fusion strategies designed for robust real-world performance.
Sazzadul Islam, Rezaul Haque, Mahbub Alam Khan, Arafath Bin Mohiuddin, Md Ismail Hossain Siddiqui, Zishad Hossain Limon, Katura Gania Khushbu, SM Masfequier Rahman Swapno, Md Redwan Ahmed, Abhishek Appaji
iScience (Cell Press / Elsevier)
DepTformer-XAI-SV: ensemble transformers for depression emotion/severity detection with LIME explanations and a web app.
SM Masfequier Rahman Swapno, Anamul Sakib, Amira Hossain, Jesika Debnath, Abdullah Al Noman, Abdullah Al Sakib, Md Redwan Ahmed, Rezaul Haque, Abhishek Appaji
iScience (Cell Press / Elsevier)
Explainable transformer framework spanning agriculture (cotton leaf) and textile inspection (fabric defect) with practical interpretability.
Jesika Debnath, Amira Hossain, Anamul Sakib, Hamdadur Rahman, Rezaul Haque, Md Redwan Ahmed, Ahmed Wasif Reza, SM Masfequier Rahman Swapno, Abhishek Appaji
Informatics in Medicine Unlocked (Elsevier)
Hybrid ViT with attention and XAI for efficient and explainable lung cancer diagnosis (deployment-oriented).
Md Ismail Hossain Siddiqui, Shakil Khan, Zishad Hossain Limon, Hamdadur Rahman, Mahbub Alam Khan, Abdullah Al Sakib, SM Masfequier Rahman Swapno, Rezaul Haque, Ahmed Wasif Reza, Abhishek Appaji
Informatics in Medicine Unlocked (Elsevier)
Stacking ensemble + explainability for reliable cervical cancer diagnosis using Pap smear imaging.
Md Redwan Ahmed, Hamdadur Rahman, Zishad Hossain Limon, Md Ismail Hossain Siddiqui, Mahbub Alam Khan, Al Shahriar Uddin Khondakar Pranta, Rezaul Haque, SM Masfequier Rahman Swapno, Young-Im Cho, Mohamed S Abdallah
Bioengineering (MDPI)
Federation-ready Swin-Transformer ensemble with post-hoc explainability for robust breast cancer diagnosis.
Rezaul Haque, Mahbub Alam Khan, Hamdadur Rahman, Shakil Khan, Md Ismail Hossain Siddiqui, Zishad Hossain Limon, SM Masfequier Rahman Swapno, Abhishek Appaji
Computers in Biology and Medicine (Elsevier)
Explainable deep stacking ensemble for brain tumor diagnosis with transparency-focused reporting.
CredenceX AI Team
Information Fusion (Elsevier)
Tri-modal fusion with multi-granularity alignment for hallucination-resistant multimodal LLM decision-making.
AL RAFY, Md Najmul Gony, Md Mashfiquer Rahman, Mohammad Shahadat Hossain, Sd Maria Khatun Shuvra, Rezaul Haque, Md Redwan Ahmed, SM Masfequier Rahman Swapno, Tahani Jaser Alahmadi, Mohammad Ali Moni
Scientific Reports (Nature Portfolio)
Lightweight explainable ViT with multi-path token fusion for early drowsiness detection in safety-critical scenarios.
Subscribe to receive the latest research updates, breakthrough discoveries, and exclusive insights from our AI research lab.