5-Year Impact Factor: 0.9
Volume 35, 12 Issues, 2025
Systematic Review Article, December 2025

Artificial Intelligence in the Diagnosis and Assessment of Periodontitis: A Systematic Review

By Farzeen Tanwir, Eesha Hameed, Tauqeer Bibi

Affiliations

  1. Department of Periodontology, Bahria University Health Sciences, Karachi, Pakistan
doi: 10.29271/jcpsp.2025.12.1590

ABSTRACT
The present systematic review aimed to evaluate the use of artificial intelligence (AI) across several aspects of periodontal diagnosis and treatment planning by analysing recent literature on the assessment of periodontitis through various AI-based radiographic analysis models. PubMed, Cochrane, ScienceDirect, and Google Scholar were searched from 1st June to August 2024. From the shortlisted studies, 15 original research articles were included in the review and assessed for risk of bias using the Cochrane Collaboration's Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. All models showed sensitivity comparable to that of the examiners. AI can serve as a time-saving aid for clinicians; however, further studies are required that apply a well-defined and accepted gold standard in clinical settings, using datasets of intraoral periapical series.

Key Words: Artificial intelligence, Periodontitis, Diagnosis.

INTRODUCTION

Alan Turing introduced the concept of machine intelligence in 1950;1 however, McCarthy provided the first definition of AI in 1956.2 AI is an umbrella term describing various fundamental technologies that allow electronic machines to perform actions resembling human abilities.3 As the name indicates, artificial neural networks use artificial neurons, analogous to human neurons, and can thus mimic the human brain and reproduce similar cognitive skills such as problem-solving, learning, and judgement. This system consists of three layers: an input layer that handles data input, a hidden layer that refines the data, and an output layer that makes the final decisions for the task.4
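The three-layer structure described above can be illustrated with a minimal forward pass; the layer sizes and weights here are arbitrary placeholders for illustration, not a trained diagnostic model:

```python
import numpy as np

# Minimal sketch of the three-layer structure: input -> hidden -> output.
# Weights are random, purely to show the flow of data through the layers.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_inputs, n_hidden, n_outputs = 4, 8, 1   # hypothetical layer sizes
W1 = rng.normal(size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_outputs))
b2 = np.zeros(n_outputs)

def forward(x):
    hidden = relu(x @ W1 + b1)         # hidden layer refines the input
    return sigmoid(hidden @ W2 + b2)   # output layer makes the decision

x = rng.normal(size=(1, n_inputs))     # one example with 4 input features
y = forward(x)
print(y.shape)  # (1, 1): a single probability-like output
```

The CNNs used in the included studies add convolutional and pooling layers in front of this basic structure, but the input-hidden-output principle is the same.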

Recently, this technology has been applied across various fields, particularly engineering and medicine. Alatrany et al. focused on detecting early Alzheimer's disease from MRI data using AI-based models,5 confirming the role of this technology in the early recognition and treatment of major diseases. AI in dentistry is thriving, supporting clinicians and helping deliver optimal care. Several studies worldwide have highlighted its various uses.6-8
 

The use of AI has increased across all fields of dentistry; however, it remains in its infancy and has not yet found satisfactory implementation in periodontology.9

Periodontal disease, commonly known as gum disease, is a widespread oral health disorder.10 It is a complex, microbially driven inflammatory condition that causes progressive breakdown of the tooth-supporting tissues, leading to periodontal attachment and bone loss.11 The disease is diagnosed clinically by probing and by measuring recession.12 However, this method is not fully reliable, as its results depend on the force, type, tip diameter, and angulation of the instrument.13 Measuring the amount of bone loss (ABL) on radiographs is another diagnostic method; however, several studies have shown that agreement among multiple evaluators is limited, reducing its accuracy and reliability.14 In recent years, AI has shown promise in improving diagnostic accuracy across various medical and dental disciplines.15

Despite increasing interest and a number of important studies addressing AI applications in periodontology, the current body of evidence is fragmented due to variability in the AI models used, sample sizes, performance metrics, and validation techniques.16 As a result, there is a lack of consolidated synthesis of the current capabilities, limitations, and future directions of AI in diagnosing and assessing chronic periodontitis. Therefore, this systematic review aimed to evaluate the present application of AI across several aspects of periodontal diagnosis and treatment planning by analysing recent literature on the assessment of periodontitis through radiographic analysis.
 

METHODOLOGY

This systematic review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).17 The research question was formulated using the PICO format: In patients undergoing diagnosis and assessment for periodontitis (P), how do convolutional neural network (CNN)-based AI models (I) compare with clinicians using a well-defined gold standard or with other AI-based models (C) in terms of their clinical utility for diagnosis, detection, prognosis, or treatment planning of periodontitis (O), based on the models' performance metrics?

In terms of eligibility criteria, studies were included if they were original research articles available in open access, published between 2018 and 2024, and used artificial neural networks (ANNs) or convolutional neural networks (CNNs) for the diagnosis, assessment, or evaluation of periodontal bone loss, comparing AI-based models with clinicians or a well-defined gold standard. Systematic reviews, randomised controlled trials, editorials, and book chapters were excluded, as were studies published before 2018, studies not written in English, and those that did not clearly explain the methodology for AI model construction.

Databases including PubMed, Cochrane, ScienceDirect, and Google Scholar were initially searched from 1st June to August 2024 using the following strategy: (AI [Mesh] OR Machine Learning [Mesh] OR Deep Learning OR Neural Networks OR Computer-aided Diagnosis OR Decision Support Systems, Clinical [Mesh] OR AI OR ML) AND (Periodontitis, Chronic [Mesh] OR Chronic Periodontitis OR Periodontal Disease OR Gum Disease) AND (Diagnosis [Mesh] OR Assessment OR Detection OR Classification OR Grading OR Staging). The search was repeated weekly for the next two months. To broaden coverage, references of the selected studies were also screened and added if they met the inclusion criteria.
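As an illustration only, a Boolean strategy of this kind can be expressed programmatically against PubMed's E-utilities interface; the simplified field tags below are an assumption, not the exact published string, and no request is actually sent:

```python
from urllib.parse import urlencode

# Sketch: compose an esearch URL for PubMed from a Boolean query.
# The MeSH terms here are a simplified, illustrative subset.
term = (
    '("Artificial Intelligence"[Mesh] OR "Machine Learning"[Mesh]'
    ' OR "Deep Learning" OR "Neural Networks")'
    ' AND ("Chronic Periodontitis"[Mesh] OR "Periodontal Diseases"[Mesh])'
    ' AND (diagnosis OR assessment OR detection OR classification)'
)
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urlencode(
    {"db": "pubmed", "term": term, "retmax": 200}
)
print(url.startswith("https://eutils.ncbi.nlm.nih.gov"))  # True
# To actually run the search (network access required):
# import urllib.request; print(urllib.request.urlopen(url).read())
```

Scripting the query in this way makes weekly re-runs of the search reproducible, which matters when, as here, the search is repeated over two months.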

One independent researcher carefully evaluated the retrieved studies. Titles and then abstracts were screened, and eligible articles were retained for full-text assessment. The selected articles were then reviewed by a senior professor to verify the selection. Any conflicts were resolved through mutual agreement.

The selected articles underwent full-text review by another researcher. After application of the inclusion criteria, the following information was extracted into an Excel sheet: first author's name and year of publication, country, AI model used, the entity to which the model was compared, data source, dataset, and main findings (Table I).18-32

The selected studies were evaluated for risk of bias using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Two examiners carefully evaluated the studies following its guidelines (Table II).33

RESULTS

A total of fifteen studies employing AI for the detection and evaluation of periodontal bone loss were examined. The research encompassed multiple countries, such as Turkiye, South Korea, China, Germany, the United States, Saudi Arabia, the United Kingdom, Russia, the Netherlands, and Thailand. All studies utilised CNNs or other deep learning models as the main AI model, using radiographic images—such as orthopantomograms (OPGs), periapical radiographs, or cone-beam computed tomography (CBCT)—as input data.

In most studies, AI models demonstrated moderate to high diagnostic accuracy, often similar to that of skilled dental professionals. Bayrakdar et al. found high accuracy in identifying diseased cases, with the model misclassifying only 6 out of 105 instances.18 Chang et al. noted significant agreement between the model and experienced radiologists.19 Jiang et al. showed that AI models outperformed general dentists in detecting early-stage bone loss, while human examiners achieved superior performance for advanced lesions.20

Krois et al. observed that stricter diagnostic thresholds led to a decline in model sensitivity, resulting in poorer performance than that of clinicians (p = 0.067).21 Cerda Mardini et al. found that the AI model detected mild to moderate bone loss (F1-score = 0.29) but was completely ineffective for severe loss (F1-score = 0); periodontists exceeded the model on every parameter.22

Kim et al. revealed that model performance improved greatly after several training cycles; however, third molar regions remained a persistent weak area for the model.23 Alotaibi et al. reported low diagnostic quality for severe bone loss cases using the VGG-16 model.24

Figure 1: Characteristics of the included studies.

Table I: Characteristics of the included studies.

| First author (year) | Country | Type of AI model (CNN) | Input type | Comparison | Dataset | Main findings |
|---|---|---|---|---|---|---|
| Sevda Kurt (2020) | Turkiye | Pre-trained CNN (GoogleNet Inception v3 network) | Orthopantomograms (OPG) | Maxillofacial radiologist and periodontologist (≥9 years of experience) | 2,276 panoramic images divided into training (1,856), validation (210), and testing (210) sets | Of 105 diseased cases, the model evaluated 99 correctly and six incorrectly; it was less accurate in disease-free cases, with 12 incorrect diagnoses.18 |
| Chang HJ (2020) | South Korea | R-CNN (feature pyramid network with ResNet-101) | OPG | Three OMFS radiologists (professor: 10 years of experience; fellow: 5 years; resident: 3 years) | 340 OPGs; to evaluate multiple variables, 330, 115, and 73 images were analysed, each set distributed into training (306) and testing (34) sets | Mean average difference values were lower for canines than for incisors and molars; the ICC between the professor and the AI algorithm showed the highest correlation, indicating superior reliability.19 |
| Linhong Jiang (2022) | China | CNN (U-Net and YOLO-v4) | OPG | Three general dentists (three years of experience each) | 640 panoramic radiographs, after segmentation, separated into a training set (512 images) and two test sets of 64 | The model scored better than the dentists for stage I and II lesions; the general dentists showed better results for stage III lesions.20 |
| Joachim Krois (2019) | Germany | Seven-layer network built on the TensorFlow framework with Keras | OPG | Six dentists: one periodontist, one endodontist, and four general dentists | 2,001 manually cropped single-tooth images obtained from 85 radiographs, randomly divided into training and validation sets by reshuffling | The AI model was less accurate than the investigators (p = 0.067); raising the cut-off value decreased model sensitivity relative to the examiners.21 |
| Diego Cerda Mardini (2024) | United States of America | Deep CNN (DCNN) trained with Google TensorFlow Keras and based on Xception networks | OPG | Two radiologists, two periodontists, and one general dentist | 500 panoramic radiographs segmented into 2,010 rectangular images: 1,576 for training, 394 for internal testing, and 40 for final testing | Satisfactory performance for low to medium bone loss (F1-score = 0.29) but ineffective for severe bone loss (F1-score = 0); the periodontist outperformed the model on all features.22 |
| Jaeyoung Kim (2019) | Korea | DCNN (DeNTNet) | OPG | Clinicians with 5, 9, 16, 17, and 19 years of experience | 12,179 panoramic dental x-ray images randomly divided into training (11,189), validation (190), and test (800) sets | The baseline DeNTNet model, trained directly, showed satisfactory performance compared with clinicians; after multiple training rounds it improved further, but remained considerably inferior to clinicians on third molars.23 |
| Ghala Alotaibi (2022) | Saudi Arabia | VGG-16 (Visual Geometry Group) network built on TensorFlow and Keras | Intraoral periapical films | Three examiners, including a periodontist | 1,724 intraoral periapical images arbitrarily divided into training (70%), validation (20%), and testing (10%) sets | Diagnostic quality was lowest for severe bone loss.24 |
| Raymond P (2021) | United Kingdom | Deep network with symmetric hourglass blocks | Periapical films | A modified hourglass network compared with a baseline ResNet-based regression model | 340 fully anonymised periapical radiographs divided into three groups for 3-fold cross-validation | The proposed model was assessed on radicular structure, reaching a high performance of 88.9% for anterior teeth.25 |
| Kubra Ertas (2022) | Turkiye | Multiple machine-learning classifiers: Support Vector Machines (SVM), Nearest Neighbours, Random Forest, Naive Bayes, and Logistic Regression | OPG | Multiple algorithms compared against each other | 280 OPGs, of which 236 were selected | Success was higher when objective and radiographic evaluations were used; the ResNet50 + SVM dual model performed best on pre-processed images, with a classification accuracy of 88.2%.26 |
| Bilge Cansu Uzun Saylan (2023) | Turkiye | PyTorch-implemented YOLO-v5 model | OPG | Domain-specific (local) bone loss detection compared with general bone loss detection by the same AI model | 685 panoramic x-rays divided into training (80%) and assessment (20%) sets | The model was more effective at determining bone loss in the maxilla and showed greater accuracy in detecting regional bone loss.27 |
| Ezhov M (2021) | USA and Russia | Diagnocat | Cone-beam computed tomography (CBCT) | Aided group (dentists assisted by the AI model) compared with an unaided group (dentists without AI assistance) | 99 CBCTs used for the periodontitis module | The AI-assisted group had higher operational efficiency; introducing the model reduced the time required to evaluate a single CBCT by 1.19 min (6.78%).28 |
| Nektarios Tsoromokos (2022) | Netherlands | 13-layer deep model with ReLU activations and four MaxPooling layers | Periapical radiographs | Manual annotation by a radiologist | 446 annotated radiographs: training (327), validation (49), and test (70) sets | Overall, the model underestimated bone loss, significantly so for multi-rooted teeth (8.5%) and for teeth with angular defects (10%).29 |
| Bhornsawan Thanathornwong (2020) | Thailand | R-CNN network with a ResNet architecture | OPG | Manual annotation by three experts in periodontology | 100 anonymised panoramic radiographs: 70% randomly selected for training, 10% for validation, and 20% for testing | The model achieved an exemplary average recall rate, showing that the bone-loss region it demarcated excluded most areas of normal teeth.30 |
| Jae-Hong Lee (2018) | Korea | 13-layer model based on the Keras framework in Python | Periapical radiographs | A single periodontist | 1,740 radiographs divided into training (n = 1,044), validation (n = 348), and test (n = 348) sets | The model had a higher AUC for premolars than clinicians, while clinicians showed superior AUC values for molars in evaluating tooth prognosis; neither difference was significant.31 |
| Patrick Hoss (2023) | Germany | Multiple pre-trained CNNs, including ResNet-18, MobileNet V2, ConvNeXT/Small, ConvNeXT/Base, and ConvNeXT/Large | Periapical films | Dentists classified all x-rays as healthy, mild, intermediate, or severe bone loss; experienced examiners re-evaluated each diagnosis independently | 21,819 radiographs divided into a training set (n = 18,819) and a test set (n = 3,000) | The models performed better for mandibular than maxillary teeth, with accuracy between 82% and 84%; none reached 90% accuracy, and performance also differed across quadrants.32 |

Danks et al. achieved 88.9% accuracy for the anterior teeth using a symmetric hourglass CNN,25 and Uzun Saylan et al. found that their YOLO-v5 model had higher accuracy for bone loss in the maxilla.27 Among the studies comparing multiple algorithms, such as SVM, Random Forest, and ResNet50, Ertas et al. reported the best results with the ResNet50 + SVM combination, at 88.2% accuracy.26 Hoss et al. observed that mandibular teeth were examined more accurately than maxillary teeth, and no model attained more than 90% accuracy.32

Table II: Risk of bias of the selected studies.

| Study | Year | Country | Patient selection | Index test | Reference standard | Flow and timing | Overall risk of bias |
|---|---|---|---|---|---|---|---|
| Hoss et al.32 | 2023 | Germany | Low | Low | Low | Low | Low |
| Kim et al.23 | 2019 | Korea | Intermediate | Low | Intermediate | Intermediate | Intermediate |
| Thanathornwong et al.30 | 2020 | Thailand | Intermediate | Low | Intermediate | Intermediate | Intermediate |
| Tsoromokos et al.29 | 2022 | Netherlands | Intermediate | Low | Intermediate | Intermediate | Intermediate |
| Krois et al.21 | 2019 | Germany | Intermediate | Low | Intermediate | Low | Intermediate |
| Cerda Mardini et al.22 | 2024 | USA | Intermediate | Intermediate | Intermediate | Intermediate | Intermediate |
| Jiang et al.20 | 2022 | China | Intermediate | Low | Low | Intermediate | Intermediate |
| Chang et al.19 | 2020 | Korea | Intermediate | Low | Intermediate | Low | Intermediate |
| Alotaibi et al.24 | 2022 | Saudi Arabia | Intermediate | Low | High | Low | High |
| Danks et al.25 | 2021 | United Kingdom | Intermediate | Low | Intermediate | Low | Intermediate |
| Uzun Saylan et al.27 | 2023 | Turkiye | Intermediate | Low | Intermediate | Low | Intermediate |
| Ezhov et al.28 | 2021 | Russia | Intermediate | Low | Intermediate | Intermediate | Intermediate |
| Ertas et al.26 | 2022 | Turkiye | Intermediate | Low | High | Low | High |
| Lee et al.31 | 2018 | Korea | Intermediate | Low | Low | Intermediate | Intermediate |
| Bayrakdar et al.18 | 2020 | Turkiye | Intermediate | Low | Intermediate | Low | Intermediate |

Table III: Values of model performance in the selected studies.

| Author (year) | Sensitivity | Specificity | F1-score | Accuracy | Precision |
|---|---|---|---|---|---|
| Bayrakdar et al.18 (2020) | 0.9429 | 0.8857 | 0.9167 | 0.9143 | 0.8919 |
| Jiang et al.20 (2022) | 0.77 | 0.88 | 0.77 | 0.77 | 0.77 |
| Krois et al.21 (2019) | 0.81 | 0.81 | 0.78 | 0.81 | NA |
| Cerda Mardini et al.22 (2024) | 0.230 | 0.260 | 0.150 | NA | 0.110 |
| Kim et al.23 (2019) | 0.77 | 0.95 | 0.75 | NA | NA |
| Uzun Saylan et al.27 (2023) | 0.75 | NA | 0.75 | NA | 0.76 |
| Thanathornwong et al.30 (2020) | 0.84 | 0.88 | 0.81 | NA | 0.81 |
| Alotaibi et al.24 (2022) | 0.73 | 0.79 | 0.73 | 0.73 | 0.73 |
| Tsoromokos et al.29 (2022) | 0.96 | 0.41 | NA | 0.80 | NA |
| Lee et al.31 (2018) | NA | NA | NA | 73.4-82.8% | NA |
| Hoss et al.32 (2023) | 93.9% | 72.7% | NA | 82.0-84.8% | NA |
| Ezhov et al.28 (2021) | 0.9489 | 0.9661 | NA | NA | NA |
| Chang et al.19 (2020) | NA | NA | NA | 0.8143 | NA |
| Danks et al.25 (2021) | NA | NA | NA | 0.58 | NA |
| Ertas et al.26 (2022) | NA | NA | 0.872 | 0.882 | 0.864 |
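The values in Table III follow from standard confusion-matrix definitions. A minimal sketch (the counts below are reconstructed from the case numbers reported for ref. 18, where 6 of 105 diseased and 12 of 105 disease-free cases were misclassified, and they reproduce that study's reported metrics):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall: diseased cases correctly flagged
    specificity = tn / (tn + fp)          # healthy cases correctly cleared
    precision = tp / (tp + fp)            # flagged cases that are truly diseased
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy, f1=f1)

# 99 of 105 diseased detected (6 missed); 93 of 105 healthy cleared (12 flagged).
m = diagnostic_metrics(tp=99, fn=6, tn=93, fp=12)
print(round(m["sensitivity"], 4))  # 0.9429, matching Table III for ref. 18
```

The same formulas explain, for example, why a model can show high sensitivity but low specificity (as for Tsoromokos et al.29): the two metrics are computed over disjoint groups of cases.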

Ezhov et al. underscored the usefulness of AI-assisted diagnosis, as it shortened examination time.28 Tsoromokos et al. and Lee et al. showed limited performance in molar assessment and prognosis (Table III).29,31

DISCUSSION

Periodontitis is a prevalent chronic inflammatory condition characterised by microbial-induced destruction of tissue, including gingival recession, widening of the periodontal ligament space, and ultimately loss of attachment. These pathological changes contribute to increased tooth mobility and may result in tooth loss, thereby adversely affecting patients’ oral health-related quality of life.34,35 As a result, timely and accurate diagnosis is essential.

Recent studies have explored the use of AI, particularly CNNs, for the detection of periodontitis. AI systems simulate the cognitive functions of the human brain and offer several advantages over conventional diagnostic methods.36 Typically, clinicians annotate datasets using gold-standard criteria, which are subsequently used to train, validate, and test AI algorithms.37 Among the included studies, CNN-based systems demonstrated reduced diagnostic error rates, particularly when compared with human assessors, who are more inclined to fatigue-related inaccuracies.38 Additionally, these systems can identify subtle radiographic attributes that may be overlooked by clinicians and offer data storage capabilities, enabling future reference and model improvement.39
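The annotate-train-validate-test workflow described above can be sketched minimally; the 70/20/10 proportions mirror those reported by Alotaibi et al. (ref. 24), while the image IDs themselves are purely illustrative:

```python
import numpy as np

# Hypothetical annotated dataset of image IDs; the 70/20/10 split mirrors
# the proportions used by Alotaibi et al. (ref. 24), for illustration only.
rng = np.random.default_rng(42)
image_ids = np.arange(1724)            # ref. 24 used 1,724 periapical images
rng.shuffle(image_ids)

n_train = int(0.70 * len(image_ids))
n_val = int(0.20 * len(image_ids))
train = image_ids[:n_train]                      # used to fit the model
val = image_ids[n_train:n_train + n_val]         # used to tune it
test = image_ids[n_train + n_val:]               # held out for final evaluation

print(len(train), len(val), len(test))  # 1206 344 174
```

Keeping the test set strictly separate from training and validation is what allows the performance figures in Table III to be compared against human examiners on unseen cases.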

A significant source of heterogeneity among the included studies was the variation in AI models employed. For instance, Ozden et al. compared multiple algorithms and reported that the decision tree classifier showed higher diagnostic accuracy for disease detection.40 However, the absence of a standardised model led to inconsistent accuracy rates across studies. To address this issue, an agreed baseline model architecture, preferably one applying decision tree classifiers, should be established. This would allow uniformity in performance assessment while maintaining adaptability for future improvement.

Regarding the type of radiographic input, most studies used OPGs, while five studies employed periapical radiographs and one used CBCT. OPGs have gained popularity due to their lower cost, shorter acquisition time, and better patient compliance.41 However, these advantages are accompanied by inherent limitations, such as lower resolution and magnification-related distortion.42 Moreover, segmenting OPGs into individual tooth regions for AI input is labour-intensive and may further degrade image quality.43 To resolve these issues, future research should prioritise high-resolution periapical radiographs and employ paralleling techniques for improved accuracy and consistency.44

Further analysis revealed that AI models demonstrated higher diagnostic accuracy for maxillary anterior teeth and premolars. Conversely, diagnostic accuracy was consistently diminished for mandibular anterior teeth and maxillary molars. This discrepancy is attributed to two main factors: overlapping anatomical structures in the mandibular anterior region on OPGs, which conceal radiographic details,45 and significant anatomical variability in maxillary molar furcation areas.46 These findings highlight the need for larger and more diverse input datasets to train AI systems that can reliably handle anatomical variation.47

Although AI offers considerable potential, current models still depend heavily on clinician input for data annotation and ground-truth establishment. The training process is sensitive to the expertise of the annotating clinician, resulting in variability in diagnostic accuracy across studies.48,49 This suggests that human supervision remains an integral part of AI model development, underscoring the need for standardised training and annotation protocols.

An overarching limitation in the examined literature is the disproportionate emphasis placed on radiographic data, frequently to the detriment of clinical judgment. Periodontitis is a multifactorial disease that necessitates a comprehensive clinical assessment, including probing depth, bleeding on probing, and clinical attachment level.50 Future studies should explore integrative models that combine radiographic and clinical parameters to support long-term treatment planning.

CONCLUSION

With the increasing use of digital radiographic diagnostic tools, AI models can serve as an auxiliary tool for examiners evaluating radiographic bone loss in the assessment of periodontitis. Despite their limitations, these models show acceptable accuracy and precision compared with clinicians. However, before such models could fully replace the clinician in the diagnostic process, multiple limitations must be addressed, including the absence of a baseline model for the evaluation of periodontitis, the scarcity of studies with adequate datasets, and the lack of an adequate reference standard for AI-based models.51

COMPETING INTEREST:
The authors declared no conflict of interest.

AUTHORS’ CONTRIBUTION:
FT: Conception, literature review, intellectual content, and proofreading.
EH: Manuscript write-up, data analysis, interpretation, literature search, and data collection.
TB: Literature search, plagiarism check and improvement, data collection, and assessment.
All authors approved the final version of the manuscript to be published.

REFERENCES
  1. Joudi NAE, Othmani MB, Bourzgui F, Mahboub O, Lazaar M. Review of the role of artificial intelligence in dentistry: Current applications and trends. Procedia Comput Sci 2022; 210:173-80. doi: 10.1016/j.procs.2022.10.134.
  2. Kishimoto T, Goto T, Matsuda T, Iwawaki Y, Ichikawa T. Application of artificial intelligence in the dental field: A literature review. J Prosthodont Res 2022; 66(1):19-28. doi: 10. 2186/jpr.JPR_D_20_00139.
  3. Ahmed N, Abbasi MS, Zuberi F, Qamar W, Halim MSB, Maqsood A, et al. Artificial intelligence techniques: Analysis, application, and outcome in dentistry—A systematic review. Biomed Res Int 2021; 2021:9751564. doi: 10.1155/2021/ 9751564.
  4. Stafie CS, Sufaru IG, Ghiciuc CM, Stafie II, Sufaru EC, Solomon SM, et al. Exploring the intersection of artificial intelligence and clinical healthcare: A multidisciplinary review. Diagnostics (Basel) 2023; 13(12):1995. doi: 10. 3390/diagnostics13121995.
  5. Alatrany AS, Khan W, Hussain A, Kolivand H, Al-Jumeily D. An explainable machine-learning approach for Alzheimer’s disease classification. Sci Rep 2024; 14(1):2637. doi: 10. 1038/s41598-024-51985-w.
  6. Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep-learning-based convolutional neural network algorithm. J Dent 2018; 77:106-11. doi: 10.1016/j.jdent.2018.07.015.
  7. Jung SK, Kim TW. New approach for the diagnosis of extractions with neural-network machine learning. Am J Orthod Dentofacial Orthop 2016; 149(1):127-33. doi: 10.1016/j. ajodo.2015.07.030.
  8. Ibraheem WI. Accuracy of artificial-intelligence models in dental-implant fixture identification and classification from radiographs: A systematic review. Diagnostics (Basel) 2024; 14(8):806. doi: 10.3390/diagnostics14080806.
  9. Nazir M, Al-Ansari A, Al-Khalifa K, Alhareky M, Gaffar B, Almas K. Global prevalence of periodontal disease and lack of its surveillance. Sci World J 2020; 2020:1-8. doi: 10. 1155/2020/2146160.
  10. Kononen E, Gursoy M, Gursoy UK. Periodontitis: A multifaceted disease of tooth-supporting tissues. J Clin Med 2019; 8(8):1135. doi: 10.3390/jcm8081135.
  11. Armitage GC. Development of a classification system for periodontal diseases and conditions. Ann Periodontol. 1999; 4(1):1-6. doi: 10.1902/annals.1999.4.1.1.
  12. Tonetti MS, Greenwell H, Kornman KS. Staging and grading of periodontitis: Framework and proposal of a new classification and case definition. J Periodontol 2018; 89 (Suppl 1): S159-72. doi: 10.1002/JPER.18-0006.
  13. Garnick JJ, Silverstein L. Periodontal probing: Probe-tip diameter. J Periodontol 2000; 71(1):96-103. doi: 10.1902/jop. 2000.71.1.96.
  14. Wang CW, Huang CT, Lee JH, Li CH, Chang SW, Siao MJ, et al. A benchmark for comparison of dental-radiography analysis algorithms. Med Image Anal 2016; 31:63-76. doi: 10.1016/j.media.2016.02.004.
  15. Chatzopoulos GS, Koidou VP, Tsalikis L, Kaklamanos EG. Clinical applications of artificial intelligence in periodontology: A scoping review. Medicina (Kaunas) 2025; 61(6):1066. doi: 10.3390/medicina61061066.
  16. Zhang J, Deng S, Zou T, Jin Z, Jiang S. Artificial-intelligence models for periodontitis classification: A systematic review. J Dent 2025; 156:105690. doi: 10.1016/j.jdent.2025.105690.
  17. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021; 372:n71. doi: 10.1136/bmj.n71.
  18. Bayrakdar SK, Celik O, Bayrakdar SI, Orhan K, Bilgir E, Odabas A, et al. Success of artificial-intelligence system in determining alveolar bone loss from dental panoramic radiography images. Cumhur Dent J 2020; 23(4):318-24. doi: 10.7126/cumudj.777057.
  19. Chang HJ, Lee SJ, Yong TH, Shin NY, Jang BG, Kim JE, et al. Deep-learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep 2020; 10(1):7531. doi: 10.1038/s41598-020-64509-z.
  20. Jiang L, Chen D, Cao Z, Wu F, Zhu H, Zhu F. A two-stage deep-learning architecture for the radiographic staging of periodontal bone loss. BMC Oral Health 2022; 22(1):106. doi: 10.1186/s12903-022-02119-z.
  21. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep 2019; 9(1):8495. doi: 10.1038/ s41598-019-44839-3.
  22. Cerda Mardini D, Cerda Mardini P, Vicuna Iturriaga DP, Ortuno Borroto DR. Determining the efficacy of a machine-learning model for measuring periodontal bone loss. BMC Oral Health 2024; 24(1):100. doi: 10.1186/s12903-023-03819-w.
  23. Kim J, Lee HS, Song IS, Jung KH. DeNTNet: Deep neural transfer network for the detection of periodontal bone loss using panoramic dental radiographs. Sci Rep 2019; 9(1): 17615. doi: 10.1038/s41598-019-53758-2.
  24. Alotaibi G, Awawdeh M, Farook FF, Aljohani M, Aldhafiri RM, Aldhoayan M. Artificial-intelligence diagnostic tools: Utilizing a convolutional neural network to assess periodontal bone level radiographically—a retrospective study. BMC Oral Health 2022; 22(1):399. doi: 10.1186/s12903-022-02436-3.
  25. Danks R, Bano S, Orishko A, Tan HJ, Moreno Sancho F, D'Aiuto F, et al. Automating periodontal bone-loss measurement via dental-landmark localization. Int J Comput Assist Radiol Surg 2021; 16(7):1189-99. doi: 10.1007/s11548- 021-02431-z.
  26. Ertas K, Pence I, Siseci M, Ay Z. Determination of the stage and grade of periodontitis using machine-learning algorithms. J Periodontal Implant Sci 2022; 53(1):38-53. doi: 10.5051/jpis.2201060053.
  27. Uzun Saylan BC, Baydar O, Yesilova E, Kurt Bayrakdar S, Bilgir E, Bayrakdar IS, et al. Assessing the effectiveness of artificial-intelligence models for detecting alveolar bone loss in periodontal disease. Diagnostics (Basel) 2023; 13(10): 1800. doi: 10.3390/diagnostics13101800.
  28. Ezhov M, Gusarev M, Golitsyna M, Yates M J, Kushnerev E, Tamimi D, et al. Clinically applicable artificial-intelligence system for dental diagnosis with CBCT. Sci Rep 2021; 11(1): 22217. doi: 10.1038/s41598-021-01678-5.
  29. Tsoromokos N, Parinussa S, Claessen F, Moin DA, Loos BG. Estimation of alveolar bone loss in periodontitis using machine learning. Int Dent J 2022; 72(5):621-7. doi: 10. 1016/j.identj.2022.02.009.
  30. Thanathornwong B, Suebnukarn S. Automatic detection of periodontal-compromised teeth in digital panoramic radiographs using faster-regional convolutional neural networks. Imaging Sci Dent 2020; 50(2):169-74. doi: 10.5624/isd. 2020.50.2.169.
  31. Lee JH, Kim DH, Jeong SN, Choi SH. Diagnosis and prediction of periodontally compromised teeth using a deep-learning-based convolutional neural-network algorithm. J Periodontal Implant Sci 2018; 48(2):114-23. doi: 10.5051/jpis. 2018.48.2.114.
  32. Hoss P, Meyer O, Wolfle UC, Wulk A, Meusburger T, Meier L, et al. Detection of periodontal bone loss on periapical radiographs—A diagnostic study using different convolutional neural networks. J Clin Med 2023; 12(22):7189. doi: 10. 3390/jcm12227189.
  33. Bossuyt PM, Leeflang MMG. Developing criteria for including studies. In: Deeks JJ, Bossuyt PM, Gatsonis C, Eds. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. Version 1.0.0. The Cochrane Collaboration; 2009. Available from: https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/uploads/Chapter06-Including-Studies%20(September-2008).pdf.
  34. Yin J, Li Y, Feng M, Li L. Understanding the feelings and experiences of patients with periodontal disease: A qualitative meta-synthesis. Health Qual Life Outcomes 2022; 20(1):126. doi: 10.1186/s12955-022-02042-5.
  35. Armitage GC. Clinical evaluation of periodontal diseases. Periodontol 2000 1995; 7:39-53. doi: 10.1111/j.1600-0757.1995.tb00035.x.
  36. Ansai T, Awano S, Soh I. Problems and future approaches to assessing periodontal disease. Front Public Health 2014; 2:54. doi: 10.3389/fpubh.2014.00054.
  37. Chapple IL. Periodontal diagnosis and treatment—where does the future lie? Periodontol 2000 2009; 51:9-24. doi: 10.1111/j.1600-0757.2009.00319.x.
  38. Ding H, Wu J, Zhao W, Matinlinna JP, Burrow MF, Tsoi JKH. Artificial intelligence in dentistry—A review. Front Dent Med 2023; 4:1085251. doi: 10.3389/fdmed.2023.1085251.
  39. Danial NH, Setiawati D. Convolutional-neural-network-based artificial intelligence in periodontal-disease diagnosis. Interdental J Kedokteran Gigi 2024; 20(1):139-48. doi: 10.46862/interdental.v20i1.8641.
  40. Ozden FO, Ozgonenel O, Ozden B, Aydogdu A. Diagnosis of periodontal diseases using different classification algorithms: A preliminary study. Niger J Clin Pract 2015; 18(3):416-21. doi: 10.4103/1119-3077.151785.
  41. Machado V, Proenca L, Morgado M, Mendes JJ, Botelho J. Accuracy of panoramic radiograph for diagnosing periodontitis compared with clinical examination. J Clin Med 2020; 9(7):2313. doi: 10.3390/jcm9072313.
  42. Walker C, Thomson D, McKenna G. Case study: Limitations of panoramic radiography in the anterior mandible. Dent Update 2009; 36(10):620-3. doi: 10.12968/denu.2009.36.10.620.
  43. Rubiu G, Bologna M, Cellina M, Ce M, Sala D, Pagani R, et al. Teeth segmentation in panoramic dental X-ray using Mask-RCNN. Appl Sci 2023; 13(13):7947. doi: 10.3390/app13137947.
  44. Hilmi A, Patel S, Mirza K, Galicia JC. Efficacy of imaging techniques for the diagnosis of apical periodontitis: A systematic review. Int Endod J 2023; 56 Suppl 3:326-39. doi: 10.1111/iej.13921.
  45. Peretz B, Gotler M, Kaffe I. Common errors in digital panoramic radiographs of patients with mixed dentition and patients with permanent dentition. Int J Dent 2012; 2012:584138. doi: 10.1155/2012/584138.
  46. Svardstrom G, Wennstrom JL. Furcation topography of the maxillary and mandibular first molars. J Clin Periodontol 1988; 15(5):271-5. doi: 10.1111/j.1600-051x.1988.tb01583.x.
  47. Bailly A, Blanc C, Francis E, Guillotin T, Jamal F, Wakim B, et al. Effects of dataset size and interactions on the prediction performance of logistic-regression and deep-learning models. Comput Methods Programs Biomed 2022; 213:106504. doi: 10.1016/j.cmpb.2021.106504.
  48. Tran DT, Gay I, Du XL, Fu Y, Bebermeyer RD, Neumann AS, et al. Assessment of partial-mouth periodontal-examination protocols for periodontitis surveillance. J Clin Periodontol 2014; 41(9):846-52. doi: 10.1111/jcpe.12285.
  49. Hicks SA, Strumke I, Thambawita V, Hammou M, Riegler MA, Halvorsen P, et al. On evaluation metrics for medical applications of artificial intelligence. Sci Rep 2022; 12(1):5979. doi: 10.1038/s41598-022-09954-8.
  50. Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: Current trends and future possibilities. Br J Gen Pract 2018; 68(668):143-4. doi: 10.3399/bjgp18X695213.
  51. Salvi GE, Roccuzzo A, Imber JC, Stahli A, Klinge B, Lang NP. Clinical periodontal diagnosis. Periodontol 2000 2023. doi: 10.1111/prd.12487.