@@ -0,0 +1,79 @@
# Bone-Fracture-Detection
## Introduction
Bone fractures have been a long-standing problem for mankind, and their classification from X-ray images has always depended on human diagnosis, which can sometimes be flawed.
In recent years, machine learning and AI-based solutions have become an integral part of our lives in many areas, including the medical field.
In this project we study this classification problem and, building on previous attempts and research, try to develop and fine-tune a feasible solution for identifying and classifying various bone fractures, using Convolutional Neural Networks (CNNs) with modern architectures such as ResNet, DenseNet, and VGG16.
After multiple fine-tuning attempts across several models, our classification results remained below the confidence threshold defined later in this work. Still, the results we did achieve are promising, and we believe that, with further fine-tuning and more advanced techniques such as feature extraction, systems of this kind (machine learning and deep learning solutions for identifying and classifying bone fractures) could replace the traditional methods currently used in the medical field, with much better results.

## Dataset
We used the MURA dataset of musculoskeletal radiographs, restricted to three body parts for a total of 20,335 images, described below:

| **Part**     | **Normal** | **Fractured** | **Total** |
|--------------|:----------:|--------------:|----------:|
| **Elbow**    |    3160    |          2236 |      5396 |
| **Hand**     |    4330    |          1673 |      6003 |
| **Shoulder** |    4496    |          4440 |      8936 |

The data is split into train and valid folders; each contains one folder per patient, and each patient has 1-3 images of the same bone part.

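The training scripts are not included in this commit. As a rough illustration of how a folder layout like the one above can be loaded into labeled data frames and split 72/18/10 (the split described in the Algorithm section below), here is a minimal sketch; the `Dataset/train` path, the exact directory nesting, and the use of scikit-learn's `train_test_split` are assumptions, not the repository's actual code:

```python
import os

import pandas as pd
from sklearn.model_selection import train_test_split


def build_dataframe(root):
    """Walk root/<Part>/<patient>/<label>/<image> and collect one row per image."""
    rows = []
    for part in os.listdir(root):                  # e.g. Elbow / Hand / Shoulder
        part_dir = os.path.join(root, part)
        for patient in os.listdir(part_dir):
            patient_dir = os.path.join(part_dir, patient)
            for label in os.listdir(patient_dir):  # e.g. fractured / normal
                label_dir = os.path.join(patient_dir, label)
                for img in os.listdir(label_dir):
                    rows.append({'part': part, 'label': label,
                                 'path': os.path.join(label_dir, img)})
    return pd.DataFrame(rows)


df = build_dataframe('Dataset/train')  # assumed location of the training images

# 10% test first, then 20% of the remaining 90% as validation -> 72/18/10 overall
train_val, test = train_test_split(df, test_size=0.10, stratify=df['label'], random_state=42)
train, val = train_test_split(train_val, test_size=0.20, stratify=train_val['label'], random_state=42)
print(len(train), len(val), len(test))
```
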
## Algorithm
Our data contains about 20,000 x-ray images covering three bone types: elbow, hand, and shoulder. After loading all the images into data frames and assigning a label to each image, we split them into 72% training, 18% validation, and 10% test sets. The algorithm starts with pre-processing and data augmentation of the x-ray images, such as horizontal flips. The second step uses a ResNet50 neural network to classify which bone type appears in the image. Once the bone type has been predicted, a dedicated model for that bone type is loaded (one of three models, each trained to identify fractures in a different bone type) and used to detect whether the bone is fractured.
This approach uses the strong image-classification capabilities of ResNet50 to identify the bone type and then applies a part-specific model to determine whether a fracture is present. With this two-step process, the algorithm can analyze x-ray images efficiently and accurately, helping medical professionals diagnose patients quickly.
The algorithm decides whether a prediction should be treated as a positive result, indicating that a bone fracture is present, or a negative result, indicating that it is not. The bone-type classification and fracture-detection results are displayed to the user in the application, allowing for easy interpretation.
This algorithm has the potential to greatly aid medical professionals in detecting bone fractures and improving patient diagnosis and treatment. Its efficient and accurate analysis of x-ray images can speed up the diagnosis process and help patients receive appropriate care.

![Architecture](images/Architecture.png)

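As a usage sketch, this two-step flow maps directly onto the `predict` helper in `predictions.py` (the image path below is only an example):

```python
from predictions import predict

img_path = "test/Elbow/fractured/elbow1.png"  # example path; point this at any test x-ray

# Step 1: classify the body part (Elbow / Hand / Shoulder)
bone_type = predict(img_path)

# Step 2: run the fracture model that was trained for that body part
status = predict(img_path, bone_type)

print(bone_type, status)  # e.g. "Elbow fractured"
```
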
## Results
### Body Part Prediction

<img src="plots/BodyPartAcc.png" width=300> <img src="plots/BodyPartLoss.png" width=300>

### Fracture Prediction
#### Elbow

<img src="plots/FractureDetection/Elbow/_Accuracy.jpeg" width=300> <img src="plots/FractureDetection/Elbow/_Loss.jpeg" width=300>

#### Hand
<img src="plots/FractureDetection/Hand/_Accuracy.jpeg" width=300> <img src="plots/FractureDetection/Hand/_Loss.jpeg" width=300>

#### Shoulder
<img src="plots/FractureDetection/Shoulder/_Accuracy.jpeg" width=300> <img src="plots/FractureDetection/Shoulder/_Loss.jpeg" width=300>

# Installation
### PyCharm IDE
### Python v3.7.x
### Install requirements.txt

* customtkinter~=5.0.3
* PyAutoGUI~=0.9.53
* PyGetWindow~=0.0.9
* Pillow~=8.4.0
* numpy~=1.19.5
* tensorflow~=2.6.2
* keras~=2.6.0
* pandas~=1.1.5
* matplotlib~=3.3.4
* scikit-learn~=0.24.2
* colorama~=0.4.5

Run mainGUI.py

# GUI
### Main
<img src="images/GUI/main.png" width=400>

### Info-Rules
<img src="images/GUI/Rules.png" width=400>

### Test Normal & Fractured
<img src="images/GUI/normal.png" width=300> <img src="images/GUI/fractured.png" width=300>
@@ -0,0 +1,130 @@
import os
from tkinter import filedialog

import customtkinter as ctk
import pyautogui
import pygetwindow
from PIL import ImageTk, Image

from predictions import predict

# global variables
project_folder = os.path.dirname(os.path.abspath(__file__))
folder_path = project_folder + '/images/'

filename = ""


class App(ctk.CTk):
    def __init__(self):
        super().__init__()

        self.title("Bone Fracture Detection")
        self.geometry(f"{500}x{740}")
        self.head_frame = ctk.CTkFrame(master=self)
        self.head_frame.pack(pady=20, padx=60, fill="both", expand=True)
        self.main_frame = ctk.CTkFrame(master=self)
        self.main_frame.pack(pady=20, padx=60, fill="both", expand=True)
        self.head_label = ctk.CTkLabel(master=self.head_frame, text="Bone Fracture Detection",
                                       font=(ctk.CTkFont("Roboto"), 28))
        self.head_label.pack(pady=20, padx=10, anchor="nw", side="left")
        img1 = ctk.CTkImage(Image.open(folder_path + "info.png"))

        # info button in the header opens the rules image
        self.info_btn = ctk.CTkButton(master=self.head_frame, text="", image=img1, command=self.open_image_window,
                                      width=40, height=40)
        self.info_btn.pack(pady=10, padx=10, anchor="nw", side="right")

        self.info_label = ctk.CTkLabel(master=self.main_frame,
                                       text="Bone fracture detection system, upload an x-ray image for fracture detection.",
                                       wraplength=300, font=(ctk.CTkFont("Roboto"), 18))
        self.info_label.pack(pady=10, padx=10)

        self.upload_btn = ctk.CTkButton(master=self.main_frame, text="Upload Image", command=self.upload_image)
        self.upload_btn.pack(pady=0, padx=1)

        self.frame2 = ctk.CTkFrame(master=self.main_frame, fg_color="transparent", width=256, height=256)
        self.frame2.pack(pady=10, padx=1)

        # placeholder image shown until the user uploads an x-ray
        img = Image.open(folder_path + "Question_Mark.jpg")
        img_resized = img.resize((int(256 / img.height * img.width), 256))  # keep aspect ratio, 256px height
        img = ImageTk.PhotoImage(img_resized)

        self.img_label = ctk.CTkLabel(master=self.frame2, text="", image=img)
        self.img_label.pack(pady=1, padx=10)

        self.predict_btn = ctk.CTkButton(master=self.main_frame, text="Predict", command=self.predict_gui)
        self.predict_btn.pack(pady=0, padx=1)

        self.result_frame = ctk.CTkFrame(master=self.main_frame, fg_color="transparent", width=200, height=100)
        self.result_frame.pack(pady=5, padx=5)

        self.loader_label = ctk.CTkLabel(master=self.main_frame, width=100, height=100, text="")
        self.loader_label.pack(pady=3, padx=3)

        self.res1_label = ctk.CTkLabel(master=self.result_frame, text="")
        self.res1_label.pack(pady=5, padx=20)

        self.res2_label = ctk.CTkLabel(master=self.result_frame, text="")
        self.res2_label.pack(pady=5, padx=20)

        # packed only after a prediction has been made
        self.save_btn = ctk.CTkButton(master=self.result_frame, text="Save Result", command=self.save_result)
        self.save_label = ctk.CTkLabel(master=self.result_frame, text="")

    def upload_image(self):
        """Let the user pick an x-ray image and show it in the preview frame."""
        global filename
        f_types = [("All Files", "*.*")]
        filename = filedialog.askopenfilename(filetypes=f_types, initialdir=project_folder + '/test/Wrist/')
        self.save_label.configure(text="")
        self.res2_label.configure(text="")
        self.res1_label.configure(text="")
        self.img_label.configure(self.frame2, text="", image="")
        img = Image.open(filename)
        img_resized = img.resize((int(256 / img.height * img.width), 256))  # keep aspect ratio, 256px height
        img = ImageTk.PhotoImage(img_resized)
        self.img_label.configure(self.frame2, image=img, text="")
        self.img_label.image = img
        self.save_btn.pack_forget()
        self.save_label.pack_forget()

    def predict_gui(self):
        """Run the two-step prediction: bone type first, then the matching fracture model."""
        global filename
        bone_type_result = predict(filename)
        result = predict(filename, bone_type_result)
        print(result)
        if result == 'fractured':
            self.res2_label.configure(text_color="RED", text="Result: Fractured", font=(ctk.CTkFont("Roboto"), 24))
        else:
            self.res2_label.configure(text_color="GREEN", text="Result: Normal", font=(ctk.CTkFont("Roboto"), 24))
        bone_type_result = predict(filename, "Parts")
        self.res1_label.configure(text="Type: " + bone_type_result, font=(ctk.CTkFont("Roboto"), 24))
        print(bone_type_result)
        self.save_btn.pack(pady=10, padx=1)
        self.save_label.pack(pady=5, padx=20)

    def save_result(self):
        """Save a cropped screenshot of the application window with the prediction result."""
        tempdir = filedialog.asksaveasfilename(parent=self, initialdir=project_folder + '/PredictResults/',
                                               title='Please select a directory and filename', defaultextension=".png")
        screenshots_dir = tempdir
        window = pygetwindow.getWindowsWithTitle('Bone Fracture Detection')[0]
        left, top = window.topleft
        right, bottom = window.bottomright
        pyautogui.screenshot(screenshots_dir)
        im = Image.open(screenshots_dir)
        im = im.crop((left + 10, top + 35, right - 10, bottom - 10))
        im.save(screenshots_dir)
        self.save_label.configure(text_color="WHITE", text="Saved!", font=(ctk.CTkFont("Roboto"), 16))

    def open_image_window(self):
        """Show the info/rules image in an external viewer."""
        im = Image.open(folder_path + "rules.jpeg")
        im = im.resize((700, 700))
        im.show()


if __name__ == "__main__":
    app = App()
    app.mainloop()
@@ -0,0 +1,75 @@
import os

from colorama import Fore

from predictions import predict


# load images to predict from paths; expected test layout:
#
# test/
#   Elbow/
#     fractured/   elbow1.jpg, elbow2.png, ...
#     normal/      elbow1.png, elbow2.jpg, ...
#   Hand/
#     fractured/ ...
#     normal/    ...
#   Shoulder/
#     fractured/ ...
#     normal/    ...
def load_path(path):
    dataset = []
    for body in os.listdir(path):
        body_part = body
        path_p = path + '/' + str(body)
        for lab in os.listdir(path_p):
            label = lab
            path_l = path_p + '/' + str(lab)
            for img in os.listdir(path_l):
                img_path = path_l + '/' + str(img)
                dataset.append(
                    {
                        'body_part': body_part,
                        'label': label,
                        'image_path': img_path,
                        'image_name': img
                    }
                )
    return dataset


categories_parts = ["Elbow", "Hand", "Shoulder"]
categories_fracture = ['fractured', 'normal']


def reportPredict(dataset):
    part_count = 0
    status_count = 0

    print(Fore.YELLOW +
          '{0: <28}'.format('Name') +
          '{0: <14}'.format('Part') +
          '{0: <20}'.format('Predicted Part') +
          '{0: <20}'.format('Status') +
          '{0: <20}'.format('Predicted Status'))
    for img in dataset:
        # step 1: predict the body part; step 2: predict fracture status with the matching model
        body_part_predict = predict(img['image_path'])
        fracture_predict = predict(img['image_path'], body_part_predict)
        if img['body_part'] == body_part_predict:
            part_count = part_count + 1
        if img['label'] == fracture_predict:
            status_count = status_count + 1
            color = Fore.GREEN
        else:
            color = Fore.RED
        print(color +
              '{0: <28}'.format(img['image_name']) +
              '{0: <14}'.format(img['body_part']) +
              '{0: <20}'.format(body_part_predict) +
              '{0: <20}'.format(img['label']) +
              '{0: <20}'.format(fracture_predict))

    print(Fore.BLUE + '\npart acc: ' + str("%.2f" % (part_count / len(dataset) * 100)) + '%')
    print(Fore.BLUE + 'status acc: ' + str("%.2f" % (status_count / len(dataset) * 100)) + '%')
    return


THIS_FOLDER = os.path.dirname(os.path.abspath(__file__))
test_dir = THIS_FOLDER + '/test/'
reportPredict(load_path(test_dir))
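A possible extension, not part of this commit: since scikit-learn is already in requirements.txt, the same loop could also collect predictions and print a confusion matrix per task. A sketch, assuming `load_path`, `predict`, `categories_parts`, and `categories_fracture` from above are in scope:

```python
from sklearn.metrics import confusion_matrix


def report_confusion(dataset):
    true_parts, pred_parts = [], []
    true_status, pred_status = [], []
    for img in dataset:
        part = predict(img['image_path'])
        status = predict(img['image_path'], part)
        true_parts.append(img['body_part'])
        pred_parts.append(part)
        true_status.append(img['label'])
        pred_status.append(status)
    # rows are true classes, columns are predicted classes
    print(confusion_matrix(true_parts, pred_parts, labels=categories_parts))
    print(confusion_matrix(true_status, pred_status, labels=categories_fracture))
```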
@@ -0,0 +1,48 @@
import numpy as np
import tensorflow as tf
from keras.preprocessing import image

# the models are loaded once, when "predictions.py" is imported
model_elbow_frac = tf.keras.models.load_model("weights/ResNet50_Elbow_frac.h5")
model_hand_frac = tf.keras.models.load_model("weights/ResNet50_Hand_frac.h5")
model_shoulder_frac = tf.keras.models.load_model("weights/ResNet50_Shoulder_frac.h5")
model_parts = tf.keras.models.load_model("weights/ResNet50_BodyParts.h5")

# categories for each result by index

# 0 - Elbow, 1 - Hand, 2 - Shoulder
categories_parts = ["Elbow", "Hand", "Shoulder"]

# 0 - fractured, 1 - normal
categories_fracture = ['fractured', 'normal']


# takes an image path and a model name; the default model is "Parts"
# "Parts"   - predicts the bone type (3 classes)
# otherwise - predicts fractured/normal with the model for that bone type
def predict(img, model="Parts"):
    size = 224
    if model == 'Parts':
        chosen_model = model_parts
    else:
        if model == 'Elbow':
            chosen_model = model_elbow_frac
        elif model == 'Hand':
            chosen_model = model_hand_frac
        elif model == 'Shoulder':
            chosen_model = model_shoulder_frac

    # load the image as 224x224 RGB (the input size the models were trained with)
    temp_img = image.load_img(img, target_size=(size, size))
    x = image.img_to_array(temp_img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    prediction = np.argmax(chosen_model.predict(images), axis=1)

    # map the predicted index to its category name
    if model == 'Parts':
        prediction_str = categories_parts[prediction.item()]
    else:
        prediction_str = categories_fracture[prediction.item()]

    return prediction_str
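The `weights/*.h5` files above come from training runs that are not included in this commit. For context, here is a minimal, hedged sketch of how one of the per-part fracture classifiers could be fine-tuned from ResNet50 in Keras; the directory path, layer sizes, and epoch count are assumptions, not the values used for the released weights:

```python
# Illustrative transfer-learning sketch only; not the actual training code.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# assumed layout: Dataset/train/Elbow/<fractured|normal>/*.png
# no rescaling here, to stay consistent with the raw-pixel input used in predict() above
datagen = ImageDataGenerator(horizontal_flip=True, validation_split=0.2)
train_gen = datagen.flow_from_directory("Dataset/train/Elbow", target_size=(224, 224),
                                        class_mode="sparse", subset="training")
val_gen = datagen.flow_from_directory("Dataset/train/Elbow", target_size=(224, 224),
                                      class_mode="sparse", subset="validation")

base = ResNet50(include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the ImageNet backbone; top blocks can be unfrozen later

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # alphabetical classes: 0 = fractured, 1 = normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)
model.save("weights/ResNet50_Elbow_frac.h5")
```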