Search results for: contrast-enhanced computed tomography scan
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1354

994 Utility of CT Perfusion Imaging for Diagnosis and Management of Delayed Cerebral Ischaemia Following Subarachnoid Haemorrhage

Authors: Abdalla Mansour, Dan Brown, Adel Helmy, Rikin Trivedi, Mathew Guilfoyle

Abstract:

Introduction: Diagnosing delayed cerebral ischaemia (DCI) following aneurysmal subarachnoid haemorrhage (SAH) can be challenging, particularly in poor-grade patients. Objectives: This study sought to assess the value of routine CT perfusion (CTP) imaging in identifying (or excluding) DCI and in guiding management. Methods: Eight-year retrospective neuroimaging study at a large UK neurosurgical centre. Subjects comprised a random sample of adult patients with confirmed aneurysmal SAH who had a CTP scan during their inpatient stay over an 8-year period (May 2014 - May 2022). Data were collected from the electronic patient record and PACS. Variables included age, WFNS scale, aneurysm site, treatment, timing of CTP, radiologist report, and DCI management. Results: Over eight years, 916 patients were treated for aneurysmal SAH; this study focused on 466 randomly selected patients. Of this sample, 181 (38.84%) had one or more CTP scans following brain aneurysm treatment (318 scans in total). The first CTP scan in each patient was performed 1-20 days after ictus (median 4 days). There was radiological evidence of DCI in 83 patients, and no reversible ischaemia was found in 80; findings were equivocal in the remaining 18. Of the 103 patients treated with clipping, 49 had radiological evidence of DCI, compared with 31 of 69 patients treated with endovascular embolization; the remaining 9 patients had either unsecured aneurysms or non-aneurysmal SAH. Of the patients with radiological evidence of DCI, 65 had a treatment change following the CTP directed at improving cerebral perfusion. In contrast, treatment was not changed for the 61 patients without radiological evidence of DCI. Conclusion: CTP is a useful adjunct to clinical assessment in the diagnosis of DCI and is helpful in identifying patients who may benefit from intensive therapy and those in whom it is unlikely to be effective.

Keywords: SAH, vasospasm, aneurysm, delayed cerebral ischemia

Procedia PDF Downloads 48
993 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of fully nonparametric regression estimators such as loess is compared to estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. A backward elimination variable selection procedure based on the Akaike Information Criterion is then proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included for cases where the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
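As a rough illustration of the backward elimination idea, here is a minimal Python sketch using ordinary least squares as the working model. The abstract's procedure uses additive and fully nonparametric fits; the OLS stand-in, the synthetic data, and all function names below are our assumptions, not the authors' implementation.

```python
import numpy as np

def aic_ols(X, y):
    """AIC of an ordinary least squares fit of y on X (plus intercept),
    using the Gaussian log-likelihood up to an additive constant."""
    n = len(y)
    A = np.hstack([np.ones((n, 1)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return n * np.log(rss / n) + 2 * A.shape[1]

def backward_elimination(X, y):
    """Greedily drop one covariate at a time as long as AIC improves;
    returns the indices of the covariates kept."""
    active = list(range(X.shape[1]))
    best = aic_ols(X[:, active], y)
    improved = True
    while improved and active:
        improved = False
        for j in list(active):
            trial = [c for c in active if c != j]
            a = aic_ols(X[:, trial], y)
            if a < best:
                best, active, improved = a, trial, True
                break
    return active
```

Replacing `aic_ols` with an AIC computed from an additive or nonparametric fit recovers the spirit of the proposed procedure.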

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion

Procedia PDF Downloads 248
992 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in the endodontic field. Various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trephine burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both guidance modalities to retrieve the fragments, and the success rate, canal aberration, treatment time and volumetric changes were measured. Statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was not statistically significant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 75
991 Lessons from Patients Expired due to Severe Head Injuries Treated in Intensive Care Unit of Lady Reading Hospital Peshawar

Authors: Mumtaz Ali, Hamzullah Khan, Khalid Khanzada, Shahid Ayub, Aurangzeb Wazir

Abstract:

Objective: To analyse the deaths of patients treated in the neurosurgical ICU for severe head injuries from different perspectives, and to use the evaluation of the data so obtained to help improve health care delivery to this group of patients in the ICU. Study Design: A descriptive study based on retrospective analysis of patients presenting to the neurosurgical ICU of Lady Reading Hospital, Peshawar. Study Duration: 1st January 2009 to 31st December 2009. Material and Methods: The clinical record of all patients presenting with clinical, radiological and surgical features of severe head injury who expired in the neurosurgical ICU was collected. A separate proforma recorded age, sex, time of arrival and death, cause of head injury, radiological features, clinical parameters, and the surgical and non-surgical treatment given. The average duration of stay and the demographic and domiciliary representation of these patients were noted. The record was analyzed accordingly for discussion and recommendations. Results: Of the total 112 (n=112) patients who expired in one year in the neurosurgical ICU, young adults made up the majority, 64 (57.14%), followed by children, 34 (30.35%), and then the elderly, 10 (8.92%). Road traffic accidents were the major cause of presentation, 75 (66.96%), followed by history of fall, 23 (20.53%), and then firearm injuries, 13 (11.60%). The predominant CT scan features on presentation were cerebral edema and midline shift (diffuse neuronal injury) in 46 (41.07%), followed by cerebral contusions in 28 (25%). Correctable surgical causes were present in only 18 patients (16.07%), and the majority, 94 (83.92%), were given conservative management. Of the 69 (n=69) patients in whom the CT scan was repeated, 62 (89.85%) showed worsening of the initial abnormalities, while in 7 cases (10.14%) the features were static.
Among the non-surgical cases, both ventilatory therapy in 7 (6.25%) and tracheostomy in 39 (34.82%) failed to change the outcome. The maximum stay in the neuro ICU leading up to death was 48 hours in 35 (31.25%) cases, followed by 24 hours in 31 (27.67%), one week in 24 (21.42%), and 72 hours in 16 (14.28%). Only 6 (5.35%) patients survived more than a week. Patients were received from almost all the districts of NWFP except the Hazara division; there were some Afghan refugees as well. Conclusion: Mortality following head injury is alarmingly high despite repeated claims of professional and administrative improvement. Even the ICU could not change the outcome in line with the desired aims and objectives in the present set-up. A rethinking is needed at both the individual and institutional level among the concerned quarters, with a clear aim at more scientific grounds; only then can the desired results be achieved.

Keywords: Glasgow Coma Scale, pediatrics, geriatrics, Peshawar

Procedia PDF Downloads 327
990 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood. Childhood obesity is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques used to assess visceral fat. However, they carry the risk of radiation exposure and are expensive procedures; consequently, they are seldom used to assess visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6–16 years). Their visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, the homeostatic model assessment for insulin resistance (HOMA-IR) and liver enzymes (ALT). Results: VFT assessed via ultrasound correlated strongly with BMI and HOMA-IR; the AUC for VFT as a predictor of insulin resistance was 0.858, with a cut-off point of >2.98. VFT also correlated positively with serum triglycerides and serum ALT, and negatively with HDL.
Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.
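An AUC and a cut-off point of the kind reported here come from standard ROC machinery. A minimal sketch (the rank-based AUC and Youden-index cut-off are standard constructions; the toy data in the usage example are ours, not the study's measurements):

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, with ties counted as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def youden_cutoff(scores, labels):
    """Cut-off maximizing Youden's index J = sensitivity + specificity - 1,
    predicting positive when score > cut-off."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores > c
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```

With perfectly separated scores (e.g. negatives 1-3, positives 10-12), `auc_mann_whitney` returns 1.0 and the Youden cut-off falls at the highest negative score.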

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 102
989 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems based on Whole Mammogram and Ultrasound Scan Classification

Authors: Ian Omung'a

Abstract:

Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors being as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, when the cancer has metastasized and/or the prognosis has become terminal, Full Field Digital [FFD] Mammography remains an effective screening technique that leads to early detection, where in most cases successful interventions can be made to control or eliminate the tumors altogether. FFD Mammograms have been proven to be markedly more effective when used together with Computer-Aided Detection and Diagnosis [CADe] systems, which rely on algorithmic implementations of Deep Learning techniques in Computer Vision to carry out deep pattern recognition comparable to the level of a human radiologist and to determine whether specific areas of interest in the mammogram image portray abnormalities and, if so, whether those abnormalities indicate a benign or malignant tumor. Within this paper, we review emergent Deep Learning techniques that will prove relevant to the development of state-of-the-art FFD Mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, as well as the creation of a standardized large-scale mammography dataset as a benchmark for the evaluation of CADe systems. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will differ.

Keywords: breast cancer diagnosis, computer aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision

Procedia PDF Downloads 76
988 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

For patient treatment planning quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate, and commercially available software such as RadCalc can be used instead. RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs to RadCalc and to the linac via Mosaiq (version 2.5), following the DICOM RT radiotherapy protocol. Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created on patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc.
Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For the test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ±0.8%). For the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ±1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all plans was better than (2.3%; ±1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both 3D-CRT and RapidArc techniques. The perspectives of this project include the validation of RadCalc for the Tomotherapy machine installed at the Centre Antoine Lacassagne.

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 193
987 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient [µ(E)] of any substance, for energy E, is a sum of contributions from Compton scattering [µCom(E)] and the photoelectric effect [µPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents of the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from the two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thicknesses lAl, with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state detector and a GdOS scintillator detector.
In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20% while X is within 1%. For high-Zeff materials like KOH, Zeff^x is underestimated by 22% while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. We also find that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials, whereas the effect of the source must be considered an important factor when calculating the coefficients of inversion.
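Once the spectrum- and detector-dependent coefficients are tabulated, the inversion itself is a 2x2 linear solve of <µ(V1)>, <µ(V2)> for X and Y. A sketch with entirely hypothetical coefficient values (real ones depend on S(E,V) and D(E), as the abstract stresses):

```python
import numpy as np

# Hypothetical calibration coefficients a(V), b(V) in
#     <mu(V)> = a(V) * X + b(V) * Y,
# with X = rho_e and Y = rho_e * Zeff**x. On a real scanner these
# come from the source spectrum and detector efficiency; the numbers
# below are made up purely for illustration.
COEFF = {100: (0.50, 0.020), 140: (0.55, 0.009)}

def invert_dect(mu100, mu140, coeff=COEFF):
    """Solve the 2x2 linear system for (X, Y) from the averaged
    attenuation measured at the two tube voltages."""
    A = np.array([coeff[100], coeff[140]], dtype=float)
    mu = np.array([mu100, mu140], dtype=float)
    return np.linalg.solve(A, mu)  # (X, Y)
```

The inversion is well posed only when the two rows of the coefficient matrix are linearly independent, which is why two distinct voltages (V1 ≠ V2) are required.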

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 382
986 The Importance of the Fluctuation in Blood Sugar and Blood Pressure of Insulin-Dependent Diabetic Patients with Chronic Kidney Disease

Authors: Hitoshi Minakuchi, Izumi Takei, Shu Wakino, Koichi Hayashi, Hiroshi Itoh

Abstract:

Objectives: Among type 2 diabetic patients with CKD (chronic kidney disease), insulin resistance, impaired gluconeogenesis in the kidney and reduced degradation of insulin are recognized, and we have observed different patterns of blood sugar fluctuation between CKD and non-CKD patients. On the other hand, non-dipper type blood pressure change is a risk factor for organ damage and mortality. We performed a cross-sectional study to elucidate the characteristics of the fluctuation of blood glucose and blood pressure in insulin-treated diabetic patients with chronic kidney disease. Methods: From March 2011 to April 2013, at the Ichikawa General Hospital of Tokyo Dental College, we recruited 20 outpatients. All participants were insulin-treated type 2 diabetics with CKD. We collected serum and urine samples for several hormone measurements, performed CGMS (continuous glucose measurement system), ABPM (ambulatory blood pressure monitoring), brain computed tomography, carotid artery thickness, ankle brachial index, PWV and CVR-R, and analyzed these data statistically. Results: Among the 20 participants, hypoglycemia, defined as blood glucose below 70 mg/dl on CGMS, was detected in 9 participants (45.0%). Hypoglycemia was associated with lower eGFR (29.8±6.2 vs 41.3±8.5 ml/min, P<0.05), lower HbA1c (6.44±0.57% vs 7.53±0.49%), higher PWV (1858±97.3 vs 1665±109.2 cm/s), higher serum glucagon (194.2±34.8 vs 117.0±37.1 pg/ml), higher urinary free cortisol (53.8±12.8 vs 34.8±7.1 μg/day), and higher urinary metanephrine (0.162±0.031 vs 0.076±0.029 mg/day). Non-dipper type blood pressure change on ABPM was detected in 8 of the 9 participants with hypoglycemia (88.9%) and in 4 of the 11 participants (36.4%) without hypoglycemia. Multiple logistic-regression analysis revealed that hypoglycemia is an independent factor for non-dipper type blood pressure change.
Conclusions: Among insulin-treated type 2 diabetic patients with CKD, hypoglycemic events were frequently detected and may be associated with organ derangement mediated by non-dipper type blood pressure change.

Keywords: chronic kidney disease, hypoglycemia, non-dipper type blood pressure change, diabetic patients

Procedia PDF Downloads 393
985 Upgrading of Problem-Based Learning with Educational Multimedia to the Undergraduate Students

Authors: Sharifa Alduraibi, Abir El Sadik, Ahmed Elzainy, Alaa Alduraibi, Ahmed Alsolai

Abstract:

Introduction: Problem-based learning (PBL) is an active, student-centered educational modality driven by the students' interest, which requires continuous motivation to improve their engagement. The new era of professional information technology has facilitated the utilization of educational multimedia, such as videos, soundtracks, and photographs, that promote students' learning. The aim of the present study was to introduce multimedia-enriched PBL scenarios for the first time in the College of Medicine, Qassim University, as an incentive for better student engagement; in addition, students' performance and satisfaction were evaluated. Methodology: Two multimedia-enhanced PBL scenarios were implemented for the third-year students in the urinary system block. Radiological images (plain CT scan and X-ray of the abdomen and renal nuclear scan), correlated with their pathological gross photographs, were added to the scenarios. One week before the first sessions, pre-recorded orientation videos were provided to the PBL tutors to clarify the multimedia incorporated in the scenarios. Two other traditional PBL scenarios, devoid of multimedia demonstrating the pathological and radiological findings, were designed for comparison. Results and Discussion: The formative assessment results at the end of the two PBL modalities were compared. The comparison revealed a significant increase in students' engagement, critical thinking and practical reasoning skills during the multimedia-enhanced sessions. A student perception survey showed great satisfaction with the new strategy. Conclusion: It can be concluded from the current work that multimedia creates a technology-based teaching strategy that inspires students toward self-directed thinking and promotes their overall achievement.

Keywords: multimedia, pathology and radiology images, problem-based learning, videos

Procedia PDF Downloads 132
984 Merging and Comparing Ontologies Generically

Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo

Abstract:

Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, aligning and merging operations in these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, labeled (I), (C), (A), and (R), respectively, which are defined on an ontology merging system (D, ~, M), where D is a non-empty set of the ontologies concerned, ~ is a binary relation on D modeling ontology aligning, and M is a partial binary operation on D modeling ontology merging. Given an ontology repository, a finite set O ⊆ D, its merging closure Ô is the smallest set of ontologies that contains the repository and is closed with respect to merging. If (I), (C), (A), and (R) are satisfied, then both D and Ô are naturally partially ordered by merging, and Ô is finite and can be computed, compared, and sorted efficiently, including selecting and querying specific elements, e.g., maximal and minimal ontologies. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the properties (I), (C), (A), and (R), so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
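The merging closure admits a direct fixpoint computation: repeatedly merge aligned pairs until no new ontology appears. A toy Python sketch, modeling ontologies as sets of concept names, alignment as sharing a concept, and merging as union (our simplification standing in for the V-alignment/pushout construction of the paper):

```python
def merging_closure(repo, aligned, merge):
    """Smallest superset of `repo` closed under `merge` on aligned pairs:
    iterate to a fixpoint, adding each merged ontology not yet present."""
    closure = set(repo)
    changed = True
    while changed:
        changed = False
        for a in list(closure):
            for b in list(closure):
                if aligned(a, b):
                    m = merge(a, b)
                    if m not in closure:
                        closure.add(m)
                        changed = True
    return closure

# Toy repository: ontologies as frozensets of concept names.
repo = {frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"d"})}
closure = merging_closure(repo, lambda x, y: bool(x & y), frozenset.union)
```

Here the two aligned ontologies {a,b} and {b,c} generate the merged ontology {a,b,c}, while {d}, aligned with nothing else, contributes no new element; idempotence and commutativity of union are what keep the closure finite and small.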

Keywords: ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout

Procedia PDF Downloads 868
983 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of the retina. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyzes these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture, using residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment and improved post-treatment outcomes.
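For readers unfamiliar with the residual architecture mentioned above, the defining identity-shortcut computation can be sketched in a few lines of NumPy. This is a forward pass only; the weights, shapes, and the omission of cyclic pooling are our simplifications, not the author's model:

```python
import numpy as np

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two-layer residual block: the input x is added back (identity
    shortcut) before the final activation. This shortcut is what lets
    very deep networks train without the signal vanishing."""
    return relu(x + w2 @ relu(w1 @ x))
```

With zero weights the block reduces to the identity on non-negative inputs, illustrating why stacking many such blocks does not degrade an already-good representation.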

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 154
982 Comparison with Mechanical Behaviors of Mastication in Teeth Movement Cases

Authors: Jae-Yong Park, Yeo-Kyeong Lee, Hee-Sun Kim

Abstract:

Purpose: This study investigates the mechanical behaviors of mastication under various teeth-movement conditions. Three masticatory cases were considered: a general case and two teeth-movement cases. The general case includes the common arrangement of all teeth; in the two teeth-movement cases, following extraction of tooth no. 14, the molar tooth has either moved halfway into the no. 14 tooth seat or fully into the no. 14 tooth seat. Materials and Methods: To analyze these cases, a three-dimensional finite element (FE) model of the skull was generated based on computed tomography images (964 DICOM files) of a 38-year-old male with normal occlusion status. An FE model of the general occlusal case was used to develop the CAE procedure, which was then applied to the FE models of the other occlusal cases. Displacement controls, as the loading condition, were applied to simulate occlusal behaviors in all cases. From the FE analyses, the von Mises stress distribution of the skull and teeth was observed. The von Mises (effective) stress has been widely used to determine the absolute stress value regardless of stress direction and the yield characteristics of materials. Results: In the general occlusal case, high stress was distributed over the periodontal area of the mandible under the molar teeth when the load was transmitted in the coronal-apical direction. Following the stress propagation from teeth to cranium, the stress distribution decreased as it propagated from the molar teeth to the infratemporal crest of the greater wing of the sphenoid bone and the lateral pterygoid plate. In the two teeth-movement cases, high stresses were distributed over the periodontal area of the mandible under the moved molar teeth.
Conclusion: The mechanical behaviors of the general case and the two teeth-movement cases during the masticatory process were predicted and qualitatively validated. Displacement controls as the loading condition effectively simulated occlusal behaviors in the two molar-teeth-movement cases.
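The von Mises (effective) stress used throughout such FE analyses is computed from the deviatoric part of the Cauchy stress tensor; a short sketch (a standard formula; the uniaxial check below is ours, not from the study):

```python
import numpy as np

def von_mises(sigma):
    """Von Mises stress from a 3x3 Cauchy stress tensor:
    sqrt(3/2 * s:s), where s = sigma - tr(sigma)/3 * I is the
    deviatoric (shape-changing) part of the stress."""
    s = sigma - (np.trace(sigma) / 3.0) * np.eye(3)
    return float(np.sqrt(1.5 * np.sum(s * s)))
```

Under uniaxial tension the von Mises stress equals the applied stress, and under pure hydrostatic pressure it vanishes, which is exactly why it is used as a direction-independent effective stress measure.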

Keywords: cranium, finite element analysis, mandible, masticatory action, occlusal force

Procedia PDF Downloads 375
981 Growth and Bone Health in Children following Liver Transplantation

Authors: Faris Alkhalil, Rana Bitar, Amer Azaz, Hisham Natour, Noora Almeraikhi, Mohamad Miqdady

Abstract:

Background: Children with liver transplantation are achieving very good survival and so there is now a need to concentrate on achieving good health in these patients and preventing disease. Immunosuppressive medications have side effects that need to be monitored and if possible avoided. Glucocorticoids and calcineurin inhibitors are detrimental to bone and mineral homeostasis in addition steroids can also affect linear growth. Steroid sparing regimes in renal transplant children has shown to improve children’s height. Aim: We aim to review the growth and bone health of children post liver transplant by measuring bone mineral density (BMD) using dual energy X-ray absorptiometry (DEXA) scan and assessing if there is a clear link between poor growth and impaired bone health and use of long term steroids. Subjects and Methods: This is a single centre retrospective Cohort study, we reviewed the medical notes of children (0-16 years) who underwent a liver transplantation between November 2000 to November 2016 and currently being followed at our centre. Results: 39 patients were identified (25 males and 14 females), the median transplant age was 2 years (range 9 months - 16 years), and the median follow up was 6 years. Four patients received a combined transplant, 2 kidney and liver transplant and 2 received a liver and small bowel transplant. The indications for transplant included, Biliary Atresia (31%), Acute Liver failure (18%), Progressive Familial Intrahepatic Cholestasis (15%), transplantable metabolic disease (10%), TPN related liver disease (8%), Primary Hyperoxaluria (5%), Hepatocellular carcinoma (3%) and other causes (10%). 36 patients (95%) were on a calcineurin inhibitor (34 patients were on Tacrolimus and 2 on Cyclosporin). The other three patients were on Sirolimus. Low dose long-term steroids was used in 21% of the patients. A considerable proportion of the patients had poor growth. 
15% were below the 3rd centile for weight for age and 21% were below the 3rd centile for height for age. Most of our patients with poor growth were not on long-term steroids. 49% of patients had a DEXA scan post transplantation. 21% of these children had low bone mineral density, and one patient met osteoporosis criteria with a vertebral fracture. Most of our patients with impaired bone health were not on long-term steroids. 20% of the patients who did not undergo a DEXA scan developed long bone fractures, and 50% of them were on long-term steroids, which may suggest impaired bone health in these patients. Summary and Conclusion: The incidence of impaired bone health, although studied in a limited number of patients, was high. Early recognition and treatment should be instituted to avoid fractures and improve bone health. Many of the patients were below the 3rd centile for weight and height; however, there was no clear relationship between steroid use and impaired bone health, reduced weight, or reduced linear height.

Keywords: bone, growth, pediatric, liver, transplantation

Procedia PDF Downloads 262
980 Rare Diagnosis in Emergency Room: Moyamoya Disease

Authors: Ecem Deniz Kırkpantur, Ozge Ecmel Onur, Tuba Cimilli Ozturk, Ebru Unal Akoglu

Abstract:

Moyamoya disease is a chronic progressive cerebrovascular disease characterized by bilateral stenosis or occlusion of the arteries around the circle of Willis with prominent arterial collateral circulation. The occurrence of Moyamoya disease is related to immune, genetic, and other factors. There is no curative treatment for Moyamoya disease; secondary prevention for patients with symptomatic disease is largely centered on surgical revascularization techniques. We present a 62-year-old male who presented with headache and vision loss for 2 days. He had previously been diagnosed with hypertension and glaucoma. On physical examination, left eye movements were restricted medially, and both eyes were hyperemic with painful movements. The remainder of the neurological and physical examination was normal. His vital signs and laboratory results were within normal limits. Computed tomography (CT) showed dilated vascular structures around both lateral ventricles and atherosclerotic changes in the walls of the internal carotid artery (ICA). Magnetic resonance imaging (MRI) and angiography (MRA) revealed dilated venous vascular structures around the lateral ventricles and hyperintense gliosis in the periventricular white matter. Ischemic gliosis around the lateral ventricles was also present on digital subtraction angiography (DSA). After neurology, ophthalmology, and neurosurgery consultations, the patient was diagnosed with Moyamoya disease, pulse steroid therapy was started for the vision loss, and super-selective DSA was planned for further investigation. Moyamoya disease is a rare condition, but it can be an important cause of stroke in both children and adults. It generally affects the anterior circulation, but the posterior cerebral circulation may be affected as well. In the differential diagnosis of acute vision loss, occipital stroke related to Moyamoya disease should be considered.
Direct and indirect revascularization surgery can effectively revascularize affected brain areas and has been shown to reduce the risk of stroke.

Keywords: headache, Moyamoya disease, stroke, visual loss

Procedia PDF Downloads 248
979 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because of their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. Artefact types include noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, all of which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked into multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied using residual-driven dropout, determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors (including artefacts) based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. Quantitative evaluation is performed in terms of PSNR, and qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a preliminary step for image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
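The decomposition step described above can be illustrated with a minimal numerical sketch. This is not the paper's implementation: it minimizes a smoothed Total Variation objective by plain gradient descent on a 1D signal, and the regularization weight, smoothing constant, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def tv_decompose(f, lam=0.5, step=0.05, iters=800, eps=1e-2):
    """Split a signal f into a piecewise-smooth intrinsic part u and a
    residual (nuisance) part f - u by gradient descent on
        0.5 * ||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)                          # forward differences
        w = du / np.sqrt(du * du + eps)          # derivative of smoothed |du|
        # Divergence-like term: gradient of the smoothed TV w.r.t. each u[i].
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        grad = (u - f) - lam * div               # gradient of the objective
        u -= step * grad
    return u, f - u

# Synthetic "image row": a step signal plus noise standing in for artefacts.
rng = np.random.default_rng(0)
f = np.repeat([0.0, 1.0, 0.0], 50) + 0.1 * rng.standard_normal(150)
u, v = tv_decompose(f)
print(np.abs(np.diff(u)).sum() < np.abs(np.diff(f)).sum())
```

In the paper a primal-dual solver would replace this gradient scheme; the intrinsic part `u` would feed the auto-encoders while the residual `v` carries the nuisance factors.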

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 170
978 Metabolic Variables and Associated Factors in Acute Pancreatitis Patients Correlates with Health-Related Quality of Life

Authors: Ravinder Singh, Pratima Syal

Abstract:

Background: The rising prevalence and incidence of acute pancreatitis (AP) and its associated metabolic variables, known collectively as metabolic syndrome (MetS), are common medical conditions with catastrophic consequences and substantial treatment costs. The correlation between MetS and AP, as well as their impact on health-related quality of life (HRQoL), is uncertain, and because there are so few published studies, further research is needed. We therefore designed this study to determine the relationship between MetS components and their impact on HRQoL in AP patients. Patients and Methods: A prospective, observational study recruiting patients with AP with and without MetS was carried out in a tertiary care hospital in North India. Patients were classified as having AP if they met two or more of the following criteria: abdominal pain, serum amylase and lipase levels two or more times normal, and findings on imaging (trans-abdominal ultrasound, computed tomography, or magnetic resonance imaging). The National Cholesterol Education Program-Adult Treatment Panel III (NCEP-ATP III) criteria were used to diagnose MetS. Various socio-demographic variables were also taken into consideration in the calculation of statistical significance (P≤.05) in AP patients. Finally, the correlation between AP and MetS, along with their impact on HRQoL, was assessed using Student's t-test, the Pearson correlation coefficient, and the Short Form-36 (SF-36). Results: Patients were divided into two groups: AP with MetS (n = 100) and AP without MetS (n = 100). Gender, age, educational status, tobacco use, body mass index (BMI), and waist-hip ratio (WHR) were the socio-demographic parameters found to be statistically significant (P≤.05) in AP patients with MetS. All metabolic variables except HDL levels were also statistically significant (P≤.05) and elevated in patients with AP with MetS compared to AP without MetS.
Using the SF-36 form, a greater decline was observed in the physical component summary (PCS) and mental component summary (MCS) in patients with AP with MetS compared to patients without MetS (P≤.05). Furthermore, a negative association with AP was found for all metabolic variables except HDL, producing deterioration in PCS and MCS. Conclusion: The study demonstrated that patients with AP with MetS had a worse overall HRQoL than patients with AP without MetS, owing to a number of socio-demographic and metabolic variables that directly impact patients' physical and mental health.
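The correlation analysis described above can be sketched with a short, self-contained example. The variable names and numbers below are invented stand-ins for the study's data; only the Pearson formula itself is standard.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: fasting glucose (mg/dL) vs. SF-36 physical
# component summary (PCS) score for eight patients.
glucose = [92, 105, 118, 126, 140, 155, 168, 180]
pcs     = [54, 51, 49, 44, 41, 38, 33, 30]
r = pearson_r(glucose, pcs)
print(round(r, 3))   # strongly negative: higher glucose, lower PCS
```

In practice `scipy.stats.pearsonr` computes the same coefficient and also returns a p-value.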

Keywords: metabolic disorders, QOL, cost effectiveness, pancreatitis

Procedia PDF Downloads 91
977 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration

Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger

Abstract:

Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision.
The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
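The two evaluation metrics named above have compact standard definitions that can be sketched directly. The toy masks and point sets below are illustrative, not taken from the study's dataset.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (N x d arrays):
    the largest distance from any point in one set to the other set."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred  = np.array([[1, 1, 0], [1, 0, 0]])   # predicted liver mask (toy)
truth = np.array([[1, 1, 0], [0, 0, 0]])   # ground-truth mask (toy)
print(dice(pred, truth))  # 2*2 / (3+2) = 0.8
```

Higher Dice (closer to 1) and lower Hausdorff distance both indicate better agreement between the automatic and reference segmentations.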

Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration

Procedia PDF Downloads 25
976 A Study of Stress and Coping Strategies of School Teachers

Authors: G.S. Patel

Abstract:

This paper discusses school teachers' work-related mental stress and coping strategies. A stress measurement scale was developed for school teachers, following all the scientific steps of test construction. Different factors, such as the teacher's workplace, residential area, family life, abilities and skills, economic factors, and other factors, were used to construct the scale. In this research tool, situational statements are presented, and teachers respond to each statement on a five-point rating scale according to what they experience in their daily lives. Special features of the test, such as validity and reliability, were also established, and norms were computed for its interpretation. A sample of 320 school teachers from Gujarat state was selected by cluster sampling. t-tests were computed to test the null hypotheses. The main findings of the present study are that urban-area teachers experience more stress than rural-area teachers, and teachers who live in a joint family feel less stress than teachers who live in a nuclear family. This research is useful for preparing a list of activities to reduce teachers' mental stress.
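The group comparison described above uses a two-sample t-test; a minimal sketch of the equal-variance (pooled) form follows. The scores below are invented for illustration and are not the study's data.

```python
import math
import statistics

def students_t(x, y):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)   # sample variances
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)     # pooled variance
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(
        sp2 * (1 / nx + 1 / ny))

# Hypothetical stress-scale totals for urban and rural teachers.
urban = [78, 82, 75, 88, 91, 84, 79, 86]
rural = [70, 72, 65, 74, 69, 73, 68, 71]
t = students_t(urban, rural)
print(round(t, 2))   # large positive t: urban scores clearly higher
```

In practice `scipy.stats.ttest_ind` computes the same statistic together with the p-value used to accept or reject the null hypothesis.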

Keywords: stress measurement scale, level of stress, validity, reliability, norms

Procedia PDF Downloads 166
975 Flexural Properties of Carbon/Polypropylene Composites: Influence of Matrix Forming Polypropylene in Fiber, Powder, and Film States

Authors: Vijay Goud, Ramasamy Alagirusamy, Apurba Das, Dinesh Kalyanasundaram

Abstract:

Thermoplastic composites offer new opportunities as an effective processing technology while also introducing new processing challenges. One notable challenge is achieving thorough wettability, which is significantly hindered by the high viscosity of the thermoplastics' long molecular chains. As a result of this high viscosity, it is very difficult to impregnate the resin into a tightly interlaced textile structure and fill the voids present in it. One potential solution is to pre-deposit resin on the fiber prior to consolidation. The current study compares DREF spinning, powder coating, and film stacking as methods of pre-depositing resin onto fibers. An investigation into the flexural properties of unidirectional composites (UDCs) produced by combining carbon fiber with a polypropylene (PP) matrix in fiber, powder, and film form is reported. DREF (Dr. Ernst Fehrer) yarns, or friction-spun hybrid yarns, were manufactured from PP fibers and carbon tows and consolidated to yield unidirectional composites referred to as UDC-D. PP powder was coated onto carbon tows by electrostatic spray coating, and the powder-coated towpregs were consolidated to form UDC-P. For comparison, a third UDC, referred to as UDC-F, was manufactured by consolidating PP films stacked between carbon tows. The experiments were designed to yield a matching fiber volume fraction of about 50% in all three UDCs. The mechanical properties of the three composites were compared to understand the efficiency of matrix wetting and impregnation. Approximately 19% and 68% higher flexural strength was obtained for UDC-P than for UDC-D and UDC-F, respectively. Similarly, 25% and 81% higher modulus was observed for UDC-P than for UDC-D and UDC-F, respectively.
Results from micro-computed tomography, scanning electron microscopy, and short beam tests indicate better impregnation of the PP matrix in UDC-P, obtained through the electrostatic spray coating process, and thereby its higher flexural strength and modulus.
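The flexural strength and modulus compared above come from bending tests; under standard three-point-bend beam theory (the form used in ASTM D790) they follow from peak load, span, and specimen cross-section. The specimen dimensions and load below are invented for illustration.

```python
def flexural_strength(F, L, b, h):
    """Three-point-bend flexural strength: sigma = 3*F*L / (2*b*h**2),
    with peak load F, support span L, width b, and thickness h."""
    return 3 * F * L / (2 * b * h ** 2)

def flexural_modulus(m, L, b, h):
    """Flexural modulus from the initial slope m of the load-deflection
    curve: E = L**3 * m / (4 * b * h**3)."""
    return L ** 3 * m / (4 * b * h ** 3)

# Hypothetical specimen: 64 mm span, 13 mm wide, 2 mm thick, 150 N peak load.
sigma = flexural_strength(F=150.0, L=64.0, b=13.0, h=2.0)  # N/mm^2 = MPa
print(round(sigma, 1))  # → 276.9
```

Consistent units matter: with forces in N and lengths in mm, both results come out in MPa.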

Keywords: DREF spinning, film stacking, flexural strength, powder coating, thermoplastic composite

Procedia PDF Downloads 209
974 Advancing Hydrogen Production Through Additive Manufacturing: Optimising Structures of High Performance Electrodes

Authors: Fama Jallow, Melody Neaves, Professor Mcgregor

Abstract:

The quest for sustainable energy sources has driven significant interest in hydrogen production as a clean and efficient fuel. Alkaline water electrolysis (AWE) has emerged as a prominent method for generating hydrogen, necessitating the development of advanced electrode designs with improved performance characteristics. Additive manufacturing (AM) by the laser powder bed fusion (LPBF) method presents an opportunity to tailor electrode microstructures and properties, enhancing their performance. This research proposes investigating the AM of electrodes with different lattice structures to optimize hydrogen production. The primary objective is to employ advanced modeling techniques to identify and select two optimal lattice structures for electrode fabrication. LPBF will be used to fabricate electrodes with precise control over lattice geometry, pore size, and distribution. The performance evaluation will encompass energy consumption and porosity analysis. AWE will assess energy efficiency, aiming to identify lattice structures with enhanced hydrogen production rates and reduced power requirements. Computed tomography (CT) scanning will be used to analyze porosity and determine material integrity and mass transport characteristics. The research aims to bridge the gap between AM and hydrogen production by investigating the potential of lattice structures in electrode design. By systematically exploring lattice structures and their impact on performance, this study aims to provide valuable insights into the design and fabrication of highly efficient and cost-effective electrodes for AWE. The outcomes hold promise for advancing hydrogen production through AM. The research will have a significant impact on the development of sustainable energy sources, and the findings will help to improve the efficiency of AWE, making it a more viable option for hydrogen production.
This could lead to a reduction in our reliance on fossil fuels, which would have a positive impact on the environment. The research is also likely to have a commercial impact. The findings could be used to develop new electrode designs that are more efficient and cost-effective. This could lead to the development of new hydrogen production technologies, which could have a significant impact on the energy market.
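The CT porosity analysis mentioned above reduces, at its simplest, to thresholding the reconstructed voxel volume and counting the fraction of void voxels. The array and threshold below are synthetic stand-ins, not real scan data.

```python
import numpy as np

def porosity(volume, threshold):
    """Fraction of voxels whose reconstructed attenuation falls below
    `threshold`; those voxels are treated as void (pore) space."""
    return float((volume < threshold).sum()) / volume.size

# Stand-in volume: uniform random values replacing real CT attenuation data,
# so roughly 10% of voxels fall below a threshold of 0.1.
rng = np.random.default_rng(1)
vol = rng.uniform(0.0, 1.0, size=(20, 20, 20))
p = porosity(vol, 0.1)
print(round(p, 3))
```

A real workflow would first calibrate the threshold against known material attenuation, and might additionally report pore size distribution and connectivity rather than a single fraction.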

Keywords: hydrogen production, electrode, lattice structure, Africa

Procedia PDF Downloads 52
973 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality

Authors: Peregrine James Dalziel, Philip Vu Tran

Abstract:

Objective / Introduction: Demand for radiology services from Emergency Departments (EDs) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability in process time, combined with the random nature of request arrivals, increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater "cognitive workload" of higher volumes may reduce productivity and increase errors. We sought to quantify the potential ED flow improvements obtainable from additional radiology providers serving 3 public hospitals in Melbourne, Australia, and to assess the potential productivity gains, quality improvement, and cost-effectiveness of the increased labor input. Methods & Materials: The Western Health Medical Imaging Department moved from single-resident coverage on weekend days (8:30 am-10:30 pm) to include a limited period of two-resident coverage (1 pm-6 pm) on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volumes at the time of each CT scan were calculated to assess the impact of varying departmental workload. To assess any improvement in report quality and errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the two periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy.
Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R²=0.29). Increasing daily and hourly volume was associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01), respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents increased decision-making time available to ED physicians and a potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single-resident period, but this difference was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. The increased labor utilisation is cost-effective given the potential improved productivity for ED cases requiring CT imaging.
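The multivariate linear model described above can be sketched with ordinary least squares. The data below are synthetic: the design matrix, coefficients, and noise level are invented stand-ins for the study's variables (hourly scan volume and a double-coverage indicator), so only the fitting procedure is representative.

```python
import numpy as np

# Synthetic data: TTA-RR (minutes) modeled on hourly scan volume and a
# double-coverage indicator; the "true" coefficients are invented.
rng = np.random.default_rng(2)
n = 200
hourly_volume = rng.integers(1, 10, size=n)
double_cover = rng.integers(0, 2, size=n)
tta = 60 + 4.8 * hourly_volume - 24.0 * double_cover + rng.normal(0, 5, n)

# Design matrix with an intercept column; solve least squares for the betas.
X = np.column_stack([np.ones(n), hourly_volume, double_cover])
beta, *_ = np.linalg.lstsq(X, tta, rcond=None)
print(np.round(beta, 1))  # estimates of [intercept, per-scan delay, coverage effect]
```

With enough observations the fitted coefficients recover the per-scan delay and the (negative) coverage effect, which is how a regression isolates the staffing change from workload variation.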

Keywords: workflow, quality, administration, CT, staffing

Procedia PDF Downloads 90
972 Ground Track Assessment Using Electrical Resistivity Tomography Application

Authors: Noryani Natasha Yahaya, Anas Ibrahim, Juraidah Ahmad, Azura Ahmad, Mohd Ikmal Fazlan Rosli, Zailan Ramli, Muhd Sidek Muhd Norhasri

Abstract:

The subgrade formation is an important element of the railway structure, underpinning overall track stability. Conventional track maintenance involves frequent replacement of substructure components, and regular track re-ballasting contributes in part to the embankment's long-term settlement problem. For long-term subgrade stability analysis, geophysical methods are commonly used to diagnose hidden sources and mechanisms of track deterioration that visual inspection cannot detect. Electrical resistivity tomography (ERT) is one applicable geophysical tool that is helpful in railway subgrade inspection and track monitoring due to the flexibility and reliability of its analysis. ERT was conducted at KM 23.0 of the Pinang Tunggal track to investigate the railway subgrade through characterization and mapping of the track formation profile, generated directly using 2D analysis in the Res2dinv software. The profiles allow examination of the presence and spatial extent of significant subgrade layers and screening for any poor contact between soil boundaries. Based on the findings, there is intermixing of an interlayer between the sub-ballast and the sand. Although the embankment track considered here is at no immediate risk of settlement or failure, regular monitoring of the location will allow early corrective maintenance if necessary. The developed track formation data clearly shows the similarity of the side view with the assessed track, and the 2D data visualization of the track embankment agreed well with the initial assumption based on a general side view of the main structure.
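The resistivity sections inverted by Res2dinv start from apparent-resistivity measurements. For a Wenner array with electrode spacing a, the standard relation is ρa = 2πa·ΔV/I; a one-line sketch with hypothetical readings follows (the spacing, voltage, and current values are invented).

```python
import math

def wenner_apparent_resistivity(a_m, delta_v, current):
    """Apparent resistivity (ohm-m) for a Wenner electrode array:
    rho_a = 2 * pi * a * (delta_V / I)."""
    return 2 * math.pi * a_m * delta_v / current

# Hypothetical reading: 2 m spacing, 0.25 V measured, 0.1 A injected.
print(round(wenner_apparent_resistivity(2.0, 0.25, 0.1), 1))  # → 31.4
```

An ERT profile repeats such measurements over many electrode positions and spacings; the inversion software then turns the apparent values into the 2D true-resistivity section discussed above.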

Keywords: ground track, assessment, resistivity, geophysical method, railway

Procedia PDF Downloads 132
971 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, and consequently quantification of chamber volumes/mass, ascertaining ejection fraction, and determining longitudinal strain through speckle tracking; determining the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identifying myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis; predicting all-cause mortality. CT: reducing radiation doses; calculating the calcium score; diagnosing coronary artery disease (CAD); predicting all-cause 5-year mortality; predicting major cardiovascular events in patients with suspected CAD. MRI: segmenting cardiac structures and infarct tissue; calculating cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs since it would preclude the need for gadolinium-enhanced CMR; predicting 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classifying normal and abnormal myocardium in CAD; detecting locations with abnormal myocardium; predicting cardiac death.
ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI is emerging as a helpful tool in cardiac imaging and for doctors who cannot otherwise manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, and nuclear imaging studies.
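One of the quantities the segmentation pipelines above report, the ejection fraction, has a one-line definition once the chamber volumes are known; the volumes below are hypothetical.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) chamber volumes: EF = 100 * (EDV - ESV) / EDV."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical volumes as an automated chamber segmentation might produce.
print(round(ejection_fraction(120.0, 50.0), 1))  # → 58.3
```

The value of automated segmentation is that EDV and ESV come from the delineated chamber masks at end-diastole and end-systole rather than manual tracing.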

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 53
970 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtering earthquake strong ground motions by applying the wavelet transform is one approach to reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, though each repetition introduces further approximation error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic responses of the sampled structures with the exact responses, which are obtained by dynamic analysis of the structures using the non-filtered strong ground motion.
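The filtration step described above can be illustrated with a single-level Haar discrete wavelet transform written out by hand. This is only a sketch: in practice a library such as PyWavelets with the wavelets studied in the paper would be used, and the signal below is synthetic rather than a recorded accelerogram.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation (low-frequency) and detail
    (high-frequency) coefficients of an even-length signal."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Synthetic "ground motion": low-frequency pulse plus high-frequency content.
t = np.linspace(0.0, 1.0, 256)
motion = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)
a, d = haar_dwt(motion)
filtered = haar_idwt(a, np.zeros_like(d))  # discard the detail coefficients
```

Zeroing the detail coefficients removes the highest frequency band, halving the effective information content; the filtered record is what would drive the cheaper dynamic analysis, and the discarded band is the source of the approximation error the paper quantifies.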

Keywords: wavelet transform, computational error, computational duration, strong ground motion data

Procedia PDF Downloads 361
969 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables an attacker to detect vulnerabilities in the user's applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan for so-called drive-by downloads on the Internet. Drive-by downloads result from URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise but slower analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources, mainly because the crawler encounters many legitimate pages that need to be filtered out. In this paper, to characterize the nature of this rising threat, we present the implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to the ones in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach identifies malicious web pages more efficiently than random crawling-based approaches.
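The query-generation step described above can be sketched offline: derive candidate search-engine queries from the most frequent terms of known-malicious seed pages. The seed text and stop-word list below are invented for illustration, and the actual crawler, search-engine submission, and antivirus stage are omitted.

```python
import re
from collections import Counter

STOP = {"the", "a", "to", "and", "of", "your", "for", "is", "now"}

def queries_from_seeds(seed_texts, terms_per_query=3, n_queries=2):
    """Build search-engine query strings from the most frequent terms
    found across a set of seed (known-malicious) page texts."""
    words = []
    for text in seed_texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOP]
    ranked = [w for w, _ in Counter(words).most_common()]
    return [" ".join(ranked[i * terms_per_query:(i + 1) * terms_per_query])
            for i in range(n_queries)]

# Invented text standing in for the content of drive-by-download seed pages.
seeds = [
    "update your flash player now free codec download required",
    "critical flash update download free player codec install",
]
print(queries_from_seeds(seeds))
```

Each returned query would be submitted to a search engine, and the resulting URLs fed to the prefilter and antivirus stages, concentrating effort on pages that resemble the seeds.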

Keywords: web crawler, malwares, seeds, drive-by-downloads, security

Procedia PDF Downloads 216
968 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces accurately, reliably, and holistically. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods, such as Iterative Closest Point (ICP) and its variants, have been established for fine registration. For coarse registration, no conventional method has been adopted yet, despite the significant number of techniques developed in the literature to provide automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transform (HT) and an improved RANSAC transform. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance that considers curvature similarity is combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function is improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights, which are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
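The combined distance criterion described above can be sketched as follows. The weighting alpha and the toy point sets are illustrative assumptions, and the full ICP loop (transform estimation and iteration) is omitted; only the correspondence search under the combined metric is shown.

```python
import numpy as np

def curvature_distance(p, q, kp, kq, alpha=0.8):
    """Combined correspondence metric: alpha * Euclidean distance plus
    (1 - alpha) * curvature dissimilarity |kp - kq|."""
    return alpha * np.linalg.norm(p - q) + (1 - alpha) * abs(kp - kq)

def match(points_a, curv_a, points_b, curv_b, alpha=0.8):
    """For each point in A, the index of its best match in B under the
    combined metric (brute-force search for clarity)."""
    idx = []
    for p, kp in zip(points_a, curv_a):
        costs = [curvature_distance(p, q, kp, kq, alpha)
                 for q, kq in zip(points_b, curv_b)]
        idx.append(int(np.argmin(costs)))
    return idx

# Two equidistant candidates in B; curvature disambiguates the match.
A = np.array([[0.0, 0.0]]);             kA = [1.0]
B = np.array([[0.1, 0.0], [0.0, 0.1]]); kB = [0.0, 1.0]
print(match(A, kA, B, kB, alpha=0.5))  # → [1]
```

Under pure Euclidean distance the two candidates tie; the curvature term breaks the tie toward the geometrically similar point, which is the intent of the paper's criterion.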

Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography

Procedia PDF Downloads 406
967 Assessment of the Infiltration of the Wastewater Ponds and Its Impact on the Water Quality of the Pleistocene Aquifer at El Sadat City Using 2-D Electrical Resistivity Tomography and Water Chemistry

Authors: Abeer A. Kenawy, Usama Massoud, El-Said A. Ragab, Heba M. El-Kosery

Abstract:

2-D Electrical Resistivity Tomography (ERT) and a hydrochemical study have been conducted at El Sadat industrial city. The study aims to investigate the area around the wastewater ponds, to determine the possibility of water percolation from the wastewater ponds to the Pleistocene aquifer, and to inspect the effect of this seepage on the groundwater chemistry. The Pleistocene aquifer is the main groundwater reservoir in this area, and El Sadat city and its vicinity depend entirely on it for the water supplies needed for drinking, agricultural, and industrial activities. To this end, seven ERT profiles were measured around the wastewater ponds, and 10 water samples were collected from the ponds and the nearby groundwater wells. The water samples were chemically analyzed for major cations, anions, nutrients, and heavy elements, and their physical parameters (pH, alkalinity, EC, TDS) were also measured. Inspection of the ERT sections shows lower resistivity values towards the water ponds and higher values on the opposite sides. In addition, the water table was detected at shallower depths on the same sides of lower resistivity. This could indicate wastewater infiltration into the groundwater aquifer near the oxidation ponds. Correlating the physical parameters and ionic concentrations of the wastewater samples with those of the groundwater samples indicates that the ionic levels vary randomly and no specific trend could be obtained. In addition, the wastewater samples show some ionic levels lower than those detected in other groundwater samples, and the nitrate level is higher in samples taken from the cultivated land than in the wastewater samples, owing to the overuse of nitrogen fertilizers. We can therefore say that the water infiltrating from the wastewater ponds is not the main control on the groundwater chemistry in this area; rather, the variable ionic concentrations could be attributed to local, natural, and anthropogenic processes.

Keywords: El Sadat city, ERT, hydrochemistry, percolation, wastewater ponds

Procedia PDF Downloads 324
966 Positron Emission Tomography Parameters as Predictors of Pathologic Response and Nodal Clearance in Patients with Stage IIIA NSCLC Receiving Trimodality Therapy

Authors: Andrea L. Arnett, Ann T. Packard, Yolanda I. Garces, Kenneth W. Merrell

Abstract:

Objective: Pathologic response following neoadjuvant chemoradiation (CRT) has been associated with improved overall survival (OS). Conflicting results have been reported regarding the pathologic predictive value of positron emission tomography (PET) response in patients with stage III lung cancer. The aim of this study was to evaluate the correlation between post-treatment PET response and pathologic response utilizing novel FDG-PET parameters. Methods: This retrospective study included patients with non-metastatic, stage IIIA (N2) NSCLC treated with CRT followed by resection. All patients underwent PET prior to and after neoadjuvant CRT. Univariate analysis was utilized to assess correlations between PET response, nodal clearance, pCR, and near-complete pathologic response (defined as microscopic residual disease or less). Maximal standard uptake value (SUVmax), standard uptake ratio (SUR) [normalized independently to the liver (SUR-L) and blood pool (SUR-BP)], metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured pre- and post-chemoradiation. Results: A total of 44 patients were included for review. Median age was 61.9 years, and median follow-up was 2.6 years. Histologic subtypes included adenocarcinoma (72.2%) and squamous cell carcinoma (22.7%), and the majority of patients had T2 disease (59.1%). The rates of pCR and near-complete pathologic response within the primary lesion were 28.9% and 44.4%, respectively. The average reduction in SUVmax was 9.2 units (range −1.9 to 32.8), and the majority of patients demonstrated some degree of favorable treatment response. SUR-BP and SUR-L showed mean reductions of 4.7 units (range −0.1 to 17.3) and 3.5 units (range −1.7 to 12.6), respectively. Variation in PET response was not significantly associated with histologic subtype, concurrent chemotherapy type, stage, or radiation dose. No significant correlation was found between pathologic response and absolute change in MTV or TLG.
Reductions in SUVmax and SUR were associated with an increased rate of pathologic response (p ≤ 0.02). This correlation was not affected by normalization of SUR to liver versus mediastinal blood pool. A threshold of >75% decrease in SUR-L correlated with near-complete response, with a sensitivity of 57.9% and specificity of 85.7%, as well as positive and negative predictive values of 78.6% and 69.2%, respectively (diagnostic odds ratio [DOR]: 5.6, p=0.02). A threshold of >50% decrease in SUR was also significantly associated with pathologic response (DOR 12.9, p=0.2), but specificity was substantially lower at this threshold value. No significant association was found between nodal PET parameters and pathologic nodal clearance. Conclusions: Our results suggest that treatment response to neoadjuvant therapy as assessed on PET imaging can be a predictor of pathologic response when evaluated via SUV and SUR. SUR parameters were associated with higher diagnostic odds ratios, suggesting improved predictive utility compared to SUVmax. MTV and TLG did not prove to be significant predictors of pathologic response but may warrant further investigation in a larger cohort of patients.
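The threshold analysis above rests on standard 2x2 diagnostic statistics. As a sketch only (assuming the DOR is the usual cross-product ratio of the contingency table, and using made-up counts rather than the study's data), the calculation looks like this:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and diagnostic odds ratio
    from a 2x2 table (tp/fp/fn/tn = true/false positive/negative counts)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio (cross-product ratio)
    return sensitivity, specificity, ppv, npv, dor

def percent_decrease(pre, post):
    """Percent decrease of a PET parameter (e.g. SUR-L) after CRT."""
    return 100.0 * (pre - post) / pre
```

For instance, a lesion whose SUR-L falls from 8.0 to 1.5 shows an 81.25% decrease and would exceed the >75% threshold; the counts passed to `diagnostic_stats` would come from cross-tabulating that threshold call against pathologic response.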

Keywords: lung cancer, positron emission tomography (PET), standard uptake ratio (SUR), standard uptake value (SUV)

Procedia PDF Downloads 210
965 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process

Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava

Abstract:

Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industries developing energy-efficient systems. This study covers two advanced aspects of current scientific endeavour: graphene as a reinforcement in metallic materials, and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low-energy ball milling at 0.1 wt.% and 0.2 wt.% graphene. The mixed powder was directly subjected to the powder bed fusion process, an AM technique, to produce composite samples along with a bare (unreinforced) counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined. The volumetric distribution of pores was observed by X-ray computed tomography (CT). Relative density measurements by X-ray CT showed that porosity increases after graphene addition, and the pore morphology also transformed from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the bare sample, unlike in the composite samples; smaller columnar grains formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimum tensile properties were achieved at 0.1 wt.% graphene: yield strength and ultimate tensile strength increased by 22% and 10%, respectively, compared with the bare counterpart, while elongation decreased by 20%. The hardness indentations were taken mostly on solid regions in order to avoid the collapse of pores.
The hardness of the composite increased progressively with graphene content, reaching an increment of around 30% after the addition of 0.2 wt.% graphene. It can therefore be concluded that powder bed fusion is a suitable technique for developing graphene-reinforced AlSi10Mg composites, though further process modification is required to avoid the porosity induced by graphene addition, which can be addressed in future work.

Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties

Procedia PDF Downloads 108