Search results for: data driven diagnosis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27380


26840 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of an energy amplifier or accelerator driven system (ADS) involves the use of a particle accelerator coupled with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and, consequently, safe exploitation. The harder neutron spectrum achieved ensures better incineration of the actinides. The almost universal opinion is that the optimal beam for ADS is protons with an energy of around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and ion beams from deuteron to neon with energies between 0.25 and 2 AGeV. The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rods are 150 cm long. A beryllium converter 110 cm long is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam; these values are planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from the accelerator efficiency data for the reference particle (proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV. At this energy, G reaches a plateau around a value of 8, with a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams have a higher G than 1.5 GeV protons.
A beam of 0.25 AGeV (GeV per nucleon) ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with an energy of 0.75 AGeV, accelerated in an accelerator of the same length as that for 1.5 GeV protons, produce approximately 900 MW net power with a gain of 23-25. The study of the evolution of the isotope composition during irradiation shows that increasing power production shortens the period between refuelings. For a net power of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
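The gain definition used in the abstract (energy released in the target over energy spent accelerating the beam, with accelerator efficiency folded in) can be written out as a short sketch. The 133.5 MW released power below is a hypothetical input chosen to land near the reported plateau of G ≈ 8; it is not a Geant4 result.

```python
def energy_gain(released_power_mw, beam_energy_gev, intensity_per_s, efficiency):
    """G = power released in the target / electrical power spent on the beam.

    Beam power is kinetic energy per particle times intensity; dividing by
    the total accelerator efficiency gives the electrical power drawn.
    """
    ev_to_joule = 1.602e-19  # J per eV (rounded elementary charge)
    beam_power_w = beam_energy_gev * 1e9 * ev_to_joule * intensity_per_s
    spent_power_w = beam_power_w / efficiency
    return released_power_mw * 1e6 / spent_power_w

# ESS-like proton beam parameters from the abstract: 1.5 GeV,
# 1.25e16 p/s, total efficiency 0.18.
g = energy_gain(133.5, 1.5, 1.25e16, 0.18)
```

With these parameters the beam draws about 16.7 MW from the grid, so a released power of 133.5 MW corresponds to a gain of 8, consistent with the plateau quoted in the abstract.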

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 140
26839 Assessment of Hydrogen Demand for Different Technological Pathways to Decarbonise the Aviation Sector in Germany

Authors: Manish Khanra, Shashank Prabhu

Abstract:

The decarbonisation of hard-to-abate sectors is currently high on the agenda in the EU and its member states, as these sectors have substantial shares in overall GHG emissions while facing serious challenges to decarbonise. In particular, the aviation sector accounts for 2.8% of global anthropogenic CO₂ emissions. These emissions are anticipated to grow dramatically unless immediate mitigating efforts are implemented. Hydrogen and its derivatives based on renewable electricity can play a key role in the transition towards CO₂-neutral flights. Energy carriers in the form of drop-in fuels, direct combustion, and hydrogen-to-electric propulsion hold substantial shares in most scenarios towards 2050. To create appropriate policies for ramping up the production and utilisation of hydrogen commodities in the German aviation sector, a detailed analysis of the spatial distribution of supply and demand sites is essential. The objective of this research work is to assess the demand for hydrogen-based alternative fuels in the German aviation sector to achieve the perceived goal of the ‘Net Zero’ scenario by 2050. Here, the technological pathways for the production and utilisation of these fuels in various aircraft options are analysed with a view to reaching mitigation targets. Our method is based on a data-driven bottom-up assessment, considering production and demand sites and their spatial distribution. The resulting energy demand and its spatial distribution, with consideration of technology diffusion, lead to a possible transition pathway for the aviation sector to meet short-term and long-term mitigation targets. Additionally, costs and policy aspects are discussed, which would support decision-makers from airline industries, policymakers, and the producers of energy commodities.

Keywords: the aviation sector, hard-to-abate sectors, hydrogen demand, alternative fuels, technological pathways, data-driven approach

Procedia PDF Downloads 130
26838 Detect Cable Force of Cable Stayed Bridge from Accelerometer Data of SHM as Real Time

Authors: Nguyen Lan, Le Tan Kien, Nguyen Pham Gia Bao

Abstract:

The cable-stayed bridge belongs to the class of combined systems, in which the cables are a major structural element. Cable-stayed bridges with large spans are often equipped with structural health monitoring systems to collect data for bridge health diagnosis. Cable tension monitoring is one part of such structural monitoring. Cable tension is commonly measured either directly with a force sensor or indirectly with an accelerometer mounted on the cable, inferring the tension from the cable's vibration frequency. Translating cable vibration acceleration data into real-time tension requires some necessary calculations and programming. This paper introduces the algorithm and the LabVIEW program that convert cable vibration acceleration data to real-time tension. The research results are applied to the monitoring systems of the Tran Thi Ly cable-stayed bridge and the Song Hieu cable-stayed bridge in Vietnam.
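The frequency-to-tension step described here can be sketched in Python under the taut-string assumption f_n = (n/2L)·sqrt(T/m), a common simplification for long stay cables. The naive DFT peak search below stands in for the FFT used in the monitoring system, and the cable parameters in the usage note are hypothetical.

```python
import math

def dominant_frequency(samples, fs):
    """Return the frequency (Hz) of the strongest spectral bin.

    A naive O(n^2) DFT peak search, standing in for a real FFT; fine for
    short illustrative signals.
    """
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def cable_tension(f_n, n_mode, length_m, mass_per_m):
    """Taut-string estimate: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m (L f_n / n)^2."""
    return 4.0 * mass_per_m * (length_m * f_n / n_mode) ** 2
```

For a hypothetical 100 m cable with linear mass 50 kg/m vibrating in its fundamental mode at 2 Hz, `cable_tension(2.0, 1, 100.0, 50.0)` gives 8 MN. Real cables deviate from the taut string because of sag and bending stiffness, which practical implementations account for.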

Keywords: cable-stayed bridge, cable force, structural health monitoring (SHM), fast Fourier transform (FFT), real time, vibrations

Procedia PDF Downloads 71
26837 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant cases. All MR images were acquired at 1.5T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features per 5 subsequent times), a machine learning (ML) approach was used. An evolutionary algorithm (TWIST system based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: The Naive Bayes algorithm, working on 79 features selected by the TWIST system, proved to be the best performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that in the subset of 47 non-specific nodules, the algorithm correctly predicted the outcome of 45 nodules that an expert radiologist could not classify.
Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
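The best-performing classifier reported here, Naive Bayes, can be sketched in a few lines of pure Python. The one-feature toy data and the 'benign'/'malignant' labels below are hypothetical stand-ins for the selected radiomic features, not the study's dataset.

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Gaussian Naive Bayes: store a log-prior plus per-feature mean and
    variance for each class."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(len(rows) / len(y)), means, variances)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest log-posterior under independent
    Gaussian likelihoods (the 'naive' assumption)."""
    def log_posterior(c):
        prior, means, variances = model[c]
        return prior + sum(
            -0.5 * math.log(2.0 * math.pi * v) - (xi - m) ** 2 / (2.0 * v)
            for xi, m, v in zip(x, means, variances))
    return max(model, key=log_posterior)
```

In the study's setting the feature vectors would be the 79 TWIST-selected radiomic measures per lesion rather than a single toy value.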

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 267
26836 Artificial Intelligence in Disease Diagnosis

Authors: Shalini Tripathi, Pardeep Kumar

Abstract:

The process of translating observed symptoms into disease names is known as disease diagnosis. The ability to solve clinical problems in a complex manner is critical to a doctor's effectiveness in providing health care, and the accuracy of his or her expertise is crucial to the survival and well-being of his or her patients. Artificial Intelligence (AI) has a huge economic influence depending on how well it is applied. In the medical sector, human brain-simulated intellect can help not only with classification accuracy but also with reducing the diagnostic time, cost, and pain associated with pathology tests. In light of AI's present and prospective applications in the biomedical field, we identify them in this paper based on potential benefits and risks, social and ethical consequences, and issues that might be contentious but have not been thoroughly discussed in publications and literature. Current apps, personal tracking tools, genetic tests and editing programmes, customizable models, web environments, virtual reality (VR) technologies, and surgical robotics will all be investigated in this study. While AI holds a lot of potential in medical diagnostics, it is still a very new method, and many clinicians are uncertain about its reliability and specificity and about how it can be integrated into clinical practice without jeopardising clinical expertise. To validate their effectiveness, more systematic refinement of these implementations, as well as training of physicians and healthcare facilities on how to effectively incorporate these strategies into clinical practice, will be needed.

Keywords: artificial intelligence, medical diagnosis, virtual reality, healthcare ethical implications

Procedia PDF Downloads 132
26835 Application of Support Vector Machines in Fault Detection and Diagnosis of Power Transmission Lines

Authors: I. A. Farhat, M. Bin Hasan

Abstract:

A developed approach for the protection of power transmission lines using the Support Vector Machine (SVM) technique is presented. In this paper, the SVM technique is utilized for the classification and isolation of faults in power transmission lines. Accurate fault classification and location results are obtained for all possible types of short-circuit faults. As in distance protection, the approach utilizes the post-fault voltage and current samples as inputs. The main advantage of the method introduced here is that it can easily be extended to any power transmission line.
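A linear SVM of the kind applied here can be sketched with the Pegasos stochastic sub-gradient method in pure Python. The two-feature points below are hypothetical stand-ins for post-fault voltage and current samples; a real protection scheme would use a mature SVM library and typically kernels and multi-class handling.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient training of a linear SVM
    (hinge loss, no bias term). X: feature vectors, y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # random pass over data
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            if margin < 1.0:
                # hinge-loss sub-gradient step plus weight shrinkage
                w = [(1.0 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
            else:
                w = [(1.0 - eta * lam) * wj for wj in w]
    return w

def predict(w, x):
    """Classify by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0.0 else -1
```

The binary labels could encode, for instance, faulted versus healthy line conditions; fault-type classification would extend this to one-versus-rest classifiers.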

Keywords: fault detection, classification, diagnosis, power transmission line protection, support vector machines (SVM)

Procedia PDF Downloads 559
26834 Methods Used to Perform Requirements Elicitation for FinTech Application Development

Authors: Zhao Pengcheng, Yin Siyuan

Abstract:

Fintech is a hot topic of the 21st century, a discipline that combines financial theory with computer modelling. It can provide both digital analysis methods for investment banks and investment decisions for users. Given the variety of services available, it is necessary to provide a superior method of requirements elicitation to ensure that users' needs are addressed in the software development process. The accuracy of traditional software requirements elicitation methods is not sufficient, so this study attempts to use a multi-perspective requirements elicitation framework. Methods such as combined interviews and questionnaires, card sorting, and model-driven elicitation are proposed. The results, analysed with principal component analysis (PCA), show that the new methods can better help with requirements elicitation. The approach still has some limitations and efficiency issues. Nevertheless, the research in this paper provides a good theoretical extension that can offer researchers new research methods and viewpoints.

Keywords: requirement elicitation, FinTech, mobile application, survey, interview, model-driven

Procedia PDF Downloads 103
26833 Modified Model-Based Systems Engineering Driven Approach for Defining Complex Energy Systems

Authors: Akshay S. Dalvi, Hazim El-Mounayri

Abstract:

The internal and external interactions between the complex structural and behavioral characteristics of a complex energy system result in unpredictable emergent behaviors. These emergent behaviors are not well understood, especially when modeled using the traditional top-down systems engineering approach. The intrinsic nature of current complex energy systems calls for an elegant solution that provides an integrated framework in Model-Based Systems Engineering (MBSE). This paper presents an MBSE-driven approach to define and handle the complexity that arises due to emergent behaviors. The approach provides guidelines for developing a system architecture that helps predict the complexity index of the system at different levels of abstraction. A framework that integrates indefinite and definite modeling aspects is developed to determine the complexity that arises during the development phase of the system. This framework provides a workflow for modeling complex systems using the Systems Modeling Language (SysML) that captures the system's requirements, behavior, structure, and analytical aspects at both the problem-definition and solution levels. A system architecture for a district cooling plant is presented, which demonstrates the ability to predict the complexity index. The result suggests that complex energy systems like a district cooling plant can be defined in an elegant manner using the modified MBSE-driven approach, which helps in estimating development time and cost.

Keywords: district cooling plant, energy systems, framework, MBSE

Procedia PDF Downloads 130
26832 Fault Diagnosis in Induction Motor

Authors: Kirti Gosavi, Anita Bhole

Abstract:

The paper demonstrates the simulation and steady-state performance of a three-phase squirrel cage induction motor and the detection of broken rotor bar faults using MATLAB. This simulation model is successfully used in the detection of broken rotor bar faults for induction machines. A dynamic model using a PWM inverter and a mathematical model of the motor are developed. The dynamic simulation of a small-power induction motor is one of the key steps in the validation of the design process of the motor drive system, and it is needed for eliminating inadvertent design errors and the resulting errors in prototype construction and testing. The simulation model will be helpful in detecting faults in a three-phase induction motor using motor current signature analysis (MCSA).
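Motor current signature analysis locates broken rotor bars by looking for sideband components around the supply frequency at the standard characteristic frequencies f_b = (1 ± 2ks)·f_s, where s is the slip and k the sideband order. A minimal helper (the supply frequency and slip in the test values are illustrative):

```python
def broken_bar_sidebands(supply_hz, slip, orders=(1,)):
    """Characteristic broken-rotor-bar frequencies f_b = (1 +/- 2 k s) f_s,
    i.e. the sidebands around the supply component that MCSA searches for
    in the stator current spectrum."""
    bands = []
    for k in orders:
        bands.append((1.0 - 2.0 * k * slip) * supply_hz)
        bands.append((1.0 + 2.0 * k * slip) * supply_hz)
    return sorted(bands)
```

For a 50 Hz supply and 4% slip, the first-order sidebands fall at 46 Hz and 54 Hz; a broken bar raises the amplitude of the current spectrum at these frequencies relative to the healthy machine.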

Keywords: squirrel cage induction motor, pulse width modulation (PWM), fault diagnosis, induction motor

Procedia PDF Downloads 633
26831 Model-Based Software Regression Test Suite Reduction

Authors: Shiwei Deng, Yang Bao

Abstract:

In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and then the probability-driven greedy algorithm is adopted to select the minimum set of test cases from the reduced regression test suite that covers all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
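The selection step here is probability-driven; the sketch below shows only the underlying greedy covering idea (repeatedly pick the test covering the most not-yet-covered interaction patterns), with the probability weighting omitted and hypothetical test and pattern identifiers.

```python
def reduce_suite(requirements, coverage):
    """Greedy test suite reduction (plain set-cover heuristic).

    requirements: set of interaction patterns that must be covered.
    coverage: dict mapping test-case id -> set of patterns it covers.
    Returns a small subset of test ids covering all coverable patterns.
    """
    uncovered = set(requirements)
    selected = []
    while uncovered:
        # pick the test that covers the most still-uncovered patterns
        best = max(coverage, key=lambda t: len(uncovered & coverage[t]))
        gained = uncovered & coverage[best]
        if not gained:
            break  # remaining patterns cannot be covered by any test
        selected.append(best)
        uncovered -= gained
    return selected
```

A probability-driven variant would rank candidates by expected fault-detection value rather than raw coverage count, which is where the paper's algorithm differs from this skeleton.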

Keywords: dependence analysis, EFSM model, greedy algorithm, regression test

Procedia PDF Downloads 427
26830 Geospatial Information for Smart City Development

Authors: Simangele Dlamini

Abstract:

Smart city development is seen as a way of facing the challenges brought about by the growing urban population the world over. Research indicates that cities have a role to play in combating urban challenges like crime, waste disposal, greenhouse gas emissions, and resource efficiency. Such solutions should not make city management less sustainable; they should be solutions-driven, cost- and resource-efficient, and smart. This study explores how the City of Johannesburg, South Africa, can use Geographic Information Systems (GIS), Big Data, and the Internet of Things (IoT) to identify opportune areas in which to initiate smart city initiatives such as smart safety, smart utilities, smart mobility, and smart infrastructure in an integrated manner. The study will combine Big Data, using real-time data sources, to identify hotspot areas that will benefit from ICT interventions. The GIS intervention will assist the city in avoiding a silo approach in its smart city development initiatives, an approach that has led to the failure of smart city development in other countries.

Keywords: smart cities, internet of things, geographic information systems, Johannesburg

Procedia PDF Downloads 148
26829 Root Mean Square-Based Method for Fault Diagnosis and Fault Detection and Isolation of Current Fault Sensor in an Induction Machine

Authors: Ahmad Akrad, Rabia Sehab, Fadi Alyoussef

Abstract:

Nowadays, induction machines are widely used in industry thanks to their advantages compared with other technologies: demand is high because of their reliability, robustness, and low cost. The objective of this paper is to deal with the diagnosis, detection, and isolation of faults in a three-phase induction machine. Among the possible faults, the inter-turn short-circuit fault (ITSC), current sensor faults, and the single-phase open-circuit fault are considered. A fault detection method is suggested using residual errors generated from the root mean square (RMS) of the phase currents. The application of this method is based on an asymmetric nonlinear model of the induction machine that represents the winding fault in a three-axis frame state space. In addition, current sensor redundancy and sensor fault detection and isolation (FDI) are adopted to ensure the safe operation of the induction machine drive. Finally, a validation is carried out by simulation in healthy and faulty operating modes to show the benefit of the proposed method in detecting and locating the three types of faults with high reliability.
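A minimal sketch of residual generation from phase-current RMS values follows, with a hypothetical symmetric threshold. The actual method is built on a full asymmetric machine model; this only illustrates how an RMS residual flags the deviating phase.

```python
import math

def rms(samples):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def detect_fault(phase_currents, threshold_ratio=0.1):
    """Flag phases whose current RMS deviates from the three-phase mean
    by more than threshold_ratio (a simple residual-based check).

    phase_currents: dict mapping phase name -> list of current samples.
    """
    values = {ph: rms(sig) for ph, sig in phase_currents.items()}
    mean = sum(values.values()) / len(values)
    return [ph for ph, v in values.items() if abs(v - mean) / mean > threshold_ratio]
```

In a balanced healthy machine the three RMS values coincide and the residuals vanish; a winding or sensor fault breaks the symmetry, and the threshold (here an assumed 10%) separates the faulty phase.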

Keywords: induction machine, asymmetric nonlinear model, fault diagnosis, inter-turn short-circuit fault, root mean square, current sensor fault, fault detection and isolation

Procedia PDF Downloads 199
26828 A New Block Cipher for Resource-Constrained Internet of Things Devices

Authors: Muhammad Rana, Quazi Mamun, Rafiqul Islam

Abstract:

In the Internet of Things (IoT), many devices are connected and accumulate a vast amount of data. These Internet-driven raw data need to be transferred securely to end-users via dependable networks. Consequently, the challenges of IoT security in various IoT domains are paramount. Cryptography is applied to secure these networks for authentication, confidentiality, data integrity, and access control. However, due to the resource-constrained properties of IoT devices, conventional ciphers may not be suitable in all IoT networks. This paper designs a robust and effective lightweight cipher to secure the IoT environment and meet the resource-constrained nature of IoT devices. We propose a symmetric, block-cipher-based lightweight cryptographic algorithm. The proposed algorithm increases the complexity of the block cipher while maintaining the lowest possible computational requirements. The proposed algorithm efficiently constructs the key register updating technique, reduces the number of encryption rounds, and adds a new layer between the encryption and decryption processes.

Keywords: internet of things, cryptography, block cipher, S-box, key management, security, network

Procedia PDF Downloads 113
26827 Improvement of the Reliability and the Availability of a Production System

Authors: Lakhoua Najeh

Abstract:

Aims of the work: The aim of this paper is to improve the reliability and availability of a Packer production line of cigarettes based on two methods: the SADT method (Structured Analysis and Design Technique) and the FMECA approach (Failure Mode, Effects and Criticality Analysis). The first method enables us to describe the functionality of the Packer production line of cigarettes, and the second enables us to establish an FMECA analysis. Methods: The methodology adopted in order to contribute to the improvement of the reliability and availability of the line is based on the use of the SADT and FMECA methods. It consists of a diagnosis of the existing state of all the equipment of a factory production line in order to determine the most critical machine. In fact, we use, on the one hand, a functional analysis of the production line based on the SADT method and, on the other hand, a diagnosis and classification of the mechanical and electrical failures of the production line by criticality analysis based on the FMECA approach. Results: Based on the methodology adopted in this paper, the results are the creation and launch of a preventive maintenance plan. It contains the different elements of the Packer production line of cigarettes, the list of preventive intervention activities, and their periods of realization. Conclusion: The diagnosis of the existing state helped us to find that the cigarette packing machine used in the Packer production line is the most critical machine in the factory. This enabled us, on the one hand, to describe the functionality of the production line by the SADT method and, on the other hand, to study the machine with FMECA in order to improve its availability and performance.

Keywords: production system, diagnosis, SADT method, FMECA method

Procedia PDF Downloads 143
26826 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers the analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences between the two groups of children are obtained for most of the proposed features at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools.
Correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage involving speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
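Of the 51 measures, the four normalized spectral moments are straightforward to compute once a magnitude spectrum of the friction noise is available; treating the normalized spectrum as a probability distribution over frequency gives the centroid, spread, skewness, and kurtosis. A sketch (the three-bin spectrum in the test is a toy input, not speech data):

```python
def spectral_moments(freqs, mags):
    """First four normalized spectral moments of a magnitude spectrum:
    centroid, spread (standard deviation), skewness, and kurtosis.

    The magnitude spectrum is normalized to sum to one and treated as a
    probability distribution over the frequency bins.
    """
    total = sum(mags)
    probs = [m / total for m in mags]
    centroid = sum(f * p for f, p in zip(freqs, probs))
    variance = sum((f - centroid) ** 2 * p for f, p in zip(freqs, probs))
    std = variance ** 0.5
    skewness = sum((f - centroid) ** 3 * p for f, p in zip(freqs, probs)) / std ** 3
    kurtosis = sum((f - centroid) ** 4 * p for f, p in zip(freqs, probs)) / std ** 4
    return centroid, std, skewness, kurtosis
```

For sibilants, a lowered centroid or altered skewness of the friction noise relative to normative productions is the kind of deviation such features are intended to capture.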

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 301
26825 Using Multiomic Plasma Profiling From Liquid Biopsies to Identify Potential Signatures for Disease Diagnostics in Late-Stage Non-small Cell Lung Cancer (NSCLC) in Trinidad and Tobago

Authors: Nicole Ramlachan, Samuel Mark West

Abstract:

Lung cancer is the leading cause of cancer-associated deaths in North America; the vast majority of cases are non-small cell lung cancer (NSCLC), with a five-year survival rate of only 24%. Non-invasive discovery of biomarkers associated with early diagnosis of NSCLC can enable precision oncology efforts using liquid biopsy-based multiomics profiling of plasma. Although tissue biopsies are currently the gold standard for tumor profiling, this method presents many limitations, since they are invasive, risky, and sometimes hard to obtain, as well as giving only a limited tumor profile. Blood-based tests provide a less invasive, more robust approach to interrogate both tumor- and non-tumor-derived signals. We intend to examine 30 stage III-IV NSCLC patients pre-surgery and collect plasma samples. Cell-free DNA (cfDNA) will be extracted from plasma, and next-generation sequencing (NGS) performed. Through the analysis of tumor-specific alterations, including single nucleotide variants (SNVs), insertions, deletions, copy number variations (CNVs), and methylation alterations, we intend to identify tumor-derived DNA (ctDNA) among the total pool of cfDNA. This would generate data to be used as an accurate form of cancer genotyping for diagnostic purposes. Using liquid biopsies offers opportunities to improve the surveillance of cancer patients during treatment and would supplement current diagnosis and tumor profiling strategies previously not readily available in Trinidad and Tobago. It would be useful and advantageous to use this approach in diagnosis and tumor profiling as well as to monitor cancer patients, providing early information regarding disease evolution and treatment efficacy and allowing treatment strategies to be reoriented in time, thereby improving clinical oncology outcomes.

Keywords: genomics, multiomics, clinical genetics, genotyping, oncology, diagnostics

Procedia PDF Downloads 161
26824 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.

Keywords: artificial intelligence, unemployment, macroeconomic analysis, european labor market

Procedia PDF Downloads 77
26823 A Case of Generalized Anxiety Disorder (GAD)

Authors: Muhammad Zeeshan

Abstract:

This case study concerns a 54-year-old man, Mr. U, referred to Capital Hospital, Islamabad, with the presenting complaints of Generalized Anxiety Disorder (GAD). On assessment, the client reported psychological symptoms such as restlessness, low mood, and fear of darkness and of closed places over the last 30 days. He also had a fear of death and of his existence in the grave. His sleep was disturbed by excessive urination caused by diabetes. He was also suffering from somatic symptoms such as headache, numbness of the feet, pain in the chest, and nasal blockage. A complete history was taken, and informal assessment (clinical interview and MSE) and formal testing (BAI) were applied, which showed a clear diagnosis of Generalized Anxiety Disorder. CBT, relaxation techniques, a prayer chart, and behavioural techniques were applied for treatment.

Keywords: generalized anxiety disorder, presenting complaints, formal and informal assessment, diagnosis

Procedia PDF Downloads 285
26822 First Experimental Evidence on Feasibility of Molecular Magnetic Particle Imaging of Tumor Marker Alpha-1-Fetoprotein Using Antibody Conjugated Nanoparticles

Authors: Kolja Them, Priyal Chikhaliwala, Sudeshna Chandra

Abstract:

Purpose: The purpose of this work is to examine possibilities for the noninvasive imaging and identification of tumor markers for cancer diagnosis. The proposed method uses antibody-conjugated iron oxide nanoparticles and multicolor Magnetic Particle Imaging (mMPI). The method has the potential for radiation-free, real-time estimation of local tumor marker concentrations in vivo. In this study, the method is applied to human alpha-1-fetoprotein (AFP). Materials and Methods: AFP antibody-conjugated dendrimer-Fe3O4 nanoparticles were used as the tracer material. The nanoparticle bioconjugates were then incubated with bovine serum albumin (BSA) to block any possible nonspecific binding sites. Parts of the resulting solution were then incubated with AFP antigen. MPI measurements were done using the preclinical MPI scanner (Bruker Biospin MRI GmbH), and the multicolor method was used for image reconstruction. Results: In multicolor MPI images, the nanoparticles incubated only with BSA were clearly distinguished from nanoparticles incubated with BSA and AFP antigens. Conclusion: Tomographic imaging of the human tumor marker alpha-1-fetoprotein is possible using AFP antibody-conjugated iron oxide nanoparticles in the presence of BSA. This opens interesting perspectives for cancer diagnosis.

Keywords: noninvasive imaging, tumor antigens, antibody conjugated iron oxide nanoparticles, multicolor magnetic particle imaging, cancer diagnosis

Procedia PDF Downloads 303
26821 Two Cases of VACTERL Association in Pregnancy with Lymphocyte Therapy

Authors: Seyed Mazyar Mortazavi, Masod Memari, Hasan Ali Ahmadi, Zhaleh Abed

Abstract:

Introduction: VACTERL association is a rare disorder comprising various congenital malformations. The aetiology remains unknown. A combination of at least three of the following congenital anomalies is required for diagnosis: vertebral defects, anal atresia, cardiac anomalies, tracheo-esophageal fistula, renal anomalies, and limb defects. Case presentation: The first case was a 1-day-old male neonate with multiple congenital anomalies born to a 28-year-old mother. The mother had a history of pregnancy with lymphocyte therapy. His anomalies included defects of the thoracic and lumbar vertebrae, anal atresia, bilateral hydronephrosis, atrial septal defect, and a lower limb abnormality. Other anomalies were cryptorchidism and nasal canal narrowing. The second case was born at 32 weeks gestational age to a mother with a history of pregnancy with lymphocyte therapy. He had a thoracic vertebral defect, cardiac anomalies, and a renal defect. Conclusion: The diagnosis based on clinical findings is VACTERL association. Early diagnosis is very important for the investigation and treatment of other coexisting anomalies. The occurrence of VACTERL association in mothers with a history of pregnancy with lymphocyte therapy suggests a possible relationship between VACTERL association and this method of pregnancy.

Keywords: anal atresia, tracheo-esophageal fistula, atrial septal defect, lymphocyte therapy

Procedia PDF Downloads 455
26820 Influences of Separation of the Boundary Layer in the Reservoir Pressure in the Shock Tube

Authors: Bruno Coelho Lima, Joao F.A. Martos, Paulo G. P. Toro, Israel S. Rego

Abstract:

The shock tube is a ground facility widely used in aerospace and aeronautics science and technology for studies of gas-dynamic and chemical-physical processes in gases at high temperature, explosions, and the dynamic calibration of pressure sensors. A shock tube in its simplest form comprises two tubes of equal cross-section separated by a diaphragm. The diaphragm's function is to separate the two reservoirs, which are at different pressures. The reservoir containing high pressure is called the driver; the low-pressure reservoir is called the driven section. When the diaphragm is broken by the pressure difference, a non-stationary normal shock wave (named the incident shock wave) forms at the diaphragm location and propagates toward the closed end of the driven section. When this shock wave reaches the closed end of the driven section, it is completely reflected. The reflected shock wave then interacts with the boundary layer created by the flow induced by the passage of the incident shock wave. This interaction between the boundary layer and the shock wave forces the separation of the boundary layer. The aim of this paper is to analyze the influence of boundary layer separation on the reservoir pressure in the shock tube. A comparison among CFD (Computational Fluid Dynamics), experimental tests, and analytical analysis was performed. For the analytical analysis, routines were written in Python; for the numerical simulations (CFD), Ansys Fluent was used; and the experimental tests were conducted in the T1 shock tube located at IEAv (Institute of Advanced Studies).
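The authors' Python routines are not shown; as a rough sketch of the kind of analytical analysis involved, the ideal shock-tube relation can be solved numerically for the incident shock strength from the driver-to-driven pressure ratio (the assumption of identical ideal gases at equal temperature on both sides, gamma = 1.4, and the bisection solver are illustrative choices, not the authors' implementation):

```python
import math

def shock_strength(p4_over_p1, gamma=1.4, a1_over_a4=1.0):
    """Solve the ideal shock-tube equation for the incident shock strength
    p2/p1, given the diaphragm (driver/driven) pressure ratio p4/p1.
    Assumes the same ideal gas at the same temperature on both sides."""
    def implied_p4p1(x):                      # x = p2/p1
        num = (gamma - 1.0) * a1_over_a4 * (x - 1.0)
        den = math.sqrt(2.0 * gamma * (2.0 * gamma + (gamma + 1.0) * (x - 1.0)))
        return x * (1.0 - num / den) ** (-2.0 * gamma / (gamma - 1.0))
    lo, hi = 1.0, p4_over_p1                  # p2/p1 lies between these bounds
    for _ in range(100):                      # bisection on the monotone relation
        mid = 0.5 * (lo + hi)
        if implied_p4p1(mid) < p4_over_p1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def incident_shock_mach(p2_over_p1, gamma=1.4):
    """Incident shock Mach number from the normal-shock pressure jump."""
    return math.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma) * (p2_over_p1 - 1.0))
```

For a diaphragm pressure ratio of 10 with air on both sides, this gives p2/p1 of roughly 2.85 and an incident shock Mach number of about 1.6.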

Keywords: boundary layer separation, moving shock wave, shock tube, transient simulation

Procedia PDF Downloads 315
26819 Multiple Fault Detection and Classification in a Coupled Motor with Rotor Using Artificial Neural Network

Authors: Mehrdad Nouri Khajavi, Gollamhassan Payganeh, Mohsen Fallah Tafti

Abstract:

Fault diagnosis is an important aspect of maintaining rotating machinery health and increasing productivity. Much research has been done in this regard. Many faults, such as unbalance, misalignment, looseness, and bearing faults, have been considered and diagnosed with different techniques. Most research on fault diagnosis of rotating machinery deals with a single fault, whereas in reality faults usually occur simultaneously, and it is therefore necessary to recognize them at the same time. In this research, two of the most common faults, namely unbalance and misalignment, have been considered simultaneously at different intensities and then identified and classified using a Multi-Layer Perceptron Neural Network (MLPNN). Processed vibration signals are used as the input to the MLPNN, and the class of combined unbalance and misalignment is the output of the network.
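The abstract does not detail how the vibration signals are processed before reaching the network; a minimal sketch of one common approach, assuming the features are amplitudes at shaft-speed harmonics (unbalance typically raises the 1x component and parallel misalignment the 2x), could look as follows (the signal parameters are invented for illustration):

```python
import numpy as np

def harmonic_features(signal, fs, shaft_hz, n_harmonics=3):
    """Amplitudes at the 1x, 2x, 3x shaft-speed harmonics, usable as
    neural-network input features. Unbalance typically raises the 1x
    component; parallel misalignment typically raises the 2x component."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return np.array([spectrum[np.argmin(np.abs(freqs - k * shaft_hz))]
                     for k in range(1, n_harmonics + 1)])

# synthetic vibration signal: unbalance (1x) + misalignment (2x) + noise
fs, shaft_hz = 1000.0, 25.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
sig = (1.0 * np.sin(2 * np.pi * shaft_hz * t)
       + 0.5 * np.sin(2 * np.pi * 2 * shaft_hz * t)
       + 0.05 * rng.standard_normal(t.size))
f = harmonic_features(sig, fs, shaft_hz)   # recovers the 1x and 2x amplitudes
```

The recovered feature vector (roughly [1.0, 0.5, ~0] here) would then feed the MLPNN classifier.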

Keywords: unbalance, parallel misalignment, combined faults, vibration signals

Procedia PDF Downloads 354
26818 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records, yet the proportion of complete records is usually rather small, so most of the information is neglected, and the complete records themselves may be strongly biased. In addition, the reason that data are missing may itself contain information, which is ignored under that approach. An interesting question is therefore whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared with using the usually small share of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data, and thereby including the information from all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
After finding the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though it involves additional procedures such as data imputation.
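The iterative impute-and-retrain idea can be sketched minimally as follows (the k-means-style clustering, cluster-mean imputation, and deterministic center initialization are illustrative assumptions, not the authors' exact algorithms):

```python
import numpy as np

def iterative_cluster_impute(X, k=2, n_iter=10):
    """Impute NaNs iteratively: fill with column means, then alternate
    k-means-style cluster assignment with re-imputation from cluster
    means, so the naive initial fill is progressively corrected."""
    X = X.astype(float).copy()
    rows, cols = np.where(np.isnan(X))
    col_means = np.nanmean(X, axis=0)
    X[rows, cols] = col_means[cols]                 # naive initial fill
    # deterministic spread of initial centers across the sorted row sums
    order = np.argsort(X.sum(axis=1))
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                   # nearest-center assignment
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
        X[rows, cols] = centers[labels[rows], cols]  # refine imputations
    return X
```

With two well-separated clusters, a missing cell in a row from one cluster converges toward that cluster's mean rather than the (biased) global column mean.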

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
26817 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models that forecast the failure rate of parts. There is, however, one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant-mortality and useful-life zones of the bathtub curve. Predicting with warranty data alone therefore does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples to failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part: one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two separate sets of Weibull parameters are estimated and combined to form the proposed Additive Weibull Model for prediction.
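Under an additive hazard model, the two Weibull terms combine by summing hazards, so the cumulative hazards also add. A minimal sketch (the parameter values below are invented; the authors' estimation procedure is not reproduced here):

```python
import math

def additive_weibull_reliability(t, beta1, eta1, beta2, eta2):
    """Reliability R(t) when two Weibull failure modes act together:
    one fitted to warranty claims (early/random failures), one to FEA
    fatigue life (wear-out). Hazards add, so cumulative hazards add."""
    return math.exp(-((t / eta1) ** beta1 + (t / eta2) ** beta2))

def additive_weibull_hazard(t, beta1, eta1, beta2, eta2):
    """Instantaneous failure rate h(t) = h1(t) + h2(t), for t > 0."""
    return ((beta1 / eta1) * (t / eta1) ** (beta1 - 1.0)
            + (beta2 / eta2) * (t / eta2) ** (beta2 - 1.0))
```

With beta1 < 1 (infant mortality from warranty data) and beta2 > 1 (wear-out from fatigue data), the combined hazard traces the full bathtub curve that neither data set captures alone.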

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 73
26816 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, the mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in the mathematical sciences and their application in harnessing the power of data analytics. It highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies, and explores key mathematical techniques, such as optimization, mathematical modeling, network analysis, and computational algorithms, that underpin effective data analysis and interpretation. The abstract emphasizes the role of the mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, and the social sciences, showing how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. It also stresses the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals, recognizing the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. Finally, it underlines the significance of ongoing research in the mathematical sciences and its impact on data analytics, and the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation.
In summary, this abstract sheds light on advances in the mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making across domains.

Keywords: mathematical sciences, data analytics, advances, unveiling

Procedia PDF Downloads 93
26815 A Qualitative Study of Parents' Recommendations for Improving the Notification Process and Communication between Health Professionals and Families for New Diagnosis of Cystic Fibrosis

Authors: Mohammad S. Razai, Jan Williams, Rachel Nestel, Dermot Dalton

Abstract:

Purpose: This descriptive qualitative study aimed to obtain parents' recommendations for improving the notification process and communication of a positive newborn screening (NBS) result for cystic fibrosis (CF). Methods: Thematic analysis of semi-structured, open-ended interviews with 11 parents of 7 children (aged 2 months to 2 years) with a confirmed diagnosis of CF. Results: Parents preferred face-to-face disclosure of positive NBS results by a pediatrician with CF professional qualifications. They trusted a pediatrician more than any other professional to provide accurate, credible, and comprehensive information about the diagnosis and its implications. Parents recommended that health professionals be knowledgeable and provide clear, succinct, and understandable information. Providers should also explore parents' concerns and acknowledge their feelings and emotions. Most parents reported that they preferred to be notified as soon as the results were available, though several preferred to be told only once the diagnosis was certain. Most parents regarded open access to the CF team as the most significant part of care coordination. In addition to health professionals, most parents used the internet as an important source of information, interaction, and exchange of experiences. Most parents also used social networking sites, such as Facebook groups, and smartphone apps. Conclusion: This study provides significant new evidence from the parental perspective, emphasizing the pivotal role of good communication skills deployed in person by a knowledgeable CF specialist. Parents' use of social media and the internet has replaced some traditional methods of information exchange and may reduce the need for professional input for newly diagnosed CF patients.

Keywords: care coordination, cystic fibrosis, newborn screening, notification process, parental preferences, professional-parent communication

Procedia PDF Downloads 398
26814 An Audit on the Role of Sentinel Node Biopsy in High-Risk Ductal Carcinoma in Situ and Intracystic Papillary Carcinoma

Authors: M. Sulieman, H. Arabiyat, H. Ali, K. Potiszil, I. Abbas, R. English, P. King, I. Brown, P. Drew

Abstract:

Introduction: The incidence of breast ductal carcinoma in situ (DCIS) has been increasing; it currently represents up to 20-25% of all breast carcinomas. Some aspects of DCIS management are still controversial, mainly due to the heterogeneity of its clinical presentation and of its biological and pathological characteristics. In DCIS, a histological diagnosis obtained preoperatively carries a risk of sampling error if invasive cancer is subsequently diagnosed. Mammographic extent over 4–5 cm and the presence of architectural distortion, focal asymmetric density, or a mass on mammography are proven risk factors for preoperative histological understaging. Intracystic papillary cancer (IPC) is a rare form of breast carcinoma; despite previously being compared to DCIS, it has been shown to present histologically with invasion of the basement membrane and even metastasis. Sentinel lymph node biopsy (SLNB) carries a risk of associated comorbidity that should be considered when planning surgery for DCIS and IPC. Objectives: The aim of this audit was to better define a 'high-risk' group of patients with a preoperative diagnosis of non-invasive cancer undergoing breast-conserving surgery (BCS) who would benefit from sentinel node biopsy. Method: Retrospective data collection of all patients with ductal carcinoma in situ over 5 years; 636 patients were identified, and after exclusion criteria were applied, 394 patients were included. High risk was defined as extensive micro-calcification >40 mm or any mass-forming DCIS. IPC: a Winpath search for the term 'papillary carcinoma' in any breast specimen over a 5-year period; 29 patients were included in this group. Results: DCIS: 188 patients were deemed high risk due to >40 mm calcification or a mass (radiological or palpable); 61% of those had a mastectomy and 32% BCS. Overall, 38% of this high-risk group had invasive disease. Of these high-risk DCIS patients, 85% had a SLNB: 80% at the time of surgery and 5% at a second operation.
For the BCS patients, 42% had SLNB at the time of surgery and 13% (8 patients) at a second operation. Fifteen (7.9%) patients in the high-risk group had a positive SLNB, 11 of whom had a mastectomy and 4 BCS. IPC: The provisional diagnosis of encysted papillary carcinoma was upgraded to an invasive carcinoma on final histology in around a third of cases. This may have implications when deciding whether to offer sentinel node removal at the time of therapeutic surgery. Conclusions: We have defined a 'high-risk' group of patients with a preoperative diagnosis of non-invasive cancer undergoing BCS who would benefit from SLNB at the time of surgery. In patients with high-risk features, the risk of invasive disease is up to 40%, but the risk of nodal involvement is approximately 8%. The risk of morbidity from SLNB is up to about 5%, particularly the risk of lymphedema.

Keywords: breast ductal carcinoma in situ (DCIS), intracystic papillary carcinoma (IPC), sentinel lymph node biopsy (SLNB), high-risk, non-invasive cancer

Procedia PDF Downloads 111
26813 The Types of Collaboration Models Driven by Public Art Establishment–Case Study of Taichung City

Authors: Cheng-Lung Yu, Ying-His Liao

Abstract:

Evidence shows that public art accelerates local economic growth, and local governments even award public-private partnership collaborations to sustain the creation of public art for urban economic development. Through the public-private partnerships of public art establishment, it is evident that public construction projects have been led by government policy, yet private developers have played crucial roles in driving innovative business models such as tourism investment, real estate value appreciation, and community participation. This study examines the types of collaboration driven by Taichung city government policy under the regulation of public art establishment over the past three years. Through empirical analysis of several cases, the authors identify trends in public art development that support local economic growth in Taiwan.

Keywords: public art, public art establishment regulation, construction management, urban governance

Procedia PDF Downloads 32
26812 Theoretical and ML-Driven Identification of a Mispriced Credit Risk

Authors: Yuri Katz, Kun Liu, Arunram Atmacharan

Abstract:

Due to illiquidity, mispricing on credit markets is inevitable. This creates huge challenges for banks and investors as they seek new ways of risk valuation and portfolio management in a post-credit-crisis world. Here, we analyze the difference in the behavior of the spread-to-maturity between the investment-grade and high-yield categories of US corporate bonds from 2014 to 2023. Deviation from the theoretical dependency of this measure in the universe under study allows the identification of multiple cases of mispriced credit risk. Remarkably, we observe mispriced bonds in both categories of credit ratings. This identification is supported by the application of a state-of-the-art machine learning model in more than 90% of cases. Notably, the ML-driven, model-based forecasting of a bond's credit rating category demonstrates excellent out-of-sample accuracy (AUC = 98%). We believe that these results can augment conventional valuations of credit portfolios.
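A toy sketch of flagging deviations from a fitted spread-to-maturity trend (the log-linear trend and the z-score cutoff are illustrative simplifications, not the authors' theoretical dependency or ML model):

```python
import numpy as np

def flag_mispriced(maturity, spread, z_thresh=2.0):
    """Flag bonds whose spread deviates strongly from the cross-sectional
    spread-to-maturity trend. The log-linear trend and z-score cutoff
    are simplifying assumptions for illustration."""
    x = np.log(maturity)
    coef = np.polyfit(x, spread, 1)            # fit spread ~ a*log(T) + b
    resid = spread - np.polyval(coef, x)       # deviation from the trend
    z = (resid - resid.mean()) / resid.std()   # standardized residuals
    return np.abs(z) > z_thresh
```

A bond sitting far above the curve fitted to its peers would be flagged as a candidate for mispriced credit risk; a production model would of course condition on ratings, sector, and liquidity as well.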

Keywords: credit risk, credit ratings, bond pricing, spread-to-maturity, machine learning

Procedia PDF Downloads 80
26811 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which can lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. The top-rated wells, ranked by sustained high ROP, are then identified and filtered based on NPT incidents, and a cross-plot of the controllable dynamic drilling parameters per ROP value is generated. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditional mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. These adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP; a more optimal ROP performance is identified through the heat map and incorporated into the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. These factors position the system to work for any newly drilled well in a developing field.
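The IDW step of phase one can be sketched as follows (the coordinates, power parameter, and sample layout are illustrative; the authors condition on additional variables not modeled here):

```python
import math

def idw_estimate(target_xy, samples, power=2.0):
    """Inverse Distance Weighting: estimate a drilling parameter (e.g. WOB)
    at a planned well location from values observed at offset wells.
    `samples` is a list of ((x, y), value) pairs."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0.0:
            return value                     # coincident well: return exactly
        w = 1.0 / d ** power                 # nearer wells weigh more
        num += w * value
        den += w
    return num / den
```

A target midway between two offset wells gets the simple average of their values; moving the target closer to one well pulls the estimate toward that well's value.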

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 131