Search results for: quantum algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2591

251 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures (or, in medical terms, human tissues), the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The proposed deep learning architecture comprises four components: a CNN, a spatial feature channel attention (SFCA) mechanism, a ConvLSTM, and temporal feature frame attention (TFFA) modules. The attention mechanism self-adaptively computes channel attention and temporal attention weights, fine-tuning the visual and temporal feature responses to extract the most pertinent visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
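
The physics-embedding idea above can be illustrated with a toy composite loss. The sketch below is not the authors' model: it replaces the electromagnetic-wave physics and the CNN/ConvLSTM architecture with a 1D scalar wave equation u_tt = c^2 u_xx, a finite-difference residual, and plain Python lists, purely to show how a data-misfit term and a physics-residual term combine into one training loss (all names and the `lam` weight are illustrative).

```python
def wave_residual(u, dx, dt, c):
    """Finite-difference residual of u_tt - c^2 * u_xx at interior
    grid points; u is a 2D list indexed as u[t][x]."""
    res = []
    for t in range(1, len(u) - 1):
        for x in range(1, len(u[0]) - 1):
            u_tt = (u[t + 1][x] - 2 * u[t][x] + u[t - 1][x]) / dt ** 2
            u_xx = (u[t][x + 1] - 2 * u[t][x] + u[t][x - 1]) / dx ** 2
            res.append(u_tt - c ** 2 * u_xx)
    return res

def pinn_loss(u_pred, u_obs, dx, dt, c, lam=1.0):
    """Composite loss: mean-squared data misfit + lam * mean-squared
    PDE residual, the structure a physics-informed loss typically has."""
    n = len(u_pred) * len(u_pred[0])
    data = sum((p - o) ** 2
               for rp, ro in zip(u_pred, u_obs)
               for p, o in zip(rp, ro)) / n
    res = wave_residual(u_pred, dx, dt, c)
    phys = sum(r * r for r in res) / max(len(res), 1)
    return data + lam * phys
```

For a field that satisfies the wave equation exactly and matches the observations, both terms vanish; any violation of the physics raises the loss even at points where no observations exist.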

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 40
250 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative form of ARPES that uses full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E,kx,ky) of the full valence electronic structure in approx. 20 min. Band dispersion stacks were measured in the energy range of 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors are investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular, clearly indicating an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines, strongly reminiscent of Kikuchi lines in diffraction, is visible at energies around 22 eV. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridors.

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 340
249 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario

Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad

Abstract:

One of the major challenges of today's era is peak demand, which stresses the transmission lines and raises the cost of energy generation, ultimately leading to higher electricity bills for end users. Peak demand used to be handled by supply-side management; nowadays, however, attention has shifted to demand-side management (DSM), with its economic and environmental advantages. DSM of domestic load can play a vital role in reducing the peak demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) to reduce peak load demand and the electricity bills of end users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results of optimization algorithms. In GAMS software, a mixed integer linear programming (MILP) algorithm is used for the optimization. In different cases, different constraints are used for the optimization, considering the comforts, needs and priorities of the end users. Results under real-time pricing and fixed tariff pricing are compared, and the savings in electricity bills are discussed, demonstrating the potential of demand-side management to reduce electricity bills and peak loads. Using a real-time pricing tariff instead of fixed tariff pricing is seen to reduce electricity bills. Moreover, the simulation results of the proposed energy management system show substantial power savings. It is anticipated that the results of this research will prove highly useful to utility companies as well as for the improvement of domestic DR.
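
The controller's optimization step can be sketched as follows. This is not the GAMS MILP formulation from the paper; it is a brute-force stand-in that searches over appliance start times under a real-time tariff, with a user-defined earliest-start/latest-finish comfort window as the constraint (all appliance data below are made up).

```python
from itertools import product

def schedule_cost(tariff, duration, power_kw, start):
    """Cost of running one appliance from hour `start` for `duration`
    hours at `power_kw`, under an hourly real-time tariff."""
    return sum(tariff[t] * power_kw for t in range(start, start + duration))

def optimise(tariff, appliances):
    """Exhaustive search over feasible start times.
    appliances: list of (duration_h, power_kw, earliest, latest_finish)."""
    ranges = [range(e, lf - d + 1) for d, p, e, lf in appliances]
    best, best_cost = None, float("inf")
    for starts in product(*ranges):
        cost = sum(schedule_cost(tariff, d, p, s)
                   for (d, p, e, lf), s in zip(appliances, starts))
        if cost < best_cost:
            best, best_cost = starts, cost
    return best, best_cost
```

A MILP solver reaches the same optimum without enumerating every combination; the brute force here only makes the objective and the comfort-window constraints explicit.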

Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)

Procedia PDF Downloads 302
248 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
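
As a point of reference for the network-based estimator, a classical moment-based estimate of the roughness (Hurst) parameter can be computed directly from a sampled path via the scaling E|X_{t+l} - X_t|^2 ~ l^{2H}. The sketch below is a baseline of this kind, not the authors' neural method, and the test illustrates it on an ordinary Brownian path (H = 0.5) rather than a true fOU sample.

```python
import math

def hurst_estimate(path, lags=(1, 2, 4, 8)):
    """Method-of-moments Hurst estimate: regress the log of the mean
    squared increment on the log of the lag; H is half the slope."""
    xs, ys = [], []
    for lag in lags:
        inc = [(path[i + lag] - path[i]) ** 2 for i in range(len(path) - lag)]
        xs.append(math.log(lag))
        ys.append(math.log(sum(inc) / len(inc)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope / 2
```

Estimators like this are cheap but noisy on short, rough paths; that gap is what motivates training a neural network on large simulated samples instead.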

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
247 Women’s Colours in Digital Innovation

Authors: Daniel J. Patricio Jiménez

Abstract:

Digital reality demands new ways of thinking, flexibility in learning, acquisition of new competencies, visualizing reality under new approaches, generating open spaces, and understanding dimensions in continuous change. We need inclusive growth, where colours are not lacking, where lights do not give a distorted reality, where science is not half-truth. This study draws on documentary and bibliographic sources, applying deductive and inductive methods to different multidisciplinary information sources to provide a reflective and analytical account of current reality. Women, today and tomorrow, are a strategic element in science and the arts, which, under the umbrella of sustainability, implies 'meeting current needs without detriment to future generations'. We must build new scenarios that treat 'the feminine and the masculine' as an inseparable whole, encouraging cooperative behavior; nothing is exclusive or excluding, and that is where true respect for diversity must be grounded. We are all part of an ecosystem, which we will make better as long as there is a real balance in terms of gender. It is the time of 'the lifting of the veil'; in other words, it is the time to discover the pseudonyms, the women who painted, wrote, investigated, and recorded advances. The current reality, however, demands much more: we must remove doors where they are not needed. Mass processing of data, big data, needs to incorporate algorithms from the perspective of 'the feminine'. Yet most STEM (science, technology, engineering, and math) students are men. Our way of doing science is biased, focused on honors and short-term results to the detriment of sustainability. Historically, the canons of beauty, and the ways of looking, perceiving and feeling, depended on the circumstances and interests of each moment, and women had no voice in this. Parallel to science, there is an under-representation of women in the arts, less in the universities than in the galleries, museums and art dealerships, where colours impoverish the gaze and once again highlight the gender gap and the silence of the feminine. Art registers sensations by divining the future; science will turn them into reality. The uniqueness of the so-called new normality requires women to be protagonists both in new forms of emotion and thought and in the experimentation and development of new models. This will result in women playing a decisive role in the so-called 'society 5.0' or, in other words, in a more sustainable, more humane world.

Keywords: art, digitalization, gender, science

Procedia PDF Downloads 165
246 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
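
The latent-variable construction at the heart of PLS can be made concrete with the first NIPALS component for a univariate response: the weight vector is proportional to X^T y, the score vector is t = Xw, and the loadings follow by projection onto t. A minimal sketch (plain Python, one component, no sparsity penalty, so it omits the lasso/Cauchy penalization the study proposes):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pls1_component(X, y):
    """First PLS component for univariate y (NIPALS form):
    returns weights w, scores t, X-loadings p and y-loading q."""
    n_vars = len(X[0])
    # Weight direction proportional to X^T y, normalized to unit length.
    w = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(n_vars)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores: projection of each observation onto the weight direction.
    t = [dot(row, w) for row in X]
    tt = dot(t, t)
    # Loadings: regression of X columns and y on the scores.
    p = [sum(X[i][j] * t[i] for i in range(len(X))) / tt for j in range(n_vars)]
    q = dot(y, t) / tt
    return w, t, p, q
```

Further components are obtained by deflating X (and y) with t, p and q and repeating; the interpretation issues discussed above concern precisely these w vectors.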

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 49
245 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. 
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
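
The surrogate idea can be miniaturized as follows. This sketch replaces the 3D simulation with a cheap stand-in function, tabulates it once, and lets a toy random search (standing in for the MOEAs) query only the cheap interpolant; every name and curve here is hypothetical, and a real surrogate would be an ANN trained on CFD runs rather than a 1D lookup table.

```python
import bisect
import random

def expensive_lpm(x):
    """Stand-in for one expensive simulation run (a hypothetical
    performance curve with its optimum at x = 0.3)."""
    return (x - 0.3) ** 2 + 0.1

def build_surrogate(f, xs):
    """Evaluate f once on a grid, return a cheap piecewise-linear
    interpolant that the optimizer queries instead of f."""
    grid = sorted(xs)
    vals = [f(x) for x in grid]
    def surrogate(x):
        i = bisect.bisect_left(grid, x)
        if i == 0:
            return vals[0]
        if i == len(grid):
            return vals[-1]
        w = (x - grid[i - 1]) / (grid[i] - grid[i - 1])
        return (1 - w) * vals[i - 1] + w * vals[i]
    return surrogate

def random_search(f, n_evals, seed=0):
    """Toy stand-in for the MOEA: sample candidates, keep the best."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_evals):
        x = rng.random()
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

The point is the call-count accounting: the expensive model is evaluated only once per grid point when the surrogate is built, while the optimizer's many function evaluations all hit the cheap approximation.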

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 90
244 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow

Authors: Shan Zhang, Peter Suechting

Abstract:

Predicting trade patterns is critical to decision-making in public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling relied heavily on strong demand for recyclable materials overseas. Starting in 2017, however, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that had previously been the primary importers of recyclables from the U.S. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build up its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest of this paper is in (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven and machine learning (ML) algorithms, both supervised (shrinkage and tree methods) and unsupervised (neural network methods), to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables could help the importing countries to which those materials will shift next to prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries, like the U.S., understand the importance of a healthy domestic recycling industry. The preliminary results suggest that gravity models, together with a particular selection of macroeconomic predictor variables, are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of political conditions (trade tariffs and bans), the predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in the international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
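
A minimal version of the gravity specification can be fitted by ordinary least squares on logs: ln T_ij = b0 + b1 ln GDP_i + b2 ln GDP_j + b3 ln dist_ij. The sketch below recovers the coefficients from synthetic data generated exactly under that law; it omits the shrinkage, tree, and neural methods the study actually applies, and all data are made up.

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_gravity(rows):
    """OLS on the log-linearized gravity model.
    rows: (trade_value, gdp_exporter, gdp_importer, distance)."""
    X = [[1.0, math.log(gi), math.log(gj), math.log(d)] for _, gi, gj, d in rows]
    y = [math.log(t) for t, *_ in rows]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)
```

Policy variables such as tariff or ban indicators enter the same way, as extra columns of X.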

Keywords: environmental economics, machine learning, recycling, international trade

Procedia PDF Downloads 168
243 A Density Functional Theory Based Comparative Study of Trans- and Cis-Resveratrol

Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang

Abstract:

Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as anti-oxidant, anti-aging and antimicrobial properties. Of the two isomeric forms of resveratrol, trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers will not only provide insight into understanding the RvL isomers but will also help in designing parameters for differentiation in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted, using the B3LYP/6-311++G** model, to explore the through-bond and through-space intramolecular interactions. Properties such as vibrational spectra (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA) and the Fukui function are calculated. It is found that although trans-RvL is non-planar (C1 symmetry), its backbone non-H atoms lie nearly in one plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not coplanar. The absence of planarity gives rise to an H-bond of 2.67 Å in cis-RvL. Rotation of the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it may break the (planar) fully conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectrum shows very different spectral features for trans- and cis-RvL in the region 900-1500 cm-1, where the spectral peaks at 1138-1158 cm-1 are split in cis-RvL compared to a single peak at 1165 cm-1 in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm-1. Further, the carbon chemical environment (13C NMR) of the RvL molecule exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at the carbon atom C(11), indicating that the chemical environment of this C group in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans- and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large energy differences in their orbitals, i.e. Δεcis-trans > ±0.30 eV, are contributed by the outer valence shell: MO60 (HOMO), MO52-55 and MO46. The active sites captured by the Fukui function (f+ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL, as the cis orientation breaks the large conjugation of trans-RvL so that the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEInt) of the phenolic compound, where trans- is preferred over cis-RvL (ΔΔEi = -4.35 kcal.mol-1). These quantum mechanical results could help in unravelling the diversified beneficial activities associated with resveratrol.
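
The HOMO-LUMO gaps quoted above can be translated into equivalent photon wavelengths via lambda = hc/E (hc ≈ 1239.84 eV·nm), a standard conversion that is not part of the paper's own calculations:

```python
def gap_wavelength_nm(gap_ev):
    """Convert an orbital energy gap in eV to the equivalent photon
    wavelength in nm using lambda = hc/E, hc ≈ 1239.84 eV*nm."""
    return 1239.84 / gap_ev
```

On this conversion the trans gap (3.95 eV) corresponds to roughly 314 nm and the cis gap (4.35 eV) to roughly 285 nm, so the smaller trans gap sits at the longer wavelength.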

Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function

Procedia PDF Downloads 194
242 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work in master-slave operation, where the user is the master and the robotic arms are the slaves. Currently, surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to the inner organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (an XYZ stage) for three-dimensional movement, an end-effector assembly for the forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and the MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the microcontroller assembly's 10-bit analog-to-digital converter, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
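
The ADC-counts-to-volts-to-newtons chain described above can be sketched as below. The reference voltage, bit depth and calibration pairs are illustrative, not values from the paper; the calibration itself is an ordinary least-squares line fitted to readings taken against a reference force gauge.

```python
def adc_to_volt(count, v_ref=5.0, bits=10):
    """Convert a raw ADC reading to volts (10-bit: counts 0-1023)."""
    return v_ref * count / (2 ** bits - 1)

def calibrate(voltages, forces):
    """Least-squares line force = slope * voltage + intercept from
    calibration pairs recorded against a reference force gauge."""
    n = len(voltages)
    mv = sum(voltages) / n
    mf = sum(forces) / n
    slope = (sum((v - mv) * (f - mf) for v, f in zip(voltages, forces))
             / sum((v - mv) ** 2 for v in voltages))
    return slope, mf - slope * mv

def to_force(volt, slope, intercept, f_max=35.0):
    """Map one voltage sample to force, clamped to the calibrated
    0-35 N measurement range."""
    return min(max(slope * volt + intercept, 0.0), f_max)
```

In the real system this conversion runs inside the Simulink loop each sample period before the force value is fed back to the haptic controller.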

Keywords: surgical robot, haptic feedback, MATLAB, strain gage, simulink

Procedia PDF Downloads 534
241 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; because such devices are unavailable for practical daily use, it is difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system has an edge over existing wearables in that it allows uniform contact and pressure at the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by rejecting unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For these, we measured blood pressure using a cuff-based BP monitor (Omron HEM-8712) while simultaneously recording the PPG signal from our cardio-analysis tool. The complete test was conducted with the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
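
One of the simplest key points extracted from a PPG wavelet is the systolic peak, and the mean peak-to-peak interval is a typical candidate feature for regression models of this kind. The sketch below is a naive local-maximum picker on a synthetic waveform, not the authors' feature set; threshold and sampling rate are illustrative.

```python
import math

def find_peaks(sig, min_height):
    """Indices of local maxima above min_height; a minimal
    systolic-peak picker for a clean PPG trace."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > min_height and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]

def mean_peak_interval_s(sig, fs, min_height=0.5):
    """Mean peak-to-peak interval in seconds, computed from the first
    and last detected peaks; returns None if fewer than two peaks."""
    peaks = find_peaks(sig, min_height)
    if len(peaks) < 2:
        return None
    return (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
```

A real pipeline would filter the raw signal first and extract several such key points (foot, peak, dicrotic notch) from each wavelet pair before regression.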

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 279
240 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible to manage by human beings, especially doctors. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views and, from that, quantification of chamber volumes/mass, ejection fraction and longitudinal strain through speckle tracking; determining the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identifying myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis; predicting all-cause mortality. CT: reducing radiation doses; calculating the calcium score; diagnosing coronary artery disease (CAD); predicting all-cause 5-year mortality; predicting major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculating cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects; potentially reducing costs by precluding the need for gadolinium-enhanced CMR; predicting 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classifying normal and abnormal myocardium in CAD; detecting locations of abnormal myocardium; predicting cardiac death. ML was comparable to, or better than, two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging for doctors who cannot otherwise manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, and nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 79
239 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on the extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, to present the SAR data in an image domain comparable to what a human would view and so ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.); the data is then range-compressed; and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
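The per-pulse accumulation described above can be sketched as follows; the array layout, the interpolation scheme, and the phase term are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def backproject(points, platform_pos, range_profiles, range_bins, fc, c=3e8):
    """Per-voxel reflectivity: sum each pulse's range-compressed sample at the
    voxel's range, with a matched-filter phase correction (illustrative sketch).

    points: (P, 3) reference voxels; platform_pos: (N, 3) antenna position per pulse;
    range_profiles: (N, R) complex range-compressed data; range_bins: (R,) range axis [m].
    """
    image = np.zeros(len(points), dtype=complex)
    for pos, profile in zip(platform_pos, range_profiles):
        r = np.linalg.norm(points - pos, axis=1)              # range from antenna to every voxel
        samp = np.interp(r, range_bins, profile.real) \
             + 1j * np.interp(r, range_bins, profile.imag)    # sample the range profile
        image += samp * np.exp(1j * 4 * np.pi * fc / c * r)   # phase-align this pulse's contribution
    return np.abs(image)                                      # reflectivity magnitude per 3D point
```

On a GPU, the body of the voxel computation maps naturally onto one thread per 3D point, since no voxel's sum depends on any other voxel.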

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 86
238 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 93
237 The Untreated Burden of Parkinson’s Disease: A Patient Perspective

Authors: John Acord, Ankita Batla, Kiran Khepar, Maude Schmidt, Charlotte Allen, Russ Bradford

Abstract:

Objectives: Despite the availability of treatment options, Parkinson’s disease (PD) continues to impact heavily on a patient’s quality of life (QoL), as many symptoms that bother the patient remain unexplored and untreated in clinical settings. The aims of this research were to understand the burden of PD symptoms from a patient perspective, particularly those which are the most persistent and debilitating, and to determine whether current treatments and treatment algorithms adequately focus on their resolution. Methods: A 13-question, online, patient-reported survey was created based on the MDS-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) and symptoms listed on Parkinson’s disease patient advocacy group websites, and was then validated by 10 Parkinson’s patients. In the survey, patients were asked to choose both their most common and their most bothersome symptoms, whether they had received treatment for those and, if so, whether it had been effective in resolving them. Results: The most bothersome symptoms reported by the 111 participants who completed the survey were sleep problems (61%), feeling tired (56%), slowness of movements (54%), and pain in some parts of the body (49%). However, while 86% of patients reported receiving dopamine or dopamine-like drugs to treat their PD, far fewer reported receiving targeted therapies for additional symptoms. For example, of the patients who reported having sleep problems, only 33% received some form of treatment for this symptom. This was also true for feeling tired (30% received treatment for this symptom), slowness of movements (62% received treatment for this symptom), and pain in some parts of the body (61% received treatment for this symptom). Additionally, 65% of patients reported that the symptoms they experienced were not adequately controlled by the treatments they received, and 9% reported that their current treatments had no effect on their symptoms whatsoever.
Conclusion: The survey outcomes highlight that the majority of patients involved in the study received treatment focused on their disease; however, symptom-based treatments were less well represented. Consequently, patient-reported symptoms such as sleep problems and feeling tired tended to receive more fragmented intervention than ‘classical’ PD symptoms, such as slowness of movement, even though they were reported as being amongst the most bothersome symptoms for patients. This research highlights the need to explore symptom burden from the patient’s perspective and to offer customised treatment/support for both motor and non-motor symptoms to maximize patients’ quality of life.

Keywords: survey, patient-reported symptom burden, unmet needs, Parkinson's disease

Procedia PDF Downloads 297
236 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for thermal conversion of biomass and solid wastes to energy has been a major solution to the effective handling of wastes over a long period of time. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and the deviation of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of chamber temperature from set target values, sends these deviations (which generate disturbances in the system) back as a feedback signal (input), and controls operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as a feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, thus indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range of 222-273 ft/min and fuel feeding rate in the range of 60-90 rpm on the chamber temperature.
The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process as it helps to minimize the steady-state error.
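The feedback loop itself runs as ladder logic on the PLC; a minimal sketch of one temperature-feedback iteration, with made-up proportional gains and the operating ranges quoted in the abstract used as actuator limits, might look like:

```python
def control_step(t_meas, t_set, blower, feeder, k_air=2.0, k_fuel=0.5,
                 blower_lim=(222.0, 273.0), feeder_lim=(60.0, 90.0)):
    """One temperature-feedback iteration for the lab-scale combustor.
    Gains k_air and k_fuel are made-up illustrations; the limits are the
    air velocity [ft/min] and fuel feeding rate [rpm] ranges from the abstract."""
    err = t_set - t_meas                    # positive when the chamber runs cold
    # Cold chamber: feed more fuel and reduce the (cooling) excess air; hot chamber: the opposite.
    blower = min(max(blower - k_air * err, blower_lim[0]), blower_lim[1])
    feeder = min(max(feeder + k_fuel * err, feeder_lim[0]), feeder_lim[1])
    return blower, feeder
```

For example, a chamber 50 degrees below setpoint drives the blower to its lower limit and the feeder to its upper limit, which is the clamping behavior a ladder-logic implementation would reproduce with compare and move instructions.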

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 129
235 Optimizing PharmD Education: Quantifying Curriculum Complexity to Address Student Burnout and Cognitive Overload

Authors: Frank Fan

Abstract:

PharmD (Doctor of Pharmacy) education has confronted an increasing challenge: curricular overload, a phenomenon resulting from the expansion of curricular requirements as PharmD education strives to produce graduates who are practice-ready. The aftermath of the global pandemic has amplified the need for healthcare professionals, leading to a growing trend of assigning more responsibilities to them to address the global healthcare shortage. For instance, the pharmacist’s role has expanded to include not only compounding and distributing medication but also providing clinical services, including minor ailments management, patient counselling and vaccination. Consequently, PharmD programs have responded by continually expanding their curricula and adding more requirements. While these changes aim to enhance the education and training of future professionals, they have also led to unintended consequences, including curricular overload, student burnout, and a potential decrease in program quality. To address the issue and ensure program quality, there is a growing need for evidence-based curriculum reforms. My research seeks to integrate Cognitive Load Theory, emerging machine learning algorithms within artificial intelligence (AI), and statistical approaches to develop a quantitative framework for optimizing curriculum design within the PharmD program at the University of Toronto, the largest PharmD program in Canada, and to provide quantification and measurement of issues that are currently discussed in terms of anecdote rather than data. This research will serve as a guide for curriculum planners, administrators, and educators, aiding in the comprehension of how the pharmacy degree program compares to others within and beyond the field of pharmacy. It will also shed light on opportunities to reduce the curricular load while maintaining its quality and rigor.
Given that pharmacists constitute the third-largest healthcare workforce, their education shares similarities and challenges with other health education programs. Therefore, my evidence-based, data-driven curriculum analysis framework holds significant potential for training programs in other healthcare professions, including medicine, nursing, and physiotherapy.

Keywords: curriculum, curriculum analysis, health professions education, reflective writing, machine learning

Procedia PDF Downloads 61
234 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and there lies their interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folk song in a world tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music.
As experimentation confirming the theorization, the author designed a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 81
233 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. 
Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
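The database-driven pipeline can be sketched with synthetic stand-in data: dimensionality reduction over the non-destructive indicators, then a regression predicting remaining life. The paper uses PCA with t-SNE and random forest / support vector machine regressors; here PCA is computed via SVD and a plain least-squares fit is substituted to keep the sketch dependency-free, and all numbers are made up.

```python
import numpy as np

# Hypothetical database: rows = cells simulated with the aged Newman model,
# columns = non-destructive aging indicators (capacity, resistance, SEI-thickness proxy, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
X[:, 0] *= 5.0                                       # pretend one indicator dominates aging
y = 1000.0 - 8.0 * X[:, 0] + rng.normal(size=200)    # stand-in remaining life [cycles]

# PCA via SVD on centered data, keeping the first 5 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                                    # component scores, shape (200, 5)

# Regression on the components (least squares standing in for random forest / SVM).
A = np.c_[Z, np.ones(len(Z))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rel_err = np.mean(np.abs(A @ coef - y) / y)          # mean relative prediction error
```

With the dominant aging indicator captured by the leading component, the fitted predictor stays well inside the 5% error margin reported in the abstract; on real cycling data the nonlinear regressors the authors use would be needed.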

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 70
232 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can innovate residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of logical constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors. We investigate how generative design can be utilized to optimize building performance, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework.
Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
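A generate-filter-score loop of the kind described might be sketched as follows; the massing parameters, legal limits (height, floor-area ratio, setback), and scoring objective are invented placeholders, not the authors' framework:

```python
import random

def generate(seed=None):
    """One random candidate massing (illustrative parameters only)."""
    rnd = random.Random(seed)
    return {"floors": rnd.randint(3, 12),
            "footprint": rnd.uniform(200.0, 800.0),   # m^2
            "setback": rnd.uniform(0.0, 6.0)}         # m

def legal(c, max_height_m=30.0, max_far=3.5, plot_area=1000.0, min_setback=3.0):
    """Hypothetical urban regulations: height cap, floor-area-ratio cap, minimum setback."""
    height = c["floors"] * 3.0
    far = c["floors"] * c["footprint"] / plot_area
    return height <= max_height_m and far <= max_far and c["setback"] >= min_setback

def score(c):
    """Toy aesthetic/environmental objective: prefer compact, daylight-friendly forms."""
    return c["footprint"] / (c["floors"] * 3.0) + c["setback"]

candidates = [generate(i) for i in range(500)]        # generative stage
feasible = [c for c in candidates if legal(c)]        # legal-constraint filter
best = max(feasible, key=score)                       # ranking stage
```

The structure, not the formulas, is the point: constraints act as a hard filter before any aesthetic or performance ranking, which mirrors the compliance-first logic the abstract describes.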

Keywords: generative design, computational design, parametric design, algorithmic modeling

Procedia PDF Downloads 65
231 Conversational Assistive Technology of Visually Impaired Person for Social Interaction

Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer

Abstract:

Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition. This model evaluates the accuracy of activity recognition. The device captures the scene in front of the blind user, detects objects, recognizes activities, and answers the user's queries. It is implemented using a front-facing camera.
A local dataset was collected that includes different first-person human activities. The results obtained are the identification of the activities that the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, waving, etc.
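The clip-level recognition step can be illustrated schematically; here a nearest-prototype classifier with a per-clip majority vote stands in for the trained VGG-16 head, and the feature vectors, prototypes, and label set are assumptions for the sketch:

```python
import numpy as np

# Activity labels from the abstract; feature vectors and prototypes are stand-ins
# for what a trained VGG-16 backbone would produce per video frame.
ACTIVITIES = ["hugging", "shaking hands", "talking", "walking", "waving"]

def classify_frame(features, prototypes):
    """Assign one frame to the activity whose prototype vector is nearest."""
    d = np.linalg.norm(prototypes - features, axis=1)
    return ACTIVITIES[int(np.argmin(d))]

def classify_clip(frame_features, prototypes):
    """Majority vote over per-frame predictions for a short first-person clip."""
    votes = [classify_frame(f, prototypes) for f in frame_features]
    return max(set(votes), key=votes.count)
```

In the real system the per-frame scores come from the trained network's softmax output; the voting step is a common way to smooth noisy frame-level predictions into one clip-level answer.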

Keywords: dataset, visually impaired person, natural language processing, human activity recognition

Procedia PDF Downloads 58
230 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

In order to understand the interactions at the interface between polyamide fibers and the epoxy matrix in fiber-reinforced composites, the industrial aramid fibers Armos, SVM, and Terlon were investigated using individual epoxy matrix components. The epoxies were diglycidyl ether of bisphenol A (DGEBA) and tri- and diglycidyl derivatives of m-, p-amino-, m-, p-oxy-, and o-, m-, p-carboxybenzoic acids; the models were the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The impregnation of fibers with epoxy matrix components and N-di(oxyethylphenoxy)aniline was carried out by heating at 150˚C (6 h). The optimum fiber loading is 65%. The result of the thermal treatment is the formation of covalent bonds, derived from a combination of homopolymerization and crosslinking mechanisms in the interfacial region between the epoxy resin and the surface of the fibers. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on the processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the conversion of epoxy groups in Terlon impregnated with DGEBA for industrial, dried (in vacuum), and purified samples: 5.20%, 4.65%, and 14.10%, respectively. The same tendency is observed for SVM and Armos fibers. The changes in the surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, functional groups of the fibers act as both a catalyst and a curing agent for the epoxy resin. It is found that the value of the epoxy group conversion for reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order: Armos > SVM > Terlon. This difference is due to the structural characteristics of the fibers.
The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined. It is found that the interfacial interactions in these systems are influenced by the structure and the isomerism of the epoxides. The IR spectrum of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers. This result is also confirmed by the appearance of an exo peak on the Differential Scanning Calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of the non-covalent interactions between individual epoxy matrix components and fibers was performed using benzanilide as a model of Terlon and its derivative containing the benzimidazole moiety as a model of SVM and Armos. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and the epoxy matrix components. All the results indicate that at the interface between polyamide fibers and the epoxy matrix there exist not only covalent but also non-covalent interactions during the preparation of MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 266
229 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed opposed-piston Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output equal to 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and of the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were first calculated as the mean over each element. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
Acknowledgement: This work has been realized in the cooperation with The Construction Office of WSK "PZL-KALISZ" S.A." and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
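The swirl-ratio post-processing step described above can be sketched as follows, assuming per-cell radius, tangential velocity, and mass arrays exported from the CFD solution (the variable names and the mass weighting are assumptions of this sketch):

```python
import numpy as np

def swirl_ratio(r, v_t, mass, rpm):
    """Swirl ratio [-]: mass-weighted mean angular velocity of the charge
    divided by the crankshaft angular velocity.

    r: (K,) cell radius from the cylinder axis [m]; v_t: (K,) tangential
    velocity [m/s]; mass: (K,) cell mass [kg]; rpm: crankshaft speed.
    """
    omega_cells = v_t / r                              # angular velocity per cell [rad/s]
    omega_charge = np.average(omega_cells, weights=mass)
    omega_engine = 2.0 * np.pi * rpm / 60.0            # crankshaft angular velocity [rad/s]
    return omega_charge / omega_engine
```

A charge rotating at roughly the crankshaft speed thus gives a swirl ratio near 1, which is the normalization that makes the coefficient comparable across port and manifold variants.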

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 197
228 In Silico Analysis of Deleterious nsSNPs (Missense) of Dihydrolipoamide Branched-Chain Transacylase E2 Gene Associated with Maple Syrup Urine Disease Type II

Authors: Zainab S. Ahmed, Mohammed S. Ali, Nadia A. Elshiekh, Sami Adam Ibrahim, Ghada M. El-Tayeb, Ahmed H. Elsadig, Rihab A. Omer, Sofia B. Mohamed

Abstract:

Maple syrup urine disease (MSUD) is an autosomal recessive disease that causes a deficiency in the enzyme branched-chain alpha-keto acid (BCKA) dehydrogenase. The development of the disease has been associated with SNPs in the DBT gene. Despite that, the computational analysis of SNPs in coding and noncoding regions and their functional impacts at the protein level still remains unexplored. Hence, in this study, we carried out a comprehensive in silico analysis of missense variants predicted to have a harmful influence on DBT structure and function. In this study, eight different in silico prediction algorithms (SIFT, PROVEAN, MutPred, SNP&GO, PhD-SNP, PANTHER, I-Mutant 2.0, and MUpro) were used for screening nsSNPs in DBT. Additionally, to understand the effect of mutations on the strength of the interactions that bind the protein together, the ELASPIC server was used. Finally, the 3D structure of DBT was modeled using the Mutation3D and Chimera servers. Our results showed that a total of 15 nsSNPs confirmed by at least four tools (R301C, R376H, W84R, S268F, W84C, F276C, H452R, R178H, I355T, V191G, M444T, T174A, I200T, R113H, and R178C) were found to be damaging and can lead to a shift in DBT structure. Moreover, we found 7 nsSNPs located on the 2-oxoacid_dh catalytic domain, 5 nsSNPs on the E_3 binding domain, and 3 nsSNPs on the biotin domain, so these nsSNPs may alter the putative structure of DBT’s domains. Furthermore, we found that all these nsSNPs lie on core residues of the protein and have the ability to change its stability. Additionally, we found that W84R, S268F, and M444T are highly significant; they affect leucine, isoleucine, and valine, which reduces or disrupts the function of the BCKD complex E2 subunit, which the DBT gene encodes. In conclusion, based on our extensive in silico analysis, we report 15 nsSNPs that have a possible association with protein deterioration and disease-causing abilities.
These candidate SNPs can aid future studies of Maple Syrup Urine Disease type II at the genetic level.
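The multi-tool consensus step, retaining only nsSNPs called damaging by at least four predictors, can be illustrated with a simple tally; the prediction values below are made-up examples, not results from the paper:

```python
# Hypothetical per-tool calls for two variants; real pipelines parse these from
# each server's output (tools use different labels for "damaging").
predictions = {
    "R301C": {"SIFT": "damaging", "PROVEAN": "deleterious", "MutPred": "damaging",
              "PhD-SNP": "disease", "PANTHER": "neutral"},
    "A123T": {"SIFT": "tolerated", "PROVEAN": "neutral", "MutPred": "neutral",
              "PhD-SNP": "neutral", "PANTHER": "neutral"},
}
DAMAGING = {"damaging", "deleterious", "disease"}

def consensus(preds, threshold=4):
    """Keep variants called damaging by at least `threshold` tools."""
    return [snp for snp, calls in preds.items()
            if sum(v in DAMAGING for v in calls.values()) >= threshold]

hits = consensus(predictions)
```

Requiring agreement across independent predictors is a standard way to reduce the false-positive rate of any single in silico tool before downstream structural analysis.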

Keywords: DBT gene, ELASPIC, in silico analysis, UCSF Chimera

Procedia PDF Downloads 201
227 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it. Damage to solids results from breaking the bond between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy, equal to the sum of two energies. Due to symmetrical compression, the first term does not change, but the second term is distortion without compression. Both types of energy are represented in the equation as a quadratic function of strain, but Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he notes that the nature of the energy causing the distortion is unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture. The authors of these theories and their associates are still trying to describe the phenomena they observe based on classical mechanics. The study of Faraday's experiments, Maxwell's and Rutherford's ideas, made it possible to discover a previously unknown area of electromagnetic radiation. The properties of photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. The photons released during all processes in the universe, including from plants and organs in natural conditions; their penetrating power in metal is millions of times greater than that of one of the gamma rays. However, they are not non-invasive. 
This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic emission of photons in time and space; such photons are not coherent, and the energy of a solitary photon is insufficient to break the bond between atoms, one stage of which is ionization. Photographic plates registered the deformation of a rail by 113 cars, while a Geiger counter registered nothing. The author's studies show that the cause of damage to a solid is the breaking of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantor of structural reliability is the ratio of the energy dissipation rate to the energy accumulation rate, not strength, which is not a physical parameter since it can be neither measured nor calculated. Continuous monitoring of this ratio is possible thanks to the spontaneous emission of photons by metastable atoms. The article presents example calculations of the destruction energy, together with photographs of damage caused by photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 92
226 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads, and bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, there are instances where transfer schemes have misoperated due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of artificial neural networks and fuzzy systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically select the appropriate bus transfer algorithm based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The motivation for adopting the Neuro-Fuzzy approach in the bus transfer scheme is to exploit the signal validation capability of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems.
This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle, in order to determine whether high-speed bus transfer is possible for a particular bus and which transfer algorithm should be applied. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented and evaluated on the APR1400 nuclear power plant auxiliary system.
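The decision logic described above, fuzzifying first-cycle residual-voltage information and combining the memberships in a trained rule layer, can be sketched in miniature. All membership breakpoints and rule weights below are hypothetical stand-ins (in a real Neuro-Fuzzy scheme the weights would come from back-propagation training on plant data), not the APR1400 logic itself:

```python
# Illustrative neuro-fuzzy transfer-algorithm selector (hypothetical numbers).

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def transfer_decision(residual_v_pu, decay_cycles):
    """Recommend a bus-transfer algorithm from first-cycle voltage data.

    residual_v_pu : residual bus voltage magnitude in per-unit
    decay_cycles  : estimated voltage decay time in cycles
    """
    # Fuzzification layer: membership grades in [0, 1].
    v_high = tri(residual_v_pu, 0.6, 1.0, 1.4)
    v_low  = tri(residual_v_pu, -0.4, 0.0, 0.6)
    t_fast = tri(decay_cycles, 0.0, 2.0, 4.0)
    t_slow = tri(decay_cycles, 2.0, 8.0, 14.0)

    # Rule layer: fixed weights standing in for trained network parameters.
    fast_score     = 0.7 * v_high + 0.3 * t_fast
    residual_score = 0.7 * v_low + 0.3 * t_slow

    return "sequential_fast" if fast_score >= residual_score else "residual_bus"

print(transfer_decision(0.95, 1.5))   # sequential_fast
print(transfer_decision(0.10, 10.0))  # residual_bus
```

In a trained system, the fixed rule weights would be replaced by network outputs fitted by back-propagation, which is the signal-validation role the abstract assigns to the neural component.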

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 171
225 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges

Authors: Mohamad Mahdi Namdari

Abstract:

In today's rapidly evolving competitive business environment, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, has provided extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still far from leveraging this technology effectively, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that a lack of knowledge of this technology among the general public, and even among the managerial and academic communities, has caused management structures to lag behind the progress and development of artificial intelligence. Additionally, high costs, fear of change and employee resistance, a lack of quality data production processes, the need to update structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles.
This research aims to leverage AI capabilities to provide a framework for enhancing market and sales network performance for managers. The results of this research can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.

Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing

Procedia PDF Downloads 42
224 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are used to obtain the digital terrain models (DTM) employed for monitoring geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out: it is the one with the greatest presence, and it changes constantly, both seasonally and daily, as it is affected by factors such as wind. One of the best-known indices for detecting vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. Such cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher, so alternative indices based on RGB alone must be sought. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate is the one that provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning).
In this last case, we have also worked with a Riegl VZ400i sensor that, as in aerial LiDAR, records several returns of the signal, information that can be used to classify the point cloud. After applying all the techniques at different locations, the results show that the color-based filters allow correct filtering in areas where the presence of shadows is not excessive and where there is a contrast between the color of the slope lithology and that of the vegetation. As anticipated, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the multiple returns of the TLS signal allows filtering, with some limitations.
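As a rough per-pixel illustration of the RGB-based indices named above, the sketch below computes GLI, ExG (from normalized chromaticities), and the HSV hue, then combines them into a binary vegetation decision. The threshold values are illustrative assumptions for the sketch, not the values tuned in the Georisk project:

```python
import colorsys

def gli(r, g, b):
    """Green Leaf Index from 8-bit RGB: (2G - R - B) / (2G + R + B)."""
    denom = 2 * g + r + b
    return (2 * g - r - b) / denom if denom else 0.0

def exg(r, g, b):
    """Excess Green from normalized chromaticities: 2g' - r' - b'."""
    s = r + g + b
    if s == 0:
        return 0.0
    rn, gn, bn = r / s, g / s, b / s
    return 2 * gn - rn - bn

def hue_deg(r, g, b):
    """Hue (the H coordinate of HSV) in degrees for an 8-bit RGB pixel."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

def is_vegetation(r, g, b, gli_thr=0.1, hue_range=(60, 180)):
    """Binary mask decision for one pixel (thresholds are assumptions)."""
    return gli(r, g, b) > gli_thr and hue_range[0] <= hue_deg(r, g, b) <= hue_range[1]

# A green leaf pixel versus a grey rock pixel.
print(is_vegetation(60, 140, 50))    # True
print(is_vegetation(120, 118, 115))  # False
```

Applied to every pixel of an image, such a decision yields the binary mask fed to the SfM pipeline; applied to colored points, it filters the photogrammetric point cloud directly.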

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 154
223 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on the satisfaction of citizens' needs (one of the dimensions of QOL studies) by calculating the distance between the place where the citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon delimiting the administrative district, within the city, to which the citizen belongs. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three nesting levels of the Italian administrative units: the (20) regions, the (110) provinces, and the (8,094) municipalities. To carry out this study, two issues had to be settled: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process it. The latter aspect involves: b.1) designing algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) coding them in a programming language; b.3) executing them; and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that mirror the nesting hierarchy of the Italian administrative units:
municipality(id, name, provinceId, istatCode, regionId, geometry)
province(id, name, regionId, geometry)
region(id, name, geometry)
The adoption of the DBMS technology allows us to implement steps "a)" and "b)" easily.
In particular, step "b)" is simplified dramatically by calling spatial operators and built-in spatial User Defined Functions within SQL queries against the Geo DB. The major findings of our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of the citizens to the centroid of the administrative unit of reference is a few kilometers (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
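The error bound at stake, the greatest distance between the centroid and the border of an administrative unit, can be illustrated outside the Geo DB as well. The abstract does not give the actual queries, so the following is a stand-alone plain-Python sketch for a polygon supplied as a closed vertex ring; since the distance from a point to a line segment is maximized at an endpoint, checking only the boundary vertices is exact for any simple polygon:

```python
import math

def centroid(ring):
    """Area-weighted centroid of a simple polygon given as a closed vertex ring
    (first vertex repeated at the end), via the shoelace formula."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(ring, ring[1:]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def max_centroid_distance(ring):
    """Greatest distance from the centroid to the polygon boundary, i.e. the
    maximum error introduced by assimilating a dwelling to the centroid."""
    cx, cy = centroid(ring)
    return max(math.hypot(x - cx, y - cy) for x, y in ring)

# Unit square: centroid (0.5, 0.5); the farthest boundary point is a corner.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(max_centroid_distance(square))  # ≈ 0.7071
```

In the PostGIS setting of the paper, the same bound could be obtained directly with spatial operators inside an SQL query against the three tables, which is what makes step "b)" so compact.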

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 371
222 FloodNet: Classification for Post-Flood Scenes with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenarios is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a new high-resolution aerial imagery dataset created specifically for understanding post-flood scenes. FloodNet comprises a varied collection of high-quality aerial photographs taken during and after flood events, offering detailed representations of flooded landscapes, damaged infrastructure, and altered topographies. Covering a variety of environmental conditions and geographic regions, the dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios. The images in FloodNet are labeled with pixel-level semantic segmentation masks, allowing a detailed examination of flood-related features, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For tasks such as flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baselines and evaluation metrics to promote research and development in post-flood scene comprehension. Integrating FloodNet into machine learning pipelines will make it easier to create reliable algorithms that help policymakers, urban planners, and first responders make decisions both before and after floods. The FloodNet dataset aims to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers, helping to create innovative solutions for boosting communities' resilience in the face of natural catastrophes.
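For the evaluation metrics mentioned above, a standard choice when comparing pixel-level semantic segmentation masks is per-class Intersection-over-Union (IoU). The sketch below is a minimal pure-Python version; the class ids and the tiny masks are toy assumptions, not FloodNet's actual label scheme:

```python
def iou(pred, truth, cls):
    """Intersection-over-Union of one class between two label grids
    (lists of lists of integer class ids with the same shape)."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            hit_p, hit_t = p == cls, t == cls
            inter += hit_p and hit_t   # bool counts as 0/1
            union += hit_p or hit_t
    return inter / union if union else 1.0

def mean_iou(pred, truth, classes):
    """Mean IoU over a list of class ids."""
    return sum(iou(pred, truth, c) for c in classes) / len(classes)

# Toy 2x3 masks: 0 = background, 1 = water, 2 = damaged structure.
truth = [[1, 1, 0],
         [2, 2, 0]]
pred  = [[1, 0, 0],
         [2, 2, 2]]
print(iou(pred, truth, 1))             # water IoU = 0.5
print(mean_iou(pred, truth, [0, 1, 2]))
```

Running the same computation over every annotated image in a dataset like FloodNet gives the per-class and mean scores typically reported as segmentation baselines.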

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 78