Search results for: backtracking algorithm (BT)
550 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position to a one-dimensional along-track distance, because the train’s position can only be constrained to the track. A model trained by a BP neural network is used to estimate the trend positioning error, which is related to the specific location and proximate processing of the digital track map. Considering that in some conditions satellite signal failure will increase the GNSS positioning error, a detection step for the GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) is used to fuse the projected 1-D GNSS positioning data and the 1-D train speed data to obtain the estimated position. Experimental results suggest that the proposed method performs well and reduces positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
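To make the final fusion stage concrete, the sketch below runs a plain linear Kalman filter over a one-dimensional, constant-velocity along-track model fed with synthetic GNSS position and speed measurements. The sampling interval, noise covariances, and measurement values are assumed for illustration only; the paper's actual pipeline uses an EKF together with the BP-network error correction and adaptive weighted speed fusion, which are not reproduced here.

```python
import numpy as np

# State: [along-track position (m), speed (m/s)]; constant-velocity model.
dt = 0.1                     # assumed sampling interval
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition
H = np.eye(2)                # measurements: projected 1-D GNSS position and fused speed
Q = np.diag([0.05, 0.01])    # assumed process noise
R = np.diag([4.0, 0.25])     # assumed measurement noise (position var, speed var)

def kf_step(x, P, z):
    """One predict/update cycle fusing a [position, speed] measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Synthetic measurements: noisy 1-D GNSS position and noisy speed.
rng = np.random.default_rng(0)
x, P = np.array([0.0, 10.0]), np.eye(2)
for k in range(1, 101):
    true_pos, true_speed = 10.0 * k * dt, 10.0
    z = np.array([true_pos + rng.normal(0, 2.0),
                  true_speed + rng.normal(0, 0.5)])
    x, P = kf_step(x, P, z)

print("estimated position %.2f m, speed %.2f m/s" % (x[0], x[1]))
```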
Procedia PDF Downloads 254
549 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in the data-driven analysis in major areas of medicine, such as clinical decision support system, survival analysis, patient similarity analysis, image analytics etc. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature which can be found in the form of discharge summaries, clinical notes, procedural notes which are in human written narrative format and neither have any relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, which is a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique which is based on a string matching algorithm that is indexed on curated knowledge sources, that is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concepts retrieval and present the advantages the former displays over the latter.Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
Procedia PDF Downloads 135
548 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. Linear regression analysis was done between the scattering coefficients and the soil moisture content to select the suitable incidence angle for retrieval of soil moisture content. The 25° incidence angle was found to be the most suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of the soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and the estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively, at HH-polarization. At VV-polarization, the values of %Bias, RMSE and NSE were found to be 3.6186, 0.9373, and 0.9428, respectively.
Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE
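As a rough sketch of the evaluation step, the snippet below fits an off-the-shelf support vector regression to synthetic scattering-coefficient/soil-moisture pairs and computes the three reported indices. The data and hyperparameters are placeholders, not the measured X-band values.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-ins for measured scattering coefficients (dB) and soil moisture (%).
rng = np.random.default_rng(1)
sigma0 = np.sort(rng.uniform(-25, -5, 60)).reshape(-1, 1)              # scattering coefficient
theta = 5.0 + 1.2 * (sigma0.ravel() + 25) + rng.normal(0, 1.0, 60)     # soil moisture content

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # assumed hyperparameters
model.fit(sigma0, theta)
theta_hat = model.predict(sigma0)

def pbias(obs, sim):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rmse(obs, sim):
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(f"%Bias={pbias(theta, theta_hat):.4f}  RMSE={rmse(theta, theta_hat):.4f}  "
      f"NSE={nse(theta, theta_hat):.4f}")
```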
Procedia PDF Downloads 429
547 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method
Authors: Arwa Alzughaibi
Abstract:
Human motion detection is a challenging task due to a number of factors, including variable appearance, posture, and a wide range of illumination conditions and backgrounds. So, the first need of such a model is a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. By having richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that are able to detect human motions accurately under varying illumination levels and backgrounds. Different sets of features are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Local Decorrelated Channel Feature (LDCF) and Aggregate Channel Feature (ACF). However, we propose an efficient and reliable human motion detection approach by combining Histogram of Oriented Gradients (HOG) and Local Phase Quantization (LPQ) as the feature set, and implementing a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show the effectiveness of combining the local phase quantization descriptor and the histogram of oriented gradients, which perform well over a larger range of illumination conditions and backgrounds than state-of-the-art human detectors. The Area under the ROC Curve (AUC) of the proposed method achieved 0.781 for the UCF dataset and 0.826 for the CDW dataset, which indicates that it performs comparably better than the HOG, DPM, LDCF and ACF methods.
Keywords: human motion detection, histograms of oriented gradient, local phase quantization
Procedia PDF Downloads 258
546 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is identified to be dependent on the process parameters. Moreover, a certain variation of friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This can be attained by the reliability-based robust optimization (RRO) technique to obtain the optimal process settings of the controllable parameters. Initially, two MDF Finite Element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses are consequently carried out in the ABAQUS commercial software to obtain the correlation between the control process settings and the noise variation with regard to the product defects. The best prediction models are chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal process settings of the control parameters. Monte Carlo Analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental result show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product.
Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
Procedia PDF Downloads 255
545 Sustainable Practices through Organizational Internal Factors among South African Construction Firms
Authors: Oluremi I. Bamgbade, Oluwayomi Babatunde
Abstract:
Governments and nonprofits have been in the support of sustainability as the goal of businesses especially in the construction industry because of its considerable impacts on the environment, economy, and society. However, to measure the degree to which an organisation is being sustainable or pursuing sustainable growth can be difficult as a result of the clear sustainability strategy required to assume their commitment to the goal and competitive advantage. This research investigated the influence of organisational culture and organisational structure in achieving sustainable construction among South African construction firms. A total of 132 consultants from the nine provinces in South Africa participated in the survey. The data collected were initially screened using SPSS (version 21) while Partial Least Squares Structural Equation Modeling (PLS-SEM) algorithm and bootstrap techniques were employed to test the hypothesised paths. The empirical evidence also supported the hypothesised direct effects of organisational culture and organisational structure on sustainable construction. Similarly, the result regarding the relationship between organisational culture and organisational structure was supported. Therefore, construction industry can record a considerable level of construction sustainability and establish suitable cultures and structures within the construction organisations. Drawing upon organisational control theory, these findings supported the view that these organisational internal factors have a strong contingent effect on sustainability adoption in construction project execution. The paper makes theoretical, practical and methodological contributions within the domain of sustainable construction especially in the context of South Africa. Some limitations of the study are indicated, suggesting opportunities for future research.Keywords: organisational culture, organisational structure, South African construction firms, sustainable construction
Procedia PDF Downloads 289
544 The Moment of the Optimal Average Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables
Authors: Edokpa Idemudia Waziri, Salisu S. Umar
Abstract:
The Hotelling’s T² is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution. Control charts based on T² have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T² statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is one of the control statistics used to overcome the drawback of the Hotelling’s T² statistic. In this paper, the probability distribution of the Average Run Length (ARL) of the MEWMA control chart, when the quality characteristics exhibit substantial cross correlation and when the process is in-control and out-of-control, was derived using the Markov chain algorithm. The derivations of the probability functions and the moments of the run length distribution were also obtained, and they were consistent with some existing results for the in-control and out-of-control situations. By a simulation process, the procedure identified a class of ARLs for the MEWMA control chart when the process is in-control and out-of-control. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL₀ or the number of variables (p) increases, the optimum value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
Keywords: average run length, markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter
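A minimal sketch of the MEWMA statistic for equally correlated variables is shown below; the smoothing parameter, covariance, and shift size are illustrative choices, and the Markov-chain ARL derivation itself is not reproduced.

```python
import numpy as np

def mewma_statistics(X, Sigma, lam=0.1):
    """Return the MEWMA T^2 statistic for each observation vector in X.

    X     : (n, p) array of observations (assumed mean 0 when in control)
    Sigma : (p, p) in-control covariance matrix
    lam   : smoothing parameter (0 < lam <= 1)
    """
    n, p = X.shape
    z = np.zeros(p)
    stats = np.empty(n)
    for i in range(n):
        z = lam * X[i] + (1.0 - lam) * z
        # Exact covariance of z_i at observation i+1
        factor = lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1)))
        stats[i] = z @ np.linalg.inv(factor * Sigma) @ z
    return stats

# Equally correlated variables: unit variances with a common correlation rho.
p, rho = 3, 0.5
Sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)
rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=200)
X[100:] += 0.5                       # small sustained shift after observation 100
print(mewma_statistics(X, Sigma)[95:105].round(2))
```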
Procedia PDF Downloads 422
543 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model, are those that minimize the objective function, defined as the difference between the measured and numerical results. The Finite Element code (Plaxis) has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without uncertainty in instruments, defined by the least square method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided data sets for the measured values of the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima, and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem, although all these techniques take time to converge to an optimum value; however, PSO provided the better convergence and best soil parameters. Overall, parameter identification analysis could be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.Keywords: Rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS
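The snippet below sketches the particle swarm optimization loop used for this kind of least-squares parameter identification; the two-parameter surrogate response and the "measured" displacements are stand-ins for the Plaxis/metamodel outputs and the Romaine-2 instrument data, so the bounds and constants are assumptions.

```python
import numpy as np

def objective(params, measured):
    """Least-squares misfit between surrogate-predicted and measured displacements.
    The quadratic surrogate below is only a stand-in for the FE/metamodel response."""
    predicted = params[0] * np.array([1.0, 2.0, 3.0]) + params[1] ** 2
    return np.sum((predicted - measured) ** 2)

def pso(fun, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fun(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fun(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

measured = np.array([2.1, 4.0, 6.2])           # placeholder displacement readings
bounds = np.array([[0.0, 5.0], [0.0, 3.0]])    # assumed parameter bounds
best, best_val = pso(lambda p: objective(p, measured), bounds)
print("best parameters:", best.round(3), "misfit:", round(best_val, 4))
```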
Procedia PDF Downloads 146
542 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning
Authors: Goudjil Kamel, Boukhatem Ghania, Jlailia Djihene
Abstract:
This paper delves into the development of a sophisticated desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study meticulously examines various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. Furthermore, the study explores the burgeoning intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the transformative potential of AI-driven solutions in enhancing predictive accuracy and efficiency. Central to the research is the utilization of cutting-edge machine learning techniques, including Artificial Neural Networks (ANN), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and rigorous analysis, the efficacy and performance of each method are rigorously evaluated, with XGBoost emerging as the preeminent algorithm, showcasing superior predictive capabilities compared to its counterparts. The study culminates in a nuanced understanding of the intricate dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions. By harnessing the power of advanced computational techniques and AI-driven algorithms, the paper presents a paradigm shift in the realm of geotechnical engineering, promising enhanced precision and reliability in civil engineering projects.
Keywords: limit pressure of soil, xgboost, random forest, bearing capacity
Procedia PDF Downloads 25
541 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System
Authors: Iwan Cony Setiadi, Aulia M. T. Nasution
Abstract:
The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is decomposed into two parts: a light source compartment based on LEDs with 11 different wavelengths and a monochromatic 8-bit CCD camera with a C-mount objective lens. Software based on a MATLAB GUI was also developed to control the system. Our system provides 11 monoband images and is coupled with software reconstructing hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural colors from the X-Rite Color Checker Passport. The learning procedure involves the acquisition of the corresponding reference spectra by a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain spectra for each pixel, a comparison should be done between the theoretical values from the spectrophotometer and the reconstructed spectrum. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, the set of 8 colour patches reconstructed by our MSI system and the one recorded by the spectrophotometer were compared. The average GFC was 0.9990 (standard deviation = 0.0010) and the average RMSE was 0.2167 (standard deviation = 0.064).
Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network
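A small sketch of the two evaluation metrics is given below; the "measured" and "reconstructed" spectra are synthetic placeholders on the 380-880 nm, 5 nm grid described in the abstract.

```python
import numpy as np

def gfc(measured, reconstructed):
    """Goodness of Fit Coefficient between two reflectance spectra."""
    num = np.abs(np.sum(measured * reconstructed))
    den = np.sqrt(np.sum(measured ** 2)) * np.sqrt(np.sum(reconstructed ** 2))
    return num / den

def rmse(measured, reconstructed):
    return np.sqrt(np.mean((measured - reconstructed) ** 2))

# Placeholder spectra on a 380-880 nm grid with 5 nm resolution (101 samples).
wavelengths = np.arange(380, 881, 5)
measured = 0.4 + 0.3 * np.sin(wavelengths / 120.0)       # spectrophotometer reference
reconstructed = measured + np.random.default_rng(3).normal(0, 0.01, measured.size)

print(f"GFC = {gfc(measured, reconstructed):.4f}, RMSE = {rmse(measured, reconstructed):.4f}")
```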
Procedia PDF Downloads 323
540 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments
Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles
Abstract:
This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal
Procedia PDF Downloads 42
539 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm which separates temperature from emissivity to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute temperature due to solar radiation. We then identified the statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas that had temperatures or temperature residuals greater than 2σ and between 1σ and 2σ were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicate that thermal remote sensing data, integrated with a spatial-based insolation model, provides an effective means for identifying and locating areas of geothermal activities over large areas and rough terrain.Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies
Procedia PDF Downloads 371
538 Identification of Watershed Landscape Character Types in Middle Yangtze River within Wuhan Metropolitan Area
Authors: Huijie Wang, Bin Zhang
Abstract:
In China, the middle reaches of the Yangtze River are well-developed, boasting a wealth of different types of watershed landscape. In this regard, landscape character assessment (LCA) can serve as a basis for protection, management and planning of trans-regional watershed landscape types. For this study, we chose the middle reaches of the Yangtze River in Wuhan metropolitan area as our study site, wherein the water system consists of rich variety in landscape types. We analyzed trans-regional data to cluster and identify types of landscape characteristics at two levels. 55 basins were analyzed as variables with topography, land cover and river system features in order to identify the watershed landscape character types. For watershed landscape, drainage density and degree of curvature were specified as special variables to directly reflect the regional differences of river system features. Then, we used the principal component analysis (PCA) method and hierarchical clustering algorithm based on the geographic information system (GIS) and statistical products and services solution (SPSS) to obtain results for clusters of watershed landscape which were divided into 8 characteristic groups. These groups highlighted watershed landscape characteristics of different river systems as well as key landscape characteristics that can serve as a basis for targeted protection of watershed landscape characteristics, thus helping to rationally develop multi-value landscape resources and promote coordinated development of trans-regions.Keywords: GIS, hierarchical clustering, landscape character, landscape typology, principal component analysis, watershed
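The sketch below illustrates the clustering pipeline (standardization, PCA, then Ward hierarchical clustering into 8 groups) on a placeholder basin-by-variable matrix; the real inputs would be the 55 basins' topography, land cover, and river-system variables extracted in GIS, and the component count is an assumption.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Placeholder matrix: 55 basins x 6 variables (topography, land cover,
# drainage density, degree of curvature, ...). Real values come from GIS layers.
rng = np.random.default_rng(4)
basins = rng.normal(size=(55, 6))

# Standardize, reduce with PCA, then cluster hierarchically into 8 groups.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(basins))
labels = AgglomerativeClustering(n_clusters=8, linkage="ward").fit_predict(scores)

print(np.bincount(labels))   # number of basins in each landscape character group
```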
Procedia PDF Downloads 233
537 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of pillar tips in flow direction can be used to analyze viscosity of liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze pillar displacement of various micropillar array configurations in flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for microfluidic chips. Microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of micropillar array (300 µm in diameter and 1400 µm in height) in flow direction are recorded using a microscope mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain 1 cP, 5 cP, 10 cP and 15 cP viscosities at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL / hr and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for 15 cP solution at 60 mL / hr while only a 1 µm displacement was observed for 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor made microfluidic chips to enable real time observation and control of viscosity changes in biological or chemical reactions.Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer
Procedia PDF Downloads 248
536 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard the satellites. The paper is aimed towards detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error corrections as well as double-bit error detections. Furthermore, this version of Hamming Code has proved to be fast with a checking time of 5.669 nanoseconds, that has a relatively higher code rate and lower bit overhead compared to the other versions and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset
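For illustration, the snippet below encodes and corrects a single bit flip with the classic Hamming(7,4) code; the (16, 11, 4) scheme evaluated in the paper follows the same generator/parity-check construction with an additional overall parity bit.

```python
import numpy as np

# Generator and parity-check matrices for the Hamming(7,4) code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Append three parity bits to a 4-bit data word."""
    return (data4 @ G) % 2

def correct(codeword7):
    """Detect and correct a single-bit error using the syndrome."""
    syndrome = (H @ codeword7) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the flipped position.
        err_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        codeword7 = codeword7.copy()
        codeword7[err_pos] ^= 1
    return codeword7

data = np.array([1, 0, 1, 1])
sent = encode(data)
received = sent.copy()
received[2] ^= 1                  # simulate a single-event bit flip onboard
print("corrected:", correct(received), "original:", sent)
```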
Procedia PDF Downloads 130
535 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, the existing joint Source Channel coding (JSCC) wireless communication system without pilot has unstable transmission performance and can not effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source channel based on multi-level semantic information (Multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network is used to extract the high-level semantic features of the image, compress the information transmitted by the image, and improve the bandwidth utilization. Feature retention network is used to preserve low-level semantic features and image details to improve communication quality. The receiver also is composed of two networks. The received high-level semantic features are fused with the low-level semantic features after feature enhancement network in the same dimension, and then the image dimension is restored through feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both AWGN channel and Rayleigh fading channel, and the peak signal-to-noise ratio (PSNR) is improved by 1~2dB compared with other algorithms under the same simulation conditions.Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
Procedia PDF Downloads 121
534 Using Geo-Statistical Techniques and Machine Learning Algorithms to Model the Spatiotemporal Heterogeneity of Land Surface Temperature and its Relationship with Land Use Land Cover
Authors: Javed Mallick
Abstract:
In metropolitan areas, rapid changes in land use and land cover (LULC) have ecological and environmental consequences. Saudi Arabia's cities have experienced tremendous urban growth since the 1990s, resulting in urban heat islands, groundwater depletion, air pollution, loss of ecosystem services, and so on. From 1990 to 2020, this study examines the variance and heterogeneity in land surface temperature (LST) caused by LULC changes in Abha-Khamis Mushyet, Saudi Arabia. LULC was mapped using the support vector machine (SVM). The mono-window algorithm was used to calculate the land surface temperature (LST). To identify LST clusters, the local indicator of spatial associations (LISA) model was applied to spatiotemporal LST maps. In addition, the parallel coordinate (PCP) method was used to investigate the relationship between LST clusters and urban biophysical variables as a proxy for LULC. According to LULC maps, urban areas increased by more than 330% between 1990 and 2018. Between 1990 and 2018, built-up areas had an 83.6% transitional probability. Furthermore, between 1990 and 2020, vegetation and agricultural land were converted into built-up areas at a rate of 17.9% and 21.8%, respectively. Uneven LULC changes in built-up areas result in more LST hotspots. LST hotspots were associated with high NDBI but not NDWI or NDVI. This study could assist policymakers in developing mitigation strategies for urban heat islands.
Keywords: land use land cover mapping, land surface temperature, support vector machine, LISA model, parallel coordinate plot
Procedia PDF Downloads 78
533 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering
Abstract:
Carbon fiber reinforced composites (CFRP) used as aircraft structure are subject to lightning strike, putting structural integrity under risk. Indirect damage may occur after a lightning strike where the internal structure can be damaged due to excessive heat induced by lightning current, while the surface of the structures remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination and intra-ply cracks. The assessment of internal damage states in composite is challenging due to complicated microstructure, inherent uncertainties, and existence of multiple damage modes. In this work, a model based approach is adopted to diagnose faults in carbon composites after lighting strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under simulated lightning current waveform to the intrinsic temperature dependent material properties, microstructure and degradation of materials. A fault detection and identification (FDI) module utilizes the physics based model and a particle filtering algorithm to identify damage mode as well as calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology with both single fault and multiple faults cases. The approach is also demonstrated on transient resistance data collected from a IM7/Epoxy laminate under simulated lightning strike.Keywords: carbon composite, fault detection, fault identification, particle filter
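A minimal bootstrap particle filter in the spirit of the FDI module is sketched below; the resistor-network physics is replaced by a toy damage-to-resistance mapping, and the noise levels and random-walk step are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

def observe(damage):
    """Toy stand-in for the resistor-network model: resistance grows with damage."""
    return 1.0 + 4.0 * damage

# Synthetic "measured" transient resistance for a true damage level of 0.3.
true_damage = 0.3
measurements = observe(true_damage) + rng.normal(0, 0.05, size=50)

n_particles = 2000
particles = rng.uniform(0.0, 1.0, n_particles)    # prior over damage extent
weights = np.full(n_particles, 1.0 / n_particles)

for z in measurements:
    # Small random walk models uncertainty in the damage state evolution.
    particles = np.clip(particles + rng.normal(0, 0.005, n_particles), 0.0, 1.0)
    # Weight by measurement likelihood (Gaussian noise assumption).
    weights *= np.exp(-0.5 * ((z - observe(particles)) / 0.05) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print("posterior mean damage: %.3f (true %.3f)" % (np.sum(weights * particles), true_damage))
```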
Procedia PDF Downloads 196
532 Pattern of Adverse Drug Reactions with Platinum Compounds in Cancer Chemotherapy at a Tertiary Care Hospital in South India
Authors: Meena Kumari, Ajitha Sharma, Mohan Babu Amberkar, Hasitha Manohar, Joseph Thomas, K. L. Bairy
Abstract:
Aim: To evaluate the pattern of occurrence of adverse drug reactions (ADRs) with platinum compounds in cancer chemotherapy at a tertiary care hospital. Methods: It was a retrospective, descriptive case record study done on patients admitted to the medical oncology ward of Kasturba Hospital, Manipal from July to November 2012. Inclusion criteria comprised of patients of both sexes and all ages diagnosed with cancer and were on platinum compounds, who developed at least one adverse drug reaction during or after the treatment period. CDSCO proforma was used for reporting ADRs. Causality was assessed using Naranjo Algorithm. Results: A total of 65 patients was included in the study. Females comprised of 67.69% and rest males. Around 49.23% of the ADRs were seen in the age group of 41-60 years, followed by 20 % in 21-40 years, 18.46% in patients over 60 years and 12.31% in 1-20 years age group. The anticancer agents which caused adverse drug reactions in our study were carboplatin (41.54%), cisplatin (36.92%) and oxaliplatin (21.54%). Most common adverse drug reactions observed were oral candidiasis (21.53%), vomiting (16.92%), anaemia (12.3%), diarrhoea (12.3%) and febrile neutropenia (0.08%). The results of the causality assessment of most of the cases were probable. Conclusion: The adverse effect of chemotherapeutic agents is a matter of concern in the pharmacological management of cancer as it affects the quality of life of patients. This information would be useful in identifying and minimizing preventable adverse drug reactions while generally enhancing the knowledge of the prescribers to deal with these adverse drug reactions more efficiently.Keywords: adverse drug reactions, platinum compounds, cancer, chemotherapy
Procedia PDF Downloads 433
531 Heuristics for Optimizing Power Consumption in the Smart Grid
Authors: Zaid Jamal Saeed Almahmoud
Abstract:
Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
Keywords: heuristics, optimization, smart grid, peak demand, power supply
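A simple greedy variant of such a peak-minimizing heuristic is sketched below (not the authors' algorithm): jobs with a common release date and deadline are placed, largest energy first, at the start slot that keeps the running peak lowest. The appliance list and horizon are hypothetical.

```python
def schedule_jobs(jobs, horizon):
    """Greedy heuristic: place each power job (name, power, duration), all released
    at slot 0 with deadline `horizon`, at the start slot that keeps the running
    peak load as low as possible. Jobs with the largest energy go first."""
    load = [0.0] * horizon
    plan = {}
    for name, power, duration in sorted(jobs, key=lambda j: -j[1] * j[2]):
        best_start, best_peak = None, float("inf")
        for start in range(horizon - duration + 1):
            peak = max(load[t] + power for t in range(start, start + duration))
            if peak < best_peak:
                best_start, best_peak = start, peak
        for t in range(best_start, best_start + duration):
            load[t] += power
        plan[name] = best_start
    return plan, max(load)

# Hypothetical appliances: (name, power in kW, duration in slots)
jobs = [("dryer", 3.0, 2), ("ev_charger", 7.0, 4), ("dishwasher", 1.5, 2), ("oven", 2.5, 1)]
plan, peak = schedule_jobs(jobs, horizon=8)
print(plan, "peak =", peak, "kW")
```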
Procedia PDF Downloads 89
530 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is a kind of assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. Thus, it is critical to develop a 3D mannequin module that corresponds with the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. In this approach, ergonomic measurements of objective human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data to obtain an exclusive 3D mannequin model. In the proposed methodology, after the points scanned from the Kinect are revised for accuracy and smoothed, a complete human feature is reconstructed by the ICP algorithm with image processing methods. Also, the objective human feature can be recognized, analyzed, and measured. Furthermore, the ergonomic measurement data can be applied to shape morphing for the division of the 3D mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D mannequin is generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system was constructed with a JAVA program in this study. Through iterative experiments, the practicability of the research result was verified.
Keywords: 3D mannequin, kinect scanner, iterative closest point, shape morphing, subdivision
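A bare-bones version of the ICP alignment step is sketched below, using nearest-neighbour matching and an SVD-based rigid transform on a synthetic point cloud; the Kinect acquisition, smoothing, and feature-curve subdivision stages are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping point set A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iterations=30):
    """Minimal iterative-closest-point loop aligning a scanned cloud to a model."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)        # closest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Toy example: a reference cloud and a rotated/translated copy as the "scan".
rng = np.random.default_rng(6)
target = rng.uniform(-1, 1, size=(500, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.10])
aligned = icp(source, target)
print("mean alignment error:", np.linalg.norm(aligned - target, axis=1).mean())
```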
Procedia PDF Downloads 309
529 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, CN, EMCI, and LMCI. Using the 10-fold cross validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest
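The sketch below shows the core of such an experiment: an XGBoost multiclass classifier evaluated with 10-fold cross validation on a matrix with roughly 28% missing entries, which XGBoost routes natively without imputation. The features, labels, and hyperparameters are placeholders, so the printed accuracy is not the paper's result.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Placeholder feature matrix for 1631 subjects with ~28% entries missing;
# labels encode the four classes CN / EMCI / LMCI / AD.
rng = np.random.default_rng(7)
X = rng.normal(size=(1631, 40))
y = rng.integers(0, 4, size=1631)
mask = rng.random(X.shape) < 0.28
X[mask] = np.nan                      # XGBoost handles NaN splits natively

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      objective="multi:softprob", eval_metric="mlogloss")
scores = cross_val_score(model, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                         scoring="accuracy")
print("10-fold accuracy: %.2f%% +/- %.2f%%" % (100 * scores.mean(), 100 * scores.std()))
```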
Procedia PDF Downloads 189
528 Fragment Domination for Many-Objective Decision-Making Problems
Authors: Boris Djartov, Sanaz Mostaghim
Abstract:
This paper presents a number-based dominance method. The main idea is how to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although other similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and thus, it is geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach it, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is so large. The proposed method will be compared to Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization
Procedia PDF Downloads 91
527 Artificial Intelligence-Generated Previews of Hyaluronic Acid-Based Treatments
Authors: Ciro Cursio, Giulia Cursio, Pio Luigi Cursio, Luigi Cursio
Abstract:
Communication between practitioner and patient is of the utmost importance in aesthetic medicine: as of today, images of previous treatments are the most common tool used by doctors to describe and anticipate future results for their patients. However, using photos of other people often reduces the engagement of the prospective patient and is further limited by the number and quality of pictures available to the practitioner. Pre-existing work solves this issue in two ways: 3D scanning of the area with manual editing of the 3D model by the doctor or automatic prediction of the treatment by warping the image with hand-written parameters. The first approach requires the manual intervention of the doctor, while the second approach always generates results that aren’t always realistic. Thus, in one case, there is significant manual work required by the doctor, and in the other case, the prediction looks artificial. We propose an AI-based algorithm that autonomously generates a realistic prediction of treatment results. For the purpose of this study, we focus on hyaluronic acid treatments in the facial area. Our approach takes into account the individual characteristics of each face, and furthermore, the prediction system allows the patient to decide which area of the face she wants to modify. We show that the predictions generated by our system are realistic: first, the quality of the generated images is on par with real images; second, the prediction matches the actual results obtained after the treatment is completed. In conclusion, the proposed approach provides a valid tool for doctors to show patients what they will look like before deciding on the treatment.Keywords: prediction, hyaluronic acid, treatment, artificial intelligence
Procedia PDF Downloads 116
526 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization. This optimization is central to learning theory. One approach to complex systems, where the dynamics of the system are inferred by a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we have proposed a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches emerge. The quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model which was applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter algorithm, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined by using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. Maybe this model will reveal another insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
Procedia PDF Downloads 469
525 Implementation of a Monostatic Microwave Imaging System using a UWB Vivaldi Antenna
Authors: Babatunde Olatujoye, Binbin Yang
Abstract:
Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. This technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an Ultra-Wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1–6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used for scanning targets were measured in the lab using a VNA. An automated two-dimensional (2-D) scanner was employed for the 2-D movement of the transceiver to collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The adoption of the BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulated and experimental data, the BPA accurately reconstructed a high-resolution 2D image of the targets in terms of shape and location. An improvement of the BPA, in terms of target resolution, was achieved by applying a filtering method in the frequency domain.
Keywords: back propagation, microwave imaging, monostatic, Vivaldi antenna, ultra wideband
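A compact sketch of the back-propagation (delay-and-sum) focusing idea is given below for a single point target on a 1-D cut of the aperture; the frequency sweep, aperture size, and step follow the abstract, while the target position and simulated S-parameters are assumed.

```python
import numpy as np

c = 3e8
freqs = np.linspace(1e9, 6e9, 64)          # 1-6 GHz sweep
xs = np.linspace(-0.25, 0.25, 26)          # 50 cm aperture, 2 cm step (1-D cut)
target = np.array([0.05, 0.40])            # hypothetical point target (x, z)

# Simulated monostatic backscatter S(frequency, antenna position) for the point target.
S = np.zeros((freqs.size, xs.size), dtype=complex)
for j, xa in enumerate(xs):
    R = np.hypot(target[0] - xa, target[1])          # one-way range
    S[:, j] = np.exp(-1j * 4 * np.pi * freqs * R / c)

# Back-propagation: coherently re-focus every pixel of the image grid.
gx = np.linspace(-0.25, 0.25, 101)
gz = np.linspace(0.1, 0.6, 101)
image = np.zeros((gz.size, gx.size))
for iz, z in enumerate(gz):
    for ix, x in enumerate(gx):
        R = np.hypot(x - xs, z)                       # range to every antenna position
        phase = np.exp(1j * 4 * np.pi * np.outer(freqs, R) / c)
        image[iz, ix] = np.abs(np.sum(S * phase))

iz, ix = np.unravel_index(image.argmax(), image.shape)
print("focused peak at x=%.2f m, z=%.2f m" % (gx[ix], gz[iz]))
```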
Procedia PDF Downloads 23
524 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa
Authors: Bernard Moeketsi Hlalele
Abstract:
Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. An analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify it and neural networks (NN) to predict it. Monthly, 37-year-long time series precipitation data were obtained from the online NASA database. Prior to the final analysis, this dataset was checked for outliers using SPSS. Outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends present in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events, showed statistically decreasing trends with p-values of 0.0006 and 4.9 x 10⁻⁷, respectively. The study area has been plagued with severe drought events on SPI-3, while on SPI-12 it showed approximately a 20-year cycle. The study concluded the analyses with a seasonal analysis that showed no significant trend patterns, and as such, NN was used to predict possible SPI-3 values for the last season of 2018/2019 and four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa
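A minimal SPI computation is sketched below: rolling 3-month sums are fitted with a gamma distribution and mapped to standard normal quantiles. The rainfall series is synthetic, and month-wise fitting and zero-rainfall handling are omitted for brevity.

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, scale=3):
    """Standardised Precipitation Index at the given time scale (e.g. SPI-3).
    Fits a gamma distribution to the rolling sums and maps their cumulative
    probabilities onto standard normal quantiles."""
    series = np.convolve(precip_monthly, np.ones(scale), mode="valid")
    shape, loc, scl = stats.gamma.fit(series[series > 0], floc=0)
    cdf = stats.gamma.cdf(series, shape, loc=loc, scale=scl)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

# 37 years of synthetic monthly rainfall (mm) as a stand-in for the NASA data.
rng = np.random.default_rng(8)
rainfall = rng.gamma(shape=2.0, scale=30.0, size=37 * 12)
spi3 = spi(rainfall, scale=3)
print("months in severe-to-extreme drought (SPI-3 <= -1.5):", int(np.sum(spi3 <= -1.5)))
```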
Procedia PDF Downloads 128
523 Secure Automatic Key SMS Encryption Scheme Using Hybrid Cryptosystem: An Approach for One Time Password Security Enhancement
Authors: Pratama R. Yunia, Firmansyah, I., Ariani, Ulfa R. Maharani, Fikri M. Al
Abstract:
Nowadays, notwithstanding that the role of SMS as a means of communication has been largely replaced by online applications such as WhatsApp, Telegram, and others, the fact that SMS is still used for certain important communication needs is indisputable. Among these is sending a one-time password (OTP) as an authentication medium for various online applications, ranging from chatting and shopping to online banking. However, the usage of SMS does not fully guarantee the security of transmitted messages. As a matter of fact, the messages transmitted between BTSs are still in the form of plaintext, making them extremely vulnerable to eavesdropping, especially if the message is confidential, for instance, the OTP. One solution to overcome this problem is to use an SMS application which provides security services for each transmitted message. Responding to this problem, in this study, an automatic key SMS encryption scheme was designed as a means to secure SMS communication. The proposed scheme allows SMS sending which is automatically encrypted with keys that are constantly changing (automatic key update), automatic key exchange, and automatic key generation. In terms of the security method, the proposed scheme applies cryptographic techniques with a hybrid cryptosystem mechanism. As a proof of the proposed scheme, a client-to-client SMS encryption application was developed on the Java platform with AES-256 as the encryption algorithm, RSA-768 as the public and private key generator, and SHA-256 as the message hashing function. The result of this study is a secure automatic key SMS encryption scheme using a hybrid cryptosystem which can guarantee the security of every transmitted message and thus become a reliable solution for sending confidential messages through SMS, although it still has weaknesses in terms of processing time.
Keywords: encryption scheme, hybrid cryptosystem, one time password, SMS security
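The snippet below sketches the generic hybrid-cryptosystem idea (a fresh symmetric session key per message, wrapped with the receiver's public key) using the Python `cryptography` package; AES-GCM and RSA-2048 are illustrative stand-ins for the AES-256/RSA-768/SHA-256 combination and the Java implementation described in the abstract.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's RSA key pair (2048 bits here; the abstract reports RSA-768).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- Sender: fresh AES-256 session key per SMS (automatic key generation) ---
otp_sms = b"Your OTP is 483921"
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, otp_sms, None)   # AES-GCM also authenticates
wrapped_key = public_key.encrypt(session_key, oaep)              # RSA-wrapped session key

# --- Receiver: unwrap the session key, then decrypt the SMS ---
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
print(plaintext.decode())
```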
Procedia PDF Downloads 130
522 Real-Time Generative Architecture for Mesh and Texture
Abstract:
In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic medium demand accurate neurally-transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advancements in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer. This capability is valuable in various fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally-transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models further adds a layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics
Procedia PDF Downloads 66
521 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil Water Assessment Tool Model for Watershed Sustainability
Authors: Warda L. Panondi, Norihiro Izumi
Abstract:
Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversions and plantations, showing critical erosion rates beyond the tolerable limit of -10 ton/ha/yr in all of its sub-basins. Given this, predicting runoff volume and sediment yield is essential to examine the country's soil conservation techniques realistically. In this research, the Pulangi watershed was modeled using the soil water assessment tool (SWAT) to predict the annual runoff and sediment yield of its watershed basin. For the calibration and validation of the model, SWAT-CUP was utilized. The model was calibrated with monthly discharge data for 1990-1993 and validated for 1994-1997. Simultaneously, the sediment yield was calibrated for 2014 and validated for 2015 because of limited observed datasets. Uncertainty analysis and the calculation of efficiency indices were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R²), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and PBIAS, the calculation of streamflow indicates a good performance for both the calibration and validation periods, while the sediment yield resulted in a satisfactory performance for both calibration and validation. Therefore, this study was able to identify the most critical sub-basin and the severe need for soil conservation. Furthermore, this study will provide baseline information to prevent floods and landslides and serve as a useful reference for land-use policies and watershed management and sustainability in the Pulangi watershed.
Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model
Procedia PDF Downloads 210