Search results for: pointing accuracy
1277 BeamGA Median: A Hybrid Heuristic Search Approach
Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte
Abstract:
The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but a different gene order can be represented as permutations. In this paper, an algorithm named BeamGA median is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before to the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and improve the convergence time of the genetic approach. We were able to handle permutations with a large number of genes within acceptable time and with the same or better accuracy compared to existing algorithms.
Keywords: median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance
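As a concrete illustration of the GA refinement stage described above, the sketch below evolves candidate medians seeded by a beam search. Since exact reversal distance for signed permutations requires a non-trivial algorithm, the sketch substitutes the simpler breakpoint distance as the fitness; the unsigned permutations, population size, and mutation-only scheme are illustrative assumptions, not the paper's configuration.

```python
import random

def breakpoint_distance(p, q):
    """Count adjacencies of p that are broken in q; a cheap stand-in
    for the reversal distance actually used in the paper."""
    pos = {g: i for i, g in enumerate(q)}
    breaks = 0
    for a, b in zip(p, p[1:]):
        # the adjacency (a, b) survives only if a and b are also adjacent in q
        if abs(pos[a] - pos[b]) != 1:
            breaks += 1
    return breaks

def median_score(candidate, genomes):
    """Fitness of a candidate median: sum of distances to the three genomes."""
    return sum(breakpoint_distance(candidate, g) for g in genomes)

def reversal(p):
    """Apply one random reversal (the GA mutation operator)."""
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def ga_refine(seeds, genomes, generations=200, pop_size=30):
    """Refine beam-search seeds with a simple mutation-only GA."""
    pop = list(seeds)
    while len(pop) < pop_size:
        pop.append(reversal(random.choice(seeds)))
    for _ in range(generations):
        pop.sort(key=lambda c: median_score(c, genomes))
        survivors = pop[:pop_size // 2]
        pop = survivors + [reversal(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda c: median_score(c, genomes))

genomes = [(1, 2, 3, 4, 5), (2, 1, 3, 5, 4), (1, 3, 2, 4, 5)]
seeds = [genomes[0]]          # a real run would use beam-search seeds here
best = ga_refine(seeds, genomes)
print(best, median_score(best, genomes))
```

In the paper's setting, breakpoint_distance would be replaced by a reversal-distance routine and the variation operators would be constrained by common intervals.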
Procedia PDF Downloads 264
1276 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizure is the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
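The following is a minimal sketch of the sliding-window feature extraction and MLP training pipeline the abstract describes. The window length, step, and the particular features (mean, standard deviation, line length, band-power ratio) are assumed for illustration; Training Builder's real feature set is far richer, and the placeholder signal and labels stand in for annotated EEG data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(signal, fs=256, win_s=2.0, step_s=1.0):
    """Slide a window over one EEG channel and extract simple features."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        psd = np.abs(np.fft.rfft(w)) ** 2
        feats.append([w.mean(), w.std(),
                      np.abs(np.diff(w)).sum(),      # line length
                      psd[1:40].sum() / psd.sum()])  # low-band power ratio
    return np.array(feats)

# Placeholder signal and labels; y = 1 marks seizure windows in real data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 600)
X = window_features(eeg)
y = rng.integers(0, 2, len(X))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```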
Procedia PDF Downloads 187
1275 Leveraging SHAP Values for Effective Feature Selection in Peptide Identification
Authors: Sharon Li, Zhonghang Xia
Abstract:
Post-database search is an essential phase in peptide identification using tandem mass spectrometry (MS/MS) to refine the peptide-spectrum matches (PSMs) produced by database search engines. These engines frequently have difficulty differentiating between correct and incorrect peptide assignments. Despite advances in statistical and machine learning methods aimed at improving the accuracy of peptide identification, challenges remain in selecting critical features for these models. In this study, two machine learning models, a random forest and a support vector machine, were applied to three datasets to enhance PSMs. SHAP values were utilized to determine the significance of each feature within the models. The experimental results indicate that the random forest model consistently outperformed the SVM across all datasets. Further analysis of SHAP values revealed that the importance of features varies depending on the dataset, indicating that a feature's role in model predictions can differ significantly. This variability in feature selection can lead to substantial differences in model performance, with false discovery rate (FDR) differences exceeding 50% between different feature combinations. Through SHAP value analysis, the most effective feature combinations were identified, significantly enhancing model performance.
Keywords: peptide identification, SHAP value, feature selection, random forest, support vector machine
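Below is a hedged sketch of ranking features by mean absolute SHAP value with a random forest, using the shap package's TreeExplainer. The synthetic data stands in for real PSM features (which would be search-engine scores), and a small guard handles the different return shapes across shap versions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder PSM-style data: rows are peptide-spectrum matches,
# columns are assumed search-engine features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X)
# Older shap versions return one array per class; newer ones may return
# a single (samples, features, classes) array. Take the positive class.
sv = sv[1] if isinstance(sv, list) else sv
if sv.ndim == 3:
    sv = sv[:, :, 1]

importance = np.abs(sv).mean(axis=0)   # mean |SHAP| per feature
ranking = np.argsort(importance)[::-1]
print("features ranked by SHAP importance:", ranking)
```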
Procedia PDF Downloads 21
1274 Resilience Assessment for Power Distribution Systems
Authors: Berna Eren Tokgoz, Mahdi Safa, Seokyon Hwang
Abstract:
Power distribution systems are essential and crucial infrastructures for the development and maintenance of a sustainable society. These systems are extremely vulnerable to various types of natural and man-made disasters. Resilience assessment focuses on preparedness and mitigation actions under pre-disaster conditions, and on response and recovery actions under post-disaster conditions. The aim of this study is to present a methodology to assess the resilience of electric power distribution poles against wind-related events. The proposed methodology can improve the accuracy and rapidity of the evaluation of pole conditions and of the assessment of their resilience. The methodology provides a metric for the evaluation of the resilience of poles under pre-disaster and post-disaster conditions. The metric was developed using mathematical expressions for the physical forces, which involve various variables such as the physical dimensions of the pole, the inclination of the pole, and wind speed. A three-dimensional imaging technology (photogrammetry) was used to determine the inclination of poles. Based on expert opinion, the proposed metric was used to define zones to visualize resilience. Visual representation of resilience helps decision makers prioritize their resources before and after experiencing a wind-related disaster. Multiple electric poles in the City of Beaumont, TX, were used in a case study to evaluate the proposed methodology.
Keywords: photogrammetry, power distribution systems, resilience metric, system resilience, wind-related disasters
Procedia PDF Downloads 220
1273 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities' Resources
Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin
Abstract:
Port authorities face many challenges in congested ports when allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wave length and changes of priorities. Having access to a tool that leverages AIS messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and uses trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area) and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization
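A small sketch of the core similarity computation: a plain O(nm) Dynamic Time Warping distance between a vessel's recent track and per-port reference routes, with the closest route giving the predicted destination. The coordinates and routes are toy values; a real RRoT implementation would build the reference routes from historical AIS messages.

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 2-D trajectories
    (arrays of [lat, lon] points)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def predict_destination(track, reference_routes):
    """Pick the port whose reference route is DTW-closest to the track."""
    return min(reference_routes,
               key=lambda port: dtw(track, reference_routes[port]))

# Toy illustration with made-up coordinates.
routes = {"PortA": np.array([[0, 0], [1, 1], [2, 2]], float),
          "PortB": np.array([[0, 0], [1, -1], [2, -2]], float)}
track = np.array([[0.1, 0.0], [1.0, 0.9]], float)
print(predict_destination(track, routes))   # -> PortA
```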
Procedia PDF Downloads 120
1272 Context-Aware Alert Method in Hajj Pilgrim Location-Based Tracking System
Authors: Syarif Hidayat
Abstract:
As millions of people with different backgrounds perform hajj every year in Saudi Arabia, several problems arise. Missing persons are among the crucial problems that need to be addressed. Some people may have insufficient knowledge of how to use tracking system equipment. Others may become victims of an accident, lose consciousness, or even die, preventing them from performing certain activities. For those reasons, people may be unable to send a proper SOS message. The major contribution of this paper is the application of a diverse alert method in a pilgrim tracking system. It offers a simple yet robust solution for pilgrims to send an SOS message during Hajj. Knowledge of context-aware computing is assumed herein. This study presents four methods that could be utilized by pilgrims to send an SOS. The first is a simple mobile application containing only a button. The second is based on behavioral analysis of GPS location movement anomalies. The third introduces a pressing pattern on a smartwatch physical button as a panic button. The fourth identifies certain accelerometer patterns as a sign of an emergency situation. The methods presented in this paper would be an important part of a pilgrim tracking system. The discussion provided here includes an easy-to-use design while maintaining tracking accuracy, privacy, and security for its users.
Keywords: context aware computing, emergency alert system, GPS, hajj pilgrim tracking, location-based services
Procedia PDF Downloads 214
1271 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets
Authors: Song Hyok Choe
Abstract:
This study presents a method to determine the optimum design parameters based on a mathematical model of the operating process in a manual electromagnetic hammer using solenoid electromagnets. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of the manual electromagnetic hammer proposed in this paper were encoded and analyzed in MATLAB. In addition, a measurement experiment was carried out to check the accuracy of the developed mathematical model. The relative errors of the analytical results for the measured stroke distance of the impact piston, the peak value of the forward stroke current and the peak value of the reverse stroke current were −4.65%, 9.08% and 9.35%, respectively. These results show that the mathematical model of the operating process of an electromagnetic hammer is relatively accurate and can be used to determine the design parameters of the hammer. Accordingly, the design parameters that provide the required impact energy in the manual electromagnetic hammer were determined using the developed model. The proposed method will be used for the further design and development of various types of percussion rock drills.
Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling
Procedia PDF Downloads 44
1270 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the Six Sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of an injection molding process.
Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design
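To make the Taguchi step concrete, here is a hedged sketch of an L9 analysis: nine runs over four three-level factors, with two shrinkage measurements per run (one per material brand, the noise factor) reduced to a nominal-the-best signal-to-noise ratio. All measurement values are placeholders, not the study's data.

```python
import numpy as np

# L9 orthogonal array: 9 runs x 4 factors, each at 3 levels (0, 1, 2).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["cooling time", "melt temp", "holding time", "metering stroke"]

# Placeholder shrinkage (%): one row per run, one column per noise
# condition (material brand 1 vs material brand 2).
y = np.array([[1.9, 2.1], [1.8, 2.0], [2.2, 2.4],
              [1.7, 1.9], [2.0, 2.1], [2.3, 2.2],
              [1.8, 1.7], [2.1, 2.3], [1.9, 2.2]])

# Nominal-the-best S/N ratio (shrinkage has a target value).
sn = 10 * np.log10(y.mean(axis=1) ** 2 / y.var(axis=1, ddof=1))

# Average S/N per factor level; the preferred level maximizes S/N.
for f, name in enumerate(factors):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: best level = {int(np.argmax(means))}")
```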
Procedia PDF Downloads 178
1269 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics
Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink
Abstract:
Technological learning and learning curve models have been continuously used to estimate photovoltaic (PV) cost development over time for climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process, yet accurate and realistic predictions for PV cost estimation are still difficult to achieve. This paper develops four hypothetical-alternative learning curve models by proposing different combinations of technological learning sources, including local and global technology experience and the knowledge stock. The paper specifically focuses on the non-linear relationship between cost and technological learning sources and their dynamic interaction, and uses the system dynamics approach to obtain more accurate PV cost estimates for future development. As a case study, data from China are gathered to illustrate that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Further, absorbing and integrating the global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating knowledge stock is not realistic for current PV deployment in China, it still plays an effective positive role in future PV cost reduction.
Keywords: photovoltaic, system dynamics, technological learning, learning curve
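The sketch below shows the standard two-factor learning curve that underlies the model variant found most accurate (local plus global experience). The learning rates, starting cost, and capacity growth paths are assumptions for illustration; the paper embeds relationships like this inside a system dynamics model rather than evaluating a closed form.

```python
import numpy as np

def two_factor_cost(c0, q_local, q_global, lr_local=0.12, lr_global=0.20):
    """Two-factor learning curve: cost falls with cumulative local and
    global deployed capacity. Learning rates here are assumed values."""
    a = -np.log2(1 - lr_local)    # local learning exponent
    b = -np.log2(1 - lr_global)   # global learning exponent
    return (c0 * (q_local / q_local[0]) ** (-a)
               * (q_global / q_global[0]) ** (-b))

years = np.arange(2010, 2021)
q_local = 1.0 * 1.6 ** (years - 2010)     # assumed local cumulative GW
q_global = 40.0 * 1.3 ** (years - 2010)   # assumed global cumulative GW
cost = two_factor_cost(4.0, q_local, q_global)   # assumed 4 $/W start
for t, c in zip(years, cost):
    print(t, round(float(c), 2), "$/W")
```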
Procedia PDF Downloads 95
1268 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study
Authors: Colin Smith, Linsey S Passarella
Abstract:
Due to the proliferation and decentralized nature of academic publication, to the authors' best knowledge no widely accepted scheme exists for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords vary widely in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts at FoS classification of scientific texts have largely used non-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a Layered Neural Network (LNN) to the problem of supervised hierarchical classification of scientific fields of study on research papers. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FoS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FoS offers increased classification accuracy at each FoS hierarchical level.
Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy
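A minimal sketch of the top-down routing idea: one local classifier per parent node in the FoS tree, with a prediction path descending from the root. Logistic regression stands in for the local neural networks, and the embeddings, tree, and labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class TopDownClassifier:
    """One local model per parent node; predictions descend the tree."""
    def __init__(self, tree):
        self.tree = tree       # parent label -> list of child labels
        self.models = {}       # parent label -> fitted local model

    def fit(self, X, paths):
        """`paths` gives each sample's labels from root to leaf."""
        for parent in self.tree:
            idx, targets = [], []
            for i, path in enumerate(paths):
                if parent in path[:-1]:
                    j = path.index(parent)
                    idx.append(i)
                    targets.append(path[j + 1])
            if len(set(targets)) > 1:      # need at least two child classes
                self.models[parent] = LogisticRegression(max_iter=1000).fit(
                    np.asarray(X)[idx], targets)

    def predict(self, x):
        node, path = "root", []
        while node in self.models:
            node = self.models[node].predict(x.reshape(1, -1))[0]
            path.append(node)
        return path

# Toy usage with random "paper embeddings".
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))
paths = []
for v in X:
    top = "cs" if v[0] > 0 else "physics"
    sub = {"cs": ["ml", "systems"],
           "physics": ["optics", "quantum"]}[top][int(v[1] > 0)]
    paths.append(["root", top, sub])
tree = {"root": ["cs", "physics"], "cs": ["ml", "systems"],
        "physics": ["optics", "quantum"]}

clf = TopDownClassifier(tree)
clf.fit(X, paths)
print(clf.predict(X[0]))    # e.g. ['physics', 'optics']
```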
Procedia PDF Downloads 132
1267 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach to data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for the classification of 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might make species distinction more difficult. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral image was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the significant spectral bands for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM. The MNF transformation accuracy was even higher than that of the all-bands input. The selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for the high-dimensional and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at a low cost of time and labour. In addition, the findings on the positive effect of the DSM, as well as on spectral feature identification, indicate that the utility of UAV-borne hyperspectral imagery and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
Procedia PDF Downloads 255
1266 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM
Authors: Clement Leroy, Guillaume Boitel
Abstract:
This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotordynamic pump, the fluid enters the impeller along the rotating axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or a volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with an overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objective of the present study is to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation in order to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release 18 and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.
Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery
Procedia PDF Downloads 203
1265 A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains
Authors: Hemad Zareiforoush, Saeed Minaei, Ahmad Banakar, Mohammad Reza Alizadeh
Abstract:
The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for the qualitative grading of rice grains. For the experiments, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were first prepared, and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB to relate the qualitative characteristics of the product to its quality. In total, 25 rules were used for qualitative grading, based on the AND operator and the Mamdani inference system. The fuzzy inference system consisted of two input linguistic variables, DOM and PBK, obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using the Center of Maximum (COM) method. To evaluate the developed model, the output of the fuzzy system was compared with the experts' assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%.
Keywords: machine vision, fuzzy logic, rice, quality
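Below is a hedged sketch of a Mamdani fuzzy grader of the kind the abstract describes: triangular memberships over DOM and PBK, min as the AND operator, max aggregation, and Center of Maximum defuzzification. The membership ranges, the four sample rules (the paper uses 25), and the output scale are all illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def dom_sets(dom):   # degree of milling, assumed 0-100 % scale
    return {"low": tri(dom, 0, 0, 50), "med": tri(dom, 20, 50, 80),
            "high": tri(dom, 50, 100, 100)}

def pbk_sets(pbk):   # percentage of broken kernels, assumed 0-30 %
    return {"low": tri(pbk, 0, 0, 15), "med": tri(pbk, 5, 15, 25),
            "high": tri(pbk, 15, 30, 30)}

# A few Mamdani rules with AND = min (illustrative subset of 25).
rules = [("high", "low", "excellent"), ("med", "low", "good"),
         ("med", "med", "fair"), ("low", "high", "poor")]

quality_axis = np.linspace(0, 100, 1001)
quality_sets = {"poor": tri(quality_axis, 0, 0, 35),
                "fair": tri(quality_axis, 20, 45, 70),
                "good": tri(quality_axis, 50, 70, 90),
                "excellent": tri(quality_axis, 75, 100, 100)}

def grade(dom, pbk):
    d, p = dom_sets(dom), pbk_sets(pbk)
    agg = np.zeros_like(quality_axis)
    for rd, rp, out in rules:
        strength = min(d[rd], p[rp])                        # AND operator
        agg = np.maximum(agg, np.minimum(strength, quality_sets[out]))
    # Center of Maximum defuzzification: mean of the maximizing points.
    return quality_axis[agg == agg.max()].mean()

print("quality grade:", round(grade(dom=70.0, pbk=4.0), 1))
```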
Procedia PDF Downloads 418
1264 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine
Authors: Anas Rao, Hao Duan, Fanhua Ma
Abstract:
Dependency on fossil fuels can be reduced by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, which has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads and engine speeds. The simulation has then been carried out with a quasi-dimensional combustion model of the HCNG engine. In order to investigate NOx emission, an NO mechanism has been coupled to the quasi-dimensional combustion model. Three NOx mechanisms, the thermal NO, prompt NO and N2O mechanisms, have been used to predict NOx emission. For validation purposes, the NO curve has been transformed into NO packets based on a temperature difference of 100 K for lean-burn and 60 K for stoichiometric conditions, while the width of each packet has been taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine has been divided into three zones, with the zone NO equal to the product of the summation of the NO packets and the zone space. In order to check the accuracy of the model, the percentage error of the NOx emission has been evaluated; it lies in the range of ±6% and ±10% for lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism has been evaluated.
Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet
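The packet transformation described above can be sketched as follows: the NO history is cut into packets whenever the zone temperature drifts by the chosen step (100 K lean-burn, 60 K stoichiometric), and each packet's width is its share of the burn duration. The temperature and NO traces here are placeholders, and the accounting is a simplified reading of the paper's scheme rather than its exact implementation.

```python
import numpy as np

def no_packets(crank_angle, temperature, no_curve, dT=100.0):
    """Split a burned-gas NO history into packets: a new packet starts
    whenever the zone temperature has changed by dT."""
    packets, start, t_ref = [], 0, temperature[0]
    for i in range(1, len(temperature)):
        if abs(temperature[i] - t_ref) >= dT or i == len(temperature) - 1:
            width = (crank_angle[i] - crank_angle[start]) / \
                    (crank_angle[-1] - crank_angle[0])  # share of burn duration
            packets.append({"NO": no_curve[start:i + 1].mean(),
                            "width": width})
            start, t_ref = i, temperature[i]
    return packets

# Placeholder burn history (crank angle in degrees).
ca = np.linspace(-10, 40, 200)
T = 1800 + 900 * np.exp(-((ca - 10) / 12.0) ** 2)    # assumed T trace
no = 1e-4 * np.maximum(T - 2000, 0)                  # assumed NO trace
total_no = sum(p["NO"] * p["width"] for p in no_packets(ca, T, no))
print("packet-weighted NO estimate:", total_no)
```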
Procedia PDF Downloads 250
1263 Investigation of Mode II Fracture Toughness in Orthotropic Materials
Authors: Mahdi Fakoor, Nabi Mehri Khansari, Ahmadreza Farokhi
Abstract:
Evaluation of mode II fracture toughness (KIIC) in composite materials is a very hard problem to solve, since it can be affected by many dissipation mechanisms. Furthermore, non-linearity in material behavior adds extra difficulty in obtaining accurate results; the widely different values reported for KIIC in various references support this assertion. In this research, solutions are proposed in the form of necessary corrections that should be applied to common test fixtures. Because common test fixtures are not able to correctly activate toughening mechanisms in pure mode II, we have employed some structural modifications of these fixtures, taking the Iosipescu test as a starting point. The tests are applied to graphite/epoxy, PMMA and Western White Pine wood. Mixed mode I/II fracture limit curves are also used to show that the scatter in test results is related to the creation of the Fracture Process Zone (FPZ). In the present paper, the shear load is applied at the predicted shear zone through significant structural amendments that can activate mode II toughening mechanisms. Indeed, the employed empirical method yields significant improvements in repeatability and reproducibility as well. Moreover, a 3D Finite Element (FE) analysis is performed for verification of the obtained results. Eventually, it is shown that remarkable precision can be obtained with the modified test fixture in comparison with the common one.
Keywords: FPZ, shear test fixture, mode II fracture toughness, composite material, FEM
Procedia PDF Downloads 361
1262 Human Trust Building in Robots with the Help of Explanations
Authors: Misbah Javaid, Vladimir Estivill-Castro, Rene Hexel
Abstract:
The field of robotics is advancing rapidly to the point where robots have become an integral part of modern society. These robots collaborate productively with humans, compensating for shortcomings in human abilities and complementing them with their own skills. Effective teamwork between humans and robots demands investigation of the critical issue of trust. The field of human-computer interaction (HCI) has already examined the trust humans place in technical systems, mostly with respect to issues such as reliability and accuracy of performance. Early work in the area of expert systems suggested that automatic generation of explanations improved the trust in and acceptability of these systems. In this work, we augmented a robot with user-invoked explanation generation capability. To measure the effect of explanations on humans' level of trust, we collected subjective survey measures and behavioral data in a human-robot team task in an interactive, adversarial and partial-information environment. The results showed that, with the explanation capability, humans not only understood and recognized the robot as an expert team partner, but human learning and human-robot team performance also significantly improved because of the meaningful interaction with the robot. Moreover, by observing these distinctive outcomes, we expect our research to provide insights into the further improvement of trustworthy human-robot relationships.
Keywords: explanation interface, adversaries, partial observability, trust building
Procedia PDF Downloads 197
1261 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection
Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary
Abstract:
Goal: The purpose of the project was to predict the outcome of plastic surgery using pre-operative images of plastic surgery patients and to show this prediction on a screen, comparing the current appearance with the predicted appearance after surgery. Methods: To this aim, we implemented software that used data from the internet on facial skin diseases, skin burns, and pre- and post-surgery images for plastic surgeries; the post-surgical prediction is then made using K-nearest neighbor (KNN). We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that a patient's pre- and post-surgery appearance can be shown at the same time. Results: We worked on skin conditions such as vitiligo, skin burns and wrinkles. We classified the three degrees of burns using a KNN classifier with an accuracy of 60%. We also succeeded in segmenting the vitiligo-affected area. Our future work will include covering more skin diseases, classifying them and giving a prediction of the appearance after surgery. We will also go deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project gives a prediction that relates strongly to the real appearance after surgery and reduces differences in diagnosis among doctors. Significance: The mirror may have broad societal appeal, as it will narrow the gap between patient satisfaction and medical standards.
Keywords: k-nearest neighbor (KNN), face detection, vitiligo, bone deformity
Procedia PDF Downloads 161
1260 Optimization of Highly Oriented Pyrolytic Graphite Crystals for Neutron Optics
Authors: Hao Qu, Xiang Liu, Michael Crosby, Brian Kozak, Andreas K. Freund
Abstract:
The outstanding performance of highly oriented pyrolytic graphite (HOPG) as an optical element for neutron beam conditioning is unequaled by any other crystalline material in monochromator, analyzer, and filter applications. This superiority stems from the favorable nuclear properties of carbon (small absorption and incoherent scattering cross-sections, large coherent scattering length) and from the specific crystalline structure (small thermal diffuse scattering cross-section, layered crystal structure). The real crystal defect structure revealed by imaging techniques is correlated with the parameters used in the mosaic model (mosaic spread, mosaic block size, uniformity). The diffraction properties (rocking curve width, as determined by both the intrinsic mosaic spread and the diffraction process; peak and integrated reflectivity; filter transmission) as a function of neutron wavelength or energy can be predicted with high accuracy and reliability by diffraction theory, using empirical primary extinction coefficients extracted from a large amount of existing experimental data. The results of these calculations are given as graphs and tables, permitting the optimization of HOPG characteristics (mosaic spread, thickness, curvature) for any given experimental situation.
Keywords: neutron optics, pyrolytic graphite, mosaic spread, neutron scattering, monochromator, analyzer
Procedia PDF Downloads 142
1259 Prediction of Flow Around a NACA 0015 Profile
Authors: Boukhadia Karima
Abstract:
Fluid mechanics is the study of the laws of fluid motion and of the interaction of fluids with solid bodies. This project illustrates this interaction through in-depth studies validated by experiments in the TE44 wind tunnel, ensuring the efficiency, accuracy and reliability of these tests on a NACA 0015 profile. A symmetric NACA 0015 profile was placed in a subsonic wind tunnel, and measurements were made of the pressure on the upper and lower surfaces of the wing and of the velocity across the vortex trailing downstream from the tip of the wing. The aim of this work is to investigate experimentally the pressure distribution on this profile in a free airflow and the aerodynamic forces acting on it. The addition of a rounded lateral edge to the wing tip was found to eliminate the secondary vortex near the wing tip, but had little effect on the downstream characteristics of the trailing vortex. The increase in wing lift near the tip due to the presence of the trailing vortex was evident in the surface pressure, but was not captured by circulation-box measurements. The circumferential velocity within the vortex was found to reach free-stream values and to produce high core rotational speeds. Near the wing, the trailing vortex is asymmetric and contains definite zones where the streamwise velocity both exceeds and falls behind the free-stream value. When referenced to the free-stream velocity, the maximum vertical velocity of the vortex is directly dependent on the angle of incidence α and is independent of Re. A numerical study was conducted with the CFD code FLUENT 6.0, and the results are compared with the experiments.
Keywords: CFD code, NACA profile, detachment, angle of incidence, wind tunnel
Procedia PDF Downloads 409
1258 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the currently popular clustering techniques is Spectral Clustering (SC), because of its advantages over conventional approaches such as hierarchical clustering and k-means. However, one of the disadvantages of SC is that it is time consuming, because it requires computing eigenvectors. A number of attempts have been proposed to overcome this disadvantage, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Some of PIC's advantages are: 1) scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) forming a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of PIC while retaining its efficiency. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower running time than the compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
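For orientation, here is a bare-bones PIC sketch: power iteration on the row-normalized affinity matrix, stopped early so the vector still encodes cluster structure rather than collapsing to the trivial constant eigenvector, followed by k-means on that one pseudo-eigenvector. DPIC would deflate the matrix and repeat to obtain further pseudo-eigenvectors; the Spark-parallel, sparse-matrix version from the paper is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def power_iteration_clustering(A, k, n_iter=10):
    """Basic PIC: run power iteration on the row-normalized affinity
    matrix W = D^-1 A and cluster the resulting pseudo-eigenvector.
    Stopped early, before v converges to the constant eigenvector."""
    W = A / A.sum(axis=1, keepdims=True)   # row-stochastic matrix
    v = np.random.default_rng(0).random(A.shape[0])
    v /= np.abs(v).sum()
    for _ in range(n_iter):
        v = W @ v
        v /= np.abs(v).sum()               # keep the vector normalized
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        v.reshape(-1, 1))

# Toy affinity matrix: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / 0.5)
print(power_iteration_clustering(A, k=2))
```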
Procedia PDF Downloads 188
1257 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance
Authors: Loai AbdAllah, Mahmoud Kaiyal
Abstract:
Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, most of them replacing the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured. Since there are missing values in the collected data, there is a need for a distance function over incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects when some of them contain missing values, we integrated it within the framework of a k nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm on diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
Keywords: missing values, incomplete data, distance, incomplete diabetes data
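A sketch of the key idea, distinguishing known from unknown coordinates inside the distance, is given below. For brevity it uses per-coordinate normalized squared differences and an expected-distance term in place of the paper's full Bhattacharyya/Mahalanobis construction, so treat the formulas as a labeled simplification.

```python
import numpy as np
from collections import Counter

def missing_aware_distance(x, y, col_mean, col_var):
    """Per-coordinate distance that distinguishes known from unknown values.
    Known-known: normalized squared difference (1-D Mahalanobis-style);
    known-missing: expected distance to the column's value distribution.
    A simplification of the paper's Bhattacharyya-based construction."""
    total = 0.0
    for j in range(len(x)):
        xj, yj = x[j], y[j]
        if not np.isnan(xj) and not np.isnan(yj):
            total += (xj - yj) ** 2 / col_var[j]
        elif np.isnan(xj) and np.isnan(yj):
            total += 2.0          # both unknown: assumed prior spread
        else:
            known = yj if np.isnan(xj) else xj
            # expected normalized distance from a known point to the column
            total += ((known - col_mean[j]) ** 2 + col_var[j]) / col_var[j]
    return np.sqrt(total)

def knn_predict(X_train, y_train, x, k=5):
    col_mean = np.nanmean(X_train, axis=0)
    col_var = np.nanvar(X_train, axis=0) + 1e-9
    d = [missing_aware_distance(x, row, col_mean, col_var) for row in X_train]
    top = np.argsort(d)[:k]
    return Counter(y_train[top]).most_common(1)[0][0]

# Toy incomplete profiles (np.nan marks a missing value).
X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0], [np.nan, 6.2]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.05, 2.1]), k=3))   # -> 0
```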
Procedia PDF Downloads 224
1256 Logistic Regression Based Model for Predicting Students' Academic Performance in Higher Institutions
Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu
Abstract:
In recent years, there has been a desire to forecast student academic achievement prior to graduation in order to help students improve their grades, particularly those with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model of student academic achievement. Many researchers have already constructed models that predict student academic achievement based on factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few. These features and the models employed may not have correctly classified students in terms of their academic performance. The model in this study is built using a logistic regression classifier with basic features, such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester, in order to predict whether the student will perform well in future related courses. The model outperformed other classifiers such as naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning an accuracy of 96.7%. The model is available as a desktop application, providing both instructors and students with user-friendly interfaces for predicting student academic achievement. It is therefore recommended that both students and professors use this tool to better forecast outcomes.
Keywords: artificial intelligence, ML, logistic regression, performance, prediction
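A compact sketch of the classifier described above, with the four abstract features and synthetic data standing in for real student records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Feature layout matching the abstract (all values assumed):
# [previous course score, attendance rate, participation, materials covered]
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([rng.uniform(30, 100, n),    # previous semester score
                     rng.uniform(0.2, 1.0, n),   # class attendance
                     rng.uniform(0, 10, n),      # class participation
                     rng.uniform(0.1, 1.0, n)])  # materials covered
# Synthetic target: "performs well" if a weighted sum clears a threshold.
y = (0.04 * X[:, 0] + 2.0 * X[:, 1] + 0.2 * X[:, 2] + 1.5 * X[:, 3]
     + rng.normal(0, 0.4, n) > 5.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("feature weights:", model.coef_.round(2))
```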
Procedia PDF Downloads 96
1255 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form
Authors: S. Jain, R. Savalia, V. Saini
Abstract:
A simple, accurate, precise, sensitive and specific RP-HPLC method was developed and validated for the simultaneous estimation of Metoprolol Succinate and Olmesartan Medoxomil in bulk and tablet dosage form. The RP-HPLC method showed adequate separation of Metoprolol Succinate and Olmesartan Medoxomil from their degradation products. The separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio of 55:45 v/v. The mobile phase flow rate was 1.0 ml/min, the injection volume was 20 μl, and the detection wavelength was 225 nm. The retention times for Metoprolol Succinate and Olmesartan Medoxomil were 2.451±0.1 min and 6.167±0.1 min, respectively. The linearity of the proposed method was investigated in the ranges of 5-50 μg/ml and 2-20 μg/ml for Metoprolol Succinate and Olmesartan Medoxomil, respectively; the correlation coefficients were 0.999 and 0.9996. The limits of detection were 0.2847 μg/ml and 0.1251 μg/ml, and the limits of quantification were 0.8630 μg/ml and 0.3793 μg/ml, for Metoprolol Succinate and Olmesartan Medoxomil, respectively. The proposed method was validated as per ICH guidelines for linearity, accuracy, precision, specificity and robustness for the estimation of Metoprolol Succinate and Olmesartan Medoxomil in a commercially available tablet dosage form, and the results were found to be satisfactory. Thus the developed and validated stability-indicating method can be used successfully for marketed formulations.
Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH
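For the validation arithmetic, here is a short sketch of how linearity, LOD and LOQ follow from a calibration line under the ICH Q2 formulas (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the residual standard deviation and S the slope). The concentration-area pairs are placeholders, not the study's calibration data.

```python
import numpy as np

# Placeholder calibration data (concentration in ug/ml vs peak area);
# the real curve would come from the 5-50 ug/ml Metoprolol range.
conc = np.array([5, 10, 20, 30, 40, 50], float)
area = np.array([152, 298, 605, 892, 1210, 1498], float)

# Least-squares line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
residual_sd = np.sqrt(((area - pred) ** 2).sum() / (len(conc) - 2))

r = np.corrcoef(conc, area)[0, 1]
lod = 3.3 * residual_sd / slope     # ICH Q2 formula
loq = 10.0 * residual_sd / slope
print(f"r = {r:.4f}, LOD = {lod:.3f} ug/ml, LOQ = {loq:.3f} ug/ml")
```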
Procedia PDF Downloads 313
1254 Empirical Roughness Progression Models of Heavy Duty Rural Pavements
Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed
Abstract:
Empirical deterministic models have been developed to predict the roughness progression of heavy-duty spray-sealed pavements, for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability, with the models fitting the validation data well.
Keywords: roughness progression, empirical model, pavement performance, heavy duty pavement
Procedia PDF Downloads 167
1253 Finite Volume Method for Flow Prediction Using Unstructured Meshes
Authors: Juhee Lee, Yongjun Lee
Abstract:
In designing low-energy buildings, the heat transfer through a large glass area or wall becomes critical. Multiple layers of window glass and wall material are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As a first step toward the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, multi-grid method, and dual time stepping in the future. A fully implicit, second-order finite volume method is used to discretize and solve the fluid flow on unstructured grids composed of arbitrarily shaped cells. The governing equations are integrated in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
Keywords: finite volume method, fluid flow, laminar flow, unstructured grid
Procedia PDF Downloads 285
1252 Infodemic Detection on Social Media with a Multi-Dimensional Deep Learning Framework
Authors: Raymond Xu, Cindy Jingru Wang
Abstract:
Social media has become a globally connected and influential platform. Social media data, such as tweets, can help predict the spread of pandemics and provide individuals and healthcare providers with early warnings. Public psychological reactions and opinions can be efficiently monitored by AI models following the progression of dominant topics on Twitter. However, statistics show that as the coronavirus spreads, so does an infodemic of misinformation, driven by pandemic-related factors such as unemployment and lockdowns. Social media algorithms are often biased toward outrage, promoting content that people react to emotionally and are likely to engage with. This can influence users' attitudes and cause confusion. Social media is therefore a double-edged sword, and combating fake news and biased content has become one of the essential tasks. This research analyzes the variety of methods used for fake news detection, covering random forest, logistic regression, support vector machines, decision tree, naive Bayes, BoW, TF-IDF, LDA, CNN, RNN, LSTM, DeepFake, and hierarchical attention networks, and the performance of each method is analyzed. Based on these models' achievements and limitations, a multi-dimensional AI framework is proposed to achieve higher accuracy in infodemic detection, especially of pandemic-related news. The model is trained on contextual content, images, and news metadata.
Keywords: artificial intelligence, fake news detection, infodemic detection, image recognition, sentiment analysis
Procedia PDF Downloads 253
1251 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning
Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
Abstract:
We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with the help of non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs in the morning every day. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms, and we propose the use of a transfer learning algorithm for HAR. We compared the performance, in terms of accuracy, and the training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.
Keywords: daily activity recognition, healthcare, IoT sensors, transfer learning
Procedia PDF Downloads 131
1250 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows generating emission inventories from reduced input parameters, adapted to the existing conditions in Morocco and in other developing countries. While several simplifications are made, the full performance of the model is preserved. A further important advantage of the model is that it allows calculating the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves accuracy over previous formulas of the line source Gaussian plume model without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized sources and analytical line source formulas remarkably reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
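The discretized-source fallback mentioned above can be sketched by summing Gaussian point-source plumes along the road segment. The dispersion coefficients here are crude linear functions of downwind distance, an assumption standing in for proper stability-class sigma curves, and the geometry and emission values are illustrative only.

```python
import numpy as np

def gaussian_point(q, x, y, z, u, sy, sz, h=0.5):
    """Concentration from one point source of strength q: Gaussian plume
    with ground reflection; sigmas are passed in for simplicity."""
    if x <= 0:
        return 0.0        # receptor is upwind of this sub-source
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - h) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + h) ** 2 / (2 * sz ** 2))))

def line_source_concentration(receptor, road_start, road_end, Q, u,
                              wind_dir_deg=0.0, n=200):
    """Approximate a finite line source by n point sources."""
    pts = np.linspace(0, 1, n)[:, None] * (road_end - road_start) + road_start
    q = Q / n                                   # emission per sub-source
    th = np.radians(wind_dir_deg)
    rot = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    c = 0.0
    for p in pts:
        dx, dy = rot @ (receptor[:2] - p)       # downwind / crosswind coords
        sy = 0.08 * max(dx, 1e-6)               # assumed rough sigma-y
        sz = 0.06 * max(dx, 1e-6)               # assumed rough sigma-z
        c += gaussian_point(q, dx, dy, receptor[2], u, sy, sz)
    return c

road_a, road_b = np.array([0.0, -100.0]), np.array([0.0, 100.0])
receptor = np.array([50.0, 0.0, 1.5])           # 50 m downwind, 1.5 m high
print(line_source_concentration(receptor, road_a, road_b, Q=1.0, u=3.0))
```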
Procedia PDF Downloads 439
1249 Prediction of Boundary Shear Stress with Gradually Tapering Flood Plains
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Rivers are the main source of water. A river is a form of natural open channel that gives rise to many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress and depth-averaged velocity. The development of society more or less depends upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. During floods, part of the flow is carried by the main channel and the rest is carried by the flood plains. For such compound asymmetric channels, the flow structure becomes complicated due to the momentum exchange between the main channel and the adjoining flood plains. The distribution of boundary shear in subsections provides insight into the momentum transfer across the interface between the main channel and the flood plains. Experimentally, obtaining accurate data is very difficult because of the complexity of the problem. Hence, the Conveyance Estimation System (CES) software has been used to tackle these complex processes and determine the shear stresses at different sections of an open channel having asymmetric flood plains on both sides of the main channel, and the results are compared with those for symmetric flood plains for various geometrical shapes and flow conditions. An error analysis is also performed to assess the accuracy of the implemented model.
Keywords: depth averaged velocity, non-prismatic compound channel, relative flow depth, velocity distribution
Procedia PDF Downloads 121
1248 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data
Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda
Abstract:
Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to non-destructively estimate the circadian clock and thereby construct a method for determining a suitable harvest time. We took eight samples of green perilla (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350–1050 nm). Calculation of the correlations between the spectrum intensity at each wavelength and harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength had only a weak correlation, so we used machine learning to raise the accuracy of estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANNs) were used for machine learning because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min, and the estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
Keywords: artificial neural network (ANN), circadian clock, green perilla, hyperspectral camera, non-destructive evaluation
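A sketch of one way to set up such an internal-time estimator: because clock time is circular, the phase is encoded as sine and cosine targets for an ANN regressor and decoded with arctan2, so that errors near midnight are not inflated. The spectra below are synthetic with an injected periodic signal; the real model would train on the 141-band camera measurements, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder spectra: samples x 141 wavelengths (350-1050 nm in the study).
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, 300)
spectra = rng.standard_normal((300, 141))
# Inject a weak periodic signal into two bands so the demo can learn.
spectra[:, 10] += np.sin(2 * np.pi * hours / 24)
spectra[:, 90] += np.cos(2 * np.pi * hours / 24)

# Encode the circadian phase on the unit circle so 23:59 and 00:01
# are close targets rather than 24 h apart.
Y = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(spectra[:200], Y[:200])

pred = model.predict(spectra[200:])
pred_hours = (np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)) \
             / (2 * np.pi) * 24
err = np.abs(pred_hours - hours[200:])
err = np.minimum(err, 24 - err)                 # circular error
print("mean absolute error:", err.mean() * 60, "minutes")
```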
Procedia PDF Downloads 297