Search results for: imputation method of missing data
38175 Effect of Facilitation in a Problem-Based Environment on the Metacognition, Motivation and Self-Directed Learning in Nursing: A Quasi-Experimental Study among Nurse Students in Tanzania
Authors: Walter M. Millanzi, Stephen M. Kibusi
Abstract:
Background: Currently, there is a progressive shortage not only in the number but also in the quality of medical practitioners, particularly in nursing. Moreover, some of those in practice exhibit unethical and illegal conduct, substandard care, and malpractice. This raises concerns about how they are prepared and about whether something is missing from nursing curricula or from how they are delivered. There is a need to transform or test new teaching modalities to build a competent health workforce. Objective: to investigate the effect of Facilitation in a Problem-Based Environment (FPBE) on metacognition, self-directed learning and learning motivation among undergraduate nurse students in Tanzanian higher learning institutions. Methods: a quasi-experimental study (quantitative research approach). A purposive sampling technique was employed to select institutions, achieving a sample size of 401 participants (intervention = 134 and control = 267). A self-administered semi-structured questionnaire was the main data collection method, and the Statistical Package for the Social Sciences (SPSS, v. 20) was used for data entry, data analysis, and presentation. Results: The pre-post test results between groups indicated a noticeably significant change in metacognition in the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05. SDL in the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05. Motivation to learn in the intervention group (M = 62.67, SD = 14.14) against the control (n = 267, M = 57.75), t(399) = 2.907, p < 0.01. The FPBE teaching pedagogy was observed to be effective for metacognition (AOR = 1.603, p < 0.05), SDL (OR = 1.729, p < 0.05) and intrinsic motivation in learning (AOR = 1.720, p < 0.05) against conventional teaching pedagogy. However, it was less likely to enhance extrinsic motivation (AOR = 0.676, p > 0.05) and amotivation (AOR = 0.538, p > 0.05). Conclusion and recommendation: The FPBE teaching pedagogy can improve students' metacognition, self-directed learning and intrinsic motivation to learn among nurse students. Nursing curricula developers should incorporate it to produce competent and qualified 21st-century nurses.
Keywords: facilitation, metacognition, motivation, self-directed
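The between-group comparisons reported above rest on independent-samples t-tests. A minimal sketch of that comparison is shown below; the simulated scores and group sizes are placeholders inferred from the abstract, not the study's data.

```python
# Illustrative only: independent-samples t-test comparing intervention and control
# post-test scores. Group sizes follow the abstract; the values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.normal(loc=1.52, scale=0.50, size=134)
control = rng.normal(loc=1.40, scale=0.49, size=267)

t_stat, p_value = stats.ttest_ind(intervention, control, equal_var=True)
print(f"t({len(intervention) + len(control) - 2}) = {t_stat:.3f}, p = {p_value:.4f}")
```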
Procedia PDF Downloads 189
38174 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System
Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad
Abstract:
The CAN (controller area network) bus is introduced as a multi-master, message-broadcast system. The messages sent on the CAN are used to communicate state information, referred to as signals, between different ECUs, which provides data consistency in every node of the system. OBD-II dongles based on a request-response method are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology. The maximum feasible scan rate is only 9 queries per second, which provides 8 data points per second using the well-known ELM327 OBD-II dongle. This study aims to develop and design a programmable, latency-sensitive vehicle data acquisition system that improves the modularity and flexibility needed to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. Furthermore, the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the vehicle's expensive ECUs due to the intrinsic vulnerabilities of the CAN bus during initial research. The desired sensor data were collected from various vehicles utilizing a Raspberry Pi 3 as the computing and processing unit, using the OBD (request-response) and direct CAN methods at the same time. Two types of data were collected for this study. The first is CAN bus frame data, collected for each line of hex data sent from an ECU; the second is OBD data, which represents the limited data requested from the ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. A standard-operating-procedure experimental vehicle network test bench was developed and can be used for future vehicle network testing experiments.
Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi3
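A minimal sketch of the direct-CAN acquisition side described above, logging raw frames on a Raspberry Pi through SocketCAN with the python-can package. The interface name, frame count, and CSV layout are assumptions for illustration, not details taken from the paper.

```python
# Sketch: log raw CAN frames from a SocketCAN interface (assumed to be 'can0')
# into a CSV file. Requires the python-can package and a configured CAN adapter.
import csv
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

with open("can_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "arbitration_id", "data_hex"])
    for _ in range(1000):                 # log a fixed number of frames
        msg = bus.recv(timeout=1.0)       # blocking read with timeout
        if msg is None:
            continue                      # no frame arrived within the timeout
        writer.writerow([msg.timestamp, hex(msg.arbitration_id), msg.data.hex()])
```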
Procedia PDF Downloads 208
38173 Empirical Orthogonal Functions Analysis of Hydrophysical Characteristics in the Shira Lake in Southern Siberia
Authors: Olga S. Volodko, Lidiya A. Kompaniets, Ludmila V. Gavrilova
Abstract:
The method of empirical orthogonal functions is a method for analysing data with a complex spatial-temporal structure. It allows us to decompose the data into a finite number of modes determined by empirically finding the eigenfunctions of the data correlation matrix. The modes have different scales and can be associated with various physical processes. The empirical orthogonal function method has been widely used for the analysis of hydrophysical characteristics, for example, the analysis of sea surface temperatures in the western North Atlantic, ocean surface currents off North Carolina, the study of tropical wave disturbances, etc. In this study the method is applied to the analysis of temperature and velocity measurements in the saline Lake Shira (Southern Siberia, Russia). Shira is a shallow lake with a maximum depth of 25 m. Lake Shira can be considered a closed water body: it has one small river providing inflow and no outflow. The main factor that causes the motion of the fluid is variable wind flow. In summer the lake is strongly stratified by temperature and salinity. Long-term measurements of temperature and currents were conducted at several points during the summers of 2014-2015. Temperature was measured with an accuracy of 0.1 ºC. The data were analysed using the empirical orthogonal function method in its real-valued form. The first empirical eigenmode accounts for 70-80 % of the energy and can be interpreted as a temperature distribution with a thermocline. A thermocline is a thermal layer where the temperature decreases rapidly from the mixed upper layer of the lake to much colder deep water. The higher-order modes can be interpreted as oscillations induced by internal waves. The current measurements were recorded using Acoustic Doppler Current Profilers at 600 kHz and 1200 kHz. These data were analysed using the empirical orthogonal function method in its complex-valued form. The first empirical eigenmode accounts for about 40 % of the energy and corresponds to the Ekman spiral occurring in the case of a stationary homogeneous fluid. Other modes describe the effects associated with the stratification of the fluid. The second and subsequent empirical eigenmodes were associated with dynamical modes. These modes were obtained for a simplified model of an inhomogeneous three-layer fluid over a flat bottom.
Keywords: Ekman spiral, empirical orthogonal functions, data analysis, stratified fluid, thermocline
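A minimal sketch of the EOF decomposition itself, computed as the SVD of an anomaly matrix (time by measurement points). The synthetic array is a placeholder for the lake records; the variance-explained fractions correspond to the mode energies discussed above.

```python
# Sketch: empirical orthogonal functions via SVD of the anomaly matrix.
# Synthetic data stand in for the temperature/current measurements.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 8))          # 500 time steps, 8 measurement points (assumed)

anomalies = data - data.mean(axis=0)      # remove the time mean at each point
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

explained = s**2 / np.sum(s**2)           # fraction of variance (energy) per mode
eofs = vt                                 # spatial patterns of the modes
pcs = u * s                               # time coefficients of each mode

print("variance explained by the first mode:", explained[0])
```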
Procedia PDF Downloads 136
38172 IoT Device Cost Effective Storage Architecture and Real-Time Data Analysis/Data Privacy Framework
Authors: Femi Elegbeleye, Omobayo Esan, Muienge Mbodila, Patrick Bowe
Abstract:
This paper focuses on a cost-effective storage architecture using a fog and cloud data storage gateway and presents the design of a framework for a data privacy model and for real-time data analytics using machine learning methods. The paper begins with the system analysis, the system architecture and its component design, as well as the overall system operations. The results obtained for the data privacy model show that when two or more data privacy models are combined, we tend to obtain stronger privacy for our data. The fog storage gateway also shows several advantages over traditional cloud storage: our results show that fog has reduced latency/delay, lower bandwidth consumption, and lower energy usage when compared with cloud storage; therefore, fog storage will help to lessen excessive cost. The paper dwells mainly on the system description; the researchers focused on the research design and the framework design for the data privacy model, data storage, and real-time analytics. The paper also shows the major system components and their framework specifications. Lastly, the overall research system architecture is shown, together with its structure and interrelationships.
Keywords: IoT, fog, cloud, data analysis, data privacy
Procedia PDF Downloads 100
38171 Change Point Detection Using Random Matrix Theory with Application to Frailty in Elderly Individuals
Authors: Malika Kharouf, Aly Chkeir, Khac Tuan Huynh
Abstract:
Detecting change points in time series data is a challenging problem, especially in scenarios where there is limited prior knowledge regarding the data’s distribution and the nature of the transitions. We present a method designed for detecting changes in the covariance structure of high-dimensional time series data, where the number of variables closely matches the data length. Our objective is to achieve unbiased test statistic estimation under the null hypothesis. We delve into the utilization of Random Matrix Theory to analyze the behavior of our test statistic within a high-dimensional context. Specifically, we illustrate that our test statistic converges pointwise to a normal distribution under the null hypothesis. To assess the effectiveness of our proposed approach, we conduct evaluations on a simulated dataset. Furthermore, we employ our method to examine changes aimed at detecting frailty in the elderly.
Keywords: change point detection, hypothesis tests, random matrix theory, frailty in elderly
Procedia PDF Downloads 56
38170 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition methods to solve a larger problem. Problem decomposition breaks the larger problem down into a number of smaller sub-problems, depending on the method. Different problem decomposition methods have their own strengths and limitations depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as Hidden-Neuron Level Decomposition (HNL). The HNL method is compared with an established problem decomposition method in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method and has competitive results when compared to methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
Procedia PDF Downloads 338
38169 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. The algorithm is applied to high-frequency coefficients for compression/encoding. It starts by converting every three coefficients into a single value; this is accomplished based on three different keys. The decoding/decompression uses a search method called the QSS (Quick Sequential Search) decoding algorithm, presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: matrix minimization algorithm, decoding sequential search algorithm, image compression, DCT, DWT
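A toy sketch of the general idea described above: three coefficients are folded into one value using three keys, and a sequential search over candidate triples recovers them exactly. The key values and coefficient range are illustrative assumptions; the paper's actual key-generation scheme and the faster QSS search strategy are not reproduced here.

```python
# Sketch: encode three coefficients with three keys, then recover them by a
# brute-force sequential search. Keys are chosen so the mapping is unique for
# coefficients in [-20, 20] (a balanced base-41 representation).
from itertools import product

KEYS = (1, 41, 41 * 41)
RANGE = range(-20, 21)

def encode(a, b, c):
    return KEYS[0] * a + KEYS[1] * b + KEYS[2] * c

def decode_sequential(value):
    # exhaustive sequential search over all candidate coefficient triples
    for a, b, c in product(RANGE, RANGE, RANGE):
        if encode(a, b, c) == value:
            return a, b, c
    return None

original = (-7, 12, 3)
assert decode_sequential(encode(*original)) == original
```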
Procedia PDF Downloads 153
38168 Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis
Authors: Nathainail Bashir, Neil Anderson
Abstract:
The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and pattern of arrays to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable in determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and different array orientations (to increase the probability of obtaining interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (i.e., the estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas where there was significant local variation in the depth to the top of rock.
Keywords: dipole-dipole, ERT, karst terrains, MASW
Procedia PDF Downloads 315
38167 Pantograph-Catenary Contact Force: Features Evaluation for Catenary Diagnostics
Authors: Mehdi Brahimi, Kamal Medjaher, Noureddine Zerhouni, Mohammed Leouatni
Abstract:
Prognostics and Health Management (PHM) is a systems engineering discipline which provides solutions and models for the implementation of predictive maintenance. The approach is based on extracting useful information from monitoring data to assess the “health” state of industrial equipment or an asset. In this paper, we examine multiple features extracted from the pantograph-catenary contact force in order to select the most relevant ones for achieving a diagnostics function. The feature extraction methodology is based on measurement data and on simulation data generated with a pantograph-catenary simulation software called INPAC. The feature extraction method is based on both statistical and signal processing analyses. The feature selection method is based on statistical criteria.
Keywords: catenary/pantograph interaction, diagnostics, Prognostics and Health Management (PHM), quality of current collection
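A minimal sketch of the kind of statistical features that can be extracted from a contact-force signal (mean, standard deviation, RMS, skewness, kurtosis). The synthetic signal is a placeholder; the paper's INPAC simulations, measurement data, and selection criteria are not reproduced.

```python
# Sketch: basic statistical features of a (synthetic) contact-force signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
force = 120 + 15 * np.sin(np.linspace(0, 40 * np.pi, 5000)) + rng.normal(0, 5, 5000)

features = {
    "mean": np.mean(force),
    "std": np.std(force, ddof=1),
    "rms": np.sqrt(np.mean(force**2)),
    "skewness": stats.skew(force),
    "kurtosis": stats.kurtosis(force),
}
print(features)
```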
Procedia PDF Downloads 290
38166 Advancing the Hi-Tech Ecosystem in the Periphery: The Case of the Sea of Galilee Region
Authors: Yael Dubinsky, Orit Hazzan
Abstract:
There is a constant need for hi-tech innovation to be decentralized to peripheral regions. This work describes how we applied design science research (DSR) principles to define what we refer to as the Sea of Galilee (SoG) method. The goal of the SoG method is to harness existing and new technological initiatives in peripheral regions to create a socio-technological network that can initiate and maintain hi-tech activities. The SoG method consists of a set of principles, a stakeholder network, and actual hi-tech business initiatives, including their infrastructure and practices. The three cycles of DSR (the Relevance, Design, and Rigor cycles) lay out a research framework to sharpen the requirements, collect data from case studies, and iteratively refine the SoG method based on the existing knowledge base. We propose that the SoG method can be deployed by regional authorities that wish to be considered smart regions (an extension of the notion of smart cities).
Keywords: design science research, socio-technological initiatives, Sea of Galilee method, periphery stakeholder network, hi-tech initiatives
Procedia PDF Downloads 131
38165 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of the data, which allows us to derive decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to derive decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data; we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). The method applied to this dataset is the decision tree, a non-parametric classification method which uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
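Although the study was carried out in R, the same workflow can be sketched compactly in Python: inspect the correlation matrix of the node-level predictors, then fit a decision tree evaluated with k-fold cross-validation. The feature names and synthetic data below are illustrative assumptions, not the UCI dataset used in the paper.

```python
# Sketch: correlation check plus a cross-validated decision tree on synthetic
# node-level features (names are assumptions for illustration).
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "degree": rng.integers(1, 50, 500),
    "transitivity": rng.random(500),
    "density": rng.random(500),
})
df["divergent"] = (df["transitivity"] + rng.normal(0, 0.2, 500) > 0.7).astype(int)

print(df[["degree", "transitivity", "density"]].corr())    # highly correlated predictors?

tree = DecisionTreeClassifier(max_depth=4, random_state=0)  # depth limit acts as pruning
scores = cross_val_score(tree, df[["degree", "transitivity", "density"]], df["divergent"], cv=5)
print("5-fold accuracy:", scores.mean())
```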
Procedia PDF Downloads 160
38164 Application of Compressed Sensing Method for Compression of Quantum Data
Authors: M. Kowalski, M. Życzkowski, M. Karol
Abstract:
Current quantum key distribution (QKD) systems offer low bit rates, up to single MHz. Compared to conventional optical fiber links with multi-GHz bit rates, the parameters of recent QKD systems are significantly lower. In this article, we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
Keywords: quantum key distribution systems, fiber optic system, compressed sensing
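A minimal compressed-sensing sketch of the general principle only (not the authors' QKD pipeline): a sparse signal is sampled with a random measurement matrix and reconstructed by L1-regularized regression. Signal length, sparsity, and the Lasso penalty are arbitrary illustrative choices.

```python
# Sketch: compressed sensing of a sparse signal, reconstructed with Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressed measurements

lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```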
Procedia PDF Downloads 696
38163 Entrepreneurs’ Perceptions of the Economic, Social and Physical Impacts of Tourism
Authors: Oktay Emir
Abstract:
The objective of this study is to determine how entrepreneurs perceive the economic, social and physical impacts of tourism. The study was conducted in the city of Afyonkarahisar, Turkey, which is rich in thermal tourism resources and investments. A survey was used as the data collection method, and the questionnaire was administered to 472 entrepreneurs. A simple random sampling method was used to identify the sample. Independent-samples t-tests and ANOVA tests were used to analyse the data obtained. Additionally, some statistically significant differences (p<0.05) were found based on the participants’ demographic characteristics regarding their opinions about the social, economic and physical impacts of tourism activities.
Keywords: tourism, perception, entrepreneurship, entrepreneurs, structural equation modelling
Procedia PDF Downloads 452
38162 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network
Authors: Abdolreza Memari
Abstract:
In this article, a numerical method is used to estimate the viscosity of crude oil. The viscosity is measured for three states: saturated-oil viscosity, viscosity above the bubble point, and viscosity below the saturation pressure. The crude oil's viscosity is then estimated using the KHAN model and the roller ball method. Using these data, which include the relevant conditions for measuring viscosity, a radial basis neural network is trained on the viscosities estimated by the presented method. This network is a kind of two-layer artificial neural network whose hidden-layer activation function is a Gaussian function, and standard training algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence model are compared. With the trained network, we are able to estimate the crude oil's viscosity with acceptable accuracy without using the KHAN model and experimental conditions, and under any other condition. The results show that the radial basis neural network has a high capability of estimating crude oil viscosity; savings in time and cost are further advantages of this investigation.
Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model
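A minimal sketch of a radial basis function network of the kind described: Gaussian hidden units centred on training points, with the output weights solved by linear least squares. The input variables, target, centre count, and kernel width are assumptions for illustration, not the field data or the paper's training algorithm.

```python
# Sketch: Gaussian RBF network regression with least-squares output weights.
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(200, 3))                 # e.g. pressure, temperature, gravity (assumed)
y = 2.0 + np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]        # stand-in for measured viscosity

centers = X[rng.choice(len(X), 20, replace=False)]   # 20 RBF centres drawn from the data
width = 0.3

def design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))              # Gaussian activations

Phi = design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights

y_hat = design(X, centers, width) @ w
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```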
Procedia PDF Downloads 502
38161 Influence of Magnetic Field on the Antibacterial Properties of Pine Oil
Authors: Dawid Sołoducha, Tomasz Borowski, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Many studies report varied effects of the magnetic field in medicine, but applications are still missing. Essential oils (EOs) have historically been used in healing therapies, food preservation and the cosmetic industry due to their wound-healing and antioxidant properties and antimicrobial activity. Unfortunately, the chemical characteristics of EOs provide antibacterial action only at fairly high concentrations. They can cause skin reactions, e.g., irritation (irritant contact dermatitis) or allergic contact dermatitis; therefore, they should always be used with caution. However, administering EOs so as to achieve the desired antimicrobial activity and stability at low concentration during long-term medical usage is challenging. The aim of this work was to investigate the antimicrobial activity of commercial Pinus sylvestris L. essential oil from the Polish company Avicenna-Oil® under a rotating magnetic field (RMF) at f = 1 – 50 Hz. A novel, self-constructed, magnetically assisted reactor (MAP) was applied for this study. The chemical composition of the essential pine oil was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Different concentrations of pine oil were prepared: 100%, 50%, 25%, 12.5% and 6.25%. Disc diffusion and MIC tests were performed. The agar plate method was used to examine the effect of the pine essential oil and the rotating magnetic field (RMF) on antibacterial performance. Pine oil consists of α-pinene (28.58%), β-pinene (17.79%), δ-3-carene (14.17%) and limonene (11.58%). The present study indicates that exposure to the RMF, as compared to unexposed controls, causes an increase in the antibacterial efficacy of pine oil. We have shown that a rotating magnetic field (RMF) at frequencies, f, between 25 Hz and 50 Hz increases the antimicrobial efficiency of the oil at concentrations lower than 50%. The new method can be applied in many fields, e.g., aromatherapy, medicine as a component of dressings, or as a food preservative.
Keywords: rotating magnetic field, pine oil, antimicrobial activity, Escherichia coli
Procedia PDF Downloads 220
38160 An Estimating Parameter of the Mean in Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods
Authors: Autcha Araveeporn
Abstract:
This paper compares the estimation of the mean of a normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data, the Bayes estimator is derived by combining the prior distribution with the data, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After these methods estimate the parameter, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, with sample sizes set to 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean with variance 9 at sample sizes 10 and 20.
Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution
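A minimal sketch of the three estimators for the normal-mean setting described above, assuming the variance is known: the ML estimator (sample mean), the conjugate-prior Bayes estimator, and a Monte Carlo estimate obtained by sampling the posterior. The prior settings (mean 1, variance 12) follow the abstract; the simulated data are placeholders.

```python
# Sketch: ML, conjugate Bayes, and posterior-sampling estimates of a normal mean.
import numpy as np

rng = np.random.default_rng(7)
true_mu, sigma2, n = 2.0, 9.0, 20
x = rng.normal(true_mu, np.sqrt(sigma2), size=n)

# Maximum likelihood: the sample mean
mu_ml = x.mean()

# Conjugate Bayes: normal prior N(mu0, tau2) with known data variance sigma2
mu0, tau2 = 1.0, 12.0
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (mu0 / tau2 + x.sum() / sigma2)

# Monte Carlo estimate from posterior draws (single-parameter sampling step)
draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)
mu_mcmc = draws.mean()

print(mu_ml, post_mean, mu_mcmc)
```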
Procedia PDF Downloads 357
38159 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data
Authors: Al Omari Moahmmed Ahmed
Abstract:
This paper describes the Bayesian estimator using Markov Chain Monte Carlo and Lindley’s approximation, as well as maximum likelihood estimation, for the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be obtained by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence the Markov Chain Monte Carlo method and Lindley’s approximation are used: the full conditional distributions for the parameters of the Weibull distribution are sampled via Gibbs sampling and the Metropolis-Hastings (MH) algorithm, after which the survival and hazard functions are estimated. The methods are compared with their maximum likelihood counterparts, and the comparisons are made with respect to the Mean Square Error (MSE) and absolute bias to determine the better method for the scale and shape parameters and for the survival and hazard functions.
Keywords: Weibull distribution, Bayesian method, Markov chain Monte Carlo, survival and hazard functions
Procedia PDF Downloads 479
38158 A New Computational Package for Using in CFD and Other Problems (Third Edition)
Authors: Mohammad Reza Akhavan Khaleghi
Abstract:
This paper presents the changes made to the Reduced Finite Element Method (RFEM); the result is intended to be the most powerful numerical method proposed so far (some forms of this method are so powerful that they can approximate the most complex equations as simply as the Laplace equation). The Finite Element Method (FEM) is a powerful numerical method that has been used successfully for the solution of existing problems in various scientific and engineering fields, such as in CFD. Many algorithms have been formulated based on FEM, but none have been used in popular CFD software. In that field, the Finite Volume Method (FVM) holds a full monopoly due to its better efficiency and adaptability to the physics of the problems in comparison with FEM. It does not seem that FEM could compete with FVM unless it were fundamentally changed. This paper presents those changes, and the result is a powerful method that has much better performance in all respects in comparison with FVM and other computational methods. This method is intended not to compete with the finite volume method but to replace it.
Keywords: reduced finite element method, new computational package, new finite element formulation, new higher-order form, new isogeometric analysis
Procedia PDF Downloads 119
38157 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
Keywords: bed topography, FBM, LBM, shallow water, simulations
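An approximate sketch of generating a 1D fractional-Brownian-motion bed profile by spectral synthesis: Fourier amplitudes are scaled as a power law of frequency with random phases. The Hurst exponent, grid size, and normalisation are illustrative; this is one common way to synthesize FBM, not necessarily the algorithm used in the paper.

```python
# Sketch: 1D FBM bed profile via spectral synthesis (power-law spectrum, random phases).
import numpy as np

def fbm_profile(n, hurst, rng):
    freqs = np.fft.rfftfreq(n, d=1.0)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-(2 * hurst + 1) / 2)   # spectral exponent for FBM
    phases = rng.uniform(0, 2 * np.pi, size=freqs.shape)
    spectrum = amplitude * np.exp(1j * phases)
    profile = np.fft.irfft(spectrum, n=n)
    return (profile - profile.mean()) / profile.std()     # normalised bed elevation

rng = np.random.default_rng(8)
bed = fbm_profile(n=1024, hurst=0.7, rng=rng)
print(bed[:5])
```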
Procedia PDF Downloads 99
38156 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). By considering two sets of data (raw data and simulated data) and two models of the GEV distribution (stationary and non-stationary), return level analysis is carried out. It was found that in the stationary model the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that will not be exceeded. The results of this paper are very important in agricultural and environmental research.
Keywords: forecasting, generalized extreme value (GEV), meteorology, return level
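A minimal sketch of the stationary block-maxima workflow: fit a GEV distribution to annual maxima and read off return levels from the fitted distribution. The simulated maxima and the chosen return periods are placeholders for the CDC temperature records; the paper's non-stationary model is not reproduced here.

```python
# Sketch: fit a stationary GEV to annual maxima and compute return levels.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
annual_max = genextreme.rvs(c=-0.1, loc=33.0, scale=1.2, size=40, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)           # fitted shape, location, scale

for T in (10, 50, 100):                               # return periods in years
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.2f} °C")
```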
Procedia PDF Downloads 482
38155 Inversion of Gravity Data for Density Reconstruction
Authors: Arka Roy, Chandra Prakash Dubey
Abstract:
Inverse problems are generally used for recovering hidden information from available outside data. Here the vertical component of the gravity field is used to compute the underlying density structure. The ill-posed nature of the problem is the main obstacle for any inverse problem. Linear regularization using the Tikhonov formulation is used, with an appropriate choice of SVD and GSVD components. When handling real data, the noise level should be low for a reliable solution. In our study, 2D and 3D synthetic models with rectangular grids are used for the gravity field calculation and its corresponding inversion for density reconstruction. A fine grid is also considered to capture any irregular structure. Keeping the algebraic ambiguity factor in mind, the number of observation points should be greater than the number of model parameters. A Picard plot is presented here for choosing the appropriate, or main controlling, eigenvalues for a regularized solution. Another important tool is the depth resolution plot (DRP). DRPs are generally used for studying how the inversion is influenced by regularization or discretization. Our further study involves the inversion of real gravity data from the Vredefort Dome, South Africa. We apply our method to these data. The resulting density structure is in good agreement with the known formations in that region, which lends additional support to our method.
Keywords: depth resolution plot, gravity inversion, Picard plot, SVD, Tikhonov formulation
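A minimal sketch of Tikhonov regularization via the SVD, the core linear-inversion step described above: filter factors damp the small singular values that would otherwise amplify noise. The kernel matrix, true model, noise level, and regularization parameter are synthetic and illustrative, not the paper's gravity kernel or Vredefort data.

```python
# Sketch: Tikhonov-regularized least squares using SVD filter factors.
import numpy as np

rng = np.random.default_rng(10)
G = rng.normal(size=(80, 120))              # stand-in forward (gravity kernel) matrix
m_true = np.zeros(120)
m_true[40:60] = 1.0                         # block of anomalous density
d = G @ m_true + rng.normal(0, 0.05, 80)    # noisy observations

U, s, Vt = np.linalg.svd(G, full_matrices=False)
alpha = 0.5                                 # Tikhonov regularization parameter
filters = s**2 / (s**2 + alpha**2)          # filter factors damping small singular values
m_est = Vt.T @ (filters * (U.T @ d) / s)

print("model misfit:", np.linalg.norm(m_est - m_true))
```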
Procedia PDF Downloads 213
38154 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method
Authors: Mohamad R. Moshtagh, Ahmad Bagheri
Abstract:
Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears being healthy, (2) one healthy gear and one faulty gear, and (3) introducing an imbalanced condition to a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis of the methods revealed the superior performance of the Wiener-CNN approach. The Wiener-CNN method achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for the two-class training dataset and 100% for the validation dataset. For the three-class scenario, the Wiener-MLP method demonstrated 100% accuracy in the training dataset and 95.3% accuracy in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset. In the three-class scenario, it achieved 85.3% accuracy in the training dataset and 77.2% in the validation dataset. The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while for the three-class scenario it attained 100% accuracy in the training dataset and 90.8% in the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system utilizes vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
Keywords: fault detection, gearbox, machine learning, Wiener method
Procedia PDF Downloads 81
38153 The Wear Recognition on Guide Surface Based on the Feature of Radar Graph
Authors: Youhang Zhou, Weimin Zeng, Qi Xie
Abstract:
In order to solve the wear recognition problem of the machine tool guide surface, a new machine tool guide surface recognition method based on the radar-graph barycentre feature is presented in this paper. Firstly, the gray mean value, skewness, projection variance, flatness and kurtosis features of the guide surface image data are defined as primary characteristics. Secondly, data visualization technology based on the radar graph is used: the visual barycentre graphical feature is derived from the radar plot of the multi-dimensional data. Thirdly, a classifier based on support vector machine technology is used; the radar-graph barycentre feature and the original wear features are put into the classifier separately for classification and for a comparative analysis of the classification and experimental results. The calculation and experimental results show that the method based on the radar-graph barycentre feature can detect the guide surface effectively.
Keywords: guide surface, wear defects, feature extraction, data visualization
Procedia PDF Downloads 519
38152 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing
Authors: H. Samadiyeh, R. Khajavi
Abstract:
A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is handled in the modal space. The method employs eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, while considerable simplicity of the parallel algorithm is also achieved. There is no requirement for any domain-decomposition procedure or inter-core data exchange. More important is the possibility of skipping the processing of less significant modes, which enables one to adjust the procedure to the level of accuracy needed. Thus, in addition to considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated by several numerical examples.
Keywords: Finite Difference Method, Graphics Processing Unit (GPU), Message Passing Interface (MPI), Modal, Wave propagation
Procedia PDF Downloads 297
38151 Mob Justice in Ghana: Implication for Peace
Authors: Ishaq Alhassan Meriga
Abstract:
This study examined the phenomenon of mob violence and its implications for peace in Ghana. The study used an archival review of media reports and content analysis of other secondary data, as well as eyewitness accounts. It examined trends and patterns of vigilante violence within the Ghanaian context. Results showed a considerable increase in the occurrence of mob violence within the last 10 years. Theft and robbery emerged as the most frequently suspected crimes for which victims were attacked, and the LGBT community was not spared. Cases of mob violence were most frequently reported in urban areas. This study has shown that the patterns, scope, nature, and implications of mob justice in Ghana are broadly similar to those found in other parts of Africa and the globe. Mob violence is identified as undermining the rule of law and thereby infringing on the fundamental human rights of the victims. It is confirmed to have a cycle of effects that is an impediment to the peace of the country. The study underscores the implications of mob violence for human life and dignity, and the need to revisit justice systems and punishment procedures and to resource and empower law enforcers to fight the menace of vigilantism. The archival study had a limitation regarding missing data: the majority of the cases used for the study lack information, mostly on perpetrators and on the steps taken by public authorities and security agencies after reports of a mob attack were lodged with them. The study recommends that further research be undertaken on the perpetrators and survivors of mob actions in order to get a holistic understanding of the phenomenon. This will give a more comprehensive view of the issue of mob violence in Ghana. From the findings, it can be concluded that mob justice is a social canker in Ghanaian communities, which has a great impact on the peace of the country.
Keywords: LGBT, mob justice, peace, vigilantism
Procedia PDF Downloads 90
38150 A Study on the Solutions of the 2-Dimensional and Fourth-Order Partial Differential Equations
Abstract:
In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method and the homotopy analysis method. These methods are used in many fields of engineering. The comparison is carried out by handling a class of 2-dimensional and fourth-order partial differential equations, the Kuramoto–Sivashinsky equations. Three numerical examples have also been carried out to validate and demonstrate the efficiency of the four methods. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods. This method is very effective and simple and can be applied to nonlinear problems arising in engineering.
Keywords: reduced differential transform method, Adomian decomposition method, variational iteration method, homotopy analysis method
Procedia PDF Downloads 435
38149 Bayesian Reliability of Weibull Regression with Type-I Censored Data
Authors: Al Omari Moahmmed Ahmed
Abstract:
In a Bayesian framework, we developed an approach using a non-informative prior with covariates, evaluated by the Gauss quadrature method, to estimate the covariate parameters and the reliability function of the Weibull regression distribution with Type-I censored data. For maximum likelihood, the estimators obtained are not available in closed form, although they can be solved for by using Newton-Raphson methods. The comparison criterion is the MSE, and the performance of these estimates is assessed by simulation considering various sample sizes and several specific values of the shape parameter. The results show that the Bayesian method with a non-informative prior is better than the maximum likelihood estimator.
Keywords: non-informative prior, Bayesian method, type-I censoring, Gauss quadrature
Procedia PDF Downloads 504
38148 Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses issues and techniques for the textile industry using data mining. Data mining was applied to garment stitching data obtained from a textile company. The data mining techniques applied to the data were the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used, and a decision model for production per person and the variables affecting production was found by this method. In the study, the results show that as the daily working time increases, the production per person decreases. In addition, the relationship between total daily working time and production per person is negative, and this is the strongest negative relationship observed for production per person.
Keywords: data mining, textile production, decision trees, classification
Procedia PDF Downloads 352
38147 Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach
Authors: Seyed Habib A. Rahmati, Mohsen Sadegh Amalnick
Abstract:
Different aspects of statistical process control (SPC) have been sketched in the fuzzy environment. However, measurement system analysis (MSA), as a main branch of SPC, is rarely investigated in the fuzzy area. This procedure assesses the suitability of the data to be used in later stages or decisions of the SPC. Therefore, this research focuses on some important measures of MSA and, through a new method, introduces these measures in a fuzzy environment. In this method, which works based on Buckley's approach, the imprecision and vagueness of real-world measurement are considered simultaneously. To do so, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly explained through an example.
Keywords: measurement, SPC, MSA, gauge capability (Cg and Cgk)
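For context, a minimal sketch of the crisp (non-fuzzy) gauge capability indices that the paper extends into the fuzzy domain via Buckley's approach. The 20%/10% tolerance fractions follow a commonly used Type-1 gauge study convention and are an assumption here, as are the measurement values; the fuzzification itself is not reproduced.

```python
# Sketch: crisp Cg and Cgk from repeated measurements of a reference part.
# The 0.2/0.1 tolerance fractions are a common convention, assumed for illustration.
import numpy as np

measurements = np.array([10.02, 10.01, 9.99, 10.03, 10.00, 10.02, 9.98, 10.01,
                         10.00, 10.02, 9.99, 10.01, 10.03, 10.00, 10.01])
reference = 10.00            # calibrated reference value of the master part
tolerance = 0.20             # total tolerance of the characteristic

s_g = measurements.std(ddof=1)
x_bar = measurements.mean()

cg = (0.2 * tolerance) / (6 * s_g)                            # repeatability only
cgk = (0.1 * tolerance - abs(x_bar - reference)) / (3 * s_g)  # repeatability plus bias

print(f"Cg = {cg:.2f}, Cgk = {cgk:.2f}")
```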
Procedia PDF Downloads 652
38146 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model
Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis
Abstract:
Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. This new method has been tested with all known transit signals, with 100% confirmation. In addition, this new method has been successfully applied to the Kepler Objects of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry
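A minimal sketch of phase folding, the core operation behind any folding search: the light curve is folded at a trial period and binned in phase so that a periodic transit stacks up. This CPU/NumPy version only illustrates the idea; the paper's GPU implementation, period search, and CNN classifier are not reproduced, and the cadence and injected transit below are assumptions.

```python
# Sketch: fold a synthetic light curve at a trial period and bin it in phase.
import numpy as np

def fold(time, flux, period, n_bins=200):
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    binned = np.bincount(bins, weights=flux, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return binned / np.maximum(counts, 1)          # mean flux per phase bin

rng = np.random.default_rng(11)
t = np.arange(0, 90, 0.0204)                       # ~90 days of long-cadence-like samples (assumed)
flux = 1.0 + rng.normal(0, 3e-4, t.size)
in_transit = (t % 7.3) < 0.1                       # injected 7.3-day transit, 0.1-day duration
flux[in_transit] -= 5e-4

profile = fold(t, flux, period=7.3)
print("deepest phase-bin depth:", 1.0 - profile.min())
```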
Procedia PDF Downloads 225