Search results for: forecasting accuracy
2034 Advanced Mechatronic Design of Robot Manipulator Using Hardware-In-The-Loop Simulation
Authors: Reza Karami, Ali Akbar Ebrahimi
Abstract:
This paper discusses concurrent engineering of robot manipulators, based on the Holistic Concurrent Design (HCD) methodology and using a hardware-in-the-loop simulation platform. The methodology allows numerous design variables of different natures to be considered concurrently. It redefines the ultimate goal of design based on the notion of satisfaction, resulting in the simplification of the multi-objective constrained optimization process. It also formalizes the effect of the designer's subjective attitude on the process. To enhance modeling efficiency in both computation and accuracy, a hardware-in-the-loop simulation platform is used, which involves physical joint modules and the control unit in addition to the software modules. This platform is implemented in the HCD design architecture to reliably evaluate the design attributes and the performance super criterion during the design process. The resulting overall architecture is applied to redesigning the kinematic, dynamic and control parameters of an industrial robot manipulator.
Keywords: concurrent engineering, hardware-in-the-loop simulation, robot manipulator, multidisciplinary systems, mechatronics
Procedia PDF Downloads 454
2033 A Relational Case-Based Reasoning Framework for Project Delivery System Selection
Authors: Yang Cui, Yong Qiang Chen
Abstract:
An appropriate project delivery system (PDS) is crucial to the success of a construction project. Case-based reasoning (CBR) is a useful support for PDS selection. However, the traditional CBR approach represents cases as attribute-value vectors without taking relations among attributes into consideration, and it cannot calculate the similarity when the structures of cases are not strictly the same. Therefore, this paper solves this problem by adopting the relational case-based reasoning (RCBR) approach for PDS selection, considering both structural similarity and feature similarity. To develop the feature terms of the construction projects, the criteria and factors governing the PDS selection process are first identified. Then, feature terms for the construction projects are developed. Finally, the mechanism of similarity calculation and a case study show how RCBR works for PDS selection. The adoption of RCBR in PDS selection expands the scope of application of the traditional CBR method and improves the accuracy of the PDS selection system.
Keywords: relational case-based reasoning, case-based reasoning, project delivery system, PDS selection
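As a rough illustration of the retrieval idea described above (not taken from the paper), the following Python sketch blends a weighted feature similarity with a simple structural similarity over relation triples; the attribute names, weights and the blending factor alpha are all assumptions.

```python
# Illustrative sketch only: a hypothetical similarity score combining feature and
# structural similarity for case-based PDS retrieval. All names and weights are assumed.

def feature_similarity(case_a, case_b, weights):
    """Weighted match over shared attributes; attribute names are hypothetical."""
    shared = set(case_a) & set(case_b) & set(weights)
    if not shared:
        return 0.0
    score = sum(weights[k] * (1.0 if case_a[k] == case_b[k] else 0.0) for k in shared)
    return score / sum(weights[k] for k in shared)

def structural_similarity(relations_a, relations_b):
    """Jaccard overlap of relation triples (attribute, relation, attribute)."""
    if not relations_a and not relations_b:
        return 1.0
    return len(relations_a & relations_b) / len(relations_a | relations_b)

def rcbr_similarity(case_a, case_b, alpha=0.5):
    """Blend of structural and feature similarity; alpha is an assumed trade-off weight."""
    weights = {"project_scale": 0.4, "owner_experience": 0.3, "schedule_pressure": 0.3}
    f = feature_similarity(case_a["features"], case_b["features"], weights)
    s = structural_similarity(case_a["relations"], case_b["relations"])
    return alpha * s + (1 - alpha) * f

new_case = {"features": {"project_scale": "large", "owner_experience": "low",
                         "schedule_pressure": "high"},
            "relations": {("project_scale", "constrains", "schedule_pressure")}}
past_case = {"features": {"project_scale": "large", "owner_experience": "high",
                          "schedule_pressure": "high"},
             "relations": {("project_scale", "constrains", "schedule_pressure")}}
print(rcbr_similarity(new_case, past_case))
```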
Procedia PDF Downloads 432
2032 Simultaneous Interpreting in the European Parliament: Linguistic Quality of the Political Discourse: An Empirical Analysis
Authors: Alicja Zapolnik-Plachetka
Abstract:
The paper examines how the language choice of Members of the European Parliament (MEPs) affects the linguistic quality of their political discourse as delivered by the interpreters. The study, designed by the author, who is an EU interpreter herself, consisted of three phases. First, a number of speeches by Polish and Spanish MEPs were analyzed to determine whether the incidence of certain figures of speech depended on whether the speech had been delivered in English or in the speaker's mother tongue. Then the use of figures of speech was also analyzed in speeches by some British MEPs, in order to determine the incidence for native users of English. Subsequently, the speeches were compared with their interpretations to find out whether the interpreters managed to convey accurately the means of oratory used by the MEPs. The final result shows that in institutional environments dependent on simultaneous interpretation, the speakers' choices can, in fact, influence the linguistic quality of the political communication.
Keywords: content accuracy, European Parliament, political discourse, simultaneous interpreting
Procedia PDF Downloads 130
2031 Using Data Mining Techniques to Evaluate the Different Factors Affecting the Academic Performance of Students at the Faculty of Information Technology in Hashemite University in Jordan
Authors: Feras Hanandeh, Majdi Shannag
Abstract:
This research studies the different factors that could affect the accumulative average of students at the Faculty of Information Technology in Hashemite University. The paper examines student information, background, and academic records, and how this information affects students' ability to obtain high grades. The student information used in the study is extracted from the students' academic records. Data mining tools and techniques are used to decide which attribute(s) affect the students' accumulative average. The results show that the most important factor affecting the students' accumulative average is the student acceptance type. A decision tree model and rules were built to determine how students can obtain high grades in their courses. The overall accuracy of the model is 44%, which is an accepted rate.
Keywords: data mining, classification, extracting rules, decision tree
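A minimal sketch of the kind of workflow the abstract describes, with hypothetical student attributes and labels (not the study's data); scikit-learn's decision tree is used here as a stand-in for the data mining tool actually employed.

```python
# Illustrative only: fit a small decision tree on assumed student records to see which
# attributes (e.g., acceptance type) drive the accumulative-average class.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "acceptance_type": ["regular", "parallel", "regular", "parallel", "regular", "parallel"],
    "high_school_avg": [88, 75, 92, 70, 85, 78],
    "credit_hours":    [15, 12, 18, 12, 15, 14],
    "avg_class":       ["high", "low", "high", "low", "high", "low"],  # target label
})
X = pd.get_dummies(df[["acceptance_type", "high_school_avg", "credit_hours"]])
y = df["avg_class"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```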
Procedia PDF Downloads 416
2030 Automated Marker Filling System
Authors: Pinisetti Swami Sairam, Meera C. S.
Abstract:
Marker pens are widely used all over the world, mainly in educational institutions, due to their neat, accurate and easily erasable nature. But refilling the ink in these pens is a tedious and time-consuming job. Besides, it requires careful handling of the pens and the ink bottle. A fully automated marker filling system is a solution developed to overcome this problem. The system comprises pneumatic and electronic modules as well as PLC control. The system is designed in such a way that the empty markers are dumped in a marker container and then sent through the different modules of the system in order to be refilled automatically. The filled markers are then collected in a marker container. Refilling of ink takes place in different stages inside the system. An ink detecting system detects the colour of the marker which is to be filled, and then refilling is done. Processes like capping and uncapping as well as screwing and unscrewing of the tip are done with the help of a robotic arm and gripper. Pneumatics is used in this system in order to achieve precision while performing the capping, screwing, and refilling operations. Thus, with the help of this system, we can achieve cleanliness, accuracy, effectiveness and time savings in the process of filling a marker.
Keywords: automated system, marker filling, information technology, control and automation
Procedia PDF Downloads 497
2029 Estimating Solar Irradiance on a Tilted Surface Using Artificial Neural Networks with Differential Outputs
Authors: Hsu-Yung Cheng, Kuo-Chang Hsu, Chi-Chang Chan, Mei-Hui Tseng, Chih-Chang Yu, Ya-Sheng Liu
Abstract:
Photovoltaic modules are usually not installed horizontally to avoid water or dust accumulation. However, measured irradiance data on tilted surfaces are rarely available, since installing pyranometers with various tilt angles induces high costs. Therefore, estimating solar irradiance on tilted surfaces is an important research topic. In this work, artificial neural networks (ANN) are utilized to construct the transfer model to estimate solar irradiance on tilted surfaces. Instead of predicting tilted irradiance directly, the proposed method estimates the differences between the horizontal irradiance and the irradiance on a tilted surface. The outputs of the ANNs in the proposed design are differential values. The experimental results have shown that the proposed ANNs with differential outputs can substantially improve the estimation accuracy compared to ANNs that estimate the tilted irradiance directly.
Keywords: photovoltaics, artificial neural networks, tilted irradiance, solar energy
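A minimal sketch of the differential-output idea, using synthetic data and an assumed small MLP rather than the authors' network: the model learns the difference between horizontal and tilted irradiance, and the tilted estimate is recovered by adding the prediction back to the horizontal measurement.

```python
# Illustrative only: synthetic irradiance data and an assumed network architecture.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
horizontal = rng.uniform(100, 1000, n)            # measured horizontal irradiance (W/m^2)
solar_elev = rng.uniform(10, 80, n)               # solar elevation angle (deg), assumed input
tilted_true = horizontal * (1 + 0.2 * np.cos(np.radians(solar_elev)))  # synthetic target

X = np.column_stack([horizontal, solar_elev])
diff_target = tilted_true - horizontal            # differential output, as in the abstract

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X, diff_target)

tilted_est = horizontal + model.predict(X)        # add the predicted difference back
print("RMSE:", np.sqrt(np.mean((tilted_est - tilted_true) ** 2)))
```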
Procedia PDF Downloads 397
2028 Automatic Censoring in K-Distribution for Multiple Targets Situations
Authors: Naime Boudemagh, Zoheir Hammoudi
Abstract:
The estimation of the parameters of the K-distribution is an essential part of radar detection. In fact, the presence of interfering targets in reference cells causes a decrease in detection performance. In such a situation, the estimates of the shape and scale parameters are far from the actual values. In order to avoid interfering targets, we propose an Automatic Censoring (AC) algorithm for radar interfering targets in the K-distribution. The censoring technique used in this work offers a good discrimination between homogeneous and non-homogeneous environments. The homogeneous population is then used to estimate the unknown parameters by the classical Method of Moments (MOM). The AC algorithm does not need any prior information about the clutter parameters, nor does it require the number or the positions of interfering targets. The accuracy of the parameter estimates obtained by this algorithm is validated and compared to various actual values of the shape parameter using Monte Carlo simulations; the latter show that the probabilities of censoring in multiple-target situations are in good agreement.
Keywords: parameter estimation, method of moments, automatic censoring, K-distribution
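As a hedged illustration of the method-of-moments step only (the censoring stage is not reproduced), the sketch below simulates K-distributed amplitudes as Rayleigh speckle modulated by unit-mean gamma texture and recovers the shape parameter from the ratio of the fourth and second sample moments; the parameterization is an assumption.

```python
# Illustrative MOM sketch under an assumed compound parameterization:
# E[x^4] / E[x^2]^2 = 2 * (1 + 1/nu) for this convention.
import numpy as np

rng = np.random.default_rng(1)
nu_true, sigma = 2.5, 1.0
n = 200_000

texture = rng.gamma(shape=nu_true, scale=1.0 / nu_true, size=n)  # unit-mean gamma texture
speckle = rng.rayleigh(scale=sigma, size=n)                       # Rayleigh speckle
x = np.sqrt(texture) * speckle                                    # K-distributed amplitude

m2 = np.mean(x ** 2)
m4 = np.mean(x ** 4)
nu_hat = 1.0 / (m4 / (2.0 * m2 ** 2) - 1.0)      # shape estimate from moment ratio
sigma_hat = np.sqrt(m2 / 2.0)                    # scale estimate under this convention

print(f"shape: true={nu_true}, MOM estimate={nu_hat:.3f}")
print(f"scale: true={sigma}, MOM estimate={sigma_hat:.3f}")
```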
Procedia PDF Downloads 373
2027 Reduction of Rotor-Bearing-Support Finite Element Model through Substructuring
Authors: Abdur Rosyid, Mohamed El-Madany, Mohanad Alata
Abstract:
Due to simplicity and low cost, rotordynamic systems are often modeled using lumped parameters. Recently, finite elements have been used to model rotordynamic systems, as they offer higher accuracy. However, they involve a high number of degrees of freedom. In some applications, such as control design, this incurs higher cost. For this reason, various model reduction methods have been proposed. This work demonstrates the quality of model reduction of a rotor-bearing-support system through substructuring. The quality of the model reduction is evaluated by comparing the first few natural frequencies, modal damping ratios, critical speeds and responses of the full system and the reduced system. The simulation shows that substructuring is adequate to reduce the finite element rotor model in the frequency range of interest, as long as the number and locations of master nodes are determined appropriately. However, the reduction is less accurate in an unstable or nearly unstable system.
Keywords: rotordynamic, finite element model, Timoshenko beam, 3D solid elements, Guyan reduction method
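A minimal Guyan (static condensation) sketch on an assumed toy system, not the authors' rotor-bearing-support model, showing how slave degrees of freedom are condensed out so that only master nodes remain.

```python
# Illustrative only: 4-DOF toy matrices, assumed master/slave split.
import numpy as np

K = np.array([[ 4., -2.,  0.,  0.],
              [-2.,  4., -2.,  0.],
              [ 0., -2.,  4., -2.],
              [ 0.,  0., -2.,  4.]])
M = np.diag([1.0, 1.5, 1.5, 1.0])

master = [0, 3]          # DOFs kept (e.g., bearing/support locations)
slave = [1, 2]           # DOFs condensed out

Kmm = K[np.ix_(master, master)]; Kms = K[np.ix_(master, slave)]
Ksm = K[np.ix_(slave, master)];  Kss = K[np.ix_(slave, slave)]

T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])   # Guyan transformation
# K and M partitioned into [master; slave] ordering to match T.
K_part = np.block([[Kmm, Kms], [Ksm, Kss]])
M_part = np.block([[M[np.ix_(master, master)], M[np.ix_(master, slave)]],
                   [M[np.ix_(slave, master)],  M[np.ix_(slave, slave)]]])

K_red = T.T @ K_part @ T
M_red = T.T @ M_part @ T

# Compare natural frequencies (rad/s) of the full and reduced models.
w_full = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))
w_red = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M_red, K_red)).real))
print("full model:", w_full[:2], " reduced model:", w_red)
```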
Procedia PDF Downloads 272
2026 Discussing Concept Gratitude of Muslim Consumers Based on Islamic Law: A Confirmation on the Theory of Consumer Satisfaction through Imam Al-Ghazali's Thought
Authors: Suprihatin Soewarto
Abstract:
The background of this paper is to assess the validity of the rejection, by some Muslim scholars who develop Islamic economics, of the concept of consumer satisfaction and its replacement with the concept of maslahah. From the perspective of Islamic law, this rejection needs to be verified in order to know the accuracy of replacing the concept of satisfaction with maslahah as part of consumer behavior. This is done so that the replacement of the term satisfaction with maslahah is objective. An objective replacement of the term will surely be more enlightening and more just than a subjective substitution. Therefore, this paper aims to answer whether the concept of satisfaction needs to be replaced, and whether it is possible for Islamic law to confirm the theory of consumer satisfaction. The method of writing this paper is a literature review with a critical analysis approach. The results of this study are an explanation of the similarities and differences between the theory of consumer satisfaction and the theory of consumer maslahah according to Islamic law, a disclosure of the concept of consumer gratitude according to Islamic law, and its implementation in Muslim consumer demand theory.
Keywords: consumer's gratitude, Islamic law, confirmation, consumer satisfaction
Procedia PDF Downloads 208
2025 Cryogenic Machining of Sawdust Incorporated Polypropylene Composites
Authors: K. N. Umesh
Abstract:
Wood Polymer Composites (WPC) were synthesized artificially by combining polypropylene, wood and resin. It is difficult to obtain a good surface finish on WPC by conventional machining because of material degradation due to the excessive heat generated during the process. In order to preserve the material properties and deliver a better surface finish and accuracy, a proper solution is devised for the machining of wood composites at low temperature. This research focuses on studying the effects of cryogenic machining parameters on sawdust-incorporated polypropylene composite material, with a view to evolving the most suitable composition and an appropriate combination of process parameters. The machining characteristics of six different compositions of WPC were evaluated by analyzing the trend. An attempt is made to determine proper combinations of material composition and process control parameters through process capability studies. A WPC of 80% wood (sawdust particles), 20% polypropylene and 0% resin was found to be the best alternative for obtaining the best surface finish under cryogenic machining conditions.
Keywords: cryogenic machining, process capability, surface finish, wood polymer composites
Procedia PDF Downloads 249
2024 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissues. It is consequently hard to quantify mammographic breast density automatically. Therefore, a pre-processing step is needed to segment the pectoral muscle, which may erroneously be quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform is applied to find the limit of the pectoral muscle, followed by the active contour method. The seed of the active contour is placed at the limit of the pectoral muscle found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, enhancing the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, Hough transform, pectoral muscle
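A much-simplified sketch of the pipeline on a synthetic image (Otsu thresholding, a Hough line, an active contour seeded on that line, and a Jaccard index); it uses scikit-image instead of the authors' Matlab code, and all parameters are assumptions.

```python
# Illustrative only: synthetic MLO-like image with a bright triangular "pectoral" corner.
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks
from skimage.segmentation import active_contour

img = np.zeros((256, 256))
rr, cc = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
img[rr + cc < 180] = 0.9
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)

mask = img > threshold_otsu(img)                      # remove non-biological background
edges = canny(mask.astype(float), sigma=2)
_, best_angle, best_dist = [v[0] for v in hough_line_peaks(*hough_line(edges), num_peaks=1)]

# Seed a snake along the detected Hough line and refine it with an active contour.
t = np.linspace(0, 255, 100)                          # column coordinates
rows = (best_dist - t * np.cos(best_angle)) / np.sin(best_angle)
snake0 = np.column_stack([np.clip(rows, 0, 255), t])  # (row, col) coordinates
snake = active_contour(gaussian(img, 3), snake0, alpha=0.01, beta=1.0,
                       boundary_condition="fixed")

# Jaccard index between a crude automatic region and the synthetic ground truth.
auto = rr + cc < (snake[:, 0].mean() + snake[:, 1].mean())
ref = rr + cc < 180
jaccard = np.logical_and(auto, ref).sum() / np.logical_or(auto, ref).sum()
print("Jaccard index:", round(float(jaccard), 3))
```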
Procedia PDF Downloads 350
2023 Simulation and Experimental Study on Tensile Force Measurement of PS Tendons Using an Embedded EM Sensor
Authors: ByoungJoon Yu, Junkyeong Kim, Seunghee Park
Abstract:
The estimation of the tensile force of PS tendons is in great demand for monitoring the structural health condition of PSC girder bridges. Measuring the tensile force of the PS tendons inside the PSC girder using conventional methods is hard due to their location. In this paper, an embedded EM sensor based tensile force estimation of a PS tendon was carried out by measuring the permeability of the PS tendons in a PSC girder. The permeability changes with the induced tensile force due to the magneto-elastic effect, and this effect then leads to a gradient change of the B-H curve. An experiment was performed to obtain the signals from the EM sensor using three down-scaled PSC girder models. The permeability of the PS tendons decreased proportionally with the increase of the tensile forces. To verify the experimental results, a simulation of tensile force estimation will be conducted in a further study. Consequently, it is expected that both the experimental results and the simulation results will increase the accuracy of the tensile force estimation, and then it could be one of the solutions for evaluating the performance of PSC girders.
Keywords: tensile force estimation, embedded EM sensor, PSC girder, EM sensor simulation, cross section loss
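As a hedged illustration of how such a sensor could be used in practice (synthetic numbers, not the experimental data), a simple linear calibration maps a relative permeability reading back to a tensile force estimate, reflecting the reported proportional decrease.

```python
# Illustrative only: assumed calibration points, not measured values.
import numpy as np

force_kN = np.array([0, 50, 100, 150, 200, 250])                # applied tensile force (assumed)
permeability = np.array([1.00, 0.97, 0.94, 0.91, 0.88, 0.85])   # relative permeability (assumed)

slope, intercept = np.polyfit(permeability, force_kN, 1)        # inverse calibration line

def estimate_force(mu_reading):
    """Estimate tendon tensile force (kN) from a relative permeability reading."""
    return slope * mu_reading + intercept

print(estimate_force(0.90))   # force corresponding to the given permeability reading
```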
Procedia PDF Downloads 479
2022 Determination of Iron, Zinc, Copper, Cadmium and Lead in Different Cigarette Brands in Yemen by Atomic Absorption Spectrometry
Authors: Ali A. Mutair
Abstract:
The concentration levels of iron (Fe), copper (Cu), zinc (Zn), cadmium (Cd) and lead (Pb) in different cigarette brands commonly produced and sold in Yemen were determined. Convenient sample treatment of cigarette tobacco from freshly opened packs was achieved by a sample preparation method based on dry digestion, and the concentrations of the analysed metals were measured by Flame Atomic Absorption Spectrometry (FAAS). The mean values obtained for Fe, Zn, Cu, Cd, and Pb in different Yemeni cigarette tobaccos were 311, 52.2, 10.11, 1.71 and 4.06 µg/g dry weight, respectively. There is no significant difference among the cigarette brands tested. It was found that Fe was at the highest concentration, followed by Zn, Cu, Pb and Cd. The average relative standard deviation (RSD) ranged from 1.77% to 19.34%. The accuracy and precision of the results were checked by blank and recovery tests. The results show that Yemeni cigarettes contain heavy metal concentration levels that are similar to those in foreign cigarette brands reported by other studies worldwide.
Keywords: iron, zinc, copper, lead, cadmium, tobacco, Yemeni cigarette brands, atomic absorption spectrometry
Procedia PDF Downloads 359
2021 RGB-D SLAM Algorithm Based on pixel level Dense Depth Map
Authors: Hao Zhang, Hongyang Yu
Abstract:
Scale uncertainty is a well-known challenging problem in visual SLAM. Because an RGB-D sensor provides depth information, RGB-D SLAM mitigates this scale uncertainty problem. However, due to the limitations of the physical hardware, the depth map output by an RGB-D sensor usually contains large areas of missing depth values. This missing depth information affects the accuracy and robustness of RGB-D SLAM. In order to reduce these effects, this paper completes the missing areas of the depth map output by the RGB-D sensor and then fuses the completed dense depth map into ORB SLAM2. By adding the process of obtaining pixel-level dense depth maps, a better RGB-D visual SLAM algorithm is finally obtained. In the process of obtaining dense depth maps, a deep learning model of indoor scenes is adopted. Experiments are conducted on public datasets and real-world indoor environments. Experimental results show that the proposed SLAM algorithm has better robustness than ORB SLAM2.
Keywords: RGB-D, SLAM, dense depth, depth map
Procedia PDF Downloads 140
2020 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children
Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh
Abstract:
Autism Spectrum Disorder (ASD) is a condition related to issues with brain development that affects how a person recognises and communicates with others, which results in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnoses and management of the condition. Therefore, it is crucial to develop a method that will achieve good results and high accuracy for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Feature selection was then used to provide fewer attributes from the ASD datasets while preserving model performance. As a result, we found that the best result was provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine
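A minimal sketch of the screening workflow with synthetic data standing in for the toddler/children datasets: feature selection to keep fewer attributes, followed by an SVM; the selector and its parameters are assumptions.

```python
# Illustrative only: synthetic stand-in for the screening data and an assumed pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=15, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    MinMaxScaler(),                 # chi2 requires non-negative inputs
    SelectKBest(chi2, k=10),        # keep fewer attributes while preserving performance
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_train, y_train)
print("SVM test accuracy:", model.score(X_test, y_test))
```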
Procedia PDF Downloads 152
2019 Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning
Authors: Yasmine Abu Adla, Racha Soubra, Milana Kasab, Mohamad O. Diab, Aly Chkeir
Abstract:
Over the past several years, researchers have shown great interest in assessing the mobility of elderly people to measure their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. Consequently, this study aims to provide an at-home monitoring system to assess the patient's status continuously. Thus, we proposed a technique to automatically detect when a subject sits down while walking at home. In this study, we utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their intraclass correlation coefficient (ICC > 0.75). Accordingly, the sequential floating forward selection wrapper was applied to further narrow down the final feature vector. Finally, 5 features were introduced to the linear discriminant analysis classifier, and an accuracy of 93.75% was achieved, as well as a precision and recall of 95% and 90%, respectively.
Keywords: Doppler radar system, stand-to-sit phase, TUG test, machine learning, classification
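A minimal sketch of the classification stage on synthetic features: sequential feature selection feeding a linear discriminant analysis classifier. The floating variant used in the study is approximated here by scikit-learn's non-floating SequentialFeatureSelector, and the ICC screening is assumed to have been done beforehand.

```python
# Illustrative only: synthetic features stand in for the 11 ICC-screened radar features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=11, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select=5, direction="forward", cv=5)
sfs.fit(X_train, y_train)

X_train_sel, X_test_sel = sfs.transform(X_train), sfs.transform(X_test)
lda.fit(X_train_sel, y_train)
print("selected feature mask:", sfs.get_support())
print("LDA test accuracy:", lda.score(X_test_sel, y_test))
```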
Procedia PDF Downloads 161
2018 Application of Artificial Neural Network in Initiating Cleaning Of Photovoltaic Solar Panels
Authors: Mohamed Mokhtar, Mostafa F. Shaaban
Abstract:
Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem facing the growth of solar power plants. The accumulation of dust on the solar panels significantly degrades the output from these panels. Hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions when required in order to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used in the training of the ANN model. Then, this ANN model is fed with the output power from the solar panels, the ambient temperature, and the solar irradiance, and thus it is able to estimate the amount of dust accumulated on the solar panels under these conditions. The model was tested on different case studies to confirm the accuracy of the developed model.
Keywords: machine learning, dust, PV panels, renewable energy
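An illustrative sketch with a synthetic power model, not the experimental data: an ANN maps output power, ambient temperature and irradiance to an estimated dust level, and cleaning is triggered when the estimate crosses an assumed threshold.

```python
# Illustrative only: the power/dust relationship and threshold are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
irradiance = rng.uniform(200, 1000, n)              # W/m^2
temperature = rng.uniform(20, 45, n)                # deg C
dust_level = rng.uniform(0, 1, n)                   # normalized dust accumulation (target)
power = 0.18 * irradiance * (1 - 0.3 * dust_level) * (1 - 0.004 * (temperature - 25))

X = np.column_stack([power, temperature, irradiance])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
ann.fit(X, dust_level)

THRESHOLD = 0.6                                     # assumed cleaning trigger level
for d in ann.predict(X[:5]):
    print(f"estimated dust {d:.2f} -> trigger cleaning: {d > THRESHOLD}")
```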
Procedia PDF Downloads 144
2017 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics
Authors: Anas H. Aljemely, Jianping Xuan
Abstract:
Rolling bearing fault diagnosis plays a pivotal role in the rotating machinery of modern manufacturing. In this research, a raw vibration signal and an improved deep learning method for bearing fault diagnosis are proposed. Multi-dimensional scales of the raw vibration signals are selected for evaluation of the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an exponential linear unit (ELU) layer in a convolutional neural network (CNN); the ELU performs the identity function on positive inputs and an exponential nonlinearity on negative inputs, and a particular convolutional operation extracts valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases the training and testing speed.
Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features
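A minimal sketch of a 1-D CNN with ELU activations over raw vibration segments, assuming TensorFlow/Keras as the framework and arbitrary layer sizes rather than the authors' exact architecture.

```python
# Illustrative only: segment length, layer sizes and class count are assumptions.
import numpy as np
import tensorflow as tf

SEG_LEN, N_CLASSES = 1024, 4                    # assumed segment length and fault classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEG_LEN, 1)),
    tf.keras.layers.Conv1D(16, 64, strides=8, activation="elu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 3, activation="elu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="elu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for labelled raw vibration segments.
x = np.random.randn(64, SEG_LEN, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=64)
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
print(model.evaluate(x, y, verbose=0))
```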
Procedia PDF Downloads 210
2016 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
The instance selection (IS) technique is used to reduce the data size to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets with the MapReduce framework. Besides ensuring the prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: instance selection, data reduction, MapReduce, kNN
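As a simplified illustration of instance selection only, the sketch below runs a Hart-style condensed nearest neighbour pass (not the full FCNN rule and without the MapReduce layer) and compares 1-NN accuracy before and after reduction.

```python
# Illustrative only: a basic condensation pass on synthetic blobs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=1500, centers=3, cluster_std=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def condense(X, y):
    """Keep instances that the current condensed set misclassifies with 1-NN."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        for i in range(len(X)):
            if i not in keep and knn.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)
                knn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
                changed = True
    return np.array(keep)

idx = condense(X_tr, y_tr)
acc_full = KNeighborsClassifier(1).fit(X_tr, y_tr).score(X_te, y_te)
acc_red = KNeighborsClassifier(1).fit(X_tr[idx], y_tr[idx]).score(X_te, y_te)
print(f"kept {len(idx)}/{len(X_tr)} instances; "
      f"1-NN test accuracy full={acc_full:.3f}, reduced={acc_red:.3f}")
```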
Procedia PDF Downloads 253
2015 Behavior Study of Concrete-Filled Thin-Walled Square Hollow Steel Stub Columns
Authors: Mostefa Mimoune
Abstract:
Test results on concrete-filled steel tubular stub columns under axial compression are presented. The study was mainly focused on square hollow section (SHS) columns; 27 columns were tested. The main experimental parameters considered were the thickness of the tube, the column length and the cross-section sizes. Existing design codes and a theoretical model were used to predict the load-carrying capacities of the composite sections, in order to compare the accuracy of the predictions obtained using the recommendations of DTR-BC (Algerian code), CSA (Canadian standard), AIJ, EC4, DBJ, AISC, BS and EC4. Experimental results indicate that the studied parameters have a significant influence on both the compressive load capacity and the column failure mode. All codes used in the comparison provide higher resistances compared to those of the tests. An equation method has been suggested to evaluate the axial capacity of the composite section and seems to be in agreement with the tests.
Keywords: axial loading, composite section, concrete-filled steel tubes, square hollow section
Procedia PDF Downloads 378
2014 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle
Authors: Ryan Messina, Mehedi Hasan
Abstract:
This research examines the impacts of using data to generate performance requirements for automation in visual inspections using machine vision. These situations are intended for design, and for how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems. This framework utilizes varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks, to assist in designing the system; this means that real data from the system is always referenced, which minimizes errors between participating parties. We propose using three indicators to know whether a project has a high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved a better integration into operations after applying the framework.
Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking
Procedia PDF Downloads 204
2013 Using Machine Learning to Predict Answers to Big-Five Personality Questions
Authors: Aadityaa Singla
Abstract:
The big five personality traits are as follows: openness, conscientiousness, extraversion, agreeableness, and neuroticism. In order to get an insight into their personality, many people turn to these categories, each of which has different meanings and characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this is still a rather novel development. It is possible for various AI classification models to accurately predict a personality question via ten input questions. This contrasts with the hundred questions that people normally have to answer to gain a complete picture of their five personality traits. In order to approach this problem, various AI classification models were used on a dataset to predict what a user may answer. From there, the model's prediction was compared to the actual response. Normally, there are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. By utilizing an MLP classifier, decision tree, linear model, and K-nearest neighbors, test accuracies of 86.643, 54.625, 47.875, and 52.125 were obtained, respectively. These approaches show that there is potential in the future for more nuanced predictions to be made regarding personality.
Keywords: machine learning, personality, big five personality traits, cognitive science
Procedia PDF Downloads 145
2012 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling activity events in real-time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for this field, in real-time. The correlation used is a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real-time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures. A set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data, and to recognize the indicators of this issue at a field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
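A heavily simplified sketch of the real-time component with synthetic signals: a rolling correlation of a live stream against a pre-determined signature raises an alert above a threshold. The actual tool uses a CFS-based correlation on WITSML data; the window size and threshold here are assumptions.

```python
# Illustrative only: synthetic signature and stream, assumed alert threshold.
import numpy as np

rng = np.random.default_rng(0)
signature = np.sin(np.linspace(0, 3 * np.pi, 60))          # pre-determined signature (assumed)
live = rng.normal(0, 0.3, 500)
live[300:360] += signature                                  # embed an "incident" in the stream

THRESHOLD = 0.8                                             # assumed alert threshold

def rolling_correlation(stream, template):
    out = np.zeros(len(stream) - len(template) + 1)
    for i in range(len(out)):
        window = stream[i:i + len(template)]
        out[i] = np.corrcoef(window, template)[0, 1]
    return out

corr = rolling_correlation(live, signature)
alerts = np.where(corr > THRESHOLD)[0]
print("probability-like correlation peak:", round(float(corr.max()), 3))
print("alert raised at sample(s):", alerts[:5])
```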
Procedia PDF Downloads 229
2011 Filtering and Reconstruction System for Grey-Level Forensic Images
Authors: Ahd Aljarf, Saad Amin
Abstract:
Images are an important source of information used as evidence during any investigation process. Their clarity and accuracy are essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and having noise added to them, either after alteration or when the image was taken initially; therefore, having a high-performance image processing system and implementing it is very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For different reasons, packets that store data can be affected, harmed or even lost because of noise. For example, sending the image through a wireless channel can cause loss of bits. These types of errors can degrade the visual display quality of forensic images. Two image problems are covered: noise and lost blocks. Information that gets transmitted through any means of communication may suffer alteration from its original state or even lose important data due to channel noise. Therefore, a developed system is introduced to improve the quality and clarity of forensic images.
Keywords: image filtering, image reconstruction, image processing, forensic images
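An illustrative sketch of the two repairs mentioned (impulsive noise and lost blocks) using OpenCV on a synthetic grey-level image; the filters and parameters are assumptions, not the paper's system.

```python
# Illustrative only: synthetic degradation, median filtering and inpainting.
import cv2
import numpy as np

img = np.full((128, 128), 120, dtype=np.uint8)              # stand-in grey-level image
img[32:96, 32:96] = 200

noisy = img.copy()
salt = np.random.default_rng(0).random(img.shape) < 0.05    # 5% salt noise
noisy[salt] = 255
noisy[60:70, 10:30] = 0                                     # a lost block

denoised = cv2.medianBlur(noisy, 3)                         # remove impulsive noise

mask = np.zeros(img.shape, dtype=np.uint8)                  # mark the lost block for inpainting
mask[60:70, 10:30] = 255
restored = cv2.inpaint(denoised, mask, 3, cv2.INPAINT_TELEA)

print("mean absolute error after restoration:",
      float(np.mean(np.abs(restored.astype(int) - img.astype(int)))))
```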
Procedia PDF Downloads 366
2010 A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering
Authors: Takanori Tanaka, Daisuke Kitao, Daisuke Ikeda
Abstract:
The aurora is an attractive phenomenon, but it is difficult to understand its whole mechanism. A data-intensive science approach might be effective for elucidating such a difficult phenomenon. To do that, we need labeled data, which show when and what types of auroras have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora while others contain no aurora. The proposed system retrieves images which are similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso, Norway, we evaluate two image recognition methods, with and without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used, and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras.
Keywords: data-intensive science, image classification, content-based image retrieval, aurora
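A minimal sketch of the retrieval idea, assuming an OpenCV implementation and an assumed greenish HSV filter (the paper's exact color filtering is not reproduced): SIFT keypoints are matched between the query and each candidate, and the count of good matches ranks similarity.

```python
# Illustrative only: synthetic images stand in for all-sky frames; HSV range is assumed.
import cv2
import numpy as np

def green_filter(bgr):
    """Keep aurora-like greenish pixels (assumed HSV range), suppress the rest."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return cv2.bitwise_and(bgr, bgr, mask=mask)

def similarity(query_bgr, candidate_bgr):
    sift = cv2.SIFT_create()
    gray_q = cv2.cvtColor(green_filter(query_bgr), cv2.COLOR_BGR2GRAY)
    gray_c = cv2.cvtColor(green_filter(candidate_bgr), cv2.COLOR_BGR2GRAY)
    _, des_q = sift.detectAndCompute(gray_q, None)
    _, des_c = sift.detectAndCompute(gray_c, None)
    if des_q is None or des_c is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(des_q, des_c, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good)                       # Lowe's ratio test keeps distinctive matches

rng = np.random.default_rng(0)
query = rng.integers(0, 255, (256, 256, 3), dtype=np.uint8)
candidates = [rng.integers(0, 255, (256, 256, 3), dtype=np.uint8) for _ in range(3)]
print("match counts per candidate:", [similarity(query, c) for c in candidates])
```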
Procedia PDF Downloads 449
2009 Determination of Water Pollution and Water Quality with Decision Trees
Authors: Çiğdem Bakır, Mecit Yüzkat
Abstract:
With the increasing emphasis on water quality worldwide, the search for, and the market for, new and intelligent monitoring systems has grown. The current method is the laboratory process, where samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, a waste of manpower, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed by taking many model inputs, such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen, with machine learning methods. The proposed approach consists of three stages: preprocessing of the data used, feature detection, and classification. We evaluated the success of our study with different accuracy metrics and presented the results comparatively. In addition, we achieved approximately 98% success with the decision tree.
Keywords: decision tree, water quality, water pollution, machine learning
Procedia PDF Downloads 83
2008 Utility Assessment Model for Wireless Technology in Construction
Authors: Yassir AbdelRazig, Amine Ghanem
Abstract:
Construction projects are information-intensive in nature and involve many activities that are related to each other. Wireless technologies can be used to improve the accuracy and timeliness of data collected from construction sites and share it with the appropriate parties. Nonetheless, the construction industry tends to be conservative and shows hesitation to adopt new technologies. A main concern for owners, contractors or any person in charge on a job site is the cost of the technology in question. Wireless technologies are not cheap. There are a lot of expenses to be taken into consideration, and a study should be completed to make sure that the importance and savings resulting from the usage of this technology are worth the expense. This research attempts to assess the effectiveness of using the appropriate wireless technologies based on criteria such as performance, reliability, and risk. The assessment is based on a utility function model that breaks down the selection issue into alternatives and attributes. Then the attributes are assigned weights and single attributes are measured. Finally, the single-attribute utilities are combined to develop one aggregate utility index for each alternative.
Keywords: analytic hierarchy process, decision theory, utility function, wireless technologies
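A minimal sketch of a multi-attribute utility aggregation with illustrative weights and scores (not the paper's elicited values): single-attribute utilities are weighted and summed into one aggregate index per alternative.

```python
# Illustrative only: the alternatives, weights and scores are assumptions.
attributes = ["performance", "reliability", "risk", "cost"]
weights = {"performance": 0.35, "reliability": 0.30, "risk": 0.20, "cost": 0.15}

# Single-attribute utilities on a 0-1 scale for each alternative (assumed scores).
alternatives = {
    "Wi-Fi":  {"performance": 0.8, "reliability": 0.7, "risk": 0.6, "cost": 0.9},
    "RFID":   {"performance": 0.6, "reliability": 0.8, "risk": 0.7, "cost": 0.7},
    "ZigBee": {"performance": 0.5, "reliability": 0.9, "risk": 0.8, "cost": 0.8},
}

def aggregate_utility(scores):
    return sum(weights[a] * scores[a] for a in attributes)

for name, scores in sorted(alternatives.items(), key=lambda kv: -aggregate_utility(kv[1])):
    print(f"{name}: {aggregate_utility(scores):.3f}")
```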
Procedia PDF Downloads 342
2007 Kinetics of Growth Rate of Microalga: The Effect of Carbon Dioxide Concentration
Authors: Retno Ambarwati Sigit Lestari
Abstract:
Microalgae are among the organisms that can be considered ideal and potential raw materials for bioenergy production, because their lipid content is relatively high. A microalga is an aquatic organism that produces complex organic compounds from inorganic molecules, using carbon dioxide as a carbon source and sunlight for energy supply. Microalga-CO₂ fixation has potential advantages over other carbon capture and storage approaches, such as wide distribution, high photosynthetic rate, good environmental adaptability, and ease of operation. The rates of growth and CO₂ capture of a microalga are influenced by the CO₂ concentration and light intensity. This study quantitatively investigates the effects of CO₂ concentration on the rates of growth and CO₂ capture of a type of microalga cultivated in bioreactors. The work includes laboratory experiments as well as mathematical modelling. The mathematical models were solved numerically, and the accuracy of the model was tested against the experimental data. It turned out that the proposed mathematical model can quantitatively describe the growth and CO₂ capture of the microalga well, and the effects of CO₂ concentration can be observed.
Keywords: microalga, CO2 concentration, photobioreactor, mathematical model
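As a hedged illustration of the kind of kinetics involved, the sketch below integrates a Monod-type model in which the specific growth rate depends on dissolved CO2; the coefficients are assumed values, not the paper's fitted model.

```python
# Illustrative only: Monod-type CO2-limited growth with assumed coefficients.
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX, K_C, Y_XC = 0.9, 0.15, 0.5      # 1/day, g/L, g biomass per g CO2 (assumed values)

def model(t, state):
    X, C = state                                     # biomass and CO2 concentration (g/L)
    mu = MU_MAX * C / (K_C + C)                      # CO2-limited specific growth rate
    dX = mu * X
    dC = -dX / Y_XC                                  # CO2 consumed in proportion to growth
    return [dX, dC]

sol = solve_ivp(model, (0, 10), [0.05, 1.0], t_eval=np.linspace(0, 10, 6))
for t, X, C in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"day {t:4.1f}: biomass {X:.3f} g/L, CO2 {C:.3f} g/L")
```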
Procedia PDF Downloads 125
2006 Cooling-Rate Induced Fiber Birefringence Variation in Regenerated High Birefringent Fiber
Authors: Man-Hong Lai, Dinusha S. Gunawardena, Kok-Sing Lim, Harith Ahmad
Abstract:
In this paper, we report birefringence manipulation in a regenerated high-birefringence fiber Bragg grating (RPMG) using a CO2 laser annealing method. The results indicate that the birefringence of the RPMG remains unchanged after CO2 laser annealing followed by a slow cooling process, but is reduced after a fast cooling process (~5.6×10⁻⁵). After a series of annealing procedures with different cooling rates, the obtained results show that the slower the cooling rate, the higher the birefringence of the RPMG. The changes in volume, thermal expansion coefficient (TEC) and glass transition temperature (Tg) of the stress-applying part of the RPMG during the cooling process are responsible for the birefringence change. Therefore, these findings are important for the RPMG sensor in high-temperature and dynamic-temperature environments. The measuring accuracy, range and sensitivity of the RPMG sensor are greatly affected by its birefringence value. This work also opens up a new application of the CO2 laser for fiber annealing and birefringence modification.
Keywords: birefringence, CO2 laser annealing, regenerated gratings, thermal stress
Procedia PDF Downloads 459
2005 Generic Hybrid Models for Two-Dimensional Ultrasonic Guided Wave Problems
Authors: Manoj Reghu, Prabhu Rajagopal, C. V. Krishnamurthy, Krishnan Balasubramaniam
Abstract:
A thorough understanding of guided ultrasonic wave behavior in structures is essential for the application of existing Non-Destructive Evaluation (NDE) technologies, as well as for the development of new methods. However, the analysis of guided wave phenomena is challenging because of their complex dispersive and multimodal nature. Although numerical solution procedures have proven to be very useful in this regard, the increasing complexity of the features and defects to be considered, as well as the desire to improve the accuracy of inspection, often imposes a large computational cost. Hybrid models that combine numerical solutions for wave scattering with faster alternative methods for wave propagation have long been considered a solution to this problem. However, such models usually require modification of the base code of the solution procedure. Here we aim to develop generic hybrid models that can be directly applied to any two different solution procedures. With this goal in mind, a numerical hybrid model and an analytical-numerical hybrid model have been developed. The concept and implementation of these hybrid models are discussed in this paper.
Keywords: guided ultrasonic waves, Finite Element Method (FEM), hybrid model
Procedia PDF Downloads 465