Search results for: accuracy improvement
7158 Transient Electrical Resistivity and Elastic Wave Velocity of Sand-Cement-Inorganic Binder Mixture
Authors: Kiza Rusati Pacifique, Ki-il Song
Abstract:
Cement milk grout has been used for ground improvement. Due to the environmental issues related to cement, a reduction in cement usage is demanded. In this study, an inorganic binder is introduced to reduce the cement content used for ground improvement. To evaluate the transient electrical and mechanical properties of the sand-cement-inorganic binder mixture, two non-destructive testing (NDT) methods, the Electrical Resistivity (ER) and Free-Free Resonant Column (FFRC) tests, were adopted in addition to the unconfined compressive strength test. The electrical resistivity, longitudinal wave velocity, and damping ratio of sand-cement admixture samples improved with the addition of inorganic binders were measured. Experimental tests were performed considering four different mixing ratios and three different cement contents, depending on the curing time. Results show that the mixing ratio and curing time have considerable effects on the electrical and mechanical properties of the mixture. The unconfined compressive strength (UCS) decreases as the cement content decreases; however, sufficient grout strength can be obtained by increasing the inorganic binder content. From the results, it is found that the inorganic binder can be used to enhance the mechanical properties of the mixture and reduce the cement content. It is expected that the data and trends proposed in this study can be used as a reference in predicting grouting quality in the field.
Keywords: damping ratio, electrical resistivity, ground improvement, inorganic binder, longitudinal wave velocity, unconfined compression strength
Procedia PDF Downloads 346
7157 Assessment of the High-Speed Ice Friction of Bob Skeleton Runners
Authors: Agata Tomaszewska, Timothy Kamps, Stephan R. Turnock, Nicola Symonds
Abstract:
Bob skeleton is a highly competitive sport in which an athlete reaches speeds of up to 40 m/s sliding, head first, down an ice track. It is believed that the friction between the runners and the ice contributes significantly to the total energy loss during a bob skeleton descent. Only limited experimental data are available regarding the friction of bob skeleton runners, or indeed of steel on ice, at high sliding speeds (> 20 m/s). Testing methods used to investigate the friction of steel on ice in winter sports are outlined, and their accuracy and repeatability discussed. A systems-thinking approach was used to investigate the runner-ice interaction during sliding and to create concept designs of three ice tribometers. The operational envelope of the bob skeleton system was defined through mathematical modelling. Designs of drum, linear, and inertia pin-on-disk tribometers were developed specifically for bob skeleton runner testing, with the requirements of reaching speeds of up to 40 m/s and facilitating sliding on fresh ice. The design constraints are outlined, and the proposed solutions are compared based on ease of operation, accuracy, and development cost.
Keywords: bob skeleton, ice friction, high-speed tribometers, sliding friction
Procedia PDF Downloads 263
7156 Electrical Power Distribution Reliability Improvement by Retrofitting 4.16 kV Vacuum Contactor in Badak LNG Plant
Authors: David Hasurungan
Abstract:
The objective of this paper is to assess the power distribution reliability improvement achieved by retrofitting obsolete vacuum contactors. A case study from the Badak Liquefied Natural Gas (LNG) plant is presented. To support plant operations, Badak LNG is equipped with 4.16 kV switchgear supplying the storage and loading facilities, utility facilities, and train facilities. However, two of the sixteen switchgears have a problem: their vacuum contactors are obsolete. The same switchgear has also suffered an electrical fault due to contact finger misalignment. In order to improve the reliability of the switchgear, a vacuum contactor retrofit project was carried out. The retrofit introduces a new vacuum contactor design, and a comparison between the existing design and the new design is presented in this paper. The reliability assessment and calculation are performed using the software ReliaSoft 7.
Keywords: reliability, obsolescence, retrofit, vacuum contactor
Procedia PDF Downloads 291
7155 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques
Authors: R. B. Knudsen, O. T. Rasmussen, R. A. Alphinas
Abstract:
The purpose of this research is to develop an algorithm capable of classifying news articles from the automobile industry, according to the competitive actions they entail, with the use of Text Mining (TM) methods. Proper preprocessing of the data is tested by preparing pipelines that best fit each algorithm. The pipelines are tested along with nine different classification algorithms in the realms of regression, support vector machines, and neural networks. Preliminary testing to identify the optimal pipelines and algorithms resulted in the selection of two algorithms with two different pipelines: Logistic Regression (LR) and Artificial Neural Network (ANN). These algorithms are optimized further, with several parameters of each algorithm tested. The best result is achieved with the ANN. The final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three of the classes that created noise, the final algorithm is capable of reaching an accuracy of 94%.
Keywords: artificial neural network, competitive dynamics, logistic regression, text classification, text mining
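For illustration, a minimal sketch (assuming scikit-learn) of comparing an LR and an ANN text-classification pipeline like the two selected above; the corpus, labels, and hyperparameters are placeholders, not the study's data or settings:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder corpus: toy "automobile industry" headlines and the
# competitive-action class each one entails.
texts = ["Automaker A cuts prices across its lineup",
         "Brand B announces an aggressive discount campaign",
         "Company C unveils a new electric SUV",
         "Startup D launches a self-driving model"]
labels = ["pricing", "pricing", "new_product", "new_product"]

pipelines = {
    "LR": Pipeline([("tfidf", TfidfVectorizer()),
                    ("clf", LogisticRegression(max_iter=1000))]),
    "ANN": Pipeline([("tfidf", TfidfVectorizer()),
                     ("clf", MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000))]),
}
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.5,
                                          stratify=labels, random_state=0)
for name, pipe in pipelines.items():
    pipe.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, pipe.predict(X_te)))
```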
Procedia PDF Downloads 122
7154 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach
Authors: Chen-Yin Kuo, Yung-Hsin Lee
Abstract:
Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses longitudinal data from Taiwan to estimate multi-equation time-series models, such as the vector autoregressive (VAR) and vector error correction (VECM) models, and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance using two instruments. Consistent with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the pattern of the composite results shows that, at longer horizons, the VECM produces the greater reduction in errors and performs substantially better than the VAR.
Keywords: residual income valuation model, vector error correction model, out of sample forecasting, forecasting accuracy
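A minimal sketch, assuming statsmodels, of the VAR-vs-VECM out-of-sample comparison described above; the two cointegrated series are synthetic stand-ins for the Taiwanese price data, and the 12-period holdout is illustrative:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 200
trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend -> cointegration
data = pd.DataFrame({"price": trend + rng.normal(size=n),
                     "book_value": trend + rng.normal(size=n)})
train, test = data.iloc[:-12], data.iloc[-12:]   # hold out 12 periods

vecm_fc = VECM(train, k_ar_diff=2, coint_rank=1).fit().predict(steps=12)
var_res = VAR(train).fit(maxlags=2)
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=12)

rmse = lambda fc: float(np.sqrt(((test.values - fc) ** 2).mean()))
print("VECM RMSE:", rmse(vecm_fc), " VAR RMSE:", rmse(var_fc))
```

The error correction term in the VECM pulls forecasts back toward the long-run relation, which is why it tends to win at longer horizons for cointegrated series.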
Procedia PDF Downloads 318
7153 Advances in Genome Editing and Future Prospects for Sorghum Improvement: A Review
Authors: Micheale Yifter Weldemichael, Hailay Mehari Gebremedhn, Teklehaimanot Hailesslasie Teklu
Abstract:
Recent developments in targeted genome editing have accelerated genetic research and opened new potential to improve crops for better yields and quality. Given the significance of cereal crops as a primary source of food for the global population, the utilization of contemporary genome editing techniques like CRISPR/Cas9 is timely and crucial. CRISPR/Cas technology has enabled targeted genomic modifications, revolutionizing genetic research and exploration. The application of gene editing through CRISPR/Cas9 in enhancing sorghum is particularly vital given the current ecological, environmental, and agricultural challenges exacerbated by climate change. As sorghum is one of the main staple foods of our region and is known to be a resilient crop with a high potential to overcome the above challenges, the application of genome editing technology will enhance the investigation of gene functionality. CRISPR/Cas9 enables the improvement of desirable sorghum traits, including nutritional value, yield, resistance to pests and diseases, and tolerance to various abiotic stresses. Furthermore, CRISPR/Cas9 has the potential to perform intricate editing, reshape existing elite sorghum varieties, and introduce new genetic variation. However, current research primarily focuses on improving the efficacy of the CRISPR/Cas9 system in successfully editing endogenous sorghum genes, making it a feasible and successful undertaking in sorghum improvement. Recent advancements in CRISPR/Cas9 techniques have further empowered researchers to modify additional genes in sorghum with greater efficiency. Successful application and advancement of CRISPR techniques in sorghum will aid not only in gene discovery and the creation of novel traits that regulate gene expression and functional genomics but also in facilitating site-specific integration events. The purpose of this review is, therefore, to elucidate the current advances in sorghum genome editing and highlight its potential in addressing food security issues. It also assesses the efficiency of CRISPR-mediated improvement and its long-term effects on crop improvement and host resistance against parasites, including tissue-specific activity and the ability to induce resistance. This review ends by emphasizing the challenges and opportunities of CRISPR technology in combating parasitic plants and proposing directions for future research to safeguard global agricultural productivity.
Keywords: CRISPR/Cas9, genome editing, quality, sorghum, stress, yield
Procedia PDF Downloads 43
7152 Continuous Improvement of Teaching Quality through Course Evaluation by the Students
Authors: Valerie Follonier, Henrike Hamelmann, Jean-Michel Jullien
Abstract:
The Distance Learning University in Switzerland (UniDistance) offers bachelor's and master's courses as well as further education programs. The professors and their assistants work at traditional Swiss universities and give their courses at UniDistance following a blended-learning and flipped-classroom approach. A standardized course evaluation by the students has been established as a component of a quality improvement process. The students' feedback enables the stakeholders to identify areas of improvement, initiate professional development for the teaching teams, and thus continuously augment the quality of instruction. This paper describes the evaluation process, the tools involved, and how the approach, involving all stakeholders, helps form a culture of quality in teaching. Additionally, it presents the first evaluation results following the new process. Two software tools have been developed to support all stakeholders in the semi-annual formative evaluation. The first tool allows the survey to be created and assigned to the relevant courses and students. The second tool presents the results of the evaluation to the stakeholders, providing specific features for the teaching teams, the dean, the directorate, and EDUDL+ (Educational development unit distance learning). The survey items were selected in accordance with the e-learning strategy of the institution and are formulated to support the professional development of the teaching teams. By reviewing the results, the teaching teams become aware of the opinions of the students and are asked to write feedback for the attention of their dean. The dean reviews the results of the faculty and writes a general report about the situation of the faculty and the improvements intended. Finally, EDUDL+ writes a final report summarising the evaluation results. A mechanism of adjustable warnings generates quality indicators for each module; these are summarised for each faculty, and globally for the whole institution, in order to increase the vigilance of those responsible. The quality process involves changing the indicators regularly to focus on different areas each semester, to facilitate the professional development of the teaching teams, and to progressively augment the overall teaching quality of the institution.
Keywords: continuous improvement process, course evaluation, distance learning, software tools, teaching quality
Procedia PDF Downloads 261
7151 Amharic Text News Classification Using Supervised Learning
Authors: Misrak Assefa
Abstract:
The Amharic language is the second most widely spoken Semitic language in the world, and a large volume of news in Amharic is available on the web. Searching the web for useful documents on a specific topic written in the Amharic language is a challenging task; hence, document categorization is required for managing and filtering important information. In the classification of Amharic text news, there is still a gap in this domain that needs to be addressed. This study attempts to design an automatic Amharic news classifier using a supervised learning mechanism on four previously untouched classes. To conduct this research, 4,182 news articles were used. The naive Bayes (NB) and decision tree (J48) algorithms were used to classify the given Amharic dataset. In this paper, k-fold cross-validation is used to estimate the accuracy of the classifiers. The results show that these algorithms are applicable to Amharic news categorization. The best average accuracy results, achieved by the J48 decision tree and naive Bayes, are 95.23% and 94.62%, respectively, using three categories. This research indicates that a typical decision tree algorithm is well suited to Amharic news categorization.
Keywords: text categorization, supervised machine learning, naive Bayes, decision tree
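A minimal sketch, assuming scikit-learn, of the cross-validated NB-vs-tree comparison above; scikit-learn's CART decision tree stands in for Weka's J48, and the tiny repeated corpus is a placeholder for the 4,182 articles:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder Amharic headlines with category labels.
docs = ["ብሔራዊ ቡድን አሸነፈ", "የዋጋ ግሽበት ጨመረ", "አዲስ ፊልም ተለቀቀ", "ምርጫው ተጠናቀቀ"] * 10
labels = ["sport", "economy", "entertainment", "politics"] * 10

for name, clf in [("naive Bayes", MultinomialNB()),
                  ("decision tree", DecisionTreeClassifier())]:
    pipe = make_pipeline(CountVectorizer(), clf)
    scores = cross_val_score(pipe, docs, labels, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.4f}")
```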
Procedia PDF Downloads 212
7150 A Simple and Easy-To-Use Tool for Detecting Outer Contour of Leukocytes Based on Image Processing Techniques
Authors: Retno Supriyanti, Best Leader Nababan, Yogi Ramadhani, Wahyu Siswandari
Abstract:
Blood cell morphology is an important parameter in a hematology test. Currently, in developing countries, much hematology testing is done manually, either by physicians or by laboratory staff. Owing to the limitations of the human eye, examination based on a manual method results in lower precision and accuracy. In addition, manual hematology testing further complicates diagnosis in areas that lack competent medical personnel. This research aims to develop a simple computer-based tool for the detection of blood cell morphology. In this paper, we focus on the detection of the outer contour of leukocytes. The results show that the system we developed is promising for detecting blood cell morphology automatically. It is expected that, by implementing this method, the problems of accuracy, precision, and the limited availability of medical staff can be addressed.
Keywords: morphology operation, developing countries, hematology test, limitation of medical personnel
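A minimal sketch, assuming OpenCV, of an outer-contour detection step of the kind described above; the synthetic image, threshold choice, and kernel size are illustrative, not the authors' exact pipeline:

```python
import cv2
import numpy as np

# Synthetic stand-in for a stained blood-smear image: dark "nuclei" on a light field.
img = np.full((200, 200, 3), 220, np.uint8)
cv2.circle(img, (70, 80), 25, (90, 40, 120), -1)    # placeholder leukocyte
cv2.circle(img, (140, 130), 20, (90, 40, 120), -1)  # placeholder leukocyte

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Leukocytes stain darker than the background, so invert-threshold with Otsu.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # morphology removes speckle
# RETR_EXTERNAL keeps only the outer contour of each cell.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("leukocytes found:", len(contours))
cv2.drawContours(img, contours, -1, (0, 255, 0), 1)
cv2.imwrite("leukocyte_contours.png", img)
```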
Procedia PDF Downloads 340
7149 Sustainable Manufacturing of Concentrated Latex and Ribbed Smoked Sheets in Sri Lanka
Authors: Pasan Dunuwila, V. H. L. Rodrigo, Naohiro Goto
Abstract:
Sri Lanka is one of the largest natural rubber (NR) producers in the world, and the NR industry is a major foreign exchange earner. Among the locally manufactured NR products, concentrated latex (CL) and ribbed smoked sheets (RSS) hold a significant position. Furthermore, these products are the foundation for many products utilized by people all over the world (e.g., gloves, condoms, tires, etc.). Processing of CL and RSS consumes significant amounts of material, energy, and labor. Against this background, both manufacturing lines have been immensely challenged by waste, low productivity, lack of cost efficiency, rising production costs, and many environmental issues. To face the above challenges, the adoption of sustainable manufacturing measures that use less energy, water, and materials and produce less waste is imperative. However, these sectors lack comprehensive studies that shed light on such measures and thoroughly discuss their improvement potential from both environmental and economic points of view. Therefore, based on a study of three CL and three RSS mills in Sri Lanka, this study deploys sustainable manufacturing techniques and tools to uncover the underlying potential to improve performance in the CL and RSS processing sectors. The study comprises three steps: 1. quantification of average material waste, economic losses, and greenhouse gas (GHG) emissions via material flow analysis (MFA), material flow cost accounting (MFCA), and life cycle assessment (LCA) in each manufacturing process; 2. identification of improvement options with the help of Pareto and what-if analyses, field interviews, and the existing literature; and 3. validation of the identified improvement options via re-execution of the MFA, MFCA, and LCA. With the help of this methodology, the economic and environmental hotspots and the degrees of improvement in both systems could be identified. Results highlighted that each process could be improved to produce less waste, fewer monetary losses, lower manufacturing costs, and lower GHG emissions. In conclusion, the study's methodology and findings are believed to be beneficial for assuring sustainable growth not only in the Sri Lankan NR processing sector itself but also in the NR industry, or any other industry, rooted in other developing countries.
Keywords: concentrated latex, natural rubber, ribbed smoked sheets, Sri Lanka
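A minimal sketch of the MFCA bookkeeping logic in step 1: each process stage splits its material input into product and material loss, and the loss is costed to expose hidden losses. Stage names, quantities, and unit costs below are illustrative, not the mills' data:

```python
# Hypothetical MFCA-style loss accounting per process stage.
stages = [  # (name, input_kg, product_kg, unit_cost_per_kg)
    ("latex reception", 1000, 985, 0.9),
    ("centrifugation", 985, 930, 1.1),
    ("packing", 930, 925, 1.2),
]
for name, inp, out, cost in stages:
    loss = inp - out                               # material that left as waste
    print(f"{name}: loss {loss} kg, hidden cost {loss * cost:.2f} USD")
```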
Procedia PDF Downloads 261
7148 Margin-Based Feed-Forward Neural Network Classifiers
Authors: Xiaohan Bookman, Xiaoyan Zhu
Abstract:
The margin-based principle was proposed long ago, and it has been proven, in both theoretical and practical respects, to reduce structural risk and improve performance. Meanwhile, the feed-forward neural network is a traditional classifier that currently attracts great interest in its deeper architectures. However, the training algorithm of the feed-forward neural network is developed from the Widrow-Hoff principle, which minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively promote the accuracy and generalization ability of neural network classifiers with fewer labeled samples and a flexible network. We conducted experiments on four UCI open data sets and achieved good results, as expected. In conclusion, our model can handle sparsely labeled, high-dimensional data sets with high accuracy, while migrating from the old ANN method to our method is easy and requires almost no extra work.
Keywords: max-margin principle, feed-forward neural network, classifier, structural risk
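A minimal sketch, assuming PyTorch, of the core idea: training a feed-forward network with a multi-class margin (hinge) loss instead of squared error. The architecture, data, and margin value are placeholders, not the authors' exact algorithm:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.MultiMarginLoss(margin=1.0)   # hinge loss enforcing a per-class margin
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(128, 20)              # placeholder features
y = torch.randint(0, 3, (128,))       # placeholder labels
for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X), y)     # penalizes margins smaller than 1.0
    loss.backward()
    optimizer.step()
```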
Procedia PDF Downloads 347
7147 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning
Authors: Ofira Zloto, Ofir Fogel, Eyal Klang
Abstract:
Purpose: The aim is to develop an automated framework based on machine learning to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before model training, we pre-trained our models on the ISIC 2019 dataset, consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions against their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions. For the benign category, a total of 373 images were sourced; the malignant category had 186 images. Based on the accuracy values, the model trained for 3 epochs with a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model has a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping. Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises a high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and of healthcare overall.
Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques
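A minimal sketch, assuming PyTorch/torchvision, of the fine-tuning stage described above (3 epochs, learning rate 1e-4, binary benign/malignant head). The backbone, dummy data loader, and ImageNet weights are stand-ins; the paper's architecture and its ISIC 2019 pre-training step are not reproduced here:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # stand-in backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # benign vs. malignant head
# (In the study, weights would first be adapted on ISIC 2019 before this step.)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Dummy loader standing in for the eyelid-lesion images.
train_loader = DataLoader(TensorDataset(torch.randn(8, 3, 224, 224),
                                        torch.randint(0, 2, (8,))), batch_size=4)

for epoch in range(3):                             # 3 epochs, as reported best
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```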
Procedia PDF Downloads 6
7146 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning, using a method that classifies login-form inputs as "SQLi" or "Non-SQLi," thus increasing the reliability and accuracy of deciding whether an operation is an attack or a valid operation. A method for shielding against SQLi attacks (WebAppShield), based on web-app auto-generated twin data structure replication, has been developed; it verifies all users and prevents attackers from entering or accessing the database, admitting only operations that the machine learning module predicts as "Non-SQLi." A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks have been prevented.
Keywords: SQL injection, attacks, web application, accuracy, database
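A minimal sketch, assuming scikit-learn, of the run-time classification gate described above: a character n-gram model labels login inputs "SQLi" or "Non-SQLi" before they reach the database. The training strings and classifier choice are illustrative, not WebAppShield's actual model:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

X = ["alice", "bob42", "' OR '1'='1", "admin'--"]       # placeholder login inputs
y = ["Non-SQLi", "Non-SQLi", "SQLi", "SQLi"]

clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
                    LogisticRegression())
clf.fit(X, y)

def allow_login(user_input: str) -> bool:
    """Gate the database call: only inputs predicted Non-SQLi pass through."""
    return clf.predict([user_input])[0] == "Non-SQLi"

print(allow_login("carol"),
      allow_login("x' UNION SELECT password FROM users--"))
```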
Procedia PDF Downloads 153
7145 Cognitive Methods for Detecting Deception During the Criminal Investigation Process
Authors: Laid Fekih
Abstract:
Background: It is difficult to detect lying, deception, and misrepresentation just by looking at verbal or non-verbal expression during the criminal investigation process, despite the common belief that it is possible to tell whether a person is lying or telling the truth just by looking at the way they act or behave. The process of detecting lies and deception during criminal investigation needs more study and research to overcome the difficulties facing investigators. Method: The present study aimed to identify the effectiveness of cognitive methods and techniques in detecting deception during criminal investigation. It adopted a quasi-experimental method and covered a sample of 20 defendants distributed randomly into two homogeneous groups: an experimental group of 10 defendants subjected to criminal investigation applying cognitive techniques to detect deception, and a second group of 10 defendants subjected to the direct investigation method. The tool used is a guided interview based on models of investigative questions following the cognitive deception-detection approach, which consists of three techniques of Vrij (imposing cognitive load, encouraging interviewees to provide more information, and asking unexpected questions), alongside the direct investigation method. Results: Results revealed a significant difference between the two groups in terms of lie-detection accuracy, in favour of the defendants subjected to criminal investigation applying cognitive techniques; the cognitive deception-detection approach produced superior total accuracy rates both with human observers and through an analysis of objective criteria. The cognitive approach produced superior results (truth detection: 71%; deception detection: 70%) compared to the direct investigation method (truth detection: 52%; deception detection: 49%). Conclusion: The study recommends a cognitive deception-detection technique, as practitioners who use it will correctly classify more individuals than when they use a direct investigation method.
Keywords: the cognitive lie detection approach, deception, criminal investigation, mental health
Procedia PDF Downloads 68
7144 The Effect of Hypertrophy Strength Training Using Traditional Set vs. Cluster Set on Maximum Strength and Sprinting Speed
Authors: Bjornar Kjellstadli, Shaher A. I. Shalfawi
Abstract:
The aim of this study was to investigate the effect of the cluster-set strength training method, compared to the traditional-set method, on 30 m sprinting time and maximum strength in squats and bench press. Thirteen physical education students, 7 males and 6 females aged 19-28 years, were recruited and randomly divided into three groups. The traditional set group (TSG) consisted of 2 males and 2 females aged (±SD) 22.3 ± 1.5 years, with body mass 79.2 ± 15.4 kg and height 177.5 ± 11.3 cm. The cluster set group (CSG) consisted of 3 males and 2 females aged 22.4 ± 3.29 years, with body mass 81.0 ± 24.0 kg and height 179.2 ± 11.8 cm, and the control group (CG) consisted of 2 males and 2 females aged 21.5 ± 2.4 years, with body mass 82.1 ± 17.4 kg and height 175.5 ± 6.7 cm. The intervention consisted of performing squats and bench press at 70% of 1RM twice a week for 8 weeks, using 10 repetitions and 4 sets. Two strength-training methods were used: the cluster set (CS), in which the CSG performed 2 reps 5 times with 10 s recovery between reps and 50 s recovery between sets, and the traditional set (TS), in which the TSG performed 10 reps per set with 90 s recovery between sets. The pre-tests and post-tests were 1RM in both squats and bench press, and 10 and 30 m sprint times. The 1RM tests were performed with an Eleiko XF barbell (20 kg), Eleiko weight plates, and a rack and bench from Hammer Strength. Sprint times were measured with the Brower Speed Trap II testing system (Brower Timing Systems, Utah, USA). The participants received an individualized training program based on the 1RM pre-test, and a mid-term 1RM test was carried out to adjust training intensity. Each training session was supervised by the researchers, and Beast sensors (Milano, Italy) were used to monitor and quantify the training load. All groups had a statistically significant improvement in bench press 1RM (TSG from 56.3 ± 28.9 to 66 ± 28.5 kg; CSG from 69.8 ± 33.5 to 77.2 ± 34.1 kg; CG from 67.8 ± 26.6 to 72.2 ± 29.1 kg), whereas only the TSG (from 84.3 ± 26.8 to 114.3 ± 26.5 kg) and the CSG (from 100.4 ± 33.9 to 129 ± 35.1 kg) had a statistically significant improvement in squat 1RM (P < 0.05). However, a between-groups examination revealed no marked differences in 1RM squat performance between the TSG and CSG (P > 0.05), while both groups showed marked improvements compared to the CG (P < 0.05). On the other hand, no differences between groups were observed in bench press 1RM. The within-group results indicate that none of the groups had any marked improvement over the 0-10 m and 10-30 m distances, except the CSG, which had a notable improvement over the 10-30 m distance (-0.07 s; P < 0.05). Furthermore, no differences in sprinting ability were observed between groups. The results of this investigation indicate that traditional-set strength training at 70% of 1RM gave results close to those of cluster-set strength training at the same intensity. However, the cluster set had an effect on flying time (10-30 m), indicating that the velocity at which those repetitions were performed could be the explanatory factor for this improvement.
Keywords: physical performance, 1RM, pushing velocity, velocity based training
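A minimal sketch, assuming SciPy, of the within-group pre/post comparison used above (a Wilcoxon signed-rank test); the eight 1RM values are illustrative placeholders, since only group means and SDs were reported:

```python
from scipy.stats import wilcoxon

pre_1rm = [80, 95, 110, 120, 100, 90, 115, 105]    # placeholder squat 1RM (kg), pre
post_1rm = [105, 120, 140, 150, 130, 112, 150, 133]  # placeholder squat 1RM (kg), post

stat, p = wilcoxon(pre_1rm, post_1rm)
print(f"W = {stat:.1f}, p = {p:.4f}")  # p < 0.05 indicates a significant improvement
```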
Procedia PDF Downloads 166
7143 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, households' economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. Design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset is accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation used metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluation by domain experts. An optimal subset of hyper-parameters for each algorithm was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of the house, 'Region' lived in, whether a household uses 'Electricity' or not, and 'Type of toilet facility' of a household are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction
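A minimal sketch, assuming scikit-learn, of the grid-searched random forest step described above; the synthetic features stand in for the EDHS variables, and the parameter grid is illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # EDHS stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [None, 10]},
                    scoring="accuracy", cv=5)
grid.fit(X_train, y_train)                     # hyper-parameter selection via grid search
print("best params:", grid.best_params_,
      "test accuracy:", grid.score(X_test, y_test))
```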
Procedia PDF Downloads 43
7142 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers
Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang
Abstract:
Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the major way of finding PM disease, which is not only labor-intensive but also makes it almost impossible to monitor disease severity. To reduce the losses caused by PM disease and achieve faster automatic detection, this paper proposes an approach for detecting the disease based on image texture, classified with support vector machine (SVM) and k-nearest neighbor (kNN) classifiers. The methodology of the proposed study is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction, and classification. Two strawberry fields were used in this study, and images of healthy leaves and leaves infected with PM (Sphaerotheca macularis) were acquired under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features related to physiological parameters of the leaves were extracted from the CCMs of National Television System Committee (NTSC) luminance and hue, saturation, and intensity (HSI) images. The normalized feature data were utilized for training and validation of the developed classifiers. The classifiers were evaluated with internal, external, and cross-validations, and the best classifier was selected based on performance and accuracy. Experimental results showed that the SVM classifier achieved 98.33%, 85.33%, 87.33%, 93.33%, and 95.0% accuracy on internal, external-I, external-II, 4-fold cross-, and 5-fold cross-validation, respectively, whereas the kNN classifier achieved classification accuracies of 90.0%, 72.00%, 74.66%, 89.33%, and 90.3%, respectively. The outcome of this study demonstrated that the SVM classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Overall, the results indicate that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier.
Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors
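A minimal sketch of the texture-feature step feeding an SVM; scikit-image's gray-level co-occurrence matrix (GLCM) stands in for the paper's colour co-occurrence matrix, and the random patches and labels are synthetic placeholders:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(channel_8bit):
    """Contrast/homogeneity/energy/correlation from one image channel's GLCM."""
    glcm = graycomatrix(channel_8bit, distances=[1], angles=[0],
                        levels=256, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(0)
leaves = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)  # placeholder patches
labels = rng.integers(0, 2, size=20)                              # 0 = healthy, 1 = PM
X = np.array([texture_features(img) for img in leaves])
clf = SVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```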
Procedia PDF Downloads 122
7141 Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement
Authors: Ferinar Moaidi, Mahdi Moaidi
Abstract:
Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, the allocation and sizing of DGs are used to realize the goals of reducing losses and improving the voltage profile in a microgrid. A genetic algorithm (GA), drawn from the array of artificial intelligence methods, is proposed for solving the problem and is implemented on the IEEE 33-bus network. The study is presented in two scenarios: first, the placement and sizing of DGs are carried out to reduce losses and improve the voltage profile. On the other hand, decisions made under a single-level load assumption are not universally valid for all load levels; therefore, in this study, load modelling is also performed, and results are presented for a multi-level load state.
Keywords: distributed generation, genetic algorithm, microgrid, load modelling, loss reduction, voltage improvement
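A minimal sketch of a GA encoding for this problem: each chromosome holds a candidate bus index and a DG size, and the fitness penalizes network losses. The loss function below is a toy surrogate, not a power-flow solution of the 33-bus feeder (in practice it would call a load-flow tool such as pandapower):

```python
import random

BUSES, MAX_SIZE_MW = range(2, 34), 3.0   # candidate buses of a 33-bus feeder

def losses(bus, size_mw):
    """Toy stand-in for a power-flow loss calculation."""
    return abs(18 - bus) * 0.01 + abs(1.5 - size_mw) * 0.05

def fitness(ind):
    return -losses(*ind)                  # maximizing fitness = minimizing losses

pop = [(random.choice(BUSES), random.uniform(0, MAX_SIZE_MW)) for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # selection: keep the fittest
    children = [(random.choice(parents)[0],           # crossover: mix bus and size
                 random.choice(parents)[1]) for _ in range(20)]
    children = [(b, min(MAX_SIZE_MW, max(0.0, s + random.gauss(0, 0.1))))  # mutation
                for b, s in children]
    pop = parents + children
print("best (bus, MW):", max(pop, key=fitness))
```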
Procedia PDF Downloads 145
7140 Evaluating Factors Affecting Audiologists’ Diagnostic Performance in Auditory Brainstem Response Reading: Training and Experience
Authors: M. Zaitoun, S. Cumming, A. Purcell
Abstract:
This study aims to determine whether audiologists' experience characteristics in ABR (Auditory Brainstem Response) reading are associated with their performance in interpreting ABR results. Fifteen ABR traces with varying degrees of hearing level were presented twice, making a total of 30. Audiologists were asked to determine the hearing threshold for each of the cases after completing a brief survey regarding their experience and training in ABR administration. Sixty-one audiologists completed all tasks. Correlations between audiologists' performance measures and experience variables suggested significant associations (p < 0.05) between the training period in ABR testing and audiologists' performance in terms of both sensitivity and accuracy. In addition, the number of years conducting ABR testing correlated with specificity. No other correlations approached significance. While there are relatively few significant correlations between ABR performance and experience, accuracy in ABR reading is associated with audiologists' length of experience and period of training. To improve audiologists' performance in reading ABR results, the importance of training should be emphasized, and standardized levels and periods for audiologists' training in ABR testing should be set.
Keywords: ABR, audiology, performance, training, experience
Procedia PDF Downloads 167
7139 Height of Highway Embankment for Tolerable Residual Settlement of Loose Cohesionless Subsoil Overlain by Stronger Soil
Authors: Sharifullah Ahmed
Abstract:
The residual settlement of cohesionless or non-plastic subsoil of different strengths, underlying a highway embankment and overlain by a stronger soil layer, is studied. A parametric study is carried out for different embankment heights and different ESAL factors. The sum of the elastic settlements of the cohesionless subsoil due to axle-induced stress and due to the self-weight of the pavement layers is termed the residual settlement. The values of residual settlement (Sr) for different heights of road embankment (He) are obtained and presented as design charts for different SPT values (N60) and ESAL factors. For rigid pavement, and for flexible pavement in bridge or culvert approaches, the tolerable residual settlement is 0.100 m; this limit is taken as 0.200 m for flexible pavement in general highway sections without a bridge or culvert approach. A simplified guideline is developed for the design of highway embankments underlain by very loose to loose cohesionless subsoil overlain by a stronger soil layer, for the limiting value of residual settlement. In the current study, the ESAL factor ranges from 1 to 10, as does the SPT value (N60). It is found that ground improvement is not required if the overlying stronger layer is at least 1.5 m thick for general flexible-pavement road sections (excluding bridge or culvert approaches), and at least 4.0 m thick for rigid pavement or for flexible pavement in bridge or culvert approaches. Tables and charts are included in the prepared guideline to obtain the minimum allowable height of highway embankment that keeps the residual settlement within the mentioned tolerable limit. Allowable values of embankment height (He) are obtained corresponding to the tolerable or limiting level of residual settlement of the loose subsoil for different SPT values, thicknesses of the stronger layer (d), and ESAL factors. The developed guideline may be issued for use in assessing the necessity of ground improvement where cohesionless subsoil underlying a highway embankment is overlain by a stronger subsoil layer; ground improvement is required only if the residual settlement of the subsoil exceeds the tolerable limit.
Keywords: axle pressure, equivalent single axle load, ground improvement, highway embankment, tolerable residual settlement
Procedia PDF Downloads 129
7138 Structural Equation Modeling Semiparametric in Modeling the Accuracy of Payment Time for Customers of Credit Bank in Indonesia
Authors: Adji Achmad Rinaldo Fernandes
Abstract:
The research was conducted to apply semiparametric SEM modeling to the timeliness of credit payments. Semiparametric SEM is structural modeling in which parametric and nonparametric approaches are combined. The analysis method in this research is semiparametric SEM with a nonparametric component using a truncated spline. The data in the study were obtained through questionnaires distributed to Bank X mortgage debtors and are confidential. The study used three variables: one exogenous variable, one intervening endogenous variable, and one endogenous variable. The results showed that (1) the effects of the capacity and willingness-to-pay variables on timeliness of payment are significant, (2) modeling the capacity variable on willingness to pay also produces a significant estimate, (3) the effect of the capacity variable on the timeliness-of-payment variable is not influenced by the willingness-to-pay variable as an intervening variable, and (4) the R² value of 0.7633, or 76.33%, indicates that the model has good predictive relevance.
Keywords: structural equation modeling semiparametric, credit bank, accuracy of payment time, willingness to pay
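A minimal sketch, assuming NumPy, of the nonparametric component above: a degree-1 truncated power-spline basis, f(x) = b0 + b1·x + Σk bk·(x − Kk)+, fitted by least squares. The knot location and data are illustrative, not the bank's confidential questionnaire data:

```python
import numpy as np

def truncated_spline_basis(x, knots, degree=1):
    cols = [x ** d for d in range(degree + 1)]                 # polynomial part
    cols += [np.maximum(0.0, x - k) ** degree for k in knots]  # truncated terms
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
y = np.where(x < 5, 0.5 * x, 2.5 + 1.5 * (x - 5)) + rng.normal(0, 0.3, 100)

B = truncated_spline_basis(x, knots=[5.0])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)   # least-squares spline fit
print("coefficients:", beta.round(2))          # slope change loads on the (x-5)+ term
```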
Procedia PDF Downloads 47
7137 Improvement in Properties of Ni-Cr-Mo-V Steel through Process Control
Authors: Arnab Majumdar, Sanjoy Sadhukhan
Abstract:
Although gun barrel steels are an important variety from a defense viewpoint, the available literature is very limited. In the present work, an IF-grade Ni-Cr-Mo-V high-strength low-alloy steel was produced via the electric earth furnace-ESR route. The ingot was hot forged to the desired dimensions with a reduction ratio of 70-75%, followed by homogenization, hardening, and tempering treatments. Sample chemistry, NMIR, and macro- and microstructural analyses were done. Mechanical properties, including tensile strength, impact toughness, and fracture toughness, were studied, and ultrasonic testing was done to identify internal flaws. The existing high-strength low-alloy Ni-Cr-Mo-V steel shows improved properties under the modified processing route and heat treatment schedule in comparison to the properties noted earlier for the manufacture of gun barrels. The improved properties appear sufficient to withstand higher explosive loads with the same amount of steel in gun barrel applications.
Keywords: gun barrel steels, IF grade, chemistry, physical properties, thermal and mechanical processing, mechanical properties, ultrasonic testing
Procedia PDF Downloads 383
7136 Flood-Prone Urban Area Mapping Using Machine Learning, a Case Study of M'sila City (Algeria)
Authors: Medjadj Tarek, Ghribi Hayet
Abstract:
This study aims to develop a flood sensitivity assessment tool using machine learning (ML) techniques and geographic information systems (GIS). The importance of this study lies in integrating GIS and ML techniques for mapping flood risks, which helps decision-makers identify the most vulnerable areas and take the necessary precautions against this type of natural disaster. To reach this goal, we study the case of the city of M'sila, which is among the areas most vulnerable to floods. A map of flood-prone areas was drawn following a methodology in which three machine learning algorithms were compared: the XGBoost model, the random forest algorithm, and the k-nearest neighbour algorithm, which achieved accuracies of 97.92%, 95%, and 93.75%, respectively. In mapping the flood-prone areas, the model with the greatest accuracy (XGBoost) was relied upon.
Keywords: geographic information systems (GIS), machine learning (ML), emergency mapping, flood disaster management
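A minimal sketch of the three-model comparison above, assuming scikit-learn and the xgboost package are installed; the synthetic features stand in for GIS-derived flood-conditioning factors (slope, elevation, distance to river, etc.):

```python
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=6, random_state=1)  # GIS stand-in
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for name, model in [("XGBoost", XGBClassifier()),
                    ("RandomForest", RandomForestClassifier()),
                    ("kNN", KNeighborsClassifier())]:
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy {model.score(X_te, y_te):.4f}")
```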
Procedia PDF Downloads 95
7135 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets
Authors: Akshat Kumar, Vidushi
Abstract:
This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were implemented on the cumulative Kepler dataset sourced from the NASA Exoplanet Archive. The best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian naive Bayes, with a test accuracy of 54%; to assess model performance, the F1 score, cross-validation score, and ROC curve were calculated. Based on the learned models, the characteristics significant for confirmed exoplanets were identified, with emphasis on the objects' transit and stellar properties; these characteristics were koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later used to filter the potential KOIs. The paper also calculates the Earth similarity index, based on the planetary radius and equilibrium temperature, for each KOI identified, to aid in their classification.
Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry
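A minimal sketch of a two-parameter Earth similarity index from radius and equilibrium temperature. The weight exponents (0.57 for radius, 5.58 for temperature, each divided by the number of parameters) are commonly cited ESI values assumed here, not taken from the paper:

```python
def esi(radius_earth, teq_kelvin, radius_ref=1.0, teq_ref=288.0):
    """ESI = product over parameters of (1 - |(x - x_ref)/(x + x_ref)|)^(w/n)."""
    terms = [
        (1 - abs((radius_earth - radius_ref) / (radius_earth + radius_ref))) ** (0.57 / 2),
        (1 - abs((teq_kelvin - teq_ref) / (teq_kelvin + teq_ref))) ** (5.58 / 2),
    ]
    out = 1.0
    for t in terms:
        out *= t
    return out

print(f"Earth: {esi(1.0, 288):.3f}")                  # 1.000 by construction
print(f"Hot-Jupiter-like KOI: {esi(11.2, 1500):.3f}") # far from Earth-like
```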
Procedia PDF Downloads 76
7134 Multiphase Equilibrium Characterization Model for Hydrate-Containing Systems Based on Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior; thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate-phase parameters. The trust-region method, combined with the non-iterative eigenvalue approach to the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial). Hundreds of published equilibrium measurements on hydrate-containing systems were collected to act as the reference group for the accuracy test. The comparison shows that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the optimization step's direction and size simultaneously, fast solution progress can be obtained, and the comparison shows that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods. The non-iterative eigenvalue-problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving calculation efficiency. A new thermodynamic framework for the multiphase flash of hydrate-containing systems has been constructed in this work, and sensitivity analyses and numerical experiments have been carried out to prove the accuracy and efficiency of the model. Furthermore, the model is simple to implement on top of the thermodynamic models currently used in the oil and gas industry.
Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
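A minimal sketch of the trust-region sub-problem itself, min g·p + ½ p·Bp subject to ||p|| ≤ Δ, solved here via an eigendecomposition of B plus a 1-D root-find on the secular equation. Note this is a simpler stand-in for exposition, not the non-iterative generalized-eigenvalue method the paper adopts, and the degenerate "hard case" is ignored for brevity:

```python
import numpy as np
from scipy.optimize import brentq

def trs(B, g, delta):
    w, Q = np.linalg.eigh(B)                  # B = Q diag(w) Q^T
    gq = Q.T @ g
    if w.min() > 0:
        p_newton = -Q @ (gq / w)
        if np.linalg.norm(p_newton) <= delta:
            return p_newton                   # interior (unconstrained) minimizer
    # Boundary case: find mu > -w_min with ||p(mu)|| = delta.
    norm_gap = lambda mu: np.sqrt(np.sum((gq / (w + mu)) ** 2)) - delta
    lo = max(0.0, -w.min()) + 1e-12
    hi = lo + 1.0
    while norm_gap(hi) > 0:                   # bracket the root
        hi *= 2
    mu = brentq(norm_gap, lo, hi)
    return -Q @ (gq / (w + mu))

B = np.array([[2.0, 0.5], [0.5, -1.0]])       # indefinite toy Hessian
g = np.array([1.0, 1.0])
print(trs(B, g, delta=0.8))                   # step of length 0.8 on the boundary
```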
Procedia PDF Downloads 173
7133 Application of an Educational Program for Al Jouf University Students regarding Scientific Writing and Presentation Skills
Authors: Fatma Abdel Moneim Al Tawil
Abstract:
This study was undertaken to evaluate an educational program on scientific writing and presentation skills among university students. This interventional study used a one-group pretest/posttest design and was conducted among four colleges of Al Jouf University in Saudi Arabia. A baseline assessment of students was conducted to develop the educational program, and the one-group pretest/posttest study was then used to evaluate the program's effectiveness. A three-part evaluation sheet with a total score of 30 was used with 113 students for the development of the program and 52 students for the pretest/posttest phase. The Wilcoxon signed-ranks test showed a statistically significant improvement in the combined overall program skills score from a median of 56.7 before the intervention to a median of 86.7 after it (z = 6.231, p < 0.001). After the intervention, 51.9% of students achieved excellent performance, whereas before the intervention no students (0.0%) achieved this score. Regarding scientific writing skills, the Wilcoxon signed-ranks test showed a statistically significant improvement from a median of 60 to a median of 90 (z = 6.122, p < 0.001), with the share of students achieving excellent performance rising from none to 73.1%. Regarding oral presentation skills, the test showed a statistically significant improvement from a median of 50 to a median of 80 (z = 6.153, p < 0.001), with the share of students achieving excellent performance rising from none to 48.1%. Such an educational program needs to be incorporated into classroom delivery of the students' curriculum, and a scientific writing skills book should be developed and recommended as a basic educational strategy for all university faculties.
Keywords: scientific writing, presentation skills, university students, educational program
Procedia PDF Downloads 453
7132 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: artificial neural network multilayer perceptron (ANN-MLP), artificial neural network radial basis function (ANN-RBF), logistic regression (LR), and support vector machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulter classification). However, the best accuracy does not always represent the best technique: for instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false-positive classifications. All these intrinsic details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
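A minimal sketch of the attribute coding step described above: thermometer coding for numerical attributes (one cumulative bit per threshold passed) and dummy coding for nominal ones. Bin edges and attributes are illustrative, not the bank's records:

```python
import numpy as np
import pandas as pd

def thermometer(values, edges):
    """Each value becomes a cumulative 0/1 vector: one bit per threshold passed."""
    return np.array([[1 if v >= e else 0 for e in edges] for v in values])

revenue = [120, 480, 950]                      # placeholder numerical attribute
print(thermometer(revenue, edges=[100, 250, 500, 750]))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 1]]

sector = pd.Series(["retail", "industry", "retail"])  # placeholder nominal attribute
print(pd.get_dummies(sector))                          # dummy (one-hot) coding
```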
Procedia PDF Downloads 104
7131 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; and ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves the prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
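A minimal sketch of an ordinal-aware splitting measure in the spirit of the WIGR described above: a gain ratio whose entropy terms weight each class by its ordinal distance from the subset's median class, so mixing far-apart grades costs more than mixing adjacent ones. This is a schematic interpretation for illustration, not the authors' exact definition:

```python
import numpy as np

def weighted_entropy(labels):
    labels = np.asarray(labels)
    med = np.median(labels)
    classes, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    w = 1.0 + np.abs(classes - med)      # heavier penalty for classes far from the median
    return -(w * p * np.log2(p)).sum()

def wigr(parent, splits):
    n = len(parent)
    children = sum(len(s) / n * weighted_entropy(s) for s in splits)
    gain = weighted_entropy(parent) - children
    split_info = -sum(len(s) / n * np.log2(len(s) / n) for s in splits)
    return gain / split_info if split_info else 0.0

# Ordinal grades 0 < 1 < 2: a split that separates far-apart grades scores higher.
print(wigr([0, 0, 1, 2, 2, 2], ([0, 0, 1], [2, 2, 2])))
```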
Procedia PDF Downloads 102
7130 Solutions for Large Diameter Piles Stiffness Used in Offshore Wind Turbine Farms
Authors: M. H. Aissa, Amar Bouzid Dj
Abstract:
As is well known, many countries are now planning to build new wind farms with high-capacity turbines of up to 5 MW. Consequently, the size of the foundations increases. These structures are subject to fatigue damage from environmental loading, mainly due to wind and waves, as well as from cyclic loading imposed at the rotational frequency (1P), through mass and aerodynamic imbalances, and at the blade-passing frequency (3P) of the wind turbine, which makes their dynamic behaviour very sensitive. That is why the natural frequency must be determined accurately from the existing soil data and the foundation stiffness, both sources of uncertainty, in order to avoid resonance of the system. This paper presents analytical expressions for the stiffness of large-diameter pile foundations under linear soil behaviour for different soil stiffness profiles. To check the accuracy of the proposed formulas, a mathematical modelling approach based on non-dimensional parameters is used to calculate the natural frequency, taking into account soil-structure interaction (SSI), and the results are compared with the p-y method and with frequencies measured in North Sea wind farms.
Keywords: offshore wind turbines, semi analytical FE analysis, p-y curves, piles foundations
Procedia PDF Downloads 468
7129 Applying Quadrant Analysis in Identifying Business-to-Business Customer-Driven Improvement Opportunities in Third Party Logistics Industry
Authors: Luay Jum'a
Abstract:
Many challenges face third-party logistics (3PL) providers in the domestic and global markets, creating a volatile decision-making environment. Challenges such as managing changes in consumer behaviour, demanding customer expectations, and time compression have turned into complex problems for 3PL providers. Since the movement towards increased outsourcing outpaces the movement towards insourcing, the need to achieve a competitive advantage over competitors in the 3PL market increases. As this trend continues to grow, areas of strength and improvement, highlighted through the analysis of the logistics service quality (LSQ) factors that lead to B2B customer satisfaction, have become a priority for 3PL companies. Consequently, 3PL companies are increasingly focusing on the most important issues from the perspective of their customers and relying more on this information in making their managerial decisions. Therefore, this study is concerned with providing guidance for improving LSQ levels in the context of the 3PL industry in Jordan. The study focused on the most important LSQ factors and used a managerial tool that guides 3PL companies in making LSQ improvements based on a quadrant analysis of two main dimensions: LSQ declared importance and LSQ inferred importance. Although a considerable amount of research has been conducted to investigate the relationship between LSQ and customer satisfaction, there remains a lack of managerial tools to aid in the process of LSQ improvement decision-making. Moreover, a main advantage of the trend towards using 3PL service providers is the realised percentage of cost reduction in the total cost of logistics operations and the incremental improvement in customer service. In this regard, a managerial tool that helps 3PL service providers manage the LSQ factor portfolio effectively and efficiently would be a great investment. One way of suggesting LSQ improvement actions for 3PL service providers is via the adoption of analysis tools that perform attribute categorisation, such as the importance-performance matrix. With the above in mind, the use of quadrant analysis provides a valuable opportunity for 3PL service providers to identify improvement opportunities, as the importance of customer service attributes or factors is identified by two different techniques that complement each other. The data were collected through a survey, and 293 questionnaires were returned from business-to-business (B2B) customers of 3PL companies in Jordan. The results showed that the LSQ factors vary in their importance and that 3PL companies should focus on some LSQ factors more than others. Moreover, the ordering-procedures and timeliness/responsiveness LSQ factors are considered crucial in 3PL businesses and therefore need more focus and development by 3PL service providers in the Jordanian market.
Keywords: logistics service quality, managerial decisions, quadrant analysis, third party logistics service provider
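A minimal sketch, assuming matplotlib, of the quadrant analysis itself: each LSQ factor is plotted by declared importance (stated by customers) against inferred importance (e.g., its correlation with overall satisfaction), with the axes split at the means into four action quadrants. The factor scores below are illustrative, not the survey's results:

```python
import matplotlib.pyplot as plt

factors = {"ordering procedures": (4.6, 0.62), "timeliness": (4.7, 0.70),
           "personnel contact": (4.1, 0.35), "order condition": (3.8, 0.55)}
declared = [v[0] for v in factors.values()]
inferred = [v[1] for v in factors.values()]

fig, ax = plt.subplots()
ax.scatter(declared, inferred)
for name, (x, y) in factors.items():
    ax.annotate(name, (x, y))
ax.axvline(sum(declared) / len(declared), linestyle="--")  # quadrant boundaries
ax.axhline(sum(inferred) / len(inferred), linestyle="--")
ax.set_xlabel("Declared importance")
ax.set_ylabel("Inferred importance")
ax.set_title("LSQ factor quadrant analysis")
plt.savefig("quadrants.png")
```

Factors landing high on both axes are the customer-driven improvement priorities; factors high on declared but low on inferred importance suggest possible over-investment.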
Procedia PDF Downloads 127