Search results for: iterative methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15538

14968 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many fields frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling such data. When multilevel modelling is combined with a binary response, estimation becomes complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. Before such a test is used, however, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted in MLwiN (v2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2 and failed for almost all combinations under MQL. The power of the test was adequate for most combinations under all estimation methods except MQL1. Finally, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared across the resulting models.
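
As a toy illustration (not the authors' MLwiN study), the Type-I error check described above amounts to counting how often the test rejects when the null model is true; a minimal sketch, assuming uniformly distributed p-values under the null:

```python
import random

def empirical_type_i_error(p_values, alpha=0.05):
    """Fraction of null simulations in which the test rejects at level alpha."""
    return sum(p < alpha for p in p_values) / len(p_values)

# Under a true null, p-values are uniform on (0, 1), so the observed
# rejection rate should sit close to the nominal alpha.
random.seed(42)
null_p = [random.random() for _ in range(10_000)]
rate = empirical_type_i_error(null_p, alpha=0.05)
```

A test that "maintains" Type-I error keeps this rate near alpha; the MQL failures reported above correspond to rates drifting far from the nominal level.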

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 142
14967 A Comparison Study of Different Methods Used in the Detection of Giardia lamblia on Fecal Specimen of Children

Authors: Muhammad Farooq Baig

Abstract:

Objective: The purpose of this study was to compare results obtained using a single fecal specimen for ova-and-parasite (O&P) examination, direct immunofluorescence assay (DFA), and two conventional staining methods. Design: One hundred and fifty fecal specimens from children were collected and examined by each method. The O&P examination and the DFA were used as the reference methods. Setting: The study was performed at the laboratory of the Basic Medical Science Institute, JPMC, Karachi. Patients or Other Participants: The fecal specimens were collected from children with a suspected Giardia lamblia infection. Main Outcome Measures: (1) the amount of agreement and disagreement between methods; (2) the presence of giardiasis in our population; (3) the sensitivity and specificity of each method. Results: There were 45 (30%) positive and 105 (70%) negative specimens on DFA, 41 (27.4%) positive and 109 (72.6%) negative on the iodine method, and 34 (22.6%) positive and 116 (77.4%) negative on the saline method. The sensitivity and specificity of DFA in comparison to the iodine method were 92.2% and 92.7%, respectively. The sensitivity and specificity of DFA in comparison to the saline method were 91.2% and 87.9%, respectively. The sensitivities of the iodine and saline methods in comparison to DFA were 82.2% and 68.8%, respectively. There is a marked difference in sensitivity between DFA and the conventional methods. Conclusion: The study supported the findings of other investigators who concluded that the DFA method has the greater sensitivity. The immunologic methods were more efficient and quicker than the conventional O&P method.
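
The sensitivity and specificity figures quoted above follow from standard 2x2 contingency calculations; a minimal sketch (the counts below are hypothetical, not the study's raw table):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical 2x2 counts comparing an index test against a reference method:
# 38 true positives, 7 false positives, 3 false negatives, 102 true negatives.
sens, spec = sensitivity_specificity(tp=38, fp=7, fn=3, tn=102)
```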

Keywords: direct immunofluorescence assay (DFA), ova and parasite (O&P), Giardia lamblia, children, medical science

Procedia PDF Downloads 423
14966 Evaluation of Simple, Effective and Affordable Processing Methods to Reduce Phytates in the Legume Seeds Used for Feed Formulations

Authors: N. A. Masevhe, M. Nemukula, S. S. Gololo, K. G. Kgosana

Abstract:

Background and Study Significance: Legume seeds are important in agriculture, as they are used for feed formulations due to their nutrient density, low cost, and easy accessibility. Although they are important sources of energy, proteins, carbohydrates, vitamins, and minerals, they contain abundant quantities of anti-nutritive factors that reduce the bioavailability of nutrients, the digestibility of proteins, and mineral absorption in livestock. However, the removal of these factors is costly, as it requires expensive state-of-the-art techniques such as high-pressure and thermal processing. Basic Methodologies: The aim of the study was to investigate cost-effective methods that can be used to reduce the inherent phytates, as putative antinutrients, in legume seeds. The seeds of Arachis hypogaea, Pisum sativum, and Vigna radiata L. were subjected to the single processing methods, viz. raw seeds plus dehulling (R+D), soaking plus dehulling (S+D), ordinary cooking plus dehulling (C+D), infusion plus dehulling (I+D), autoclaving plus dehulling (A+D), and microwaving plus dehulling (M+D), and to five combined methods (S+I+D; S+A+D; I+M+D; S+C+D; S+M+D). All the processed seeds were dried, ground into powder, extracted, and analyzed on a microplate reader to determine the percentage of phytates per dry mass of the legume seeds. Phytic acid was used as a positive control, and one-way ANOVA was used to determine significant differences between the means of the processing methods at a threshold of 0.05. Major Findings: The processing methods gave percentage yield ranges of 39.1-96%, 67.4-88.8%, and 70.2-93.8% for V. radiata, A. hypogaea, and P. sativum, respectively. Though the raw seeds contained the highest contents of phytates, ranging between 0.508 and 0.527% as expected, R+D resulted in a slightly lower phytate percentage range of 0.469-0.485%, while the other processing methods resulted in phytate contents below 0.35%.
The M+D and S+M+D methods showed low phytate percentage ranges of 0.276-0.296% and 0.272-0.294%, respectively, with the lowest percentage yield determined for S+M+D of P. sativum. Furthermore, these results were found to be significantly different (p < 0.05). Since phytates cause micronutrient deficits by chelating important minerals such as calcium, zinc, iron, and magnesium, and cannot be digested by ruminants, their reduction may enhance nutrient bioavailability. Concluding Statement: Although assessment of the nutritive aspects of the processed legume seeds is still in progress, the M+D and S+M+D methods, which significantly reduced the phytates in the investigated legume seeds, may be recommended to local farmers and feed-producing industries so as to enhance animal health and production at an affordable cost.
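
The one-way ANOVA used to compare method means can be sketched in a few lines; the replicate phytate values below are hypothetical, purely to show the calculation:

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical replicate phytate percentages for three processing methods:
f_stat = one_way_anova_f([
    [0.51, 0.52, 0.53],   # raw seeds
    [0.47, 0.48, 0.47],   # R+D
    [0.28, 0.29, 0.27],   # M+D
])
```

A large F relative to the F(k-1, n-k) critical value at the 0.05 threshold indicates significant differences between method means.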

Keywords: anti-nutritive factors, extraction, legume seeds, phytate

Procedia PDF Downloads 28
14965 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data

Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard

Abstract:

Accurate estimation of Remaining Useful Life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a barrier to the critical diagnostic decisions involved in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency in machine learning methods' decision-making mechanisms based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the prediction of the RUL is done by a Random Forest model. The workflow involves steps such as data gathering, feature engineering, model training, and evaluation, with the datasets for these critical components trained and evaluated independently. While the predictions served are suitable and their performance metrics reasonably good, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of the decision-maker or the maintenance teams. This is followed, in the second phase, by global explanations using SHAP and local explanations using LIME to bridge the gap in reliability within industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. The proposed framework, in its third component, incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation.
This allows the model not only to predict failures but also to present reasons, from the key sensor features linked to possible failure mechanisms, to relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and effective preventive measures, and this step contributes to making the system more explainable. In a further stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximate the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior. If the feature explanations obtained from the surrogate model are cross-validated with the primary model, the insights derived become more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through the development of a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system provides explanations for its predictions, along with active alerts, maintenance personnel can make informed decisions about the correct interventions to extend the life of critical machinery.
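
As a rough, hypothetical illustration of the kind of model-agnostic global explanation the framework relies on (SHAP itself requires its library; permutation importance, shown here on a toy model, is a simpler cousin that also scores features by their effect on predictions):

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Score drop when one feature's column is shuffled: a simple,
    model-agnostic measure of global feature importance."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - metric(y, [model(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model": an RUL proxy driven only by the first sensor feature,
# so shuffling the second feature should change nothing.
model = lambda row: 2.0 * row[0]
X = [[i, (i * 7) % 5] for i in range(20)]
y = [2.0 * xi for xi, _ in X]
neg_mse = lambda yt, yp: -sum((a - b) ** 2 for a, b in zip(yt, yp)) / len(yt)
imp = permutation_importance(model, X, y, neg_mse)
```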

Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset

Procedia PDF Downloads 6
14964 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distributed Generators

Authors: Lakshya Bhat, Anubhav Shrivastava, Shiva Rudraswamy

Abstract:

There has been a formidable interest in the area of Distributed Generation in recent times. Distributed Generators supply a wide range of loads and offer better efficiency too. The major disadvantage of Distributed Generation, voltage control, is highlighted in this paper. The paper addresses voltage control at buses in the IEEE 14 Bus system by regulating reactive power. An analysis is carried out by selecting the most optimal location for placing the Distributed Generators through load flow analysis and observing where the voltage profile rises. MATLAB programming is used for simulation of the voltage profile in the respective buses after introduction of the DGs. A tolerance limit of +/-5% of the base value has to be maintained. To maintain this tolerance limit, 3 methods are used. A sensitivity analysis of the 3 methods for voltage control is carried out to determine the priority among them.
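
The +/-5% tolerance check described above can be sketched as a simple per-unit screen; the 14-bus voltage profile below is hypothetical:

```python
def buses_outside_tolerance(voltages_pu, tol=0.05):
    """Return 1-based bus indices whose per-unit voltage leaves the band."""
    return [i for i, v in enumerate(voltages_pu, start=1)
            if not (1 - tol <= v <= 1 + tol)]

# Hypothetical per-unit voltages for a 14-bus system after DG placement:
profile = [1.00, 1.01, 0.97, 0.94, 1.02, 1.06, 0.99,
           1.00, 0.96, 1.03, 0.98, 1.00, 1.01, 0.95]
violations = buses_outside_tolerance(profile)  # buses needing voltage control
```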

Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis

Procedia PDF Downloads 703
14963 The Effects of Extraction Methods on Fat Content and Fatty Acid Profiles of Marine Fish Species

Authors: Yesim Özogul, Fethiye Takadaş, Mustafa Durmus, Yılmaz Ucar, Ali Rıza Köşker, Gulsun Özyurt, Fatih Özogul

Abstract:

It has been well documented that polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), have beneficial effects on health, regarding the prevention of cardiovascular diseases, cancer, and autoimmune disorders, the development of the brain and retina, the treatment of major depressive disorder, etc. An adequate intake of omega PUFAs is therefore essential, and marine fish are generally the richest sources of PUFAs in the human diet. Thus, this study was conducted to evaluate the efficiency of different extraction methods (Bligh and Dyer, Soxhlet, microwave, and ultrasonic) on the fat content and fatty acid profiles of marine fish species (Mullus barbatus, Upeneus moluccensis, Mullus surmuletus, Anguilla anguilla, Pagellus erythrinus, and Saurida undosquamis). The fish were caught by trawl in the Mediterranean Sea and immediately iced. They were then transported to the laboratory on ice and stored at -18 °C in a freezer until the day of analysis. After lipid extraction by the different methods, the lipid samples were converted to their constituent fatty acid methyl esters. The fatty acid composition was analysed on a GC Clarus 500 with an autosampler (Perkin Elmer, Shelton, CT, USA) equipped with a flame ionization detector and a fused silica capillary SGE column (30 m x 0.32 mm ID, BP20, 0.25 µm film thickness, USA). The results showed that there were significant differences (P < 0.05) in the fatty acids of all species and that the extraction methods affected the fat contents and fatty acid profiles of the fish species.

Keywords: extraction methods, fatty acids, marine fish, PUFA

Procedia PDF Downloads 267
14962 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling until a sufficiently high total number of examples is reached. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed on semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data. Moreover, similar performance was achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach yields a dataset with a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
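
The iterative select-then-label loop can be sketched as follows; the scalar "images" and distance-based novelty score below are stand-ins for the paper's U-shaped network scoring:

```python
def iterative_label(images, novelty_score, batch_size, rounds):
    """Greedy active-labeling loop: each round, rank the unlabeled pool by
    a novelty score and hand the top batch to the annotator."""
    labeled, pool = [], list(images)
    for _ in range(rounds):
        pool.sort(key=lambda img: novelty_score(img, labeled), reverse=True)
        batch, pool = pool[:batch_size], pool[batch_size:]
        labeled.extend(batch)   # stand-in for manual annotation
    return labeled, pool

# Toy novelty: distance of a scalar "image" from everything already labeled,
# so each round picks the item least like what has been seen.
score = lambda x, labeled: min((abs(x - l) for l in labeled), default=float("inf"))
labeled, remaining = iterative_label([0, 1, 2, 10, 11, 20], score,
                                     batch_size=1, rounds=3)
```

Note how the selections spread across the clusters (0, then 20, then 10) instead of exhausting one cluster first, which is the diversity effect the abstract describes.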

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 136
14961 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
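
The alternating idea behind missSOM can be caricatured (this is not the authors' algorithm, and a real SOM also has a neighborhood structure) with a k-means-style loop that alternates prototype fitting with imputation from the assigned prototype:

```python
def alternate_fit_impute(data, k=2, iters=20):
    """Caricature of the alternating scheme: (i) assign each row to its
    nearest prototype using observed entries only, (ii) update prototypes
    from observed entries, (iii) impute missing entries (None) from the
    assigned prototype."""
    d = len(data[0])
    protos = [list(x if x is not None else 0.0 for x in row) for row in data[:k]]
    assign = [0] * len(data)
    for _ in range(iters):
        for i, row in enumerate(data):
            def dist(p):
                return sum((x - pj) ** 2 for x, pj in zip(row, p) if x is not None)
            assign[i] = min(range(k), key=lambda c: dist(protos[c]))
        for c in range(k):
            members = [i for i in range(len(data)) if assign[i] == c]
            for j in range(d):
                vals = [data[i][j] for i in members if data[i][j] is not None]
                if vals:
                    protos[c][j] = sum(vals) / len(vals)
    imputed = [[x if x is not None else protos[assign[i]][j]
                for j, x in enumerate(row)] for i, row in enumerate(data)]
    return imputed, protos

# Two well-separated clusters; missing entries get cluster-typical values.
rows = [[1.0, 1.1], [0.9, None], [10.0, 10.2], [None, 9.8]]
imputed, protos = alternate_fit_impute(rows)
```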

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 151
14960 Simple Procedure for Probability Calculation of Tensile Crack Occurring in Rigid Pavement: A Case Study

Authors: Aleš Florian, Lenka Ševelová, Jaroslav Žák

Abstract:

Formation of tensile cracks in the concrete slabs of rigid pavement can be (among other things) the initiation point of other, more serious failures, which can ultimately lead to complete degradation of the concrete slab and thus of the whole pavement. Two measures can be used for the reliability assessment of this phenomenon: the probability of failure and/or the reliability index. Different methods can be used for their calculation; the simple ones are moment methods and simulation techniques. Two such methods, the FOSM method and the Simple Random Sampling method, are verified and compared. The influence of information about the probability distributions and statistical parameters of the input variables, as well as of the limit state function, on the calculated reliability index and failure probability is studied at three points on the lower surface of concrete slabs of an older type of rigid pavement formerly used in the Czech Republic.
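
For a linear limit state g = R - S with independent normal variables, the FOSM reliability index and the corresponding failure probability reduce to closed form; a sketch with hypothetical strength and stress statistics (not the paper's pavement data):

```python
import math

def fosm_beta(mean_r, sd_r, mean_s, sd_s):
    """FOSM for the linear limit state g = R - S with independent R, S:
    beta = mu_g / sigma_g, and Pf = Phi(-beta) under normality."""
    mu_g = mean_r - mean_s
    sigma_g = math.sqrt(sd_r ** 2 + sd_s ** 2)
    beta = mu_g / sigma_g
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal CDF at -beta
    return beta, pf

# Hypothetical tensile strength R and slab stress S (MPa):
beta, pf = fosm_beta(mean_r=4.5, sd_r=0.5, mean_s=3.0, sd_s=0.4)
```

Simulation techniques such as Simple Random Sampling estimate the same Pf by counting sampled realizations with g < 0.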

Keywords: failure, pavement, probability, reliability index, simulation, tensile crack

Procedia PDF Downloads 546
14959 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue in planning wind power generation facilities. Accuracy in wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months at time intervals of ten minutes. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers, with each hidden layer containing from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City, because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
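
The configuration search described (one to three hidden layers, 1 to 10 neurons per layer) can be enumerated directly; a sketch confirming the 1,110 count:

```python
from itertools import product

def ann_configurations(max_layers=3, max_neurons=10):
    """Enumerate every hidden-layer layout with 1..max_layers layers and
    1..max_neurons neurons per layer, as tuples of layer widths."""
    configs = []
    for depth in range(1, max_layers + 1):
        configs.extend(product(range(1, max_neurons + 1), repeat=depth))
    return configs

# 10 + 10**2 + 10**3 = 1,110 candidate networks, matching the abstract;
# the reported best layout (9, 6, 5) is one of them.
configs = ann_configurations()
```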

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 124
14958 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we perform variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we perform model selection with only the variables selected in stage I and compare the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases in which all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were considered and compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method performs best in both the independent and dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
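
For intuition, stage-I screening can be illustrated with the classical (non-Bayesian, non-spatial) Lasso via plain coordinate descent; the toy data and penalty below are hypothetical, and the Bayesian spatial machinery of the paper is deliberately omitted:

```python
def soft_threshold(rho, lam):
    """Shrinkage operator at the heart of Lasso coordinate descent."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent Lasso for (1/2)||y - Xb||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual that excludes feature j's current contribution
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: y depends on feature 0 only; feature 1 is noise and is dropped.
X = [[1, 0.1], [2, -0.2], [3, 0.1], [4, -0.1], [5, 0.2]]
y = [2.1, 3.9, 6.0, 8.1, 9.9]
beta = lasso_cd(X, y, lam=1.0)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-8]  # stage-I output
```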

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 143
14957 A Quantitative Evaluation of Text Feature Selection Methods

Authors: B. S. Harish, M. B. Revanasiddappa

Abstract:

Due to the rapid growth of text documents in digital form, automated text classification has become an important research area over the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume, and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency, and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tf-idf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM), and Symbolic Feature Selection (SFS), for classifying text documents. We evaluated all the feature selection methods on standard datasets such as 20 Newsgroups, the 4 University dataset, and Reuters-21578.
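
Two of the compared scores (Chi-Square and tf-idf) have simple closed forms; a sketch with hypothetical term/class counts:

```python
import math

def chi_square(n11, n10, n01, n00):
    """Chi-Square score of a term for a class from 2x2 contingency counts:
    n11 = term & class, n10 = term & not class,
    n01 = no term & class, n00 = no term & not class."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den

def tf_idf(tf, df, n_docs):
    """Raw term frequency weighted by inverse document frequency."""
    return tf * math.log(n_docs / df)

# Hypothetical term appearing in 49 of 50 in-class docs and 3 of 950 others:
chi2 = chi_square(n11=49, n10=3, n01=1, n00=947)  # strongly class-indicative
```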

Keywords: classifiers, feature selection, text classification

Procedia PDF Downloads 458
14956 Development of ELF Passive Shielding Application Using Magnetic Aqueous Substrate

Authors: W. N. L. Mahadi, S. N. Syed Zin, W. A. R. Othman, N. A. Mohd Rasyid, N. Jusoh

Abstract:

Public concern over Extremely Low Frequency (ELF) Electromagnetic Field (EMF) exposure has persisted over the last few decades. Electrical substations and high-tension rooms (HT rooms) in commercial buildings are among the contributing sources emanating ELF magnetic fields. This paper discusses various shielding methods conventionally used to mitigate ELF exposure. Nevertheless, the standard methods have been found impractical and incapable of meeting current shielding demands. In response, remarkable research has been conducted in an effort to invent novel methods that are more convenient and efficient, such as magnetic aqueous shielding or paint, textile, and paper shielding. A mitigation method using a magnetic aqueous substrate based on Manganese Zinc Ferrite (Mn0.4Zn0.6Fe2O4) is proposed in this paper for further investigation. The magnetic field and flux distribution inside the aqueous magnetic material are evaluated to optimize shielding against ELF-EMF exposure.
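
Shielding effectiveness for magnetic fields is conventionally expressed in decibels; a sketch with hypothetical field levels (not measurements from this work):

```python
import math

def shielding_effectiveness_db(h_incident, h_transmitted):
    """Magnetic shielding effectiveness SE = 20*log10(H_in / H_out) in dB."""
    return 20 * math.log10(h_incident / h_transmitted)

# A hypothetical aqueous ferrite layer attenuating a 50 Hz field
# from 10 uT down to 2 uT:
se = shielding_effectiveness_db(10e-6, 2e-6)  # about 14 dB
```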

Keywords: ELF shielding, magnetic aqueous substrate, shielding effectiveness, passive shielding, magnetic material

Procedia PDF Downloads 531
14955 TimeTune: Personalized Study Plans Generation with Google Calendar Integration

Authors: Chevon Fernando, Banuka Athuraliya

Abstract:

The purpose of this research is to provide a solution to students' time management, which often becomes an issue because students must balance their studies with personal commitments. "TimeTune," an AI-based study planner that builds study timeframes by combining modern machine learning algorithms with calendar applications, is presented as that solution. The research focuses on the development of LSTM models that connect to the Google Calendar API to generate learning paths suited to each student's daily life and study history. A key finding of this research is the successful construction of an LSTM model to predict optimal study times which, integrated with real-time Google Calendar data, generates personalized, customized timetables automatically. The methodology encompasses Agile development practices and Object-Oriented Analysis and Design (OOAD) principles, focusing on user-centric design and iterative development. By adopting this method, students can significantly reduce the stress associated with poor study habits and time management. In conclusion, "TimeTune" represents an advance in personalized education technology: its application of machine learning algorithms and calendar integration promises students both better-managed studies and the stress reduction that comes with a balanced academic and personal life.
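
One building block of such a planner, finding calendar gaps long enough to study in, can be sketched without any machine learning; the intervals below are hypothetical, and the ranking of slots that the paper assigns to an LSTM is omitted:

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, min_length=timedelta(minutes=30)):
    """Gaps between sorted busy intervals that are long enough to study in.
    A scheduler would then rank these gaps by predicted study quality."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start - cursor >= min_length:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_length:
        slots.append((cursor, day_end))
    return slots

# Hypothetical day: lecture 9-10, lab 13-15, planning window 8-18.
day = datetime(2024, 5, 6)
busy = [(day.replace(hour=9), day.replace(hour=10)),
        (day.replace(hour=13), day.replace(hour=15))]
slots = free_slots(busy, day.replace(hour=8), day.replace(hour=18))
```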

Keywords: personalized learning, study planner, time management, calendar integration

Procedia PDF Downloads 48
14954 A Comparative Analysis of Traditional and Advanced Methods in Evaluating Anti-corrosion Performance of Sacrificial and Barrier Coatings

Authors: Kazem Sabet-Bokati, Ilia Rodionov, Marciel Gaier, Kevin Plucknett

Abstract:

Protective coatings play a pivotal role in mitigating corrosion and preserving the integrity of metallic structures exposed to harsh environmental conditions. The diversity of corrosive environments necessitates the development of protective coatings suitable for various conditions. Accurately selecting and interpreting analysis methods is crucial in identifying the most suitable protective coatings for the various corrosive environments. This study conducted a comprehensive comparative analysis of traditional and advanced methods to assess the anti-corrosion performance of sacrificial and barrier coatings. The protective performance of pure epoxy, zinc-rich epoxy, and cold galvanizing coatings was evaluated using salt spray tests, together with electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization methods. The performance of each coating was thoroughly differentiated under both atmospheric and immersion conditions. The distinct protective performance of each coating against atmospheric corrosion was assessed using traditional standard methods. Additionally, the electrochemical responses of these coatings in immersion conditions were systematically studied, and a detailed discussion on interpreting the electrochemical responses is provided. Zinc-rich epoxy and cold galvanizing coatings offer superior anti-corrosion performance against atmospheric corrosion, while the pure epoxy coating excels in immersion conditions.

Keywords: corrosion, barrier coatings, sacrificial coatings, salt-spray, EIS, polarization

Procedia PDF Downloads 66
14953 Theoretical Comparisons and Empirical Illustration of Malmquist, Hicks–Moorsteen, and Luenberger Productivity Indices

Authors: Fatemeh Abbasi, Sahand Daneshvar

Abstract:

Productivity is one of the essential goals of companies seeking to improve performance and, as a strategy-oriented measure, forms the basis of a company's economic growth. The history of productivity goes back centuries, but it was in the early twentieth century that most researchers defined productivity as the relationship between a product and the factors used in its production. Productivity as the optimal use of available resources, meaning "more output using less input," can increase companies' capacity for economic growth and prosperity. A quality life based on economic progress likewise depends on productivity growth in society. Therefore, productivity is a national priority for any developed country. Methods for measuring productivity growth can be divided into parametric and non-parametric methods. Parametric methods rely on the existence of a functional form in their hypotheses, while non-parametric methods do not require a function and are based on empirical evidence. One of the most popular non-parametric methods is Data Envelopment Analysis (DEA), which measures changes in productivity over time. DEA evaluates the productivity of decision-making units (DMUs) based on mathematical models. This method uses multiple inputs and outputs to compare the productivity of similar DMUs such as banks, government agencies, companies, airports, etc. Non-parametric methods are themselves divided into frontier and non-frontier approaches. The Malmquist productivity index (MPI) proposed by Caves, Christensen, and Diewert (1982), the Hicks–Moorsteen productivity index (HMPI) proposed by Bjurek (1996), and the Luenberger productivity indicator (LPI) proposed by Chambers (2002) are powerful tools for measuring productivity changes over time. This study compares the Malmquist, Hicks–Moorsteen, and Luenberger indices theoretically and empirically based on DEA models and reviews their strengths and weaknesses.
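
The Malmquist index referenced above is the geometric mean of two distance-function ratios; a sketch with hypothetical distance values (in practice each D value comes from solving a DEA linear program):

```python
import math

def malmquist_index(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Geometric-mean Malmquist productivity index from four distance
    function values D^s(x_r, y_r): period-s technology, period-r data."""
    return math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))

# Hypothetical output-distance values for one DMU over two periods:
mpi = malmquist_index(d_t_t=0.80, d_t_t1=0.96, d_t1_t=0.72, d_t1_t1=0.90)
# mpi > 1 indicates productivity growth between the two periods.
```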

Keywords: data envelopment analysis, Hicks–Moorsteen productivity index, Luenberger productivity indicator, Malmquist productivity index

Procedia PDF Downloads 194
14952 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distribution Generators

Authors: Lakshya Bhat, Anubhav Shrivastava, Shivarudraswamy

Abstract:

There has been formidable interest in the area of Distributed Generation in recent times. Distributed Generators serve a wide range of loads and offer better efficiency, but their major disadvantage, the difficulty of voltage control, is highlighted in this paper. The paper addresses voltage control at buses in the IEEE 14 Bus system by regulating reactive power. An analysis is carried out to select the optimum locations for placing the Distributed Generators through load flow analysis, observing where the voltage profile rises. Matlab programming is used to simulate the voltage profile at the respective buses after the introduction of DGs. A tolerance limit of +/-5% of the base value has to be maintained. To maintain this tolerance limit, three methods are used, and a sensitivity analysis of the three voltage-control methods is carried out to determine the priority among them.
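The ±5% tolerance screen described above can be sketched as follows (the bus numbers and per-unit voltages are hypothetical; the paper's Matlab load-flow and sensitivity computations are not reproduced):

```python
def buses_out_of_tolerance(voltages_pu, tol=0.05):
    """Return the buses whose per-unit voltage leaves the +/-tol band around 1.0 p.u."""
    return [bus for bus, v in sorted(voltages_pu.items()) if abs(v - 1.0) > tol]

# Hypothetical post-DG voltage profile for a few buses of a 14-bus system
profile = {4: 0.98, 5: 1.02, 9: 0.94, 14: 1.06}
print(buses_out_of_tolerance(profile))  # → [9, 14]
```

Buses flagged by such a screen are the candidates for reactive-power regulation.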

Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis

Procedia PDF Downloads 587
14951 Measurement of the Dynamic Modulus of Elasticity of Cylindrical Concrete Specimens Used for the Cyclic Indirect Tensile Test

Authors: Paul G. Bolz, Paul G. Lindner, Frohmut Wellner, Christian Schulze, Joern Huebelt

Abstract:

Concrete, as a result of its use as a construction material, is not only subject to static loads but is also exposed to variable, time-variant, and oscillating stresses. In order to ensure the suitability of construction materials for resisting these cyclic stresses, different test methods are used for the systematic fatiguing of specimens, such as the cyclic indirect tensile test. A procedure is presented that allows the estimation of the degradation of cylindrical concrete specimens during the cyclic indirect tensile test by measuring the dynamic modulus of elasticity at different stages of the specimens' fatigue process. Two methods are used in addition to the cyclic indirect tensile test to examine the dynamic modulus of elasticity of cylindrical concrete specimens: one is based on the analysis of eigenfrequencies, whilst the other uses ultrasonic pulse measurements to estimate the material properties. A comparison between the dynamic moduli obtained using the three methods, which operate in different frequency ranges, shows good agreement. The concrete specimens' fatigue process can therefore be monitored effectively and reliably.
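A common way to obtain a dynamic modulus from an ultrasonic pulse measurement is E = ρv², with a Poisson correction for constrained wave propagation in a bulk specimen. A hedged sketch (the exact relations used in the paper are not given in the abstract, and the material values below are illustrative):

```python
def dynamic_modulus(rho, v, nu=None):
    """Dynamic modulus of elasticity in Pa from density rho (kg/m^3) and
    ultrasonic pulse velocity v (m/s). With Poisson's ratio nu, apply the
    correction for constrained longitudinal wave propagation."""
    if nu is None:
        return rho * v ** 2                       # slender-rod approximation
    return rho * v ** 2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Illustrative values for a normal-weight concrete
E_rod = dynamic_modulus(2400.0, 4000.0)           # ~38.4 GPa
E_bulk = dynamic_modulus(2400.0, 4000.0, nu=0.2)  # ~34.6 GPa
```

The eigenfrequency-based method uses analogous relations between resonant frequency, geometry, and density.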

Keywords: concrete, cyclic indirect tensile test, degradation, dynamic modulus of elasticity, eigenfrequency, fatigue, natural frequency, ultrasonic, ultrasound, Young’s modulus

Procedia PDF Downloads 174
14950 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones

Authors: Vineesh Amin, Ananya Agrawal

Abstract:

In a prolonged period of uncertainty and disruption of the previously normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar that can reform educational models to inform the economy. Dreamverse Learning Lab's IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment that aims to kindle entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase in our life's story, we have a choice that brings us closer to achieving our true potential. This interactive program is thus designed using real-time storytelling principles to empower learners aged 24 and below to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This research paper highlights the framework behind an ongoing scalable, data-oriented, low-tech program to kindle entrepreneurial mindsets in media dark zones, supported by iterative design and prototyping, which reached 13,700+ unique learners who made 59,000+ calls with 183,900+ minutes of listening to content pieces of around 3 to 4 minutes, with a last monitored (March 2022) serious listenership of 34%, within one and a half years of its inception. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models, which have been leveraged to build this end-to-end system.

Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling

Procedia PDF Downloads 209
14949 Airfield Pavements Made of Reinforced Concrete: Dimensioning According to the Theory of Limit States and Eurocode

Authors: M. Linek, P. Nita

Abstract:

In airfield construction to date, pavements made of reinforced concrete have been used very rarely; however, the necessity of using this type of pavement in emergency situations justifies addressing the issue. The paper concerns the dimensioning of airfield pavements made of reinforced concrete and the evaluation of selected dimensioning methods for reinforced concrete slabs intended for airfield pavements. An analysis of slab dimensioning according to the classical theory of limit states has been performed and compared with the results obtained using methods complying with the Eurocode 2 guidelines. The basis of the analysis was a concrete slab of class C35/45 with reinforcement located in the tension zone. Steel bars of 16.0 mm were used as slab reinforcement. From the comparative analysis of the obtained results, conclusions were reached regarding the legitimacy of applying the discussed methods and their design advantages.
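For context, dimensioning a singly reinforced section under Eurocode 2 typically uses the rectangular stress block. A minimal sketch, where the section dimensions and design strengths are illustrative (for C35/45, fcd is about 23.3 MPa with the standard partial factor) and not taken from the paper:

```python
def moment_capacity(As, fyd, fcd, b, d):
    """Design moment capacity (N*m) of a singly reinforced rectangular section
    using the EC2 rectangular stress block (lambda = 0.8, lever arm d - 0.4x).
    As: steel area (m^2); fyd, fcd: design strengths (Pa); b, d: width and
    effective depth (m)."""
    x = As * fyd / (0.8 * b * fcd)   # neutral-axis depth from force equilibrium
    return As * fyd * (d - 0.4 * x)

# C35/45 concrete (fcd ~ 23.3 MPa), B500 steel (fyd ~ 435 MPa), 1 m design strip
M_rd = moment_capacity(As=1.0e-3, fyd=435e6, fcd=23.3e6, b=1.0, d=0.25)  # ~105 kN*m
```

The classical limit-state calculation compared in the paper follows the same equilibrium logic with different stress-block assumptions and safety factors.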

Keywords: reinforced concrete, cement concrete, airport pavements, dimensioning

Procedia PDF Downloads 255
14948 Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance

Authors: R. Ajgou, S. Sbaa, S. Ghendir, A. Chemsa, A. Taleb-Ahmed

Abstract:

Speech enhancement algorithms aim to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport, and train station noise. Simulation results showed improved speech enhancement performance for the tracking of non-stationary noise approach in comparison with the other methods in terms of the PESQ measure. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC).
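PESQ itself requires the ITU-T P.862 reference implementation, but a simpler companion measure often reported alongside it, the global signal-to-noise ratio of an enhanced signal against its clean reference, can be sketched as follows (not the PESQ algorithm; the sample values are illustrative):

```python
import math

def snr_db(clean, processed):
    """Global SNR (dB) of a processed signal against its clean reference
    (both as equal-length sequences of samples)."""
    signal_energy = sum(c * c for c in clean)
    error_energy = sum((c - p) ** 2 for c, p in zip(clean, processed))
    return 10 * math.log10(signal_energy / error_energy)

# A processed signal that zeroes one of four unit samples: 10*log10(4) ~ 6.02 dB
print(round(snr_db([1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 0.0]), 2))  # → 6.02
```

Unlike PESQ, a plain SNR ignores perceptual weighting, which is why PESQ is preferred for ranking enhancement methods.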

Keywords: speech enhancement, PESQ, speaker recognition, MFCC

Procedia PDF Downloads 424
14947 The Effectiveness of Implementing Interactive Training for Teaching Kazakh Language

Authors: Samal Abzhanova, Saule Mussabekova

Abstract:

Today, a new system of education is being created in Kazakhstan in order to develop the education system and meet world-class standards. For this purpose, new requirements and responsibilities have been established for instructors. Students should not be limited to receiving only theoretical knowledge; they should also be encouraged to be competitive and to think creatively and critically, and they should be able to put these skills into practice. These goals can be pursued through the permanent improvement of teaching methods, so a specialist who teaches languages should use up-to-date methods and introduce new technologies. The result of the investigation suggests that interactive teaching is one of the new technologies in this field. This paper aims to provide information about implementing new technologies in the process of language teaching. It discusses the necessity of introducing innovative technologies and techniques for organizing interactive lessons, and considers the structure of the interactive lesson, its conditions and principles, discussions, small-group work, and role-playing games. Interactive methods are carried out with the help of several types of activities, such as working in a team (in groups of two or more people), playing situational or role-playing games, working with different sources of information, discussions, presentations, creative works, learning through solving situational tasks, etc.

Keywords: interactive education, interactive methods, system of education, teaching a language

Procedia PDF Downloads 294
14946 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and Leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
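The Sammon's mapping mentioned above projects high-dimensional biosensing data while minimizing a stress function that weights small original distances heavily. A sketch of the stress being minimized (the data are illustrative, and the full iterative optimization is omitted):

```python
import math

def sammon_stress(high_d, low_d):
    """Sammon stress between pairwise distances in the original space (high_d)
    and in the projection (low_d); 0 means distances are perfectly preserved."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    n = len(high_d)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    d_star = {p: dist(high_d[p[0]], high_d[p[1]]) for p in pairs}  # original
    d_proj = {p: dist(low_d[p[0]], low_d[p[1]]) for p in pairs}    # projected
    total = sum(d_star.values())
    return sum((d_star[p] - d_proj[p]) ** 2 / d_star[p] for p in pairs) / total

# A projection that preserves all pairwise distances has zero stress
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
flat = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(sammon_stress(points, flat))  # → 0.0
```

The 1/d* weighting is what distinguishes Sammon's mapping from plain multidimensional scaling: preserving small local distances matters most.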

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 337
14945 Self-Attention Mechanism for Target Hiding Based on Satellite Images

Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai

Abstract:

Remote sensing data can support decision-making in disaster assessment or disaster relief. The traditional methods for processing sensitive targets in remote sensing mapping are mainly based on manual retrieval and image editing tools, which are inefficient. Methods based on deep learning for sensitive target hiding are faster and more flexible, but they have disadvantages in training time and computational cost. This paper proposes a target hiding model, Self-Attention (SA) Deepfill, which uses self-attention modules to replace part of the gated convolution layers in image inpainting. This reduces the model's computational cost and improves its performance. In addition, free-form masks are added to the model's training to enhance its generality. An experiment on an open remote sensing dataset demonstrated the efficiency of our method. Moreover, experimental comparison shows that the proposed method can train for longer without over-fitting. Finally, compared with existing methods, the proposed model has a lower computational cost and better performance.
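The self-attention modules referred to above implement scaled dot-product attention, softmax(QK^T/sqrt(d))V. A minimal single-head sketch in plain Python (the gated-convolution context and learned weights of SA-Deepfill are not reproduced; the matrices here are illustrative):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def matmul(A, B):
    # Plain-Python matrix multiply
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(q_row, k_row)) / math.sqrt(d) for k_row in K]
              for q_row in Q]
    weights = [softmax(row) for row in scores]   # each row sums to 1
    return matmul(weights, V)

# Illustrative 2-token, 2-dimensional input with identity projection weights
I2 = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention(I2, I2, I2, I2)
```

Each output token is a convex combination of all value vectors, which is what lets attention fill a masked region from globally relevant context rather than only the local receptive field of a convolution.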

Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding

Procedia PDF Downloads 136
14944 Evaluation of the Hepatitis C Virus and Classical and Modern Immunoassays Used Nowadays to Diagnose It in Tirana

Authors: Stela Papa, Klementina Puto, Migena Pllaha

Abstract:

HCV is a hepatotropic RNA virus, transmitted primarily via the blood route, which causes progressive diseases such as chronic hepatitis, liver cirrhosis, and hepatocellular carcinoma, and it is nowadays a global healthcare problem. A variety of immunoassays spanning old and new technologies are applied to detect HCV in our country. These methods include immunochromatography assays (ICA), fluorescence immunoassay (FIA), enzyme-linked fluorescent assay (ELFA), and enzyme-linked immunosorbent assay (ELISA) to detect HCV antibodies in blood serum; the latter is slowly being replaced by more sensitive methods such as rapid automated chemiluminescence immunoassay (CLIA) analyzers. The aim of this study is to estimate HCV infection in carriers and in acute and chronic patients and to evaluate the use of new diagnostic methods. The study was carried out from September 2016 to May 2018, during which 2913 patients were analyzed for the presence of HCV in blood serum samples. The immunoassays performed were ICA, FIA, ELFA, ELISA, and CLIA assays. In conclusion, 82% of the patients in this study were infected with HCV. Diagnostic methods in clinical laboratories are crucial in the early stages of infection, in the management of chronic hepatitis, and in the treatment of patients during their disease.

Keywords: CLIA, ELISA, Hepatitis C virus, immunoassay

Procedia PDF Downloads 153
14943 Experimental Investigation of Soil Corrosion and Electrical Resistance in Depth by Geoelectrical Method

Authors: Seyed Abolhassan Naeini, Maedeh Akhavan Tavakkoli

Abstract:

Determining soil engineering properties is essential for geotechnical problems. Invasive soil survey methods are costly and time-consuming, so geophysical methods can be an excellent choice for determining soil characteristics. In this study, a geoelectric investigation using the Wenner electrode arrangement was used to determine the degree of soil corrosivity in the soil layers of a project site as a case study. The study aims to assess the corrosivity of soil layers to a depth of 5 meters and to find the variation of soil electrical resistivity with depth. For this purpose, the desired points in the study area were marked out, and all measurements were taken at the specified points. The collected data were processed by standard, accepted methods, and the results are presented as calculation tables and curves of electrical resistivity versus depth.
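In the Wenner arrangement, four equally spaced electrodes give an apparent resistivity of rho = 2*pi*a*R, where a is the electrode spacing and R = V/I the measured resistance. A minimal sketch (the numerical values are illustrative, not the paper's measurements):

```python
import math

def wenner_resistivity(a, resistance):
    """Apparent resistivity (ohm*m) from electrode spacing a (m) and measured
    resistance R = V/I (ohm) in a Wenner four-electrode array."""
    return 2 * math.pi * a * resistance

# Spacing 2 m, measured 5 ohm: ~62.8 ohm*m; lower resistivity
# generally indicates more corrosive soil.
rho = wenner_resistivity(2.0, 5.0)
```

Increasing the spacing a pushes the current deeper, which is how a sounding builds the resistivity-versus-depth curves the abstract describes.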

Keywords: Wenner array, geoelectric, soil corrosion, electrical soil resistance

Procedia PDF Downloads 102
14942 Fragment Domination for Many-Objective Decision-Making Problems

Authors: Boris Djartov, Sanaz Mostaghim

Abstract:

This paper presents a number-based dominance method. The main idea is how to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although other similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and thus, it is geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach it, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is so large. The proposed method will be compared to Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
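The idea of applying Pareto dominance to subsets (fragments) of the objective vector can be sketched as follows; the abstract does not give the exact aggregation rule, so the win-counting below is one plausible illustration, with hypothetical fragment groupings:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fragment_wins(a, b, fragments):
    """Count the objective fragments (index subsets) on which each solution
    Pareto-dominates the other."""
    wins_a = sum(dominates([a[i] for i in f], [b[i] for i in f]) for f in fragments)
    wins_b = sum(dominates([b[i] for i in f], [a[i] for i in f]) for f in fragments)
    return wins_a, wins_b

# Six objectives split into three fragments (e.g. wind, landing distance, fuel)
frags = [(0, 1), (2, 3), (4, 5)]
a = [1.0, 2.0, 5.0, 5.0, 9.0, 1.0]
b = [2.0, 3.0, 5.0, 5.0, 8.0, 2.0]
print(fragment_wins(a, b, frags))  # → (1, 0)
```

Comparing whole subsets, rather than one objective at a time, preserves the trade-off structure within each fragment, which is the distinction the abstract draws against similar methods.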

Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization

Procedia PDF Downloads 91
14941 Ethical Issues around Online Marketing to Children

Authors: Chris Preston

Abstract:

As we devise ever more sophisticated methods of on-line marketing, devising systems that are able to reach into the everyday lives of consumers, we are confronted by a generation of children who face unprecedented intervention by commercial organisations into young minds, via electronic devices, and whether by computer, tablet or phone, such children have been somehow reduced to the status of their devices, with little regard for their well being as individuals. This discussion paper seeks to draw attention to such practice and questions the ethics of digital marketing methods.

Keywords: online marketing to children, online research of children, online targeting of children, consumer rights, ethics

Procedia PDF Downloads 392
14940 Utilization of Reactive Diluents to Improve the Properties of Epoxy Resin as Anticorrosion Coating

Authors: El-Sayed Negim, Ainakulova D. T., Puteri S. M., Khaldun M. Azzam, Bekbayeva L. K., Arpit Goyal, Ganjian E.

Abstract:

Anticorrosion coatings protect metal surfaces from environmental factors, including moisture, oxygen, and gases, that cause corrosion of the metal. Various types of anticorrosion coatings are available, with different properties and application methods. Many researchers have been developing methods to prevent corrosion, and epoxy polymers are among the most widely used owing to their excellent adhesion, chemical resistance, and durability. In this study, reactive diluents based on glycidyl methacrylate (GMA) with either 2-ethylhexyl acrylate (2-EHA) or butyl acrylate (BuA) were synthesized to improve the performance of epoxy resin as an anticorrosion coating. The copolymers were synthesized at a composition ratio of 5/5 by the bulk polymerization technique, using benzoyl peroxide as a catalyst, at 85 °C for 2 hours and then at 90 °C for 30 minutes to complete the polymerization process. The obtained copolymers were characterized by FTIR, viscosity, and thixotropic index. The effect of the copolymers, used as reactive diluents, on the physical and mechanical properties of epoxy resin was investigated. Metal plates coated with the epoxy resins modified with different contents of the copolymers were tested using alkali and salt test methods, and the copolymer based on GMA and BuA showed the best protection efficiency due to the barrier effect of the polymer layer.

Keywords: epoxy, coating, diluent, corrosion, reactive

Procedia PDF Downloads 52
14939 Development of Elementary Literacy in the Czech Republic

Authors: Iva Košek Bartošová

Abstract:

Great attention is being paid to the development of first reading, and thus early literacy skills, in the Czech Republic. Yet inconclusive results from PISA and PIRLS force us to reconsider the teacher's work, his or her roles in the education process, and the methods and forms used in lessons. It is also important to monitor the family environment and the pupils themselves. The aim of this publication is to focus on methods of practicing reading technique and their results in the process of comprehension. The first part of the contribution presents the goals of the development of reading literacy and the methods used in reading practice in some EU countries, followed by a comparison of research carried out with the help of modern eye-tracker technology in 2015 and research conducted at the Institute of Education and Psychological Counselling of the Czech Republic in 2011/12. These are the results of a diagnostic test of reading in the first classes of primary schools taught by the genetic method and by the analytic-synthetic method. The results show that in the first stage of practice there are no statistically significant differences between the researched subjects taught by the different methods of reading practice (using several diagnostic texts focused on reading technique and its comprehension). Different results appear at the end of Grade One and during Grade Two of primary school.

Keywords: elementary literacy, eye tracker device, diagnostic reading tests, reading teaching method

Procedia PDF Downloads 186