Search results for: twinning variant selection

2236 Descent Algorithms for Optimization Algorithms Using q-Derivative

Authors: Geetanjali Panda, Suvrakanti Chakraborty

Abstract:

In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, which use q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and the method is then extended to a global scheme. Moreover, a variant of the practical Newton scheme is also developed by introducing a real sequence. Global convergence of these schemes is proved under some mild conditions. Numerical experiments and graphical illustrations are provided. Finally, performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
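
The abstract does not give the scheme's formulas; as a minimal sketch, a first-order q-gradient descent can be written as follows, where q_partial, the fixed step length lr and the fallback difference are illustrative choices rather than the paper's Newton-like method.

```python
import numpy as np

def q_partial(f, x, i, q=0.9):
    # q-partial derivative: (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i);
    # falls back to a central difference when x_i is near zero.
    xi = x[i]
    if abs(xi) < 1e-12:
        h = 1e-8
        xp, xm = x.copy(), x.copy()
        xp[i], xm[i] = xi + h, xi - h
        return (f(xp) - f(xm)) / (2 * h)
    xq = x.copy()
    xq[i] = q * xi
    return (f(xq) - f(x)) / ((q - 1) * xi)

def q_gradient_descent(f, x0, q=0.9, lr=1e-3, tol=1e-8, max_iter=20000):
    # Steepest descent along the q-gradient with a fixed step length
    # (slow first-order convergence; the paper adds Newton-like steps).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = np.array([q_partial(f, x, i, q) for i in range(x.size)])
        if np.linalg.norm(g) < tol:
            break
        x = x - lr * g
    return x

# Example: minimise the Rosenbrock function in two dimensions.
rosen = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(q_gradient_descent(rosen, [-1.2, 1.0]))
```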

Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method

Procedia PDF Downloads 388
2235 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
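
As a rough Python analogue of the upper-quantile step (the paper works in R with mvtnorm), the distribution of the minimum of an asymptotically Gaussian GIC vector reduces to a multivariate normal orthant probability; the limiting mean vector and covariance below are illustrative placeholders, not quantities derived in the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def min_cdf(t, mu, Sigma):
    # P(min_i X_i <= t) for X ~ N(mu, Sigma):
    # 1 - P(X_1 > t, ..., X_k > t) = 1 - P(-X <= -t * 1).
    return 1.0 - multivariate_normal.cdf(np.full(len(mu), -t),
                                         mean=-np.asarray(mu), cov=Sigma)

def min_upper_quantile(alpha, mu, Sigma, lo=-50.0, hi=50.0):
    # Upper alpha-quantile q with P(min <= q) = 1 - alpha, by root bracketing.
    return brentq(lambda t: min_cdf(t, mu, Sigma) - (1 - alpha), lo, hi)

mu = [0.0, 0.5, 1.0]            # illustrative limiting means of the GICs
Sigma = 0.5 * np.eye(3) + 0.5   # illustrative equicorrelated covariance
print(min_upper_quantile(0.05, mu, Sigma))
```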

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 76
2234 Diffusion Adaptation Strategies for Distributed Estimation Based on the Family of Affine Projection Algorithms

Authors: Mohammad Shams Esfand Abadi, Mohammad Ranjbar, Reza Ebrahimpour

Abstract:

This work presents a solution to the distributed estimation problem in a diffusion network based on the adapt-then-combine (ATC) and combine-then-adapt (CTA) selective partial update normalized least mean squares (SPU-NLMS) algorithms. We also extend this approach to the dynamic selection affine projection algorithm (DS-APA), establishing ATC-DS-APA and CTA-DS-APA. The purpose of the ATC-SPU-NLMS and CTA-SPU-NLMS algorithms is to reduce the computational complexity by updating only selected blocks of weight coefficients at every iteration. In CTA-DS-APA and ATC-DS-APA, the number of input vectors is selected dynamically. Diffusion cooperation strategies have been shown to provide good performance based on these algorithms. The good performance of the introduced algorithms is illustrated with various experimental results.
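
A minimal sketch of the ATC step with selective partial updates is given below under simplifying assumptions: the network estimates a common vector, each node updates only the coefficient block whose regressor sub-vector carries the most energy, and A is a combination matrix whose columns sum to one. The block-selection rule and all names are illustrative, not the exact published algorithm.

```python
import numpy as np

def atc_spu_nlms(U, D, A, M, n_blocks=4, mu=0.5, eps=1e-6):
    # ATC diffusion SPU-NLMS sketch for N nodes estimating a length-M vector.
    # U[k][i], D[k][i]: regressor / measurement of node k at time i.
    N, T = len(U), len(U[0])
    W = np.zeros((N, M))
    blocks = np.array_split(np.arange(M), n_blocks)
    for i in range(T):
        Psi = np.empty_like(W)
        for k in range(N):                       # adapt step
            u, d = U[k][i], D[k][i]
            e = d - u @ W[k]
            # selective part: update only the block with max regressor energy
            j = max(range(n_blocks), key=lambda b: u[blocks[b]] @ u[blocks[b]])
            psi, sel = W[k].copy(), blocks[j]
            psi[sel] += mu * e * u[sel] / (eps + u[sel] @ u[sel])
            Psi[k] = psi
        W = A.T @ Psi                            # combine: w_k = sum_l a_lk psi_l
    return W

# Example: 4 nodes, uniform combining, M = 8 taps.
rng = np.random.default_rng(0)
N, T, M = 4, 500, 8
w_true = rng.normal(size=M)
U = [rng.normal(size=(T, M)) for _ in range(N)]
D = [u @ w_true + 0.01 * rng.normal(size=T) for u in U]
A = np.full((N, N), 1.0 / N)
print(np.round(atc_spu_nlms(U, D, A, M)[0] - w_true, 2))
```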

Keywords: selective partial update, affine projection, dynamic selection, diffusion, adaptive distributed networks

Procedia PDF Downloads 694
2233 Method for Selecting and Prioritising Smart Services in Manufacturing Companies

Authors: Till Gramberg, Max Kellner, Erwin Gross

Abstract:

This paper presents a comprehensive investigation into the topic of smart services and IIoT-Platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.
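
The prioritization core of the method can be illustrated with a simple weighted scoring; the criteria names, weights and scores below are hypothetical stand-ins for the indicators the method elicits in the expert workshops, which the abstract does not enumerate.

```python
# Illustrative weighted scoring of smart service use cases (all values assumed).
criteria_weights = {"value_proposition": 0.3, "data_availability": 0.25,
                    "economic_benefit": 0.2, "strategy_fit": 0.15,
                    "customer_retention": 0.1}

use_cases = {
    "predictive_maintenance": {"value_proposition": 5, "data_availability": 3,
                               "economic_benefit": 4, "strategy_fit": 5,
                               "customer_retention": 4},
    "remote_monitoring":      {"value_proposition": 4, "data_availability": 5,
                               "economic_benefit": 3, "strategy_fit": 4,
                               "customer_retention": 3},
}

# Rank use cases by their weighted sum of criterion scores.
ranked = sorted(use_cases, reverse=True,
                key=lambda uc: sum(criteria_weights[c] * s
                                   for c, s in use_cases[uc].items()))
print(ranked)
```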

Keywords: smart services, IIoT, Industrie 4.0, IIoT platform, big data

Procedia PDF Downloads 73
2232 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks

Authors: Sunmyeng Kim

Abstract:

IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases the WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem. Data packets are delivered to the destination much faster through a relay node at a high rate than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not well suited to multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
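
A sketch of such a selection rule: since a two-hop transmission spends time on both hops, its effective rate is the harmonic combination of the two link rates, which a contention-aware score can then discount. The linear discount and the alpha weight are illustrative assumptions, not the paper's exact metric.

```python
def effective_two_hop_rate(r_sr, r_rd):
    # Time to send one unit over source->relay->destination is
    # 1/r_sr + 1/r_rd, so the effective rate is the harmonic combination.
    return 1.0 / (1.0 / r_sr + 1.0 / r_rd)

def select_relay(candidates, r_direct, alpha=0.5):
    # Pick the relay maximizing a two-hop rate discounted by its
    # contention level in [0, 1]; fall back to direct transmission
    # when no relay beats it.
    best, best_score = None, r_direct
    for rid, r_sr, r_rd, contention in candidates:
        score = effective_two_hop_rate(r_sr, r_rd) * (1 - alpha * contention)
        if score > best_score:
            best, best_score = rid, score
    return best, best_score            # best is None -> transmit directly

relays = [("R1", 54.0, 48.0, 0.2), ("R2", 54.0, 54.0, 0.7)]  # Mbps, contention
print(select_relay(relays, r_direct=11.0))
```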

Keywords: cooperative communications, MAC protocol, relay node, WLAN

Procedia PDF Downloads 323
2231 Development of Generally Applicable Intravenous to Oral Antibiotic Switch Therapy Criteria

Authors: H. Akhloufi, M. Hulscher, J. M. Prins, I. H. Van Der Sijs, D. Melles, A. Verbon

Abstract:

Background: A timely switch from intravenous (IV) to oral antibiotic therapy has many advantages, such as a reduced incidence of IV-line-related infections, a decreased hospital length of stay and less workload for healthcare professionals, with equivalent patient safety. Additionally, numerous studies have demonstrated significant decreases in costs of a timely IV-to-oral antibiotic therapy switch, while maintaining efficacy and safety. However, considerable variation in IV-to-oral antibiotic switch therapy criteria has been described in the literature. Here, we report the development of a set of IV-to-oral switch criteria that are generally applicable in all hospitals. Material/methods: A RAND-modified Delphi procedure, composed of 3 rounds, was used. This Delphi procedure is a widely used structured process to develop consensus using multiple rounds of questionnaires within a qualified panel of selected experts. The international expert panel was multidisciplinary and composed of clinical microbiologists, infectious disease consultants and clinical pharmacists. This panel of 19 experts appraised 6 major intravenous to oral antibiotic switch therapy criteria and operationalized these criteria using 41 measurable conditions extracted from the literature. The procedure to select a concise set of IV-to-oral switch criteria included 2 questionnaire rounds and a face-to-face meeting. Results: The procedure resulted in the selection of 16 measurable conditions, which operationalize 6 major intravenous to oral antibiotic switch therapy criteria. The following 6 major switch therapy criteria were selected: (1) Vital signs should be good or, if abnormal, improving. (2) Signs and symptoms related to the infection have to be resolved or improved. (3) The gastrointestinal tract has to be intact and functioning. (4) The oral route should not be compromised. (5) There must be no contraindicated infection. (6) An oral variant of the antibiotic with good bioavailability has to exist. Conclusions: This systematic stepwise method, which combined evidence and expert opinion, resulted in a feasible set of 6 major intravenous to oral antibiotic switch therapy criteria operationalized by 16 measurable conditions. This set of early antibiotic IV-to-oral switch criteria can be used in daily practice for all adult hospital patients. Future use in audits and as rules in computer-assisted decision support systems will lead to improvement of antimicrobial stewardship programs.
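
As a minimal decision-support sketch, the six major criteria translate directly into a conjunctive rule; the field names below stand in for the 16 measurable conditions, which the abstract does not enumerate.

```python
def iv_to_oral_switch_eligible(patient):
    # Hypothetical rule check: each boolean field is a stand-in for the
    # measurable conditions that operationalize the corresponding criterion.
    criteria = [
        patient["vital_signs_good_or_improving"],           # criterion 1
        patient["infection_signs_resolved_or_improved"],    # criterion 2
        patient["gi_tract_intact_and_functioning"],         # criterion 3
        patient["oral_route_not_compromised"],              # criterion 4
        patient["no_contraindicated_infection"],            # criterion 5
        patient["oral_variant_with_good_bioavailability"],  # criterion 6
    ]
    return all(criteria)

print(iv_to_oral_switch_eligible({
    "vital_signs_good_or_improving": True,
    "infection_signs_resolved_or_improved": True,
    "gi_tract_intact_and_functioning": True,
    "oral_route_not_compromised": True,
    "no_contraindicated_infection": True,
    "oral_variant_with_good_bioavailability": False,   # blocks the switch
}))
```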

Keywords: antibiotic resistance, antibiotic stewardship, intravenous to oral, switch therapy

Procedia PDF Downloads 352
2230 Lexico-Semantic and Contextual Analysis of the Concept of Joy in Modern English Fiction

Authors: Zarine Avetisyan

Abstract:

Concepts are part and parcel of everyday text and talk. Their ubiquity underlies the topicality of the given research, which aims at the semantic decomposition of concepts in general and the concept of joy in particular, as well as the study of lexico-semantic variants as means of realization of a certain concept in different "semantic settings", namely in a certain context. To achieve the stated aim, the research draws on the methods of componential and contextual analysis, studying the lexico-semantic variants (LSVs) of the concept of joy and the semantic signs embedded in those LSVs, such as the semantic sign of intensity, supporting emotions, etc., in the context of Modern English fiction.

Keywords: concept, context, lexico-semantic variant, semantic sign

Procedia PDF Downloads 348
2229 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while the kernel trick allows all the computations to be performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient as compared to several other methods. Such properties are particularly suited for remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied for clustering the point cloud. Secondly, the resulting clusters are incorporated in the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can orient the search within a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
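
The described parameter search maps directly onto a standard grid search with 5-fold cross-validation; a sketch with scikit-learn follows, using synthetic data as a stand-in for the per-cluster Vaihingen features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for per-cluster ALS features and the 4 class labels
# (ground + three roof-superstructure classes).
X, y = make_classification(n_samples=400, n_features=12, n_classes=4,
                           n_informative=6, random_state=0)

param_grid = {"svc__C": [0.1, 1, 10, 100, 1000],
              "svc__gamma": [1e-3, 1e-2, 1e-1, 1, 10]}
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```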

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 141
2228 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks

Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.

Abstract:

In the present study, RBF neural networks were used for predicting the performance and emission parameters of a biodiesel engine. Engine experiments were carried out on a 4-stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters like BTE, BSEC and exhaust gas temperature, as well as emissions from the engine, were measured. These experimental results were used for ANN modeling. RBF center initialization was done by random selection and by using clustering techniques. The network was trained using fixed and varying widths for the RBF units. It was observed that the RBF results were in good agreement with the experimental results. Networks trained using the clustering technique gave better results than those using random selection of centers, in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with a prediction accuracy of 98%, and for emissions it was 10.4% with a prediction accuracy of 80%.
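
A minimal sketch of such a network: centers placed by clustering (k-means here; the paper's keywords point to fuzzy c-means), a single shared width, and output weights solved by linear least squares. The data and the width heuristic are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    # Gaussian design matrix: Phi[n, m] = exp(-||x_n - c_m||^2 / (2 width^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, y, n_units=10, width=None):
    km = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_
    if width is None:       # heuristic: max inter-centre distance / sqrt(2M)
        dmax = max(np.linalg.norm(a - b) for a in centers for b in centers)
        width = dmax / np.sqrt(2 * n_units)
    Phi = rbf_design(X, centers, width)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights
    return centers, width, W

def predict_rbf(X, centers, width, W):
    return rbf_design(X, centers, width) @ W

# Stand-in for engine inputs (e.g. blend ratio, load) and one target.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = X.sum(axis=1) + 0.1 * rng.normal(size=200)
c, s, W = train_rbf(X, y)
print(np.abs(predict_rbf(X, c, s, W) - y).mean())
```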

Keywords: radial basis function networks, emissions, performance parameters, fuzzy c-means

Procedia PDF Downloads 548
2227 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers' needs in this strong telecom market, especially for customers who are looking to switch service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important machine learning classification topic in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1-score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
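
The described pipeline (normalization, feature selection, gradient boosting, F1/ROC-AUC evaluation) can be sketched with scikit-learn as follows; synthetic imbalanced data stands in for the Orange dataset, which is not bundled with the library.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Stand-in for the Orange telecom data: imbalanced binary churn labels.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.85],
                           random_state=0)

pipe = make_pipeline(MinMaxScaler(),                 # normalization
                     SelectKBest(f_classif, k=15),   # feature selection
                     GradientBoostingClassifier(random_state=0))
scores = cross_validate(pipe, X, y, cv=5, scoring=["f1", "roc_auc"])
print(scores["test_f1"].mean(), scores["test_roc_auc"].mean())
```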

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 126
2226 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants employing several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principle of fossil-fuel-fired thermal power plants is the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides that, there are many other technology evaluation and selection criteria for designing the most optimal water systems meeting the requirements, such as local conditions, environmental restrictions, availability and transport of electricity and other consumables, process water sources and scarcity, land use constraints, etc. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial waste water treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + waste water treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.

Keywords: thermal power plant, lignite coal, pretreatment, demineralization, electrodialysis, recycling, ash dampening

Procedia PDF Downloads 471
2225 Investigating Complement Clause Choice in Written Educated Nigerian English (ENE)

Authors: Juliet Udoudom

Abstract:

Inappropriate complement selection constitutes one of the major features of non-standard complementation in the sentence constructions produced by Nigerian users of English. This paper investigates complement clause choice in Written Educated Nigerian English (ENE) and offers some results. It aims at determining preferred and dispreferred patterns of complement clause selection in respect of verb heads in English by selected Nigerian users of English. The complementation data analyzed in this investigation were obtained from experimental tasks designed to elicit complement categories of verb, noun, adjective and prepositional heads in English. Insights from Government-Binding theory were employed in analyzing the data, which comprised responses obtained from one hundred subjects to a picture elicitation exercise, a grammaticality judgement test, and a free composition task. The findings indicate a general tendency for clausal complements (CPs) introduced by the complementizer that to be preferred by the subjects studied. Of the 235 tokens of clausal complements which occurred in our corpus, 128 (54.46%) were CPs headed by that, while whether- and if-clauses recorded 31.07% and 8.94%, respectively. The complement clause type with the lowest incidence of choice was the CP headed by the complementiser for, with a 5.53% incidence of occurrence. Further findings from the study indicate that the semantic features of the relevant embedding verb heads were not taken into consideration in the choice of the complementisers introducing the respective complement clauses; hence the that-clause was chosen to complement verbs like prefer. In addition, the dispreferred choice of the for-clause is explicable in terms of the fact that the respondents studied regard 'for' as a preposition, not a complementiser.

Keywords: complement, complement clause, complement selection, complementisers, government-binding

Procedia PDF Downloads 178
2224 The Effect of Program Type on Mutation Testing: Comparative Study

Authors: B. Falah, N. E. Abakouy

Abstract:

Due to its high computational cost, mutation testing has been neglected by researchers. Recently, many cost and mutant reduction techniques have been developed, improved, and experimented with, but few of them have tied the possibility of reducing the cost of mutation testing to the program type of the application under test. This paper is a comparative study of four operator selection techniques (mutant sampling, class-level operators, method-level operators, and all-operators selection) based on the program code type of each application under test. It aims at finding an alternative approach to reveal the effect of code type on the mutation score. The results of our experiment show that the program code type can affect the mutation score and that programs using polymorphism are best suited to be tested with mutation testing.
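
For reference, the mutation score compared across techniques is conventionally computed as the fraction of killed mutants among the non-equivalent ones; the numbers below are made up for illustration.

```python
def mutation_score(killed, generated, equivalent=0):
    # Mutation score (%) = killed / (generated - equivalent); equivalent
    # mutants behave identically to the original and can never be killed.
    return 100.0 * killed / (generated - equivalent)

print(mutation_score(killed=172, generated=200, equivalent=8))  # 89.58...
```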

Keywords: equivalent mutant, killed mutant, mutation score, mutation testing, program code type, software testing

Procedia PDF Downloads 545
2223 A Hybrid System for Boreholes Soil Sample

Authors: Ali Ulvi Uzer

Abstract:

Data reduction is an important topic in the field of pattern recognition applications. The basic concept is the reduction of multitudinous amounts of data down to the meaningful parts. The Principal Component Analysis (PCA) method is frequently used for data reduction. The Support Vector Machine (SVM) method is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data, the algorithm outputs an optimal hyperplane which categorizes new examples. This study offers a hybrid approach that uses PCA for data reduction and Support Vector Machines (SVM) for classification. In order to assess the accuracy of the suggested system, soil samples taken from two boreholes were used. The classification accuracies for this dataset were obtained using the ten-fold cross-validation method. As the results suggest, this system, which performs dimensionality reduction, enables faster recognition of the dataset, so our study results appear to be very promising.
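
A sketch of the hybrid: PCA for dimensionality reduction feeding an SVM, evaluated with ten-fold cross-validation as in the study; synthetic data stands in for the borehole soil samples.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the two-borehole soil-sample dataset.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
acc = cross_val_score(pipe, X, y, cv=10)   # ten-fold CV, as in the paper
print(acc.mean())
```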

Keywords: feature selection, sequential forward selection, support vector machines, soil sample

Procedia PDF Downloads 446
2222 Functional Mortality of Anopheles stephensi, the Urban Malaria Vector as Induced by the Sublethal Exposure to Deltamethrin

Authors: P. Aarumugam, N. Krishnamoorthy, K. Gunasekaran

Abstract:

Mosquitoes that lose a minimum of three legs, especially the hind legs, suffer a negative impact on their survival. Three-day-old unfed adult females of a laboratory strain were selected in each generation against sublethal dosages (0.004%, 0.005%, 0.007% and 0.01%) of deltamethrin for up to 40 generations. Papers impregnated with acetone were used as the control. Every fourth generation, surviving mosquitoes were observed for functional mortality. Hind leg loss was significantly (P < 0.05) higher in the treated mosquitoes than in the controls up to generation 24; thereafter, there was no significant loss. In contrast, there was no significant foreleg loss among the exposed mosquitoes. Middle leg loss was also not significant in the exposed mosquitoes except in the first generation (F1). The field strain (Chennai) did not show any significant loss of legs (fore, mid or hind) compared to the control. The selection pressure on the mosquito population drives strong natural selection to develop various adaptive mechanisms.

Keywords: Anopheles stephensi, deltamethrin, functional mortality, synthetic pyrethroids

Procedia PDF Downloads 385
2221 Selection Criteria in the Spanish Secondary Education Content and Language Integrated Learning (CLIL) Programmes and Their Effect on Code-Switching in CLIL Methodology

Authors: Dembele Dembele, Philippe

Abstract:

Several Second Language Acquisition (SLA) studies have stressed the benefits of Content and Language Integrated Learning (CLIL) and shown how CLIL students outperformed their non-CLIL counterparts in many L2 skills. However, numerous experimental CLIL programs seem to have mainly targeted above-average and rather highly motivated language learners. The need to understand the impact of the students' language proficiency on code-switching in CLIL instruction motivated this study. Therefore, determining the implications of the students' low language proficiency for CLIL methodology, as well as the frequency with which CLIL teachers use the main pedagogical functions of code-switching, seemed crucial for Spanish CLIL instruction on a large scale. In the mixed-method approach adopted, ten face-to-face interviews were conducted in nine Valencian public secondary education schools, while over 30 CLIL teachers also contributed their experience in two online survey questionnaires. The results showed the crucial role language proficiency plays in the Valencian CLIL/plurilingual selection criteria. The presence of a substantial number of low-language-proficiency students in CLIL groups, which in turn implies important methodological consequences, was another finding of the study. Indeed, though the pedagogical use of the L1 was confirmed as a widespread practice among CLIL teachers, more than half of the participants perceived that code-switching impaired attaining their CLIL lesson objectives. The dissertation therefore highlights the need for more extensive empirical research on how code-switching could prove beneficial in CLIL instruction involving low-language-proficiency students while maintaining the maximum possible exposure to the target language.

Keywords: CLIL methodology, low language proficiency, code switching, selection criteria, code-switching functions

Procedia PDF Downloads 67
2220 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes

Authors: Vincent Liu

Abstract:

Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms to predict 30-day hospital readmission rates for diabetes patients in the US using a dataset from the UCI Machine Learning Repository, with feature selection performed by the nature-inspired Grey Wolf algorithm. The baseline accuracy for the initial random forest model was 65%. After performing feature engineering, SMOTE for class balancing, and Grey Wolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and the best model, XGBoost, reached an accuracy of 95%. Applying machine learning this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at a higher risk of readmission.
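
A compact sketch of binary Grey Wolf feature selection is shown below (continuous wolf positions thresholded to feature masks, fitness = cross-validated accuracy); it is a simplified illustrative variant and omits the paper's SMOTE balancing and feature engineering.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # CV accuracy on the selected feature subset (empty subset -> 0).
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def grey_wolf_select(X, y, n_wolves=8, n_iter=20):
    d = X.shape[1]
    pos = rng.random((n_wolves, d))          # positions in [0, 1]^d
    for t in range(n_iter):
        fit = np.array([fitness(p > 0.5, X, y) for p in pos])
        leaders = pos[np.argsort(fit)[::-1][:3]]   # alpha, beta, delta
        a = 2 - 2 * t / n_iter                     # linearly decreasing
        for i in range(n_wolves):
            moves = []
            for lead in leaders:                   # pull toward each leader
                r1, r2 = rng.random(d), rng.random(d)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(lead - A * np.abs(C * lead - pos[i]))
            pos[i] = np.clip(np.mean(moves, axis=0), 0, 1)
    return max((p > 0.5 for p in pos), key=lambda m: fitness(m, X, y))

# Stand-in for the UCI diabetes readmission data.
X, y = make_classification(n_samples=400, n_features=25, n_informative=8,
                           random_state=0)
mask = grey_wolf_select(X, y)
print(mask.sum(), "features selected")
```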

Keywords: diabetes, machine learning, 30-day readmission, metaheuristic

Procedia PDF Downloads 47
2219 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. Over a period of 2 months, all measurement data (> 200 measurements/variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across the 4 different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to less than 20. A correlation coefficient > 0.8 was required for all correlations.
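
The reduction step can be sketched in a few lines: keep a feature only if its absolute correlation with every already-kept feature stays at or below the threshold, so each kept feature serves as a predictor for the dropped, correlated ones. The data here are synthetic stand-ins for the CMM measurements.

```python
import numpy as np
import pandas as pd

def reduce_by_correlation(df, threshold=0.8):
    # Greedy filter: a feature is kept only if |Pearson r| with every
    # previously kept feature does not exceed the threshold.
    corr = df.corr().abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, k] <= threshold for k in keep):
            keep.append(col)
    return keep

# Stand-in for one seat-rail variant: rows = measured parts,
# columns = geometric characteristics (five near-duplicated on purpose).
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 5))
noisy_copies = base * 1.01 + 0.01 * rng.normal(size=(200, 5))
data = pd.DataFrame(np.hstack([base, noisy_copies]),
                    columns=[f"feat_{i}" for i in range(10)])
print(reduce_by_correlation(data, threshold=0.8))   # the copies are dropped
```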

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regressions analysis

Procedia PDF Downloads 26
2218 Optimizing Design Parameters for Efficient Saturated Steam Production in Fire Tube Boilers: A Cost-Effective Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This research focuses on advancing fire tube boiler technology by systematically optimizing design parameters to achieve efficient saturated steam production. The main objective is to design a high-performance boiler with a production capacity of 2000 kg/h at a 12-bar design pressure while minimizing costs. The methodology employs iterative analysis, utilizing relevant formulas, and considers material selection and production methods. The study successfully results in a boiler operating at 85.25% efficiency, with a fuel consumption rate of 140.37 kg/h and a heat output of 1610 kW. Theoretical importance lies in balancing efficiency, safety considerations, and cost minimization. The research addresses key questions on parameter optimization, material choices, and the safety-efficiency balance, contributing valuable insights to fire tube boiler design.
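
For orientation, a direct-method efficiency check consistent with the reported figures can be written as follows; the steam and feedwater enthalpies and the fuel LHV are assumed typical values, not numbers taken from the paper.

```python
# Direct-method boiler efficiency sketch:
#   eta = m_steam * (h_steam - h_feed) / (m_fuel * LHV)
m_steam = 2000.0 / 3600.0   # kg/s, reported 2000 kg/h capacity
h_steam = 2784.0            # kJ/kg, saturated steam at ~12 bar (assumed)
h_feed = 251.0              # kJ/kg, feedwater at ~60 degC (assumed)
m_fuel = 140.37 / 3600.0    # kg/s, reported fuel consumption
LHV = 42_500.0              # kJ/kg, typical fuel oil (assumed)

eta = m_steam * (h_steam - h_feed) / (m_fuel * LHV)
print(f"efficiency ~ {eta:.1%}")   # ~85%, of the order of the reported 85.25%
```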

Keywords: safety consideration, efficiency, production methods, material selection

Procedia PDF Downloads 54
2217 Novel Bioinspired Design to Capture Smoky CO2 by Reactive Absorption with Aqueous Scrubber

Authors: J. E. O. Hernandez

Abstract:

In the next 20 years, energy production by burning fuels will increase, and so will the atmospheric concentration of CO2 and its well-known threats to life on Earth. The technologies available for capturing CO2 are still dubious, and this keeps fostering an interest in bio-inspired approaches. The leading one is the application of carbonic anhydrase (CA), a superfast biocatalyst able to convert up to one million molecules of CO2 into carbonates in water. However, natural CA underperforms when applied to real smoky CO2 in chimneys, and, so far, the efforts to create superior CAs in the lab rely on screening methods running under pristine conditions at the micro level, which are far from resembling those in chimneys. For the evolution of man-made enzymes, selection rather than screening would be ideal, but this is challenging because of the need for a suitable artificial environment that is also sustainable for our society. Herein we present the stepwise design and construction of a bioprocess (from bench scale to semi-pilot) for evolutionary selection experiments. In this bioprocess, reaction and absorption took place simultaneously at atmospheric pressure in a spray tower. The scrubbing solution was fed countercurrently by reusing municipal water pressure, and it was mainly prepared with water, carbonic anhydrase and calcium chloride. This bioprocess allowed for the enzymatic carbonation of smoky CO2, the reuse of process water and the recovery of solid carbonates without smoke cooling, pretreatment, amine solvents or compression of CO2. The average yield of solid carbonates was 0.54 g min-1, or 12-fold the amount produced in serum bottles at lab bench scale. This bioprocess could be used as a tailor-made environment for driving the selection of superior CAs. The bioprocess and its matched CA could be sustainably used to reduce global warming caused by CO2 emissions from exhausts.

Keywords: biological carbon capture and sequestration, carbonic anhydrase, directed evolution, global warming

Procedia PDF Downloads 184
2216 Identifying Applicant Potential Through Admissions Testing

Authors: Belinda Brunner

Abstract:

Objectives: Communicate common test constructs of well-known higher education admissions tests. Discuss influences on admissions test construct definition and design, and discuss research related to factors influencing success in academic study. Discuss how admissions tests can be used to identify relevant talent. Examine how admissions tests can be used to facilitate educational mobility and inform selection decisions when the prerequisite curricula are not standardized. Observations: Generally speaking, the constructs of admissions tests can be placed along a continuum from curriculum-related knowledge to more general reasoning abilities. For example, subject-specific achievement tests are more closely aligned to a prescribed curriculum, while reasoning tests are typically not associated with a specific curriculum. This session will draw on the test constructs of well-known international higher education admissions tests, such as the UK Clinical Aptitude Test (UKCAT), which is used for medicine and dentistry admissions. Conclusions: The purpose of academic admissions testing is to identify potential students with the prerequisite skill set needed to succeed in the academic environment, but how can the test construct help achieve this goal? Determination of the appropriate test construct for tests used in admissions selection decisions should be influenced by a number of factors, including the preceding academic curricula, other criteria influencing the admissions decision, and the principal purpose for testing. Attendees of this session will learn the types of aptitudes and knowledge that are assessed by higher education admissions tests and will have the opportunity to gain insight into how careful and deliberate consideration of the desired test constructs can aid in identifying potential students with the greatest likelihood of success in medical school.

Keywords: admissions, measuring success, selection, identify skills

Procedia PDF Downloads 481
2215 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, control information of a single observation scale and precision cannot be screened or given reasonable and effective corresponding weights, which reduces the convergence and reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy to optimize the weight selection of control data that is less relevant to the adjustment system is realized. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical research results. This research is expected to break away from the cliché of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry. Thus, the problem of processing multi-source remote sensing data will be solved both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 276
2214 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children

Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh

Abstract:

Autism Spectrum Disorder (ASD) is a condition related to issues with brain development that affects how a person recognises and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnoses and management of the condition. Therefore, it is crucial to develop a method that will achieve good results with high accuracy for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to attempt to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Feature selection was then used to provide fewer attributes from the ASD datasets while preserving model performance. As a result, we found that the best result was provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.

Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine

Procedia PDF Downloads 137
2213 Association of Nuclear – Mitochondrial Epistasis with BMI in Type 1 Diabetes Mellitus Patients

Authors: Agnieszka H. Ludwig-Slomczynska, Michal T. Seweryn, Przemyslaw Kapusta, Ewelina Pitera, Katarzyna Cyganek, Urszula Mantaj, Lucja Dobrucka, Ewa Wender-Ozegowska, Maciej T. Malecki, Pawel Wolkow

Abstract:

Obesity results from an imbalance between energy intake and its expenditure. Genome-Wide Association Study (GWAS) analyses have led to the discovery of only about 100 variants influencing body mass index (BMI), which explain only a small portion of its genetic variability. Analysis of gene epistasis gives a chance to discover another part. Since it was shown that interaction and communication between the nuclear and mitochondrial genomes are indispensable for normal cell function, we have looked for epistatic interactions between the two genomes to find their correlation with BMI. Methods: The analysis was performed on 366 T1DM patients using the Illumina Infinium OmniExpressExome-8 chip, followed by imputation on the Michigan Imputation Server. Only genes which influence mitochondrial functioning (listed in Human MitoCarta 2.0) were included in the analysis - variants of nuclear origin (MAF > 5%) in 1140 genes and 42 mitochondrial variants (MAF > 1%). Gene expression analysis was performed on GTEx data. Association analysis between genetic variants and BMI was performed with the use of linear mixed models as implemented in the package 'GENESIS' in R. Analysis of the association between mRNA expression and BMI was performed with the use of linear models and standard significance tests in R. Results: Among the variants involved in epistasis between the mitochondria and the nucleus, we have identified one in the mitochondrial transcription factor TFB2M (rs6701836). It interacted with mitochondrial variants localized to MT-RNR1 (p=0.0004, MAF=15%), MT-ND2 (p=0.07, MAF=5%) and MT-ND4 (p=0.01, MAF=1.1%). Analysis of the interaction between the nuclear variant rs6701836 (nuc) and rs3021088, localized to the mitochondrial MT-ND2 gene (mito), has shown that the combination of the two leads to a BMI decrease (p=0.024). Neither of the variants on its own correlates with higher BMI [p(nuc)=0.856, p(mito)=0.116]. Although rs6701836 is intronic, it influences gene expression in the thyroid (p=0.000037). rs3021088 is a missense variant that leads to an alanine-to-threonine substitution in the MT-ND2 gene, which belongs to complex I of the electron transport chain. The analysis of the influence of genetic variants on gene expression has confirmed the trend described above - the interaction of the two genes leads to a BMI decrease (p=0.0308). Each of the mRNAs on its own is associated with higher BMI (p(mito)=0.0244 and p(nuc)=0.0269). Conclusions: Our results show that nuclear-mitochondrial epistasis can influence BMI in T1DM patients. The correlation between transcription factor expression and mitochondrial genetic variants will be the subject of further analysis.
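
To make the epistasis term concrete, a toy version of the interaction test is sketched below with ordinary least squares; the study itself fits linear mixed models (the GENESIS package in R), and all genotype codings and effect sizes here are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy nucleus x mitochondria interaction test: BMI regressed on a nuclear
# genotype (0/1/2 allele dosage), a mitochondrial variant (0/1, haploid)
# and their product; "nuc:mito" is the epistasis term of interest.
rng = np.random.default_rng(1)
n = 366
nuc = rng.integers(0, 3, n)      # rs6701836 dosage (illustrative)
mito = rng.integers(0, 2, n)     # rs3021088 carrier status (illustrative)
bmi = 25 + 0.1 * nuc + 0.2 * mito - 1.5 * nuc * mito + rng.normal(0, 2, n)
df = pd.DataFrame({"bmi": bmi, "nuc": nuc, "mito": mito})

fit = smf.ols("bmi ~ nuc * mito", data=df).fit()
print(fit.pvalues["nuc:mito"])   # significance of the interaction
```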

Keywords: body mass index, epistasis, mitochondria, type 1 diabetes

Procedia PDF Downloads 167
2212 GIS Model for Sanitary Landfill Site Selection Based on Geotechnical Parameters

Authors: Hecson Christian, Joel Macwan

Abstract:

Landfill site selection in an urban area is a critical issue in the planning process. With the growth of urbanization, it has a mammoth impact on the economy, ecology, and environmental health of the region. Outsized amounts of waste are produced, and the problem grows every day. Hence, the selection of an ideal site for a sanitary landfill is a challenge for urban planners and solid waste managers. Disposal site selection is a function of many parameters. Among them all, geotechnical parameters are vital, as they relate to the surrounding open land. Moreover, accessible, safe and acceptable land is also scarce. Therefore, in this paper, geotechnical parameters are used to develop a GIS model to identify an ideal location for landfill purposes. The metropolitan city of Surat is highly populated and one of the fastest-growing urban areas in India. The research objectives are to conduct field experiments to collect data and to transfer the facts to a GIS platform to evolve a model to find an ideal location. Planners' preferences were obtained in order to use the analytic hierarchy process (AHP) to find the weight of each parameter. Integration of GIS and Multi-Criteria Decision Analysis (MCDA) techniques is applied to improve decision-making. It provides an environment for the transformation and combination of geographical data and planners' preferences. GIS performs deterministic overlay and buffer operations. MCDA methods evaluate alternatives based on the decision makers' subjective values and priorities. The research results have shown many alternative locations. Economic analysis of the selected site from an actual operations point of view is not included in this research.
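
The AHP weighting step can be sketched as the principal eigenvector of a pairwise comparison matrix, together with Saaty's consistency ratio; the criteria and judgments below are illustrative, since the abstract does not list the actual matrix.

```python
import numpy as np

def ahp_weights(P):
    # AHP priority vector: principal eigenvector of the pairwise comparison
    # matrix, normalized to sum to 1; CR = ((lambda_max - n)/(n - 1)) / RI.
    P = np.asarray(P, float)
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    CR = (vals[k].real - n) / (n - 1) / RI if RI else 0.0
    return w, CR

# Illustrative 3-criterion comparison (e.g. slope vs. soil permeability
# vs. depth to groundwater), on Saaty's 1-9 scale.
P = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, CR = ahp_weights(P)
print(w, CR)   # weights feed the GIS weighted overlay; CR < 0.1 is acceptable
```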

Keywords: GIS, AHP, MCDA, geotechnical

Procedia PDF Downloads 143
2211 A Rare Case Report of Wandering Spleen Torsion

Authors: Steven Robinson, Adriana Dager, Param Patel

Abstract:

Wandering spleen is a rare variant where there is abnormal development of the ligamentous peritoneal attachments of the spleen which normally anchor it in the left upper quadrant of the abdomen. Ligamentous abnormalities can be congenital, or acquired through pregnancy, injury, or iatrogenic causes. Absence or laxity of these ligaments allows migration of the spleen into ectopic portions of the abdomen, which is also associated with an elongated vascular pedicle. Incidence of wandering spleen is reported at less than 0.25% with a female to male ratio of approximately 6:1. The most common complication of a wandering spleen is torsion around its vascular pedicle which can lead to thrombosis and infarction. Torsion of a wandering spleen is a rare but important cause of an acute abdomen. Imaging, and specifically CT or ultrasound, is crucial in the diagnosis. We present a case of a torsed wandering spleen which was treated with splenectomy.

Keywords: wandering spleen, torsion, splenic torsion, spleen

Procedia PDF Downloads 72
2210 An Optimization Algorithm Based on Dynamic Schema with Dissimilarities and Similarities of Chromosomes

Authors: Radhwan Yousif Sedik Al-Jawadi

Abstract:

Optimization is necessary for finding appropriate solutions to a range of real-life problems. In particular, genetic (or, more generally, evolutionary) algorithms have proved very useful in solving many problems for which analytical solutions are not available. In this paper, we present an optimization algorithm called Dynamic Schema with Dissimilarity and Similarity of Chromosomes (DSDSC), which is a variant of the classical genetic algorithm. This approach constructs new chromosomes from a schema and pairs of existing ones by exploring their dissimilarities and similarities. To show the effectiveness of the algorithm, it is tested and compared with the classical GA on 15 two-dimensional optimization problems taken from the literature. We have found that, in most cases, our method is better than the classical genetic algorithm.
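
The abstract does not spell out the operator, but one plausible reading of "constructing new chromosomes from a schema and pairs of existing ones" can be sketched as follows, assuming real-valued genes in [0, 1]; this is a guess at the flavour of DSDSC, not the published operator.

```python
import random

def schema_offspring(p1, p2, rng=random.Random(0)):
    # Genes where the parents agree (similarities) form a schema and are
    # inherited as-is; genes where they differ (dissimilarities) are taken
    # from either parent or resampled at random. Purely illustrative.
    return [a if a == b else rng.choice((a, b, rng.random()))
            for a, b in zip(p1, p2)]

parent1 = [0.2, 0.5, 0.9, 0.1]
parent2 = [0.2, 0.7, 0.9, 0.4]
print(schema_offspring(parent1, parent2))  # positions 0 and 2 are the schema
```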

Keywords: chromosome injection, dynamic schema, genetic algorithm, similarity and dissimilarity

Procedia PDF Downloads 333
2209 Using New Machine Algorithms to Classify Iranian Musical Instruments According to Temporal, Spectral and Coefficient Features

Authors: Ronak Khosravi, Mahmood Abbasi Layegh, Siamak Haghipour, Avin Esmaili

Abstract:

In this paper, a study on the classification of musical woodwind instruments was carried out using a small set of features selected from a broad range of extracted ones by the sequential forward selection method. Firstly, we extract 42 features for each record in a music database of 402 sound files belonging to five different groups: flutes (end-blown and internal duct), single-reed, double-reed (exposed and capped), triple-reed and quadruple-reed instruments. Then, the sequential forward selection method is adopted to choose the best feature set in order to achieve very high classification accuracy. Two different classification techniques, support vector machines and relevance vector machines, were tested, and an accuracy of up to 96% can be achieved by using 21 temporal, spectral and coefficient features and a relevance vector machine with the Gaussian kernel function.
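
Sequential forward selection wrapped around a kernel classifier can be sketched with scikit-learn; an RBF-kernel SVM stands in for the relevance vector machine (which scikit-learn does not provide), and the data are synthetic stand-ins for the 42 extracted features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 402-file database: 42 temporal/spectral/coefficient
# features, 5 instrument families.
X, y = make_classification(n_samples=402, n_features=42, n_informative=12,
                           n_classes=5, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
sfs = SequentialFeatureSelector(svm, n_features_to_select=21,
                                direction="forward", cv=5)
sfs.fit(X, y)
acc = cross_val_score(svm, sfs.transform(X), y, cv=5).mean()
print(acc)
```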

Keywords: coefficient features, relevance vector machines, spectral features, support vector machines, temporal features

Procedia PDF Downloads 311
2208 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car

Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga

Abstract:

Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while keeping performance values at least adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working-process calculation environment. The considered engine is a 2.4-litre EURO VI diesel engine with a variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was investigated with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was accomplished with a lambda control in order to obtain maximum engine performance. The informative results were used to implement a simulation model in Matlab/Simulink, which was further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and modified with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models could be simulated quickly with their different influences on the working process. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment, but is implemented quickly and easily as an independent component in the simulation entity. By associating the engine model, the longitudinal dynamics vehicle model and different CAC models (air/air and water/air variants) in both steady-state and transient operational modes, statements are gained regarding fuel consumption, NOx emissions and power behavior. The fact that there is no longer a need for a complex engine model is very advantageous for the overall simulation volume. Besides the simulations with the mentioned demonstrator engine, several experimental investigations were also conducted on the engine test bench. Here, in particular, a standard CAC was compared with an intake-manifold-integrated CAC. Simulative as well as experimental tests showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are illustrated by a reduced pressure loss and a gain in air efficiency and CAC efficiency, which all lead to minimized emissions and fuel consumption in stationary and transient operation.

Keywords: air/water charge air cooler, co-simulation, diesel working process, EURO VI, fuel consumption

Procedia PDF Downloads 261
2207 Selection of Intensity Measure in Probabilistic Seismic Risk Assessment of a Turkish Railway Bridge

Authors: M. F. Yilmaz, B. Ö. Çağlayan

Abstract:

The fragility curve is an effective, commonly used tool to determine the earthquake performance of structural and nonstructural components, and it is also used to determine the nonlinear behavior of bridges. There are many historical bridges in the Turkish railway network, and the earthquake performance of these bridges needs to be investigated. To derive a fragility curve, intensity measures (IMs) and engineering demand parameters (EDPs) need to be determined, and the relation between IMs and EDPs needs to be derived. In this study, a typical simply supported, riveted steel girder railway bridge is studied. Fragility curves for this bridge are derived using a two-parameter lognormal distribution. Time history analyses are carried out for 60 selected real earthquake records to determine the relation between IMs and EDPs. Moreover, the efficiency, practicality, and sufficiency of three different IMs are discussed. PGA, Sa(0.2s) and Sa(1s), the most commonly used IMs for fragility curves in the literature, are taken into consideration in terms of efficiency, practicality and sufficiency.
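
A two-parameter lognormal fragility curve, P(damage | IM) = Φ((ln IM - ln θ)/β), can be fitted by maximum likelihood from per-record binary outcomes; the sketch below uses simulated PGA values as a stand-in for the 60 selected records.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_fragility(im, damaged):
    # MLE of the lognormal fragility parameters: theta (median capacity)
    # and beta (lognormal dispersion), from 0/1 damage outcomes per record.
    im, damaged = np.asarray(im, float), np.asarray(damaged, int)
    def nll(p):
        ln_theta, beta = p
        pf = norm.cdf((np.log(im) - ln_theta) / beta)
        pf = np.clip(pf, 1e-12, 1 - 1e-12)
        return -(damaged * np.log(pf) + (1 - damaged) * np.log(1 - pf)).sum()
    res = minimize(nll, x0=[np.log(np.median(im)), 0.4],
                   bounds=[(None, None), (1e-3, None)])
    return np.exp(res.x[0]), res.x[1]

# Illustrative: 60 records with PGA (in g) as the IM; outcomes simulated
# from a true curve with theta = 0.4 g and beta = 0.5.
rng = np.random.default_rng(0)
pga = rng.uniform(0.05, 1.0, 60)
p_true = norm.cdf((np.log(pga) - np.log(0.4)) / 0.5)
damaged = (rng.random(60) < p_true).astype(int)
print(fit_fragility(pga, damaged))   # should recover roughly (0.4, 0.5)
```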

Keywords: railway bridges, earthquake performance, fragility analyses, selection of intensity measures

Procedia PDF Downloads 348