Search results for: welding process selection
16592 Investigating the Effective Parameters in Determining the Type of Traffic Congestion Pricing Schemes in Urban Streets
Authors: Saeed Sayyad Hagh Shomar
Abstract:
Traffic congestion pricing – a travel demand management strategy used in urban areas to reduce traffic congestion, air pollution and noise pollution – has attracted considerable attention. Despite the encouraging findings reported for this method, determining the congestion pricing scheme best suited to a given situation remains problematic, and mistakes at this stage can lead to further complications and even to the failure of the scheme. A proper understanding of the available congestion pricing schemes and of the factors that govern their selection is therefore essential to the success of the strategy. In this study, a variety of traffic congestion pricing schemes and their components are first introduced and their practical applications discussed. Next, by analyzing and comparing their barriers, limitations and advantages, the selection criteria for pricing schemes are described. The results show that the selection of the best scheme depends on several parameters. Finally, based on an examination of these parameters, it is concluded that area-based schemes (cordon and zonal) have been more successful in avoiding traffic diversion: given the topology of most cities and the fact that congestion typically arises in city centers, area-based schemes are notably functional and appropriate. Keywords: congestion pricing, demand management, flat toll, variable toll
Procedia PDF Downloads 390
16591 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of actual best k% designs with high probability. In the second stage, optimal computing budget allocation is used to select the top m designs from that subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and sample increment, to explore their impact on performance. The results show that the choice of initial sample size and sample increment does affect the performance of the selection approach. Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
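A minimal Python sketch of the two-stage idea described in this abstract, assuming a toy simulation model in which each design's performance is estimated by averaging noisy samples; the subset fraction, initial sample size n0 and increment are illustrative parameters, and the second stage simply spends the extra budget on the screened subset rather than implementing a full optimal computing budget allocation.

```python
import numpy as np

def simulate(design_mean, n, rng):
    """Draw n noisy performance samples for one design."""
    return rng.normal(design_mean, 1.0, size=n)

def two_stage_select(true_means, n0=10, increment=50, subset_frac=0.1, m=3, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    # Stage 1: ordinal-optimization-style screening with n0 samples per design.
    stage1 = np.array([simulate(mu, n0, rng).mean() for mu in true_means])
    subset = np.argsort(stage1)[-max(m, int(subset_frac * k)):]   # keep the best-looking subset
    # Stage 2: spend the additional budget only on the screened subset.
    stage2 = {d: simulate(true_means[d], increment, rng).mean() for d in subset}
    ranked = sorted(stage2, key=stage2.get, reverse=True)
    return ranked[:m]

true_means = np.random.default_rng(1).normal(0, 1, 1000)   # 1000 alternative designs
print(two_stage_select(true_means))
```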
Procedia PDF Downloads 355
16590 A Convenient Part Library Based on SolidWorks Platform
Authors: Wei Liu, Xionghui Zhou, Qiang Niu, Yunhao Ni
Abstract:
A 3D part library is an ideal way to reuse existing designs, facilitating the modeling process and enhancing efficiency. In this paper, we implement this approach on the SolidWorks platform. The system supports type and parameter selection, 3D template driving and part assembly, and finally exports the BOM in Excel format. Experiments show that our method satisfies the requirements of die and mold designers. Keywords: part library, SolidWorks, automatic assembly, intelligent
Procedia PDF Downloads 389
16589 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC & BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparison performance metrics hold as for the IID case. The time series case is complicated by far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald, as expected.Keywords: model selection inference, generalized information criteria, post model selection, Asymptotic Theory
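A small Python sketch of the core computation described in this abstract: the upper quantile of the minimum of jointly Gaussian GIC values. The mean vector and covariance below are invented for illustration (in the paper they come from the asymptotic theory), and Monte Carlo sampling stands in for the exact multivariate Gaussian integration done with the R package "mvtnorm".

```python
import numpy as np

# Assumed asymptotic mean vector and covariance of the GIC values for four candidate models.
gic_observed = np.array([120.0, 121.5, 123.0, 125.0])
cov = np.array([[4.0, 2.0, 1.5, 1.0],
                [2.0, 4.0, 2.0, 1.5],
                [1.5, 2.0, 4.0, 2.0],
                [1.0, 1.5, 2.0, 4.0]])

rng = np.random.default_rng(0)
draws = rng.multivariate_normal(gic_observed, cov, size=200_000)
minima = draws.min(axis=1)                    # Monte Carlo stand-in for the exact integral
q95 = np.quantile(minima, 0.95)               # upper quantile of the minimum GIC
retained = np.flatnonzero(gic_observed <= q95) # models not ruled out as 'best'
print(f"95% band edge: {q95:.2f}; models inside the confidence envelope: {retained}")
```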
Procedia PDF Downloads 86
16588 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a difficult choice of transport mode, since reducing logistics cost and reducing emissions tend to work against each other. Another important aspect of the transportation decision is lead time variability, which is rarely considered in transport mode selection problems. In this study, we therefore provide a comprehensive analytical model for transport mode selection under an emissions constraint, and we extend the work by analysing the effect of lead time variability on the selection through a sensitivity analysis. To account for lead time variability, two identically normally distributed random variables are incorporated in the model: unit lead time variability and lead time demand variability. The study addresses the following questions: How are transport mode selection decisions affected by lead time variability? How does lead time variability affect total supply chain cost under carbon emissions? To answer them, a total transportation cost function is developed that includes unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stock outs due to lead time variability. A set of modes is available for transport between nodes; in this paper we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and the emissions level of each transport mode are treated as deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if a switch is made from a lower-emissions transport mode to a higher-emissions mode in order to reduce penalty cost. A numerical analysis is provided to study the effectiveness of the mathematical model. We found that the chance of a stock out during lead time increases with higher variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning the chance of a stock out is essentially zero, but it carries higher holding and emissions costs. Therefore, air transport is selected only when there is an emergency order that must avoid penalty cost; otherwise, rail and road are the preferred modes of transportation. This paper thus contributes to the literature with a novel approach to transport mode selection under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, the full truck load strategy, and demand consolidation. Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
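A toy Python sketch of the kind of total-cost comparison described above, assuming a simple expected-cost structure per mode (purchasing + transportation + emissions + lead-time holding + a rough stock-out penalty term); all unit costs, lead-time parameters and the emissions cap are invented for illustration and do not reproduce the paper's model.

```python
import math

# Hypothetical per-mode parameters: unit transport cost, emissions per unit,
# mean lead time (days) and lead-time standard deviation (days).
modes = {
    "air":   dict(transport=9.0, emissions=6.0, lt_mean=1,  lt_sd=0.2),
    "road":  dict(transport=3.0, emissions=2.0, lt_mean=4,  lt_sd=1.0),
    "rail":  dict(transport=2.0, emissions=1.2, lt_mean=6,  lt_sd=1.5),
    "water": dict(transport=1.0, emissions=0.8, lt_mean=14, lt_sd=3.0),
}

DEMAND_PER_DAY, DEMAND_SD = 100, 20          # demand during lead time ~ Normal
UNIT_COST, HOLD_RATE, CO2_PRICE, PENALTY = 20.0, 0.05, 0.08, 15.0
EMISSIONS_CAP = 300.0                        # assumed cap per day of demand

def total_cost(p, z=1.65):                   # z ~ 95% service level
    lt_demand_sd = math.sqrt(p["lt_mean"] * DEMAND_SD**2 +
                             (DEMAND_PER_DAY * p["lt_sd"])**2)
    safety_stock = z * lt_demand_sd
    expected_short = 0.05 * lt_demand_sd     # crude stand-in for the loss function
    return (DEMAND_PER_DAY * (UNIT_COST + p["transport"])
            + CO2_PRICE * DEMAND_PER_DAY * p["emissions"]
            + HOLD_RATE * UNIT_COST * (DEMAND_PER_DAY * p["lt_mean"] / 2 + safety_stock)
            + PENALTY * expected_short)

feasible = {m: total_cost(p) for m, p in modes.items()
            if DEMAND_PER_DAY * p["emissions"] <= EMISSIONS_CAP}
print(min(feasible, key=feasible.get), feasible)
```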
Procedia PDF Downloads 434
16587 Coal Preparation Plant: Technology Overview and New Adaptations
Authors: Amit Kumar Sinha
Abstract:
A coal preparation plant typically operates several beneficiation circuits to process the individual size fractions of coal obtained from the mine, so that the targeted overall plant efficiency, in terms of yield and ash, is achieved. Conventional coal beneficiation plants in India and overseas generally use two processing routes: coarse beneficiation, with treatment in dense medium cyclones or baths, and fines beneficiation, with treatment in flotation cells. This paper addresses the proven application of an intermediate circuit, alongside the coarse and fines circuits, in the Jamadoba New Coal Preparation Plant (capacity 2 Mt/y), where -0.5 mm +0.25 mm particles are treated in a reflux classifier. Previously, this size fraction was treated directly in the flotation cells, which had operational and metallurgical limitations that are discussed briefly in this paper. The paper also details the test work performed on representative samples from the TSL coal washeries to determine the top size for the intermediate and fines circuits, and discusses the overlapping nature of the intermediate circuit and why it is well suited to beneficiating misplaced particles from both the coarse and fines circuits. In addition, the paper compares the separation efficiency (Ep) of various intermediate circuit process equipment and seeks to validate the use of the reflux classifier over a fine coal DMC or spirals. Finally, an overview of a modern coal preparation plant treating Indian coal, especially Washery Grade IV coal, is given with reference to the Jamadoba New Coal Preparation Plant, commissioned in 2018, covering the basis of equipment selection and plant profile, the application of the reflux classifier in the intermediate circuit, and the process design criteria. Keywords: intermediate circuit, overlapping process, reflux classifier
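A short Python sketch of the separation efficiency (Ep, écart probable) comparison mentioned above, computed as half the density spread between the 25% and 75% points of a partition curve; the partition data below are invented for illustration and read off with simple linear interpolation.

```python
import numpy as np

# Illustrative partition-curve data for one separator: relative density of each
# fraction vs. the percentage of that fraction reporting to the sinks product.
density = np.array([1.30, 1.40, 1.50, 1.60, 1.70, 1.80])
partition_to_sinks = np.array([2.0, 10.0, 35.0, 75.0, 93.0, 99.0])   # percent

def ecart_probable(density, partition):
    """Ep = (d75 - d25) / 2, densities read off the partition curve at 75% and 25%."""
    d25 = np.interp(25.0, partition, density)
    d75 = np.interp(75.0, partition, density)
    return (d75 - d25) / 2.0

print(f"Ep = {ecart_probable(density, partition_to_sinks):.3f}")   # lower Ep = sharper separation
```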
Procedia PDF Downloads 136
16586 Learning to Teach in Large Classrooms: Training Faculty Members from Milano Bicocca University, from Didactic Transposition to Communication Skills
Authors: E. Nigris, F. Passalacqua
Abstract:
In line with recent research in the field of faculty development, this paper presents a pilot training programme carried out at the University of Milano-Bicocca to improve the teaching skills of faculty members. A total of 57 professors (both full and associate professors) were trained during the pilot programme across three editions of a workshop focused on promoting skills for teaching large classes. The study considers: 1) the theoretical framework of the programme, which combines the recent tradition of professional development with research on the in-service training of school teachers; 2) the structure and content of the training programme, organized as a 12-hour full-immersion workshop plus individual consultations; 3) the educational specificity of the programme, which is based on the relation between 'general didactics' (active learning methodologies; didactic communication) and 'disciplinary didactics' (didactic transposition and reconstruction); 4) results on the impact of the training programme, relating both to the workshop and to the individual consultations. The study provides insights mainly on two levels of the training programme's impact ('behaviour change' and 'transfer'), and learning outcomes are therefore evaluated with different instruments: a questionnaire filled out by all 57 participants, 12 in-depth interviews, 3 focus groups, and transcriptions of workshop activities. Data analysis follows a descriptive qualitative approach and is conducted through thematic analysis of the transcripts using analytical categories derived principally from didactic transposition theory. The results show that the training programme effectively developed three major skills covering different stages of the didactic transposition process: a) content selection, i.e. a more accurate selection and reduction of the 'scholarly knowledge', corresponding to the first stage of the didactic transposition process; b) consideration of students' prior knowledge and misconceptions in lesson design, in order to connect the 'scholarly knowledge' to the 'knowledge to be taught' (second stage); and c) the way of asking questions and managing discussion in large classrooms, in line with the transformation of the 'knowledge to be taught' into 'taught knowledge' (third stage). Keywords: didactic communication, didactic transposition, instructional development, teaching large classroom
Procedia PDF Downloads 138
16585 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition
Authors: Anes Enakoa, Yawei Liang
Abstract:
Question Answering (QA) is one of the most important and demanding tasks in the field of Natural Language Processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one is correct. Answer selection, one of the main components of QA, is concerned with selecting the best answer from the candidates suggested by the system. The selection process can be very challenging, especially in Arabic, due to the particularities of the language. To address this challenge, an approach is proposed for answering questions with multiple answer choices in Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments has been conducted for performance evaluation, and the proposed method reached an overall accuracy of 67.5% with a C@1 score of 80.46%. The results are promising and demonstrate that the proposed method is effective for the TE recognition task. Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment
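A minimal Python sketch of the SVM-based entailment scoring idea described above, assuming only two toy lexical features (word overlap and length ratio) and a handful of invented English training pairs; the paper's actual system uses Arabic text and richer lexical, semantic and syntactic features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def entailment_features(text, hypothesis):
    """Placeholder lexical features; semantic and syntactic features are omitted."""
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    overlap = len(t & h) / max(len(h), 1)          # word overlap (lexical)
    length_ratio = len(h) / max(len(t), 1)
    return [overlap, length_ratio]

# Hypothetical training pairs: (text T, hypothesis H, does T entail H?)
pairs = [
    ("the capital of France is Paris", "Paris is the capital of France", 1),
    ("the capital of France is Paris", "Lyon is the capital of France", 0),
    ("water boils at 100 degrees at sea level", "water boils at 100 degrees", 1),
    ("water boils at 100 degrees", "water freezes at 100 degrees", 0),
]
X = np.array([entailment_features(t, h) for t, h, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Answer selection: the candidate whose hypothesis is most confidently entailed wins.
candidates = ["Paris is the capital of France", "Lyon is the capital of France"]
scores = clf.decision_function(
    np.array([entailment_features("the capital of France is Paris", c) for c in candidates]))
print(candidates[int(np.argmax(scores))])
```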
Procedia PDF Downloads 145
16584 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality reduce the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases, providing supervised learning in the form of classification, where models are designed to describe important data classes and the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often strongly influenced by noisy and undesirable features in real application data sets, and the inherent nature of a data set can mask its quality and leave few practical approaches for analysis. To our knowledge, we present for the first time an approach for investigating the structure and quality of data sets through a targeted analysis that localizes noisy and irrelevant features. Machine learning relies heavily on feature selection as a pre-processing step, in which a small subset of features is chosen from the full feature set, reducing the search space according to an evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection; features are selected on the basis of how frequently they occur in the chosen chromosomes. Sample data sets are used to demonstrate the proposed idea. The proposed method achieves an average accuracy of about 95% across different data sets, and the experimental results illustrate that the proposed algorithm increases the accuracy of prediction for different diseases. Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
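A compact Python sketch of wrapper-based feature selection with a genetic algorithm and a KNN classifier as the wrapped evaluator; the dataset (scikit-learn's breast cancer data) and all GA settings are illustrative stand-ins, not the paper's actual configuration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_features, pop_size, n_gen = X.shape[1], 20, 15

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)           # wrapped (external) classifier
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_features))   # chromosomes = feature masks
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    probs = scores / scores.sum()                        # roulette-style parent selection
    parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
    cut = rng.integers(1, n_features)                    # single-point crossover
    children = np.vstack([np.hstack([parents[i, :cut], parents[(i + 1) % pop_size, cut:]])
                          for i in range(pop_size)])
    flip = rng.random(children.shape) < 0.02             # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features (best chromosome):", np.flatnonzero(best))
```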
Procedia PDF Downloads 316
16583 Investigating the Glass Ceiling Phenomenon: An Empirical Study of Glass Ceiling's Effects on Selection, Promotion and Female Effectiveness
Authors: Sharjeel Saleem
Abstract:
The glass ceiling has been a burning issue for many researchers. In this research, we examine the gender composition of the board of directors (BOD), training and development, workforce diversity, positive attitudes towards women, and employee acts as antecedents of the glass ceiling. Furthermore, we also examine the effects of the glass ceiling on the likelihood of female selection and promotion and on female effectiveness. Multiple linear regression conducted on data drawn from different public and private sector organizations supports our hypotheses. The research, however, is limited to Faisalabad city, and only females from the minority group are targeted here. Keywords: glass ceiling, stereotype attitudes, female effectiveness
Procedia PDF Downloads 291
16582 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With advances in technology, the expression of opinions has largely shifted to the digital world. Politics, one of the hottest topics in opinion mining research, is combined here with behaviour analysis for determining political affiliation from text, which constitutes the subject of this paper. This study aims to classify news and blog texts as either Republican or Democrat using the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature "function", an aggregate feature of the linguistic category, turned out to be the most discriminating of the 68 features, classifying articles as Republican or Democrat with 81% accuracy by itself. Keywords: feature selection, LIWC, machine learning, politics
Procedia PDF Downloads 382
16581 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms
Authors: Imad Zeyad Ramadan
Abstract:
In this paper, a genetic algorithm was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint in order to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio that includes companies from all three sectors. This indicates that the GA reduced the risk to the investor, as it chose some companies with positive systematic risk (moving with the market) and some with negative systematic risk (moving against the market). Keywords: optimization, genetic algorithm, portfolio selection, Treynor method
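A compact Python sketch of the underlying objective, assuming hypothetical expected returns, betas and risk-free rate for five companies; the search below is a bare-bones evolutionary loop (keep elites, mutate normalized weights) rather than the authors' full GA, and the budget constraint is enforced by normalizing weights to sum to one.

```python
import numpy as np

rng = np.random.default_rng(42)
expected_return = np.array([0.12, 0.09, 0.15, 0.07, 0.11])    # hypothetical annual returns
beta            = np.array([1.20, 0.80, 1.50, -0.30, 0.95])   # hypothetical betas
RISK_FREE = 0.04                                               # assumed risk-free rate

def treynor(weights):
    portfolio_return = weights @ expected_return
    portfolio_beta = weights @ beta
    return (portfolio_return - RISK_FREE) / portfolio_beta if portfolio_beta > 0 else -np.inf

def normalize(w):                     # budget constraint: non-negative weights summing to 1
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))

pop = np.array([normalize(rng.random(5)) for _ in range(30)])
for _ in range(200):                  # bare-bones evolutionary loop: keep elites, mutate them
    fitness = np.array([treynor(w) for w in pop])
    elite = pop[np.argsort(fitness)[-10:]]
    children = [normalize(parent + rng.normal(0, 0.05, 5)) for parent in elite for _ in range(2)]
    pop = np.vstack([elite, children])

best = pop[np.argmax([treynor(w) for w in pop])]
print("weights:", np.round(best, 3), "Treynor ratio:", round(float(treynor(best)), 4))
```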
Procedia PDF Downloads 449
16580 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process
Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Changes
Abstract:
Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium in emerging electric and hybrid vehicles. The conventional route for processing the mineral requires the unavoidable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processing. The selection of an appropriate roasting reagent is key to the success of the process and to overall lithium recovery. Several studies have been conducted to identify good reagents for process efficiency, leading to sulfation, alkaline treatment, chlorination, fluorination, and carbonizing as the methods of lithium recovery from the mineral. HSC Chemistry is thermochemical software that can be used to model the feasibility of metallurgical processes and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the behaviour of the various reagents reported in the literature for spodumene roasting up to 1200°C. The simulations indicated that all the reagents used for sulfation and alkaline roasting are feasible in the direction of lithium salt production. Chlorination was feasible only when Cl2 and CaCl2 were used as chlorination agents, but not with NaCl or KCl. Depending on the lithium salt formed, carbonizing and fluorination were either spontaneous or non-spontaneous over the temperature range investigated. The HSC software was further used to simulate and predict promising reagents that may be equally effective for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers. Keywords: thermochemical modelling, HSC chemistry software, lithium, spodumene, roasting
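A small Python sketch of the feasibility screening that thermochemical tools such as HSC perform, based on the Gibbs free energy ΔG = ΔH − TΔS; the enthalpy and entropy values below are placeholders, not HSC database values, so the spontaneity cut-off shown is purely illustrative.

```python
# Feasibility screening of a roasting reaction via Gibbs free energy, dG = dH - T*dS.
DH_REACTION = -110_000.0   # J/mol, assumed reaction enthalpy
DS_REACTION = -90.0        # J/(mol*K), assumed reaction entropy change

def gibbs_energy(T_kelvin, dH=DH_REACTION, dS=DS_REACTION):
    return dH - T_kelvin * dS

for T_c in range(200, 1201, 200):
    dG = gibbs_energy(T_c + 273.15)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"{T_c:>5} degC  dG = {dG/1000:8.1f} kJ/mol  -> {verdict}")
```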
Procedia PDF Downloads 158
16579 Exploring Weld Rejection Rate Limits and Tracers Effects in Construction Projects
Authors: Abdalaziz M. Alsalhabi, Loai M. Alowa
Abstract:
This paper investigates Weld Rejection Rate (WRR) limits and tracer effects in construction projects, with a specific focus on a gas plant project, a mega-project executed by Saudi Aramco (SA) in Saudi Arabia. The study comprises a comprehensive examination of the factors impacting WRR limits. It begins by comparing the company's practices with ASME standards, followed by an in-depth analysis of both the weekly and the cumulative historical WRR data of the projects, an evaluation of the Radiographic Testing (RT) reports for rejected welds, and a proposal of mitigation methods to eliminate future rejections. The study also reveals the causes of fluctuation in WRR data and benchmarks them against industry practice. Furthermore, a case study was conducted to explore the impact of tracers on WRR, providing insight into their influence on the welding process. This paper aims to achieve three primary objectives. Firstly, it seeks to validate the existing practice on WRR limits and advocate for its inclusion in the relevant international industry standards. Secondly, it aims to validate the effectiveness of the WRR formula that incorporates tracer effects, ensuring its reliability in assessing weld quality. Lastly, the study identifies opportunities for process improvement in WRR control, with the ultimate goal of enhancing project processes and ensuring the integrity, safety, and efficiency of the constructed assets. Keywords: weld rejection rate, weld repair rate in joint and linear basis, tracers effects, construction projects
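A small Python sketch of weld rejection rate on a joint basis and a linear basis, the two bases named in the keywords; the paper's tracer-adjusted formula is not given in the abstract, so the inputs and definitions below (rejected joints over total joints, and rejected length prorated by rejected RT films) are assumptions for illustration only.

```python
# Illustrative WRR calculation on a joint basis and a linear (length) basis.
welds = [
    # (joint id, weld length in inches, RT films shot, films rejected)
    ("J-001", 12.0, 4, 0),
    ("J-002", 18.0, 6, 1),
    ("J-003", 12.0, 4, 0),
    ("J-004", 24.0, 8, 3),
]

rejected_joints = sum(1 for _, _, _, rej in welds if rej > 0)
wrr_joint = 100.0 * rejected_joints / len(welds)

total_inches = sum(length for _, length, _, _ in welds)
rejected_inches = sum(length * rej / films for _, length, films, rej in welds)
wrr_linear = 100.0 * rejected_inches / total_inches

print(f"WRR (joint basis):  {wrr_joint:.1f}%")
print(f"WRR (linear basis): {wrr_linear:.1f}%")
```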
Procedia PDF Downloads 40
16578 Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects
Authors: O. Badagadze, G. Sirbiladze, I. Khutsishvili
Abstract:
This work proposes a decision support methodology for credit risk minimization in the selection of investment projects. The methodology evaluates projects in two stages. A preliminary selection of projects with minor credit risks is made using the Expertons Method; the second stage then ranks the chosen projects using the Possibilistic Discrimination Analysis Method, a new modification of the well-known Method of Fuzzy Discrimination Analysis. Keywords: expert valuations, expertons, investment project risks, positive and negative discriminations, possibility distribution
Procedia PDF Downloads 676
16577 The Influence of Gossip on the Absorption Probabilities in Moran Process
Authors: Jurica Hižak
Abstract:
Getting to know the agents, i.e., identifying the free riders in a population, can be considered one of the main challenges in establishing cooperation. An ordinary memory-one agent such as Tit-for-tat may learn “who is who” in the population through direct interactions. Past experiences serve them as a landmark to know with whom to cooperate and against whom to retaliate in the next encounter. However, this kind of learning is risky and expensive. A cheaper and less painful way to detect free riders may be achieved by gossiping. For this reason, as part of this research, a special type of Tit-for-tat agent was designed – a “Gossip-Tit-for-tat” agent that can share data with other agents of its kind. The performances of both strategies, ordinary Tit-for-tat and Gossip-Tit-for-tat, against Always-defect have been compared in the finite-game framework of the Iterated Prisoner’s Dilemma via the Moran process. Agents were able to move in a random-walk fashion, and they were programmed to play Prisoner’s Dilemma each time they met. Moreover, at each step, one randomly selected individual was eliminated, and one individual was reproduced in accordance with the Moran process of selection. In this way, the size of the population always remained the same. Agents were selected for reproduction via the roulette wheel rule, i.e., proportionally to the relative fitness of the strategy. The absorption probability was calculated after the population had been absorbed completely by cooperators, which means that all the states have been occupied and all of the transition probabilities have been determined. It was shown that gossip increases absorption probabilities and therefore enhances the evolution of cooperation in the population.Keywords: cooperation, gossip, indirect reciprocity, Moran process, prisoner’s dilemma, tit-for-tat
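A minimal Python sketch of a Moran-style birth-death simulation of the kind described above, assuming a well-mixed population of cooperators and defectors with fixed payoff-derived fitnesses; the random-walk movement, memory and gossip mechanics of the actual study are omitted, so this only illustrates roulette-wheel reproduction and the estimation of an absorption (fixation) probability.

```python
import random

def moran_fixation(pop_size=50, initial_coop=25, fit_coop=1.05, fit_def=1.0,
                   trials=2000, seed=1):
    """Estimate the probability that cooperators absorb the whole population."""
    rng = random.Random(seed)
    fixations = 0
    for _ in range(trials):
        coop = initial_coop
        while 0 < coop < pop_size:
            # roulette-wheel selection for reproduction, proportional to fitness
            total = coop * fit_coop + (pop_size - coop) * fit_def
            born_coop = rng.random() < coop * fit_coop / total
            # uniform random death keeps the population size constant
            dies_coop = rng.random() < coop / pop_size
            coop += born_coop - dies_coop
        fixations += (coop == pop_size)
    return fixations / trials

print("estimated absorption probability for cooperators:", moran_fixation())
```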
Procedia PDF Downloads 97
16576 Lessons from Vernacular Architecture for Lightweight Construction
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
By reducing the gravity load of the structural and non-structural components, lightweight construction can be achieved along with improved efficiency and functional performance. The advantages of lightweight construction can be examined on two levels. The first is the mass reduction of the load-bearing structure, which increases the usable internal space; the second is the mass reduction of the building as a whole, which in turn decreases the effects of seismic loads. To achieve this goal, the essential specifications of the building materials and the optimum load-bearing geometry of the structural systems and elements have to be considered, so the selection of lightweight materials, particularly lightweight aggregate for building components, is the first step of lightweight construction. In the next step, prominent examples of Iran's traditional architecture are selected and the way these works were refined is analysed from the viewpoints of structural efficiency and lightweighting, and practical methods of lightweight construction are extracted from them. The optimum design of the load-bearing geometry of the structural system has to be considered not only in the individual structural elements but also in their composition; the selection of dimensions, proportions, forms and optimum orientations can lead to maximum material efficiency in carrying loads and stresses. Keywords: gravity load, light-weighting structural system, load bearing geometry, seismic behavior
Procedia PDF Downloads 543
16575 Challenges in Employment and Adjustment of Academic Expatriates Based in Higher Education Institutions in the KwaZulu-Natal Province, South Africa
Authors: Thulile Ndou
Abstract:
The purpose of this study was to examine the challenges encountered in attracting and recruiting academic expatriates, who in turn encounter their own obstacles in adjusting to and settling into their host country, host academic institutions and host communities. The non-existence of literature on the attraction, placement and management of academic expatriates in the South African context has been acknowledged. Moreover, Higher Education Institutions in South Africa have voiced concerns about the delayed and prolonged recruitment and selection processes experienced when employing academic expatriates. Once employed, academic expatriates should be supported and acquainted with their surroundings and the local communities, and assisted in establishing working relations with colleagues in order to facilitate their adjustment and integration; an employer should therefore play a critical role in facilitating the adjustment of academic expatriates. This mixed methods study was located in four Higher Education Institutions in the KwaZulu-Natal province of South Africa. An explanatory sequential design was used, the chief merit of this approach being that it employs both quantitative and qualitative techniques of inquiry. The study therefore examined and interrogated its subject from a multiplicity of quantitative and qualitative vantage points, and combining the strengths of both techniques delivered a richer and more durable understanding of the subject. A 5-point Likert scale questionnaire was used to collect quantitative data on interaction adjustment, general adjustment and work adjustment from academic expatriates; one hundred and forty-two (142) academic expatriates participated in the quantitative study. Qualitative data on the employment process and the support offered to academic expatriates were collected through a structured questionnaire and semi-structured interviews; a total of 48 respondents, including line managers, human resources practitioners and academic expatriates, participated in the qualitative study. Independent t-tests, ANOVA and descriptive statistics were used to analyse and interpret the quantitative data, and thematic analysis was used for the qualitative data. The qualitative results revealed that academic talent is sourced from outside the borders of the country because of the academic skills shortage in almost all disciplines, especially those associated with Science, Engineering and Accounting; however, delays in the work permit application process made it difficult to finalise recruitment and selection on time. Furthermore, the quantitative results revealed that academic expatriates experience general and interaction adjustment challenges associated with the use of the local language and understanding of the local culture, although female academic expatriates were found to be better adjusted in these two areas than their male counterparts. Moreover, significant mean differences were found between institutions, suggesting that academic expatriates based in rural areas experienced adjustment challenges differently from those based in urban areas. The study pointed to the need for policy revisions in the areas of immigration, human resources and academic administration. Keywords: academic expatriates, recruitment and selection, interaction and general adjustment, work adjustment
Procedia PDF Downloads 306
16574 Solution of Logistics Center Selection Problem Using the Axiomatic Design Method
Authors: Fulya Zaralı, Harun Resit Yazgan
Abstract:
Logistics centers are areas in which various businesses carry out national and international logistics operations and logistics-related activities. They are of key importance in linking transport flows and transport system operations, so where these centers are positioned is critical to their effectiveness, efficiency and expected performance. In this study, the location selection problem for positioning a logistics center is discussed. Alternative centers are evaluated according to certain criteria, and the most appropriate center is identified using the axiomatic design method. Keywords: axiomatic design, logistic center, facility location, information systems
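A short Python sketch of how the information axiom of axiomatic design can rank location alternatives: each criterion contributes an information content I = log2(1/p), where p is the probability that the alternative satisfies the design range, and the alternative with the lowest total is preferred. All ranges and criteria below are invented for illustration and are not the paper's data.

```python
import math

def probability_of_success(design_lo, design_hi, system_lo, system_hi):
    """Overlap of the design range (requirement) with the system range (alternative)."""
    overlap = max(0.0, min(design_hi, system_hi) - max(design_lo, system_lo))
    system_span = system_hi - system_lo
    return overlap / system_span if system_span > 0 else 0.0

def information_content(p):
    return math.inf if p == 0 else math.log2(1.0 / p)

# Design ranges per criterion, and each alternative's system range per criterion.
design_ranges = {"distance_km": (0, 40), "land_cost": (0, 60)}
alternatives = {
    "site A": {"distance_km": (10, 50), "land_cost": (30, 70)},
    "site B": {"distance_km": (20, 80), "land_cost": (10, 50)},
}

totals = {}
for name, criteria in alternatives.items():
    totals[name] = sum(
        information_content(probability_of_success(*design_ranges[c], lo, hi))
        for c, (lo, hi) in criteria.items())
print(min(totals, key=totals.get), totals)   # lowest total information content wins
```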
Procedia PDF Downloads 348
16573 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index
Authors: Todd Zhou, Mikhail Yurochkin
Abstract:
Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field calls for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. Exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore", as the evaluation criterion. This score is created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias is explored through an equality-of-opportunity violation measurement, and machine learning model performance is improved through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets. Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index
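A minimal Python sketch of the general idea of combining a labeled-source signal with a predictive-entropy (randomness) signal via the harmonic mean; the exact definition of the paper's CombinedScore is not given in the abstract, so the formula, the two hypothetical candidate models and their probability outputs below are all assumptions for illustration.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean entropy of predicted class probabilities on unlabeled OOD data."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=1)))

def combined_score(source_accuracy, ood_probs, n_classes):
    """Hypothetical harmonic-mean combination of a source-accuracy signal and an
    OOD certainty signal; not the paper's exact CombinedScore definition."""
    certainty = 1.0 - predictive_entropy(ood_probs) / np.log(n_classes)  # 1 = confident
    return 2 * source_accuracy * certainty / (source_accuracy + certainty + 1e-12)

rng = np.random.default_rng(0)
# Two hypothetical candidate models: source-domain accuracy + their OOD probabilities.
model_a = (0.91, rng.dirichlet(alpha=[8, 1, 1], size=500))   # confident on OOD data
model_b = (0.94, rng.dirichlet(alpha=[2, 2, 2], size=500))   # uncertain on OOD data
scores = {"model_a": combined_score(*model_a, 3), "model_b": combined_score(*model_b, 3)}
print(max(scores, key=scores.get), scores)
```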
Procedia PDF Downloads 124
16572 Open Forging of Cylindrical Blanks Subjected to Lateral Instability
Authors: A. H. Elkholy, D. M. Almutairi
Abstract:
The successful and efficient execution of a forging process depends on the correct analysis of the loading and metal flow of the blanks. This paper investigates the Upper Bound Technique (UBT) and its application in the analysis of the open forging process when blank bulging is possible. The UBT is one of the energy-rate minimization methods for solving metal forming processes, based on the upper bound theorem: the kinematically admissible velocity field is obtained by minimizing the total forging energy rate. A computer program was developed in this research to implement the UBT. The significant advantages of this method are its speed of execution, while maintaining a fairly high degree of accuracy, and its wide prediction capability. The information from this analysis is useful for the design of forging processes and dies. Results for the prediction of forging loads and stresses, metal flow and surface profiles, with the associated benefits in terms of press selection and blank preform design, are outlined in some detail. The obtained predictions are ready for comparison with both laboratory and industrial results. Keywords: forging, upper bound technique, metal forming, forging energy, forging die/platen
Procedia PDF Downloads 293
16571 Weighted Rank Regression with Adaptive Penalty Function
Authors: Kang-Mo Jung
Abstract:
The use of regularization in statistical methods has become popular, and the least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function that regulates the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points; by adopting a weighted loss function, the proposed method is also robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function provides good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in R, with the tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulations show that the proposed estimator is effective for analyzing real and contaminated data sets. Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression
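A rough Python sketch of the objective this abstract describes: a leverage-weighted pairwise-difference (rank) loss plus an adaptive per-variable L1 penalty. The observation weights, the adaptive-weight formula, the fixed tuning parameter and the use of a generic Nelder-Mead optimizer are all simplifying assumptions; the paper's algorithm and BIC tuning are not reproduced here.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 60, 4
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)       # heavy-tailed noise

# Leverage-based observation weights (downweight high-leverage points).
H = X @ np.linalg.solve(X.T @ X, X.T)
w_obs = 1.0 / (1.0 + np.diag(H))
pairs = list(combinations(range(n), 2))

beta_init = np.linalg.lstsq(X, y, rcond=None)[0]        # pilot estimate
lam = 0.5                                               # fixed for illustration (BIC omitted)
adaptive_w = lam / (np.abs(beta_init) + 1e-6)           # adaptive per-variable penalty

def objective(beta):
    resid = y - X @ beta
    rank_loss = sum(w_obs[i] * w_obs[j] * abs(resid[i] - resid[j]) for i, j in pairs)
    return rank_loss / len(pairs) + np.sum(adaptive_w * np.abs(beta))

fit = minimize(objective, beta_init, method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 5000})
print("estimated coefficients:", np.round(fit.x, 3))
```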
Procedia PDF Downloads 474
16570 Firm Level Productivity Heterogeneity and Export Behavior: Evidence from UK
Authors: Umut Erksan Senalp
Abstract:
The aim of this study is to examine the link between firm-level productivity heterogeneity and a firm's decision to export. We test the self-selection hypothesis, which suggests that only the more productive firms self-select into export markets. We analyse the UK manufacturing sector using firm-level data for the period 2003-2011. Although our preliminary results suggest that exporters outperform non-exporters when all manufacturing industries are pooled, when each industry is examined individually we find that the self-selection hypothesis does not hold in every industry. Keywords: total factor productivity, firm heterogeneity, international trade, decision to export
Procedia PDF Downloads 361
16569 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites
Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner
Abstract:
Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare various decontamination methods, including bioremediation, soil vapor extraction or excavation, and off-site disposal. However, to the best of the authors' knowledge, limited information is available in the literature on a sustainability tool that could help select the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions such as environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data were collected from previous LCA studies of site remediation technologies; this step identified knowledge gaps and limitations within project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared include bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The modelling software SimaPro is being used to complete the LCA study, and a sensitivity analysis of the LCA results will be incorporated to evaluate the impact on the overall results. The economic and social impacts associated with each option will then be reviewed to understand how they fluctuate at different sites. All the results will be summarized, and an interactive tool using Excel will be developed to help select the best sustainable site remediation technology. Preliminary LCA results show improved sustainability for the decontamination of a gasoline-contaminated site for each technology compared to the no-remediation option. Sensitivity analyses are now being completed on site parameters, including soil type and transportation distances, to determine how the environmental impacts fluctuate at other contaminated gasoline locations as the parameters vary. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined. Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites
Procedia PDF Downloads 58
16568 Applied Methods for Lightweighting Structural Systems
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
By reducing the gravity load of the structural and non-structural components, lightweight construction can be achieved along with improved efficiency and functional performance. The advantages of lightweight construction can be examined on two levels. The first is the mass reduction of the load-bearing structure, which increases the usable internal space; the second is the mass reduction of the building as a whole, which in turn decreases the effects of seismic loads. To achieve this goal, the essential specifications of the building materials and the optimum load-bearing geometry of the structural systems and elements have to be considered, so the selection of lightweight materials, particularly lightweight aggregate for building components, is the first step of lightweight construction. In the next step, prominent examples of Iran's traditional architecture are selected and the way these works were refined is analysed from the viewpoints of structural efficiency and lightweighting, and practical methods of lightweight construction are extracted from them. The optimum design of the load-bearing geometry of the structural system has to be considered not only in the individual structural elements but also in their composition; the selection of dimensions, proportions, forms and optimum orientations can lead to maximum material efficiency in carrying loads and stresses. Keywords: gravity load, lightweighting structural system, load bearing geometry, seismic behavior
Procedia PDF Downloads 521
16567 Simulation of a Fluid Catalytic Cracking Process
Authors: Sungho Kim, Dae Shik Kim, Jong Min Lee
Abstract:
The fluid catalytic cracking (FCC) process is one of the most important processes in the modern refinery industry, and it is the focus of this paper. Because the FCC process is difficult to model well, due to its nonlinearities and the many interactions between its process variables, rigorous process modeling of the whole FCC plant is needed for control and plant-wide optimization. In this study, a process design for an FCC plant comprising the riser reactor, main fractionator, and gas processing unit was developed. The reactor model is based on a four-lumped kinetic scheme. The main fractionator, gas processing unit and other process units are designed to reproduce real plant data using the process flowsheet simulator Aspen PLUS, and the custom reactor model was integrated with the flowsheet simulator to develop an integrated process model. Keywords: fluid catalytic cracking, simulation, plant data, process design
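A small Python sketch of a four-lump riser kinetic scheme of the kind mentioned above (gas oil cracking to gasoline, light gas and coke, with gasoline overcracking); the rate constants, orders and residence time are assumed values for demonstration, not regressed plant data or the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4, k5 = 0.30, 0.08, 0.04, 0.05, 0.02   # 1/s, assumed rate constants

def four_lump(t, y):
    gas_oil, gasoline, gas, coke = y
    r_go = (k1 + k2 + k3) * gas_oil**2              # gas oil cracking (2nd order)
    return [
        -r_go,
        k1 * gas_oil**2 - (k4 + k5) * gasoline,     # gasoline formed, then overcracked
        k2 * gas_oil**2 + k4 * gasoline,            # light gas
        k3 * gas_oil**2 + k5 * gasoline,            # coke
    ]

sol = solve_ivp(four_lump, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], dense_output=True)
go, gl, gs, ck = sol.y[:, -1]
print(f"yields after 10 s residence: gas oil {go:.2f}, gasoline {gl:.2f}, "
      f"gas {gs:.2f}, coke {ck:.2f}")
```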
Procedia PDF Downloads 454
16566 High-Throughput Screening and Selection of Electrogenic Microbial Communities Using Single Chamber Microbial Fuel Cells Based on 96-Well Plate Array
Authors: Lukasz Szydlowski, Jiri Ehlich, Igor Goryanin
Abstract:
We demonstrate a single-chamber, 96-well-plate-based Microbial Fuel Cell (MFC) with printed electronic components. The device is aimed at the robust selection of electrogenic microbial communities under specific conditions, e.g., electrode potential, pH, nutrient concentration and salt concentration, which can be varied across the 96-well array. It enables robust selection in a homogeneous reactor with multiple conditions that can be altered in parallel, allowing comparative analysis, and it can be used as a standalone technique or in conjunction with other selective processes, e.g., flow cytometry or microfluidic-based dielectrophoretic trapping. Mobile conductive elements, such as carbon paper, carbon sponge, activated charcoal granules or metal mesh, can be inserted to increase the anode surface area, collect electrogenic microorganisms, and transfer them into new reactors or to other analytical work. The 96-well plate format also allows the device to be operated by automated pipetting stations. Keywords: bioengineering, electrochemistry, electromicrobiology, microbial fuel cell
Procedia PDF Downloads 148
16565 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling
Authors: Ahmad Odeh, Ahmad Jrade
Abstract:
Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce operational energy while still meeting disciplinary needs. Operational energy represents a substantial part of a building's lifecycle energy usage, but embodied energy remains an important aspect that is rarely accounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly because of the complexity of the calculation and the many factors involved: the equipment used, the fuel needed and the electricity required for each material vary with location, so the embodied energy differs for each project. Moreover, the methods and techniques used in manufacturing, transporting and placing a material strongly influence its embodied energy. This has made such energy use difficult to calculate or even benchmark. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. It offers a systematic approach with an efficient method of calculation, providing new insight into construction material selection. The model is developed in a BIM environment and quantifies the embodied energy of construction materials over the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each covering a set of the most commonly used construction materials: the first holds the energy required to manufacture each material, the second the energy required to transport it, and the third the energy required by the tools and cranes needed to place an item in its intended location. The model gives designers the set of available construction materials and their associated embodied energies for use during design. Through geospatial data and dimensional material analysis, it also automatically calculates the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RS Means cost data, and the costs are automatically recalculated for any modification. Design criteria covering both operational and embodied energy will lead designers to re-evaluate their material selections for cost, energy and, most importantly, sustainability. Keywords: building information modelling, energy, life cycle analysis, sustainability
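A toy Python sketch of an embodied-energy tally over the three life stages named above (manufacturing, transportation, placement); all coefficients and quantities below are invented placeholder values, not figures from the paper's databases.

```python
MATERIALS = {
    #             MJ/kg manufacture, MJ/(kg*km) transport, MJ/kg placement (assumed)
    "concrete": dict(manufacture=1.1,  transport=0.002, placement=0.05),
    "steel":    dict(manufacture=20.0, transport=0.002, placement=0.10),
    "timber":   dict(manufacture=7.5,  transport=0.002, placement=0.03),
}

def embodied_energy(material, mass_kg, distance_km):
    c = MATERIALS[material]
    manufacturing = c["manufacture"] * mass_kg
    transportation = c["transport"] * mass_kg * distance_km
    placement = c["placement"] * mass_kg
    return manufacturing + transportation + placement   # MJ

# Bill of quantities for a hypothetical element: (material, mass in kg, haul distance in km)
bill = [("concrete", 12_000, 35), ("steel", 900, 220), ("timber", 400, 80)]
for material, mass, dist in bill:
    print(f"{material:<9} {embodied_energy(material, mass, dist):>10.0f} MJ")
print(f"total     {sum(embodied_energy(m, kg, km) for m, kg, km in bill):>10.0f} MJ")
```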
Procedia PDF Downloads 269
16564 Opportunities and Challenges in Midwifery Education: A Literature Review
Authors: Abeer M. Orabi
Abstract:
Midwives are being seen as a key factor in returning birth care to a normal physiologic process that is woman-centered. On the other hand, more needs to be done to increase access for every woman to professional midwifery care. Because of the nature of the midwifery specialty, the magnitude of the effect that can result from a lack of knowledge if midwives make a mistake in their care has the potential to affect a large number of the birthing population. So, the development, running, and management of midwifery educational programs should follow international standards and come after a thorough community needs assessment. At the same time, the number of accredited midwifery educational programs needs to be increased so that larger numbers of midwives will be educated and qualified, as well as access to skilled midwifery care will be increased. Indeed, the selection of promising midwives is important for the successful completion of an educational program, achievement of the program goals, and retention of graduates in the field. Further, the number of schooled midwives in midwifery education programs, their background, and their experience constitute some concerns in the higher education industry. Basically, preceptors and clinical sites are major contributors to the midwifery education process, as educational programs rely on them to provide clinical practice opportunities. In this regard, the selection of clinical training sites should be based on certain criteria to ensure their readiness for the intended training experiences. After that, communication, collaboration, and liaison between teaching faculty and field staff should be maintained. However, the shortage of clinical preceptors and the massive reduction in the number of practicing midwives, in addition to unmanageable workloads, act as significant barriers to midwifery education. Moreover, the medicalized approach inherent in the hospital setting makes it difficult to practice the midwifery model of care, such as watchful waiting, non-interference in normal processes, and judicious use of interventions. Furthermore, creating a motivating study environment is crucial for avoiding unnecessary withdrawal and retention in any educational program. It is well understood that research is an essential component of any profession for achieving its optimal goal and providing a foundation and evidence for its practices, and midwifery is no exception. Midwives have been playing an important role in generating their own research. However, the selection of novel, researchable, and sustainable topics considering community health needs is also a challenge. In conclusion, ongoing education and research are the lifeblood of the midwifery profession to offer a highly competent and qualified workforce. However, many challenges are being faced, and barriers are hindering their improvement.Keywords: barriers, challenges, midwifery education, educational programs
Procedia PDF Downloads 115
16563 Computer-Based Model for Design Selection of Lightning Arrester for 132/33kV Substation
Authors: Uma U. Uma, Uzoechi Laz
Abstract:
The protection of equipment insulation against lightning overvoltages, and the selection of a lightning arrester that will discharge at a lower voltage level than the voltage required to break down the electrical equipment insulation, are examined. The objective of this paper is to design a computer-based model, using standard equations, for the selection of the lowest-rated surge arrester that will provide adequate protection of the equipment insulation and still have a satisfactory service life when connected to a specified line voltage in a power system network. The effectiveness or ineffectiveness of the substation earthing system determines the arrester properties. A MATLAB program with a GUI (graphical user interface) and its subprograms is used to develop the model for determining the required parameters, such as voltage rating, impulse spark-over voltage, power frequency spark-over voltage, discharge current, current rating and protection level of the lightning arrester, for a specified line voltage level. Keywords: lightning arrester, GUIs, MATLAB program, computer-based model
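A rough Python sketch of the kind of sizing calculation such a model performs, using commonly quoted rules of thumb rather than the paper's standard equations: the earth-fault factor of 0.8 times the highest system voltage for an effectively earthed system, and a 1.2 protective margin between the equipment BIL and the arrester protection level, are assumptions for illustration only, as are the 132/145 kV and BIL figures.

```python
import math

def arrester_rating(highest_voltage_kv, effectively_earthed=True):
    cov = highest_voltage_kv / math.sqrt(3)                 # continuous operating voltage
    earth_fault_factor = 0.8 if effectively_earthed else 1.0
    min_rated = earth_fault_factor * highest_voltage_kv     # temporary overvoltage duty
    return cov, min_rated

def protection_ok(equipment_bil_kv, arrester_protection_level_kv, margin=1.2):
    return equipment_bil_kv >= margin * arrester_protection_level_kv

cov, rated = arrester_rating(highest_voltage_kv=145)        # 132 kV system, Um assumed 145 kV
print(f"COV ~ {cov:.0f} kV, minimum rated voltage ~ {rated:.0f} kV "
      f"(choose the next standard rating above this)")
print("insulation protected:", protection_ok(equipment_bil_kv=550,
                                              arrester_protection_level_kv=360))
```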
Procedia PDF Downloads 417