Search results for: factor models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11440


10300 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of aggregate planning activity in an industrial entity and uses the trial and error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, benchmarking of other furniture manufacturing industries was undertaken to assess the relevance and level of use of such models in other furniture firms.
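
As a minimal illustration of such a linear programming formulation (not the authors' spreadsheet model), the sketch below minimizes production and holding costs over a three-period horizon subject to demand balance and capacity constraints; all demand, cost and capacity figures are hypothetical.

```python
# Minimal aggregate production planning LP sketch (hypothetical data),
# solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

demand = [120, 150, 130]          # units per period (hypothetical)
prod_cost, hold_cost = 20.0, 2.0  # cost per unit produced / held
capacity = 140                    # regular production limit per period
T = len(demand)

# Decision vector x = [p1, p2, p3, i1, i2, i3] (production, end inventory).
c = [prod_cost] * T + [hold_cost] * T

# Inventory balance: i_t - i_{t-1} - p_t = -d_t  (with i_0 = 0).
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = -1.0              # -p_t
    A_eq[t, T + t] = 1.0           # +i_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -i_{t-1}
b_eq = [-d for d in demand]

bounds = [(0, capacity)] * T + [(0, None)] * T  # p_t <= capacity, i_t >= 0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("plan:", res.x[:T].round(1), "cost:", round(res.fun, 1))
```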

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 553
10299 The Employees' Classification Method in the Space of Their Job Satisfaction, Loyalty and Involvement

Authors: Svetlana Ignatjeva, Jelena Slesareva

Abstract:

The aim of the study is the development and adaptation of a method to analyze and quantify the indicators characterizing the relationship between a company and its employees. Diagnostics of such indicators is one of the most complex and topical issues in the psychology of labour. The proposed method is based on a questionnaire whose indicators reflect the cognitive, affective and conative components of the socio-psychological attitude of employees to be as efficient as possible in their professional activities. This approach allows measuring not only the selected factors but also such parameters as cognitive and behavioural dissonances. Adaptation of the questionnaire includes analysis of the factor structure and of the suitability of the indicators of the measured phenomena in terms of the internal consistency of individual factors. The structural validity of the questionnaire was tested by exploratory factor analysis, using principal component analysis as the extraction method and varimax with Kaiser normalization as the rotation method. Factor analysis allows reducing the dimensionality of the phenomena, moving from indicators to aggregate indexes and latent variables. Aggregate indexes are obtained as the sum of the relevant indicators, followed by standardization. Cronbach's alpha coefficient was used to assess the reliability-consistency of the questionnaire items. A two-step cluster analysis in the space of the extracted factors allows classifying employees according to their attitude to work in the company. The results of psychometric testing indicate the possibility of using the developed technique for analyzing employees' attitudes towards their work in companies and for developing recommendations on their optimization.
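
As a rough illustration of this pipeline (not the authors' implementation), the sketch below runs principal-component extraction with varimax rotation via the factor_analyzer package, computes Cronbach's alpha from its textbook definition, and uses scikit-learn's KMeans as a stand-in for the two-step cluster analysis; the Likert-type responses are randomly generated placeholders.

```python
# Sketch of the questionnaire analysis pipeline (hypothetical data):
# PCA extraction with varimax rotation, Cronbach's alpha, then clustering.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 12)),
                     columns=[f"q{i}" for i in range(1, 13)])  # Likert 1-5

# Exploratory factor analysis: principal-component extraction, varimax rotation.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
fa.fit(items)
scores = fa.transform(items)          # respondents in the space of factors

def cronbach_alpha(df):
    """Reliability-consistency of a set of items (classical definition)."""
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print("alpha:", round(cronbach_alpha(items), 3))

# Stand-in for the two-step cluster analysis: classify employees by attitude.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```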

Keywords: involvement in the organization, loyalty, organizations, method

Procedia PDF Downloads 354
10298 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
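
A minimal sketch of the random-forest variant of such a ground-motion model, on synthetic records standing in for the real database; the mixed-effects treatment of event and site terms described above is omitted for brevity.

```python
# Sketch of a random-forest ground-motion model (synthetic data; the real
# study uses 4,528 records from Oklahoma, Kansas and Texas with random
# effects for event and site terms, omitted here for brevity).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
mag = rng.uniform(3.0, 5.8, n)       # moment magnitude
dist = rng.uniform(4.0, 500.0, n)    # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)  # site stiffness proxy, m/s
# Toy attenuation relation plus noise, standing in for observed ln(PGA).
ln_pga = 1.2 * mag - 1.6 * np.log(dist) - 0.002 * vs30 + rng.normal(0, 0.5, n)

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out records:", round(rf.score(X_te, y_te), 3))
```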

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 121
10297 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood (order 1 and order 2; MQL1, MQL2) and penalized quasi-likelihood (order 1 and order 2; PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. Power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 138
10296 Characteristic Study on Conventional and Soliton Based Transmission System

Authors: Bhupeshwaran Mani, S. Radha, A. Jawahar, A. Sivasubramanian

Abstract:

Here, we study the characteristic features of conventional (ON-OFF keying) and soliton-based transmission systems. We consider a 20 Gbps transmission system implemented with Conventional Single Mode Fiber (C-SMF) to examine the role of the Gaussian pulse, which is characteristic of conventional propagation, and the hyperbolic-secant pulse, which is characteristic of soliton propagation. We note the influence of these pulses with respect to different dispersion lengths and soliton periods in the conventional and soliton systems, respectively, and evaluate the system performance in terms of quality factor. From the analysis, we show that the soliton pulse gives more consistent performance, even over long distances without dispersion compensation, than the conventional system, as it is robust to dispersion. For a transmission length of 200 km, the soliton system yielded Q of 33.958 while the conventional system was totally exhausted with Q = 0.
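
The dispersion robustness described above can be illustrated numerically; the sketch below propagates a Gaussian pulse (dispersion only) and a fundamental sech soliton (dispersion plus Kerr nonlinearity) with a split-step Fourier scheme in normalized NLSE units, which are illustrative and not the 20 Gbps system parameters of the study.

```python
# Split-step Fourier sketch (normalized NLSE units): a Gaussian pulse
# broadens under dispersion alone, while a fundamental sech soliton
# keeps its shape when dispersion and Kerr nonlinearity balance.
import numpy as np

t = np.linspace(-20, 20, 1024)
w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
dz, steps = 0.01, 500                 # propagate z = 5 dispersion lengths

def propagate(u, nonlinear):
    for _ in range(steps):
        u = np.fft.ifft(np.exp(-0.5j * w**2 * dz) * np.fft.fft(u))  # dispersion
        if nonlinear:
            u = u * np.exp(1j * np.abs(u)**2 * dz)                  # Kerr term
    return u

gauss = propagate(np.exp(-t**2 / 2), nonlinear=False)
soliton = propagate(1 / np.cosh(t), nonlinear=True)

def rms_width(u):
    p = np.abs(u)**2
    return np.sqrt(np.sum(t**2 * p) / np.sum(p))

print("Gaussian RMS width:", round(rms_width(gauss), 2))
print("Soliton RMS width: ", round(rms_width(soliton), 2))
```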

Keywords: dispersion length, return-to-zero (RZ), soliton, soliton period, Q-factor

Procedia PDF Downloads 340
10295 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.
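
For illustration, a minimal convolutional network for the healthy-versus-pneumonia stage might look like the Keras sketch below; the architecture, input size and (commented) training call are hypothetical stand-ins, not the authors' exact model.

```python
# Minimal convolutional-network sketch for the healthy-vs-pneumonia stage
# (hypothetical architecture and input size; not the authors' exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),           # grayscale chest X-ray
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(pneumonia)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(test_images, test_labels))  # train/test split
```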

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 98
10294 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process

Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade

Abstract:

The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models and their applications in water resources planning, management and operation are of paramount importance in hydrological studies. This paper reviews some of the previous works on rainfall-runoff process modeling. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff process modeling were studied. Gradient training methods such as error back-propagation (BP), as well as evolutionary algorithms (EAs), are discussed in relation to the training of artificial neural networks, and it is shown that the application of EAs to artificial neural network training could be an alternative to other training methods. Further research to exploit the abundant expert knowledge in the area of artificial intelligence for the solution of hydrologic and water resources planning and management problems is therefore needed.
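
The idea of training an ANN with an evolutionary algorithm instead of back-propagation can be sketched in a few lines; below, a simple (mu + lambda)-style evolution strategy searches the weight space of a tiny one-hidden-layer network on synthetic rainfall-runoff-like data. All sizes and data are placeholders.

```python
# Toy sketch of evolutionary training of a small ANN (numpy only): a simple
# (mu + lambda) evolution strategy searches the weight space instead of
# back-propagation. Data and network size are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
rain = rng.uniform(0, 50, 300)                                   # rainfall
runoff = 0.6 * np.maximum(rain - 5, 0) + rng.normal(0, 1, 300)   # toy target

def mse(w, x, y, hidden=8):
    W1 = w[:hidden].reshape(hidden, 1); b1 = w[hidden:2 * hidden]
    W2 = w[2 * hidden:3 * hidden]; b2 = w[-1]
    h = np.tanh(W1 @ x[None, :] + b1[:, None])   # hidden layer
    return np.mean((W2 @ h + b2 - y) ** 2)

dim = 3 * 8 + 1
pop = rng.normal(0, 1, (30, dim))                # initial population
for gen in range(200):
    fit = np.array([mse(w, rain, runoff) for w in pop])
    parents = pop[np.argsort(fit)[:10]]          # select the best
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, dim))
    pop = np.vstack([parents, children])         # mutate and replace
print("best MSE:", round(min(mse(w, rain, runoff) for w in pop), 3))
```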

Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model

Procedia PDF Downloads 449
10293 The Effect of Symmetry on the Perception of Happiness and Boredom in Design Products

Authors: Michele Sinico

Abstract:

The present research investigates the effect of symmetry on the perception of happiness and boredom in design products. Three experiments were carried out in order to verify the degree of visual expressive value in different models of bookcases, wall clocks, and chairs. Sixty participants directly indicated the degree of happiness and boredom using 7-point rating scales. The findings show that the participants acknowledged a different value of expressive quality in the different product models. Results also show that symmetry is not a significant constraint for an emotional design project.

Keywords: product experience, emotional design, symmetry, expressive qualities

Procedia PDF Downloads 146
10292 Airliner-UAV Flight Formation in Climb Regime

Authors: Pavel Zikmund, Robert Popela

Abstract:

Extreme formation is a theoretical concept of self-sustained flight in which a big airliner is followed by a small UAV glider flying in the airliner's wake vortex. The paper presents results of a climb analysis with the goal of lifting the gliding UAV to the airliner's cruise altitude. Wake vortex models, the UAV's drag polar and basic parameters, and the airliner's climb profile are introduced first. Then, the flight performance of the UAV in the wake vortex is evaluated by analytical methods. The time history of the optimal distance between the airliner and the UAV during the climb is determined. The results are encouraging; therefore, the available UAV drag margin for electricity generation is determined for different vortex models.
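
For readers unfamiliar with wake vortex models, the sketch below evaluates the tangential-velocity profile of a Lamb-Oseen vortex, one standard model for this kind of analysis; the circulation and core radius are hypothetical placeholders, not the airliner parameters used in the study.

```python
# Tangential velocity of a Lamb-Oseen wake vortex (a standard model for
# this kind of analysis); circulation and core radius are hypothetical.
import numpy as np

gamma = 400.0    # circulation, m^2/s (placeholder)
r_core = 3.0     # viscous core radius, m (placeholder)

def v_theta(r):
    """Lamb-Oseen profile: solid-body core, 1/r decay far from the axis."""
    return gamma / (2 * np.pi * r) * (1 - np.exp(-(r / r_core) ** 2))

for r in (1.0, 3.0, 10.0, 30.0):
    print(f"r = {r:5.1f} m  ->  v_theta = {v_theta(r):6.2f} m/s")
```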

Keywords: flight in formation, self-sustained flight, UAV, wake vortex

Procedia PDF Downloads 436
10291 Top Management Support as an Enabling Factor for Academic Innovation through Knowledge Sharing

Authors: Sawsan J. Al-husseini, Talib A. Dosa

Abstract:

Educational institutions are today facing increasing pressures due to economic, political and social upheaval. This is only exacerbated by the nature of education as an intangible good which relies upon the intellectual assets of the organisation: its staff. Top management support has been acknowledged as having a positive general influence on knowledge management and creativity. However, there is a lack of models linking top management support, knowledge sharing, and innovation within higher education institutions in general, within developing countries, and particularly in Iraq. This research sought to investigate the impact of top management support on innovation through the mediating role of knowledge sharing in Iraqi private HEIs. A quantitative approach was taken, and 262 valid responses were collected to test the causal relationships between top management support, knowledge sharing, and innovation. Employing structural equation modelling with AMOS v.25, the research demonstrated that knowledge sharing plays a pivotal role in the relationship between top management support and innovation. The research has produced some guidelines for researchers as well as leaders, and provided evidence to support the use of knowledge sharing to increase innovation within the higher education environment in developing countries, particularly Iraq.
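
The mediation logic can be illustrated with a simplified regression decomposition (the study itself used full SEM in AMOS v.25); the sketch below estimates the indirect effect of top management support on innovation through knowledge sharing on synthetic data with the reported sample size.

```python
# Simplified stand-in for the SEM mediation test (the study used AMOS):
# estimate the indirect effect of top management support (TMS) on
# innovation through knowledge sharing (KS) with two OLS regressions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 262                                   # matches the reported sample size
tms = rng.normal(0, 1, n)
ks = 0.5 * tms + rng.normal(0, 1, n)      # synthetic mediator
innov = 0.6 * ks + 0.1 * tms + rng.normal(0, 1, n)
df = pd.DataFrame({"tms": tms, "ks": ks, "innov": innov})

a = smf.ols("ks ~ tms", df).fit().params["tms"]   # TMS -> KS
fit = smf.ols("innov ~ ks + tms", df).fit()
b = fit.params["ks"]                              # KS -> innovation
print("indirect effect a*b:", round(a * b, 3),
      " direct effect c':", round(fit.params["tms"], 3))
```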

Keywords: top management support, knowledge sharing, innovation, structural equation modelling

Procedia PDF Downloads 321
10290 RANS Simulation of Viscous Flow around Hull of Multipurpose Amphibious Vehicle

Authors: M. Nakisa, A. Maimun, Yasser M. Ahmed, F. Behrouzi, A. Tarmizi

Abstract:

The practical application of Computational Fluid Dynamics (CFD) for predicting the flow pattern around a Multipurpose Amphibious Vehicle (MAV) hull has made much progress over the last decade. Today, several CFD tools play an important role in land- and water-going vehicle hull form design. CFD has been used for analysis of MAV hull resistance, sea-keeping and maneuvering, and for investigating their variation when changing the hull form by varying its parameters, which represents a very important task in the principal and final design stages. Resistance analysis based on CFD simulation has become a decisive factor in the development of new, economically efficient and environmentally friendly hull forms. A three-dimensional finite volume method (FVM) based on the Reynolds Averaged Navier-Stokes equations (RANS) has been used to simulate incompressible flow around three types of MAV hull bow models in steady-state condition. Finally, the flow structure and streamlines, friction and pressure resistance, and velocity contours of each type of hull bow are compared and discussed.

Keywords: RANS simulation, multipurpose amphibious vehicle, viscous flow structure, mechatronic

Procedia PDF Downloads 310
10289 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters (initial loss, reduction factor, time of concentration and time lag) were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior predictions of runoff using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
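
The core ABC step can be sketched compactly; below, a rejection sampler calibrates a single parameter of a toy runoff model against synthetic observations, yielding the posterior distribution and credible interval mentioned above. The framework itself was written in R and calibrates four parameters; this Python sketch with made-up data is purely illustrative.

```python
# Minimal ABC rejection-sampler sketch for calibrating one parameter
# (a runoff reduction factor) of a toy model; the actual framework
# calibrates four parameters of a time-area model in R.
import numpy as np

rng = np.random.default_rng(4)
rain = rng.uniform(0, 20, 50)                       # toy rainfall series

def simulate(reduction):
    return reduction * rain                         # toy runoff model

observed = simulate(0.35) + rng.normal(0, 0.5, 50)  # "measured" runoff

accepted = []
for _ in range(20000):
    theta = rng.uniform(0, 1)                       # prior draw
    sim = simulate(theta)
    distance = np.sqrt(np.mean((sim - observed) ** 2))  # RMSE as summary
    if distance < 0.6:                              # tolerance
        accepted.append(theta)

post = np.array(accepted)
print("posterior mean:", round(post.mean(), 3),
      " 95% credible interval:",
      np.percentile(post, [2.5, 97.5]).round(3))
```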

Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 301
10288 Problem Gambling in the Conceptualization of Health Professionals: A Qualitative Analysis of the Discourses Produced by Psychologists, Psychiatrists and General Practitioners

Authors: T. Marinaci, C. Venuleo

Abstract:

Different conceptualizations of disease affect patient care, yet little is known about how health professionals conceptualize gambling problems. This study aims to address this gap. It explores how health professionals conceptualize the gambling problem, addiction and the goals of the recovery process. In-depth, semi-structured, open-ended interviews were conducted with Italian psychologists, psychiatrists, general practitioners, and support staff (N = 114) working within health centres for the treatment of addiction (public health services or therapeutic communities) or medical offices. A Lexical Correspondence Analysis (LCA) was applied to the verbatim transcripts. LCA allowed the identification of two main factorial dimensions, which organize similarity and dissimilarity in the discourses of the interviewees. The first dimension, labelled 'Models of relationship with the problem', concerns two different models of relationship with the health problem: one related to the request for help and the process of taking charge, and the other related to the identification of the psychopathology underlying the disorder. The second dimension, labelled 'Organisers of the intervention', reflects the dialectic between two ways of addressing the problem. On the one hand, the intervention is organized around the gambling dynamics and their immediate life consequences (whatever the request of the user is); on the other hand, it is organized around the procedures and tools which characterize the health service (whatever the user's problem is, and despite the specificity of the user's request). The results highlight how, despite the differences, the respondents share a central assumption: understanding the gambling problem implies reference to the gambler's identity more than, for instance, to the relational, social, cultural or political context where the gambler lives. A passive stance is attributed to the user, who does not play any role in the definition of the goal of the intervention. The results will be discussed to highlight the relationship between professional models and users' ways of understanding and dealing with the problems related to gambling.

Keywords: cultural models, health professionals, intervention models, problem gambling

Procedia PDF Downloads 152
10287 Probing Syntax Information in Word Representations with Deep Metric Learning

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, with the development of large-scale pre-trained language models, building vector representations of text through deep neural network models has become standard practice for natural language processing tasks. From the performance on downstream tasks, we know that the text representations constructed by these models contain linguistic information, but its encoding mode and extent are unclear. In this work, a structural probe is proposed to detect whether a syntax tree is embedded in the vector representations produced by a deep neural network. The probe is trained with a deep metric learning method, so that the distance between word vectors in the metric space it defines encodes the distance between words on the syntax tree, and the norm of a word vector encodes the depth of the word on the syntax tree. The experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
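
A condensed sketch of the structural-probe idea: learn a linear map B such that squared distances between projected word vectors approximate pairwise syntax-tree distances. The shapes, the random "contextual vectors" and the toy tree distances below are placeholders, and the plain L1 objective is a simplification of the deep metric learning setup.

```python
# Condensed sketch of a structural probe (PyTorch): learn a linear map B
# so that squared L2 distances between projected word vectors approximate
# pairwise distances on the syntax tree. Shapes and data are placeholders.
import torch

d_model, d_probe, seq_len = 768, 128, 12
B = torch.randn(d_probe, d_model, requires_grad=True)
opt = torch.optim.Adam([B], lr=1e-3)

# Placeholder batch: contextual word vectors and gold tree distances.
h = torch.randn(seq_len, d_model)            # e.g. one BERT layer's output
tree_dist = torch.randint(1, 8, (seq_len, seq_len)).float()
tree_dist = (tree_dist + tree_dist.T) / 2    # symmetrize the toy target
tree_dist.fill_diagonal_(0)                  # a word is at distance 0 from itself

for step in range(200):
    z = h @ B.T                              # project into probe space
    diff = z.unsqueeze(0) - z.unsqueeze(1)   # pairwise differences
    pred = (diff ** 2).sum(-1)               # squared metric distances
    loss = (pred - tree_dist).abs().mean()   # L1 probe loss
    opt.zero_grad(); loss.backward(); opt.step()
print("final probe loss:", float(loss))
```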

Keywords: deep metric learning, syntax tree probing, natural language processing, word representations

Procedia PDF Downloads 63
10286 Relational and Personal Variables Predicting Marital Satisfaction

Authors: Sezen Gulec, Bilge Uzun

Abstract:

Almost all of the world's population marries at least once in their lifetime. Nevertheless, in reality, only half of all marriages last a lifetime. The most important factor to manage in marriage is the satisfaction that the partners obtain. Marital satisfaction is related not only to maintaining the relationship itself but also to social and work relationships. In this respect, the purpose of the present research is to find the personal and relational factors that predict marital satisfaction. A sample of 378 married individuals (178 males and 200 females) was administered the marital life scale, the multidimensional perfectionism scale, the trait forgivingness scale, the adjective-based personality test and the relationship happiness questionnaire. The findings revealed that marital happiness, forgiveness, extraversion and emotional inconsistency were significant predictors of marital satisfaction.

Keywords: marital satisfaction, happiness, perfectionism, forgiveness, five factor personality

Procedia PDF Downloads 663
10285 Prediction of Bodyweight of Cattle by Artificial Neural Networks Using Digital Images

Authors: Yalçın Bozkurt

Abstract:

Prediction models were developed for accurate prediction of bodyweight (BW) from digital images of beef cattle body dimensions using Artificial Neural Networks (ANN). For this purpose, the animal data were collected at a private slaughterhouse, and the digital images and weights of each live animal were taken just before slaughter. Body dimensions such as digital wither height (DJWH), digital body length (DJBL), digital body depth (DJBD), digital hip width (DJHW), digital hip height (DJHH) and digital pin bone length (DJPL) were determined from the images, using data with 1,069 observations for each trait. Prediction models were then developed by ANN. Digital body measurements were analysed by ANN for bodyweight prediction, and the R² values for DJBL, DJWH, DJHW, DJBD, DJHH and DJPL were approximately 94.32, 91.31, 80.70, 83.61, 89.45 and 70.56%, respectively. It can be concluded that in management situations where BW cannot be measured, it can be predicted accurately by measuring DJBL and DJWH alone, or together with DJBD and even DJHH, and that different models may be needed to predict BW under different feeding and environmental conditions and for different breeds.
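
A compact sketch of this kind of ANN regression, using scikit-learn's MLPRegressor on synthetic stand-in data with the same record count; the linear toy relation and measurement ranges are invented, not the study's measurements.

```python
# Sketch of ANN bodyweight prediction from digital body measurements
# (synthetic stand-in for the 1,069-record dataset described above).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 1069
djbl = rng.normal(150, 10, n)     # digital body length, cm (placeholder)
djwh = rng.normal(130, 8, n)      # digital wither height, cm (placeholder)
bw = 3.2 * djbl + 2.1 * djwh - 350 + rng.normal(0, 15, n)  # toy relation

X = np.column_stack([djbl, djwh])
X_tr, X_te, y_tr, y_te = train_test_split(X, bw, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print("R^2:", round(r2_score(y_te, ann.predict(X_te)), 3))
```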

Keywords: artificial neural networks, bodyweight, cattle, digital body measurements

Procedia PDF Downloads 365
10284 Atmospheric Circulation Drivers of Nationally-Aggregated Wind Energy Production over Greece

Authors: Kostas Philippopoulos, Chris G. Tzanis, Despina Deligiorgi

Abstract:

Climate change adaptation requires the exploitation of renewable energy sources such as wind. However, climate variability can affect the regional wind energy potential and consequently the available wind power production. The goal of the research project is to examine the impact of atmospheric circulation on wind energy production over Greece. In the context of synoptic climatology, the proposed novel methodology employs Self-Organizing Maps for grouping and classifying the atmospheric circulation and the nationally-aggregated capacity factor time series for a 30-year period. The results indicate the critical effect of atmospheric circulation on the nationally-aggregated wind energy production values and therefore address the issue of the optimum distribution of wind farms for a specific region.
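
The SOM classification step might be sketched as below with the minisom package, where random arrays stand in for the reanalysis circulation fields and the national capacity-factor series, and the map size is an arbitrary choice.

```python
# Sketch of the SOM-based synoptic classification (minisom package;
# random data stands in for reanalysis pressure fields and the national
# capacity-factor series).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(6)
days, grid = 10957, 15 * 20                 # ~30 years, flattened SLP grid
fields = rng.normal(0, 1, (days, grid))     # placeholder circulation data
cf = rng.uniform(0, 0.6, days)              # placeholder capacity factors

som = MiniSom(4, 3, grid, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(fields, 5000)              # fit the 4x3 map of patterns

# Average wind capacity factor conditional on each circulation type.
mean_cf = {}
for x, y in zip(fields, cf):
    node = som.winner(x)                    # best-matching SOM node
    mean_cf.setdefault(node, []).append(y)
for node in sorted(mean_cf):
    print(node, round(np.mean(mean_cf[node]), 3))
```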

Keywords: wind energy, atmospheric circulation, capacity factor, self-organizing maps

Procedia PDF Downloads 157
10283 Development and Validation of Sense of Humor Questionnaire in China

Authors: Yunshi Peng, Shanshan Gao, Sang Qin

Abstract:

The sense of humor is an integration of cognition, emotion and behavioral tendencies in the process of expressing humor. Previous studies have evidenced the positive impact of a sense of humor on promoting mental health. However, very few studies have investigated this with Chinese populations. The absence of a validated questionnaire limits empirical research on the sense of humor in China. This study aimed to develop a Chinese instrument to examine the sense of humor among college students in China. A pool of 72 items was developed through a series of qualitative methods, including an open-ended questionnaire, individual interviews and literature analysis, followed by an expert rating. A total of 500 college students were recruited from 7 provinces in China to complete all 72 items. The factor structure of the sense of humor was established, and 25 items were eventually retained by utilizing exploratory factor analysis (EFA). The questionnaire comprises 4 subscales: humor comprehension, humor creativity, attitudes towards humor and optimism level. Confirmatory factor analysis (CFA) from a follow-up study with a different sample of 1,200 college students showed good model fit. All subscales and the overall questionnaire display satisfactory internal consistency. Correlations with criterion variables demonstrated good convergent and discriminant validity. The sense of humor questionnaire is a psychometrically sound instrument for the population of college students in China. It is applicable in future studies for identifying the structure of the sense of humor and evaluating the humor levels of individuals.

Keywords: college students, EFA and CFA, questionnaire, sense of humor

Procedia PDF Downloads 340
10282 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern to researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies over the choice of variables and suitable techniques among scholars. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. In this study, the empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The models provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor in a market setting who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
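
The expanding-window evaluation scheme can be sketched as follows, with LASSO standing in for the broader RT family and synthetic predictors replacing the real financial variables; the out-of-sample R² against the historical-average benchmark is the usual headline statistic in this literature.

```python
# Expanding-window out-of-sample sketch: LASSO forecasts of the equity
# premium versus the historical-average benchmark (synthetic data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
T, k = 480, 10                              # 40 years monthly, 10 predictors
X = rng.normal(0, 1, (T, k))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 1, T)  # toy premium

start = 120                                 # initial estimation window
preds, bench = [], []
for t in range(start, T):
    model = Lasso(alpha=0.05).fit(X[:t], y[:t])  # refit on expanding sample
    preds.append(model.predict(X[t:t + 1])[0])
    bench.append(y[:t].mean())                   # historical average

preds, bench, actual = map(np.array, (preds, bench, y[start:]))
r2_oos = 1 - np.sum((actual - preds) ** 2) / np.sum((actual - bench) ** 2)
print("out-of-sample R^2 vs historical mean:", round(r2_oos, 3))
```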

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 101
10281 Structure of Turbulent Flow in the Wire-Wrapped Fuel Assemblies of BREST-OD-300

Authors: Dmitry V. Fomichev, Vladimir I. Solonin

Abstract:

In this paper, an experimental and numerical study of the hydrodynamic characteristics of the air coolant flow in a test wire-wrapped assembly is presented. The test assembly has 37 rods, which are geometrically similar to the real fuel pins of the BREST-OD-300 fuel assemblies. An open-loop air test facility installed at the "Nuclear Power Plants and Installations" department of BMSTU was used to obtain the experimental data. The obtained axial distribution of static pressure in the near-wall region of the test assembly, as well as the velocity and temperature distributions of the coolant flow in the test sections, can give us new knowledge about the mechanism of formation of the turbulent flow structure in wire-wrapped fuel assemblies. Numerical simulations of the turbulent flow have been accomplished using ANSYS Fluent 14.5. Different turbulence models have been considered, such as the standard and RNG k-ε models and the k-ω SST model. Results of the numerical simulations of the flow based on the considered turbulence models give good agreement with the experimental data and help us to carry out a thorough analysis of the flow characteristics.

Keywords: wire-wrapped fuel assembly, turbulent flow structure, computational fluid dynamics

Procedia PDF Downloads 456
10280 Dynamic Model Conception of Improving Services Quality in Railway Transport

Authors: Eva Nedeliakova, Jaroslav Masek, Juraj Camaj

Abstract:

This article describes the results of research focused on the quality of railway freight transport services. Improvement of these services has crucial importance for customers considering future use of railway transport. Processes fulfilling customer demands and the assessment of output quality were defined as part of the research. This contribution introduces the map of quality planning and the algorithm of the applied methodology. It characterises a model which takes into account the characteristics of transportation, linking the perception of service quality in ordinary and extraordinary operation. Despite the fact that rail freight transport has a solid position in the transport market, many carriers worldwide have been experiencing stagnation for several years. Therefore, the specific results of the research have significant importance and belong to the numerous initiatives aimed at developing and supporting railway transport, not only by creating a single railway area or reducing noise but also by promoting railway services. This contribution also focuses on the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process can be taken into account.

Keywords: quality, railway, transport, service

Procedia PDF Downloads 442
10279 Advances in Design Decision Support Tools for Early-Stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces for the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, like prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop Design Decision Support Systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted in connection with each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration with specific targets, this article is a critical and holistic review of the research which leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar are searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models, well combined with the early stages of integrated design (the schematic design stage and the design development stage), which are heuristic, hybrid and meta-simulation-based, and rely on big real data (like Building Energy Management Systems data or web data). Obtaining these data, and using and combining them with simulation data to create models that handle higher uncertainty, are more dynamic and more sensitive to context and culture, and can generate economical, energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 160
10278 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes

Authors: Alan Luo, Hunter N. B. Moseley

Abstract:

Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, whether derived from x-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for x-ray crystallography, or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality were detected across x-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher's exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized b-factors, but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distances as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
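
The core statistical step described here reduces to a 2x2 contingency test; the sketch below runs Fisher's exact test on hypothetical counts of outlier versus non-outlier regions split by a factor cutoff, not on values from the actual PDB analysis.

```python
# Core of the described test: Fisher's exact test on a 2x2 table of
# regional-quality outlier status versus a potential-factor cutoff
# (e.g. normalized b-factor above/below a threshold). Counts are
# hypothetical placeholders, not values from the PDB analysis.
from scipy.stats import fisher_exact

#                factor >= cutoff   factor < cutoff
table = [[60, 40],    # outlier regions (top 5%)
         [300, 600]]  # non-outlier regions

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```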

Keywords: biomacromolecular structure, coenzyme, electron density discrepancy analysis, x-ray crystallography

Procedia PDF Downloads 127
10277 Serum Granulocyte Colony Stimulating Factor is a Potent Stimulator of Hematopoietic Progenitor Cell Mobilization in Trauma Hemorrhagic Shock

Authors: Manoj Kumar, Sujata Mohanty, D. N. Rao, Arul Selvi, Sanjeev K. Bhoi

Abstract:

Background: Mobilization of hematopoietic progenitor cells (HPC) from bone marrow to peripheral blood has been observed in severe trauma and hemorrhagic shock patients. Granulocyte colony stimulating factor (G-CSF) is a potent stimulator that mobilizes HPC from bone marrow to peripheral blood. Objective: The aim of the study was to investigate serum G-CSF levels and correlate them with HPC and outcome. Methods: Peripheral blood samples from 50 hemorrhagic shock patients were collected on arrival for determination of G-CSF and peripheral blood HPC (PBHPC) and compared with healthy controls (n = 15). Serum levels of G-CSF were determined by sandwich ELISA, and PBHPC by Sysmex XE-2100. Data were categorized by age, sex and Injury Severity Score (ISS), and laboratory data were prospectively collected. Data are expressed as mean ± SD and median (min, max). Results: The serum level of G-CSF (264.8 vs. 79.1 pg/ml) and peripheral blood HPC (0.1 vs. 0.01%) were significantly increased in the T/HS patients when compared with the control group. Conclusions: Our studies suggest serum G-CSF is elevated in T/HS patients. The elevation in G-CSF was also associated with mobilization of HPC from bone marrow to peripheral blood. Increased levels of G-CSF in T/HS may play a significant role in the alteration of the hematopoietic compartment.

Keywords: granulocyte colony stimulating factor, G-CSF, hematopoietic progenitor cells, HPC, trauma hemorrhagic shock, T/HS, outcome

Procedia PDF Downloads 325
10276 Relationship of Epidermal Growth Factor Receptor Gene Mutations and Serum Levels of Ligands in Non-Small Cell Lung Carcinoma Patients

Authors: Abdolamir Allameh, Seyyed Mortaza Haghgoo, Adnan Khosravi, Esmaeil Mortaz, Mihan Pourabdollah-Toutkaboni, Sharareh Seifi

Abstract:

Non-Small Cell Lung Carcinoma (NSCLC) is associated with a number of gene mutations in the epidermal growth factor receptor (EGFR). The prognostic significance of mutations in exons 19 and 21, together with serum levels of EGFR, amphiregulin (AR), and Transforming Growth Factor-alpha (TGF-α), is implicated in diagnosis and treatment. The aim of this study was to examine the relationship of EGFR mutations in selected exons with the expression of the relevant ligands in sera samples of NSCLC patients. For this, a group of NSCLC patients (n = 98) referred to the hospital for lung surgery, with a mean age of 59 ± 10.5, was enrolled (M/F: 75/23). A blood specimen was collected from each patient. In addition, formalin-fixed paraffin-embedded tissues were processed for DNA extraction. Gene mutations in exons 19 and 21 were detected by direct sequencing, following DNA amplification by PCR (Polymerase Chain Reaction). Also, serum levels of EGFR, AR, and TGF-α were measured by ELISA. The results of our study show that EGFR mutations were present in 37% of Iranian NSCLC patients. The most frequently identified mutations were deletions in exon 19 (72.2%) and substitutions in exon 21 (27.8%). The most frequently identified alteration, which is considered a rare mutation, was the E872K mutation in exon 21, found in 90% (9 out of 10) of cases. The EGFR mutation detected in exon 21 was significantly (P<0.05) correlated with serum levels of EGFR and TGF-α. Furthermore, it was found that increased serum AR (>3 pg/ml) and TGF-α (>10.5 pg/ml) were associated with shorter overall survival (P<0.05). The results clearly showed a close relationship between EGFR mutations and serum EGFR and serum TGF-α. Increased serum EGFR was associated with TGF-α and AR and linked to a poor prognosis of NSCLC. These findings are relevant to clinical decision-making related to EGFR tyrosine kinase inhibitors (TKIs).

Keywords: lung cancer, Iranian patients, epidermal growth factor, mutation, prognosis

Procedia PDF Downloads 75
10275 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

The task of detecting email spam is a very important one in the era of digital technology, which needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique assists in providing interpretable explanations for specific classifications of emails to help users understand the decision-making process of the model. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that will improve users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
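
A minimal sketch of the LIME step on top of a scikit-learn text classifier, using the lime package; the two toy training emails and the pipeline are placeholders for the real dataset and models evaluated in the paper.

```python
# Minimal sketch of the LIME explanation step (lime package) on top of a
# TF-IDF + logistic regression spam classifier; the two training emails
# are placeholders for a real labeled dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

texts = ["win a free prize now", "meeting agenda for tomorrow"] * 50
labels = [1, 0] * 50                      # 1 = spam, 0 = ham (toy data)

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance("claim your free prize now",
                                 pipe.predict_proba, num_features=4)
print(exp.as_list())                      # influential terms with weights
```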

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression

Procedia PDF Downloads 42
10274 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices

Authors: S. Srinivasan, E. Cretu

Abstract:

The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems allows MEMS devices to be modeled easily using causal transfer functions and interfaced with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and the geometrical nonlinearities, and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which present the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
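
While the library itself is written in Simscape/MATLAB, the pull-in computation it performs numerically has a well-known closed-form estimate for the ideal parallel-plate case, sketched below with hypothetical device parameters.

```python
# Sketch of the pull-in (stability-loss) computation for a gap-varying
# parallel-plate MEMS capacitor, using the standard closed-form estimate
# V_pi = sqrt(8 k g0^3 / (27 eps0 A)); parameter values are hypothetical.
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
k = 2.0                 # suspension stiffness, N/m (placeholder)
g0 = 2e-6               # initial gap, m (placeholder)
area = 100e-6 * 100e-6  # electrode area, m^2 (placeholder)

v_pullin = math.sqrt(8 * k * g0**3 / (27 * eps0 * area))
print(f"pull-in voltage ~ {v_pullin:.2f} V")
```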

Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape

Procedia PDF Downloads 132
10273 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter-scale dynamics and filter anisotropy affect SFS modeling accuracy has been lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. First, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter-scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Second, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients, exceeding 90%, are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
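
As a one-dimensional illustration of the deconvolution idea (not the authors' 3-D LES implementation), the sketch below filters a toy field with an invertible Gaussian kernel and recovers subfilter content by inverting the transfer function in spectral space; the grid size, filter width and toy field are arbitrary choices.

```python
# One-dimensional illustration of direct deconvolution: filter a field
# with an invertible Gaussian kernel in spectral space, then recover
# subfilter content by dividing the transfer function back out (with a
# cutoff for numerical safety). A toy stand-in for the 3-D LES procedure.
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8 * x) + 0.2 * np.sin(24 * x)  # toy "DNS" field

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
delta = 0.25
G = np.exp(-(k * delta) ** 2 / 24)          # Gaussian filter transfer function

u_filt = np.fft.ifft(G * np.fft.fft(u)).real            # filtered field
G_inv = np.where(G > 1e-6, 1.0 / G, 0.0)                # guarded inverse
u_rec = np.fft.ifft(G_inv * np.fft.fft(u_filt)).real    # deconvolved field

print("filtering error :", round(np.abs(u - u_filt).max(), 4))
print("recovery error  :", round(np.abs(u - u_rec).max(), 6))
```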

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 72
10272 Development and Validation of the University of Mindanao Needs Assessment Scale (UMNAS) for College Students

Authors: Ryan Dale B. Elnar

Abstract:

This study developed a multidimensional needs assessment scale for college students called the University of Mindanao Needs Assessment Scale (UMNAS). Although there are context-specific instruments measuring the needs of clinical and non-clinical samples, the literature reveals no standardized scales to measure the needs of college students; thus, a four-phase item development process was initiated to support its content validity. Comprising seven broad facets, namely spiritual-moral, intrapersonal, socio-personal, psycho-emotional, cognitive, physical and sexual, a pyramid model of college needs was deconstructed through an FGD sample to support the literature review. Using various construct validity procedures, the model was further tested using a total of 881 Filipino college students. The results of the study revealed evidence of the reliability and validity of the UMNAS. The reliability indices range from .929 to .933. Exploratory and confirmatory factor analyses revealed a one-factor, six-dimensional instrument to measure the needs of college students. Using multivariate regression analysis, year level and course were found to be predictors of students' needs. Content analysis attested to the usefulness of the instrument in diagnosing students' personal and academic issues and concerns in conjunction with other measures. The norming process included 1,728 students from the different colleges of the University of Mindanao. Further validation is recommended to establish a national norm for the instrument.

Keywords: needs assessment scale, validity, factor analysis, college students

Procedia PDF Downloads 439
10271 Silver Grating for Strong and Reproducible SERS Response

Authors: Y. Kalachyova, O. Lyutakov, V. Svorcik

Abstract:

One of the most significant obstacles to the application of surface enhanced Raman spectroscopy (SERS) is the poor reproducibility of SERS-active substrates: SERS intensity can vary from one substrate to another and, moreover, along a single substrate surface. High enhancement of the near-field intensity is the key factor for ultrasensitive SERS realization. A SERS substrate can be prepared by introducing a highly ordered metal array, where light focusing is achieved through the excitation of surface plasmon-polaritons (SPPs). In this work, we report the preparation of silver nanostructures with plasmon absorption peaks tuned by the metal arrangement. Excimer laser modification of poly(methyl methacrylate) followed by silver evaporation is proposed as an effective way to create a reproducible and effective surface plasmon-polariton (SPP)-based SERS substrate. Theoretical and experimental studies were performed to optimize the structure parameters for effective SPP excitation. It was found that a narrow range of grating periodicity and metal thickness exists where SPPs can be most efficiently excited. In spite of the fact that a SERS response was almost always achieved, the enhancement factor was found to vary mostly with the effectiveness of SPP excitation. When the real structure parameters were set to the optimum for SPP excitation, the SERS enhancement factor increased up to four times. Theoretical and experimental investigation of SPP excitation on the two-dimensional periodic silver array was performed with the aim of making the SERS response as high as possible.
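
The grating-coupling condition behind this optimization can be evaluated directly: at normal incidence, the first-order resonant period satisfies period = wavelength / Re(n_spp). The silver permittivity below is an approximate literature value at an assumed 785 nm excitation, not a parameter from the study.

```python
# First-order grating-coupling condition for SPP excitation at normal
# incidence: m * 2*pi/period = k_spp, so period = m * wavelength / Re(n_spp).
# The silver permittivity is an approximate literature value (assumption).
import numpy as np

wavelength = 785e-9        # assumed excitation wavelength, m
eps_metal = -29.0 + 0.4j   # approx. permittivity of silver at 785 nm
eps_diel = 1.0             # air superstrate

n_spp = np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
period = wavelength / n_spp.real          # first diffraction order, m = 1
print(f"resonant grating period ~ {period * 1e9:.0f} nm")
```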

Keywords: grating, nanostructures, plasmon-polaritons, SERS

Procedia PDF Downloads 263