Search results for: context based planning model
39792 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges in uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to characterize unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two different methodologies. In the first, we use sampling-based uncertainty propagation with first-order error analysis; in the second, we place emphasis on Percentile-Based Optimization (PBO). Subproblem A of the NASA Langley MUQC is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that may be aleatory, but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, may not be precisely known but can be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each parameter of the random variable is an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, sampling in the sampling-based methodology is not exhaustive. The sampling-based methodology therefore has a high probability of underestimating the output bounds, and an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
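As a minimal sketch of the sampling-based propagation described above, the snippet below runs a double-loop Monte Carlo over a distributional p-box: an outer loop samples the epistemic interval parameters (here, the mean and standard deviation of a normal variable, both assumed to lie in illustrative intervals), and an inner loop propagates aleatory samples through a placeholder response function. The intervals, sample sizes, and the function g are illustrative assumptions, not values from the NASA Langley challenge.

import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Placeholder system response; the actual challenge model is a black box.
    return x**2 + 0.5 * x

# Epistemic intervals for the parameters of the aleatory normal variable (assumed).
mu_interval, sigma_interval = (0.0, 1.0), (0.5, 1.5)

lo, hi = np.inf, -np.inf
for _ in range(200):                      # outer loop: epistemic realizations
    mu = rng.uniform(*mu_interval)
    sigma = rng.uniform(*sigma_interval)
    y = g(rng.normal(mu, sigma, 5000))    # inner loop: aleatory propagation
    p95 = np.percentile(y, 95)            # percentile of interest
    lo, hi = min(lo, p95), max(hi, p95)

# Finite sampling tends to under-cover the true bounds, which is the
# motivation the abstract gives for percentile-based optimization.
print(f"sampled bounds on the 95th percentile: [{lo:.3f}, {hi:.3f}]")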
Procedia PDF Downloads 246
39791 Environmental Quality in Urban Areas: Legal Aspect and Institutional Dimension: A Case Study of Algeria
Authors: Youcef Lakhdar Hamina
Abstract:
In order to address the specificity of ecological damage, it is imperative to assert the procedural and objective aspects of liability, which leads us to analyse current trends in the development of preventive civil liability grounded in the precautionary principle. Our research focuses on the instruments of environmental protection in urban areas, based on two complementary aspects that appear contradictory and refer directly to the institutional dimension: - The preventive aspect: considered the main objective of environmental policy, it highlights the different legal mechanisms for environmental protection and the role of the administration in their implementation (environmental planning, tax incentives, modes of participation of all actors, etc.). - The curative-repressive aspect: considered an approach for the identification of ecological damage and the forms of reparation (spatial and temporal responsibility), given the impossibility of predicting, with rigor and precision, the appearance of ecological damage, which cannot always be avoided.
Keywords: environmental law, environmental taxes, environmental damage, eco responsibility, precautionary principle, environmental management
Procedia PDF Downloads 418
39790 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation
Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick
Abstract:
Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent a pasture-based beef cattle finishing system in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agent consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truck load) had, on average, a live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although this number was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system, while Jersey cattle accounted for less than 5% of total processed beef cattle. Further studies to include retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers
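The rearer–finisher interaction described above can be sketched as a minimal agent loop. This is not the authors' model: the batch size (60 calves) and sale weight (100 kg) follow the abstract, while the weekly calf intake and growth rates are illustrative assumptions.

import random

random.seed(1)
TRUCK_LOAD, SALE_WEIGHT = 60, 100.0     # from the abstract: 60 calves at ~100 kg

class Rearer:
    def __init__(self):
        self.calves = []                # live weights in kg

    def step(self, finisher):
        # Buy four-day-old calves (illustrative weekly intake) and grow the herd.
        self.calves += [35.0] * random.randint(5, 15)
        self.calves = [w + random.uniform(5, 9) for w in self.calves]
        ready = [w for w in self.calves if w >= SALE_WEIGHT]
        if len(ready) >= TRUCK_LOAD:    # a full truck load triggers a sale
            for w in ready[:TRUCK_LOAD]:
                self.calves.remove(w)
            finisher.weaners += TRUCK_LOAD

class Finisher:
    def __init__(self):
        self.weaners = 0

rearer, finisher = Rearer(), Finisher()
for week in range(52):
    rearer.step(finisher)
print(f"weaners transferred to the finisher in one year: {finisher.weaners}")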
Procedia PDF Downloads 105
39789 Modeling and Simulation of Fluid Catalytic Cracking Process
Authors: Sungho Kim, Dae Shik Kim, Jong Min Lee
Abstract:
The fluid catalytic cracking (FCC) process is one of the most important processes in the modern refinery industry, and this paper focuses on it. Because the FCC process is difficult to model well, owing to its nonlinearities and the various interactions between its process variables, rigorous process modeling of the whole FCC plant is demanded for control and plant-wide optimization. In this study, a process design for the FCC plant, including the riser reactor, main fractionator, and gas processing unit, was developed. The reactor model was described based on a four-lumped kinetic scheme. The main fractionator, gas processing unit, and other process units were designed to simulate real plant data using a process flowsheet simulator, Aspen PLUS. The custom reactor model was integrated with the process flowsheet simulator to develop an integrated process model.
Keywords: fluid catalytic cracking, simulation, plant data, process design
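A four-lump kinetic scheme of the kind mentioned for the riser is commonly written with lumps for gas oil, gasoline, light gas, and coke, with gas oil cracking second order and gasoline overcracking first order in many literature formulations. The sketch below integrates such a lump model with scipy; the rate constants are illustrative placeholders, not the values used by the authors.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (1/s); y = [gas oil, gasoline, light gas, coke]
k1, k2, k3, k4, k5 = 0.20, 0.04, 0.02, 0.01, 0.005

def four_lump(t, y):
    go, gl, lg, ck = y
    r_go = (k1 + k2 + k3) * go**2        # gas oil cracks (second order)
    return [-r_go,
            k1 * go**2 - (k4 + k5) * gl, # gasoline formed, then overcracked
            k2 * go**2 + k4 * gl,        # light gas
            k3 * go**2 + k5 * gl]        # coke

sol = solve_ivp(four_lump, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
go, gl, lg, ck = sol.y[:, -1]
print(f"yields at t=10 s  gasoline={gl:.3f}, gas={lg:.3f}, coke={ck:.3f}")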
Procedia PDF Downloads 532
39788 A New Suburb Renovation Concept
Authors: Anu Soikkelii, Laura Sorri
Abstract:
The Finnish national research project, User- and Business-oriented Suburb Renovation Concept (KLIKK), started in January 2012 and will end in June 2014. The perspective of energy efficiency is emphasised in the project, but it also addresses what improving the energy efficiency of suburban apartment buildings means for architecturally valuable buildings representing different periods. The project will also test the impacts of stricter energy efficiency requirements on renovation projects. The primary goal of the project is to develop a user-oriented, industrial, and economic renovation concept for suburban apartment building renovation, extension, and construction of additional storeys. The concept will make it possible to change from performance- and cost-based operation to novel service- and user-oriented, site-specifically tailored renovation methods utilizing integrated order and delivery chains. The project is collaborating with the Ministry of the Environment and participating cities to develop a new, lighter town planning model for suburban renovations and in-fill construction. To support this, the project will simultaneously develop practices for environmental impact assessment tools in renovation and suburban supplementary and in-fill construction.
Keywords: energy efficiency, prefabrication, renovation concept, suburbs, sustainability, user-orientated
Procedia PDF Downloads 338
39787 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
The aim of this paper is to present a distributed implementation of the type-2 fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents, following the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for big data image segmentation. In this work, we focus on applying the algorithm to process a big data MRI (Magnetic Resonance Imaging) image of size (m x n). The image is encapsulated in the mobile agent team leader and split into (m x n) pixels, one per AVPE. Each AVPE performs and exchanges its segmentation results and maintains asynchronous communication with the team leader until the algorithm converges. Interesting experimental results are obtained in terms of accuracy and efficiency of the proposed implementation, thanks to the several skills of the mobile agents introduced in this distributed computational model.
Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing
Procedia PDF Downloads 433
39786 Do Clinical Guidelines Affect Healthcare Quality and Populational Health: Quebec Colorectal Cancer Screening Program
Authors: Nizar Ghali, Bernard Fortin, Guy Lacroix
Abstract:
In Quebec, colonoscopy volumes have continued to rise in recent years in the absence of an effective mechanism for monitoring the appropriateness and quality of these exams. In November 2010, the Quebec government introduced the colorectal cancer screening program with the objective of controlling for volume and cost imperfections. The program is based on clinical standards and was initiated for a first group of institutions. One year later, the government added financial incentives for participating institutions. In this analysis, we assess the causal effect of the two components of this program: clinical pathways and financial incentives. In particular, we assess the effect of the reform on healthcare quality and population health in a context where medical remuneration does not depend directly on the additional funding offered by the program. We have data on admission episodes and deaths for eight years. We use a multistate model, analogous to a difference-in-differences approach, to estimate the effect of the reform on the transition probabilities between different states for each patient. Our results show that the reform reduced length of stay without deterioration in hospital mortality or readmission rates. On the other hand, the program contributed to a decrease in the hospitalization rate and a less invasive treatment approach for colorectal surgeries. This is a sign of improvement in healthcare quality and population health. We demonstrate in this analysis that physicians' behavior can be affected by both clinical standards and financial incentives, even when these are offered to facilities.
Keywords: multi-state and multi-episode transition model, healthcare quality, length of stay, transition probability, difference in difference
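The difference-in-differences logic underlying the multistate analysis can be illustrated with a toy two-by-two computation: the treatment effect is the change in the treated group's mean outcome minus the change in the control group's mean. The data below are fabricated for illustration only; they are not the Quebec admissions data.

import numpy as np

rng = np.random.default_rng(42)

# Fabricated lengths of stay (days) before/after the reform, treated vs. control.
treated_pre  = rng.normal(8.0, 2.0, 500)
treated_post = rng.normal(6.8, 2.0, 500)   # built-in effect of about -1.2 days
control_pre  = rng.normal(8.1, 2.0, 500)
control_post = rng.normal(8.0, 2.0, 500)

did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(f"difference-in-differences estimate: {did:.2f} days")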
Procedia PDF Downloads 217
39785 How to Perform Proper Indexing?
Authors: Watheq Mansour, Waleed Bin Owais, Mohammad Basheer Kotit, Khaled Khan
Abstract:
Efficient query processing is one of the utmost requisites in any business environment to satisfy consumer needs. This paper investigates the various types of indexing models, viz. primary, secondary, and multi-level. The investigation is done under the ambit of the various types of queries for which each indexing model performs with efficacy. This study also discusses the inherent advantages and disadvantages of each indexing model and how an indexing model can be chosen based on a particular environment. This paper also draws parallels between various indexing models and provides recommendations that would help a database administrator zero in on a particular indexing model suited to the needs and requirements of the production environment. In addition, to satisfy the industry and consumer needs arising from the colossal data generation of today, this study proposes two novel indexing techniques that can be used to index highly unstructured and structured big data with efficacy. The study also briefly discusses some best practices that the industry should follow in order to choose an indexing model apposite to their prerequisites and requirements.
Keywords: indexing, hashing, latent semantic indexing, B-tree
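As a concrete contrast between two of the indexing models discussed, the sketch below builds a hash index (O(1) point lookups, no range support) and a sorted primary index queried by binary search (O(log n) lookups, efficient range scans). It is a generic illustration, not the paper's proposed techniques.

from bisect import bisect_left, bisect_right

rows = [(7, "g"), (2, "b"), (5, "e"), (9, "i"), (1, "a")]  # (key, payload)

# Hash index: key -> payload; O(1) equality lookups, no ordering.
hash_index = dict(rows)
print(hash_index[5])                     # point query

# Primary (sorted) index: O(log n) lookups and efficient range scans.
sorted_rows = sorted(rows)
keys = [k for k, _ in sorted_rows]

def range_scan(lo, hi):
    # Return all payloads with lo <= key <= hi using binary search.
    i, j = bisect_left(keys, lo), bisect_right(keys, hi)
    return [v for _, v in sorted_rows[i:j]]

print(range_scan(2, 7))                  # range query -> ['b', 'e', 'g']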
Procedia PDF Downloads 164
39784 A Composite Beam Element Based on Global-Local Superposition Theory for Prediction of Delamination in Composite Laminates
Authors: Charles Mota Possatti Júnior, André Schwanz de Lima, Maurício Vicente Donadon, Alfredo Rocha de Faria
Abstract:
An interlaminar damage model is combined with a beam element formulation based on global-local superposition to assess delamination in composite laminates. The variations in the mechanical properties of the laminate generated by the presence of delamination are calculated as a function of the displacements in the interface layers. The global-local superposition of displacement fields ensures the zig-zag behaviour of stresses and displacements, and the number of degrees of freedom (DOFs) is independent of the number of layers. The displacements and stresses are calculated as functions of the DOFs commonly used in traditional beam elements. Finally, the finite element (FE) formulation is extended to handle cases of different thicknesses, and the FE model predictions are compared with results obtained from analytical solutions and commercial finite element codes.
Keywords: delamination, global-local superposition theory, single beam element, zig-zag, interlaminar damage model
Procedia PDF Downloads 124
39783 A Complex Network Approach to Structural Inequality of Educational Deprivation
Authors: Harvey Sanchez-Restrepo, Jorge Louca
Abstract:
Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model, producing raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio, and Indigenous students exhibited the highest prevalence, with 0.312, 0.278, and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES, given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also identified, through the highest PageRank and the greatest degree of connectivity for the first decile, the factors that explain deprivation: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics
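The two-parameter logistic (2PL) model used to calibrate learning outcomes gives the probability of a correct response as P(theta) = 1 / (1 + exp(-a(theta - b))), with discrimination a and difficulty b. The sketch below evaluates this curve and a simple expected raw score; the item parameters are illustrative, not those estimated from the Ecuadorian assessment.

import numpy as np

def p_correct(theta, a, b):
    # 2PL item response function: probability of a correct answer at ability theta.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative item bank: (discrimination a, difficulty b) per item.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]

theta = 0.3                                 # one student's ability
probs = [p_correct(theta, a, b) for a, b in items]
expected_score = sum(probs)
print([round(p, 3) for p in probs], f"expected raw score = {expected_score:.2f}")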
Procedia PDF Downloads 128
39782 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In this paper, a (hierarchical) approach to the analysis of thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and the interphase surrounding it by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which makes it possible to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated by considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticle (for a fixed volume fraction of nanoparticles) for the different interphase models is compared to and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
Procedia PDF Downloads 188
39781 Evaluating Key Attributes of Effective Digital Games in Tertiary Education
Authors: Roopali Kulkarni, Yuliya Khrypko
Abstract:
A major problem in educational digital game design is that game developers are often focused on maintaining the fun and playability of an educational game, whereas educators are more concerned with the learning aspect of the game than with its entertaining characteristics. There is a clear need to understand what key aspects of digital learning games make them an effective learning medium in tertiary education. Through a systematic literature review and content analysis, this paper identifies, evaluates, and summarizes twenty-three key attributes of digital games used in tertiary education and presents a summary digital game-based learning (DGBL) model for designing and evaluating an educational digital game of any genre that promotes effective learning in tertiary education. The proposed solution overcomes the limitations of previously designed models for digital game evaluation, such as the small number of game attributes considered or applicability only to a specific genre of digital games. The proposed DGBL model can be used to assist game designers and educators in creating effective and engaging educational digital games for the tertiary education curriculum.
Keywords: DGBL model, digital games, educational games, game-based learning, tertiary education
Procedia PDF Downloads 291
39780 Aerodynamic Analysis by Computational Fluid Dynamics in Building: Case Study
Authors: Javier Navarro Garcia, Narciso Vazquez Carretero
Abstract:
Eurocode 1, Part 1-4 (wind actions) includes, in its Article 1.5, the possibility of using numerical calculation methods to obtain information on the loads acting on a building. On the other hand, analysis using computational fluid dynamics (CFD) is already in widespread use in aerospace, aeronautical, and industrial applications. Applying CFD-based techniques to buildings to study their aerodynamic behavior opens a whole alternative field of possibilities for civil engineering and architecture: optimization of the results with respect to those obtained by applying the regulations, the possibility of obtaining information on pressures and velocities at any point of the model at each moment, the analysis of turbulence, and the possibility of modeling any geometry or configuration. The present work compares the results obtained for the aerodynamic behavior of a building from a mathematical model based on CFD analysis with the results obtained by applying Eurocode 1, Part 1-4, wind actions. It is verified that the results obtained by CFD techniques yield an optimization of the wind action acting on the building with respect to the wind action obtained by applying Eurocode 1, Part 1-4. To carry out this verification, a 45 m high truncated-pyramid building on a square base was considered. The CFD mathematical model, based on finite volumes, was computed using the commercial application FLUENT with a scale-resolving simulation (SRS) of the large eddy simulation (LES) type as the turbulence model, for an atmospheric boundary layer wind with a turbulent component in the flow direction.
Keywords: aerodynamic, CFD, computational fluid dynamics, computational mechanics
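For the regulation side of such a comparison, the Eurocode 1-4 peak velocity pressure at a height z can be computed directly. The sketch below implements the standard expressions qp(z) = (1 + 7*Iv(z)) * 0.5 * rho * vm(z)^2 with a log-law roughness profile; the basic wind velocity and terrain roughness are illustrative inputs, not those of the case study.

import math

def peak_velocity_pressure(z, vb, z0=0.05, c0=1.0, rho=1.25, k_i=1.0):
    # EN 1991-1-4 expressions: roughness factor, mean wind, turbulence, qp.
    kr = 0.19 * (z0 / 0.05) ** 0.07        # terrain factor (z0,II = 0.05 m)
    cr = kr * math.log(z / z0)             # roughness factor
    vm = cr * c0 * vb                      # mean wind velocity (m/s)
    iv = k_i / (c0 * math.log(z / z0))     # turbulence intensity
    return (1.0 + 7.0 * iv) * 0.5 * rho * vm**2   # peak pressure (Pa)

# Illustrative: 45 m tall building, basic wind velocity 26 m/s, terrain cat. II.
qp = peak_velocity_pressure(z=45.0, vb=26.0)
print(f"qp(45 m) = {qp:.0f} Pa = {qp/1000:.2f} kN/m^2")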
Procedia PDF Downloads 141
39779 Numerical Solutions of Fractional Order Epidemic Model
Authors: Sadia Arshad, Ayesha Sohail, Sana Javed, Khadija Maqbool, Salma Kanwal
Abstract:
The dynamical study of disease carriers plays an essential role in the evolution and global transmission of infectious diseases and is discussed in this study. To make this approach novel, we consider a fractional-order model, which generalizes the integer-order derivative to an arbitrary order. Since the integration involved is non-local, this property of the fractional operator is very useful for studying epidemic models of infectious diseases. An extended numerical method (ODE solver) is implemented on the model equations, and we present simulations of the model for different values of the fractional order to study the effect of carriers on transmission dynamics. The global dynamics of the fractional model are established by using the reproduction number.
Keywords: fractional differential equation, numerical simulations, epidemic model, transmission dynamics
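A common explicit scheme for fractional-order epidemic models discretizes the Caputo derivative with Grunwald-Letnikov binomial weights w0 = 1, wj = wj-1 * (1 - (alpha + 1)/j). The sketch below applies it to a plain SIR system as a stand-in (the authors' model includes carriers); the order alpha, the rates, and the step size are illustrative assumptions. For alpha = 1 the scheme reduces to the forward Euler method.

import numpy as np

alpha, beta, gamma = 0.9, 0.4, 0.1     # fractional order and SIR rates (assumed)
h, n_steps = 0.1, 400

def f(y):
    s, i, r = y
    return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])

# Grunwald-Letnikov weights: w0 = 1, wj = wj-1 * (1 - (alpha + 1) / j)
w = np.ones(n_steps + 1)
for j in range(1, n_steps + 1):
    w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

y0 = np.array([0.99, 0.01, 0.0])       # initial S, I, R fractions
z = np.zeros((n_steps + 1, 3))         # z = y - y0 handles the Caputo initial value
for n in range(1, n_steps + 1):
    memory = sum(w[j] * z[n - j] for j in range(1, n + 1))  # non-local term
    z[n] = h**alpha * f(z[n - 1] + y0) - memory

y = z + y0
print(f"final state S={y[-1][0]:.3f} I={y[-1][1]:.3f} R={y[-1][2]:.3f}")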
Procedia PDF Downloads 608
39778 Digital Curriculum Preservation Planning, Actions, and Challenges
Authors: Misook Ahn
Abstract:
This study examined the Digital Curriculum Repository (DCR) project initiated at the Defense Language Institute Foreign Language Center (DLIFLC). The purpose of the DCR is to build a centralized curriculum infrastructure, preserve all curriculum materials, and provide academic services to users (faculty, students, or other agencies). The DCR collection includes core language curriculum materials developed by each language school—foreign language textbooks, language survival kits, and audio files currently in use, or not in use, at the schools. All core curriculum materials with audio and video files have been coded, collected, and preserved in the DCR. The DCR website was designed with MS SharePoint for easy access by DLIFLC faculty and students. All metadata for the collected curriculum materials have been entered by language, code, year, book type, level, user, version, and current status (in use/not in use). The study documents digital curriculum preservation planning, actions, and challenges, including collecting, coding, collaborating, designing the DCR SharePoint site, and policymaking. DCR survey data were also collected and analyzed for this research. Based on the findings, the study concludes that a mandatory policy for the DCR system and collaboration with school leadership are critical elements of a successful repository system. The sample collected items, metadata, and DCR SharePoint site are presented in the evaluation section.
Keywords: MS SharePoint, digital preservation, repository, policy
Procedia PDF Downloads 165
39777 Real-Time Network Anomaly Detection Systems Based on Machine-Learning Algorithms
Authors: Zahra Ramezanpanah, Joachim Carvallo, Aurelien Rodriguez
Abstract:
This paper aims to detect anomalies in streaming data using machine learning algorithms. To this end, we designed two separate pipelines and evaluated the effectiveness of each. The first pipeline, based on supervised machine learning methods, consists of two phases. In the first phase, we trained several supervised models using the UNSW-NB15 dataset, measured the efficiency of each using different performance metrics, and selected the best model for the second phase. At the beginning of the second phase, we sniffed a local area network using Argus Server. Several types of attacks were simulated, and the sniffed data was sent to the running algorithm at short intervals. Using the trained model, this algorithm displays the result for each packet of received data in real time. The second pipeline presented in this paper is based on unsupervised algorithms, in which a Temporal Graph Network (TGN) is used to monitor a local network. The TGN is trained to predict the probability of future states of the network based on its past behavior. Our contribution in this section is introducing an indicator to identify anomalies from these predicted probabilities.
Keywords: temporal graph network, anomaly detection, cyber security, IDS
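The first (supervised) phase can be sketched with scikit-learn. Because the UNSW-NB15 dataset is not bundled with any library, the snippet substitutes a synthetic stand-in with the same train/evaluate/select shape; the feature dimensions and classifier choices are assumptions, not the authors' exact setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for UNSW-NB15 flow features (benign=0 / attack=1).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {"rf": RandomForestClassifier(random_state=0),
              "logreg": LogisticRegression(max_iter=1000)}

# Phase 1: train several supervised models and keep the best by F1.
scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = f1_score(y_te, model.predict(X_te))
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)

# Phase 2 (conceptual): score each sniffed packet/flow as it arrives.
for flow in X_te[:5]:
    print("anomaly" if candidates[best].predict(flow.reshape(1, -1))[0] else "normal")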
Procedia PDF Downloads 106
39776 Play in College: Shifting Perspectives and Creative Problem-Based Play
Authors: Agni Stylianou-Georgiou, Eliza Pitri
Abstract:
This study is a design narrative that discusses the researchers' new learning based on changes made to pedagogies and learning opportunities in the context of an undergraduate Cognitive Psychology course and an undergraduate Art History course. The purpose of this study was to investigate how to encourage creative problem-based play in tertiary education by engaging instructors and student-teachers in designing educational games. Course instructors modified content to encourage flexible thinking during game-design problem solving. Qualitative analyses of data sources indicated that the Thinking Birds' questions could encourage flexible thinking as instructors engaged in creative problem-based play. However, student-teachers demonstrated weakness in adopting flexible thinking during game-design problem solving. Further studies of student-teachers' shifting perspectives during different instructional design tasks would provide insights for developing the Thinking Birds' questions as tools for creative problem solving.
Keywords: creative problem-based play, educational games, flexible thinking, tertiary education
Procedia PDF Downloads 296
39775 Finite Element Modeling of Heat and Moisture Transfer in Porous Material
Authors: V. D. Thi, M. Li, M. Khelifa, M. El Ganaoui, Y. Rogaume
Abstract:
This paper presents a two-dimensional model to study heat and moisture transfer through porous building materials. Dynamic and static coupled models of heat and moisture transfer in porous material at low temperature are presented, and the coupled models, together with variable initial and boundary conditions, have been considered both analytically and using the finite element method. The resulting coupled model is converted into two nonlinear partial differential equations, which are then solved numerically by an implicit iterative scheme. The numerical results for temperature and moisture potential changes are compared with the experimental measurements available in the literature. The predicted results demonstrate the validity of the theoretical model and the effectiveness of the developed numerical algorithms. The model is expected to provide useful information for porous building material design based on heat and moisture transfer modeling.
Keywords: finite element method, heat transfer, moisture transfer, porous materials, wood
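The implicit iterative treatment mentioned above can be illustrated in one dimension for the heat part alone: backward Euler time stepping of dT/dt = alpha * d2T/dx2 leads to a tridiagonal linear system at each step. The material parameters and grid below are illustrative, and the moisture coupling of the full model is omitted.

import numpy as np

alpha, L, nx, dt, nt = 1e-7, 0.05, 21, 60.0, 100   # diffusivity (m^2/s), 5 cm slab
dx = L / (nx - 1)
r = alpha * dt / dx**2

# Backward Euler: (I + r*[-1, 2, -1]) T^{n+1} = T^n; boundary rows are identity,
# so the Dirichlet end temperatures stay fixed through every solve.
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
A[0, 0] = A[-1, -1] = 1.0

T = np.full(nx, 20.0)      # initial temperature (deg C)
T[0], T[-1] = 5.0, 20.0    # cold outer face, warm inner face
for _ in range(nt):
    T = np.linalg.solve(A, T)          # unconditionally stable implicit step

print(np.round(T[::5], 2))             # temperature profile after nt steps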
Procedia PDF Downloads 404
39774 Credit Risk Prediction Based on Bayesian Estimation of Logistic Regression Model with Random Effects
Authors: Sami Mestiri, Abdeljelil Farhat
Abstract:
The aim of this paper is to predict the credit risk of banks in Tunisia over the period 2000-2005. For this purpose, two methods for estimating the logistic regression model with random effects are applied: the Penalized Quasi-Likelihood (PQL) method and the Gibbs Sampler algorithm. Using information on a sample of 528 Tunisian firms and 26 financial ratios, we show that the Bayesian approach improves the quality of model predictions in terms of correct classification as well as in the ROC curve results.
Keywords: forecasting, credit risk, penalized quasi-likelihood, Gibbs sampler, logistic regression with random effects, ROC curve
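The classification-quality comparison via the ROC curve can be sketched as follows. A full random-effects logistic model with PQL or Gibbs sampling needs specialized tooling, so the snippet fits a plain (fixed-effects) logistic regression on synthetic firm ratios as a simplified stand-in; the sample shape mirrors the abstract (528 firms, 26 ratios), but the data are fabricated.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 528 firms, 26 financial ratios, default indicator.
X, y = make_classification(n_samples=528, n_features=26, n_informative=8,
                           weights=[0.85], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]          # predicted default probability
print(f"ROC AUC: {roc_auc_score(y_te, proba):.3f}")
print(f"good classification rate: {clf.score(X_te, y_te):.3f}")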
Procedia PDF Downloads 544
39773 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Due to the requirement that the problem be solved within a relevant computational time, WTA has suffered from low solution efficiency; as a result, SWTA and DWTA problems have been solved only in limited battlefield situations. In this paper, the general situation under continuous time is considered through the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: a decomposed opt-opt algorithm, a decomposed opt-greedy algorithm, and a greedy algorithm. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm, but need only a very short computation time. Hence, this paper proposes an improved method that applies decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
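Of the three heuristics, the greedy one is the simplest to sketch: repeatedly give the next available weapon to the target with the largest expected marginal reduction in surviving value. The target values and kill probabilities below are invented for illustration, and the time dimension of TWTA is omitted.

# Illustrative data: target values and per-weapon kill probabilities.
values = [10.0, 6.0, 8.0]                 # value of each target
p_kill = [[0.6, 0.4, 0.5],                # p_kill[w][t] for weapon w on target t
          [0.5, 0.7, 0.4],
          [0.3, 0.5, 0.8]]

survival = [1.0] * len(values)            # probability each target survives so far
assignment = []
for w, row in enumerate(p_kill):
    # Expected marginal value destroyed if weapon w engages target t.
    gains = [values[t] * survival[t] * row[t] for t in range(len(values))]
    t_best = max(range(len(values)), key=gains.__getitem__)
    assignment.append((w, t_best))
    survival[t_best] *= 1.0 - row[t_best]

expected_surviving = sum(v * s for v, s in zip(values, survival))
print(assignment, f"expected surviving value: {expected_surviving:.2f}")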
Procedia PDF Downloads 340
39772 An Educational Program Based on the Health Belief Model to Prevent Non-alcoholic Fatty Liver Disease among Iranian Women
Authors: Arezoo Fallahi
Abstract:
Background and purpose: Non-alcoholic fatty liver is one of the most common liver disorders and, as the most important cause of death from liver disease, has unpleasant consequences and complications. The aim of this study was to investigate the effect of an educational intervention based on the health belief model to prevent non-alcoholic fatty liver among women. Materials and Methods: This experimental study was performed among 110 women referred to comprehensive health service centers in Malayer City, in the west of Iran, in 2023. Using a convenience (available) sampling method, the 110 participants were divided into experimental and control groups. The data collection tool included demographic characteristics and a questionnaire based on the health belief model. In the experimental group, three one-hour training sessions were conducted using pamphlets, lectures, and group discussions. Data were analyzed using SPSS software version 21, with correlation tests, paired t-tests, and independent t-tests. Results: The mean age of the participants was 38.07 ± 6.28 years, and most were middle-aged, married, housewives with academic education, middle-income, and overweight. After the educational intervention, the mean scores of the constructs, including perceived sensitivity (p = 0.01), perceived severity (p = 0.01), perceived benefits (p = 0.01), internal (p = 0.01) and external (p = 0.01) cues to action, and perceived self-efficacy (p = 0.01), were significantly higher in the experimental group than in the control group. The perceived barriers score in the experimental group decreased after the training (15.2 ± 3.9 vs. 11.2 ± 3.3, p < 0.01). Conclusion: The findings of the study showed that the design and implementation of educational programs based on the constructs of the health belief model can be effective in preventing women from developing higher levels of non-alcoholic fatty liver.
Keywords: health, education, belief, behaviour
Procedia PDF Downloads 57
39771 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, whose main properties are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph makes it possible to locate information quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure; this will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
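The three properties listed (power-law degrees, low clustering, small average distance) are exactly what a preferential-attachment graph exhibits, which is why such models are natural candidates here. As a quick check, the sketch below generates a Barabási-Albert graph with networkx and measures those properties; the graph size is an arbitrary choice for illustration.

import networkx as nx

# Preferential attachment: each new node links to m existing nodes with
# probability proportional to their degree, yielding a power-law degree tail.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)

print(f"average clustering coefficient: {nx.average_clustering(G):.4f}")
print(f"average shortest path length:  {nx.average_shortest_path_length(G):.2f}")
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("top degrees (heavy tail):", degrees[:5])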
Procedia PDF Downloads 272
39770 Using Structural Equation Modeling to Analyze the Impact of Remote Work on Job Satisfaction
Authors: Florian Pfeffel, Valentin Nickolai, Christian Louis Kühner
Abstract:
Digitalization has disrupted the traditional workplace environment by allowing many employees to work from anywhere at any time. This trend of working from home was further accelerated by the COVID-19 crisis, which forced companies to rethink their workplace models. While in many companies this shift happened out of pure necessity, many employees were left more satisfied with their job due to the opportunity to work from home. This study focuses on employees' job satisfaction in the service sector in dependence on the different work models, which are defined as a "work from home" model, the traditional "work in office" model, and a hybrid model. Using structural equation modeling (SEM), these three work models have been analyzed based on 13 factors influencing job satisfaction, which are grouped as "classic influencing factors", "influencing factors changed by remote working", and "new remote working influencing factors". Based on these influencing factors, a survey was conducted with n = 684 employees in the service sector. Cronbach's alpha of the individual constructs was shown to be suitable. Furthermore, the construct validity of the constructs was confirmed by face validity, content validity, convergent validity (AVE > 0.5; CR > 0.7), and discriminant validity. Additionally, confirmatory factor analysis (CFA) confirmed the model fit for the investigated sample (CMIN/DF: 2.567; CFI: 0.927; RMSEA: 0.048). The SEM analysis showed that the most significant influencing factor on job satisfaction is "identification with the work" with β = 0.540, followed by "appreciation" (β = 0.151), "compensation" (β = 0.124), "work-life balance" (β = 0.116), and "communication and exchange of information" (β = 0.105). While the significance of each factor can vary depending on the work model, the SEM analysis shows that identification with the work is the most significant factor in all three work models and, in the case of the traditional office work model, the only significant influencing factor. The study shows that employees who work entirely remotely or in a hybrid model are significantly more satisfied with their jobs, with a job satisfaction score of 5.0 on a scale from 1 (very dissatisfied) to 7 (very satisfied), than employees who do not have the option to work from home, who scored 4.6. This is a result of the lower identification with the work in the model without any remote working. Furthermore, the responses indicate that it is important to consider the individual preferences of each employee when it comes to the work model in order to achieve higher overall job satisfaction. Thus, it can be argued that companies can profit from more motivation and higher productivity by considering individual work model preferences, thereby increasing identification with the work.
Keywords: home-office, identification with work, job satisfaction, new work, remote work, structural equation modeling
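The internal-consistency check reported above ("Cronbach's alpha ... suitable") can be reproduced with a few lines: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) over the k items of a construct. The response matrix below is fabricated for illustration; it mimics the study's sample size but not its actual items.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of scale responses.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(684, 1))                         # shared construct
responses = latent + rng.normal(scale=0.8, size=(684, 4))  # 4 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")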
Procedia PDF Downloads 86
39769 Children’s Concept of Forgiveness
Authors: Lida Landicho, Analiza R. Adarlo, Janine Mae V. Corpuz, Joan C. Villanueva
Abstract:
Testing the idea that the process of forgiveness is intrinsically different across diverse relationships, this study examined whether forgiveness can already be facilitated in children ages 4-6. Two different intervention sessions involving 40 children (half heard stories about unfair blame and half heard stories about a double standard; a between-subjects variable) were completed. The investigators performed experimental analyses to examine the role of forgiveness in social and familial contexts. Results indicated that forgiveness can already be facilitated in children. Children saw double-standard scenarios as more unfair than the corresponding blame scenarios (Scenario 2 (double standard), M = 7.54, vs. Scenario 1 (unfair blame), M = 4.50; Scenario 4 (double standard), M = 7., vs. Scenario 3 (getting blamed for something the friend did), M = 6.80; p < .05). The findings confirmed that children were generally willing to grant forgiveness to a mother even though she was unfair, but less so to a friend. Correlations between sex, age, and forgiveness were analyzed. A significant relationship was found between the scenarios presented and caring task scores (rxy = -.314). Children's tendency to forgive was related to dispositional and situational factors.
Keywords: forgiveness, situational and dispositional factors, familial context, social context
Procedia PDF Downloads 429
39768 A Methodological Approach to Development of Mental Script for Mental Practice of Micro Suturing
Authors: Vaikunthan Rajaratnam
Abstract:
Introduction: Motor imagery (MI) and mental practice (MP) can be an alternative means of acquiring mastery of surgical skills. One component of this technique is the use of a mental script. The aim of this study was to design and develop a mental script for basic micro-suturing training for skill acquisition using a low-fidelity rubber glove model, and to describe the detailed methodology of this process. Methods: This study was based on a design and development research framework. The mental script was developed with 5 expert surgeons performing a cognitive walkthrough of the repair of a vertical opening in a rubber glove model using 8/0 nylon, followed by a hierarchical task analysis. A draft script was created, and its face and content validity were assessed with a checking-back process. The final script was validated by recruiting 28 participants and assessed using the Mental Imagery Questionnaire (MIQ). Results: The creation of the mental script is detailed in the full text. After assessment by the expert panel, the mental script had good face and content validity. The average overall MIQ score was 5.2 ± 1.1, demonstrating the validity of generating mental imagery from the mental script developed in this study for micro suturing in the rubber glove model. Conclusion: The methodological approach described in this study is based on an instructional design framework for teaching surgical skills. This MP model is inexpensive and easily accessible, addressing the challenge of reduced opportunities to practice surgical skills. However, while motor skills are important, other non-technical expertise required by the surgeon is not addressed with this model. Thus, this model should augment surgical training, not replace it.
Keywords: mental script, motor imagery, cognitive walkthrough, verbal protocol analysis, hierarchical task analysis
Procedia PDF Downloads 107
39767 Human Resources and Business Result: An Empirical Approach Based on RBV Theory
Authors: Xhevrie Mamaqi
Abstract:
Organizational learning capacity is a process referring to the sum total of individual and collective learning through training programs, experience, and experimentation, among others. Today, ongoing in-business training is one of the most important strategies for human capital development, and it is crucial for sustaining and improving workers' knowledge and skills. Many organizations, firms, and businesses are adopting a strategy of continuous learning, encouraging employees to learn new skills continually, to be innovative, and to try new processes and ways of working in order to achieve a competitive advantage and superior business results. This paper uses the Resource-Based View and Capacities (RBV) approach to construct a hypothetical model of the relationships between training and business results. The model is tested on cross-sectional data from a sample of 266 businesses in the Spanish service sector. A structural equation model (SEM) is used to estimate the relationship between ongoing training, represented by two latent dimensions denominated human and social capital resources, and economic business results. The estimated coefficients show the effectiveness of some training aspects in explaining the variation in business results.
Keywords: business results, human and social capital resources, training, RBV theory, SEM
Procedia PDF Downloads 304
39766 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School
Authors: Shofiayuningtyas Luftiani
Abstract:
Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into instructional programs. The program features inquiry-based simulations, in which students explore using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and a literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment. As part of the diagnostic assessment, teachers review the student exploration sheet, analyze students' difficulties in particular, and consider the findings in planning the future learning process. This assessment is important because the teacher needs data about students' persistent weaknesses. Additionally, the program also helps build students' understanding through its interactive simulation. Currently, the assessment over-emphasizes the students' answers in the worksheet based on the provided answer keys, even though students also demonstrate skills in translating the question, running the simulation, and answering the question. The assessment should instead involve multiple perspectives and sources of students' performance, since the teacher should adjust instructional programs to the complexity of students' learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors. Concerning the selected setting for this diagnostic assessment, which combines cognitive diagnosis, skills analysis, and error analysis, the teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher will potentially fail to document and follow up on data about students at risk of failure. Furthermore, teachers who employ Gizmos as a diagnostic assessment might encounter some obstacles. Based on the conditions of assessment in the selected setting, these obstacles involve time constraints, reluctance toward a higher teaching burden, and students' behavior. Consequently, the teacher who chooses Gizmos with these approaches has to plan, implement, and evaluate the assessment. The main point of this assessment is not the result of the students' worksheet. Rather, the diagnostic assessment is a two-stage process: prompting, and then effectively following up on, both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment refers to the effort to improve the mathematical learning process.
Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis
Procedia PDF Downloads 184
39765 1-g Shake Table Tests to Study the Impact of PGA on Foundation Settlement in Liquefiable Soil
Authors: Md. Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed
Abstract:
Liquefaction-induced ground settlement has caused severe damage to structures in past decades, and the amount of building settlement caused by liquefaction is directly proportional to the intensity of the ground shaking. To reduce this soil liquefaction effect, it is essential to examine the influence of peak ground acceleration (PGA). Unfortunately, limited studies have been carried out on this issue. In this study, a series of moderate-scale 1g shake table experiments were conducted at the University of Nevada, Reno, to evaluate the influence of PGA, at the same duration, in liquefiable soil layers. The model was prepared based on a large-scale shake table test with a scaling factor of N = 5, conducted at the University of California, San Diego. The model ground has three soil layers, with relative densities of 50% for the crust, 30% for the liquefiable layer, and 90% for the dense layer, respectively. In addition, a shallow foundation sits on an unsaturated crust layer. After preparing the model, input motions with various peak ground accelerations (0.16g, 0.25g, and 0.37g) and the same duration (10 s) were applied. Based on the experimental results, when the PGA increased from 0.16g to 0.37g, the foundation settlement increased from 20 mm to 100 mm. In addition, the foundation settlement expected from the scaling factor was 25 mm, while the actual settlement for a PGA of 0.25g over 10 seconds was 50 mm.
Keywords: foundation settlement, liquefaction, peak ground acceleration, shake table test
Procedia PDF Downloads 82
39764 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an APM during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a Level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft; according to the Federal Aviation Administration, Level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error on the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
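The core of the adaptive lookup table technique can be sketched as follows: the fuel-flow table is defined over a grid of flight conditions, and after each in-flight sample the surrounding cell value is nudged toward the measurement. The grid, measurements, and learning rate below are illustrative assumptions; this is not the authors' Citation X implementation.

import numpy as np

# Fuel-flow table indexed by (altitude, Mach); seeded from an FCOM-like model.
alts = np.array([30000.0, 35000.0, 40000.0])          # ft
machs = np.array([0.70, 0.75, 0.80])
table = np.array([[1300.0, 1380.0, 1470.0],           # kg/h, illustrative values
                  [1180.0, 1250.0, 1330.0],
                  [1080.0, 1140.0, 1210.0]])

def update(alt, mach, measured, lr=0.2):
    # Nudge the nearest cell toward the in-flight measurement.
    i = int(np.abs(alts - alt).argmin())
    j = int(np.abs(machs - mach).argmin())
    error = measured - table[i, j]
    table[i, j] += lr * error                          # table converges over flights
    return error

# Simulated cruise samples from a slightly degraded aircraft (+3% fuel flow).
for _ in range(30):
    err = update(35000.0, 0.75, 1250.0 * 1.03)
print(f"remaining prediction error: {err:.1f} kg/h")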
Procedia PDF Downloads 267
39763 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications
Authors: Farhad Salek, Shahaboddin Resalati
Abstract:
The production of electric vehicles (EVs) featuring lithium-ion battery technology has escalated substantially over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of EV batteries after approximately eight years underscores the critical need to redirect them towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution to such concerns involves extending the operational lifespan of EV batteries by utilizing them in stationary energy storage systems in secondary applications. Such adoption, however, requires addressing the safety concerns associated with the batteries' knee points and thermal runaway. This paper develops an accurate mathematical model representative of second-life battery packs, from cell to pack scale, using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history in order to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle-aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
Keywords: second-life battery, electric vehicles, degradation, neural network
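A first-order equivalent circuit model of the kind described — an open-circuit voltage source in series with an ohmic resistance R0 and one RC polarization branch — can be sketched as below: V = OCV(SOC) - I*R0 - V1, with dV1/dt = I/C1 - V1/(R1*C1). All parameter values and the OCV curve are illustrative placeholders; a degradation model would grow R0 and shrink capacity as a function of aging history.

import numpy as np

# Illustrative first-order Thevenin ECM parameters for one cell (assumed).
R0, R1, C1 = 0.015, 0.020, 2000.0       # ohm, ohm, farad
capacity_ah, dt = 50.0, 1.0             # cell capacity (Ah) and time step (s)

def ocv(soc):
    # Crude placeholder OCV-SOC curve (V); real packs use measured tables.
    return 3.2 + 1.0 * soc

soc, v1 = 0.9, 0.0
for t in range(1800):                   # 30 min constant-current discharge
    current = 25.0                      # A, positive = discharge (assumed sign)
    soc -= current * dt / (capacity_ah * 3600.0)
    # RC branch: dV1/dt = I/C1 - V1/(R1*C1), integrated with forward Euler.
    v1 += dt * (current / C1 - v1 / (R1 * C1))
    v_terminal = ocv(soc) - current * R0 - v1

print(f"SOC={soc:.3f}, terminal voltage={v_terminal:.3f} V")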
Procedia PDF Downloads 71