Search results for: daily probability model
17369 A Curricular Approach to Organizational Mentoring Programs: The Integrated Mentoring Curriculum Model
Authors: Christopher Webb
Abstract:
This work presents a new model of mentoring in an organizational environment, with important implications for both practice and research. The model frames the organizational environment as an organizational curriculum, comprising the elements that affect learning within the organization: the organizational structure and culture, roles within the organization, and the accessibility of knowledge. The program curriculum comprises the elements of the mentoring program itself, including materials, training, and scheduled events for program participants. The term dyadic curriculum is coined in this work; it describes the participation, behavior, and identities of the pairs engaged in mentorships, including the identity work of the participants and their views of each other. Much of this curriculum is unprescribed and is unique within each dyad, and it describes how participants mediate the elements of the organizational and program curricula. These three curricula interact and affect each other in predictable ways. A detailed example of a mentoring program framed in this model is provided.
Keywords: curriculum, mentoring, organizational learning and development, social learning
Procedia PDF Downloads 202
17368 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures
Authors: A. T. Al-Isawi, P. E. F. Collins
Abstract:
The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but a number of areas still require further work. These relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the properties of the excitation load. The selection of earthquake input data for nonlinear analysis, and the method of analysis itself, remain challenging issues. Realistic artificial ground-motion input data must therefore be developed to certify that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system, and that the characteristics of these parameters are coherent with those of the target parameters. Conversely, ignoring significant attributes such as frequency content, soil site properties and earthquake parameters may lead to misleading results, owing to misinterpretation of the required input data and incorrect synthesis of the analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, the change in the structure's initial geometry due to residual deformation resulting from plastic behaviour, and mechanical damage, the degradation of the mechanical properties of structural elements taken into the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage and is then exposed to fire with its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario is more complicated still if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF, as well as the effect of SSI on structural behaviour, in order to simplify the analysis procedure. The design of structures to existing codes that neglect PEF and SSI can therefore carry a significant risk of structural failure. To examine the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software, with the effects of SSI included. Both geometrical and mechanical damage are carried over from the earthquake analysis step. For comparison, an identical model is also created without soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is found when SSI is included in the scenario. The results are validated against the literature.
Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction
Procedia PDF Downloads 122
17367 A Stochastic Volatility Model for Optimal Market-Making
Authors: Zubier Arfan, Paul Johnson
Abstract:
The electronification of financial markets and the rise of algorithmic trading have sparked considerable interest from the mathematical community, in the market-making problem in particular. The research presented in this short paper solves the classic stochastic control problem to derive a market-maker's strategy, and shows how to calibrate and simulate the strategy with real limit order book data for back-testing. The ambiguity of limit-order priority in back-testing is handled by considering optimistic and pessimistic priority scenarios. Although the resulting model outperforms a naive strategy, it assumes constant volatility and is therefore not best suited to the LOB data. The Heston model is introduced to describe the price and variance process of the asset. The trader's constant-absolute-risk-aversion utility function is then optimised by numerically solving a three-dimensional Hamilton-Jacobi-Bellman partial differential equation for the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is less sensitive to calibration error than the constant volatility model.
Keywords: market-making, market microstructure, stochastic volatility, quantitative trading
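The Heston price/variance dynamics referred to above can be sketched numerically. The following is a minimal illustration, not the paper's calibrated model: an Euler full-truncation discretisation of the Heston process, with all parameter values chosen as assumptions for demonstration only.

```python
import numpy as np

def simulate_heston(s0, v0, kappa, theta, xi, rho, mu, dt, n_steps, seed=0):
    """Euler (full-truncation) discretisation of the Heston price/variance process."""
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    s[0], v[0] = s0, v0
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()  # correlated shocks
        v_pos = max(v[t], 0.0)  # full truncation keeps the variance usable
        v[t + 1] = v[t] + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        s[t + 1] = s[t] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    return s, v

# One year of daily steps with illustrative parameters
s, v = simulate_heston(s0=100.0, v0=0.04, kappa=2.0, theta=0.04, xi=0.3,
                       rho=-0.7, mu=0.0, dt=1 / 252, n_steps=252)
```

In a back-test such paths would feed the HJB-derived quoting rule; here the point is only the mean-reverting variance and the leverage-type correlation between the two shocks.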
Procedia PDF Downloads 150
17366 Using Lean Six-Sigma in the Improvement of Service Quality at Aviation Industry: Case Study at the Departure Area in KKIA
Authors: Tareq Al Muhareb, Jasper Graham-Jones
Abstract:
Service quality is a significant element in the aviation industry, especially at international airports. In this paper, the researchers build a model based on Lean Six Sigma methodologies and apply it in the departure area at KKIA (King Khalid International Airport) in order to assess it. The model is characterised by features intended to overcome the cultural differences in the aviation industry, which are considered the most critical circumstance in this field. Application of the model follows the DMAIC procedure, systematised within lean thinking. As a managerial procedure, this Lean Six Sigma model focuses chiefly on a change-management culture that requires a high level of planning, organising, modifying, and controlling in order to build on strengths and address weaknesses.
Keywords: Lean Six Sigma, service quality, aviation industry, KKIA (King Khalid International Airport), SERVQUAL
Procedia PDF Downloads 430
17365 Optimization of Syngas Quality for Fischer-Tropsch Synthesis
Authors: Ali Rabah
Abstract:
This research received no grant or financial support from any public, commercial, or non-governmental agency. The author conducted this work as part of his normal research activities as a professor of Chemical Engineering at the University of Khartoum, Sudan. While fossil oil reserves have been receding, the demand for diesel and gasoline has been growing. In recent years, syngas of biomass origin has emerged as a viable feedstock for Fischer-Tropsch (FT) synthesis, a process for manufacturing synthetic gasoline and diesel. This paper reports the optimization of syngas quality to match FT synthesis requirements. The optimization model maximizes thermal efficiency under the constraint H2/CO ≥ 2.0 and operating conditions of equivalence ratio (0 ≤ ER ≤ 1.0), steam-to-biomass ratio (0 ≤ SB ≤ 5), and gasification temperature (500 °C ≤ Tg ≤ 1300 °C). The optimization model is executed using the optimization section of the Model Analysis Tools of the Aspen Plus simulator and is tested on eleven (11) types of MSW. The optimum operating conditions under which the objective function and the constraint are satisfied are ER = 0, SB = 0.66-1.22, and Tg = 679-763 °C. Under these optimum conditions, the syngas quality is H2 = 52.38-58.67 mole percent, LHV = 12.55-17.15 MJ/kg, N2 = 0.38-2.33 mole percent, and H2/CO ≥ 2.15. The generalized optimization model reported could be extended to any other type of biomass and coal.
Keywords: syngas, MSW, optimization, Fischer-Tropsch
Procedia PDF Downloads 80
17364 Extension of a Competitive Location Model Considering a Given Number of Servers and Proposing a Heuristic for Solving
Authors: Mehdi Seifbarghy, Zahra Nasiri
Abstract:
The competitive location problem deals with locating new facilities to provide a service (or goods) to the customers of a given geographical area where other facilities (competitors) offering the same service are already present. The new facilities must compete with the existing facilities for market share. This paper proposes a new model that maximizes market share when customers choose among facilities based on traveling time, waiting time, and attractiveness; the attractiveness of a facility is treated as a parameter in the model. A heuristic is proposed to solve the problem.
Keywords: competitive location, market share, facility attractiveness, heuristic
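The capture mechanism described above can be illustrated with a Huff-style sketch. This is a simplified stand-in, not the paper's model: each customer's demand is assumed to split across facilities in proportion to attractiveness divided by (travel time + waiting time), and all facility names and numbers are invented for illustration.

```python
def market_share(new_facility, competitors, customers):
    """Huff-style capture: each customer splits demand in proportion to
    attractiveness / (travel time + waiting time)."""
    def utility(fac, cust):
        return fac["attract"] / (fac["time"][cust] + fac["wait"])

    captured = total = 0.0
    for cust, demand in customers.items():
        u_new = utility(new_facility, cust)
        u_all = u_new + sum(utility(f, cust) for f in competitors)
        captured += demand * u_new / u_all  # probabilistic capture of this demand
        total += demand
    return captured / total

# Illustrative instance: one entrant, one competitor, two demand points
new = {"attract": 8.0, "wait": 2.0, "time": {"a": 3.0, "b": 6.0}}
comp = [{"attract": 5.0, "wait": 1.0, "time": {"a": 4.0, "b": 2.0}}]
customers = {"a": 100.0, "b": 50.0}
share = market_share(new, comp, customers)
```

A heuristic for the full problem would evaluate this share over candidate site sets and keep the best; the sketch shows only the inner evaluation step.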
Procedia PDF Downloads 524
17363 Research on Air Pollution Spatiotemporal Forecast Model Based on LSTM
Authors: JingWei Yu, Hong Yang Yu
Abstract:
At present, increasingly serious air pollution in many Chinese cities has made people pay more attention to the air quality index (hereinafter referred to as AQI) of their living areas. Predicting air pollution in heavily polluted areas is therefore of great significance. In this paper, based on an LSTM time series model, a spatiotemporal prediction model of PM2.5 concentration in Mianyang, Sichuan Province, is established. The model fully considers the temporal variability and spatial distribution characteristics of PM2.5 concentration: the air quality of a monitoring station is predicted from the status of other nearby monitoring stations, including their AQI and meteorological data. The experimental results show that the method has good prediction accuracy, with a fitting degree to the measured data above 0.7, and can be applied to modeling and predicting the spatiotemporal distribution of regional PM2.5 concentration.
Keywords: LSTM, PM2.5, neural networks, spatio-temporal prediction
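The data preparation implied by such a spatiotemporal LSTM can be sketched as follows. This is a generic windowing illustration (the series and "neighbouring station" features are synthetic assumptions, not the study's data): it stacks the target station's PM2.5 readings with nearby stations' readings into the (samples, lookback, features) tensors an LSTM expects.

```python
import numpy as np

def make_windows(series, neighbors, lookback):
    """Stack a target station's PM2.5 series with neighbouring stations'
    readings into (samples, lookback, features) windows for an LSTM."""
    data = np.column_stack([series] + list(neighbors))  # shape (T, 1 + n_neighbors)
    X, y = [], []
    for t in range(lookback, len(series)):
        X.append(data[t - lookback:t])  # past `lookback` steps, all stations
        y.append(series[t])             # next value at the target station
    return np.array(X), np.array(y)

target = np.linspace(10, 50, 48)      # synthetic hourly PM2.5 at the target station
nbrs = [target + 3.0, target - 2.0]   # two synthetic nearby stations
X, y = make_windows(target, nbrs, lookback=6)
```

In practice AQI and meteorological columns would be appended the same way before the arrays are fed to the recurrent model.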
Procedia PDF Downloads 134
17362 System Survivability in Networks in the Context of Defense/Attack Strategies: The Large Scale
Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez, Mehdi Mrad
Abstract:
We investigate large-scale networks in the context of network survivability under attack, using appropriate techniques to evaluate both attacker-based and defender-based network survivability. The attacker is unaware of which links the defender operates. Each attacked link has some pre-specified probability of being disconnected. The defender chooses so as to maximize the chance of successfully sending the flow to the destination node; the attacker, in turn, selects the cut-set with the highest chance of being disabled in order to partition the network. Moreover, we extend the problem to selecting the best p paths for the defender to operate and the best k cut-sets for the attacker to target, for arbitrary integers p, k > 1. We investigate some variations of the problem and suggest polynomial-time solutions.
Keywords: defense/attack strategies, large scale, networks, partitioning a network
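The defender's choice described above can be sketched directly. In this minimal illustration (link names and disconnect probabilities are assumptions), a path survives only if every one of its links stays connected, and the defender picks the path with the highest survival probability.

```python
def path_survival(path_links, disconnect_prob):
    """Probability that every link of a path stays connected, given each
    attacked link is disconnected independently with its pre-specified
    probability."""
    prob = 1.0
    for link in path_links:
        prob *= 1.0 - disconnect_prob[link]
    return prob

def best_path(paths, disconnect_prob):
    """Defender's choice: the path maximising the chance the flow gets through."""
    return max(paths, key=lambda p: path_survival(p, disconnect_prob))

# Illustrative network with two candidate source-destination paths
q = {"e1": 0.3, "e2": 0.1, "e3": 0.5, "e4": 0.2}
paths = [["e1", "e2"], ["e3", "e4"]]
chosen = best_path(paths, q)  # survives with prob 0.7 * 0.9 = 0.63 vs 0.40
```

The p-path extension amounts to keeping the p highest-scoring paths instead of the single maximum; the attacker's cut-set problem is the mirror-image computation over cut-sets.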
Procedia PDF Downloads 283
17361 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere systems, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models (v4.6, DSSAT) were calibrated and evaluated for Haryana, India, one of the major wheat- and rice-producing states. Simulation runs were made under irrigated conditions with three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crop. Genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between simulated and observed anthesis, physiological maturity, and final grain yield. The model was further validated by plotting simulated against remote-sensing-derived LAI; the LAI product from remote sensing provides a spatial, timely, and accurate assessment of the crop. For validating yield and yield components, the percentage error between observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
Keywords: simulation model, CERES-Wheat and Rice model, crop yield, genetic coefficient
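The validation criterion above (percentage error under 10%) can be written out as a short sketch; the yield figures below are illustrative values, not the study's measurements.

```python
def error_percentage(observed, simulated):
    """Percentage error between an observed and a simulated value."""
    return abs(simulated - observed) / observed * 100.0

# Illustrative (observed, simulated) yield pairs in kg/ha
pairs = [(4500.0, 4320.0), (5200.0, 5050.0)]
errors = [error_percentage(o, s) for o, s in pairs]
acceptable = all(e < 10.0 for e in errors)  # the study's validation criterion
```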
Procedia PDF Downloads 305
17360 Single Ended Primary Inductance Converter with Internal Model Controller
Authors: Fatih Suleyman Taskincan, Ahmet Karaarslan
Abstract:
In this article, a study and analysis of the Single Ended Primary Inductance Converter (SEPIC) are presented for battery charging in military applications. This kind of converter is favoured for its non-reversed output polarity. Because the capacitors charge and discharge through an inductance, no peak currents occur on the capacitors, so the efficiency is high compared to buck-boost converters. In this study, the SEPIC is designed to operate with an Internal Model Controller (IMC). Traditional controllers such as the Proportional Integral controller are not preferred because of the converter's nonlinear behaviour; hence an IMC is designed for this converter. This model-based controller provides more robustness and better set-point tracking, and it can handle unstable processes whose dynamic operation a conventional controller cannot. The Matlab/Simulink environment is used to simulate the converter and its controller, and the results are shown and discussed.
Keywords: DC/DC converter, single ended primary inductance converter, SEPIC, internal model controller, IMC, switched mode power supply
Procedia PDF Downloads 629
17359 Comparing Three Complementary Interventions (Mindfulness-Meditation, Gratitude, and Affirmations) in the Context of Stress
Authors: Regina Bowler
Abstract:
Rationale & Aims: Complementary interventions such as mindfulness-meditation, gratitude, and self-affirmation are often used by therapists to treat stress. Many studies have been conducted using these interventions either individually or adjunctively with regard to stress. However, there has been little work comparing these interventions to investigate which of them is the most effective in treating stress. This study aims to compare these interventions and to determine which of them has the strongest perceived and physiological impact on stress. Participants: 120 law students preparing to take the bar exam: 3 experimental groups of 30 individuals, 1 control group of 30 individuals. Methods: One day prior to administering the interventions, baseline salivary cortisol samples will be taken, and the participants will complete the perceived stress scale (Cohen et al., 1983). Thirty days prior to the bar exam, each experimental group will be given an intervention to practice. Interventions will be practiced once in the morning after waking and once at night at bedtime. In group one, each participant will do a recorded three-minute mindfulness meditation. In group two, each participant will practice gratitude by writing down three things he/she/they are grateful for. In group three, each participant will practice affirmation by writing three sentences affirming his/her/their core values. The control group will not have an intervention to practice. Starting experimental day 1, upon waking and prior to practicing the intervention, the participants will take a salivary cortisol sample. Then they will practice their given intervention. Every night, before going to bed, the participants will practice their given intervention for a second time. The participants will practice their interventions and take salivary cortisol samples for 28 days. 
After each seven-day period (days 7, 14, 21, 28), the participants will fill out a brief questionnaire about the effects their intervention has on their stress, daily life, and relationships with themselves and others. On day 29, the participants will take a final salivary cortisol sample and will fill out the Perceived Stress Scale (Cohen et al., 1983). Applications of findings: Findings from this study would inform therapists of best practices when working with clients with stress. Moreover, therapists will gain knowledge of how individuals perceive these interventions and their impact on stress, daily life, somatic symptoms, and relationships with self and others. Thus, therapists will be able to administer these interventions with more precision to the stress-related contexts and issues their clients bring.
Keywords: stress, mindfulness-meditation, gratitude, affirmations, complementary interventions
Procedia PDF Downloads 45
17358 Study of Inhibition of the End Effect Based on AR Model Prediction with Combined Data Extension and Window Function
Authors: Pan Hongxia, Wang Zhenhua
Abstract:
In this paper, the endpoint effect that arises in EMD decomposition is suppressed by combining two methods: data continuation predicted by an AR model, and a window function. Simulation on a synthetic signal shows that the combined method achieves the desired effect; applied to gearbox test data, it also performs well, improving the calculation accuracy of subsequent data processing. This lays a good foundation for gearbox fault diagnosis under various working conditions.
Keywords: gearbox, fault diagnosis, AR model, end effect
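The AR-based continuation can be sketched as follows. This is a generic least-squares AR extension (the model order and the test signal are assumptions): predicted samples are appended so that EMD envelopes near the endpoint rest on forecast data instead of swinging freely; the window-function step and the EMD itself are omitted.

```python
import numpy as np

def ar_extend(x, order, n_ahead):
    """Fit an AR(order) model by least squares and append n_ahead predicted
    samples, extending the signal past its right endpoint."""
    # Regression x[t] ≈ sum_k a_k * x[t-k], most recent lag first
    A = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    b = x[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    ext = list(x)
    for _ in range(n_ahead):
        ext.append(float(np.dot(coef, ext[-1:-order - 1:-1])))
    return np.array(ext)

t = np.arange(200)
sig = np.sin(2 * np.pi * t / 40)            # synthetic test signal
extended = ar_extend(sig, order=10, n_ahead=20)
```

A mirrored version of the same fit extends the left endpoint; after sifting, the extended portions are discarded.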
Procedia PDF Downloads 366
17357 Media Planning Decisions and Preferences through a Goal Programming Model: An Application to a Media Campaign for a Mature Product in Italy
Authors: Cinzia Colapinto, Davide La Torre
Abstract:
Goal Programming (GP) and its variants have been applied over recent decades to marketing and to specific marketing issues such as media scheduling. The concept of satisfaction functions has been widely used in GP models to explicitly integrate the decision-maker's preferences, which can be guided by the available information about the decision-making situation. A GP model with satisfaction functions for media planning decisions is proposed and then illustrated through a case study of a marketing/media campaign in the Italian market.
Keywords: goal programming, satisfaction functions, media planning, tourism management
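A satisfaction function of the kind used in such GP models can be sketched as follows. The piecewise-linear form, thresholds, goal names, and weights below are illustrative assumptions, not the paper's calibrated values: satisfaction is full within a tolerance around the goal and falls linearly to zero at a veto threshold.

```python
def satisfaction(deviation, tolerance, veto):
    """Piecewise-linear satisfaction of a goal deviation: full satisfaction
    inside the tolerance, falling linearly to zero at the veto threshold."""
    d = abs(deviation)
    if d <= tolerance:
        return 1.0
    if d >= veto:
        return 0.0
    return (veto - d) / (veto - tolerance)

def plan_score(plan, goals, weights):
    """Weighted satisfaction of a media plan against its goals."""
    return sum(w * satisfaction(plan[g] - goals[g]["target"],
                                goals[g]["tol"], goals[g]["veto"])
               for g, w in weights.items())

# Illustrative goals: audience reach (%) and budget (arbitrary units)
goals = {"reach": {"target": 70.0, "tol": 5.0, "veto": 20.0},
         "budget": {"target": 100.0, "tol": 0.0, "veto": 30.0}}
weights = {"reach": 0.6, "budget": 0.4}
score = plan_score({"reach": 64.0, "budget": 110.0}, goals, weights)
```

A GP solver would maximise this weighted satisfaction over feasible media mixes; the sketch shows only the evaluation of one candidate plan.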
Procedia PDF Downloads 399
17356 Measurement Tools of the Maturity Model for IT Service Outsourcing in Higher Education Institutions
Authors: Victoriano Valencia García, Luis Usero Aragonés, Eugenio J. Fernández Vicente
Abstract:
Nowadays, the successful implementation of ICTs is vital for almost any kind of organization. Good governance and ICT management are essential for delivering value, managing technological risks and resources, and measuring performance. In addition, outsourcing is a strategic IT service solution that complements IT services provided internally. This paper proposes the measurement tools of a new holistic maturity model based on the ISO/IEC 20000 and ISO/IEC 38500 standards and on the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing. These measurement tools allow independent validation and practical application in the field of higher education, using a questionnaire, metrics tables, and continuous-improvement plan tables as part of the measurement process. The model proposes guidelines and standards to facilitate adaptation to universities and to achieve excellence in the outsourcing of IT services.
Keywords: IT governance, IT management, IT services, outsourcing, maturity model, measurement tools
Procedia PDF Downloads 592
17355 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used to develop a family of software products that share common and variable features. A feature model is an SPL artifact consisting of common and variable features with predefined relationships and constraints. Multiple SPLs may share many similar common and variable features, for example mobile phones and tablets. Reusing common and variable features across different SPL domains is a complex task because of the external relationships and constraints among features in the feature model. To increase the reusability of feature-model resources from domain engineering, the commonality of features must be managed at the level of SPL application development. In this research, we propose an approach that combines multiple SPLs into a single domain and converts them into a common feature model. Extracting the common features from different feature models is more effective and reduces the cost and time to market of application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of those variation points and constraints. This approach increases the reusability of features across the feature models. The impact of this research is to reduce development cost and time to market and to increase the number of SPL products.
Keywords: software product line, feature model, variability management, multi-SPLs
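The three-step combination can be sketched in a simplified set-based form. This illustration is an assumption-laden simplification of the proposed framework (feature names and constraint triples are invented): features common to every model are kept as mandatory, the remainder become variation points, and constraints are pooled.

```python
def merge_feature_models(models):
    """Combine several SPL feature models into one common feature model:
    features present in every model stay mandatory, the rest become
    variation points (optional), and all constraints are pooled."""
    feature_sets = [set(m["features"]) for m in models]
    common = set.intersection(*feature_sets)   # shared across all SPLs
    union = set.union(*feature_sets)
    constraints = set()
    for m in models:
        constraints.update(m["constraints"])   # step 2: collect constraints
    return {"mandatory": common,
            "variation_points": union - common,  # step 1: variation points
            "constraints": constraints}          # step 3: combined model

# Illustrative SPLs: a phone line and a tablet line
phone = {"features": {"call", "sms", "camera"},
         "constraints": {("camera", "requires", "storage")}}
tab = {"features": {"wifi", "camera", "storage"},
       "constraints": {("wifi", "excludes", "airplane_mode")}}
merged = merge_feature_models([phone, tab])
```

A real merge would also reconcile cross-model relationships (e.g. conflicting requires/excludes pairs); the sketch shows only the commonality split.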
Procedia PDF Downloads 69
17354 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy
Authors: Ayoub. El Bourtali, Abdessamed.Najine, Amrou Moussa. Benmoussa
Abstract:
One of the main goals of river engineering is river training: controlling and predicting the behavior of a river and taking effective measures to eliminate the related risks, thereby improving the river system. In some rivers the riverbed continues to erode and degrade, so equilibrium is never reached. River geometric characteristics and riverbed erosion analysis are among the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is a key process in hydrodynamics and has a major impact on the ecological chain and on socio-economic processes. This study integrates computer technology that can analyze erosion and hydraulic problems through simulation and modeling. Choosing the right model remains a difficult and sensitive job for field engineers; this paper makes use of version 5.0.4 of the HEC-RAS model. The river section is adopted according to the gauged station and the proximity of the adjustment. In this work, we demonstrate how 2D hydraulic modeling helps clarify the design and provides visualisations of depths and velocities at riverbanks and around engineered structures. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) 2D model was used for the hydraulic study of the erosion model, with geometric data generated from a 12.5 m x 12.5 m resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output shows patterns of riverbank change, which can help reduce problems caused by erosion.
Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment
Procedia PDF Downloads 189
17353 Numerical Simulation of the Kurtosis Effect on the EHL Problem
Authors: S. Gao, S. Srirattayawong
Abstract:
In this study, a computational fluid dynamics (CFD) model has been developed for studying the effect of the surface roughness profile on the EHL problem. The cylinders' contact geometry, meshing, and solution of the conservation of mass and momentum equations are carried out using the commercial software packages ICEM CFD and ANSYS Fluent. User-defined functions (UDFs) for the density, viscosity, and elastic deformation of the cylinders, as functions of pressure and temperature, have been defined for the CFD model. Three different surface roughness profiles are created and incorporated into the model. The developed CFD model is found to predict the characteristics of fluid flow and heat transfer in the EHL problem, including leading parameters such as the pressure distribution, minimum film thickness, viscosity, and density changes. The obtained results show that the pressure profile at the center of the contact area relates directly to the roughness amplitude, and that a rough surface with a kurtosis value above 3 produces stronger fluctuations in the pressure distribution than the other cases.
Keywords: CFD, EHL, kurtosis, surface roughness
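The kurtosis criterion above can be made concrete with a short sketch: the (non-excess) kurtosis of a surface-height profile, for which a Gaussian surface gives 3.0 and spikier profiles give larger values. The profiles below are synthetic stand-ins, not the study's roughness data.

```python
import numpy as np

def profile_kurtosis(z):
    """Non-excess kurtosis of a surface-height profile.
    A Gaussian surface gives 3.0; spikier (leptokurtic) profiles exceed 3."""
    z = np.asarray(z, dtype=float)
    dev = z - z.mean()
    return float(np.mean(dev**4) / np.mean(dev**2) ** 2)

rng = np.random.default_rng(1)
gaussian = rng.standard_normal(100_000)   # baseline roughness, kurtosis ≈ 3
spiky = rng.laplace(size=100_000)         # heavier tails, kurtosis ≈ 6
```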
Procedia PDF Downloads 320
17352 Socio-Economic Analysis of Water Saving Technologies in Agricultural Sector
Authors: Saeed Yazdani, F. Nekoofar
Abstract:
Given the importance and scarcity of water resources, their efficient management is of great importance. In the agricultural sector, farmers adopt various practices and technologies to cope with water insufficiency. This study assesses the socio-economic factors affecting the application of water-saving technologies. A logit model was employed to examine the impact of different variables on the use of water-saving technology, using data gathered from a sample of 204 farmers in 2021 in Alborz Province, Iran. The results indicate that variables such as crop price variability, water sources, farm size, income, education, experience, and membership in cooperatives have positive effects, while variables such as age and number of plots have negative effects, on the probability of applying modern water-saving technologies.
Keywords: socio-economics, water, irrigation, water saving technologies, scarcity
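The logit specification behind such an analysis can be sketched as follows. The coefficients and farmer profiles are illustrative assumptions chosen only to match the reported signs (positive for farm size, education, and cooperative membership; negative for age), not the study's estimates.

```python
import math

def adoption_probability(x, coef, intercept):
    """Logit probability of adopting a water-saving technology:
    p = 1 / (1 + exp(-(intercept + sum_k beta_k * x_k)))."""
    z = intercept + sum(coef[k] * x[k] for k in coef)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients with the signs reported in the study
coef = {"farm_size": 0.08, "education": 0.30, "coop_member": 0.50, "age": -0.04}
young_member = {"farm_size": 10, "education": 12, "coop_member": 1, "age": 35}
older_nonmember = {"farm_size": 10, "education": 12, "coop_member": 0, "age": 60}
p_hi = adoption_probability(young_member, coef, intercept=-3.0)
p_lo = adoption_probability(older_nonmember, coef, intercept=-3.0)
```

With these assumed values, the younger cooperative member has the higher predicted adoption probability, mirroring the direction of the reported effects.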
Procedia PDF Downloads 24
17351 Risk of Fatal and Non-Fatal Coronary Heart Disease and Stroke Events among Adult Patients with Hypertension: Basic Markov Model Inputs for Evaluating Cost-Effectiveness of Hypertension Treatment: Systematic Review of Cohort Studies
Authors: Mende Mensa Sorato, Majid Davari, Abbas Kebriaeezadeh, Nizal Sarrafzadegan, Tamiru Shibru, Behzad Fatemi
Abstract:
Markov models, such as simulations based on the cardiovascular disease (CVD) policy model, are used to evaluate the cost-effectiveness of hypertension treatment. Stroke, angina, myocardial infarction (MI), cardiac arrest, and all-cause mortality are included in this model. Hypertension is a risk factor for a number of vascular and cardiac complications and CVD outcomes. Objective: This systematic review was conducted to evaluate the comprehensiveness of this model across different regions globally. Methods: We searched English-language articles in PubMed/Medline, Ovid/Medline, Embase, Scopus, Web of Science, and Google Scholar with a systematic search query. Results: Thirteen cohort studies involving a total of 2,165,770 adults (1,666,554 with hypertension and 499,226 with treatment-resistant hypertension) were included in this review. Hypertension is clearly associated with coronary heart disease (CHD) and stroke mortality, unstable angina, stable angina, MI, heart failure (HF), sudden cardiac death, transient ischemic attack, ischemic stroke, subarachnoid hemorrhage, intracranial hemorrhage, peripheral arterial disease (PAD), and abdominal aortic aneurysm (AAA). The association between HF and hypertension varies across regions. Treatment-resistant hypertension is associated with a higher relative risk of major cardiovascular events and all-cause mortality than non-resistant hypertension; however, it is not included in the previous CVD policy model. Conclusion: The CVD policy model can be used in most regions for evaluating the cost-effectiveness of hypertension treatment. However, hypertension is highly associated with HF in Latin America, the Caribbean, Eastern Europe, and Sub-Saharan Africa; it is therefore important to consider HF in the CVD policy model when evaluating the cost-effectiveness of hypertension treatment in these regions.
We do not suggest including PAD and AAA in the CVD policy model for evaluating the cost-effectiveness of hypertension treatment, owing to a lack of sufficient evidence. Researchers should consider the effect of treatment-resistant hypertension, either by including it in the basic model or when setting the model assumptions.
Keywords: cardiovascular disease policy model, cost-effectiveness analysis, hypertension, systematic review, twelve major cardiovascular events
Procedia PDF Downloads 71
17350 Standard Resource Parameter Based Trust Model in Cloud Computing
Authors: Shyamlal Kumawat
Abstract:
Cloud computing is shifting the way IT capital is utilized. It dynamically delivers convenient, on-demand access to shared pools of software resources, platform, and hardware as a service through the internet, a model made possible by sophisticated automation, provisioning, and virtualization technologies. Users want the ability to access these services, including infrastructure resources, how and when they choose. To accommodate this shift in the consumption model, the technology has to deal with the security, compatibility, and trust issues associated with delivering that convenience to application business owners, developers, and users. Against this background, trust has attracted extensive attention in cloud computing as a means of enhancing security. This paper proposes a trusted computing approach, a Standard Resource Parameter Based Trust Model in Cloud Computing, for selecting appropriate cloud service providers. The direct trust of cloud entities is computed on the basis of past interaction evidence and sustained by present performance. Various SLA parameters between consumer and provider are considered in the trust computation and compliance process. Simulations performed using the CloudSim framework show that the proposed model is effective and extensible.
Keywords: cloud, IaaS, SaaS, PaaS
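The direct-trust computation from SLA evidence can be sketched as follows. This is an illustrative simplification, not the paper's formula: the parameter weights, decay factor, and SLA parameter names are assumptions; per-interaction compliance evidence is weighted per parameter and decayed so that recent performance counts more.

```python
def trust_score(history, weights, decay=0.9):
    """Direct trust of a cloud provider from SLA-compliance evidence:
    each past interaction records whether each SLA parameter was met;
    evidence is weighted per parameter and decayed with age (the most
    recent interaction is last in `history`)."""
    score = norm = 0.0
    n = len(history)
    for i, interaction in enumerate(history):
        age_weight = decay ** (n - 1 - i)  # older evidence counts less
        for param, met in interaction.items():
            score += weights[param] * age_weight * (1.0 if met else 0.0)
            norm += weights[param] * age_weight
    return score / norm  # normalised to [0, 1]

weights = {"availability": 0.5, "response_time": 0.3, "throughput": 0.2}
history = [{"availability": True, "response_time": False, "throughput": True},
           {"availability": True, "response_time": True, "throughput": True}]
trust = trust_score(history, weights)
```

Ranking candidate providers by this score would drive the provider-selection step the abstract describes.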
Procedia PDF Downloads 330
17349 Verification & Validation of MapReduce Program Model for Parallel K-Medoid Algorithm on Hadoop Cluster
Authors: Trapti Sharma, Devesh Kumar Srivastava
Abstract:
This paper is an analysis study that verifies and validates the MapReduce programming model for the parallel K-Mediod algorithm on a Hadoop cluster. MapReduce is a programming model which allows the processing of huge amounts of data in parallel on a large number of machines. It is especially well suited to static or moderately changing data sets, since the cost of setting up a job is usually high. MapReduce has gradually become the framework of choice for “big data”. The MapReduce model allows systematic and rapid processing of large-scale data on a cluster of compute nodes. One of the primary concerns in Hadoop is how to minimize the completion time (i.e., makespan) of a set of MapReduce jobs. In this paper, we have verified and validated various MapReduce applications such as wordcount, grep, terasort and the parallel K-Mediod clustering algorithm. We have found that as the number of nodes increases, the completion time decreases.Keywords: hadoop, mapreduce, k-mediod, validation, verification
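The wordcount application mentioned above can be sketched in a single process to show the map/shuffle/reduce pattern; on Hadoop the same logic would be distributed across many nodes. The tiny document set is illustrative.

```python
# A minimal, single-process sketch of the MapReduce wordcount pattern:
# map emits (word, 1) pairs, the shuffle groups them by key, and
# reduce sums the counts per word.
from collections import defaultdict

def map_phase(document: str):
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hadoop runs mapreduce", "mapreduce scales on hadoop"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"], counts["mapreduce"])  # 2 2
```

Because map emits independent pairs and reduce only needs all values for one key, both phases parallelise naturally, which is why adding nodes shortens the makespan as the study observes.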
Procedia PDF Downloads 36917348 An Energy-Efficient Model of Integrating Telehealth IoT Devices with Fog and Cloud Computing-Based Platform
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The rapid growth of telehealth Internet of Things (IoT) devices has raised concerns about energy consumption and efficient data processing. This paper introduces an energy-efficient model that integrates telehealth IoT devices with a fog and cloud computing-based platform, offering a sustainable and robust solution to overcome these challenges. Our model employs fog computing as a localized data processing layer while leveraging cloud computing for resource-intensive tasks, significantly reducing energy consumption. We incorporate adaptive energy-saving strategies. Simulation analysis validates our approach's effectiveness in enhancing energy efficiency for telehealth IoT systems integrated with localized fog nodes and both private and public cloud infrastructures. Future research will focus on further optimization of the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability in other healthcare and industry sectors.Keywords: energy-efficient, fog computing, IoT, telehealth
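The fog-versus-cloud placement decision described above can be illustrated with a toy energy comparison. This is not the paper's actual model: the cost coefficients and the linear cost structure are made-up assumptions for demonstration only.

```python
# Illustrative sketch (not the paper's model): estimate the energy cost
# of handling a telehealth reading locally at a fog node versus
# shipping it to the cloud, and pick the cheaper option. All
# coefficients are assumed values.

TX_ENERGY_PER_KB = 0.05      # J/KB to transmit to the cloud (assumed)
FOG_ENERGY_PER_OP = 0.002    # J per unit of compute at the fog node (assumed)
CLOUD_ENERGY_PER_OP = 0.001  # J per unit of compute in the cloud (assumed)

def place_task(data_kb: float, compute_ops: float) -> str:
    """Choose the placement with the lower modelled energy cost."""
    fog_cost = FOG_ENERGY_PER_OP * compute_ops
    cloud_cost = TX_ENERGY_PER_KB * data_kb + CLOUD_ENERGY_PER_OP * compute_ops
    return "fog" if fog_cost <= cloud_cost else "cloud"

print(place_task(data_kb=2.0, compute_ops=10))    # light job stays at the fog node
print(place_task(data_kb=0.5, compute_ops=5000))  # compute-heavy job goes to the cloud
```

The crossover behaviour matches the architecture in the abstract: localized fog processing for routine readings, cloud offload for resource-intensive tasks.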
Procedia PDF Downloads 8617347 Petri Net Modeling and Simulation of a Call-Taxi System
Authors: T. Godwin
Abstract:
A call-taxi system is a type of taxi service where a taxi can be requested through a phone call or mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick up a customer, as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multi-graph to simulate the activities of a system. The Petri net model is translated into a simulation model and a call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri net based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.Keywords: call-taxi, discrete event system, petri net, simulation modeling
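The token-firing mechanics behind such a model can be sketched for the dispatch step: firing an "assign" transition consumes one token each from the "idle taxis" and "waiting customers" places and produces a token in "busy taxis". The place and transition names are illustrative, not the paper's exact net.

```python
# Minimal Petri net firing rule: a transition fires only if every
# input place holds at least one token; firing moves one token from
# each input place to each output place.

def fire(marking: dict, inputs: list, outputs: list) -> dict:
    """Fire a transition if enabled; otherwise return the marking unchanged."""
    if any(marking[p] < 1 for p in inputs):
        return marking  # transition not enabled
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] += 1
    return new

marking = {"idle_taxis": 3, "waiting_customers": 1, "busy_taxis": 0}
marking = fire(marking, ["idle_taxis", "waiting_customers"], ["busy_taxis"])
print(marking)  # {'idle_taxis': 2, 'waiting_customers': 0, 'busy_taxis': 1}
```

A "release" transition would reverse the flow from "busy_taxis" back to "idle_taxis", and replaying fire events over time is exactly the translation from Petri net to discrete-event simulation the abstract describes.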
Procedia PDF Downloads 42417346 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi
Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale
Abstract:
Spending a long time in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problem and the efficiency of service systems, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than relying on generalized models assumed in the literature, real case study data can give a deeper understanding of the particular problem model and of how such a model can vary from one day to another and from one case to another. As such, this study uses data obtained from one urban HC for BP, pediatric and general OPD cases to investigate the average queuing time for patients within the system. It seeks to identify the proper queuing model by investigating the distribution functions of patients' arrival time, inter-arrival time, waiting time and service time. Compared with the standard values set by the WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, the commonly assumed M/M/1 model failed to fit the data in general, whereas an M/Er/2 model fitted well. An M/Er/3 model performed well in terms of resource utilization, suggesting a need to increase medical personnel at this HC, whereas an M/Er/4 model caused more idleness of human resources.Keywords: health care, out-patient department, queuing model, sensitivity analysis
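The baseline against which such a study compares observed waiting times is the textbook M/M/1 queue. A sketch of its standard steady-state formulas, with illustrative arrival and service rates rather than the HC's data:

```python
# Textbook M/M/1 steady-state metrics: with arrival rate lam and
# service rate mu (lam < mu), utilisation rho = lam/mu, mean time in
# system W = 1/(mu - lam), and mean waiting time in queue
# Wq = rho/(mu - lam). Rates below are illustrative.

def mm1_metrics(lam: float, mu: float) -> dict:
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu
    return {
        "utilisation": rho,
        "mean_time_in_system": 1.0 / (mu - lam),
        "mean_wait_in_queue": rho / (mu - lam),
    }

m = mm1_metrics(lam=4.0, mu=5.0)  # e.g. 4 arrivals/hour, 5 served/hour
print(m)  # utilisation 0.8, time in system 1.0 h, wait in queue 0.8 h
```

The Erlang-service variants (M/Er/2, M/Er/3) that fitted the HC data better replace the exponential service time with an Erlang distribution and add servers, but the utilisation-versus-waiting trade-off being probed is the same.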
Procedia PDF Downloads 43517345 An Improved Multiple Scattering Reflectance Model Based on Specular V-Cavity
Authors: Hongbin Yang, Mingxue Liao, Changwen Zheng, Mengyao Kong, Chaohui Liu
Abstract:
Microfacet-based reflection models are widely used to model light reflection from rough surfaces. They have become the standard surface-material building block for describing specular components of varying roughness; yet, while they possess many desirable properties and produce convincing results, their design ignores important sources of scattering, which can cause a significant loss of energy. Specifically, they only simulate single scattering on the microfacets and ignore subsequent interactions, which become more and more important as the roughness increases. A multiple-scattering microfacet model based on the specular V-cavity has been presented for this important open problem; however, it wastes rendering time by using the same number of scattering events for surfaces of different roughness. In this paper, we design a geometric attenuation term G to compute the BRDF (bidirectional reflectance distribution function) of multiple scattering on rough surfaces. Moreover, we determine the number of scattering events by deterministic heuristics for different roughness values. As a result, our model produces an appearance similar to that of the state-of-the-art model with significantly improved rendering efficiency. Finally, we derive a multiple-scattering BRDF based on the original microfacet framework.Keywords: bidirectional reflection distribution function, BRDF, geometric attenuation term, multiple scattering, V-cavity model
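For context, the classic single-scattering geometric attenuation term for the V-cavity geometry (as used in Cook-Torrance-style models) can be sketched as below; the paper's multiple-scattering term G extends beyond this single-bounce form, which is not reproduced here.

```python
# Classic V-cavity geometric attenuation (shadowing/masking) term:
# G = min(1, 2(n.h)(n.v)/(v.h), 2(n.h)(n.l)/(v.h)), where n = normal,
# v = view direction, l = light direction, h = half vector. The
# directions below are illustrative unit vectors.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def g_vcavity(n, v, l):
    h = normalize(tuple(vi + li for vi, li in zip(v, l)))
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)

n = (0.0, 0.0, 1.0)
v = normalize((0.3, 0.0, 1.0))
l = normalize((-0.3, 0.0, 1.0))
print(g_vcavity(n, v, l))  # 1.0: no shadowing/masking at these mild angles
```

At grazing view or light angles the min() clamps the term below 1, modelling the light blocked by the opposite wall of the V-cavity; it is exactly this single-bounce energy loss that a multiple-scattering G recovers.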
Procedia PDF Downloads 11617344 Numerical Study on Parallel Rear-Spoiler on Super Cars
Authors: Anshul Ashu
Abstract:
Computers are applied to vehicle aerodynamics in two ways: Computational Fluid Dynamics (CFD) and Computer-Aided Flow Visualization (CAFV). Of the two, CFD is chosen here because it presents the results with computer graphics. The simulation of the flow field around the vehicle is one of the important CFD applications. The flow field can be solved numerically using panel methods, the k-ε method, and direct simulation methods. The spoiler is a tool in vehicle aerodynamics used to minimize unfavorable aerodynamic effects around the vehicle, and the parallel spoiler is a set of two spoilers designed in such a manner that it can effectively reduce drag. In this study, the standard k-ε model is used to simulate the external flow field of simplified versions of the Bugatti Veyron, Audi R8 and Porsche 911. Flow simulation is done for variable Reynolds numbers. The simulation consists of three levels: first over the model without a rear spoiler, second over the model with a single rear spoiler, and third over the model with the parallel rear-spoiler. The second and third levels vary the following parameters: the shape of the spoiler, the angle of attack and the attachment position. A thorough analysis of the simulation results has been carried out, and a new parallel spoiler is designed. It shows a modest improvement in vehicle aerodynamics, with a decrease in aerodynamic drag and lift, and hence leads to better fuel economy and traction force of the model.Keywords: drag, lift, flow simulation, spoiler
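Drag and lift changes from a spoiler are conventionally reported through the dimensionless coefficients Cd and Cl via the standard quadratic force law F = ½ρv²CA. The sketch below uses illustrative coefficient and area values, not results from the paper.

```python
# Standard aerodynamic force relation F = 0.5 * rho * v^2 * C * A,
# used to translate a coefficient change into a force change. The
# Cd values and frontal area below are assumed for illustration.

def aero_force(rho: float, v: float, coeff: float, area: float) -> float:
    """Aerodynamic force in newtons from the quadratic force law."""
    return 0.5 * rho * v ** 2 * coeff * area

rho = 1.225   # air density at sea level, kg/m^3
v = 50.0      # vehicle speed, m/s
area = 2.0    # frontal area, m^2 (assumed)
drag_plain = aero_force(rho, v, coeff=0.36, area=area)
drag_spoiler = aero_force(rho, v, coeff=0.34, area=area)  # assumed small Cd drop
print(drag_plain - drag_spoiler)  # newtons saved at 50 m/s
```

Because the force grows with v², even a small Cd reduction from a well-placed spoiler yields a noticeable drag saving at supercar speeds, which is why fuel economy improves.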
Procedia PDF Downloads 50017343 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models
Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan
Abstract:
This paper extends the conditional autoregressive range (CARR) model to a multivariate CARR (MCARR) model, and further to a two-stage MCARR-return model, to model and forecast the volatilities, correlations and returns of multiple financial assets. The first-stage model fits the scaled realised Parkinson volatility measures, using the individual series and their pairwise sums of indices, to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise-sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is fed into the stage-two return model to capture the heteroskedasticity of assets' returns. We investigate different choices of mean functions to describe the volatility dynamics. Empirical applications are based on the Standard and Poor's 500, Dow Jones Industrial Average and Dow Jones United States Financial Services Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than those using CARR models based on two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return models for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk
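The realised Parkinson volatility measure that stage one fits is built from daily high-low ranges. A sketch of the standard Parkinson (1980) estimator, with illustrative prices rather than index data:

```python
# Parkinson range-based variance estimator: for each day, the squared
# log high-low range scaled by 1/(4 ln 2). The daily high/low prices
# below are illustrative, not index data from the paper.
import math

def parkinson_variance(high: float, low: float) -> float:
    """Daily variance estimate from the high-low range (Parkinson, 1980)."""
    return (math.log(high / low) ** 2) / (4.0 * math.log(2.0))

days = [(103.0, 99.0), (105.5, 101.0), (104.0, 98.5)]  # (high, low) per day
variances = [parkinson_variance(h, l) for h, l in days]
annualised_vol = math.sqrt(252 * sum(variances) / len(days))
print(round(annualised_vol, 4))
```

Because the range uses intraday extremes rather than only close-to-close returns, it is a more efficient volatility proxy, which is the motivation for building the MCARR stage on range measures.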
Procedia PDF Downloads 9917342 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado
Authors: Masahiro Suzuki, Nobuyuki Okura
Abstract:
The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, despite the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes with casualties in the past. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatus: a tornado simulator and a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train of a single-car/three-car unit run through the swirling flow at a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is largest when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train. The simulated results qualitatively agreed with the experimental ones.Keywords: aerodynamic force, experimental method, tornado, train
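The Rankine vortex model used above has a simple closed form: inside the core the tangential velocity grows linearly with radius (solid-body rotation), and outside it decays as 1/r. The sketch below uses the simulator figures quoted in the abstract (v_max = 7.4 m/s at r_core = 96 mm); the evaluation radii are illustrative.

```python
# Rankine vortex tangential velocity profile:
#   v(r) = v_max * r / r_core        for r <= r_core (solid-body core)
#   v(r) = v_max * r_core / r        for r >  r_core (free-vortex decay)

def rankine_tangential_velocity(r: float, r_core: float, v_max: float) -> float:
    if r <= r_core:
        return v_max * r / r_core   # linear growth inside the core
    return v_max * r_core / r       # 1/r decay outside the core

R_CORE = 0.096  # vortex core radius, m (from the tornado simulator)
V_MAX = 7.4     # maximum tangential velocity, m/s (from the simulator)
print(rankine_tangential_velocity(0.048, R_CORE, V_MAX))  # half the core radius
print(rankine_tangential_velocity(0.192, R_CORE, V_MAX))  # twice the core radius
```

The velocity peaks exactly at the core radius, so the side force on a car depends strongly on how close its path passes to the core, consistent with the core-radius sensitivity reported in the abstract.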
Procedia PDF Downloads 23617341 Calibration Model of %Titratable Acidity (Citric Acid) for Intact Tomato by Transmittance SW-NIR Spectroscopy
Authors: K. Petcharaporn, S. Kumchoo
Abstract:
Acidity (citric acid) is one of the chemical contents that indicates the internal quality and maturity index of tomato. The titratable acidity (%TA) can be predicted non-destructively using transmittance short-wavelength near-infrared (SW-NIR) spectroscopy in the wavelength range of 665-955 nm. A set of 167 tomato samples, divided into a training set of 117 samples and a test set of 50 samples, was used to establish a calibration model to predict and measure %TA by the partial least squares regression (PLSR) technique. The spectra were pretreated with MSC, which gave the optimal calibration model (R = 0.92, RMSEC = 0.03%); this model achieved high accuracy for %TA prediction on the test set (R = 0.81, RMSEP = 0.05%). The prediction results on the test set show that the transmittance SW-NIR spectroscopy technique can be used as a non-destructive method for %TA prediction of tomatoes.Keywords: tomato, quality, prediction, transmittance, titratable acidity, citric acid
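The MSC (multiplicative scatter correction) pretreatment used above can be sketched directly: each spectrum is regressed against the mean spectrum by least squares (x ≈ a + b·mean) and then corrected as (x − a)/b, removing additive and multiplicative scatter effects. The tiny "spectra" below are illustrative numbers, not tomato data.

```python
# Minimal MSC pretreatment sketch: fit each spectrum against the mean
# spectrum with ordinary least squares, then correct it to remove
# baseline offset (a) and multiplicative scatter (b).

def msc_correct(spectra):
    n_wl = len(spectra[0])
    mean = [sum(s[i] for s in spectra) / len(spectra) for i in range(n_wl)]
    m_bar = sum(mean) / n_wl
    corrected = []
    for s in spectra:
        s_bar = sum(s) / n_wl
        # OLS slope and intercept of the spectrum against the mean spectrum
        b = sum((m - m_bar) * (x - s_bar) for m, x in zip(mean, s)) \
            / sum((m - m_bar) ** 2 for m in mean)
        a = s_bar - b * m_bar
        corrected.append([(x - a) / b for x in s])
    return corrected

spectra = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]  # second spectrum = 2x the first
out = msc_correct(spectra)
print(out[0], out[1])  # both collapse onto the mean spectrum
```

The corrected spectra would then be fed to PLSR to build the %TA calibration; the MSC step is what removes the sample-to-sample scatter that would otherwise dominate the regression.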
Procedia PDF Downloads 27317340 Computational Fluid Dynamics Analysis of Convergent–Divergent Nozzle and Comparison against Theoretical and Experimental Results
Authors: Stewart A. Keir, Faik A. Hamad
Abstract:
This study uses both analytical and experimental methods of analysis to examine the accuracy of Computational Fluid Dynamics (CFD) models, which can then be used for more complex analyses that accurately represent more elaborate flow phenomena such as internal shockwaves and boundary layers. The geometry used in the analytical study and the CFD model is taken from the experimental rig. The analytical study is undertaken using isentropic and adiabatic relationships, and its output, a 'shockwave location tool', is created. The results of the analytical study are then used to redesign the experimental rig for more favorable placement of the pressure taps, giving a much better representation of the shockwaves occurring in the divergent section of the nozzle. The CFD model is then optimized through the selection of different parameters, e.g. turbulence models (Spalart-Allmaras, realizable k-epsilon and standard k-omega), in order to develop an accurate, robust model. The results from the CFD model can then be directly compared to the experimental and analytical results in order to gauge the accuracy of each method of analysis. The CFD model is used to visualize the variation of parameters such as velocity/Mach number, pressure and turbulence across the shock, and the CFD results are used to investigate the interaction between the shock wave and the boundary layer. The validated model can then be used to modify nozzle designs which may offer better performance and ease of manufacture, and may present feasible improvements to existing high-speed flow applications.Keywords: CFD, nozzle, fluent, gas dynamics, shock-wave
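The isentropic and adiabatic relationships behind a shockwave-location tool for a convergent-divergent nozzle can be sketched with two standard gas-dynamics formulas: the isentropic stagnation-to-static pressure ratio at a given Mach number, and the static pressure jump across a normal shock. γ = 1.4 for air; the Mach numbers are illustrative.

```python
# Standard compressible-flow relations (gamma = 1.4 for air):
#   p0/p = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1))   (isentropic)
#   p2/p1 = 1 + 2*gamma/(gamma+1) * (M1^2 - 1)          (normal shock)

GAMMA = 1.4

def isentropic_p_ratio(mach: float) -> float:
    """Stagnation-to-static pressure ratio p0/p for isentropic flow."""
    return (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2) ** (GAMMA / (GAMMA - 1.0))

def normal_shock_p_ratio(mach1: float) -> float:
    """Static pressure jump p2/p1 across a normal shock at upstream Mach M1."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach1 ** 2 - 1.0)

print(round(isentropic_p_ratio(1.0), 4))   # ~1.8929: p0/p at the sonic throat
print(round(normal_shock_p_ratio(2.0), 4)) # 4.5: pressure jump at M1 = 2
```

A shockwave-location tool marches these relations along the divergent section: the shock sits where the pressure recovered through the normal-shock jump and subsequent subsonic diffusion matches the imposed back pressure, which is exactly the quantity the relocated pressure taps measure.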
Procedia PDF Downloads 233