Search results for: generalised linear model
16221 Performance of the Strong Stability Method in the Univariate Classical Risk Model
Authors: Safia Hocine, Zina Benouaret, Djamil Aïssani
Abstract:
In this paper, we study the performance of the strong stability method in the univariate classical risk model. We are interested in the stability bounds established using two approaches: the first is based on the strong stability method developed for general Markov chains, and the second is based on the theory of regenerative processes. By adopting an algorithmic procedure, we study the performance of the stability method in the case of exponentially distributed claim amounts. After presenting the stability bounds numerically and graphically, the results are interpreted and compared.
Keywords: Markov chain, regenerative process, risk model, ruin probability, strong stability
Procedia PDF Downloads 324
16220 Development, Optimization, and Validation of a Synchronous Fluorescence Spectroscopic Method with Multivariate Calibration for the Determination of Amlodipine and Olmesartan Implementing: Experimental Design
Authors: Noha Ibrahim, Eman S. Elzanfaly, Said A. Hassan, Ahmed E. El Gendy
Abstract:
Objectives: The purpose of the study is to develop a sensitive synchronous spectrofluorimetric method with multivariate calibration after studying and optimizing the different variables affecting the native fluorescence intensity of amlodipine and olmesartan, implementing an experimental design approach. Method: In the first step, a fractional factorial design was used to screen the independent factors affecting the fluorescence intensity of both drugs. The objective of the second step was to optimize the method performance using a Central Composite Face-centred (CCF) design. The optimal experimental conditions obtained from this study were a temperature of 15°C ± 0.5, a solvent of 0.05 N HCl and methanol (90:10, v/v), Δλ of 42, and the addition of 1.48% surfactant, providing a sensitive measurement of amlodipine and olmesartan. The resolution of the binary mixture with a multivariate calibration method was accomplished mainly by using a partial least squares (PLS) model. Results: The recovery percentages for amlodipine besylate and olmesartan in tablet dosage form were found to be 102 ± 0.24 and 99.56 ± 0.10, respectively. Conclusion: The method is valid according to International Conference on Harmonization (ICH) guidelines, proving to be linear over ranges of 200-300 and 500-1500 ng mL⁻¹ for amlodipine and olmesartan, respectively. The method was successfully used to estimate amlodipine besylate and olmesartan in bulk powder and pharmaceutical preparations.
Keywords: amlodipine, central composite face-centred design, experimental design, fractional factorial design, multivariate calibration, olmesartan
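As an illustration of the multivariate calibration step described above, the following is a minimal sketch of PLS calibration for a two-component mixture, assuming synthetic synchronous-fluorescence spectra; the wavelength grid, concentration ranges, and noise level are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of PLS-based multivariate calibration for a two-component
# mixture on synthetic spectra; names and ranges are illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 450, 120)          # hypothetical spectral axis

def pure_spectrum(center, width):
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

S = np.vstack([pure_spectrum(340, 12), pure_spectrum(395, 15)])  # two analytes

# Calibration set: random concentration pairs and their mixed spectra plus noise
C = rng.uniform([0.2, 0.5], [1.0, 1.5], size=(30, 2))
X = C @ S + rng.normal(0, 0.01, size=(30, wavelengths.size))

pls = PLSRegression(n_components=3)
C_pred = cross_val_predict(pls, X, C, cv=5)        # cross-validated predictions
recovery = 100 * C_pred.mean(axis=0) / C.mean(axis=0)
print("mean recovery per analyte (%):", np.round(recovery, 2))
```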
Procedia PDF Downloads 150
16219 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same
Authors: Melanie Tang
Abstract:
According to Nietzsche, the eternal recurrence is his most important idea, yet it is perhaps his most cryptic and difficult to interpret. Early readings considered it a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimensions, and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought before it hardens into scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra — as opposed to The Birth of Tragedy — be taken as the primary source for a study of Nietzsche’s aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in these terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.
Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time
Procedia PDF Downloads 150
16218 Flow Dynamics of Nanofluids in a Horizontal Cylindrical Annulus Using Nonhomogeneous Dynamic Model
Authors: M. J. Uddin, M. M. Rahman
Abstract:
The transient natural convective flow dynamics of nanofluids in a horizontal concentric annulus is investigated numerically using a nonhomogeneous dynamic model. The simulation is carried out for four different shapes of the inner wall: cylindrical, elliptical, square, and triangular. The outer surface of the annulus is maintained at a constant low temperature, while the inner wall is maintained at a uniform temperature higher than the outer one. The enclosure is permeated by a uniform magnetic field of variable orientation. The Brownian motion and thermophoretic deposition of the nanoparticles are taken into account in the model construction. The governing nonlinear momentum, energy, and concentration equations are solved numerically using the Galerkin weighted residual finite element method. To find the best performer, the local Nusselt number is presented for the different shapes of the inner wall, and the heat transfer enhancement of different nanofluids for the four inner-wall shapes is exhibited.
Keywords: nanofluids, annulus, nonhomogeneous dynamic model, heat transfer
Procedia PDF Downloads 170
16217 Finite Element Modelling and Analysis of Human Knee Joint
Authors: R. Ranjith Kumar
Abstract:
Computer modeling and simulation of human movement plays an important role in sports and rehabilitation. Accurate modeling and analysis of the human knee joint is complex because of its complicated structure, whose geometry is not easy to represent with a solid model. As part of this project, surface reconstruction from a number of CT scan images of the human knee joint is carried out using 3D Slicer, an open-source software. From this surface reconstruction model, triangular meshes are created on the reconstructed surface using MeshLab (another open-source software). This final triangular mesh model is imported into SolidWorks, a 3D mechanical CAD modeling software. Finally, the CAD model is imported into ABAQUS, a finite element analysis software, for analyzing the knee joint. The results obtained are encouraging and provide an accurate way of modeling and analyzing biological parts without human intervention.
Keywords: SolidWorks, CATIA, Pro-E, CAD
Procedia PDF Downloads 124
16216 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering
Authors: Sharifah Mousli, Sona Taheri, Jiayuan He
Abstract:
Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual’s ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication known to treat ASD, early intervention can significantly improve an affected individual’s overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods to ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches, namely K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation, as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
Keywords: autism spectrum disorder, clustering, optimization, unsupervised machine learning
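For illustration, a minimal sketch of the kind of comparison described above, run on synthetic data with a subset of the listed algorithms that are available in scikit-learn; the COMSEP-Clust method and the real ASD datasets are not reproduced here, and the silhouette score stands in for the paper's evaluation.

```python
# Minimal sketch of comparing clustering algorithms on a synthetic feature
# matrix using silhouette scores; feature names and data are placeholders.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering, AffinityPropagation
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=2, n_features=10, random_state=42)
X = StandardScaler().fit_transform(X)

models = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=42),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=42),
    "affinity propagation": AffinityPropagation(damping=0.9, random_state=42),
}

for name, model in models.items():
    labels = model.fit_predict(X)        # GaussianMixture also supports fit_predict
    print(f"{name:22s} silhouette = {silhouette_score(X, labels):.3f}")
```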
Procedia PDF Downloads 116
16215 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions
Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu
Abstract:
In recent years, there has been a desire to forecast students' academic achievement prior to graduation in order to help them improve their grades, particularly individuals with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model of student academic achievement. Many academics have already constructed models that predict student academic achievement based on factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few. Those features and the models employed may not have correctly classified the students in terms of their academic performance. The model in this study is built using a logistic regression classifier with basic features, such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester, to predict whether the student will perform well in future related courses. The model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning an accuracy of 96.7%. The model is available as a desktop application, allowing both instructors and students to benefit from user-friendly interfaces for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.
Keywords: artificial intelligence, ML, logistic regression, performance, prediction
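A minimal sketch of such a logistic-regression classifier on synthetic records is shown below; the four features follow the abstract, but the data, weights, and threshold are made up for illustration and the reported 96.7% accuracy is not reproduced.

```python
# Minimal sketch of a logistic-regression classifier for student performance
# on synthetic records; features follow the abstract, data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(40, 100, n),   # previous semester course score
    rng.uniform(0.4, 1.0, n),  # class attendance rate
    rng.uniform(0.0, 1.0, n),  # class participation score
    rng.integers(1, 20, n),    # course materials covered per semester
])
# Synthetic label: "performs well" if a weighted mix of the features is high
y = (0.04 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2] + 0.05 * X[:, 3]
     + rng.normal(0, 0.5, n) > 5.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```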
Procedia PDF Downloads 97
16214 A Curricular Approach to Organizational Mentoring Programs: The Integrated Mentoring Curriculum Model
Authors: Christopher Webb
Abstract:
This work presents a new model of mentoring in an organizational environment and has important implications for both practice and research. The model frames the organizational environment as an organizational curriculum, which includes the elements that affect learning within the organization: the organizational structure and culture, roles within the organization, and the accessibility of knowledge. The program curriculum includes the elements of the mentoring program, including materials, training, and scheduled events for the program participants. The term dyadic curriculum is coined in this work. The dyadic curriculum describes the participation, behavior, and identities of the pairs participating in mentorships, including the identity work of the participants and their views of each other. Much of this curriculum is unprescribed and is unique within each dyad; it describes how participants mediate the elements of the organizational and program curricula. These three curricula interact and affect each other in predictable ways. A detailed example of a mentoring program framed in this model is provided.
Keywords: curriculum, mentoring, organizational learning and development, social learning
Procedia PDF Downloads 202
16213 A Stochastic Volatility Model for Optimal Market-Making
Authors: Zubier Arfan, Paul Johnson
Abstract:
The electronification of financial markets and the rise of algorithmic trading have sparked a lot of interest from the mathematical community, for the market-making problem in particular. The research presented in this short paper solves the classic stochastic control problem to derive the strategy for a market-maker. It also shows how to calibrate and simulate the strategy with real limit order book data for back-testing. The ambiguity of limit-order priority in back-testing is dealt with by considering optimistic and pessimistic priority scenarios. The model, although it does outperform a naive strategy, assumes constant volatility and is therefore not best suited to the LOB data. The Heston model is introduced to describe the price and variance process of the asset. The trader's constant absolute risk aversion utility function is optimised by numerically solving a three-dimensional Hamilton-Jacobi-Bellman partial differential equation to find the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is also less sensitive to calibration error than the constant volatility model.
Keywords: market-making, market microstructure, stochastic volatility, quantitative trading
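As an aside, the Heston price/variance dynamics referred to above can be simulated with a simple Euler–Maruyama scheme; the sketch below uses illustrative parameter values that are not calibrated to any limit order book data, and it does not solve the HJB equation itself.

```python
# Minimal sketch of simulating Heston price/variance paths with an
# Euler-Maruyama scheme (full truncation for variance positivity).
import numpy as np

def heston_paths(s0=100.0, v0=0.04, mu=0.0, kappa=1.5, theta=0.04,
                 xi=0.5, rho=-0.7, T=1.0, n_steps=1000, n_paths=5, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                     # truncate negative variance
        s = s * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s, v

final_prices, final_vars = heston_paths()
print("simulated terminal prices:", np.round(final_prices, 2))
```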
Procedia PDF Downloads 150
16212 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as a statistical method in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, as well as the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and Random Forest in particular outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
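A minimal sketch of the kind of comparison described above is given next, contrasting a linear baseline with a Random Forest on a synthetic attenuation relation; the random-effects treatment and the real Oklahoma/Kansas/Texas database are not reproduced here, and the functional form is a made-up stand-in.

```python
# Minimal sketch: linear regression vs. Random Forest for predicting a
# ground-motion intensity measure (log PGA) from magnitude, distance, site term.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 4000
mag = rng.uniform(3.0, 5.8, n)                 # moment magnitude
dist = rng.uniform(4.0, 500.0, n)              # hypocentral distance, km
vs30 = rng.uniform(200.0, 900.0, n)            # site stiffness proxy, m/s
# Synthetic "true" attenuation relation with noise (illustrative only)
ln_pga = (1.2 * mag - 1.6 * np.log(dist + 10.0)
          - 0.4 * np.log(vs30 / 760.0) + rng.normal(0, 0.5, n))

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.25, random_state=7)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=7))]:
    model.fit(X_tr, y_tr)
    print(f"{name:18s} R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```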
Procedia PDF Downloads 122
16211 Using Lean Six-Sigma in the Improvement of Service Quality at Aviation Industry: Case Study at the Departure Area in KKIA
Authors: Tareq Al Muhareb, Jasper Graham-Jones
Abstract:
Service quality is a significant element in the aviation industry, especially in international airports. In this paper, the researchers built a model based on Lean Six Sigma methodologies and applied it in the departure area at KKIA (King Khalid International Airport) in order to assess it. This model is characterized by many special features that can overcome the cultural differences in the aviation industry, which are considered the most critical circumstance in this field. Applying the model of this study depends on following the DMAIC procedure systemized in lean thinking aspects. This Lean Six Sigma model, as a managerial procedure, is mostly focused on a change management culture that requires a high level of planning, organizing, modifying, and controlling in order to benefit from strengths as well as revoke weaknesses.
Keywords: Lean Six Sigma, service quality, aviation industry, KKIA (King Khalid International Airport), SERVQUAL
Procedia PDF Downloads 430
16210 Optimization of Syngas Quality for Fischer-Tropsch Synthesis
Authors: Ali Rabah
Abstract:
This research received no grant or financial support from any public, commercial, or non-governmental agency. The author conducted this work as part of his normal research activities as a professor of Chemical Engineering at the University of Khartoum, Sudan. While fossil oil reserves have been receding, the demand for diesel and gasoline has been growing. In recent years, syngas of biomass origin has been emerging as a viable feedstock for Fischer-Tropsch (FT) synthesis, a process for manufacturing synthetic gasoline and diesel. This paper reports the optimization of syngas quality to match FT synthesis requirements. The optimization model maximizes the thermal efficiency under the constraint H2/CO ≥ 2.0 and operating conditions of equivalence ratio (0 ≤ ER ≤ 1.0), steam-to-biomass ratio (0 ≤ SB ≤ 5), and gasification temperature (500 °C ≤ Tg ≤ 1300 °C). The optimization model is executed using the optimization section of the Model Analysis Tools of the Aspen Plus simulator. The model is tested using eleven (11) types of MSW. The optimum operating conditions under which the objective function and the constraint are satisfied are ER = 0, SB = 0.66-1.22, and Tg = 679-763 °C. Under the optimum operating conditions, the syngas quality is H2 = 52.38-58.67 mole percent, LHV = 12.55-17.15 MJ/kg, N2 = 0.38-2.33 mole percent, and H2/CO ≥ 2.15. The generalized optimization model reported could be extended to any other type of biomass and coal.
Keywords: syngas, MSW, optimization, Fischer-Tropsch
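The same kind of constrained search can be illustrated outside Aspen Plus with a toy surrogate model; in the sketch below, the efficiency and H2/CO expressions are invented placeholders, not the gasifier model used in the paper, and only the decision variables and the H2/CO ≥ 2 constraint follow the abstract.

```python
# Minimal sketch of maximizing a (toy) thermal efficiency surrogate subject to
# H2/CO >= 2 over ER, SB, and Tg; the surrogate functions are illustrative only.
import numpy as np
from scipy.optimize import minimize

def thermal_efficiency(x):
    er, sb, tg = x                       # equivalence ratio, steam/biomass, T (deg C)
    # Toy surrogate: efficiency peaks at low ER, moderate SB, mid-range Tg
    return 0.75 - 0.5 * er - 0.02 * (sb - 1.0) ** 2 - 1e-6 * (tg - 720.0) ** 2

def h2_co_ratio(x):
    er, sb, tg = x
    # Toy surrogate: steam addition raises H2/CO, air (ER) lowers it
    return 1.2 + 1.0 * sb - 1.5 * er + 0.0005 * (tg - 500.0)

result = minimize(
    lambda x: -thermal_efficiency(x),                 # maximize via the negative
    x0=[0.2, 1.0, 700.0],
    bounds=[(0.0, 1.0), (0.0, 5.0), (500.0, 1300.0)],
    constraints=[{"type": "ineq", "fun": lambda x: h2_co_ratio(x) - 2.0}],  # H2/CO >= 2
    method="SLSQP",
)
er, sb, tg = result.x
print(f"ER={er:.2f}, SB={sb:.2f}, Tg={tg:.0f} C, efficiency={-result.fun:.3f}")
```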
Procedia PDF Downloads 80
16209 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method
Authors: K. Meera Saheb, K. Krishna Bhaskar
Abstract:
Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to a severe dynamic environment they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia are to be considered in evaluating the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half when compared to the Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear flexible beams with a central concentrated mass were developed, and the results are presented in digital form.
Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates
Procedia PDF Downloads 226
16208 Extension of a Competitive Location Model Considering a Given Number of Servers and Proposing a Heuristic for Solving
Authors: Mehdi Seifbarghy, Zahra Nasiri
Abstract:
The competitive location problem deals with locating new facilities to provide a service (or goods) to the customers of a given geographical area where other facilities (competitors) offering the same service are already present. The new facilities will have to compete with the existing facilities for capturing the market share. This paper proposes a new model to maximize the market share in which customers choose the facilities based on traveling time, waiting time, and attractiveness. The attractiveness of a facility is considered as a parameter in the model. A heuristic is proposed to solve the problem.
Keywords: competitive location, market share, facility attractiveness, heuristic
Procedia PDF Downloads 524
16207 Research on Air Pollution Spatiotemporal Forecast Model Based on LSTM
Authors: JingWei Yu, Hong Yang Yu
Abstract:
At present, increasingly serious air pollution in various cities of China has made people pay more attention to the air quality index (hereinafter referred to as AQI) of their living areas. In this situation, it is of great significance to predict air pollution in heavily polluted areas. In this paper, based on the LSTM time series model, a spatiotemporal prediction model of PM2.5 concentration in Mianyang, Sichuan Province, is established. The model fully considers the temporal variability and spatial distribution characteristics of PM2.5 concentration. The spatial correlation of air quality at different locations is captured by using the air quality status of nearby monitoring stations, including AQI and meteorological data, to predict the air quality of a given monitoring station. The experimental results show that the method has good prediction accuracy: the fitting degree with the actual measured data reaches more than 0.7. The method can therefore be applied to the modeling and prediction of the spatial and temporal distribution of regional PM2.5 concentration.
Keywords: LSTM, PM2.5, neural networks, spatio-temporal prediction
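A minimal sketch of an LSTM regressor of this kind is shown below, using synthetic windows of station and neighbour observations; the shapes, features, and hyperparameters are illustrative and not taken from the Mianyang study.

```python
# Minimal sketch of an LSTM regressor for next-hour PM2.5 from a window of
# past observations (PM2.5, AQI, weather, ...); data here is synthetic.
import numpy as np
import tensorflow as tf

n_samples, window, n_features = 2000, 24, 6   # 24 past hours, 6 input channels
rng = np.random.default_rng(3)
X = rng.normal(size=(n_samples, window, n_features)).astype("float32")
y = X[:, -1, 0] * 0.8 + rng.normal(0, 0.1, n_samples).astype("float32")  # synthetic target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                 # predicted PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print("held-out MSE:", model.evaluate(X[-400:], y[-400:], verbose=0))
```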
Procedia PDF Downloads 134
16206 Evaluation of CERES Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere systems, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models of DSSAT v4.6 were calibrated and evaluated for Haryana, India, one of the major wheat- and rice-producing states. The simulation runs were made under irrigated conditions and three N-P-K fertilizer application doses to estimate crop yield and other growth parameters along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated LAI against the remote-sensing-derived LAI; the LAI product from remote sensing provides the advantage of a spatial, timely, and accurate assessment of the crop. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During the validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
Keywords: simulation model, CERES-Wheat and Rice model, crop yield, genetic coefficient
Procedia PDF Downloads 305
16205 Single Ended Primary Inductance Converter with Internal Model Controller
Authors: Fatih Suleyman Taskincan, Ahmet Karaarslan
Abstract:
In this article, a study and analysis of the Single Ended Primary Inductance Converter (SEPIC) is presented for battery charging applications to be used in military applications. The use of this kind of converter comes from its advantage of non-reversed polarity at the output. As the capacitors charge and discharge through the inductance, peak currents do not occur on the capacitors; therefore, the efficiency will be high compared to buck-boost converters. In this study, the converter (SEPIC) is designed to be operated with an Internal Model Controller (IMC). Traditional controllers like the proportional-integral controller are not preferred because of their linear behavior. Hence, an IMC is designed for this converter. This controller is a model-based controller and provides more robustness and better set-point tracking. Moreover, it can be used for an unstable process where a conventional controller cannot handle the dynamic operation. The Matlab/Simulink environment is used to simulate the converter and its controller, and the results are shown and discussed.
Keywords: DC/DC converter, single ended primary inductance converter, SEPIC, internal model controller, IMC, switched mode power supply
Procedia PDF Downloads 629
16204 Study of Inhibition of the End Effect Based on AR Model Predict of Combined Data Extension and Window Function
Authors: Pan Hongxia, Wang Zhenhua
Abstract:
In this paper, the end effect that arises in the EMD decomposition process is suppressed by combining two approaches: data extension based on AR model prediction and a window function method. Simulation on a synthetic signal shows that the combined method achieves the desired effect. The method is then applied to gearbox test data, where it also performs well and improves the calculation accuracy of the subsequent data processing, laying a good foundation for gearbox fault diagnosis under various working conditions.
Keywords: gearbox, fault diagnosis, AR model, end effect
Procedia PDF Downloads 366
16203 Media Planning Decisions and Preferences through a Goal Programming Model: An Application to a Media Campaign for a Mature Product in Italy
Authors: Cinzia Colapinto, Davide La Torre
Abstract:
Goal Programming (GP) and its variants have been applied to marketing and to specific marketing issues, such as media scheduling problems, over the last decades. The concept of satisfaction functions has been widely utilized in GP models to explicitly integrate the Decision-Maker’s preferences. These preferences can be guided by the available information regarding the decision-making situation. A GP model with satisfaction functions for media planning decisions is proposed and then illustrated through a case study related to a marketing/media campaign in the Italian market.
Keywords: goal programming, satisfaction functions, media planning, tourism management
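For illustration, a minimal weighted goal-programming sketch for a media-planning decision is given below (using PuLP); the channels, coefficients, and goal targets are invented, and the paper's satisfaction functions are not reproduced, only plain deviation variables.

```python
# Minimal sketch of weighted goal programming for allocating media insertions;
# all channel data and targets are hypothetical.
import pulp

channels = ["tv", "radio", "web"]
cost = {"tv": 50, "radio": 10, "web": 5}          # cost per insertion (k euro)
reach = {"tv": 900, "radio": 150, "web": 80}      # audience per insertion (k people)

prob = pulp.LpProblem("media_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("insertions", channels, lowBound=0, cat="Integer")

# Goal 1: total reach of at least 5000 (k people); d1_minus is the shortfall
d1_minus = pulp.LpVariable("reach_shortfall", lowBound=0)
prob += pulp.lpSum(reach[c] * x[c] for c in channels) + d1_minus >= 5000

# Goal 2: spend no more than 400 (k euro); d2_plus is the overrun
d2_plus = pulp.LpVariable("budget_overrun", lowBound=0)
prob += pulp.lpSum(cost[c] * x[c] for c in channels) - d2_plus <= 400

# Objective: minimize the weighted sum of goal deviations
prob += 2.0 * d1_minus + 1.0 * d2_plus
prob.solve(pulp.PULP_CBC_CMD(msg=False))

print({c: int(x[c].value()) for c in channels},
      "reach shortfall:", d1_minus.value(), "budget overrun:", d2_plus.value())
```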
Procedia PDF Downloads 399
16202 Measurement Tools of the Maturity Model for IT Service Outsourcing in Higher Education Institutions
Authors: Victoriano Valencia García, Luis Usero Aragonés, Eugenio J. Fernández Vicente
Abstract:
Nowadays, the successful implementation of ICTs is vital for almost any kind of organization. Good governance and ICT management are essential for delivering value, managing technological risks, managing resources and performance measurement. In addition, outsourcing is a strategic IT service solution which complements IT services provided internally in organizations. This paper proposes the measurement tools of a new holistic maturity model based on standards ISO/IEC 20000 and ISO/IEC 38500, and the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing. These measurement tools allow independent validation and practical application in the field of higher education, using a questionnaire, metrics tables, and continuous improvement plan tables as part of the measurement process. Guidelines and standards are proposed in the model for facilitating adaptation to universities and achieving excellence in the outsourcing of IT services.
Keywords: IT governance, IT management, IT services, outsourcing, maturity model, measurement tools
Procedia PDF Downloads 592
16201 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used for the development of a family of software products that share common and variable features. The feature model is a domain model of an SPL that consists of common and variable features with predefined relationships and constraints. Multiple SPLs consist of a number of similar common and variable features, such as mobile phones and tablets. Reusability of common and variable features from different SPL domains is a complex task due to the external relationships and constraints of features in the feature model. To increase the reusability of feature model resources from domain engineering, it is required to manage the commonality of features at the level of SPL application development. In this research, we have proposed an approach that combines multiple SPLs into a single domain and converts them to a common feature model. Extracting the common features from different feature models is more effective and reduces cost and time to market for application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of variation points and constraints. By using this approach, the reusability of features from multiple feature models can be increased. The impact of this research is to reduce development cost and time to market and to increase the number of SPL products.
Keywords: software product line, feature model, variability management, multi-SPLs
Procedia PDF Downloads 69
16200 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy
Authors: Ayoub El Bourtali, Abdessamed Najine, Amrou Moussa Benmoussa
Abstract:
One of the main goals of river engineering is river training, which is defined as controlling and predicting the behavior of a river. It involves taking effective measures to eliminate all related risks and thus improve the river system. In some rivers, the riverbed continues to erode and degrade; therefore, equilibrium will never be reached. Generally, river geometric characteristics and riverbed erosion analysis are some of the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is the second governing process in hydrodynamics and has a major impact on the ecological chain and socio-economic processes. This study aims to integrate new computer technology that can analyze erosion and hydraulic problems through computer simulation and modeling. Choosing the right model remains a difficult and sensitive job for field engineers. This paper makes use of version 5.0.4 of the HEC-RAS model. The river section is adopted according to the gauged station and the proximity of the adjustment. In this work, we demonstrate how 2D hydraulic modeling helped clarify the design and provided visualizations of depths and velocities at the riverbanks and throughout advanced structures. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) 2D model was used to create a hydraulic study of the erosion model. The geometric data were generated from the 12.5-meter x 12.5-meter resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output also shows patterns of riverbank change, which can help reduce problems caused by erosion.
Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment
Procedia PDF Downloads 189
16199 Design and Implementation of the Embedded Control System for the Electrical Motor Based Cargo Vehicle
Authors: Syed M. Rizvi, Yiqing Meng, Simon Iwnicki
Abstract:
With increased demand in the land cargo industry, it is predicted that freight trade will rise to a record $1.1 trillion in revenue and volume in the years to come. This increase is mainly driven by the e-commerce model that is ever so popular in the consumer market. Many innovative ideas have stemmed from this demand and change in lifestyle, examples of which include cargo e-bikes and drones. Rural and urban areas are also facing air quality challenges, with pressure to keep pollution levels in city centres to a minimum. For this purpose, this paper presents the design and implementation of a nonlinear PID control system, employing a microcontroller and a low-cost sensing technique, for controlling an electric-motor-based cargo vehicle carrying various loads so that it follows a leading vehicle (bike). Using this system, the cargo vehicle imposes no load influence on the bike rider under different gradient conditions, such as hill climbing. The system is integrated with a microcontroller to continuously measure several parameters, such as the relative displacement between the bike and the cargo vehicle and the gradient of the road, and to process these measurements to create a portable controller capable of controlling the performance of the electric vehicle without the need for a PC. As a result, in the case of carrying a 180 kg parcel weight, the cargo vehicle can maintain a reasonable spacing from the bike over a short length of sensor travel.
Keywords: cargo, e-bike, microcontroller, embedded system, nonlinear PID, self-adaptive, inertial measurement unit (IMU)
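A minimal sketch of a discrete nonlinear PID loop regulating the bike-to-cargo spacing is shown below; the plant model, masses, gains, force limits, and the gain-scheduling rule are assumptions for illustration, not the controller reported in the paper.

```python
# Minimal sketch of a discrete PID loop holding a target spacing behind a bike
# on a constant uphill grade; all numbers are illustrative assumptions.
import numpy as np

dt, steps = 0.02, 2500                 # 50 Hz control loop, 50 s of simulation
target_gap = 1.0                       # desired spacing, m
kp, ki, kd = 300.0, 30.0, 400.0        # nominal PID gains (assumed)

mass = 180.0 + 60.0                    # parcel load + assumed vehicle mass, kg
gap, rel_speed = 2.0, 0.0              # initial spacing (m) and spacing rate (m/s)
integral, prev_err = 0.0, gap - target_gap

for k in range(steps):
    err = gap - target_gap
    integral += err * dt
    derivative = (err - prev_err) / dt
    # Simple nonlinearity: stiffen the proportional action for large errors
    kp_eff = kp * (1.0 + 0.5 * min(abs(err), 1.0))
    force = kp_eff * err + ki * integral + kd * derivative
    force = np.clip(force, -400.0, 400.0)          # assumed motor force limits, N
    grade_force = mass * 9.81 * 0.03               # constant 3% uphill disturbance
    rel_speed += (-force + grade_force) / mass * dt  # drive force closes the gap
    gap += rel_speed * dt
    prev_err = err

print(f"gap after {steps * dt:.0f} s: {gap:.3f} m (target {target_gap} m)")
```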
Procedia PDF Downloads 209
16198 Numerical Simulation of the Kurtosis Effect on the EHL Problem
Authors: S. Gao, S. Srirattayawong
Abstract:
In this study, a computational fluid dynamics (CFD) model has been developed for studying the effect of the surface roughness profile on the EHL problem. The cylinders' contact geometry, the meshing, and the solution of the conservation of mass and momentum equations are carried out using the commercial software packages ICEM CFD and ANSYS Fluent. User-defined functions (UDFs) for the density, viscosity, and elastic deformation of the cylinders as functions of pressure and temperature have been defined for the CFD model. Three different surface roughness profiles are created and incorporated into the CFD model. It is found that the developed CFD model can predict the characteristics of fluid flow and heat transfer in the EHL problem, including the leading parameters such as the pressure distribution, minimum film thickness, viscosity, and density changes. The obtained results show that the pressure profile at the center of the contact area relates directly to the roughness amplitude. A rough surface with a kurtosis value over 3 produces a more strongly fluctuating pressure distribution than the other cases.
Keywords: CFD, EHL, kurtosis, surface roughness
Procedia PDF Downloads 320
16197 Risk of Fatal and Non-Fatal Coronary Heart Disease and Stroke Events among Adult Patients with Hypertension: Basic Markov Model Inputs for Evaluating Cost-Effectiveness of Hypertension Treatment: Systematic Review of Cohort Studies
Authors: Mende Mensa Sorato, Majid Davari, Abbas Kebriaeezadeh, Nizal Sarrafzadegan, Tamiru Shibru, Behzad Fatemi
Abstract:
Markov models, like the cardiovascular disease (CVD) policy model-based simulation, are being used for evaluating the cost-effectiveness of hypertension treatment. Stroke, angina, myocardial infarction (MI), cardiac arrest, and all-cause mortality are included in this model. Hypertension is a risk factor for a number of vascular and cardiac complications and CVD outcomes. Objective: This systematic review was conducted to evaluate the comprehensiveness of this model across different regions globally. Methods: We searched articles written in the English language from PubMed/Medline, Ovid/Medline, Embase, Scopus, Web of Science, and Google Scholar with a systematic search query. Results: Thirteen cohort studies involving a total of 2,165,770 adults (1,666,554 hypertensive adults and 499,226 adults with treatment-resistant hypertension) were included in this review. Hypertension is clearly associated with coronary heart disease (CHD) and stroke mortality, unstable angina, stable angina, MI, heart failure (HF), sudden cardiac death, transient ischemic attack, ischemic stroke, subarachnoid hemorrhage, intracranial hemorrhage, peripheral arterial disease (PAD), and abdominal aortic aneurysm (AAA). The association between HF and hypertension is variable across regions. Treatment-resistant hypertension is associated with a higher relative risk of developing major cardiovascular events and all-cause mortality when compared with non-resistant hypertension; however, it is not included in the previous CVD policy model. Conclusion: The CVD policy model can be used in most regions for the evaluation of the cost-effectiveness of hypertension treatment. However, hypertension is highly associated with HF in Latin America, the Caribbean, Eastern Europe, and Sub-Saharan Africa. Therefore, it is important to consider HF in the CVD policy model for evaluating the cost-effectiveness of hypertension treatment in these regions. We do not suggest the inclusion of PAD and AAA in the CVD policy model for evaluating the cost-effectiveness of hypertension treatment due to a lack of sufficient evidence. Researchers should consider the effect of treatment-resistant hypertension, either by including it in the basic model or when setting the model assumptions.
Keywords: cardiovascular disease policy model, cost-effectiveness analysis, hypertension, systematic review, twelve major cardiovascular events
Procedia PDF Downloads 71
16196 Standard Resource Parameter Based Trust Model in Cloud Computing
Authors: Shyamlal Kumawat
Abstract:
Cloud computing is shifting the way IT capital is utilized. Cloud computing dynamically delivers convenient, on-demand access to shared pools of software resources, platforms, and hardware as a service through the internet. The cloud computing model is made possible by sophisticated automation, provisioning, and virtualization technologies. Users want the ability to access these services, including infrastructure resources, how and when they choose. To accommodate this shift in the consumption model, technology has to deal with the security, compatibility, and trust issues associated with delivering that convenience to application business owners, developers, and users. In this context, trust has attracted extensive attention in cloud computing as a solution to enhance security. This paper proposes a trusted computing technology, through a Standard Resource Parameter Based Trust Model in cloud computing, to select appropriate cloud service providers. The direct trust of cloud entities is computed on the basis of past interaction evidence and sustained by their present performance. Various SLA parameters between consumer and provider are considered in the trust computation and compliance process. Simulations are performed using the CloudSim framework, and the experimental results show that the proposed model is effective and extensible.
Keywords: cloud, IaaS, SaaS, PaaS
Procedia PDF Downloads 330
16195 Verification and Validation of MapReduce Program Model for Parallel K-Medoid Algorithm on Hadoop Cluster
Authors: Trapti Sharma, Devesh Kumar Srivastava
Abstract:
This paper is an analysis study of a MapReduce implementation, aiming to verify and validate the MapReduce solution model for the parallel K-Medoid algorithm on a Hadoop cluster. MapReduce is a programming model which allows huge amounts of data to be processed in parallel on a large number of devices. It is especially well suited to constant or moderately changing sets of data, since the setup cost of a job is usually high. MapReduce has slowly become the framework of choice for “big data”. The MapReduce model allows for the systematic and rapid organization of large-scale data with a cluster of compute nodes. One of the primary concerns in Hadoop is how to minimize the completion length (i.e., makespan) of a set of MapReduce jobs. In this paper, we have verified and validated various MapReduce applications, namely wordcount, grep, terasort, and the parallel K-Medoid clustering algorithm. We have found that as the number of nodes increases, the completion time decreases.
Keywords: Hadoop, MapReduce, K-Medoid, validation, verification
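As an illustration of one of the applications listed above, the classic wordcount job can be expressed as a pair of Hadoop Streaming scripts; this is an assumption for illustration, since the paper does not state whether native Java jobs or Streaming were used.

```python
# mapper.py -- minimal sketch of the classic wordcount mapper for Hadoop
# Streaming; emits "word<TAB>1" for every token read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word.lower()}\t1")
```

```python
# reducer.py -- sums the counts for each word; Hadoop delivers the keys
# sorted, so a running total per key is sufficient.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

Assuming the scripts are named as in the comments, they could be submitted with the Hadoop Streaming jar using its -files, -mapper, -reducer, -input, and -output options.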
Procedia PDF Downloads 369
16194 An Energy-Efficient Model of Integrating Telehealth IoT Devices with Fog and Cloud Computing-Based Platform
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The rapid growth of telehealth Internet of Things (IoT) devices has raised concerns about energy consumption and efficient data processing. This paper introduces an energy-efficient model that integrates telehealth IoT devices with a fog and cloud computing-based platform, offering a sustainable and robust solution to overcome these challenges. Our model employs fog computing as a localized data processing layer while leveraging cloud computing for resource-intensive tasks, significantly reducing energy consumption. We incorporate adaptive energy-saving strategies. Simulation analysis validates our approach's effectiveness in enhancing energy efficiency for telehealth IoT systems integrated with localized fog nodes and both private and public cloud infrastructures. Future research will focus on further optimization of the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability in other healthcare and industry sectors.
Keywords: energy-efficient, fog computing, IoT, telehealth
Procedia PDF Downloads 86
16193 The Effect of AMB Number on the Dynamic Behavior of a Spur Gear Reducer in Non-Stationary Regime
Authors: Najib Belhadj Messaoud, Slim Souissi
Abstract:
The nonlinear dynamic behavior of a single-stage spur gear reducer is studied in this paper in the transient regime. The driving and driven rotors are, respectively, powered by a motor torque Cm and loaded by a resistive torque Cr. They are supported by two identical Active Magnetic Bearings (AMBs). Gear excitation is induced by the motor torque and load variation, in addition to the fluctuation of the meshing stiffness due to the variation of the input rotational speed. Three models of AMBs were used, with four, six, and eight magnets. They are operated by a PD controller and powered by control and bias currents. The dynamic parameters of the AMBs are modeled by stiffness and damping matrices computed by differentiating the electromagnetic forces. The equations of motion are solved iteratively using the Newmark time integration method. In the first part of the study, the model is powered by an electric motor, and by a four-stroke, four-cylinder diesel engine in the second part. The numerical results of the dynamic responses of the system confirm the significant effect of the transient regime on the dynamic behavior of a gear set, particularly in the case of engine acyclism. The results also confirm the influence of the number of magnets per AMB on the dynamic behavior of the system. Indeed, vibrations were more important in the case of the gear reducer supported by AMBs with four magnets.
Keywords: motor, stiffness, gear, acyclism, fluctuation, torque
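A minimal sketch of the Newmark time-integration scheme named above is given next, applied to a generic linear M x'' + C x' + K x = f(t) system; the 2-DOF matrices are placeholders for illustration, not the gear/AMB model itself.

```python
# Minimal sketch of average-acceleration Newmark integration for a generic
# linear multi-DOF system; matrices and forcing are illustrative placeholders.
import numpy as np

def newmark(M, C, K, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    ndof = M.shape[0]
    x = np.zeros((n_steps + 1, ndof)); v = np.zeros_like(x); a = np.zeros_like(x)
    x[0], v[0] = x0, v0
    a[0] = np.linalg.solve(M, f(0.0) - C @ v0 - K @ x0)
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for i in range(n_steps):
        t = (i + 1) * dt
        rhs = (f(t)
               + M @ (x[i] / (beta * dt ** 2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
               + C @ (gamma / (beta * dt) * x[i] + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        x[i + 1] = np.linalg.solve(K_eff, rhs)
        a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return x, v, a

M = np.diag([1.0, 2.0]); K = np.array([[400.0, -150.0], [-150.0, 300.0]]); C = 0.02 * K
force = lambda t: np.array([10.0 * np.sin(25.0 * t), 0.0])
x, v, a = newmark(M, C, K, force, np.zeros(2), np.zeros(2), dt=1e-3, n_steps=5000)
print("displacement at final step:", x[-1])
```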
Procedia PDF Downloads 459
16192 Petri Net Modeling and Simulation of a Call-Taxi System
Authors: T. Godwin
Abstract:
A call-taxi system is a type of taxi service where a taxi can be requested through a phone call or a mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick up a customer, as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multigraph to simulate the activities of a system. The Petri net model is translated into a simulation model, and a call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri-net-based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
Keywords: call-taxi, discrete event system, Petri net, simulation modeling
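A minimal sketch of a token-game simulation for a simplified call-taxi Petri net is given below; the places, transitions, and arrival rate are a stand-in for the net developed in the paper.

```python
# Minimal sketch of a Petri net token game for a simplified call-taxi system:
# places hold tokens (idle taxis, waiting requests), transitions fire when all
# of their input places are marked.
import random

# marking: number of tokens in each place
marking = {"idle_taxi": 3, "waiting_request": 0, "assigned": 0, "in_service": 0}

# transition name -> (input places, output places)
transitions = {
    "request_arrives": ([], ["waiting_request"]),
    "dispatch":        (["idle_taxi", "waiting_request"], ["assigned"]),
    "pickup":          (["assigned"], ["in_service"]),
    "drop_off":        (["in_service"], ["idle_taxi"]),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name):
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

random.seed(4)
for step in range(30):
    # Requests arrive at random; then fire one enabled internal transition
    if random.random() < 0.4:
        fire("request_arrives")
    candidates = [t for t in ("dispatch", "pickup", "drop_off") if enabled(t)]
    if candidates:
        fire(random.choice(candidates))
print("final marking:", marking)
```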
Procedia PDF Downloads 424