Search results for: decision tree model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20262

15192 Inferring Human Mobility in India Using Machine Learning

Authors: Asra Yousuf, Ajaykumar Tannirkulum

Abstract:

Inferring rural-urban migration trends can help design effective policies that promote better urban planning and rural development. In this paper, we describe how machine learning algorithms can be applied to predict the internal migration decisions of individuals. We consider data collected from household surveys in Tamil Nadu to train our model. To measure the performance of the model, we use data on past migration from the National Sample Survey Organisation of India. The features used to train the model include the socioeconomic characteristics of each individual, such as age, gender, place of residence, outstanding loans, household size, etc., together with their past migration history. We perform a comparative analysis of a number of machine learning algorithms to determine their prediction accuracy. Our results show that machine learning algorithms provide higher prediction accuracy than statistical models. Our goal in this research is to propose the use of data science techniques in understanding human decisions and behaviour in developing countries.
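
As a rough illustration of the comparative analysis described above, the sketch below benchmarks a statistical baseline (logistic regression) against tree-based learners with cross-validated accuracy in scikit-learn. The survey data are not public, so a synthetic stand-in is generated; the feature layout is an assumption, not the authors' actual schema.

```python
# A minimal sketch of the kind of model comparison the abstract describes.
# Synthetic "household survey" data stand in for the Tamil Nadu surveys.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Features (age, gender, loans, ...) -> migrated (0/1); layout is hypothetical.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)

models = {
    "logistic (statistical baseline)": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=6, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:32s} CV accuracy = {acc:.3f}")
```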

Keywords: development, migration, internal migration, machine learning, prediction

Procedia PDF Downloads 274
15191 Multichannel Scheme under Fairness Environment for Cognitive Radio Networks

Authors: Hans Marquez Ramos, Cesar Hernandez, Ingrid Páez

Abstract:

This paper develops a multiple channel assignment model that makes the most efficient use of spectrum opportunities in cognitive radio networks. The developed scheme makes several assignments of available, frequency-adjacent channels, as required by transmissions demanding wider bandwidth, under a fairness environment. The hybrid assignment model is made up of two algorithms: one that ranks and selects the available frequency channels, and another in charge of establishing a fairness criterion, so that spectrum opportunities are not restricted for all the other secondary users who wish to transmit. Measurements were made of average bandwidth and average delay, as well as a fairness computation, for several channel assignments. The results were evaluated against experimental spectrum occupancy data captured from the GSM frequency band. The developed model shows evidence of improved use of spectrum opportunities and a wider average transmission bandwidth for each secondary user, while maintaining the fairness criterion in channel assignment.
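
One plausible reading of the two-algorithm scheme is sketched below: score blocks of frequency-adjacent channels by availability, grant non-overlapping blocks to secondary users, and check the allocation with Jain's fairness index. The ranking metric, block size and availability figures are illustrative assumptions; the paper's exact algorithms are not reproduced here.

```python
# A minimal sketch: rank adjacent-channel blocks, assign them to secondary
# users (SUs), and report Jain's fairness index over the allocation.
import numpy as np

def jain_fairness(x):
    """Jain's index: 1.0 = perfectly fair, 1/n = maximally unfair."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

availability = np.array([0.9, 0.2, 0.8, 0.85, 0.3, 0.7, 0.75, 0.1])  # P(idle)
n_users, block = 3, 2            # each SU needs 2 frequency-adjacent channels

# Score every block of adjacent channels by its mean availability.
scores = [(availability[i:i + block].mean(), i)
          for i in range(len(availability) - block + 1)]
allocated, used = {}, set()
for score, start in sorted(scores, reverse=True):
    chans = set(range(start, start + block))
    if len(allocated) < n_users and not chans & used:
        allocated[f"SU{len(allocated) + 1}"] = sorted(chans)
        used |= chans

bandwidth = [len(c) for c in allocated.values()]
print(allocated, "Jain index:", round(jain_fairness(bandwidth), 3))
```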

Keywords: bandwidth, fairness, multichannel, secondary users

Procedia PDF Downloads 508
15190 Effect of Measured and Calculated Static Torque on Instantaneous Torque Profile of Switched Reluctance Motor

Authors: Ali Asghar Memon

Abstract:

The simulation modeling of a switched reluctance (SR) machine typically relies on three data tables: the flux linkage characteristics, the co-energy characteristics, and the static torque characteristics. It has been noticed from the literature that the static torque data used in simulation models are almost always calculated rather than measured. This paper presents a simulation model that includes measured and calculated static torque data separately, to see the effect of each on the instantaneous torque profile of the machine. To the best of the author's knowledge, this is the first time that static torque derived from co-energy information and static torque measured directly from experiments have been used separately in the same model. This research is helpful for the accurate modeling of switched reluctance drives.
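
Whichever static torque table is used, measured or calculated, a simulation consumes it the same way: the instantaneous torque is interpolated from the surface T(theta, i) at each solver step. The sketch below shows that lookup with SciPy; the grid values are placeholders, not machine data.

```python
# A minimal sketch of using a static torque table T(theta, i) inside an
# SR-machine simulation via 2-D interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

theta = np.linspace(0.0, 30.0, 31)      # rotor position, deg (one pole pitch)
current = np.linspace(0.0, 10.0, 11)    # phase current, A
# Placeholder torque surface; in practice this comes from co-energy (FEA)
# or from a static torque measurement rig.
T_grid = np.outer(np.sin(np.radians(theta * 6)), current ** 2) * 0.01

torque_lut = RegularGridInterpolator((theta, current), T_grid)

# Instantaneous torque along a simulated trajectory (theta(t), i(t)):
pts = np.column_stack([np.linspace(0, 30, 5), [0, 4, 8, 6, 2]])
print(torque_lut(pts))                  # N*m at each time step
```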

Keywords: static characteristics, current chopping, flux linkage characteristics, switched reluctance motor

Procedia PDF Downloads 294
15189 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. It focuses on timetables, rolling stock and crew duties, but does not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then identifies the input variables that do not influence the outputs and can therefore be fixed. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested on a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the travelling time, the train delays, the traction energy, etc. A Pareto front is also built.
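
The factor-prioritization step can be prototyped with Sobol indices as sketched below, using the SALib library and a toy stand-in for the multiphysics simulator; the variable names, bounds and toy delay function are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of Sobol-based factor prioritization with SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["departure_spacing", "speed_reduction", "n_trains"],
    "bounds": [[60, 600], [0.0, 0.5], [5, 20]],
}

X = saltelli.sample(problem, 1024)          # N*(2D+2) input samples

def toy_simulator(x):
    spacing, slowdown, n = x
    # Stand-in for the total delay returned by the railway simulation.
    return n * (1 + slowdown) * 1e4 / spacing

Y = np.apply_along_axis(toy_simulator, 1, X)
Si = sobol.analyze(problem, Y)              # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:18s} S1={s1:+.2f}  ST={st:+.2f}")
```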

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 400
15188 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with an overall RMSE of 2.64 g C m⁻² yr⁻¹, against 2.91 for XGB and 3.54 for the best FFNN. The most significant improvement was in the estimation of the extreme diurnal values (during midday and sunrise), as well as the nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the greater photosynthetic activity of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
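
A rough sketch of the two-layer architecture follows: five feedforward networks produce the first-layer estimates, and an XGBoost model maps those estimates to the final flux prediction. The network sizes and synthetic regression data are placeholders for the OzFlux inputs, and for brevity the first-layer predictions are in-sample (held-out folds would be used in practice).

```python
# A minimal sketch of the FFNN -> XGBoost stacking described above.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

X, y = make_regression(n_samples=3000, n_features=10, noise=5.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Layer 1: five FFNNs with different hidden structures.
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=1)
         for h in [(32,), (64,), (32, 16), (64, 32), (128,)]]
layer1_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in ffnns])
layer1_te = np.column_stack([m.predict(X_te) for m in ffnns])

# Layer 2: XGBoost takes the FFNN outputs as its inputs.
xgb = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
xgb.fit(layer1_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, xgb.predict(layer1_te)))
print(f"ensemble RMSE: {rmse:.2f}")
```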

Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 145
15187 Educational Institutional Approach for Livelihood Improvement and Sustainable Development

Authors: William Kerua

Abstract:

The PNG University of Technology (Unitech) has a mandate covering teaching, research and extension education. Given this mandate, the Agriculture Department established the ‘South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD)’ in 2004. SPISARD was established as a vehicle to improve the farming systems practiced in selected villages by undertaking a pluralistic extension method through an ‘educational institutional approach’. Unlike other models, SPISARD’s educational institutional approach stresses improving whole farming systems in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of their farming systems in a sustainable way, to increase income, improve nutrition and food security, and provide livelihood enhancement training. The second is to enrich the Department’s curriculum through teaching, research and extension, and by drawing inputs from the farming community. SPISARD has established a number of model villages in various provinces of Papua New Guinea (PNG), with many positive outcomes and success stories. The adoption of the ‘educational institutional approach’ thus binds research, extension and training into one package, using students and academic staff, through the establishment of model villages that deliver development and extension to communities. The centre (SPISARD) coordinates the activities of the model village programs and their linkages. The key to the development of the farming systems is establishing and coordinating linkages, collaboration and partnerships, both within and outside institutions, organizations and agencies. SPISARD has a six-step strategy for the development of sustainable agriculture and rural development. These steps are (i) establish contact and identify model villages, (ii) develop model village resource centres for research and training, (iii) conduct baseline surveys to identify the problems/needs of model villages, (iv) develop solution strategies, (v) implement, and (vi) evaluate the impact of the solution programs. SPISARD envisages that the farming systems practiced will be improved if the villages are made the centre of SPISARD activities. Therefore, SPISARD has developed a model village approach to channel rural development. The model villages, once established, become the conduit points where teaching, training, research and technology transfer take place. This approach is different and unique compared to existing ones in that the development process takes place in the farmers’ environment, with immediate ‘real time’ feedback mechanisms based on the farmers’ perspective and satisfaction. So far, we have developed 14 model villages and have conducted 75 training courses on 21 different topics in 8 provinces, reaching a total of 2,832 participants of both sexes. The aim of these training courses is to participate directly with farmers in the pursuit of improving their farming systems to increase productivity and income and to secure food security and nutrition, thus improving their livelihoods.

Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture

Procedia PDF Downloads 158
15186 Competitiveness and Pricing Policy Assessment for Resilience Surface Access System at Airports

Authors: Dimitrios J. Dimitriou

Abstract:

Worldwide, air transport is growing very fast, and many changes have taken place in planning, management and decision-making processes. Given the complexity of airport operation, the best use of existing capacity is the key driver of efficiency and productivity. This paper presents an evaluation framework for ground access at airports, using a set of mode-choice indicators that provide key messages about an airport’s ground-access performance. The application presents results for a sample of 12 European airports, illustrating recommendations for defining policy and improving service in the air transport access chain.

Keywords: airport ground access, air transport chain, airport access performance, airport policy

Procedia PDF Downloads 374
15185 Simulation Programs to Education of Crisis Management Members

Authors: Jiri Barta

Abstract:

This paper deals with simulation programs and technologies used in the educational process for members of crisis management. Risk analysis, simulation, preparation and planning are among the main activities of crisis management staff. A correctly made simulation of an emergency defines the extent of the danger. On this basis, it is possible to effectively prepare and plan measures to minimize damage. The paper focuses on the simulation programs used for training at the University of Defence. Implementation of the outputs from simulation programs in the decision-making processes of crisis staffs is one of the main tasks of the research project.

Keywords: crisis management, continuity, critical infrastructure, dangerous substance, education, flood, simulation programs

Procedia PDF Downloads 468
15184 Resource Constrained Time-Cost Trade-Off Analysis in Construction Project Planning and Control

Authors: Sangwon Han, Chengquan Jin

Abstract:

Time-cost trade-off (TCTO) analysis is one of the most significant parts of construction project management. Despite its significance, current TCTO analysis, based on the Critical Path Method, does not consider resource constraints, and accordingly sometimes generates an impractical and/or infeasible schedule plan in terms of resource availability. Therefore, resource constraints need to be considered when performing TCTO analysis. In this research, a genetic algorithm (GA) based optimization model is created in order to find the optimal schedule. This model is used to compare four distinct scenarios (i.e., 1) the initial CPM, 2) TCTO without considering resource constraints, 3) resource allocation after TCTO, and 4) TCTO considering resource constraints) in terms of duration, cost, and resource utilization. The comparison results show that ‘TCTO considering resource constraints’ generates the optimal schedule with respect to duration, cost, and resources. This verifies the need to consider resource constraints when performing TCTO analysis. It is expected that the proposed model will produce more feasible and optimal schedules.
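
A toy version of the GA at the core of such a model is sketched below: each activity has alternative modes (duration, cost, crew size), a chromosome picks one mode per activity, and a penalty term enforces the resource cap. The activities are assumed serial for brevity, and all numbers are illustrative rather than taken from the paper.

```python
# A minimal GA sketch for time-cost trade-off under a resource cap.
import random

random.seed(0)
MODES = [  # per activity: list of (duration, cost, crews)
    [(8, 100, 2), (6, 140, 3), (4, 200, 5)],
    [(5, 80, 1), (4, 120, 2), (3, 180, 4)],
    [(9, 150, 2), (7, 210, 4), (6, 260, 5)],
]
RESOURCE_CAP = 4          # max crews available at any time

def fitness(ch):
    dur = sum(MODES[i][g][0] for i, g in enumerate(ch))
    cost = sum(MODES[i][g][1] for i, g in enumerate(ch))
    penalty = sum(1000 for i, g in enumerate(ch)
                  if MODES[i][g][2] > RESOURCE_CAP)   # infeasible modes
    return dur * 10 + cost + penalty                  # weighted duration + cost

def evolve(pop_size=30, gens=50):
    pop = [[random.randrange(len(m)) for m in MODES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(MODES))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                i = random.randrange(len(MODES))
                child[i] = random.randrange(len(MODES[i]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("best modes:", best, "fitness:", fitness(best))
```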

Keywords: time-cost trade-off, genetic algorithms, critical path, resource availability

Procedia PDF Downloads 191
15183 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model

Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele

Abstract:

The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. The PRM is suitable for studying count variables with appropriate covariates, and it sometimes experiences the problem of multicollinearity among the explanatory variables and outliers on the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator called the robust modified jackknife PCKL parameter estimator by combining the principal component estimator, the modified jackknife KL estimator and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and the properties of the estimator were derived. The estimator inherits the characteristics of the combined estimators, making it efficient at addressing both problems; it will also be of immediate interest to the research community and advances this line of work in terms of novelty compared with other studies undertaken in this area. The performance of the estimator (robust modified jackknife PCKL) was compared with existing estimators using the mean squared error (MSE) as the performance evaluation criterion, through a Monte Carlo simulation study and the use of real-life data. The results of the analytical study show that the estimator outperformed the existing estimators it was compared with, having the smallest MSE across all sample sizes, levels of correlation, percentages of outliers and numbers of explanatory variables.
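
The principal-component ingredient of such an estimator can be sketched in isolation, as below: a Poisson GLM is fitted on the leading components of strongly collinear covariates and the coefficients are mapped back to the original space. The jackknife KL shrinkage and robust M-estimation steps of the paper are not reproduced, and the data are simulated.

```python
# A simplified sketch of principal-component Poisson regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 5
Z = rng.normal(size=(n, 1))
X = Z + 0.05 * rng.normal(size=(n, p))      # strongly collinear columns
beta = np.array([0.3, -0.2, 0.1, 0.2, -0.1])
y = rng.poisson(np.exp(X @ beta))

# Keep the components explaining most of the variance.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T

pcr = sm.GLM(y, sm.add_constant(scores), family=sm.families.Poisson()).fit()
# Map component coefficients back to the original covariate space.
beta_pc = Vt[:k].T @ pcr.params[1:]
print("PC-Poisson coefficients:", np.round(beta_pc, 3))
```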

Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator

Procedia PDF Downloads 71
15182 Promotional Mix as a Determinant of Consumer Buying Decision in the Food and Beverages Industry: A Case Study of Nigeria Bottling Company Plc., Asejire Ibadan

Authors: Adedeji S. Adegoke, Olakunle N. Popoola

Abstract:

Promotion is an indispensable and invaluable element of marketing through which organizations persuade their prospective customers. The passing of information about a product to consumers in the outside world is known as promotional activity. The study examined whether there is a relationship between the promotional mix and consumer buying decisions, that is, whether customers are influenced by promotion. It also investigated whether promotion can be used to respond to competitors’ activities in the market, and whether Nigeria Bottling Company Plc encountered any problems in promoting its beverage products. The various forms of promotional mix available to an organization were examined, and the appropriate promotional mix that the company can adopt to boost sales was recommended. The research design depended on primary and secondary data. The primary data were collected from the subjects using data collection methods such as questionnaires, interviews and direct observation. The secondary data consist of information that already exists, having been collected for another purpose by other researchers; these include internal and external sources. A questionnaire was designed and administered to the staff of the production and marketing departments of Nigeria Bottling Company Plc, which served as the population of this study, from which a sample was drawn randomly using a simple random sampling technique. It was deduced that 90% of the respondents opined that advertising influenced competition in the market and that sales were good after they started advertising, while 10% of them were not sure. On advertising, 85% of the respondents chose 81-100% as the increase recorded in their sales level, while 10% agreed that the increase recorded in their sales was within 61-80%, and 5% chose 45-60% as the percentage increase in their sales record. Due to the unstable economic conditions in Nigeria, many business organizations have adopted promotional strategies. Apart from advertising, it was discovered through the research that sales promotion served as an incentive to consumers of Nigeria Bottling Company Plc, at times offering gifts and prizes to consumers, which drastically increased the level of sales. Since advertising and sales promotion increased the level of sales, more money should be allocated for this purpose to maintain market share and thereby increase profit.

Keywords: consumer, marketing, organization, promotional mix

Procedia PDF Downloads 165
15181 Shear Stress and Effective Structural Stress Fields of an Atherosclerotic Coronary Artery

Authors: Alireza Gholipour, Mergen H. Ghayesh, Anthony Zander, Stephen J. Nicholls, Peter J. Psaltis

Abstract:

A three-dimensional numerical model of an atherosclerotic coronary artery is developed for the identification of high-risk situations and hence for heart attack prediction. Employing the finite element method (FEM) in ANSYS, a fluid-structure interaction (FSI) model of the artery is constructed to determine the shear stress distribution as well as the von Mises stress field. A flexible model of an atherosclerotic coronary artery conveying pulsatile blood is developed incorporating three-dimensionality, the artery’s tapered shape via a linear function for the artery wall distribution, the motion of the artery, blood viscosity via non-Newtonian flow theory, blood pulsation via a one-period heartbeat, hyperelasticity via the Mooney-Rivlin model, viscoelasticity via a Prony series shear relaxation scheme, and micro-calcification inside the plaque. The material properties used to relate the stress field to the strain field have been extracted from clinical data reported in previous in-vitro studies. The determined stress fields have the potential to be used as a predictive tool for plaque rupture and dissection. The results show that stress concentration due to micro-calcification increases the von Mises stress significantly, so the chance of a crack developing inside the plaque increases. Moreover, blood pulsation varies the stress distribution substantially in some cases.
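
For reference, the two-parameter Mooney-Rivlin strain energy density commonly used for nearly incompressible arterial tissue takes the form

$$W = C_{10}(\bar{I}_1 - 3) + C_{01}(\bar{I}_2 - 3) + \frac{1}{D_1}(J - 1)^2,$$

where $\bar{I}_1$ and $\bar{I}_2$ are the deviatoric strain invariants, $J$ is the volume ratio, and $C_{10}$, $C_{01}$ and $D_1$ are material constants; the constants fitted in this study are not quoted in the abstract.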

Keywords: atherosclerosis, fluid-structure interaction, coronary arteries, pulsatile flow

Procedia PDF Downloads 177
15180 Readiness of Intellectual Capital Measurement: A Review of the Property Development and Investment Industry

Authors: Edward C. W. Chan, Benny C. F. Cheung

Abstract:

In the knowledge economy, financial indicators are not the only instruments for gauging the performance of a company. The contribution of intellectual capital to company performance is increasing. To measure the company performance attributable to intellectual capital, the value-added intellectual coefficient (VAIC) model is adopted to measure the intellectual capital utilisation efficiency of the subject companies. The purpose of this study is to review the readiness of Hong Kong listed companies in the property development and property investment industry for measuring intellectual capital using the VAIC model. The study covers the financial reports of representative Hong Kong listed property development and property investment companies for the period 2014-2019. The findings indicate that the industry is ready for IC measurement employing the VAIC framework, but not yet ready for the extended VAIC model.
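
As a point of reference, Pulic's VAIC reduces to three efficiency ratios summed per company-year, as in the sketch below; the figures and the particular value-added definition are illustrative, not drawn from the sampled companies.

```python
# A minimal sketch of the standard Pulic VAIC computation.
def vaic(value_added, human_capital, capital_employed):
    """VAIC = HCE + SCE + CEE (Pulic's formulation).

    value_added      VA = operating profit + employee costs + depreciation
                     and amortisation (one common definition)
    human_capital    HC = total employee costs
    capital_employed CE = book value of net assets
    """
    hce = value_added / human_capital                  # human capital efficiency
    sce = (value_added - human_capital) / value_added  # structural capital eff.
    cee = value_added / capital_employed               # capital employed eff.
    return hce + sce + cee

print(round(vaic(value_added=500.0, human_capital=200.0,
                 capital_employed=1500.0), 3))         # 2.5 + 0.6 + 0.333
```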

Keywords: intellectual capital, intellectual capital measurement, property development, property investment, Skandia navigator, VAIC

Procedia PDF Downloads 121
15179 Assessment of Environmental Impacts and Determination of Sustainability Level of BOOG Granite Mine Using a Mathematical Model

Authors: Gholamhassan Kakha, Mohsen Jami, Daniel Alex Merino Natorce

Abstract:

Sustainable development refers to creating a balance between development and the environment; it rests on three key principles, namely environment, society and economy. These three parameters are related to each other, and an imbalance in any one of them leads to disparity in the others. Mining is one of the most important tools of economic growth and social welfare in many countries. Meanwhile, the assessment of environmental impacts has directed the attention of planners toward the natural environment of the areas surrounding mines, allowing designers to monitor and control the current situation. In this regard, a semi-quantitative model using a matrix method is presented for assessing the environmental impacts of the BOOG Granite Mine, located in Sistan and Balouchestan, one of the provinces of Iran, and for determining the effective factors and environmental components. For this purpose, initial data were collected from experts; in the next stage, the effect of the factors on each environmental component was determined from their qualitative viewpoints. Based on the results, the most affected components include air quality, ecology, and human health and safety, reflecting the environmental damage resulting from mining activities in the area. Finally, the results of the environmental impact assessment were used to evaluate sustainability using the Philips mathematical model. The results show that the sustainability of this area is weak, so preventive environmental measures are recommended to reduce the damage to its environmental components.

Keywords: sustainable development, environmental impacts' assessment, BOOG granite, Philips mathematical model

Procedia PDF Downloads 203
15178 Development of Scratching Monitoring System Based on Mathematical Model of Unconstrained Bed Sensing Method

Authors: Takuya Sumi, Syoko Nukaya, Takashi Kaburagi, Hiroshi Tanaka, Kajiro Watanabe, Yosuke Kurihara

Abstract:

We propose an unconstrained measurement system for scratching motion based on a mathematical model of the unconstrained bed sensing method, which can measure the bed vibrations caused by the motion of the person on the bed. In this paper, we construct a mathematical model of the unconstrained bed monitoring system and apply the unconstrained bed sensing method to the system for detecting scratching motion. The proposed sensors are placed under the three bed feet. When a person is lying on the bed, the output signals from the sensors are proportional to the magnitude of the vibration due to the scratching motion. Hence, we can detect the subject’s scratching motion from the output signals of the piezoceramic sensors. We evaluated two scratching motions using the proposed system in a validation experiment: in the first experiment, the subject scratched his right cheek with his right hand; in the second, the subject scratched his shin with the other foot. As a result of the experiments, we recognized scratching signals that enable us to determine when the scratching occurred. Furthermore, the differences among the amplitudes of the output signals enabled us to estimate where the subject scratched.
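
One simple way to turn the three sensor outputs into detections is sketched below: windowed amplitude statistics are compared against a per-sensor baseline threshold, and the strongest channel hints at the scratching location. The sampling rate, window length, threshold rule and simulated burst are illustrative assumptions, not the paper's processing chain.

```python
# A minimal sketch of burst detection on three vibration channels.
import numpy as np

fs = 100                                   # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
# Simulated piezoceramic outputs: noise plus a scratching burst at 10-13 s,
# strongest at sensor 0 (nearest to the scratching site).
signals = 0.05 * rng.normal(size=(3, t.size))
burst = (t > 10) & (t < 13)
signals[:, burst] += np.array([[0.6], [0.3], [0.2]]) * np.sin(2 * np.pi * 8 * t[burst])

win = fs                                   # 1-s analysis windows
n_win = t.size // win
# Windowed standard deviation (~RMS for zero-mean vibration).
amp = signals[:, :n_win * win].reshape(3, n_win, win).std(axis=2)
threshold = 3 * np.median(amp, axis=1, keepdims=True)   # per-sensor baseline
detected = amp > threshold                 # (sensor, window) detections

for w in range(n_win):
    if detected[:, w].any():
        dominant = amp[:, w].argmax()      # largest amplitude -> nearest sensor
        print(f"scratching in window t={w}-{w+1} s, strongest at sensor {dominant}")
```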

Keywords: unconstrained bed sensing method, scratching, body movement, itchy, piezoceramics

Procedia PDF Downloads 418
15177 Employing Remotely Sensed Soil and Vegetation Indices and Prediction by Long Short-Term Memory for Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes, due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain volumetric soil moisture (VSM) between 0.25 and 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to predict EVI, NDVI, and NMDI accurately.
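
A minimal Keras sketch of the sequence-to-one setup follows: a 24-hour window of the four indices (SSM, NDVI, EVI, NMDI) predicts the next-hour volumetric soil moisture. The shapes, hyperparameters and random stand-in data are assumptions; the paper's exact architecture may differ.

```python
# A minimal sketch of an LSTM for next-hour soil moisture prediction.
import numpy as np
import tensorflow as tf

T, F = 24, 4                               # 24 hourly steps x 4 indices
X = np.random.rand(1000, T, F).astype("float32")   # stand-in input sequences
y = np.random.rand(1000, 1).astype("float32")      # next-hour VSM (m3/m3)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),              # predicted soil moisture
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64, verbose=0)

print(model.predict(X[:3], verbose=0).ravel())     # sample predictions
```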

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 47
15176 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems

Authors: Yas Barzegaar, Atrin Barzegar

Abstract:

The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems arranged in two layers. The first layer consists of four Fuzzy Inference Systems, each with three criteria; the outputs of the Physical, Chemical, Biological and Environmental Failure systems become the inputs of the final system, which assesses the manufacturing system as a whole. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts’ opinions. The second step is the fuzzification process, which converts crisp inputs into fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine, and in the final step, the defuzzification process converts the fuzzy output into real numbers.
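
The sketch below shows what one first-layer Mamdani FIS could look like in scikit-fuzzy, with three criteria scored 0-10 driving a failure-risk output; the variable names, membership functions and rules are illustrative assumptions rather than the paper's rule base.

```python
# A minimal Mamdani FIS sketch with scikit-fuzzy (fuzzify -> infer -> defuzzify).
import numpy as np
from skfuzzy import control as ctrl

severity = ctrl.Antecedent(np.arange(0, 11, 1), "severity")
occurrence = ctrl.Antecedent(np.arange(0, 11, 1), "occurrence")
detection = ctrl.Antecedent(np.arange(0, 11, 1), "detection")
risk = ctrl.Consequent(np.arange(0, 11, 1), "risk")

for var in (severity, occurrence, detection, risk):
    var.automf(3, names=["low", "medium", "high"])    # triangular MFs

rules = [
    ctrl.Rule(severity["high"] | occurrence["high"], risk["high"]),
    ctrl.Rule(severity["medium"] & detection["medium"], risk["medium"]),
    ctrl.Rule(severity["low"] & occurrence["low"] & detection["high"],
              risk["low"]),                # easy detection lowers the risk
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["severity"], sim.input["occurrence"], sim.input["detection"] = 7, 5, 3
sim.compute()                              # Mamdani inference + defuzzification
print("physical-failure risk:", round(sim.output["risk"], 2))
```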

Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment

Procedia PDF Downloads 109
15175 Flood Risk Assessment and Mapping: Finding the Vulnerability to Flood Level and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have caused flooding: stream beds have been encroached upon to build houses and hotels or have been converted into roads, causing flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The Analytic Hierarchy Process, normally called AHP, is a powerful yet simple method for making decisions, commonly used for project prioritization and selection. AHP captures strategic goals as a set of weighted criteria that are then used to score options; here it is used to provide the weight of each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, slope), these led to the final flood risk map. Finally, according to this map, the priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
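
The AHP weighting step can be prototyped as below: a Saaty-style pairwise comparison matrix over a few of the flood-conditioning factors, priority weights from the principal eigenvector, and the consistency ratio check (CR < 0.1). The judgments in the matrix are illustrative, not the study's.

```python
# A minimal AHP sketch: pairwise judgments -> weights -> consistency ratio.
import numpy as np

criteria = ["precipitation", "slope", "distance_from_river", "land_use"]
A = np.array([                       # Saaty 1-9 pairwise judgments
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # priority weights (principal eigenvector)

n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
cr = ci / ri                         # should be < 0.1 for consistent judgments
print(dict(zip(criteria, np.round(w, 3))), f"CR = {cr:.3f}")
```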

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 121
15174 Comparing the Trophic Structure of the Moroccan Mediterranean Sea with the Moroccan Atlantic Coast Using Ecopath Model

Authors: Salma Aboussalam, Karima Khalil, Khalid Elkalay

Abstract:

To describe the structure, functioning, and state of the Moroccan Mediterranean Sea ecosystem, an Ecopath mass-balance model has been applied. The model is based on 31 functional groups, comprising 21 fish groups, 7 invertebrate groups, 2 primary producers, and one dead group (detritus), which are used in this work to explore the trophic interactions. The system's average trophic transfer efficiency was 23%. The ratio of total primary production to total respiration was calculated to be >1, suggesting that more energy is produced than respired in the system. The structure of the system is based on high respiration and consumption flows. Indicators of ecosystem stability and development showed low values of the Finn cycling index (13.97), the system omnivory index (0.18), and the average Finn path length (3.09), suggesting that the system is disturbed and has a more linear than web-like trophic structure. The keystone index and mixed trophic impact analysis indicated that other demersal invertebrates, zooplankton, and cephalopods had a strong impact on other groups and were recognized as keystone species.
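
For context, Ecopath balances each functional group $i$ with the standard master equation, shown here in its usual textbook form (notation follows the Ecopath documentation, not values from this study):

$$B_i \left(\tfrac{P}{B}\right)_i EE_i = \sum_j B_j \left(\tfrac{Q}{B}\right)_j DC_{ji} + Y_i + E_i + BA_i,$$

where $B$ is biomass, $P/B$ the production rate, $EE$ the ecotrophic efficiency, $Q/B$ the consumption rate, $DC_{ji}$ the fraction of prey $i$ in the diet of predator $j$, $Y$ the fishery catch, $E$ the net migration and $BA$ the biomass accumulation.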

Keywords: Ecopath, food web, trophic flux, Moroccan Mediterranean Sea

Procedia PDF Downloads 96
15173 Estimation of Human Absorbed Dose Using Compartmental Model

Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri

Abstract:

Dosimetry is an indispensable and valuable element of patient treatment planning, used to minimize the absorbed dose in vital tissues. In this study, a compartmental model was used to estimate the human absorbed dose of ¹⁷⁷Lu-DOTATOC from biodistribution data in wild-type rats. For this purpose, ¹⁷⁷Lu-DOTATOC was prepared under optimized conditions and its biodistribution was studied in male Syrian rats for up to 168 h. The compartmental model was applied to describe mathematically the behaviour of the drug in each tissue over time. Dosimetric estimation of the complex was performed using the radiation absorbed dose assessment resource (RADAR) method. The biodistribution data showed high accumulation in the adrenal glands and pancreas, the major expression sites for the somatostatin receptor (SSTR). While the kidneys, as the major route of excretion, receive 0.037 mSv/MBq, the pancreas and adrenal glands receive 0.039 and 0.028 mSv/MBq, respectively. With this method, the number of accumulated-activity data points was increased and additional information on tissue uptake was collected, which should be followed by improved precision in dosimetric calculations.
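
The essence of the compartmental step can be sketched with a one-compartment washout fitted to tissue time-activity data and integrated to give the accumulated activity, which is the quantity a RADAR-style dose calculation consumes. The data points below are placeholders, not the reported rat biodistribution.

```python
# A minimal sketch: fit a mono-exponential washout and integrate it.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 4, 24, 48, 96, 168.0])          # h post-injection
a = np.array([5.2, 4.6, 3.1, 2.2, 1.1, 0.5])     # %ID/g in tissue (placeholder)

def washout(t, a0, lam):
    return a0 * np.exp(-lam * t)                 # single-compartment clearance

(a0, lam), _ = curve_fit(washout, t, a, p0=(5.0, 0.02))
integral = a0 / lam                              # integral of A(t) from 0 to inf
print(f"A0={a0:.2f} %ID/g, lambda={lam:.4f} 1/h, "
      f"time-integrated activity={integral:.1f} %ID*h/g")
```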

Keywords: compartmental modeling, human absorbed dose, ¹⁷⁷Lu-DOTATOC, Syrian rats

Procedia PDF Downloads 197
15172 Hybrid Wavelet-Adaptive Neuro-Fuzzy Inference System Model for a Greenhouse Energy Demand Prediction

Authors: Azzedine Hamza, Chouaib Chakour, Messaoud Ramdani

Abstract:

Energy demand prediction plays a crucial role in achieving next-generation power systems for agricultural greenhouses. As a result, high prediction quality is required for efficient smart grid management and, therefore, low-cost energy consumption. The aim of this paper is to investigate the effectiveness of a hybrid data-driven model for day-ahead energy demand prediction. The proposed model consists of the Discrete Wavelet Transform (DWT) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The DWT is employed to decompose the original signal into a set of subseries, and an ANFIS is then used to generate the forecast for each subseries. The proposed hybrid method (DWT-ANFIS) was evaluated using greenhouse energy demand data for one week and compared with a plain ANFIS. The performances of the different models were evaluated by comparing the corresponding values of the Mean Absolute Percentage Error (MAPE). It was demonstrated that the discrete wavelet transform can improve agricultural greenhouse energy demand modelling.
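
A rough sketch of the hybrid idea follows: decompose the demand series with a discrete wavelet transform, forecast each reconstructed subseries from its own lagged values, and sum the subseries forecasts. A gradient boosting regressor stands in for the ANFIS stage, which has no standard Python implementation; the wavelet choice and lag count are illustrative.

```python
# A minimal DWT + per-subseries forecasting sketch with PyWavelets.
import numpy as np
import pywt
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(24 * 7)                                  # one week, hourly
demand = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

# Split the signal into one smooth + several detail subseries.
coeffs = pywt.wavedec(demand, "db4", level=3)
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, "db4")[: demand.size])

def lagged(x, n_lags=24):                  # lag matrix -> next-value targets
    X = np.column_stack([x[i:i - n_lags] for i in range(n_lags)])
    return X, x[n_lags:]

forecast = 0.0
for s in subseries:                        # one model per subseries
    X, y = lagged(s)
    model = GradientBoostingRegressor().fit(X[:-1], y[:-1])
    forecast += model.predict(X[-1:])[0]   # next-step forecast, summed
print(f"next-hour demand forecast: {forecast:.2f}")
```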

Keywords: wavelet transform, ANFIS, energy consumption prediction, greenhouse

Procedia PDF Downloads 93
15171 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to its concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image date. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and the root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was chosen based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) against the PM10 ground data. The outcomes of this research show that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
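
The empirical calibration described above amounts to a multilinear fit of ground PM10 on the visible-band reflectances, scored by R and RMSE, as sketched below. The reflectance and PM10 values are synthetic placeholders, not the Kirkuk measurements.

```python
# A minimal sketch of the PM10-vs-reflectance empirical model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40                                       # ground sampling points
refl = rng.uniform(0.02, 0.25, size=(n, 4))  # B1 coastal, B2 blue, B3 green, B4 red
pm10 = 250 * refl[:, 1] + 400 * refl[:, 3] + rng.normal(0, 8, n)  # ug/m3

model = LinearRegression().fit(refl, pm10)
pred = model.predict(refl)
r = np.corrcoef(pm10, pred)[0, 1]
rmse = np.sqrt(np.mean((pm10 - pred) ** 2))
print(f"R = {r:.3f}, RMSE = {rmse:.1f} ug/m3, coefs = {np.round(model.coef_, 1)}")
```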

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 443
15170 Using Mechanical Alloying for Verification of Predicted Glass Forming Composition Range

Authors: F. Saadi, M. Fatahi, M. Heidari

Abstract:

The aim of this work was to determine the approximate glass-forming composition range of the Ni-Sn system for alloys produced by mechanical alloying. The Miedema semi-empirical model predicted that the composition had to be in the range of 30-60 wt.% tin, with Ni-40Sn being the most susceptible to forming an amorphous alloy. In the next stage, several different Ni-Sn compositions were mechanically alloyed, one of which had the predicted optimal composition. The products were characterized by XRD analysis. There was good agreement between calculation and experiment, in that the Ni-40Sn alloy showed the highest degree of amorphization.

Keywords: Ni-Sn system, mechanical alloying, amorphous alloy, Miedema model

Procedia PDF Downloads 442
15169 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices, and an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies the principal components representing independent factors. In our paper, these factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
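
The factor-extraction step can be sketched as below: PCA on a matrix of daily world-index returns yields uncorrelated factor series, the leading few of which would enter the GARCHX variance equation as exogenous inputs. The return matrix is simulated; the paper's index pool is not reproduced.

```python
# A minimal PCA factor-extraction sketch for a GARCHX-style setup.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_indices = 1000, 8
common = rng.normal(size=(n_days, 2))              # two latent world factors
loadings = rng.normal(size=(2, n_indices))
returns = common @ loadings + 0.5 * rng.normal(size=(n_days, n_indices))

R = returns - returns.mean(axis=0)
cov = np.cov(R, rowvar=False)                      # cross-index covariance
eigvals, eigvecs = np.linalg.eigh(cov)
order = eigvals.argsort()[::-1]                    # sort by explained variance
explained = eigvals[order] / eigvals.sum()
factors = R @ eigvecs[:, order[:3]]                # first three PC factor series

print("variance explained:", np.round(explained[:3], 3))
print("factor correlations ~ 0:\n", np.round(np.corrcoef(factors.T), 3))
```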

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 299
15168 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition

Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang

Abstract:

Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve the tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single bead deposition of Ti-6Al-4V alloy, with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead's surface using the VDFLUX Fortran subroutine. The bead's cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. In addition, the roller is cooled internally by axial water flow, represented in the model through convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on the Johnson-Cook theory. These aspects of the material behavior are captured in the FE software using the subroutines VUMAT (elastoplastic behavior), VUHARD (yield stress), and VUEXPAN (thermal strain). The roller is assumed to be elastic and does not undergo any plastic deformation. Contact friction at the roller-bead interface is also considered. Based on the thermal results for the bead, the distance between the roller and the deposition nozzle (the roller offset) can be determined to ensure rolling occurs around the beta-transus temperature of the Ti-6Al-4V alloy. Roller offset and nominal bead height reduction are identified as the crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease to about 52% throughout the deposited bead due to in-situ rolling. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
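
For reference, the Johnson-Cook flow stress that a VUHARD routine would typically implement takes the standard form

$$\sigma_y = \left(A + B\varepsilon_p^{\,n}\right)\left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left(1 - \left(\frac{T - T_r}{T_m - T_r}\right)^{m}\right),$$

with plastic strain $\varepsilon_p$, reference strain rate $\dot{\varepsilon}_0$, reference and melting temperatures $T_r$ and $T_m$, and material constants $A$, $B$, $C$, $n$, $m$; the Ti-6Al-4V constants used in the study are not quoted in the abstract.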

Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model

Procedia PDF Downloads 115
15167 Comparative Study of Titanium and Polyetheretherketone Cranial Implant Using Finite Element Model

Authors: Khaja Moiduddin, Sherif Mohammed Elseufy, Hisham Alkhalefah

Abstract:

Recent advances in three-dimensional (3D) printing, medical imaging, and implant design may alter how craniomaxillofacial surgeons construct individualized treatments using patient data. By utilizing medical image data, medical professionals can obtain detailed information about a patient's injuries, enabling them to conduct a thorough preoperative assessment while ensuring the implant's accuracy. However, selecting the right implant material requires careful consideration of various mechanical properties. This study aims to compare two materials commonly used for cranial reconstruction implants: titanium (Ti6Al4V) and polyetheretherketone (PEEK). A biomechanical analysis was performed to study the implant behavior, keeping the implant design and fixation constant in both cases. A finite element model was created and analyzed under loading conditions. The finite element analysis shows that although Ti6Al4V is stronger than PEEK, the mechanical strength of PEEK is adequate to bear the loads of the adjacent bone tissue.

Keywords: cranial reconstruction, titanium implants, PEEK, finite element model

Procedia PDF Downloads 71
15166 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There is an infinite number of possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and as it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Moreover, since the estimated ROC curve is jagged while the true ROC curve is smooth, the empirical curve underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several methods have been explored to smooth an estimated ROC curve. These include using kernel estimates, using log-concave densities, fitting a specified density function to the data by maximum likelihood, and using smoothed versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study to compare the performance of the methods in different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve, and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
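
The contrast between the empirical and a kernel-smoothed ROC curve can be sketched as below, where the class-conditional CDFs are replaced by Gaussian-kernel CDF estimates. Plain Gaussian kernels and a Silverman-type bandwidth are used here; the paper's boundary-corrected kernel is not reproduced.

```python
# A minimal sketch of empirical vs kernel-smoothed ROC estimation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)          # test scores, non-diseased
diseased = rng.normal(1.2, 1.0, 50)         # test scores, diseased

def kernel_cdf(x, sample, h):
    """Smoothed CDF: average of Gaussian kernel CDFs centred at the data."""
    return norm.cdf((x[:, None] - sample[None, :]) / h).mean(axis=1)

h0 = 1.06 * healthy.std() * len(healthy) ** -0.2     # Silverman-type bandwidth
h1 = 1.06 * diseased.std() * len(diseased) ** -0.2
thresholds = np.linspace(-4, 6, 400)

fpr_smooth = 1 - kernel_cdf(thresholds, healthy, h0)
tpr_smooth = 1 - kernel_cdf(thresholds, diseased, h1)
fpr_emp = [(healthy >= c).mean() for c in thresholds]  # step-function curve
tpr_emp = [(diseased >= c).mean() for c in thresholds]

auc = np.trapz(tpr_smooth[::-1], fpr_smooth[::-1])   # area under smooth ROC
print(f"smoothed AUC ~= {auc:.3f}")
```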

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 155
15165 Molecular Pathogenesis of NASH through the Dysregulation of Metabolic Organ Network in the NASH-HCC Model Mouse Treated with Streptozotocin-High Fat Diet

Authors: Bui Phuong Linh, Yuki Sakakibara, Ryuto Tanaka, Elizabeth H. Pigney, Taishi Hashiguchi

Abstract:

NASH is an increasingly prevalent chronic liver disease that can progress to hepatocellular carcinoma and is now attracting interest worldwide. The STAM™ model is a clinically correlated murine NASH model which shows the same pathological progression as NASH patients and has been widely used for pharmacological and basic research. The multiple parallel hits hypothesis suggests that abnormalities in adipocytokines, intestinal microflora, and endotoxins are intertwined and could contribute to the development of NASH. In fact, NASH patients often exhibit gut dysbiosis and dysfunction in adipose tissue and metabolism. However, analysis of the STAM™ model has so far focused only on the liver. To clarify whether the STAM™ model can also mimic multiple pathways of NASH progression, we analyzed the organ crosstalk interactions between the liver and the gut, and the phenotype of adipose tissue, in the STAM™ model. NASH was induced in male mice by a single subcutaneous injection of 200 µg streptozotocin 2 days after birth and feeding with a high-fat diet from 4 weeks of age. The mice were sacrificed at the NASH stage. Colon samples were snap-frozen in liquid nitrogen and stored at -80˚C for tight junction-related protein analysis. Adipose tissue was prepared into paraffin blocks for HE staining. Blood adiponectin was analyzed to confirm changes in the adipocytokine profile. Analysis of tight junction-related proteins in the intestine showed that the expression of ZO-1 decreased with the progression of the disease. Increased endotoxin levels in the blood and decreased expression of adiponectin were also observed. HE staining revealed hypertrophy of adipocytes. The decreased expression of ZO-1 in the intestine of STAM™ mice suggests the occurrence of leaky gut, and abnormalities in adipocytokine secretion were also observed. Together with the liver, the phenotypes of these organs are highly similar to those of human NASH patients and might be involved in the pathogenesis of NASH.

Keywords: Non-alcoholic steatohepatitis, hepatocellular carcinoma, fibrosis, organ crosstalk, leaky gut

Procedia PDF Downloads 162
15164 Numerical Modeling of Determination of in situ Rock Mass Deformation Modulus Using the Plate Load Test

Authors: A. Khodabakhshi, A. Mortazavi

Abstract:

The accurate determination of the rock mass deformation modulus, an important design parameter, is one of the most controversial issues in many engineering projects. A 3D numerical model of the standard plate load test (PLT), using the FLAC3D code, was built to investigate the mechanism governing the test process. Five objectives were the focus of this study. The first was to employ 3D modeling in the interpretation of the PLT conducted at the Bazoft dam site, Iran. The second was to investigate the effect of the depth at which displacements are measured below the loading plates on the calculated moduli; the magnitude of the rock mass deformation modulus calculated from a PLT depends on the anchor depth, and in practice this may be a source of error in the selection of a realistic deformation modulus for the rock mass. The third was to investigate the effect of the loading plate diameter on the calculated modulus. Moreover, a comparison of the moduli calculated from the ISRM formula, from the numerical modeling, and from the actual PLT carried out at the right abutment of the Bazoft dam site was another objective of the study. Finally, the effect of plastic strains on the moduli calculated in each of the loading-unloading cycles for three loading plates was investigated. The geometry, material properties, and boundary conditions of the constructed 3D model were selected based on the in-situ conditions of the PLT at the Bazoft dam site. Good agreement was achieved between the numerical model results and the field test results.
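
For orientation, interpretation of a PLT on an elastic half-space usually starts from a closed-form plate solution; for a rigid circular plate of radius $a$ under pressure $q$ producing surface settlement $\delta$, the textbook Boussinesq result is

$$E = \frac{\pi \, q \, a \, (1 - \nu^2)}{2\,\delta},$$

while ISRM-type formulas add a depth-correction factor when the displacement is measured by anchors at a depth $z$ below the plate, which is why the calculated modulus depends on the anchor depth.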

Keywords: deformation modulus, numerical model, plate loading test, rock mass

Procedia PDF Downloads 172
15163 Comparative Mesh Sensitivity Study of Different Reynolds Averaged Navier Stokes Turbulence Models in OpenFOAM

Authors: Zhuoneng Li, Zeeshan A. Rana, Karl W. Jenkins

Abstract:

In industry, validating a case often requires a multitude of simulations, and in order to keep the process affordable, users tend to use a coarser mesh. Therefore, it is imperative to establish the coarsest mesh that can be used while keeping reasonable simulation accuracy. To date, the two most reliable, affordable and broadly used advanced simulation approaches are hybrid RANS (Reynolds Averaged Navier Stokes)/LES (Large Eddy Simulation) and wall-modelled LES. The potential of these two approaches will continue to be developed over the next decades, mainly because of the unaffordable computational cost of DNS (Direct Numerical Simulation). In wall-modelled LES, the turbulence model is applied as a sub-grid-scale model in the innermost layer near the wall. RANS turbulence models cover the entire boundary layer region in hybrid RANS/LES (Detached Eddy Simulation) and its variants; therefore, RANS still plays a very important role in state-of-the-art simulations. This research focuses on a turbulence model mesh sensitivity analysis in which various turbulence models, such as S-A (Spalart-Allmaras), SSG (Speziale-Sarkar-Gatski), the k-ω transitional SST (Shear Stress Transport), k-kl-ω, the γ-Reθ transition model, and v2-f, are evaluated within OpenFOAM. The simulations are conducted for fully developed turbulent flow over a flat plate, where the skin friction coefficient as well as velocity profiles are obtained and compared against experimental values and DNS results. A concrete conclusion is drawn to clarify the mesh sensitivity of the different turbulence models.
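
For the flat-plate case, the simulated skin friction is typically checked against an empirical correlation; the sketch below does this with one common turbulent-flow form, Cf = 0.0576 Re_x^(-1/5). The "simulated" values are placeholders standing in for OpenFOAM wallShearStress output.

```python
# A minimal sketch of the skin-friction validation metric.
import numpy as np

U, nu = 10.0, 1.5e-5                         # freestream velocity, air viscosity
x = np.linspace(0.1, 2.0, 6)                 # stations along the plate, m
re_x = U * x / nu

cf_corr = 0.0576 * re_x ** -0.2              # empirical turbulent correlation
cf_sim = cf_corr * (1 + 0.04 * np.sin(x))    # stand-in for RANS output

for xi, rei, c1, c2 in zip(x, re_x, cf_corr, cf_sim):
    err = 100 * (c2 - c1) / c1
    print(f"x={xi:4.2f} m  Re_x={rei:9.2e}  Cf_corr={c1:.5f}  "
          f"Cf_sim={c2:.5f}  diff={err:+.1f}%")
```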

Keywords: mesh sensitivity, turbulence models, OpenFOAM, RANS

Procedia PDF Downloads 264