Search results for: bilinear model
11505 Topology and Shape Optimization of MacPherson Control Arm under Fatigue Loading
Authors: Abolfazl Hosseinpour, Javad Marzbanrad
Abstract:
In this research, the topology and shape optimization of a MacPherson control arm has been accomplished to achieve lighter weight. The present automotive market demands low-cost and lightweight components to meet the need for fuel-efficient and cost-effective vehicles. This in turn gives rise to more effective use of materials for automotive parts, which can reduce the mass of the vehicle. Since automotive components are under dynamic loads which cause fatigue damage, considering fatigue criteria seems to be essential in designing automotive components. At first, in order to create severe loading conditions for the control arm, some rough roads are generated through power spectral density. Then, the most critical loading conditions are obtained through multibody dynamics analysis of a full vehicle model. Next, the topology optimization is performed based on a fatigue life criterion using HyperMesh software, which resulted in a 50 percent mass reduction. In the next step, a CAD model is created using CATIA software and shape optimization is performed to achieve accurate dimensions with less mass.
Keywords: topology optimization, shape optimization, fatigue life, MacPherson control arm
Procedia PDF Downloads 318
11504 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor
Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes
Abstract:
In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models that get trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, the predictions of the model also refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions for an area level, such as a municipality, village, etc. In this study, we apply a data-driven predictive model, which relies on public open satellite Earth Observation and geospatial data and gets trained with historical point-level in-situ measurements of mosquito abundance. Then we propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, doing the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from the point-level to the area-level predictions, and we analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples that we use to represent an area has a positive effect on the performance, in contrast to the actual number of sampling points, which is not informative at all regarding the performance without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements that were used for training did not strongly affect the performance.
Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, earth observation data
Procedia PDF Downloads 149
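A minimal sketch of the point-to-area aggregation step described in the abstract above, assuming a fitted point-level regressor `point_model` and a feature helper `extract_features(x, y)` (both hypothetical names): random locations are drawn inside the area polygon, a crude hard-core spacing is enforced, the point-level model is applied at each location, and the predictions are averaged to give the area-level abundance.

```python
# Hedged sketch of point-level -> area-level aggregation (assumed names and API).
import numpy as np
from shapely.geometry import Point, Polygon

def area_level_abundance(polygon: Polygon, point_model, extract_features,
                         n_samples=500, min_dist=0.0, seed=0):
    """Sample random locations inside `polygon`, predict abundance at each one
    with the point-level model, and return the mean as the area-level estimate."""
    rng = np.random.default_rng(seed)
    minx, miny, maxx, maxy = polygon.bounds
    accepted = []
    while len(accepted) < n_samples:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if not polygon.contains(p):
            continue
        # crude hard-core condition, loosely mimicking a Poisson hard-core process
        if min_dist > 0 and any(p.distance(q) < min_dist for q in accepted):
            continue
        accepted.append(p)
    X = np.array([extract_features(p.x, p.y) for p in accepted])  # EO / geomorphological features
    preds = point_model.predict(X)   # point-wise abundance predictions
    return float(np.mean(preds))     # average over samples = area-level abundance
```

In this sketch, accuracy is governed by the sampling density inside the polygon, which matches the abstract's observation that density, rather than the raw number of points, drives area-level performance.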
11503 Modeling of Erosion and Sedimentation Impacts from Off-Road Vehicles in Arid Regions
Authors: Abigail Rosenberg, Jennifer Duan, Michael Poteuck, Chunshui Yu
Abstract:
The Barry M. Goldwater Range, West in southwestern Arizona encompasses 2,808 square kilometers of Sonoran Desert. The hyper-arid range has an annual rainfall of less than 10 cm, with an average high temperature of 41 degrees Celsius in July and an average low of 4 degrees Celsius in January. The range shares approximately 60 kilometers of the international border with Mexico. A majority of the range is open for recreational use, primarily off-highway vehicles. Because of its proximity to Mexico, the range is also heavily patrolled by U.S. Customs and Border Protection seeking to intercept and apprehend inadmissible people and illicit goods. Decades of off-roading and Border Patrol activities have negatively impacted this sensitive desert ecosystem. To assist the range program managers, this study is developing a model to identify erosion-prone areas and calibrating the model’s parameters using the Automated Geospatial Watershed Assessment modeling tool.
Keywords: arid lands, automated geospatial watershed assessment, erosion modeling, sedimentation modeling, watershed modeling
Procedia PDF Downloads 376
11502 Climate Change and Landslide Risk Assessment in Thailand
Authors: Shotiros Protong
Abstract:
The incidents of sudden landslides in Thailand during the past decade have occurred frequently and more severely. It is necessary to focus on the principal parameters used for analysis, such as land cover and land use, rainfall values, soil characteristics and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rapidly increase during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. The rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden and the presence of loose blocks. The regional landslide hazard mapping is developed using the Slope Stability Index (SINMAP) model supported on ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data can indicate the shear strength and the angle of friction values for soils above given rock types, which leads to the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall data in millimetres per twenty-four hours are used in the rain-triggered landslide hazard analysis for slope stability mapping. The 1954-2012 period is used as the baseline of rainfall data for the present calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales. Since the precipitation impact needs to be predicted for the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to assess the simulation scenarios of future change between latitudes 16° 26’ and 18° 37’ north and between longitudes 98° 52’ and 103° 05’ east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and time trends of landslide occurrences. Thus, regional landslide hazard mapping is produced under present-day climatic conditions from 1954 to 2012 and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard mapping will be compared and shown by area (km²) for both the present and the future under climate simulation scenarios A2 and B2 in Uttaradit province.
Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand
Procedia PDF Downloads 565
11501 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo
Authors: Margaret Boone Rappaport, Christopher J. Corbally
Abstract:
The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but for its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. It emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. (2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.
Keywords: genetic drift, genomics, parietal expansion, religious capacity
Procedia PDF Downloads 343
11500 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries which need a yield estimation before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially for arboriculture, conventional methods are mostly applied. In the case of the citrus industry, sale before harvest is widely practiced, which requires an estimation of the production while the fruit is on the tree. However, the conventional method based on sampling surveys of some trees within the field is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the ‘estimator agent’. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. After that, a pre-processing step was performed in order to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole tree point cloud, cluster them into fruits, and model them geometrically in a 3D space. By linking the resulting geometric properties to the fruit weight, the yield can be estimated, and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by countries importing citrus, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering using this technology can be performed over only some trees. So, integration of drone data was considered in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the yield estimated by the laser scanner for some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties by testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology in comparison to the yield estimation results of the conventional method varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates its strong potential for early estimation of citrus production and the possibility of its extension to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
Procedia PDF Downloads 144
11499 Operational Challenges of Marine Fiber Reinforced Polymer Composite Structures Coupled with Piezoelectric Transducers
Authors: H. Ucar, U. Aridogan
Abstract:
Composite structures have become attractive for the design of aerospace, automotive and marine applications due to weight reduction, corrosion resistance and radar signature reduction demands and requirements. Studies on piezoelectric ceramic transducers (PZT) for diagnostics and health monitoring have gained attention for their sensing capabilities; however, PZT structures are prone to failure under heavy operational loads. In this paper, we develop a piezo-based Glass Fiber Reinforced Polymer (GFRP) composite finite element (FE) model, validate it with an experimental setup, and identify the applicability and limitations of PZTs for a marine application. A case study is conducted to assess the piezo-based sensing capabilities in a representative marine composite structure. A FE model of the composite structure combined with PZT patches is developed, after which the response and functionality are investigated under the sea conditions. Results of this study clearly indicate the blockers and critical aspects towards industrialization and wide-range use of PZTs for marine composite applications.
Keywords: FRP composite, operational challenges, piezoelectric transducers, FE modeling
Procedia PDF Downloads 175
11498 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows
Authors: Thomas Rowan, Mohammed Seaid
Abstract:
A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than other comparative finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations a bed-level concentration factor is used. This leads to inaccuracies in both near-bed concentration and total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. The additional advantage of this multilayer approach is that there is a variation (from single-layer models) in bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers utilised creates variation in the resultant bed profile, with more layers yielding a larger deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented at minimal computational cost.
Keywords: erosion, finite volume method, sediment transport, shallow water equations
Procedia PDF Downloads 218
11497 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection systems such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for a series arc fault in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions across the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generation at different distances. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient obtained from the subtraction of estimated currents is calculated taking into account the Discrete Fast Fourier Transform of currents and voltages and also the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure described previously to calculate signature coefficients is employed, but this time by considering hypothetical fault distances where the fault can appear. In this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. Also, 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work will be presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
Procedia PDF Downloads 138
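As a rough, hedged illustration of the final localisation step only (not the authors' MATLAB code), the sketch below fits straight lines to the two signature-coefficient curves, the one generated from the recorded fault (step 2) and the one generated from hypothetical fault distances (step 3), and reads the estimated fault distance off their intersection. The coefficient arrays are placeholder data.

```python
# Hedged sketch: fault distance from the intersection of two linear-trend curves.
import numpy as np

d = np.linspace(0.0, 49.0, 50)        # candidate distances along the 49 m line
coef_recorded = 0.8 * d + 2.0         # placeholder: curve built from the recorded fault (step 2)
coef_hypothetical = -0.5 * d + 40.0   # placeholder: curve built from hypothetical distances (step 3)

m1, b1 = np.polyfit(d, coef_recorded, 1)      # linear trend of curve 1
m2, b2 = np.polyfit(d, coef_hypothetical, 1)  # linear trend of curve 2

d_fault = (b2 - b1) / (m1 - m2)               # intersection of the two fitted lines
print(f"Estimated series arc fault distance: {d_fault:.1f} m")
```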
11496 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
Procedia PDF Downloads 116
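One of the candidate methods listed above, the modified-Poisson GLM with robust (sandwich) standard errors, can be sketched with statsmodels on simulated trial data; the data-generating values below are illustrative only, and the exponentiated treatment coefficient is the adjusted relative risk.

```python
# Hedged sketch: modified-Poisson estimation of an adjusted relative risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)                    # randomised treatment indicator
covar = rng.normal(0.0, 1.0, n)                  # one prognostic covariate
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * treat + 0.7 * covar)))
y = rng.binomial(1, p)                           # dichotomous outcome

X = sm.add_constant(np.column_stack([treat, covar]))
# Poisson working model on binary data; the robust covariance repairs the
# variance misspecification (the "modified" part of modified Poisson).
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(f"Adjusted relative risk for treatment: {np.exp(fit.params[1]):.2f}")
```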
11495 The Relationship Between Cyberbullying Victimization, Parent and Peer Attachment and Unconditional Self-Acceptance
Authors: Florina Magdalena Anichitoae, Anca Dobrean, Ionut Stelian Florean
Abstract:
Because cyberbullying victimization is an increasing problem nowadays, affecting more and more children and adolescents around the world, we wanted to take a step forward in analyzing this phenomenon. So, we took a look at some variables which haven't been studied together before, trying to develop another way to view cyberbullying victimization. We tested the effects of mother, father, and peer attachment on adolescent involvement in cyberbullying as victims through unconditional self-acceptance. Furthermore, we analyzed each subscale of the IPPA-R, the instrument we used for parent and peer attachment measurement, in relation to cyberbullying victimization through unconditional self-acceptance. We also analyzed whether gender and age could be considered moderators in this model. The analysis was performed on 653 adolescents aged 11-17 years old from Romania. We used structural equation modeling, working in the R program. For the fidelity analysis of the IPPA-R subscales, the USAQ, and the Cyberbullying Test, we calculated the internal consistency index, which varies between .68 and .91. We created two models: the first model, including peer alienation, peer trust, peer communication, self-acceptance and cyberbullying victimization, had CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07, and the second model, including parental alienation, parental trust, parental communication, self-acceptance and cyberbullying victimization, had CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07. Our results were interesting: on the one hand, cyberbullying victimization is predicted by peer alienation and peer communication through unconditional self-acceptance. Peer trust directly, significantly, and negatively predicted involvement in cyberbullying. In this regard, considering gender and age as moderators, we found that the relationship between unconditional self-acceptance and cyberbullying victimization is stronger in girls, but age does not moderate the relationship between unconditional self-acceptance and cyberbullying victimization. On the other hand, the hypothesis that the degree of cyberbullying victimization is predicted through unconditional self-acceptance by parental alienation, parental communication, and parental trust was not supported. Still, we could identify a direct path in which victimization is positively predicted by parental alienation and negatively by parental trust. There are also some limitations to this study, which we discuss at the end.
Keywords: adolescent, attachment, cyberbullying victimization, parents, peers, unconditional self-acceptance
Procedia PDF Downloads 206
11494 N-Heptane as Model Molecule for Cracking Catalyst Evaluation to Improve the Yield of Ethylene and Propylene
Authors: Tony K. Joseph, Balasubramanian Vathilingam, Stephane Morin
Abstract:
Currently, refiners around the world are more focused on improving the yield of light olefins (propylene and ethylene), as both are very prominent raw materials to produce a wide spectrum of polymeric materials such as polyethylene and polypropylene. Hence, it is desirable to increase the yield of light olefins via selective cracking of heavy oil fractions. In this study, zeolite grown on SiC was used as the catalyst for the model cracking reaction of n-heptane. The catalytic cracking of n-heptane was performed in a fixed bed reactor (12 mm i.d.) at three different temperatures (425, 450 and 475 °C) and at atmospheric pressure. A carrier gas (N₂) was mixed with n-heptane at a ratio of 90:10 (N₂:n-heptane), and the gaseous mixture was introduced into the fixed bed reactor. Various flow rates of the reactants were tested to increase the yield of ethylene and propylene. For comparison purposes, a commercial zeolite was also tested in addition to the zeolite on SiC. The products were analyzed using an Agilent gas chromatograph (GC-9860) equipped with a flame ionization detector (FID). The GC is connected online with the reactor, and all the cracking tests were successfully reproduced.
Keywords: cracking, catalyst, evaluation, ethylene, heptane, propylene
Procedia PDF Downloads 137
11493 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation
Authors: Jia-Chao Wang
Abstract:
The cosmic microwave background radiation (CMB) freed during the recombination era can be considered as a photon source of small duration; a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer’s location, one can imagine that the CMB photons originating from the nearby shells would reach and pass the observer first, and those in shells farther away would follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate from shells at R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent − R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation. In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It is demonstrated in the literature that if the CMB is assumed to be a blackbody at recombination (about 3000 K), then it will remain so over time under cosmological redshift and homogeneous expansion of space, but with the temperature lowered (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubt on that and suggest that the model may have an additional challenge to deal with.
Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon
Procedia PDF Downloads 121
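The quoted event-horizon radius can be checked, approximately, with a short numerical integration of the flat ΛCDM Friedmann equation; the density parameters below are assumed Planck-2015-like values rather than numbers taken from the paper, and the proper event-horizon distance at recombination comes out close to 57 Mly.

```python
# Hedged sketch: proper distance to the event horizon at recombination in flat LCDM.
import numpy as np
from scipy.integrate import quad

H0 = 67.7 * 1.0227e-3                  # Hubble constant in 1/Gyr (67.7 km/s/Mpc, assumed)
Om, Orad, OL = 0.309, 9.0e-5, 0.691    # matter, radiation, dark-energy densities (assumed)
c = 1.0                                # units: distances in Gly, times in Gyr, c = 1 Gly/Gyr

def H(a):
    """Hubble rate as a function of the scale factor a."""
    return H0 * np.sqrt(Orad / a**4 + Om / a**3 + OL)

a_rec = 1.0 / 1090.0                   # scale factor at recombination (z ~ 1089)
integrand = lambda a: c / (a * a * H(a))

# Comoving distance light can still cover after recombination (split for accuracy)
chi = quad(integrand, a_rec, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]

R_event = a_rec * chi * 1000.0         # proper distance at recombination, Gly -> Mly
print(f"Event horizon at recombination: {R_event:.1f} Mly")   # close to the quoted 56.95 Mly
```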
11492 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City
Authors: Jothilakshmy Nagammal
Abstract:
This research paper deals with different case studies where Form-Based Codes are adopted in general, and the different implementation methods in particular are discussed, in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted and the various control regulations implemented in thirty-two different cities is done. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, and formulating the Form-Based Codes for the selected Special Districts in the study area in particular, on a street basis. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of more as linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process that is based on the system theory and system design principles, which are too often ill understood by the general population and policy makers. The system theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of the system theory. The proposed regulating plan, taking the transect as an organizing principle and Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system - a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. How the new Context Specific Planning Model is created with this approach and method towards achieving sustainability is explained in detail in this research paper.
Keywords: context based planning model, form based code, transect, systemic approach
Procedia PDF Downloads 338
11491 Investigating (Im)Politeness Strategies in Email Communication: The Case of Algerian PhD Supervisees and Irish Supervisors
Authors: Zehor Ktitni
Abstract:
In pragmatics, politeness is regarded as a feature of paramount importance to successful interpersonal relationships. On the other hand, emails have recently become one of the indispensable means of communication in educational settings. This research puts email communication at the core of the study and analyses it from a politeness perspective. More specifically, it endeavours to look closely at how the concept of (im)politeness is reflected through students’ emails. To this end, a corpus of Algerian supervisees’ email threads, exchanged with their Irish supervisors, was compiled. Leech’s model of politeness (2014) was selected as the main theoretical framework of this study, in addition to making reference to Brown and Levinson’s model (1987) as it is one of the most influential models in the area of pragmatic politeness. Further, some follow-up interviews are to be conducted with Algerian students to reinforce the results derived from the corpus. Initial findings suggest that Algerian Ph.D. students’ emails tend to include more politeness markers than impoliteness ones, they heavily make use of academic titles when addressing their supervisors (Dr. or Prof.), and they rely on hedging devices in order to sound polite.
Keywords: politeness, email communication, corpus pragmatics, Algerian PhD supervisees, Irish supervisors
Procedia PDF Downloads 71
11490 Applying Renowned Energy Simulation Engines to Neural Control System of Double Skin Façade
Authors: Zdravko Eškinja, Lovre Miljanić, Ognjen Kuljača
Abstract:
This paper is an overview of simulation tools used to model the specific thermal dynamics that occur while controlling a double skin façade. Research has been conducted on a simplified construction with a single zone where one side is glazed. Heat flow and temperature responses are simulated in three different simulation tools: IDA-ICE, EnergyPlus and HAMBASE. The excitation of the observed system, used in all simulations, was a temperature step of the exterior environment. Air infiltration, insulation and other disturbances are excluded from this research. Although such isolated behaviour is not possible in reality, the experiments are carried out to gain novel information about heat flow transients which are not observable under regular conditions. The results revealed new possibilities for adapting the parameters of the neural network regulator. Alongside the numerical simulations, the same set-up was also tested in a real-time experiment with a 1:18 scaled model and a thermal chamber. The comparison analysis brings out an interesting conclusion about simulation accuracy in this particular case.
Keywords: double skin façade, experimental tests, heat control, heat flow, simulated tests, simulation tools
Procedia PDF Downloads 235
11489 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychology Capital
Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon
Abstract:
Work-Family Conflict (WFC) has been an active research area for the past decades. WFC harms individuals and organizations, and it is ultimately expected to bring the cost of losses to the company in the long run. Research on WFC has mainly focused on its effects on organizational effectiveness and job attitudes such as Job Satisfaction, Organizational Commitment, and Turnover Intention. This study differs from previous research in its consequence variable: we selected the positive job attitude 'Work Engagement' as a consequence of WFC. This research has its primary purpose in identifying the negative effects of WFC, and it started out from the recognition that research on the direct influence of WFC on Work Engagement is lacking. Based on the COR (conservation of resources) theory and the JD-R (job demands-resources) model, an empirical study model to examine the negative effects of WFC, with Emotional Exhaustion as the link between WFC and Work Engagement, was suggested and validated. It was also analyzed how much Positive Psychological Capital may buffer the negative effects arising from WFC within this relationship, and a mediated moderation model in which Positive Psychological Capital controls the indirect effect of WFC on Work Engagement through Emotional Exhaustion was verified. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education services, and other sectors, of which 389 were used in the statistical analysis. The data were analyzed with the statistical packages SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0. Hierarchical regression analysis, the SPSS PROCESS macro, and the bootstrapping method were used for hypothesis testing. Results showed that all hypotheses were supported. First, WFC showed a negative effect on Work Engagement. Specifically, WIF appeared to have more negative effects than FIW. Second, Emotional Exhaustion was found to mediate the relationship between WFC and Work Engagement. Third, Positive Psychological Capital was shown to moderate the relationship between WFC and Emotional Exhaustion. Fourth, through integrated verification of the mediated moderation effect, Positive Psychological Capital was demonstrated to buffer the relationship among WFC, Emotional Exhaustion, and Work Engagement. Also, WIF showed more negative effects than FIW across the verification of all hypotheses. Finally, we discuss the theoretical and practical implications for research and management of WFC, and propose limitations and future research directions.
Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict
Procedia PDF Downloads 224
11488 Building a Composite Approach to Employees' Motivational Needs by Combining Cognitive Needs
Authors: Alexis Akinyemi, Laurene Houtin
Abstract:
Measures of employee motivation at work are often based on the theory of self-determined motivation, which implies that human resources departments and managers seek to motivate employees in the most self-determined way possible and use strategies to achieve this goal. In practice, they often tend to assess employee motivation and then adapt management to the most important source of motivation for their employees, for example by financially rewarding an employee who is extrinsically motivated, and by rewarding an intrinsically motivated employee with congratulations and recognition. Thus, the use of motivation measures contradicts theoretical positioning: theory does not provide for the promotion of extrinsically motivated behaviour. In addition, a corpus of social psychology linked to fundamental needs makes it possible to personally address a person’s different sources of motivation (need for cognition, need for uniqueness, need for effects and need for closure). By developing a composite measure of motivation based on these needs, we provide human resources professionals, and in particular occupational psychologists, with a tool that complements the assessment of self-determined motivation, making it possible to precisely address the objective of adapting work not to the self-determination of behaviours, but to the motivational traits of employees. To develop such a model, we gathered the French versions of the cognitive needs scales (need for cognition, need for uniqueness, need for effects, need for closure) and conducted a study with 645 employees of several French companies. On the basis of the data collected, we conducted a confirmatory factor analysis to validate the model, studied the correlations between the various needs, and highlighted the different reference groups that could be used when using these needs as a basis for interviews with employees (career, recruitment, etc.). The results showed a coherent model and the expected links between the different needs. Taken together, these results make it possible to propose a valid and theoretically adjusted tool to managers who wish to adapt their management to their employees’ current motivations, whether or not these motivations are self-determined.
Keywords: motivation, personality, work commitment, cognitive needs
Procedia PDF Downloads 124
11487 Experimental Verification of On-Board Power Generation System for Vehicle Application
Authors: Manish Kumar, Krupa Shah
Abstract:
The usage of renewable energy sources is increasing day by day to overcome the dependency on fossil fuels. Wind energy is considered a prominent source of renewable energy. This paper presents an approach for utilizing the wind energy obtained from a moving vehicle for cell-phone charging. The selection of the wind turbine, blades, generator, etc. is done to have the most efficient system. The calculation procedure for the power generated and the drag force is shown to assess the effectiveness of the proposal. The location of the turbine is selected such that the system remains symmetric, stable and has the maximum induced wind. The calculation of the generated power at different velocities is presented. Charging is achieved at a speed of 30 km/h, and the system works well up to 60 km/h. The proposed model seems very useful for people traveling long distances in the absence of mobile electricity. The model is very economical and easy to fabricate. It has very low weight and area, which makes it portable and comfortable to carry along. The practical results are shown by implementing the portable wind turbine system on a two-wheeler.
Keywords: cell-phone charging, on-board power generation, wind energy, vehicle
Procedia PDF Downloads 297
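The power and drag calculation mentioned above follows the standard relations P = ½ρAC_pv³ and F_d = ½ρAC_dv²; the sketch below tabulates both over the 30-60 km/h range using assumed values for the swept area, coefficients and conversion efficiency (they are not the paper's numbers).

```python
# Hedged sketch: available electrical power and added drag for an on-board turbine.
rho = 1.225   # air density, kg/m^3
area = 0.05   # turbine swept area, m^2 (assumed)
cp = 0.30     # power coefficient (assumed, below the Betz limit of ~0.593)
cd = 1.0      # drag coefficient of the turbine assembly (assumed)
eta = 0.80    # generator plus rectifier efficiency (assumed)

for v_kmh in (30, 40, 50, 60):
    v = v_kmh / 3.6                                   # induced wind speed, m/s
    p_elec = 0.5 * rho * area * cp * v**3 * eta       # electrical power, W
    drag = 0.5 * rho * area * cd * v**2               # extra drag on the vehicle, N
    print(f"{v_kmh:>2} km/h: {p_elec:5.1f} W, {drag:4.1f} N drag")
```

With these assumed figures, roughly 4-5 W is available at 30 km/h, which is broadly consistent with the claim that cell-phone charging starts at that speed.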
11486 Characterization of a Dentigerous Cyst Cell Line and Its Secretion of Metalloproteinases
Authors: Muñiz-Lino Marcos A.
Abstract:
The ectomesenchymal tissues involved in tooth development and their remnants are the origin of different odontogenic lesions, including tumors and cysts of the jaws, with a wide range of clinical behaviors. A dentigerous cyst (DC) represents approximately 20% of all cases of odontogenic cysts, and it has been demonstrated that it can develop into benign and malignant odontogenic tumors. DC is characterized by bone destruction of the area surrounding the crown of a tooth that has not erupted, and it contains liquid. The treatment of odontogenic tumors and cysts usually involves a partial or total removal of the jaw, causing important secondary co-morbidities. However, the molecules implicated in DC pathogenesis, as well as in its development into odontogenic tumors, remain unknown. A cellular model may be useful to study these molecules, but such a model has not been established yet. Here, we report the establishment of a cell culture derived from a dentigerous cyst. This cell line was named DeCy-1. In spite of their ectomesenchymal morphology, DeCy-1 cells express epithelial markers such as cytokeratins 5, 6, and 8. Furthermore, these cells express the ODAM protein, which is present in odontogenesis and in dental follicles, indicating that DeCy-1 cells are derived from odontogenic epithelium. Analysis of this cell line by electron microscopy showed that it has a high vesicular activity, suggesting that DeCy-1 could secrete molecules that may be involved in DC pathogenesis. Thus, the secreted proteins were analyzed by SDS-PAGE, where we observed approximately 11 bands. In addition, the capacity of these secretions to degrade proteins was analyzed by gelatin substrate zymography. A degradation band of about 62 kDa was found in these assays. Western blot assays suggested that matrix metalloproteinase 2 (MMP-2) is responsible for this protease activity. Thus, our results indicate that the established cell line derived from DC is a useful in vitro model to study the biology of this odontogenic lesion and its participation in the development of odontogenic tumors.
Keywords: dentigerous cyst, ameloblastoma, MMP-2, odontogenic tumors
Procedia PDF Downloads 45
11485 Compressible Lattice Boltzmann Method for Turbulent Jet Flow Simulations
Authors: K. Noah, F.-S. Lien
Abstract:
In Computational Fluid Dynamics (CFD), there are a variety of numerical methods, some of which depend on macroscopic model representations that can be solved by finite-volume, finite-element or finite-difference methods, while others rely on a microscopic description. However, the lattice Boltzmann method (LBM) is considered to be a mesoscopic particle method, with its scale lying between the macroscopic and microscopic scales. The LBM works well for solving incompressible flow problems, but certain limitations arise when solving compressible flows, particularly at high Mach numbers. An improved lattice Boltzmann model for compressible flow problems is presented in this research study. A higher-order Taylor series expansion of the Maxwell equilibrium distribution function is used to overcome the limitations of the LBM when solving high-Mach-number flows. Large eddy simulation (LES) is implemented in the LBM to simulate turbulent jet flows. The results have been validated with available experimental data for turbulent compressible free jet flow at subsonic speeds.
Keywords: compressible lattice Boltzmann method, multiple relaxation times, large eddy simulation, turbulent jet flows
Procedia PDF Downloads 275
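A hedged sketch of the central ingredient, the equilibrium distribution extended with higher-order velocity terms, is shown below for a D2Q9 lattice with cs² = 1/3. The third-order terms follow the generic Hermite-expansion form often used when pushing LBM towards higher Mach numbers; they illustrate the idea only and are not the authors' exact formulation.

```python
# Hedged sketch: D2Q9 equilibrium populations with optional third-order terms.
import numpy as np

w = np.array([4/9] + [1/9]*4 + [1/36]*4)                   # lattice weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], float)  # lattice velocities

def f_eq(rho, u, third_order=True):
    """Equilibrium populations at one node (density rho, velocity u of shape (2,))."""
    eu = e @ u                                    # e_i . u for each direction
    uu = u @ u                                    # u . u
    feq = 1.0 + 3.0*eu + 4.5*eu**2 - 1.5*uu       # standard second-order expansion
    if third_order:
        # O(u^3) Hermite terms (assumed generic form of the higher-order expansion)
        feq += 4.5*eu**3 - 4.5*eu*uu
    return rho * w * feq

print(f_eq(1.0, np.array([0.1, 0.05])))
```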
11484 Thermohydraulic Performance of Double Flow Solar Air Heater with Corrugated Absorber
Authors: S. P. Sharma, Som Nath Saha
Abstract:
This paper deals with the analytical investigation of the thermal and thermohydraulic performance of double flow solar air heaters with corrugated and flat plate absorbers. A mathematical model of the double flow solar air heater is presented, and a computer program in the C++ language is developed to estimate the outlet temperature of the air for the evaluation of thermal and thermohydraulic efficiency by solving the governing equations numerically using relevant correlations for heat transfer coefficients. The results obtained from the mathematical model are compared with the available experimental results, and the agreement is found to be reasonably good. The results show that double flow solar air heaters have higher efficiency than the conventional solar air heater, and the double flow corrugated absorber is superior to the flat plate double flow solar air heater. It is also observed that the thermal efficiency increases with an increase in mass flow rate; however, the thermohydraulic efficiency increases with an increase in mass flow rate up to a certain limit, attains a maximum value, and thereafter decreases sharply.
Keywords: corrugated absorber, double flow, solar air heater, thermohydraulic efficiency
Procedia PDF Downloads 316
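The two performance figures compared above can be expressed compactly as a thermal efficiency η_th = ṁc_p(T_o − T_i)/(IA_c) and an effective (thermohydraulic) efficiency in which the pumping power, converted to equivalent thermal energy through a conversion factor, is debited from the useful gain. The sketch below uses commonly assumed forms and a placeholder conversion factor; it is not the authors' C++ program.

```python
# Hedged sketch: thermal vs. thermohydraulic efficiency of a solar air heater.
def thermal_efficiency(m_dot, cp, t_out, t_in, irradiance, area):
    """eta_th = useful heat gain / solar radiation incident on the collector."""
    return m_dot * cp * (t_out - t_in) / (irradiance * area)

def thermohydraulic_efficiency(m_dot, cp, t_out, t_in, irradiance, area,
                               pump_power, conversion_factor=0.18):
    """Effective efficiency: fan/pump power is converted to equivalent thermal
    energy with an assumed conversion factor before being subtracted."""
    q_useful = m_dot * cp * (t_out - t_in)
    return (q_useful - pump_power / conversion_factor) / (irradiance * area)

# Example: 0.02 kg/s of air heated by 18 K under 900 W/m^2 on a 1.2 m^2 collector
print(thermal_efficiency(0.02, 1006.0, 318.0, 300.0, 900.0, 1.2))             # ~0.34
print(thermohydraulic_efficiency(0.02, 1006.0, 318.0, 300.0, 900.0, 1.2, 2.0))
```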
11483 Spatiotemporal Neural Network for Video-Based Pose Estimation
Authors: Bin Ji, Kai Xu, Shunyu Yao, Jingjing Liu, Ye Pan
Abstract:
Human pose estimation is a popular research area in computer vision because of its important applications in human-machine interfaces. In recent years, 2D human pose estimation based on convolutional neural networks has made great progress. However, more and more practical applications involve video-based tasks, so it is natural to consider how to combine spatial and temporal information to achieve a balance between computing cost and accuracy. To address this issue, this study proposes a new spatiotemporal model, namely Spatiotemporal Net (STNet), to combine both temporal and spatial information more rationally. As a result, the predicted keypoint heatmaps are potentially more accurate and spatially more precise. While preserving recognition accuracy, the algorithm deals with the spatiotemporal series in a decoupled way, which greatly reduces the computation of the model and thus the resource consumption. This study demonstrates the effectiveness of our network on the Penn Action Dataset, and the results indicate the superior performance of our network over existing methods.
Keywords: convolutional long short-term memory, deep learning, human pose estimation, spatiotemporal series
Procedia PDF Downloads 150
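The 'convolutional long short-term memory' listed in the keywords suggests how spatial feature maps and temporal state can be fused; the PyTorch sketch below shows a generic ConvLSTM cell driven over a clip of backbone features, not the STNet architecture itself, and the channel and keypoint counts are assumed.

```python
# Hedged sketch: generic ConvLSTM cell for fusing per-frame feature maps.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # a single convolution produces all four gates from [input, hidden]
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Run the cell over T frames of backbone features, then predict keypoint heatmaps.
cell = ConvLSTMCell(in_ch=64, hid_ch=64)
head = nn.Conv2d(64, 13, kernel_size=1)      # 13 keypoints, as in Penn Action (assumed)
frames = torch.randn(8, 64, 56, 56)          # T x C x H x W per-frame features
h = torch.zeros(1, 64, 56, 56)
c = torch.zeros_like(h)
for t in range(frames.shape[0]):
    h, c = cell(frames[t:t + 1], (h, c))
heatmaps = head(h)                           # heatmaps for the last frame
```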
11482 Integrating Data Envelopment Analysis and Variance Inflation Factor to Measure the Efficiency of Decision Making Units
Authors: Mostafa Kazemi, Zahra N. Farkhani
Abstract:
This paper proposes an integrated Data Envelopment Analysis (DEA) and Variance Inflation Factor (VIF) model for measuring the technical efficiency of decision making units. The model is validated using a set of 69 sales representatives of dairy products. The analysis is done in two stages: in the first stage, the VIF technique is used to distinguish independent effective factors of the resellers, and in the second stage, DEA is used to measure efficiency under both constant and variable returns to scale. Further, DEA is used to examine the effect of environmental factors on efficiency. The results of this paper indicate an average managerial efficiency of 83% among the dairy product sales representatives as a whole. In addition, technical and scale efficiency were calculated as 96% and 80%, respectively. 38% of the sales representatives have a technical efficiency of 100%, and 72% of the sales representatives are quite efficient in terms of managerial efficiency. High levels of relative efficiency indicate a good condition for sales representative efficiency.
Keywords: data envelopment analysis (DEA), relative efficiency, sales representatives’ dairy products, variance inflation factor (VIF)
Procedia PDF Downloads 570
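The first-stage VIF screening can be sketched with statsmodels as below; the candidate input factors are placeholder names (the paper's actual factors are not listed here), and the cut-off of 5 is a common rule of thumb rather than a threshold stated by the authors.

```python
# Hedged sketch: stage-one VIF screening before the DEA efficiency analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({                      # placeholder factors for 69 representatives
    "visits": rng.normal(50, 10, 69),
    "orders": rng.normal(120, 30, 69),
    "travel_km": rng.normal(800, 150, 69),
})
Xc = sm.add_constant(X)

vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns, name="VIF")
print(vif)
keep = vif[vif < 5].index.tolist()      # retain weakly collinear factors (assumed cut-off)
print("Factors passed to DEA:", keep)
```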
11481 Assessments of Some Environment Variables on Fisheries at Two Levels: Global and FAO Major Fishing Areas
Authors: Hyelim Park, Juan Martin Zorrilla
Abstract:
Climate change influences ocean ecosystem functioning very widely and in various ways. The consequences of climate change for marine ecosystems include an increase in temperature and irregular behavior of some solute concentrations. These changes would affect fisheries catches in several ways. Our aim is to assess the quantitative change in the contribution of fishery catches over time and to express it through four environment variables: Sea Surface Temperature (SST4) and the concentrations of Chlorophyll (CHL), Particulate Inorganic Carbon (PIC) and Particulate Organic Carbon (POC), at two spatial scales: global and the nineteen FAO Major Fishing Areas divisions. Data collection was based on the FAO FishStatJ 2014 database as well as MODIS Aqua satellite observations from 2002 to 2012. Some data had to be corrected and interpolated using existing methods. As a result, a multivariable regression model for average global fisheries catches contained the temporal mean of SST4, the standard deviation of SST4, the standard deviation of CHL and the standard deviation of PIC. A global vector autoregressive (VAR) model showed that SST4 was a statistical (Granger) cause of global fishery catches. To accommodate varying fishery conditions and the influence of the climate change variables, a model was constructed for each FAO major fishing area. From the management perspective, some limitations of the FAO marine areas division should be recognized, which opens the possibility of discussing the subdivision of the areas into smaller units. Furthermore, the contribution changes of fishery species and the possible environment factors for specific species should be treated at various scale levels.
Keywords: fisheries catch, FAO FishStatJ, MODIS Aqua, sea surface temperature (SST), chlorophyll, particulate inorganic carbon (PIC), particulate organic carbon (POC), VAR, Granger causality
Procedia PDF Downloads 485
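The time-series step, fitting a VAR and testing whether SST4 Granger-causes global catches, can be sketched with statsmodels as follows; the eleven annual observations are simulated placeholders, not the FishStatJ/MODIS series.

```python
# Hedged sketch: VAR fit and Granger-causality check for SST4 -> catches.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
idx = pd.date_range("2002", periods=11, freq="YS")          # 2002-2012, annual
sst4 = rng.normal(18.0, 0.4, len(idx))
catch = 80.0 + 2.0 * sst4 + rng.normal(0.0, 1.0, len(idx))  # toy dependence on SST4
df = pd.DataFrame({"catch": catch, "sst4": sst4}, index=idx)

var_fit = VAR(df).fit(maxlags=1)      # short annual series -> keep the lag order small
print(var_fit.summary())

# Does sst4 help predict catch? (second column is tested as the cause of the first)
grangercausalitytests(df[["catch", "sst4"]], maxlag=1)
```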
11480 Performance Analysis of a Hybrid Channel for Foglet Assisted Smart Asset Reporting
Authors: Hasan Farahneh
Abstract:
Smart asset management along roadsides and in deserted areas is a topic that has received little attention. Most of the existing work addresses emergency reporting services in intelligent transportation systems (ITS) and rural areas, but not much covers asset reporting. Currently available asset management mechanisms are based on scheduled maintenance and do not effectively report emergency situations in a timely manner. This paper is the continuation of our previous work, in which we proposed the usage of Foglets and a VLC link between smart vehicles and roadside assets. In this paper, we propose a hybrid communication system for asset management and an emergency reporting architecture for smart transportation. We incorporate Foglets along with visible light communication (VLC) and radio frequency (RF) communication. We present the channel model and parameters of a hybrid model to support an intelligent transportation system (ITS). Simulations show a high improvement in the system performance in terms of communication range and received data. We present a comparative analysis of the hybrid ITS system.
Keywords: Internet of Things, Foglets, VLC, RF, smart vehicle, roadside asset management
Procedia PDF Downloads 136
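For the VLC leg of the hybrid link, the line-of-sight channel is commonly described by the Lambertian DC gain H(0) = (m+1)A cosᵐ(φ) T_s(ψ) g(ψ) cos(ψ) / (2πd²); the sketch below implements this textbook formula with assumed transmitter and receiver parameters, since the paper's own channel parameters are not reproduced here.

```python
# Hedged sketch: Lambertian line-of-sight DC gain for the VLC part of the hybrid link.
import numpy as np

def vlc_los_gain(d, phi, psi, half_angle_deg=60.0, area=1e-4,
                 ts=1.0, fov_deg=70.0, n_ref=1.5):
    """DC gain between an LED (irradiance angle phi) and a photodiode
    (incidence angle psi) at distance d; angles in radians (assumed parameters)."""
    m = -np.log(2.0) / np.log(np.cos(np.radians(half_angle_deg)))  # Lambertian order
    fov = np.radians(fov_deg)
    if psi > fov:
        return 0.0                                                 # outside receiver FOV
    g = n_ref**2 / np.sin(fov)**2                                  # optical concentrator gain
    return (m + 1) * area / (2 * np.pi * d**2) * np.cos(phi)**m * ts * g * np.cos(psi)

# Roadside asset 10 m from a vehicle lamp, with modest misalignment
print(f"LOS gain: {vlc_los_gain(10.0, np.radians(20), np.radians(15)):.2e}")
```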
Procedia PDF Downloads 13611479 Deriving an Index of Adoption Rate and Assessing Factors Affecting Adoption of an Agroforestry-Based Farming System in Dhanusha District, Nepal
Authors: Arun Dhakal, Geoff Cockfield, Tek Narayan Maraseni
Abstract:
This paper attempts to fill a gap in measuring adoption in agroforestry studies. It explains the derivation of an index of adoption rate in a Nepalese context and examines the factors affecting adoption of an agroforestry-based land management practice (AFLMP) in the Dhanusha District of Nepal. Data about the different farm practices were collected during focus group discussions, and data on the factors (bio-physical, socio-economic) influencing adoption were collected from randomly selected households using a household survey questionnaire. A multivariate regression model was used to determine the factors. The variables found to significantly affect adoption of the AFLMP were: farm size, availability of irrigation water, education of household heads, agricultural labour force, frequency of visits by extension workers, expenditure on farm input purchases, the household’s experience in agroforestry, and distance from home to government forest. The regression model explained about 75% of the variation in the adoption decision. The model rejected ‘erosion hazard’, ‘flood hazard’, and ‘gender’ as determinants of adoption, which, in the case of a single agroforestry practice, were major variables and played a positive role. Of the eight variables, farm size played the most powerful role in explaining the variation in adoption, followed by availability of irrigation water and education of household heads. The results of this study suggest that policies promoting the provision of irrigation water, extension services, and motivation to obtain higher education would probably provide an incentive to adopt agroforestry elsewhere in the Terai of Nepal. Keywords: agroforestry, adoption index, determinants of adoption, step-wise linear regression, Nepal
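A minimal sketch of the two steps, under assumed column names and synthetic data: compute a household-level adoption index, then regress it on candidate determinants with a backward step-wise OLS (statsmodels). The weights used to build the index and all data values are assumptions for illustration only.

```python
# Minimal sketch: (i) a hypothetical household-level adoption index, and
# (ii) backward step-wise OLS of the index on candidate determinants.
# Column names, index weights, and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "farm_size":        rng.gamma(2.0, 0.5, n),     # ha
    "irrigation":       rng.integers(0, 2, n),      # water available (0/1)
    "education":        rng.integers(0, 16, n),     # years of schooling of head
    "labour_force":     rng.integers(1, 6, n),
    "extension_visits": rng.poisson(3, n),
    "dist_to_forest":   rng.uniform(0.5, 10, n),    # km
})
# Hypothetical adoption index in [0, 1]
df["adoption_index"] = np.clip(
    0.15 * df["farm_size"] + 0.2 * df["irrigation"] + 0.02 * df["education"]
    + rng.normal(0, 0.1, n), 0, 1)

def backward_stepwise(y, X, alpha=0.05):
    """Drop the least significant regressor until all p-values fall below alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.max() <= alpha:
            return fit
        cols.remove(pvals.idxmax())
    return None

result = backward_stepwise(df["adoption_index"], df.drop(columns="adoption_index"))
if result is not None:
    print(result.summary())
```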
Procedia PDF Downloads 50511478 On Elastic Anisotropy of Fused Filament Fabricated Acrylonitrile Butadiene Styrene Structures
Authors: Joseph Marae Djouda, Ashraf Kasmi, François Hild
Abstract:
Fused filament fabrication is one of the most widespread additive manufacturing techniques because of its low-cost implementation. Its initial development was based on part fabrication with thermoplastic materials. The influence of manufacturing parameters such as the filament orientation through the nozzle, the deposited layer thickness, or the deposition speed on the mechanical properties of the parts has been widely investigated experimentally. Remarkable variations in anisotropy as a function of the filament path during fabrication have been recorded. However, constitutive models describing the resulting mechanical properties are still lacking. In this study, integrated digital image correlation (I-DIC) is used to identify the mechanical constitutive parameters of two configurations of ABS samples: +/-45° and a so-called “oriented deposition,” in which the filament was deposited so as to follow the principal strain direction of the sample. An identification scheme is developed that reduces the gap between simulation and experiment directly from images recorded on a single sample (a single-edge notched tension specimen). Macroscopic and mesoscopic analyses are conducted from images recorded on both sample surfaces during the tensile test. Elastic and elastoplastic models in isotropic and orthotropic frameworks have been established. It appears that, independently of the sample configuration (filament orientation during fabrication), the isotropic elastoplastic model gives a correct description of the sample behavior. It is worth noting that in this model the number of constitutive parameters is limited compared with the orthotropic elastoplastic model. This means that the anisotropy of the architectured 3D-printed ABS parts can be neglected when establishing the macroscopic behavior description. Keywords: elastic anisotropy, fused filament fabrication, Acrylonitrile butadiene styrene, I-DIC identification
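The identification principle can be illustrated with a much-simplified stand-in for the I-DIC scheme: adjust the constitutive parameters by least squares so that a forward model reproduces the "measured" kinematic data. The plane-stress closed-form response below replaces the finite-element simulation of the notched specimen, and all parameter values and data are assumptions.

```python
# Minimal sketch of FEMU/I-DIC-type identification: tune (E, nu) so that a
# forward model matches measured strains. The closed-form plane-stress model
# stands in for a finite-element simulation; values are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def forward_strains(params, sigma_xx):
    """Plane-stress uniaxial response: eps_xx, eps_yy for a given stress history."""
    E, nu = params
    eps_xx = sigma_xx / E
    eps_yy = -nu * sigma_xx / E
    return np.c_[eps_xx, eps_yy]

# "Measured" strains, synthesised with E = 2.2 GPa, nu = 0.38 plus noise,
# standing in for fields extracted from the DIC images
rng = np.random.default_rng(2)
sigma = np.linspace(1e6, 3e7, 30)                                  # Pa
measured = forward_strains((2.2e9, 0.38), sigma) + rng.normal(0, 2e-5, (30, 2))

def residual(params):
    """Gap between simulated and measured strain fields, flattened."""
    return (forward_strains(params, sigma) - measured).ravel()

fit = least_squares(residual, x0=[1.5e9, 0.3], bounds=([1e8, 0.0], [1e10, 0.5]))
E_hat, nu_hat = fit.x
print(f"identified E = {E_hat/1e9:.2f} GPa, nu = {nu_hat:.3f}")
```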
Procedia PDF Downloads 12811477 Effect of Engineered Low Glycemic Foods on Cancer Progression and Healthy State
Authors: C. Panebianco, K. Adamberg, S. Adamberg, C. Saracino, M. Jaagura, K. Kolk, A. Di Chio, P. Graziano, R. Vilu, V. Pazienza
Abstract:
Background/Aims: Despite recent advances in treatment options, only a modest impact on the outcome of pancreatic cancer (PC) has been observed so far. Short-term fasting cycles have the potential to improve the efficacy of chemotherapy against PC. However, diseased people may refuse to follow a fasting regimen, and fasting may worsen the weight loss that often occurs in cancer patients. Therefore, alternative approaches are needed. The aim of this study was to assess the effect of an Engineered Low Glycemic Food (ELGIF)-mimicking diet on the growth of cancer cell lines in vitro and in an in vivo pancreatic cancer mouse xenograft model. Materials and Methods: BxPC-3, MiaPaca-2, and Panc-1 cells were cultured in control and ELGIF-mimicking culture conditions to evaluate tumor growth and proliferation pathways. Pancreatic cancer xenograft mice were subjected to the ELGIF diet to assess tumor volume and weight compared with mice fed a control diet. Results: Pancreatic cancer cells cultured in ELGIF-mimicking medium showed decreased proliferation compared with those cultured in standard medium. Consistently, xenograft pancreatic cancer mice subjected to the ELGIF diet displayed a significant decrease in tumor growth. Conclusion: The antiproliferative effect of the ELGIF diet in vitro is associated with decreased tumor progression in the in vivo PC xenograft mouse model. These results suggest that engineered dietary interventions could serve as a supportive, synergistic approach to enhance the efficacy of existing cancer treatments in pancreatic cancer patients. Keywords: functional food, microbiota, mouse model, pancreatic cancer
Procedia PDF Downloads 29211476 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important developments in long-term production scheduling and optimization algorithms, as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to incorporate grade uncertainty in the strategic mine schedule and to produce a more profitable and risk-aware production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method and the Harris Hawks optimization (HHO) metaheuristic to solve the LTPSOP under grade uncertainty. In this study, HHO is employed to update the Lagrange coefficients. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions are considerably improved compared with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and the ALR-subgradient method. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is presented. The framework shows the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP. By considering grade uncertainty in the hybrid model framework, geological risk could be controlled more effectively than with the traditional procedure. Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
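To illustrate the grade-estimation step mentioned above, the sketch below trains a Random Forest regressor on synthetic drill-hole composites (coordinates to gold grade) and predicts block grades on a regular grid with scikit-learn. The hole data, block size, and hyper-parameters are illustrative assumptions, not the case-study values.

```python
# Minimal sketch of the grade-estimation step: a Random Forest regressor trained
# on drill-hole composites (coordinates -> gold grade) and used to estimate the
# grade of each block. Synthetic data and hyper-parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples = 500
xyz = rng.uniform([0, 0, -200], [1000, 1000, 0], size=(n_samples, 3))   # m
# Hypothetical grade trend plus noise (g/t), standing in for assay composites
grade = 1.5 * np.exp(-((xyz[:, 0] - 500)**2 + (xyz[:, 1] - 500)**2) / 2e5) \
        + 0.1 * rng.standard_normal(n_samples)
grade = np.clip(grade, 0, None)

rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, random_state=0)
print("CV R^2:", cross_val_score(rf, xyz, grade, cv=5).mean())
rf.fit(xyz, grade)

# Estimate block grades on a regular 25 m grid (one bench at z = -50 m)
gx, gy = np.meshgrid(np.arange(12.5, 1000, 25), np.arange(12.5, 1000, 25))
blocks = np.c_[gx.ravel(), gy.ravel(), np.full(gx.size, -50.0)]
block_grades = rf.predict(blocks)
print("mean block grade (g/t):", round(float(block_grades.mean()), 3))
```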
Procedia PDF Downloads 107