Search results for: lumped parameters model
21864 Design and Development of Chassis Made of Composite Material
Authors: P. Ravinder Reddy, Chaitanya Vishal Nalli, B. Tulja Lal, Anusha Kankanala
Abstract:
The chassis frame of an automobile with different sections has been considered for different loads. Orthotropic materials are selected to achieve stability by varying the fiber angle, fiber thickness, laminates, fiber properties, matrix properties, and elastic ratios. The geometric model of the chassis frame is built with a parametric modelling approach, and the frame is analyzed with ANSYS FEA software. Static and dynamic analyses of the chassis frame are carried out by varying the geometric parameters, orthotropic properties, materials, and sections. The static and dynamic responses are discussed in detail.
Keywords: chassis frame, dynamic response, geometric model, orthotropic materials
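As an illustration of how the fiber angle enters such a parametric study, the sketch below evaluates the transformed reduced stiffness terms of classical lamination theory for a single orthotropic ply; the E-glass/epoxy property values are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def q_bar(e1, e2, g12, nu12, theta_deg):
    """Transformed reduced stiffness of a unidirectional ply rotated
    by theta (classical lamination theory)."""
    nu21 = nu12 * e2 / e1
    d = 1.0 - nu12 * nu21
    q11, q22, q12, q66 = e1 / d, e2 / d, nu12 * e2 / d, g12
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    qb11 = q11*m**4 + 2*(q12 + 2*q66)*m**2*n**2 + q22*n**4
    qb22 = q11*n**4 + 2*(q12 + 2*q66)*m**2*n**2 + q22*m**4
    qb12 = (q11 + q22 - 4*q66)*m**2*n**2 + q12*(m**4 + n**4)
    qb66 = (q11 + q22 - 2*q12 - 2*q66)*m**2*n**2 + q66*(m**4 + n**4)
    return qb11, qb22, qb12, qb66

# Illustrative E-glass/epoxy properties (GPa); sweep the fiber angle
for theta in (0, 30, 45, 90):
    print(theta, [round(q, 2) for q in q_bar(39.0, 8.6, 3.8, 0.28, theta)])
```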
Procedia PDF Downloads 333
21863 Quantification and Thermal Behavior of Rice Bran Oil, Sunflower Oil and Their Model Blends
Authors: Harish Kumar Sharma, Garima Sengar
Abstract:
Rice bran oil is considered nutritionally superior to other fats/oils. Therefore, model blends prepared from pure rice bran oil (RBO) and sunflower oil (SFO) were explored for changes in different physicochemical parameters. A repeated deep-fat frying process was carried out using dried potato in order to study the thermal behaviour of pure rice bran oil, sunflower oil, and their model blends. Pure rice bran oil and sunflower oil showed good thermal stability during the repeated deep-fat frying cycles, although the model blend constituting 60% RBO + 40% SFO showed better suitability during repeated deep-fat frying than the remaining blended oils. The quantification of pure rice bran oil in the blended oils, physically refined rice bran oil (PRBO):SnF (sunflower oil), was carried out by different methods. The study revealed that regression equations based on the oryzanol content, palmitic acid composition, and iodine value can be used for the quantification. Rice bran oil can easily be quantified in blended oils based on the oryzanol content by HPLC, even at the 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level, whereas the method based on ultrasonic velocity, acoustic impedance, and relative association showed initial promise for the quantification.
Keywords: rice bran oil, sunflower oil, frying, quantification
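A regression calibration of the kind the abstract describes can be sketched in a few lines; the oryzanol figures below are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical calibration data: % RBO in blend vs. oryzanol content
# (mg/100 g) measured by HPLC; values are illustrative only.
rbo_pct  = np.array([0, 1, 5, 10, 20, 40, 60, 80, 100])
oryzanol = np.array([2, 18, 85, 165, 330, 660, 990, 1320, 1650])

slope, intercept = np.polyfit(rbo_pct, oryzanol, 1)  # least-squares line

def pct_rbo(oryzanol_measured):
    """Invert the regression to estimate % RBO in an unknown blend."""
    return (oryzanol_measured - intercept) / slope

print(f"estimated RBO content: {pct_rbo(500.0):.1f} %")
```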
Procedia PDF Downloads 308
21862 A Stochastic Model to Predict Earthquake Ground Motion Duration Recorded in Soft Soils Based on Nonlinear Regression
Authors: Issam Aouari, Abdelmalek Abdelhamid
Abstract:
For seismologists, the characterization of seismic demand should include both the amplitude and the duration of strong shaking. The duration of ground shaking is one of the key parameters in the earthquake-resistant design of structures. This paper proposes a nonlinear statistical model to estimate earthquake ground motion duration in soft soils using multiple seismicity indicators. Three definitions of ground motion duration proposed in the literature were applied, and a comparative study was used to select the most significant definition for predicting the duration. A stochastic model is presented for the McCann and Shah method using nonlinear regression analysis based on a data set of moment magnitude, source-to-site distance, and site conditions. The data set is taken from the PEER strong motion databank and contains shallow earthquakes from different regions of the world (America, Turkey, London, China, Italy, Chile, Mexico, etc.). The main emphasis is placed on soft site conditions. The predictive relationship has been developed based on 600 records and three input indicators, and the results have been compared with other published models. It was found that the proposed model can predict earthquake ground motion duration in soft soils for different regions and site conditions.
Keywords: duration, earthquake, prediction, regression, soft soil
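Such a nonlinear regression can be fitted with scipy.optimize.curve_fit; the functional form and the synthetic records below are assumptions for illustration and do not reproduce the paper's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical functional form for ground-motion duration on soft soil:
# D = a * exp(b * Mw) + c * R   (Mw: moment magnitude, R: distance, km).
def duration_model(X, a, b, c):
    mw, r = X
    return a * np.exp(b * mw) + c * r

rng = np.random.default_rng(0)                    # synthetic records
mw = rng.uniform(5.0, 7.5, 600)
r = rng.uniform(5.0, 150.0, 600)
d_obs = duration_model((mw, r), 0.5, 0.6, 0.08) + rng.normal(0, 2, 600)

params, _ = curve_fit(duration_model, (mw, r), d_obs, p0=(1.0, 0.5, 0.1))
print("fitted (a, b, c):", np.round(params, 3))
```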
Procedia PDF Downloads 153
21861 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for SAR data with high time resolution, GEO (geosynchronous orbit) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR through a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation is accomplished in four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the small primitives (triangles) visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since the simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling. Since the usual 'stop-and-go' model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generate the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for detecting moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
Procedia PDF Downloads 177
21860 Seismic Behavior of Pile-Supported Bridges Considering Soil-Structure Interaction and Structural Non-Linearity
Authors: Muhammad Tariq A. Chaudhary
Abstract:
Soil-structure interaction (SSI) in bridges under seismic excitation is a complex phenomenon involving coupling between the non-linear behavior of bridge pier columns and SSI in the soil-foundation part. It is common practice in SSI studies to model bridge piers as linear elastic while treating the soil and foundation with a non-linear or equivalent-linear modeling approach; consequently, the contribution of the soil and foundation to the SSI phenomenon is disproportionately highlighted. The present study considered the non-linear behavior of bridge piers in an FEM model of a 4-span, pile-supported bridge that was designed for five different soil conditions in a moderate seismic zone. The FEM model of the bridge system was subjected to a suite of 21 actual ground motions representative of three levels of earthquake hazard (i.e., Design Basis Earthquake, Functional Evaluation Earthquake, and Maximum Considered Earthquake). Results of the FEM analysis were used to delineate the influence of pier column non-linearity and SSI on critical design parameters of the bridge system. It was found that pier column non-linearity influenced the bridge lateral displacement and base shear more than SSI for the majority of the analysis cases for the class of bridge investigated.
Keywords: bridge, FEM model, reinforced concrete pier, pile foundation, seismic loading, soil-structure interaction
Procedia PDF Downloads 232
21859 Numerical Modeling of the Cavitating Flow in Injection Nozzle Holes
Authors: Ridha Zgolli, Hatem Kanfoudi
Abstract:
Cavitating flows inside a diesel injection nozzle hole were simulated using a mixture model, and a 2D numerical model is proposed in this paper to simulate steady cavitating flows. The Reynolds-averaged Navier-Stokes equations are solved for the liquid and vapor mixture, which is treated as a single fluid with a variable density expressed as a function of the vapor volume fraction. Closure for this variable is provided by a transport equation with a source term (TEM); the processes of evaporation and condensation are governed by changes in pressure within the flow. The source term is implemented in the CFD code ANSYS CFX. The influence of numerical and physical parameters is presented in detail, and the numerical simulations are in good agreement with the experimental data for steady flow.
Keywords: cavitation, injection nozzle, numerical simulation, k–ω
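The single-fluid closure at the heart of such mixture models reduces to a volume-fraction-weighted density; a minimal sketch, with assumed diesel property values:

```python
# Homogeneous-mixture closure used in such cavitation models: the liquid
# and vapor are treated as one fluid whose density varies with the vapor
# volume fraction alpha_v. Property values below are for illustration.
RHO_LIQUID = 830.0   # diesel fuel density, kg/m^3 (assumed)
RHO_VAPOR = 0.1      # fuel vapor density, kg/m^3 (assumed)

def mixture_density(alpha_v: float) -> float:
    """rho_m = alpha_v * rho_v + (1 - alpha_v) * rho_l"""
    return alpha_v * RHO_VAPOR + (1.0 - alpha_v) * RHO_LIQUID

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"alpha_v = {a:.2f} -> rho_m = {mixture_density(a):8.2f} kg/m^3")
```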
Procedia PDF Downloads 401
21858 Pulsed Electric Field as Pretreatment for Different Drying Methods in Chilean Abalone (Concholepas Concholepas) Mollusk: Effects on Product Physical Properties and Drying Methods Sustainability
Authors: Luis González-Cavieres, Mario Perez-Won, Anais Palma-Acevedo, Gipsy Tabilo-Munizaga, Erick Jara-Quijada, Roberto Lemus-Mondaca
Abstract:
In this study, a pulsed electric field (PEF, 2.0 kV/cm) was used as a pretreatment for three drying methods applied to Chilean abalone mollusk: vacuum microwave drying (VMD), freeze-drying (FD), and hot-air drying (HAD). Drying parameters, quality, energy consumption, and sustainability parameters were evaluated. PEF+VMD showed better values than the other drying systems, with drying times 67% and 83% lower than PEF+FD and FD, respectively. In the quality parameters, PEF+FD showed a significantly lower hardness value (250 N) and a lower color change (ΔE = 12). In the case of HAD, the PEF application did not significantly influence the processing. In energy parameters, VMD and PEF+VMD reduced energy consumption and CO2 emissions.
Keywords: PEF technology, vacuum microwave drying, energy consumption, CO2 emissions
Procedia PDF Downloads 91
21857 A Framework for Consumer Selection on Travel Destinations
Authors: J. Rhodes, V. Cheng, P. Lok
Abstract:
The aim of this study is to develop a parsimonious model that explains the effect of different stimuli on a tourist's intention to visit a new destination. The model uses destination trust and interest as mediating variables and was tested using two different types of stimuli; both studies empirically supported the proposed model. Furthermore, the first study revealed that advertising has a stronger effect than positive online reviews, and the second study found that, in this context, the peripheral route of the elaboration likelihood model has stronger influence than the central route.
Keywords: advertising, electronic word-of-mouth, elaboration likelihood model, intention to visit, trust
Procedia PDF Downloads 458
21856 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used; both contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. Protein regression is the problem to solve in the first dataset, while variety classification is the problem in the second. Deep convolutional neural networks (CNNs) have the potential to utilize the spatio-spectral correlations within a hyperspectral image to estimate the qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
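Of the preprocessing techniques listed, standard normal variate (SNV) is easy to show concretely: each spectrum is centered and scaled by its own mean and standard deviation. A minimal NumPy sketch on synthetic spectra:

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum
    individually, removing multiplicative scatter effects."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Toy batch: 4 spectra with 224 bands (e.g., a 900-1700 nm NIR range)
rng = np.random.default_rng(1)
batch = rng.uniform(0.2, 0.8, size=(4, 224))
print(snv(batch).mean(axis=1))  # ~0 per spectrum after SNV
```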
Procedia PDF Downloads 99
21855 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision-making analysis is used to help an organization select the best KM tool that fits and serves its needs. The AHP model is used, based on a previous study, to highlight and identify the main criteria and sub-criteria that are incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately for incorporation into the GP model. The main goal is to combine the GP model with the AHP model to ensure that the KM tool selection respects the resource constraints. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
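In AHP, the criteria weights are conventionally derived as the normalized principal eigenvector of a pairwise comparison matrix, together with a consistency check; the 3x3 matrix below is an invented example, not the paper's.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty 1-9 scale) for
# three KM-tool selection criteria; values are illustrative only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # normalized priority weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1) # consistency index
cr = ci / 0.58                                 # random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```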
Procedia PDF Downloads 385
21854 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation
Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang
Abstract:
The use of left ventricle assist devices (LVADs) in patients with heart failure has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform or pulsatile flow at the OG. We have hypothesized that the optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of the cardiac output at the aortic arch together with other pertinent hemodynamic quantities for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling, for both image segmentation and fluid dynamics, with cutting-edge GPU parallel computing. It first segments the aortic artery from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echo ultrasound image of the same patient, into the computational model to quantify the 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes, so it has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomoses (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five random patient cases will be evaluated.
Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics
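The fifty-four scenarios are the Cartesian product of the four design factors listed in the abstract (3 x 3 x 3 x 2 = 54); a sketch of how such a study matrix might be enumerated:

```python
from itertools import product

# The abstract's evaluation grid: 3 x 3 x 3 x 2 = 54 scenarios per patient.
og_sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratios = ["1:1", "1:2", "1:3"]          # LVAD : native heart pumping
og_angles = ["inclined upward", "perpendicular", "inclined downward"]
inflows = ["uniform", "pulsatile"]

scenarios = list(product(og_sites, pump_ratios, og_angles, inflows))
print(len(scenarios))        # 54
for site, ratio, angle, inflow in scenarios[:3]:
    # each tuple would parameterize one simulation run
    print(site, ratio, angle, inflow)
```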
Procedia PDF Downloads 133
21853 An ANOVA Approach for the Process Parameters Optimization of Al-Si Alloy Sand Casting
Authors: Manjinder Bajwa, Mahipal Singh, Manish Nagpal
Abstract:
This research paper proposes a novel approach using the ANOVA technique for the strategic investigation of process parameters and their effects on the mechanical properties of an aluminium alloy casting. The two process parameters considered here were the permeability of the sand and the pouring temperature of the aluminium alloy. ANOVA has been employed for the first time to determine the effects of these selected parameters on the impact strength of the alloy. The experimental results show that the proposed technique has great potential for analyzing the sand casting process. Using this approach, we determined the treatment mean square, response mean square, and mean square of error to be 8.54, 8.255, and 0.435, respectively. The research concluded that, at the 5% level of significance, the permeability of the sand is the more significant parameter influencing the impact strength of the cast alloy.
Keywords: aluminium alloy, pouring temperature, permeability of sand, impact strength, ANOVA
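The significance conclusion follows from an F-test on the reported mean squares (treatment MS over error MS); the degrees of freedom below are assumed, since the abstract does not state them.

```python
from scipy.stats import f as f_dist

# F-test built from the mean squares reported in the abstract.
# Degrees of freedom are assumed (not stated in the abstract): a 2-level
# factor -> df1 = 1, and, say, df2 = 4 error degrees of freedom.
ms_treatment, ms_error = 8.54, 0.435
df1, df2 = 1, 4  # hypothetical

F = ms_treatment / ms_error
p_value = f_dist.sf(F, df1, df2)
print(f"F = {F:.2f}, p = {p_value:.4f}")  # compare p against alpha = 0.05
```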
Procedia PDF Downloads 448
21852 Long Short-Term Memory Neural Networks for Human Driving Behavior Modelling
Authors: Lu Zhao, Nadir Farhi, Yeltsin Valero, Zoi Christoforou, Nadia Haddadou
Abstract:
In this paper, a long short-term memory (LSTM) neural network model is proposed to replicate car-following and lane-changing behaviors simultaneously in road networks. By combining two kinds of LSTM layers with three input designs of the neural network, six variants of the LSTM model were created. These models were trained and tested on the NGSIM 101 dataset, and the results were evaluated in terms of longitudinal speed and lateral position, respectively. The LSTM model was then compared, for the speed decision part, with a classical car-following model, the intelligent driver model (IDM), and additionally with a model using classical neural networks. The comparison shows that the LSTM model achieves higher accuracy than the physical IDM model for car-following behavior and performs better on both car-following and lane-changing behavior than the classical neural network model.
Keywords: traffic modeling, neural networks, LSTM, car-following, lane-change
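A minimal sketch of such a speed-decision network, assuming a simple input design of gap, relative speed, and ego speed (none of the paper's six variants is reproduced exactly):

```python
import torch
import torch.nn as nn

class CarFollowLSTM(nn.Module):
    """Minimal sketch, not the paper's exact architecture: map a history
    of (gap, relative speed, ego speed) to the next longitudinal speed."""
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last time step

model = CarFollowLSTM()
history = torch.randn(8, 50, 3)         # 8 samples, 5 s of history at 10 Hz
print(model(history).shape)             # torch.Size([8, 1])
```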
Procedia PDF Downloads 261
21851 Application of the Total Least Squares Estimation Method for an Aircraft Aerodynamic Model Identification
Authors: Zaouche Mohamed, Amini Mohamed, Foughali Khaled, Aitkaid Souhila, Bouchiha Nihad Sarah
Abstract:
The aerodynamic coefficients are important in evaluating an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical model of a flight simulator. The study of the aerodynamic aspects of flying systems is a reserved domain, inaccessible to most developers: wind tunnel tests to extract aerodynamic forces and moments require specific and expensive means, and the glaring lack of published documentation in this field makes the determination of aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model using an aircraft in a virtual simulated environment. To identify the system, we present an environment framework based on the Software-in-the-Loop (SIL) methodology and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. We propose the total least squares estimation (TLSE) technique to identify the aerodynamic parameters, which are unknown, variable, classified, and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values; all other variations are considered modeling uncertainties that will be compensated by the robustness of the piloting control.
Keywords: aircraft aerodynamic model, total least squares estimation, piloting the aircraft, robust control, Microsoft Flight Simulator, MQ-1 Predator
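Unlike ordinary least squares, total least squares also accounts for noise in the regressors, which suits data logged from a simulator; below is the standard SVD construction on a toy two-coefficient problem, not the paper's flight data.

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b via SVD, accounting for
    errors in both the regressors and the observations."""
    n = A.shape[1]
    C = np.column_stack([A, b])            # augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    return -V[:n, n:] / V[n, n:]           # x_tls

# Toy identification: y = 2*x1 - 0.5*x2 with noise on both A and b
rng = np.random.default_rng(2)
A_true = rng.normal(size=(200, 2))
b_true = A_true @ np.array([[2.0], [-0.5]])
x_hat = tls(A_true + 0.05 * rng.normal(size=A_true.shape),
            b_true + 0.05 * rng.normal(size=b_true.shape))
print(x_hat.ravel())                        # ~ [2.0, -0.5]
```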
Procedia PDF Downloads 287
21850 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia
Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay
Abstract:
Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option for maximizing crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires trials repeated for many years under different weather conditions and crop management practices. Crop models such as AquaCrop are useful for avoiding such long-term experimentation. Therefore, the overall objective of this study was to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons by deploying four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model, and the model was validated for its performance in simulating the green canopy and aboveground biomass of maize for the varying sowing dates based on the calibrated parameters. The results showed good agreement between measured and simulated values of canopy cover and biomass yield in terms of the overall R2, Ef, d, and RMSE statistics. Considering the overall values of the statistical test indicators, the model predicted maize growth and biomass yield successfully, making it a valuable decision-support tool. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date under climate and soil conditions similar to the study area, instead of conducting long-term experimentation.
Keywords: AquaCrop model, calibration, validation, simulation
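The four agreement statistics named in the abstract (R2, Nash-Sutcliffe efficiency Ef, Willmott's index d, and RMSE) can be computed as follows; the observed and simulated biomass values are placeholders, since the abstract omits the numbers.

```python
import numpy as np

def agreement_stats(obs, sim):
    """Common model-evaluation statistics: R2, RMSE, Nash-Sutcliffe
    efficiency (Ef), and Willmott's index of agreement (d)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    ef = 1.0 - ss_res / ss_tot                    # Nash-Sutcliffe
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    d = 1.0 - ss_res / np.sum((np.abs(sim - obs.mean())
                               + np.abs(obs - obs.mean())) ** 2)
    return r2, rmse, ef, d

obs = [2.1, 4.5, 7.8, 10.2, 12.9]   # hypothetical biomass, t/ha
sim = [2.4, 4.2, 8.1, 9.8, 13.3]
print([round(v, 3) for v in agreement_stats(obs, sim)])
```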
Procedia PDF Downloads 67
21849 CFD Simulation for Flow Behavior in Boiling Water Reactor Vessel and Upper Pool under Decommissioning Condition
Authors: Y. T. Ku, S. W. Chen, J. R. Wang, C. Shih, Y. F. Chang
Abstract:
In order to respond to the nuclear-free homeland policy decision, Taiwan Power Company (TPC) will carry out the decommissioning project of the Kuosheng Nuclear Power Plant (KSNPP) to meet the regulatory requirements in the near future. In this study, the computational fluid dynamics (CFD) methodology has been employed to develop a flow prediction model for a boiling water reactor (BWR) with an upper pool under the decommissioning stage. The model can be utilized to investigate the flow behavior when the vessel is combined with the upper pool and the continuity cooling system. At normal operating conditions, different parameters are obtained for the full fluid region, including the velocity, mass flow, and mixing phenomena in the reactor pressure vessel (RPV) and upper pool. Through the efforts of this study, an integrated simulation model is developed for flow field analysis of the decommissioning KSNPP under normal operating conditions, and it is expected to provide a basis for TPC's future analysis applications.
Keywords: CFD, BWR, decommissioning, upper pool
Procedia PDF Downloads 267
21848 Modeling Approach to Better Control Fouling in a Submerged Membrane Bioreactor for Wastewater Treatment: Development of Analytical Expressions in Steady-State Using ASM1
Authors: Benaliouche Hana, Abdessemed Djamal, Meniai Abdessalem, Lesage Geoffroy, Heran Marc
Abstract:
This paper presents a dynamic mathematical model of activated sludge which is able to predict the formation and degradation kinetics of SMP (soluble microbial products) in membrane bioreactor systems. The model is based on a calibrated version of ASM1 extended with the theory of production and degradation of SMP and was calibrated on experimental data from a membrane bioreactor (MBR) pilot plant. Analytical expressions have been developed describing the concentrations of the main state variables present in the sludge matrix, with the inclusion of only six additional linear differential equations. The objective is to present a new dynamic mathematical model of activated sludge capable of predicting the formation and degradation kinetics of SMP (UAP and BAP) in the submerged membrane bioreactor (BRMI), operating at low organic load (C/N = 3.5), for two sludge retention times (SRT) fixed at 40 days and 60 days, in order to study their impact on membrane fouling. The modeling study was carried out under the steady-state condition. The analytical expressions were then validated by comparing their results with those obtained by simulations using the GPS-X (Hydromantis) software. These equations made it possible, by means of ASM1 modeling approaches, to identify the operating and kinetic parameters and help predict membrane fouling.
Keywords: Activated Sludge Model No. 1 (ASM1), membrane bioreactor modeling, soluble microbial products, UAP, BAP, SMP modeling, MBR, heterotrophic biomass
Procedia PDF Downloads 294
21847 The Rupture Potential of Nerve Tissue Constrained Intracranial Saccular Aneurysm
Authors: M. Alam, P. Seshaiyer
Abstract:
The rupture predictability of an intracranial aneurysm is one of the most important parameters for physicians in surgical treatment. As most intracranial aneurysms are asymptomatic, the rupture potential of both symptomatic and asymptomatic lesions remains relatively unknown. Moreover, an intracranial aneurysm constrained by nerve tissue is a common scenario for a physician to deal with during treatment. Here, we perform computational modeling of a nerve-tissue-constrained intracranial saccular aneurysm to show the protective role of the constraining tissue on the aneurysm. A comparative parametric study of the model is also performed, taking a long constraint, medium constraint, short constraint, point contact, narrow-neck aneurysm, and wide-neck aneurysm as parameters for the analysis. Results show that a contact-constrained aneurysm generates less stress near the fundus compared to an unconstrained aneurysm, and hence the constraint works as a protective wall that keeps the aneurysm from rupturing.
Keywords: rupture potential, intracranial saccular aneurysm, anisotropic hyper-elastic material, finite element analysis
Procedia PDF Downloads 211
21846 Use of Predictive Food Microbiology to Determine the Shelf-Life of Foods
Authors: Fatih Tarlak
Abstract:
Predictive microbiology can be considered an important field in food microbiology, in which predictive models are used to describe microbial growth in different food products. Predictive models estimate the growth of microorganisms quickly, efficiently, and cost-effectively compared to traditional enumeration methods, which are laborious, expensive, and time-consuming. The mathematical models used in predictive microbiology are mainly categorised as primary and secondary models. The primary models are mathematical equations that define the growth data as a function of time under constant environmental conditions. The secondary models describe the effects of environmental factors, such as temperature, pH, and water activity (aw), on the parameters of the primary models, including the maximum specific growth rate and the lag phase duration, which are the most critical growth kinetic parameters. The combination of primary and secondary models provides valuable information for setting limits for the quantitative detection of microbial spoilage and for assessing product shelf-life.
Keywords: shelf-life, growth model, predictive microbiology, simulation
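As one concrete primary/secondary pairing (the abstract does not commit to particular equations), the sketch below couples the modified Gompertz primary model with the Ratkowsky square-root secondary model; all parameter values are illustrative.

```python
import numpy as np

def gompertz(t, n0, nmax, mu_max, lag):
    """Modified Gompertz primary model: log10 counts vs. time (h)."""
    a = nmax - n0
    return n0 + a * np.exp(-np.exp(np.e * mu_max / a * (lag - t) + 1.0))

def ratkowsky_mu(temp_c, b=0.03, t_min=-2.0):
    """Ratkowsky square-root secondary model: growth rate vs. temperature."""
    return (b * (temp_c - t_min)) ** 2

# Predict growth at a 10 C storage temperature (all values illustrative)
t = np.linspace(0.0, 120.0, 7)                      # hours
mu = ratkowsky_mu(10.0)                             # log10 CFU/g per h
print(np.round(gompertz(t, n0=3.0, nmax=9.0, mu_max=mu, lag=12.0), 2))
```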
Procedia PDF Downloads 211
21845 Optimization of Electrical Discharge Machining Parameters in Machining AISI D3 Tool Steel by Grey Relational Analysis
Authors: Othman Mohamed Altheni, Abdurrahman Abusaada
Abstract:
This study presents the optimization of multiple performance characteristics [material removal rate (MRR), surface roughness (Ra), and overcut (OC)] of hardened AISI D3 tool steel in electrical discharge machining (EDM) using the Taguchi method and grey relational analysis. The machining process parameters selected were the pulse current Ip, pulse-on time Ton, pulse-off time Toff, and gap voltage Vg. Based on ANOVA, the pulse current is found to be the most significant factor affecting the EDM process. The optimized process parameters, which simultaneously lead to a higher MRR, lower Ra, and lower OC, are then verified through a confirmation experiment. The validation experiment shows improved MRR, Ra, and OC when the Taguchi method and grey relational analysis are used.
Keywords: EDM parameters, grey relational analysis, Taguchi method, ANOVA
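Grey relational analysis folds the three conflicting responses into a single grade: normalize each response, take the deviation from the ideal, apply the grey relational coefficient, and average. A sketch with invented trial data and the customary distinguishing coefficient zeta = 0.5:

```python
import numpy as np

# Hypothetical responses for 4 EDM trials: [MRR, Ra, OC]
data = np.array([[12.0, 3.2, 0.20],
                 [18.0, 4.1, 0.28],
                 [15.0, 2.8, 0.22],
                 [20.0, 4.8, 0.31]])
larger_better = np.array([True, False, False])  # MRR up; Ra, OC down

# Step 1: grey relational normalization to [0, 1]
lo, hi = data.min(axis=0), data.max(axis=0)
norm = np.where(larger_better, (data - lo) / (hi - lo),
                (hi - data) / (hi - lo))

# Step 2: grey relational coefficient and grade
delta = 1.0 - norm                               # deviation from the ideal
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)
print("grey relational grades:", np.round(grade, 3))  # best = highest
```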
Procedia PDF Downloads 294
21844 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food can be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative information such as in-depth interviews, focus groups, and surveys, was employed to find the priority sectors. An agri-food model was developed which includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, which is based on the land-grant university approach of the U.S., not a common practice in Turkey.
Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
21843 A Benchmark for Some Elastic and Mechanical Properties of Uranium Dioxide
Abstract:
We present some elastic parameters of cubic fluorite-type uranium dioxide (UO2), obtained with a recent EAM-type interatomic potential through geometry optimization calculations. Typical cubic elastic constants, the bulk modulus, shear modulus, Young's modulus, and other related elastic parameters were calculated in this research. After the calculations, we compared our results not only with the available theoretical data but also with previous experimental results. Our results are consistent with experiments and compare well with the former theoretical results for the considered parameters of UO2.
Keywords: UO2, elastic constants, bulk modulus, mechanical properties
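For a cubic crystal such as fluorite-type UO2, polycrystalline moduli follow directly from the three independent elastic constants; the sketch below applies the Voigt-average formulas to approximate experimental C_ij values for UO2 quoted from the literature.

```python
# Polycrystalline moduli from cubic single-crystal constants (Voigt
# averages). The C_ij values below are approximate literature numbers
# for UO2, quoted only to illustrate the formulas.
C11, C12, C44 = 389.0, 119.0, 60.0  # GPa (approximate experimental values)

B = (C11 + 2.0 * C12) / 3.0               # bulk modulus
G = (C11 - C12 + 3.0 * C44) / 5.0         # shear modulus (Voigt average)
E = 9.0 * B * G / (3.0 * B + G)           # Young's modulus
nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))  # Poisson's ratio

print(f"B = {B:.1f} GPa, G = {G:.1f} GPa, E = {E:.1f} GPa, nu = {nu:.3f}")
```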
Procedia PDF Downloads 412
21842 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles
Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards
Abstract:
There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile and then recover the organism to quantitate and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities in less-than-optimal conditions interacting with the skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between lab testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform representing the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered and quantitated to determine antimicrobial activity. Important parameters to consider include the identification and assembly of the defined polymicrobial community, the growth conditions that allow the establishment of a stable community, and the choice of skin surrogate. This model could answer the following questions: 1) Is the treated textile effective against the target organism? 2) How is the defined community affected? 3) Does the textile cause unwanted effects on the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.
Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome
Procedia PDF Downloads 137
21841 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which causes damage to internal components or crews. The penetration equations are derived from penetration experiments, which require a long time and great effort, and they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth; however, it is very important to model the targets and select the input parameters properly in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including the mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to optimize the overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiments. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and only published papers provide them, for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
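The tuning loop described above reduces to computing an RMS error against the experimental depths for each candidate parameter setting; a sketch with invented depth values:

```python
import numpy as np

# Sketch of the accuracy metric used to tune the simulation inputs:
# RMS error between simulated penetration depths and published
# experimental depths. All depth values below are made up.
depth_exp = np.array([31.0, 44.0, 58.0, 70.0])       # mm, experiments

def rms_error(depth_sim):
    return np.sqrt(np.mean((depth_sim - depth_exp) ** 2))

# Compare candidate mesh sizes (each row would be one batch of ANSYS runs)
candidates = {0.9: np.array([27.0, 40.0, 52.0, 63.0]),
              0.7: np.array([29.0, 42.0, 56.0, 67.0]),
              0.5: np.array([31.5, 43.0, 58.5, 69.0])}
for mesh_mm, sim in sorted(candidates.items()):
    print(f"mesh {mesh_mm} mm -> RMS error {rms_error(sim):.2f} mm")
```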
Procedia PDF Downloads 401
21840 Stability Analysis of SEIR Epidemic Model with Treatment Function
Authors: Sasiporn Rattanasupha, Settapat Chinviriyasit
Abstract:
The treatment function adopts a continuous and differentiable form that can describe the effect of delayed treatment when the number of infected individuals increases and medical resources are limited. In this paper, an SEIR epidemic model with a treatment function is studied to investigate the dynamics of the model under the effect of treatment. It is assumed that the treatment rate is proportional to the number of infective patients. The stability of the model is analyzed, and the model is simulated to illustrate the analytical results and to investigate the effects of treatment on the spread of infection.
Keywords: basic reproduction number, local stability, SEIR epidemic model, treatment function
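A minimal sketch of such an SEIR system with a treatment term proportional to the infectives, integrated with scipy; the parameter values are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SEIR with a treatment term T(I) = r*I (rate proportional to the number
# of infectives, as assumed in the abstract). Parameter values are
# illustrative, not taken from the paper.
beta, sigma, gamma, r = 0.5, 1 / 5.0, 1 / 10.0, 0.05

def seir(t, y):
    s, e, i, rec = y
    n = s + e + i + rec
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i - r * i       # treatment removes r*I
    dr = gamma * i + r * i
    return [ds, de, di, dr]

sol = solve_ivp(seir, (0, 200), [990, 0, 10, 0], max_step=1.0)
print(f"peak infectives: {sol.y[2].max():.0f}")
```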
Procedia PDF Downloads 521
21839 A Data-Mining Model for Protection of FACTS-Based Transmission Line
Authors: Ashok Kalagura
Abstract:
This paper presents a data-mining model for fault-zone identification of flexible AC transmission system (FACTS)-based transmission lines including a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness in the ensemble of decision trees stacked inside the random forest model, it provides an effective decision for fault-zone identification. Half-cycle post-fault current and voltage samples from fault inception are used as the input vector against target output '1' for the fault after the TCSC/UPFC and '1' for the fault before the TCSC/UPFC. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including a noisy environment, and provides a reliability measure of 99% with a faster response time (3/4 cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
Keywords: distance relaying, fault-zone identification, random forests, RFs, support vector machine, SVM, thyristor-controlled series compensator, TCSC, unified power-flow controller, UPFC
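A minimal sketch of the random forest stage, with synthetic stand-ins for the half-cycle sample vectors and binary fault-zone labels (real features and labels would come from the fault simulations):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for half-cycle post-fault current/voltage samples: each row is
# one simulated fault, each column one sample of the input vector.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 32))
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # 1: after, 0: before device
# (synthetic labels only; real labels come from the fault simulations)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```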
Procedia PDF Downloads 423
21838 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials
Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov
Abstract:
Modern commercials presented on billboards, TV, and the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how information read in text messages is interpreted and understood, and there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis of this objective method. We studied the relationship between multimodal psychophysiological parameters and text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye-movement parameters and estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) reflecting the emotional state of the respondent during reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading of the text in commercials. Eye-movement parameters reflected the difficulties respondents had in perceiving ambiguous parts of the text. EEG dynamics in the alpha band reflected the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent, the meaning of the text, and the type of commercial. Together, the EEG and polygraph parameters also reflected respondents' mental difficulties in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology allows multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of a commercial's text on a percentage scale based on all the observed markers.
Keywords: reading, commercials, eye movements, EEG, polygraphic indicators
Procedia PDF Downloads 166
21837 Regionalization of IDF Curves by Interpolating Intensity and Adjustment Parameters - Application to Boyacá, Colombia
Authors: Pedro Mauricio Acosta, Carlos Andrés Caro
Abstract:
This research presents the regionalization of IDF (intensity-duration-frequency) curves for the department of Boyacá, Colombia, which comprises 16 towns, including the provincial capital, Tunja. For the regionalization, the adjustment parameters (u and alpha) of the IDF curves from the stations in the studied area were used; a similar regionalization was performed by interpolating intensities. For the regionalization by parameters, the intensity-duration-frequency curves were constructed using the ordinary moments and maximum likelihood estimation methods. The regionalization and interpolation of the data were performed with the assistance of ArcGIS software. Within the development of the project, the option providing the best level of reliability was sought in order to determine which of the regionalization approaches is best. In the case of the regionalization of intensities, the resulting isoline maps were each associated with a different return period and duration in order to build IDF curves across the studied area; in the case of the regionalization by parameters, maps associated with each parameter were produced.
Keywords: intensity-duration-frequency curves, regionalization, hydrology
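If, as is common in IDF work, the adjustment parameters u and alpha are the location and scale of a Gumbel distribution (an assumption here, since the abstract does not name the distribution), the intensity for any return period at an interpolated point follows from the Gumbel quantile:

```python
import numpy as np

def gumbel_intensity(u, alpha, return_period_yr):
    """Rainfall intensity for return period T from Gumbel location (u)
    and scale (alpha): the adjustment parameters interpolated across
    the region."""
    F = 1.0 - 1.0 / return_period_yr          # non-exceedance probability
    return u - alpha * np.log(-np.log(F))

# Hypothetical interpolated parameters at an ungauged town (mm/h)
u_hat, alpha_hat = 42.0, 11.5
for T in (2, 5, 10, 25, 50, 100):
    print(f"T = {T:3d} yr -> i = {gumbel_intensity(u_hat, alpha_hat, T):6.1f} mm/h")
```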
Procedia PDF Downloads 325
21836 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning newsvendor model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical newsvendor model in that we incorporate the human factor (specifically worker learning) and its influence on the unit processing costs into the model, which we describe using the well-known Wright's learning curve. Most of the assumptions of the classical newsvendor model are still maintained in our work, such as the constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the overstocking and understocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function will likely not be maintained, which calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a uniform distribution; if the demand follows a beta distribution with some specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
Keywords: inventory management, newsvendor model, order policy, worker learning
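A numeric sketch of the trade-off: a Monte Carlo estimate of the expected overage and underage cost under Uniform(0, D_max) demand, plus a Wright's-curve processing cost, minimized over a grid of order quantities; all cost figures and the 85% learning rate are assumptions.

```python
import numpy as np

# Expected cost of order quantity Q under Uniform(0, D_MAX) demand with
# lost sales, where cumulative processing cost follows Wright's curve.
C1, LEARN_RATE = 10.0, 0.85            # first-unit cost, 85% curve (assumed)
B = np.log2(LEARN_RATE)                # Wright exponent (negative)
H, P, D_MAX = 2.0, 25.0, 100.0         # leftover cost, shortage cost, demand cap

def processing_cost(q):
    """Approximate cumulative cost of producing q units under learning."""
    return C1 * q ** (1.0 + B) / (1.0 + B)

def expected_cost(q, n=200_000):
    d = np.random.default_rng(4).uniform(0.0, D_MAX, n)  # common demand draws
    overage = H * np.maximum(q - d, 0.0)
    underage = P * np.maximum(d - q, 0.0)                # lost-sales penalty
    return processing_cost(q) + np.mean(overage + underage)

qs = np.arange(10.0, 151.0, 5.0)
best = min(qs, key=expected_cost)
print(f"cost-minimizing Q ~ {best:.0f}")
```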
Procedia PDF Downloads 416
21835 Thermodynamically Predicting the Impact of Temperature on the Performance of Drilling Bits as a Function of Time
Authors: Talal Al-Bazali
Abstract:
Air drilling has recently received increasing acceptance by the oil and gas industry due to its unique advantages. The main advantages of air drilling include a higher rate of penetration, less formation damage, and a lower risk of loss of circulation. However, these advantages cannot be fully realized if thermal effects in air drilling are not well understood and minimized. Due to its high frictional coefficient, low heat conductivity, and high compressibility, air can impact the temperature distribution of the bit and thus affect bit performance. Based on energy and mass balances, a transient thermal model that predicts bit temperature is presented, along with numerical solutions. In addition, several important parameters that influence the bit temperature distribution are analyzed. Simulation results show that the bit temperature increases with increasing weight on bit and rotary speed but decreases as the standpipe pressure and flow rate increase. These results can be used to optimize drilling operations and flow parameters for improved bit performance, as shown in this paper.
Keywords: air drilling, rate of penetration, temperature, rotary speed
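A minimal lumped-parameter sketch of such a transient bit-temperature balance (frictional heating in, convective cooling by the air stream out); every coefficient below is assumed for illustration and is not one of the paper's calibrated values.

```python
import numpy as np

# Lumped-capacitance sketch of transient bit temperature: frictional heat
# from weight-on-bit (WOB) and rotary speed in, convective cooling by the
# air stream out. All coefficients are assumed for illustration.
M_CP = 4.0e3      # bit thermal mass, J/K
K_FRIC = 2.0e-4   # frictional heating coefficient, W per (N * rpm)
H_A = 15.0        # convective conductance to the air stream, W/K

def simulate(wob_n, rpm, t_air=40.0, t0=40.0, dt=1.0, t_end=600.0):
    """Explicit Euler integration of m*cp*dT/dt = Q_fric - h*A*(T - T_air)."""
    q_fric = K_FRIC * wob_n * rpm
    T = t0
    for _ in np.arange(0.0, t_end, dt):
        T += dt * (q_fric - H_A * (T - t_air)) / M_CP
    return T

# Higher WOB/rpm -> hotter bit, consistent with the reported trends
print(f"{simulate(50_000, 120):.1f} C vs {simulate(80_000, 180):.1f} C")
```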
Procedia PDF Downloads 285