Search results for: Computer Simulation
2058 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance, and giving them a scientific interpretation without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by the use of technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows the interventions and the MEA to be modified in such a way that the students may themselves identify erroneous solutions when carrying out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
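The kind of decision-from-sample computation the students carried out (in RStudio, per the abstract) can be sketched in a few lines of Python. The sample size, defect rate, and acceptance threshold below are hypothetical illustrations, not the activity's actual numbers:

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    # P(X <= k), summing the pmf
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Hypothetical decision rule: accept a batch if a sample of n = 20
# items contains at most 2 defectives, assuming a defect rate p = 0.10.
n, p = 20, 0.10
p_accept = binom_cdf(2, n, p)
print(round(p_accept, 4))  # 0.6769
```

The point of the activity, in these terms, is that the decision rests on the whole distribution of the sample statistic rather than on the observed sample alone.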
Procedia PDF Downloads 118
2057 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted to design attributes. The accuracy of a QFD process heavily depends on the data it handles, which are captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by various companies across the globe. These programs heavily rely on storage and retrieval of the data in a common database. This database must act as a reliable source, with a minimum of missing or erroneous values, in order to perform the actual prioritization. This paper introduces a missing/error data handler module which uses a Genetic Algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between the proposed data handler module-based deployment and manual deployment.
Keywords: hybrid data handler, QFD, prioritization, module-based deployment
Procedia PDF Downloads 297
2056 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion
Authors: Rui Liu, Klaus Greve
Abstract:
Navigation information is a kind of dynamic geographic information, and a navigation information system is a kind of special geographic information system. At present, there is much research on the application of centralized management and cross-integration of basic geographic information. However, the idea of information integration and sharing has not been deeply applied to research on navigation information services, and the imperfection of target coordination and information sharing mechanisms under specific navigation tasks has greatly affected the reliability and scientific soundness of navigation services such as path planning. Considering this, the project intends to study the multi-source information fusion and multi-objective united navigation information interaction mechanism: first, investigate the actual needs of navigation users in different areas, and establish a preliminary navigation information classification and importance level model; then, analyze the characteristics of remote sensing and GIS vector data, and design the fusion algorithm with a view to improving positioning accuracy and extracting navigation environment data. Finally, the project intends to analyze the features of the navigation information of land, sea and air navigation targets; design the united navigation data standard and navigation information sharing model under given navigation tasks; and establish a test navigation system for united navigation simulation experiments. The aim of this study is to explore the theory of united navigation services and optimize the navigation information service model, which will lay the theoretical and technological foundation for the united navigation of land, sea and air targets.
Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization
Procedia PDF Downloads 288
2055 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis
Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan
Abstract:
Carbon dioxide (CO2), alongside other gas emissions in the atmosphere, causes a greenhouse effect, resulting in an increase in the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions. Stationary vehicles with running engines produce more emissions than mobile ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transit (LRT). The data were gathered during the construction phase of the LRT by counting the number of vehicles on each approach of the intersection for 15 minutes during the evening rush hour over 1 week (6-7 pm, July 04-31, 2018) and then multiplying by 4 to obtain the hourly flow of vehicles. For analyzing the data, the microscopic simulation software "VISSIM" was used. Through the analysis, the traffic flow was processed in three stages: before implementation, during the construction phase, and after implementation of the light rail. Finally, the traffic results were input into another software package, "EnViVer", to calculate the amount of CO2 during 1 h. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.
Keywords: carbon dioxide, emission modeling, light rail, microscopic model, traffic flow
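The two arithmetic steps described above (scaling 15-minute counts to hourly flow, and the percentage drop in CO2) can be sketched directly; the vehicle count and CO2 masses below are hypothetical, used only to illustrate the calculations:

```python
def hourly_flow(count_15min):
    # Scale a 15-minute vehicle count to an hourly flow,
    # as done in the study (count multiplied by 4).
    return count_15min * 4

def co2_reduction_pct(before, after):
    # Percentage drop in CO2 between two scenarios.
    return 100.0 * (before - after) / before

# Hypothetical numbers for illustration only.
print(hourly_flow(320))                          # 1280 vehicles/h
print(round(co2_reduction_pct(54.0, 47.0), 1))   # 13.0 (percent)
```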
Procedia PDF Downloads 143
2054 Impact of Using Peer Instruction and PhET Simulations on the Motivation and Physics Anxiety
Authors: Jaypee Limueco
Abstract:
This research focused on the impact of Peer Instruction and PhET simulations on the level of motivation and Physics anxiety of Grade 9 students. Two groups of students were used in the study: the experimental group comprised 65 registered students, while the control group comprised 64 registered students. To determine the students' level of motivation in learning physics, the Physics Motivation Questionnaire was administered. To determine the level of Physics anxiety of the students in each group, the Physics Anxiety Rating Scale was used. Peer Instruction supplemented with PhET simulations was implemented in the experimental group, while the traditional lecture method was used in the control group. Both instruments were administered again after the implementation of the two different teaching approaches. The Wilcoxon signed-rank test was used to test for a significant difference between the pretest and posttest of each group, and the Mann-Whitney U test was used to test whether significant differences existed between the groups before and after instruction. Results showed that there was no significant difference between the levels of motivation and anxiety of the experimental and control groups before the implementation at the p<0.05 significance level, implying that the students had the same level of motivation and Physics anxiety before instruction. However, the results of both tests showed significant differences between the groups after instruction. A significant positive change was also found in the responses of the students in the experimental group, while no change was evident in the control group. The result of the Mann-Whitney U analysis shows that the change in the attributes of the students was caused by the treatment. Therefore, it is concluded that Peer Instruction and PhET simulations helped enhance students' motivation and minimize their anxiety towards Physics.
Keywords: anxiety, motivation, peer instruction, PhET simulations
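The Mann-Whitney U statistic used to compare the two groups can be written in a few lines of pure Python (a full test would compare U against its null distribution to obtain a p-value; the scores below are hypothetical, not the study's data):

```python
def mann_whitney_u(x, y):
    # Mann-Whitney U statistic for two independent samples:
    # the number of (x_i, y_j) pairs with x_i > y_j, ties counted as 0.5.
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical motivation scores for two small groups.
experimental = [78, 85, 90, 88, 84]
control = [70, 72, 75, 74, 71]
print(mann_whitney_u(experimental, control))  # 25.0: every pair favors the experimental group
```

When the two samples come from the same distribution, U is close to n*m/2; the further it is from that midpoint, the stronger the evidence of a group difference.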
Procedia PDF Downloads 356
2053 Simulation of a Three-Link, Six-Muscle Musculoskeletal Arm Activated by Hill Muscle Model
Authors: Nafiseh Ebrahimi, Amir Jafari
Abstract:
The study of humanoid characters is of great interest to researchers in the fields of robotics and biomechanics. One might want to know the forces and torques required to move a limb from an initial position to a desired destination. Inverse dynamics is a helpful method for computing the forces and torques for an articulated body limb; it enables us to know the joint torques required to rotate a link between two positions. Our goal in this study was to control a human-like articulated manipulator for a specific path-tracking task. For this purpose, the human arm was modeled as a three-link planar manipulator activated by the Hill muscle model. Applying a proportional controller, the values of the forces and torques applied to the joints were calculated by inverse dynamics, and then the joint and muscle force trajectories were computed and presented. More precisely, the kinematics of the muscle-joint space was first formulated, defining the relationship between the muscle lengths and the geometry of the links and joints. Second, the kinematics of the links was introduced to calculate the position of the end-effector in terms of the geometry. Then, the modeling of Hill muscle dynamics was considered, and after calculation of the joint torques, they were finally applied to the dynamics of the three-link manipulator obtained from the inverse dynamics to calculate the joint states and find and control the location of the manipulator's end-effector. The results show that the human arm model was successfully controlled to take the designated elliptical path precisely.
Keywords: arm manipulator, Hill muscle model, six-muscle model, three-link model
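A Hill-type muscle model of the general kind used above combines an activation level with force-length, force-velocity and passive-elastic terms. The sketch below uses common textbook curve shapes (Gaussian force-length, hyperbolic force-velocity) with illustrative parameter values, not those of the paper:

```python
import math

def hill_muscle_force(a, l_norm, v_norm, f_max=1000.0):
    """Simplified Hill-type muscle force (N).

    a      : activation in [0, 1]
    l_norm : fiber length / optimal fiber length
    v_norm : shortening velocity / max shortening velocity (positive = shortening)
    Curve shapes and constants are illustrative, not the paper's.
    """
    # Active force-length: Gaussian around the optimal length.
    f_l = math.exp(-((l_norm - 1.0) / 0.45) ** 2)
    # Force-velocity: hyperbolic drop with shortening speed
    # (lengthening branch simplified to 1.0 here).
    f_v = max(0.0, (1.0 - v_norm) / (1.0 + 4.0 * v_norm)) if v_norm >= 0 else 1.0
    # Passive elastic force, engaging beyond the optimal length.
    f_p = 0.05 * f_max * max(0.0, l_norm - 1.0) ** 2
    return a * f_max * f_l * f_v + f_p

print(round(hill_muscle_force(1.0, 1.0, 0.0), 1))  # isometric at optimal length -> 1000.0
```

In the paper's pipeline, forces of this form are mapped through the muscle-joint kinematics to joint torques and then into the three-link dynamics.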
Procedia PDF Downloads 142
2052 Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data, which can be multiplexed at the transmitter and demultiplexed at the receiver. Combinations of them have also been studied in the literature: 1) axial or perpendicular superposition of multiple optical vortices, or 2) combination with other laser beam types: Bessel, Airy. Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km and kr for the CGH generated from the helical phase object and from the conical phase object, respectively. These reunions of two CGHs are calculated to be phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam for investigation of the diffracted intensity pattern in the far field. For the parallel reunion of two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, is changed into an asymmetric intensity pattern: a number of circle arcs.
Both diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, it is observed that this asymmetric intensity pattern rotates around its centre: in the +1 diffraction order the rotation is anticlockwise, and in the -1 diffraction order the rotation is clockwise. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For the perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or the radial parameter and displayed successively. In all, the proposed method is exploited in terms of its constructive parameters, for the possibility offered by the combination of different types of beams, which can be used in robust optical communications.
Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
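The ingredients of the "reunion" can be simulated numerically with the standard cosine-interference form of a CGH. The sketch below builds a vortex hologram (helical phase, charge m, tilt km) and an axicon hologram (conical phase, radial factor r0, tilt kr) and averages them, as one reading of the parallel reunion; all parameter values are illustrative, not the paper's:

```python
import numpy as np

def reunion_cgh(n=512, m=3, r0=40.0, km=0.30, kr=0.05):
    """Sketch of the parallel reunion of two CGHs (parameters illustrative):
    one encodes a helical phase (topological charge m), the other a conical
    phase (radial factor r0, in pixels), each interfered with a plane wave
    of a different tilt (km, kr, in rad/pixel)."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    theta = np.arctan2(y, x)
    r = np.hypot(x, y)
    cgh_helical = 0.5 * (1 + np.cos(km * x - m * theta))           # vortex hologram
    cgh_conical = 0.5 * (1 + np.cos(kr * x - 2 * np.pi * r / r0))  # axicon hologram
    return 0.5 * (cgh_helical + cgh_conical)

mask = reunion_cgh()
print(mask.shape)  # (512, 512)
```

Addressed on a spatial light modulator, the far-field intensity of such a mask would be inspected with a 2D Fourier transform of the modulated field.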
Procedia PDF Downloads 311
2051 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological and sociological importance. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the consumption measured when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved, not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on both knowledge about the measurement process and the input quantities which influence the result of the measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has been monitored, and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features of our research work.
Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method
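The Monte Carlo route can be sketched by propagating input distributions through a deliberately simplified heating-demand model Q = U * A * dT * t. Only the 102 m² floor area comes from the abstract; the model form and the input distributions below are illustrative assumptions:

```python
import random, statistics

def simulate_consumption(n_runs=20000, seed=1):
    """Monte Carlo uncertainty propagation (in the spirit of GUM
    Supplement 1) through a toy heating-demand model.
    Input distributions are illustrative, not the monitored data."""
    random.seed(seed)
    samples = []
    for _ in range(n_runs):
        u = random.gauss(0.8, 0.05)   # W/m2K, assumed envelope U-value
        a = 102.0                     # m2, monitored surface (from the abstract)
        dt = random.gauss(15.0, 1.0)  # K, assumed indoor-outdoor difference
        t = 24 * 30 / 1000.0          # kh, one month of operation
        samples.append(u * a * dt * t)
    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    return mean, std  # consumption estimate (kWh) and its standard uncertainty

mean, std = simulate_consumption()
print(round(mean), round(std))
```

The empirical standard deviation of the output sample plays the role of the combined standard uncertainty that GUM obtains analytically from sensitivity coefficients.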
Procedia PDF Downloads 459
2050 Robust Image Registration Based on an Adaptive Normalized Mutual Information Metric
Authors: Huda Algharib, Amal Algharib, Hanan Algharib, Ali Mohammad Alqudah
Abstract:
Image registration is an important topic for many imaging systems and computer vision applications. Standard image registration techniques such as mutual information / normalized mutual information-based methods have limited performance because they do not consider the spatial information or the relationships between neighbouring pixels or voxels. In addition, the amount of image noise may significantly affect registration accuracy. Therefore, this paper proposes an efficient method that explicitly considers the relationships between adjacent pixels: the gradient information of the reference and scene images is extracted first, and then the cosine similarity of the extracted gradient information is computed and used to improve the accuracy of the standard normalized mutual information measure. Our experimental results on different data types (i.e., CT, MRI and thermal images) show that the proposed method outperforms a number of image registration techniques in terms of accuracy.
Keywords: image registration, mutual information, image gradients, image transformations
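The two ingredients named above, histogram-based NMI and the cosine similarity of gradient fields, can each be sketched with NumPy. How the paper combines them into one adaptive metric is not specified in the abstract, so only the ingredients are shown:

```python
import numpy as np

def gradient_cosine(a, b, eps=1e-12):
    # Mean cosine similarity between the gradient fields of two images.
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    dot = gax * gbx + gay * gby
    norm = np.hypot(gax, gay) * np.hypot(gbx, gby)
    return float(np.mean(dot / (norm + eps)))

def normalized_mi(a, b, bins=32):
    # Standard NMI from a joint histogram: (H(A) + H(B)) / H(A, B).
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(round(normalized_mi(img, img), 3))  # identical images -> 2.0
```

A registration loop would evaluate such a similarity measure over candidate transformations and keep the one that maximizes it.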
Procedia PDF Downloads 248
2049 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment
Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang
Abstract:
The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in the field of camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to a greater knowledge of vehicular operation and a closer resemblance to human behavior. However, the exclusive use of this technology still seems insufficient to fully control vehicular operation. To reveal the degree of accuracy of current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results obtained clearly outline the lack of reliability when using the AI model exclusively in the perception stage, thereby entailing the use of additional complementary sensors to improve safety and performance.
Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles
Procedia PDF Downloads 113
2048 Adsorption and Selective Determination of Ametryne in Food Samples Using Magnetically Separable Molecularly Imprinted Polymers
Authors: Sajjad Hussain, Sabir Khan, Maria Del Pilar Taboada Sotomayor
Abstract:
This work demonstrates the synthesis of magnetic molecularly imprinted polymers (MMIPs) for the determination of a selected pesticide (ametryne) using high performance liquid chromatography (HPLC). Computational simulation can assist the choice of the most suitable monomer for the synthesis of the polymers. The MMIPs were polymerized at the surface of Fe3O4@SiO2 magnetic nanoparticles (MNPs) using 2-vinylpyridine as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, and 2,2'-azobisisobutyronitrile (AIBN) as the radical initiator. A magnetic non-imprinted polymer (MNIP) was also prepared under the same conditions without the analyte. The MMIPs were characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis and Fourier transform infrared spectroscopy (FTIR). Pseudo-first-order and pseudo-second-order models were applied to study the kinetics of adsorption, and it was found that the adsorption process followed the pseudo-first-order kinetic model. Adsorption equilibrium data were fitted to the Freundlich and Langmuir isotherms, and the sorption equilibrium process was well described by the Langmuir isotherm model. The selectivity coefficients (α) of the MMIPs for ametryne with respect to atrazine, ciprofloxacin and folic acid were 4.28, 12.32, and 14.53, respectively. Spiked recoveries ranging between 91.33% and 106.80% were obtained. The results showed the high affinity and selectivity of the MMIPs for the pesticide ametryne in the food samples.
Keywords: molecularly imprinted polymer, pesticides, magnetic nanoparticles, adsorption
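The three models named above have simple closed forms, sketched below with hypothetical parameter values (the abstract reports the fitted α values but not the underlying constants):

```python
import math

def pseudo_first_order_q(t, q_e, k1):
    # Pseudo-first-order kinetics: q(t) = q_e * (1 - exp(-k1 * t))
    return q_e * (1.0 - math.exp(-k1 * t))

def langmuir_q(c_e, q_max, k_l):
    # Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

def selectivity_coefficient(kd_template, kd_competitor):
    # alpha = Kd(template) / Kd(competitor), with Kd = q_e / C_e
    return kd_template / kd_competitor

# Hypothetical distribution coefficients, for illustration only.
print(round(selectivity_coefficient(3.2, 0.75), 2))  # 4.27
```

Fitting q_max and K_L to measured (C_e, q_e) pairs, for example by linearizing C_e/q_e against C_e, is how the Langmuir description reported above would be obtained.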
Procedia PDF Downloads 486
2047 Android-Based Edugame Application for Earthquakes Disaster Mitigation Education
Authors: Endina P. Purwandari, Yolanda Hervianti, Feri Noperman, Endang W. Winarni
Abstract:
An earthquake disaster is an event that can threaten at any moment and cause damage and loss of life. An earthquake disaster mitigation game is a useful educational tool to enhance children's insight, knowledge, and understanding of the response to the impact of an earthquake. This study aims to build an educational game application on the Android platform as a learning medium for earthquake mitigation education and to determine the effect of the application on children's understanding of earthquake disaster mitigation. The method used was research and development: the development produced an edugame application for earthquake mitigation education, and the research involved elementary students as a sample to test the developed application. The research results were a valid Android-based edugame application and its measured effect on children's understanding. The application contains an earthquake simulation video, an earthquake mitigation video, and a game consisting of three stages: before the earthquake, when the earthquake occurs, and after the earthquake. The results of the feasibility test showed that this application falls into the category of 'Excellent', with an average percentage of 76% for the operation of the application, 67% for its appearance, and 74% for its contents. The student response rate was 80%, indicating a positive response to the application. The understanding test results showed that the average pretest score was 71.33 and the post-test score was 97.00; the t-test showed a computed t value of 8.02, exceeding the table t value of 2.001. This indicates that the Android-based earthquake disaster mitigation edugame application affects children's understanding of earthquake disaster mitigation.
Keywords: android, edugame, mitigation, earthquakes
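The paired t statistic behind the pretest/post-test comparison above can be computed in a few lines. The four scores below are hypothetical (the study's actual sample means were 71.33 and 97.00, with t = 8.02):

```python
import math, statistics

def paired_t(pre, post):
    # Paired-samples t statistic: t = mean(d) / (s_d / sqrt(n)),
    # where d are the per-student gains post - pre.
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(n))

# Hypothetical pre/post understanding scores for four children.
pre = [70, 72, 68, 75]
post = [95, 97, 96, 100]
print(round(paired_t(pre, post), 2))  # 34.33
```

The computed t is then compared against the critical (table) t value for n - 1 degrees of freedom, as done in the study.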
Procedia PDF Downloads 364
2046 Ghost Frequency Noise Reduction through Displacement Deviation Analysis
Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran
Abstract:
Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. The ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise in end-of-line machine as well as vehicle-level measurements. With the application of the DDA method along with standard lead profile measurement, gears with ghost frequency geometry deviations were identified on the production line, making it possible to eliminate defective parts and thereby eliminate ghost frequency noise from a vehicle. Furthermore, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.
Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality
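The defining signature, a spectral line at a non-integer multiple of the mesh order, is easy to visualize on a synthetic signal. The order spectrum below uses an invented 25-tooth gear and a hypothetical ghost component at order 28.3; it illustrates the measurement concept only, not the DDA method itself:

```python
import numpy as np

def order_spectrum(signal, fs, shaft_hz):
    # Amplitude spectrum expressed in shaft orders (frequency / shaft speed).
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) * 2.0 / n
    orders = np.fft.rfftfreq(n, d=1.0 / fs) / shaft_hz
    return orders, amp

# Synthetic vibration: 25-tooth gear at 20 Hz shaft speed -> mesh order 25,
# plus a hypothetical grinding ghost component at order 28.3.
fs, shaft_hz, teeth = 8192, 20.0, 25
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * teeth * shaft_hz * t) + 0.3 * np.sin(2 * np.pi * 28.3 * shaft_hz * t)
orders, amp = order_spectrum(sig, fs, shaft_hz)
print(round(float(orders[np.argmax(amp)]), 1))  # strongest line at mesh order 25.0
```

Any prominent line at a non-integer order, here 28.3, is the kind of excitation the paper traces back to flank geometry deviations with DDA.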
Procedia PDF Downloads 146
2045 A Theoretical Analysis of Air Cooling System Using Thermal Ejector under Variable Generator Pressure
Authors: Mohamed Ouzzane, Mahmoud Bady
Abstract:
Given the energy and environmental context, research is looking into the use of clean and energy-efficient systems in the cooling industry. In this regard, the ejector represents one of the promising solutions. The thermal ejector is a passive component used for thermal compression in refrigeration and cooling systems, usually activated by heat, either waste or solar. The present study introduces a theoretical analysis of a cooling system which uses gas ejector thermal compression. A theoretical model is developed and applied for the design and simulation of the ejector, as well as the whole cooling system. Besides the conservation equations of mass, energy and momentum, the gas dynamic equations, state equations and isentropic relations, together with some appropriate assumptions, are applied to simulate the flow and mixing in the ejector. This model, coupled with the equations of the other components (condenser, evaporator, pump, and generator), is used to analyze profiles of pressure and velocity (Mach number), as well as to evaluate the cycle cooling capacity. A FORTRAN program was developed to carry out the investigation. Properties of refrigerant R134a are calculated using real gas equations. Among many parameters, the generator pressure is thought to be the cornerstone of the cycle, and it is hence considered the key parameter in this investigation. Results show that the generator pressure has a great effect on the ejector and on the whole cooling system. At high generator pressures, strong shock waves are created inside the ejector, which lead to a significant condenser pressure at the ejector exit. Additionally, at higher generator pressures, the designed system can deliver cooling capacity at a high condensing pressure (hot season).
Keywords: air cooling system, refrigeration, thermal ejector, thermal compression
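A representative isentropic relation of the kind used in such models is the one linking the Mach number to the stagnation-to-static pressure ratio. The sketch below assumes an ideal gas with constant specific-heat ratio (the default 1.4 is for air; for R134a a much lower value would apply, and the paper itself uses real-gas equations):

```python
def mach_from_pressure_ratio(p0_over_p, gamma=1.4):
    # Isentropic relation: p0/p = (1 + (g - 1)/2 * M^2) ** (g / (g - 1)),
    # inverted for the Mach number M (ideal gas, constant gamma).
    g = gamma
    return ((p0_over_p ** ((g - 1.0) / g) - 1.0) * 2.0 / (g - 1.0)) ** 0.5

# At the sonic (choked) pressure ratio for gamma = 1.4, M = 1.
print(round(mach_from_pressure_ratio(1.2 ** 3.5), 3))  # 1.0
```

Evaluating such relations section by section along the ejector is how pressure and Mach number profiles like those reported above are produced.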
Procedia PDF Downloads 160
2044 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network
Authors: Hui Wei, Zheng Dong
Abstract:
Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model.
Keywords: biological model, feature extraction, multi-layer neural network, object recognition
Procedia PDF Downloads 542
2043 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as the Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the proposed method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
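The basic object here, a composite density, splices a light-tailed head below the threshold onto a Pareto tail above it. The sketch below leaves the head weight w free for illustration; in the cited composite models, continuity and smoothness conditions at the threshold pin down the parameters instead:

```python
import math

def composite_exp_pareto_pdf(x, lam, alpha, theta, w):
    """Composite density: a truncated Exponential(lam) head on (0, theta]
    carrying probability mass w, and a Pareto(alpha, theta) tail carrying
    mass 1 - w. Parameters are free here for illustration; the literature's
    composite models constrain them via continuity/differentiability."""
    if x <= theta:
        head_cdf = 1.0 - math.exp(-lam * theta)  # normalizer of the truncated head
        return w * lam * math.exp(-lam * x) / head_cdf
    return (1.0 - w) * alpha * theta**alpha / x**(alpha + 1.0)

print(round(composite_exp_pareto_pdf(1.0, 1.0, 2.5, 2.0, 0.7), 4))  # 0.2978
```

A regression extension of the kind proposed above would let parameters such as the Pareto index depend on covariates, with the MLE found by Fisher scoring iterations on the resulting likelihood.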
Procedia PDF Downloads 34
2042 An Intrusion Detection System Based on K-Means, K-Medoids and Support Vector Clustering Using Ensemble
Authors: A. Mohammadpour, Ebrahim Najafi Kajabad, Ghazale Ipakchi
Abstract:
Presently, computer network security is rising in importance, and many studies have been conducted in this field. With the penetration of internet networks into different fields, much needs to be done to provide secure industrial and non-industrial networks. Firewalls, appropriate Intrusion Detection Systems (IDS), encryption protocols for sending and receiving information, and the use of authentication certificates are among the things which should be considered for system security. The aim of the present study is to use the outcome of several algorithms, which causes a decline in IDS errors, in a way that improves system security and prevents additional overload on the system. Finally, regarding the obtained result, we can also detect the number and percentage of further sub-attacks. By running the proposed system, which is based on the use of multi-algorithmic outcomes, and comparing it with the proposed single-algorithm methods, we observed a 78.64% attack detection result, an improvement of 3.14% over the individual algorithms.
Keywords: intrusion detection systems, clustering, k-means, k-medoids, SV clustering, ensemble
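The abstract does not detail how the three clustering outcomes are combined, so the sketch below shows one plausible minimal scheme: a majority vote over the per-connection verdicts of the three algorithms (the verdict labels and inputs are hypothetical):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-connection verdicts, e.g. the labels
    assigned by k-means, k-medoids and SV clustering respectively.
    With three voters a strict majority always exists unless all
    three labels differ."""
    label, _ = Counter(predictions).most_common(1)[0]
    return label

print(ensemble_vote(['attack', 'normal', 'attack']))  # attack
```

Voting is only one option; weighted combinations or stacking over the three algorithms' outputs would fit the same multi-algorithmic framing.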
Procedia PDF Downloads 221
2041 Effect of Particle Aspect Ratio and Shape Factor on Air Flow inside Pulmonary Region
Authors: Pratibha, Jyoti Kori
Abstract:
Particles in industry, harvesting, coal mines, etc., are not necessarily spherical in shape; in general, it is difficult to find a perfectly spherical particle. The prediction of the movement and deposition of non-spherical particles in distinct airway generations is much more difficult than for spherical particles. Moreover, there is considerable variability in deposition between ducts of a particular generation and inside every alveolar duct, since particle concentrations can be much larger than the mean acinar concentration. Consequently, a large number of particles fail to be exhaled during expiration. This study presents a mathematical model for the movement and deposition of these non-spherical particles by using the particle aspect ratio and shape factor. We analyse the pulsatile behavior under sinusoidal wall oscillation, due to the periodic breathing condition, through a non-Darcian porous medium representing the pulmonary region. Since the fluid is viscous and Newtonian, the generalized Navier-Stokes equations in a two-dimensional coordinate system (r, z) are used together with boundary-layer theory. Results are obtained for various values of the Reynolds number, Womersley number, Forchheimer number, particle aspect ratio and shape factor. Numerical computation is done using a finite difference scheme on a very fine mesh in MATLAB. It is found that the overall air velocity is significantly increased by changes in aerodynamic diameter, aspect ratio, alveoli size, Reynolds number and pulse rate, while the velocity decreases with increasing Forchheimer number.
Keywords: deposition, interstitial lung diseases, non-Darcian medium, numerical simulation, shape factor
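The kind of explicit finite-difference marching described above can be shown on a drastically reduced 1D analogue: sinusoidally forced flow between no-slip walls with a linear (Darcy) and a quadratic (Forchheimer) drag term. This Python sketch is not the paper's axisymmetric MATLAB model; the equation, coefficients and grid are illustrative assumptions chosen only to show the scheme and its stability constraint:

```python
import numpy as np

# 1D analogue: u_t = A*cos(w*t) + u_yy - Da*u - F*u*|u|, with u(0)=u(1)=0.
# A*cos(w*t): pulsatile pressure gradient (periodic breathing),
# Da*u: Darcy drag, F*u*|u|: Forchheimer (quadratic, dissipative) drag.
ny = 51
dy = 1.0 / (ny - 1)
dt = 0.2 * dy**2                       # explicit diffusion needs dt < dy^2/2
A, w, Da, F = 10.0, 2 * np.pi, 1.0, 0.5

u = np.zeros(ny)
t = 0.0
for _ in range(int(round(2.0 / dt))):  # march over two forcing periods
    lap = np.zeros(ny)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
    u = u + dt * (A * np.cos(w * t) + lap - Da * u - F * u * np.abs(u))
    u[0] = u[-1] = 0.0                 # no-slip walls
    t += dt

print(round(float(np.abs(u).max()), 3))  # peak velocity stays bounded
```

Writing the quadratic drag as u*|u| keeps it dissipative for flow in either direction; increasing F damps the velocity, mirroring the Forchheimer-number trend reported in the abstract.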
Procedia PDF Downloads 185
2040 Prediction for DC-AC PWM Inverters DC Pulsed Current Sharing from Passive Parallel Battery-Supercapacitor Energy Storage Systems
Authors: Andreas Helwig, John Bell, Wangmo
Abstract:
Hybrid energy storage systems (HESS) are gaining popularity for grid energy storage (ESS), driven by the increasingly dynamic nature of energy demands, which require both high energy and high power density. Given the need for energy storage systems connected via inverters to respond to increasing fluctuations in energy demand, the combination of a lithium iron phosphate (LFP) battery and a supercapacitor (SC) is a particular example of complex electrochemical devices that may benefit each other in pulse-width-modulated DC-to-AC inverter applications. This is due to the SC's ability to respond to instantaneous, high-current demands and the battery's long-term energy delivery. However, there is a knowledge gap concerning the current-sharing mechanism within a HESS supplying a load powered by high-frequency pulse-width modulation (PWM) switching, and hence concerning the mechanism of aging in such a HESS. This paper investigates the prediction of current sharing between the battery and the SC, using various equivalent circuit models for the SC, in the MATLAB/Simulink simulation environment. The findings predict a significant reduction in battery current when the battery is used in a hybrid combination with a supercapacitor, compared with a battery-only model. The impact of the PWM inverter carrier switching frequency on current requirements was analyzed between 500 Hz and 31 kHz. While no clear trend emerged, the models predicted optimal frequencies that minimize the current demands.
Keywords: hybrid energy storage, carrier frequency, PWM switching, equivalent circuit models
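A first-order version of the passive current-sharing mechanism can be reproduced with the simplest possible equivalent circuits: a Thevenin battery (source E, series resistance Rb) in parallel with an ideal supercapacitor plus ESR (C, Rc), feeding a PWM-pulsed current load. This Python sketch stands in for the MATLAB/Simulink models of the abstract, and every component value below is an illustrative assumption, not a fitted parameter:

```python
# Passive parallel LFP battery + supercapacitor feeding a PWM-pulsed load.
# Battery branch:        v = E - Rb*ib
# Supercapacitor branch: v = vc - Rc*ic,  dvc/dt = -ic/C
E, Rb = 3.3, 0.050        # V, ohm  (illustrative LFP cell values)
C, Rc = 100.0, 0.010      # F, ohm  (illustrative SC values)
I_pulse, f_pwm, duty = 20.0, 1000.0, 0.5
dt, t_end = 1e-6, 0.02

vc = E                     # SC precharged to the battery open-circuit voltage
ib_peak = 0.0
for n in range(int(t_end / dt)):
    t = n * dt
    i_load = I_pulse if (t * f_pwm) % 1.0 < duty else 0.0
    # Bus voltage from KCL: (E - v)/Rb + (vc - v)/Rc = i_load
    v = (E / Rb + vc / Rc - i_load) / (1.0 / Rb + 1.0 / Rc)
    ib = (E - v) / Rb      # battery branch current
    ic = (vc - v) / Rc     # supercapacitor branch current
    vc -= ic / C * dt      # capacitor discharges while ic > 0
    ib_peak = max(ib_peak, ib)

print(round(ib_peak, 2))   # well below the 20 A a battery-only supply must deliver
```

With these values the pulse current initially splits in proportion to the branch conductances (roughly Rc/(Rb+Rc) of the load lands on the battery), which is the pulsed-current relief the abstract predicts; higher-fidelity SC equivalent circuits would add RC ladder branches but keep the same KCL structure.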
Procedia PDF Downloads 26
2039 Energy Efficiency Improvement of Excavator with Independent Metering Valve by Continuous Mode Changing Considering Engine Fuel Consumption
Authors: Sang-Wook Lee, So-Yeon Jeon, Min-Gi Cho, Dae-Young Shin, Sung-Ho Hwang
Abstract:
The hydraulic system of an excavator draws its working energy from a hydraulic pump connected to the output shaft of the engine. Recently, main control valves (MCV) composed of several independent metering valves (IMV) have been introduced to improve the energy efficiency of the hydraulic system, and thereby the fuel efficiency of the excavator. An excavator with an IMV has five operating modes depending on the quantity of regeneration flow. In this system, the hydraulic pump is controlled to supply the flow demanded by each mode. Because the regenerated flow supplies energy to the actuators, the hydraulic pump consumes less energy to produce the same motion than a system without flow regeneration. Horsepower control is applied to the hydraulic pump of the excavator to keep the engine running under heavy load, and this control reduces the pump flow. When the excavator performs a complex operation such as loading or unloading soil, the hydraulic pump discharges a small quantity of working fluid at high pressure. During such operation, the engine of the excavator does not run on its optimal operating line (OOL). The engine needs to operate on the OOL to improve fuel efficiency, and by controlling the hydraulic pump the engine can be driven on the OOL. Through continuous mode changing of the IMV, the hydraulic pump is controlled so that the engine runs on the OOL. The simulation results of this study show that the fuel efficiency of an excavator with an IMV can be improved by considering the engine OOL and a continuous mode-changing algorithm.
Keywords: continuous mode changing, engine fuel consumption, excavator, fuel efficiency, IMV
Procedia PDF Downloads 385
2038 Mechanism of pH Sensitive Flocculation for Organic Load and Colour Reduction in Landfill Leachate
Authors: Brayan Daniel Riascos Arteaga, Carlos Costa Perez
Abstract:
Landfill leachate contains an important fraction of humic substances, mainly humic acids (HAs), which often represent more than half of the COD value, especially in liquids derived from composting processes of the organic fraction of solid wastes. We propose in this article a new method of pH-sensitive flocculation for COD and colour reduction in landfill leachate, based on the chemical properties of HAs. Landfill leachate with a high content of humic acids can be efficiently treated by pH-sensitive flocculation at pH 2.0, reducing the COD value by 86.1% and colour by 84.7%. The mechanism of pH-sensitive flocculation is based on the protonation first of phenolic groups and later of carboxylic acid groups in the HA molecules, resulting in a reduction of the Zeta potential value. At pH above neutrality, carboxylic acid and phenolic groups are ionized and the Zeta potential increases in absolute value, keeping HAs in suspension as colloids and obstructing flocculation. Ionized anionic groups (carboxylates) can interact electrostatically with the cations abundant in leachate (site binding), helping to keep HAs in suspension. A simulation of this situation and an idealized visualization of the Zeta potential behavior are described in the paper, and aggregation of molecules by H-bonds is proposed as the main step in the separation of HAs from leachate and the reduction of the COD value in this complex liquid. CHNS analysis, FT-IR spectrometry and UV-VIS spectrophotometry show a chemical element content in the range of natural and commercial HAs, clear aromaticity, and the presence of carboxylic acid and phenolic groups in the precipitate from landfill leachate.
Keywords: landfill leachate, humic acids, COD, chemical treatment, flocculation
Procedia PDF Downloads 71
2037 Effect of Filler Size and Shape on Positive Temperature Coefficient Effect
Authors: Eric Asare, Jamie Evans, Mark Newton, Emiliano Bilotti
Abstract:
Two filler shapes (spheres and flakes) in three different sizes are employed to study the size effect on the PTC effect. The composites are prepared using a mini-extruder with high-density polyethylene (HDPE) as the matrix, and computer modelling is used to fit the experimental results. The percolation threshold decreases with decreasing filler size, and this was observed for the spherical particles as well as the flakes; it is caused by the decrease in interparticle distance with decreasing filler size. The 100 µm particles showed a larger PTC intensity than the 5 µm particles for both the metal-coated glass spheres and flakes. The small particles have a large surface area and tend to agglomerate, which makes it difficult for the conductive network to be disturbed. Increasing the filler content decreased the PTC intensity; this is due to an increase in the conductive network within the polymer matrix, so that more energy is needed to disrupt the network.
Keywords: positive temperature coefficient (PTC) effect, conductive polymer composite (CPC), electrical conductivity
Procedia PDF Downloads 428
2036 Leasing Revisited: Mastering the Digital Transformation with Traditional Financing
Authors: Tobias Huttche, Marco Canipa-Valdez, Corinne Mühlebach
Abstract:
This article discusses the role of leasing in the digital transformation process of companies and the corresponding economic effects. Based on the traditional mechanisms of leasing, it focuses in particular on the benefits of leasing as a financing instrument with regard to the innovation potential of companies. Practical examples demonstrate how leasing can become an integral part of new business models. Especially with regard to the digital transformation and the corresponding investments in know-how and infrastructure, leasing can play an important role. Furthermore, the findings of an empirical survey on the usage of leasing in Switzerland are presented in an international context. The survey not only shows the benefits of leasing against the backdrop of digital transformation but also gives guidance on how other countries can benefit from promoting leasing in their legislation and economy. Based on a simulation model for Switzerland, the economic effect of an increase in leasing volume is calculated; the results again underline the substantial growth potential. This holds true especially for economies where asset-based lending is rarely used because the borrower lacks entrepreneurial or private security (cash-based financing in developing and emerging countries). Overall, the authors found that companies that use leasing are more productive and tend to grow faster than companies that use little or no leasing. The positive effects of leasing on the emerging digital challenges facing companies and entire economies should encourage other countries to facilitate access to leasing as a financing instrument by decreasing the legal, tax and accounting-related requirements in the respective jurisdiction.
Keywords: cash-based financing, digital transformation, financing instruments, growth, innovation, leasing
Procedia PDF Downloads 256
2035 Current Global Education Trends: Issues and Challenges of Physical and Health Education Teaching and Learning in Nigerian Schools
Authors: Bichi Muktar Sani
Abstract:
The philosophy of Physical and Health Education is to develop the academic and professional competency that enables individuals to earn a living and render unique services to society, and to provide a good basis of knowledge and experience that characterizes an educated and fully developed person, through physical activities. With the increase of sedentary activities such as watching television, playing video games, increased computer technology, automation, and the reduction of high school Physical and Health Education schedules, young people are more likely to become overweight and less fit. Physical Education is systematic instruction in sports, training, practice, gymnastics, exercises, and hygiene given as part of a school or college program. Physical and Health Education is the study, practice, and appreciation of the art and science of human movement, and a course in the curriculum that utilizes learning in the cognitive, affective, and psychomotor domains in a play or movement-exploration setting. The paper makes some recommendations on the way forward.
Keywords: issues, challenges, physical education, school
Procedia PDF Downloads 40
2034 Bi-Criteria Vehicle Routing Problem for Possibility Environment
Authors: Bezhan Ghvaberidze
Abstract:
A multiple-criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of the total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed using the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called "promising" routes, which are selected from all admissible closed routes. The careful selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the algorithm DLX. The main objective was to present this new approach for the FVRP for cases where movement on the roads involves difficulties; this variant is called the FVRP under extreme conditions (FVRP-EC). A further aim of this paper was to construct the solution model of the formulated FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. In addition, an approach for the more complex FVRP model with time windows was developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory
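The reduction to a bi-criteria partitioning problem over "promising" routes, solved with an ε-constraint, can be shown on a tiny hand-made instance. The sketch below substitutes brute-force enumeration for the efficient DLX exact-cover search of the paper, and all route sets, travel times and feasibility values are illustrative assumptions:

```python
from itertools import combinations

# Candidate "promising" closed routes:
# (customers covered, expected fuzzy travel time, feasibility of movement).
routes = [
    ({1, 2},    5.0, 0.90),
    ({3, 4, 5}, 9.0, 0.80),
    ({1, 2, 3}, 7.0, 0.60),
    ({4, 5},    6.0, 0.70),
    ({1},       2.0, 0.95),
    ({2, 3},    4.0, 0.85),
]
customers = {1, 2, 3, 4, 5}

def best_partition(routes, customers, eps):
    """epsilon-constraint scalarization: minimise total travel time over
    exact covers whose worst route feasibility is at least eps."""
    best = None
    for r in range(1, len(routes) + 1):
        for combo in combinations(routes, r):
            covered = [c for s, _, _ in combo for c in s]
            # exact cover: every customer once, no overlaps
            if len(covered) == len(customers) and set(covered) == customers:
                if min(f for _, _, f in combo) >= eps:
                    cost = sum(t for _, t, _ in combo)
                    if best is None or cost < best[0]:
                        best = (cost, combo)
    return best

cost, combo = best_partition(routes, customers, eps=0.75)
print(cost, [sorted(s) for s, _, _ in combo])
```

Sweeping eps traces the Pareto front: here eps = 0.65 admits the cheaper three-route partition (total time 12.0, worst feasibility 0.70), while eps = 0.75 forces the more reliable two-route partition (total time 14.0, worst feasibility 0.80).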
Procedia PDF Downloads 485
2033 Research and Application of the Three-Dimensional Visualization Geological Modeling of Mine
Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao
Abstract:
Today's mining industry is advancing gradually in a digital and visual direction. Three-dimensional visualization geological modeling of mines is the digital characterization of a mineral deposit and one of the key technologies of the digital mine. Three-dimensional geological modeling is a technology that combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a three-dimensional computing environment, and it is widely used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac; the difference between the weights produced by two estimation methods, the inverse distance power method and ordinary kriging, is studied; and the ore body volume and reserves are simulated and calculated with both methods. Compared with the actual mine reserves, the results are relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design and so on.
Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
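The inverse distance power estimator referred to above reduces to a few lines at a single block centroid. This Python sketch is not the Surpac workflow; the drillhole composites and coordinates are invented for illustration, and ordinary kriging would replace the purely geometric weights below with variogram-derived ones:

```python
import numpy as np

# Illustrative drillhole composites: (x, y, z, grade).
samples = np.array([
    [ 0.0,  0.0, 0.0, 30.0],
    [10.0,  0.0, 0.0, 34.0],
    [ 0.0, 10.0, 0.0, 28.0],
    [10.0, 10.0, 0.0, 36.0],
])

def idw(point, samples, power=2.0):
    """Inverse distance weighting estimate of the grade at one block centroid."""
    d = np.linalg.norm(samples[:, :3] - point, axis=1)
    if np.any(d == 0):                 # centroid coincides with a sample
        return float(samples[d == 0, 3][0])
    w = 1.0 / d**power                 # weight decays with distance^power
    return float(np.sum(w * samples[:, 3]) / np.sum(w))

# Equidistant from all four samples -> weights equal -> arithmetic mean, 32.0.
print(idw(np.array([5.0, 5.0, 0.0]), samples))
```

Block-model reserves then follow by evaluating the estimator at every block centroid and summing grade times block tonnage; the weight-difference study in the abstract amounts to comparing these weights with the kriging weights block by block.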
Procedia PDF Downloads 70
2032 Literary Translation Human vs Machine: An Essay about Online Translation
Authors: F. L. Bernardo, R. A. S. Zacarias
Abstract:
The ways to translate are manifold, since the textual genres undergoing translation are diverse. In this essay, our goal is to give special attention to the literary genre and to the online translation tool Google Translate (GT), widely used by nonprofessionals and scholars alike, in order to show evidence of the indispensability of human wit in a good translation. Our study is based on a literature review of prominent authors, with emphasis on translation categories. Also highlighting the issue of polysemy in literary translation, we aim to shed light on the translator's craft and the fallible nature of online translation. To illustrate these principles, the methodology consisted of a comparative analysis involving the original English text of Moll Flanders by Daniel Defoe, its online translation given by GT, and a translation into Brazilian Portuguese performed by a human. We proceeded to identify and analyze the degrees of textual equivalence according to the following categories: volume, levels and order. The results attest to the unsuitability of a translation done by a computer connected to the World Wide Web.
Keywords: Google Translate, human translation, literary translation, Moll Flanders
Procedia PDF Downloads 651
2031 Study and Simulation of the Thrust Vectoring in Supersonic Nozzles
Authors: Kbab H, Hamitouche T
Abstract:
In recent years, significant progress has been made in the field of aerospace propulsion and propulsion systems. These developments are associated with efforts to enhance the accuracy of the analysis of aerothermodynamic phenomena in the engine, and this applies in particular to the flow in the nozzles used. One of the most remarkable techniques in this field is thrust vectoring, by means of devices able to orient the thrust vector and control the deflection of the exit jet in the engine nozzle. In the proposed study, we are interested in fluidic thrust vectoring using a secondary injection in the nozzle divergent. This fluid injection causes complex phenomena, such as boundary layer separation, which generates a shock wave in the primary jet upstream of the zone where the primary and secondary jets interact. This causes the deviation of the main flow, and therefore of the thrust vector, with respect to the nozzle axis. In modeling the fluidic thrust vector, various parameters can be used: the Mach number of the primary jet and of the injected fluid, the total pressure ratio, the injection rate, the thickness of the upstream boundary layer, the injector position in the divergent part, and the nozzle geometry are decisive factors in this type of phenomenon. The complexity of the latter challenges researchers to understand the physical phenomena of the turbulent boundary layer encountered in supersonic nozzles, as well as the calculation of its thickness and the friction forces induced on the walls. The present study aims to numerically simulate thrust vectoring by secondary injection using ANSYS Fluent, and then to analyze and validate the results and the performance obtained (deflection angle, efficiency, etc.), which are compared with those obtained by other authors.
Keywords: CD nozzle, TVC, SVC, NPR, SPR, CFD
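Before any CFD, the baseline (unvectored) supersonic state of a CD nozzle is fixed by the quasi-1D isentropic area-Mach relation. The sketch below is not the paper's ANSYS Fluent simulation, just the textbook relation solved on the supersonic branch by bisection, with an illustrative exit-to-throat area ratio:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic quasi-1D area-Mach relation A/A* for a CD nozzle."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def supersonic_mach(a_ratio, gamma=1.4):
    """Supersonic-branch Mach number for a given A/A*, by bisection
    (A/A* is monotonically increasing for M > 1)."""
    lo, hi = 1.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < a_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_exit = supersonic_mach(2.0)      # illustrative exit-to-throat area ratio of 2
print(round(M_exit, 3))            # roughly 2.2 for gamma = 1.4
```

The secondary injection studied in the abstract perturbs this baseline: the induced separation shock deflects the primary jet, so the CFD deflection angles are measured against exactly this unvectored reference state.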
Procedia PDF Downloads 133
2030 Numerical Investigation of Soft Clayey Soil Improved by Soil-Cement Columns under Harmonic Load
Authors: R. Ziaie Moayed, E. Ghanbari Alamouty
Abstract:
Deep soil mixing is a ground improvement method in geotechnical engineering that is widely used in soft soils. This article investigates the consolidation behavior of a soft clay soil improved by soil-cement columns (SCC), through numerical modeling with the Plaxis 2D program. This behavior is simulated under vertical static and cyclic loads applied to the soil surface. The static load problem is the simulation of a physical model test in an axisymmetric condition with a single SCC at the model center. The results of the numerical modeling, consisting of the settlement of the soft soil composite, the stresses on the soft soil and on the column, and the excess pore water pressure in the soil, show good correspondence with the test results. The response of the soft soil composite to the cyclic load in the vertical direction is also compared with the static results. In addition, the effects of two variables are investigated: the cement content used in an SCC, and the area ratio a (the ratio of the diameter of the SCC to the diameter of the composite soil model). The results show that the stress on a column with a higher value of a is lower than the stress on the other columns. A different rate of consolidation and excess pore pressure distribution is observed in the cyclic load problem, and comparing the soil settlement results shows higher compressibility under cyclic loading.
Keywords: area ratio, consolidation behavior, cyclic load, numerical modeling, soil-cement column
Procedia PDF Downloads 151
2029 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks
Abstract:
Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping are running Finite Element (FE) simulations and conducting experiments, which require forming-process expertise and can be time-consuming and expensive during the design of cold stamping tools. Machine learning technologies have been proven and successfully applied to learning complex system behaviours from representative samples, and they exhibit promising potential as supporting design tools for metal forming technologies. This study presents, for the first time, a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created from the U-shape cold bending geometries and the corresponding FE simulation results and is then used to train the CNN surrogate model. The results show that the surrogate model can produce full-field predictions in real time that are nearly indistinguishable from the FE simulation results. The application of CNNs to efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
Keywords: springback, cold stamping, convolutional neural networks, machine learning
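The field-to-field structure of such a CNN surrogate (geometry field in, springback field of the same size out) can be sketched at the architecture level in plain numpy. This is an untrained, shape-level illustration only: a 1D profile stands in for the U-shape geometry, the weights are random, and the trained model, dataset and FE labels of the study are not reproduced:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """'Same'-padded 1D convolution: x is (channels, length),
    kernels is (out_ch, in_ch, k), bias is (out_ch,)."""
    out_ch, in_ch, k = kernels.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((out_ch, x.shape[1]))
    for o in range(out_ch):
        for i in range(in_ch):
            for j in range(x.shape[1]):
                out[o, j] += np.dot(xp[i, j:j + k], kernels[o, i])
        out[o] += bias[o]
    return out

rng = np.random.default_rng(0)
profile = rng.normal(size=(1, 64))        # stand-in U-shape geometry profile

# Sanity check: an identity kernel reproduces the input field exactly.
ident = np.zeros((1, 1, 5)); ident[0, 0, 2] = 1.0
assert np.allclose(conv1d(profile, ident, np.zeros(1)), profile)

# Untrained two-layer fully convolutional surrogate: because every layer uses
# 'same' padding, the output springback field matches the input length.
w1, b1 = rng.normal(size=(8, 1, 5)) * 0.1, np.zeros(8)
w2, b2 = rng.normal(size=(1, 8, 5)) * 0.1, np.zeros(1)
field = conv1d(np.maximum(conv1d(profile, w1, b1), 0.0), w2, b2)
print(field.shape)  # (1, 64): one predicted springback value per input point
```

Training such a network against FE-computed springback fields is what turns this shape-preserving map into the near-indistinguishable full-field predictor reported in the abstract.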
Procedia PDF Downloads 149