Search results for: process simulation
16868 Study on Planning of Smart GRID Using Landscape Ecology
Authors: Sunglim Lee, Susumu Fujii, Koji Okamura
Abstract:
Smart grid is a new approach to the electric power grid that uses information and communications technology to control it. Smart grid provides real-time control of the electric power grid, controlling the direction or timing of power flow. Control devices are installed on the power lines of the electric power grid to implement smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce this loss, the surplus power may instead be sent to a control device, which then routes it to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to determine the optimal spacing between the control devices, where the power sharing area (the area covered by one control device) is kept small to avoid power loss, yet big enough that no surplus power is wasted. To achieve this goal, a simulation using the landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in the area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total roof area shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, each representing producer, consumer, or neither. We start with a mosaic map of 100 m grid size, and the grid size is grown until no red grid remains. One control device is installed on each grid, so that the grid is the area which the control device covers. As a result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power for the sample area.
Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation
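A minimal sketch of the grid-growing step described above follows, assuming the production and consumption maps are given as 2D arrays on the base 100 m lattice. The code is illustrative rather than the authors' implementation, and it interprets a "red" grid as a block whose consumption exceeds its production:

```python
import numpy as np

def optimal_spacing(production, consumption, base=100, max_multiple=10):
    """Grow the control-device grid until no aggregated block runs a deficit.

    production, consumption: 2D arrays of per-cell energy on a `base`-metre
    lattice. Returns the smallest spacing (metres) with no "red" block.
    """
    h, w = production.shape
    for k in range(1, max_multiple + 1):
        if h % k or w % k:
            continue                     # only block sizes that tile the map
        def block_sum(a):
            return a.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
        if np.all(block_sum(production) >= block_sum(consumption)):
            return k * base              # every block is self-sufficient
    return None

# Toy example: random 20x20 map of kWh values on a 100 m lattice.
rng = np.random.default_rng(0)
prod = rng.uniform(0, 10, (20, 20))
cons = rng.uniform(0, 8, (20, 20))
print(optimal_spacing(prod, cons))
```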
Procedia PDF Downloads 444
16867 Multiple Identity Construction among Multilingual Minorities: A Quantitative Sociolinguistic Case Study
Authors: Stefanie Siebenhütter
Abstract:
This paper aims to reveal the criteria involved in the process of identity formation among multilingual minority-language speakers in Northeastern Thailand and in the capital, Bangkok. Using sociolinguistic interviews and questionnaires, we ask which factors are important for speakers and how they define their identity through their social as well as linguistic interactions. One key question to answer is how sociolinguistic factors may reinforce or diminish the process of forming the social identity of multilingual minority speakers. However, the motivation for specific language use is rarely overt to the speakers themselves, as well as to others. Therefore, the intentions involved in the process of identity construction are approached by scrutinizing speakers’ behavior and attitudes. Combining methods used in sociolinguistics and social psychology allows uncovering the tools for identity construction that ethnic Kui speakers use to position themselves within a multilingual setting. By giving an overview of minority speakers’ language use in the context of the specific border-near multilingual situation and asking how speakers construct identity within this spatial context, the results exhibit some of the subtle and mostly unconscious criteria involved in the ongoing process of identity construction.
Keywords: social identity, identity construction, minority language, multilingualism, social networks, social boundaries
Procedia PDF Downloads 267
16866 A New Approach to the Boom Welding Technique by Determining Seam Profile Tracking
Authors: Muciz Özcan, Mustafa Sacid Endiz, Veysel Alver
Abstract:
In this paper, we present a new approach to boom welding for mobile crane manufacturing, implementing a new method to achieve homogeneous welding quality and reduced energy usage during boom production. We aim to realize the same welding quality in every region of the boom during the manufacturing process and to detect, using laser sensors, possible welding errors so that they can be eliminated. We determine the position of the welding region directly through our system and, with the help of the welding oscillator, we are able to perform proper boom welding. Errors that may occur in the welding process can be observed by monitoring and eliminated by an operator. The major modification in the production of crane booms will be their form. Although more than one weld is conventionally required to perform this process, with the suggested concept only one weld is sufficient, which is more energy- and environment-friendly. Consequently, as only one weld is needed to manufacture the boom, its quality becomes all the more essential. To ensure the welding quality, a welding manipulator was designed and fabricated. By using this welding manipulator, the risks to the operator and the surroundings from dangerous gases formed during the welding process are diminished as much as possible.
Keywords: boom welding, seam tracking, energy saving, global warming
Procedia PDF Downloads 346
16865 Solving Weighted Number of Operation Plus Processing Time Due-Date Assignment, Weighted Scheduling and Process Planning Integration Problem Using Genetic and Simulated Annealing Search Methods
Authors: Halil Ibrahim Demir, Caner Erden, Mumtaz Ipek, Ozer Uygun
Abstract:
Traditionally, the three important manufacturing functions of process planning, scheduling, and due-date assignment are performed separately and sequentially. For a couple of decades, hundreds of studies have been done on integrated process planning and scheduling problems, and numerous works have addressed scheduling with due-date assignment, but unfortunately the integration of these three important functions has not been adequately addressed. Here, the integration of these three important functions is studied using genetic, random-genetic hybrid, simulated annealing, random-simulated annealing hybrid, and random search techniques. The importance of integrating these three functions, and the power of meta-heuristics and of hybrid heuristics, are studied as well.
Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic search, simulated annealing, hybrid meta-heuristics
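As a rough illustration of the simulated annealing component, the sketch below shows a generic SA loop minimizing the cost of a job sequence. It is a minimal stand-in, not the authors' implementation: the cost function, neighborhood move, and cooling schedule are all assumed for demonstration.

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, t0=100.0, cooling=0.95, iters=2000):
    """Generic SA: accept a worse candidate with probability exp(-delta/T)."""
    current, c_cur = initial, cost(initial)
    best, c_best = current, c_cur
    t = t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - c_cur
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            current, c_cur = cand, c_cur + delta
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling
    return best, c_best

# Toy stand-in: order jobs so weighted completion times stay small.
times = [4, 2, 7, 3, 5]
weights = [1, 3, 2, 2, 1]

def cost(seq):
    done, total = 0, 0
    for j in seq:
        done += times[j]
        total += weights[j] * done      # weighted completion time
    return total

def swap(seq):
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

print(simulated_annealing(cost, tuple(range(5)), swap))
```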
Procedia PDF Downloads 469
16864 Finite Element Simulation of an Offshore Monopile Subjected to Cyclic Loading Using Hypoplasticity with Intergranular Strain Anisotropy (ISA) for the Soil
Authors: William Fuentes, Melany Gil
Abstract:
Numerical simulations of offshore wind turbines (OWTs) in shallow waters demand sophisticated models that consider the cyclic nature of the environmental loads. For the case of an OWT founded on sands, rapid loading may cause a reduction of the effective stress of the soil surrounding the structure. This eventually leads to settlement, tilting, or other issues affecting serviceability. In this work, a 3D FE model of an OWT founded on sand is constructed and analyzed. Cyclic loading with different histories is applied at certain points of the tower to simulate some environmental forces. The mechanical behavior of the soil is simulated through the recently proposed ISA-hypoplastic model for sands. Intergranular Strain Anisotropy (ISA) can be interpreted as an enhancement of the intergranular strain theory, often used to extend hypoplastic formulations for the simulation of cyclic loading. In contrast to previous formulations, the proposed constitutive model introduces an elastic range for small strain amplitudes, includes the cyclic mobility effect, and is able to capture the cyclic behavior of sands under a larger number of cycles. The model performance is carefully evaluated on the FE dynamic analysis of the OWT.
Keywords: offshore wind turbine, monopile, ISA, hypoplasticity
Procedia PDF Downloads 246
16863 Using Power Flow Analysis for Understanding UPQC’s Behaviors
Authors: O. Abdelkhalek, A. Naimi, M. Rami, M. N. Tandjaoui, A. Kechich
Abstract:
This paper deals with the active and reactive power flow analysis inside the unified power quality conditioner (UPQC) in several cases. The UPQC is a combination of shunt and series active power filters (APF). It is one of the best solutions for mitigating voltage sag and swell problems on the distribution network. This analysis can provide helpful information for understanding the interaction between the series filter, the shunt filter, the DC bus link, and the electrical network. The mathematical analysis is based on the active and reactive power flow through the shunt and series active power filters. The series APF can absorb or deliver active power to mitigate a voltage swell or sag; in both cases it absorbs a small quantity of reactive power, whereas the shunt APF absorbs or releases active power to stabilize the storage capacitor’s voltage as well as to correct the power factor. The voltage sag and voltage swell are usually interpreted through the DC bus voltage curves. These two phenomena are introduced in this paper with a new interpretation based on the active and reactive power flow analysis inside the UPQC. To simplify this study, a linear load is assumed in the digital simulation. Simulation results are presented to confirm the analysis.
Keywords: UPQC, power flow analysis, shunt filter, series filter
Procedia PDF Downloads 572
16862 Mechanical Properties of Die-Cast Nonflammable Mg Alloy
Authors: Myoung-Gon Yoon, Jung-Ho Moon, Tae Kwon Ha
Abstract:
Tensile specimens of nonflammable AZ91D Mg alloy were fabricated in this study via a cold chamber die-casting process. The dimensions of the tensile specimens were 25 mm in length, 4 mm in width, and 0.8 or 3.0 mm in thickness. Microstructure observation was conducted before and after tensile tests at room temperature. In the die-casting process, various injection distances from 150 to 260 mm were employed to obtain optimum process conditions. The distribution of the Al₁₂Mg₁₇ phase was the key factor determining the mechanical properties of the die-cast Mg alloy. Specimens 3.0 mm thick showed mechanical properties superior to those of specimens 0.8 mm thick. A closed network of Al₁₂Mg₁₇ phase along the grain boundaries was found to be detrimental to the mechanical properties of the die-cast Mg alloy.
Keywords: non-flammable magnesium alloy, AZ91D, die-casting, microstructure, mechanical properties
Procedia PDF Downloads 308
16861 3D Numerical Investigation of Asphalt Pavements Behaviour Using Infinite Elements
Authors: K. Sandjak, B. Tiliouine
Abstract:
This article presents the main results of a three-dimensional (3-D) numerical investigation of asphalt pavement structure behaviour using a coupled Finite Element-Mapped Infinite Element (FE-MIE) model. The validation and numerical performance of this model are assessed by confronting critical pavement responses with Burmister’s solution and FEM simulation results for multi-layered elastic structures. The coupled model is then efficiently utilised to perform 3-D simulations of a typical asphalt pavement structure in order to investigate the impact of two tire configurations (conventional dual and new-generation wide-base tires) on critical pavement response parameters. The numerical results obtained show the effectiveness and accuracy of the coupled (FE-MIE) model. In addition, the simulation results indicate that, compared with the conventional dual tire assembly, the single wide-base tire causes slightly greater asphalt fatigue cracking and subgrade rutting potentials, and can thus be utilised in view of its potential to provide numerous mechanical, economic, and environmental benefits.
Keywords: 3-D numerical investigation, asphalt pavements, dual and wide base tires, infinite elements
Procedia PDF Downloads 215
16860 Sustainable Hydrogen Generation via Gasification of Pig Hair Biowaste with NiO/Al₂O₃ Catalysts
Authors: Jamshid Hussain, Kuen Song Lin
Abstract:
Over one thousand tons of pig hair biowaste (PHB) are produced yearly in Taiwan. The improper disposal of PHB can have a negative impact on the environment, consequently contributing to the spread of diseases. The treatment of PHB has become a major environmental and economic challenge. Innovative treatments must be developed because of the heavy metal and sulfur content of PHB. Like most organic materials, PHB is composed of many organic volatiles that contain large amounts of hydrogen. Hydrogen gas can be effectively produced by the catalytic gasification of PHB using a laboratory-scale fixed-bed gasifier, employing a 15 wt% NiO/Al₂O₃ catalyst at 753–913 K. The derived kinetic parameters were obtained and refined using simulation calculations. FE–SEM microphotographs showed that the NiO/Al₂O₃ catalyst particles are spherical or irregularly shaped with diameters of 10–20 nm. HR–TEM showed that the fresh Ni particles were evenly dispersed and uniform within the microstructure of the Al₂O₃ support. The sizes of the NiO nanoparticles were vital in determining catalyst activity. The pre-edge XANES spectra of the NiO/Al₂O₃ catalysts exhibited weak absorbance for the 1s to 3d transition, which is forbidden by the selection rule for an ideal octahedral symmetry. Similarly, the populations of Ni(II) and Ni(0) on the Al₂O₃ support are proportional to the strength of the 1s to 4pxy transition. The weak shoulder at 8329–8334 eV and a strong feature at 8345–8353 eV were ascribed to the 1s to 4pxy transition, which suggested the presence of NiO species on the Al₂O₃ support during PHB catalytic gasification. As determined by the XANES analyses, Ni(II)→Ni(0) reduction was mostly observed. The oxidation of PHB on the NiO/Al₂O₃ surface may have resulted in Ni(0) and the formation of tar during the gasification process. The EXAFS spectra revealed Ni atoms with Ni–Ni/Ni–O bonds. The Ni–O bonding proved that the produced syngas was unable to reduce NiO to Ni(0) completely. The weakness of the Ni–Ni bonds may have been caused by the highly dispersed Ni in the Al₂O₃ support. The central Ni atoms have Ni–O (2.01 Å) and Ni–Ni (2.34 Å) bond distances in the fresh NiO/Al₂O₃ catalyst. The PHB was converted into hydrogen-rich syngas (CO + H₂, >89.8% dry basis). When PHB (250 kg h⁻¹) was catalytically gasified at 753–913 K, syngas was produced with approximately 5.45 × 10⁵ kcal h⁻¹ of heat recovery at 76.5%–83.5% cold gas efficiency. The simulation of the pilot-scale PHB catalytic gasification demonstrated that the system could provide hydrogen (purity > 99.99%) and generate electricity for an internal combustion engine of 100 kW and a proton exchange membrane fuel cell (PEMFC) of 175 kW. The projected payback for a PHB catalytic gasification plant with a capacity of 10 or 20 TPD (tons per day) was around 3.2 or 2.5 years, respectively.
Keywords: pig hair biowaste, catalytic gasification, hydrogen production, PEMFC, resource recovery
Procedia PDF Downloads 13
16859 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of the drug required to reach therapeutic levels is reduced while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles onto whose surface anti-tumor agents are loaded. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles in biomedical magnetic-driving applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI’s main magnet static field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and the wall and the Stokes drag force on each particle are considered, while only spherical particles are used in this study. In addition, the gravitational force and the force due to buoyancy are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles’ motion. To find the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory is inserted into the computational geometry, along which the particles are to be navigated. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80–90%. On the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
Keywords: artery, drug, nanoparticles, navigation
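The ask-evaluate-tell loop described above can be illustrated with a simplified evolution strategy. The sketch below is a stand-in for the CMA-ES/OpenFOAM coupling, under the assumption that objective(g) runs one particle simulation with gradient field g and returns the mean distance of the particles from the desired trajectory:

```python
import numpy as np

def optimize_gradient_field(objective, dim=3, pop=16, sigma=0.5, generations=50):
    """Minimal (mu, lambda) evolution strategy standing in for CMA-ES."""
    mean = np.zeros(dim)                      # initial gradient-field guess
    mu = pop // 4                             # number of parents kept
    for _ in range(generations):
        samples = mean + sigma * np.random.randn(pop, dim)   # "ask"
        scores = np.array([objective(g) for g in samples])   # run simulations
        elite = samples[np.argsort(scores)[:mu]]             # "tell": keep best
        mean = elite.mean(axis=0)
        sigma *= 0.97                         # crude step-size decay
    return mean

# Toy objective: distance of the field vector from a known optimum.
target = np.array([0.2, -0.1, 0.05])
best = optimize_gradient_field(lambda g: np.linalg.norm(g - target))
print(best)
```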
Procedia PDF Downloads 107
16858 Thermodynamic Analysis of a Multi-Generation Plant Driven by Pine Sawdust as Primary Fuel
Authors: Behzad Panahirad, Uğur Atikol
Abstract:
The current study is based on a combined heat and power system with multiple objectives, driven by biomass. The system consists of a combustion chamber (CC), a single-effect absorption cooling system (SEACS), an air conditioning unit (AC), a reheat steam Rankine cycle (RRC), an organic Rankine cycle (ORC), and an electrolyzer. The purpose of this system is to produce hydrogen, electricity, heat, cooling, and air conditioning. All the simulations were performed with Engineering Equation Solver (EES) software. Pine sawdust is the selected biofuel for the combustion process. The overall utilization factor (εₑₙ) and exergetic efficiency (ψₑₓ) were calculated to be 2.096 and 24.03%, respectively. The renewable and environmental impact analysis performed indicated a sustainability index (SI) of 1.316 and a specific CO₂ emission of 353.8 kg/MWh. The parametric study is conducted based on the variation of the ambient (sink) temperature, the biofuel mass flow rate, and the boiler outlet temperatures. The parametric simulation showed that an increase in biofuel mass flow rate has a positive effect on the sustainability of the system.
Keywords: biomass, exergy assessment, multi-objective plant, CO₂ emission, irreversibility
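For reference, a common definition of the overall energetic utilization factor of a multi-generation plant, which can exceed unity because heating, cooling, and hydrogen outputs are counted alongside net work, is shown below; the exact output terms used in this study are assumed:

```latex
\varepsilon_{en} \;=\;
\frac{\dot{W}_{net} + \dot{Q}_{heating} + \dot{Q}_{cooling} + \dot{m}_{H_2}\,LHV_{H_2}}
     {\dot{m}_{fuel}\,LHV_{fuel}}
```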
Procedia PDF Downloads 170
16857 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors
Authors: Buket Metin
Abstract:
Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as transportation of materials within the construction site, formation and preparation of materials on-site and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, environmental consequences of the construction process should be tackled by focusing on construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing on-site performances of subcontractors based on the construction technology options which they can supply. First, construction technologies, which constitute information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to environmental priorities of decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide an insight to decision-maker(s) about the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
Keywords: construction process, construction technology, decision making, environmental performance, subcontractor
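The TOPSIS step lends itself to a compact implementation. The following is a minimal sketch of standard TOPSIS with externally supplied (e.g., FAHP-derived) weights; the decision matrix, weights, and criterion directions are illustrative only, not data from the paper:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).

    benefit[j] is True when larger values of criterion j are better.
    Returns the closeness coefficient: higher means closer to the ideal.
    """
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = m * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Illustrative data: 3 subcontractors scored on 4 environmental criteria.
scores = np.array([[7.0, 5.0, 8.0, 120.0],
                   [6.0, 8.0, 6.0, 90.0],
                   [9.0, 6.0, 7.0, 150.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])              # e.g., from FAHP
benefit = np.array([True, True, True, False])         # last: resource use, lower is better
print(topsis(scores, weights, benefit))
```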
Procedia PDF Downloads 247
16856 Finite Element Modeling and Mechanical Properties of Aluminum Processed by Equal Channel Angular Pressing Process
Authors: F. Al-Mufadi, F. Djavanroodi
Abstract:
During the last decade, ultrafine-grained (UFG) and nano-structured (NS) materials have experienced rapid development. In this research work, finite element analysis has been carried out to investigate the plastic strain distribution in the equal channel angular pressing (ECAP) process. The magnitudes of the standard deviation (S.D.) and the inhomogeneity index (Ci) were compared for different ECAP passes. Verification of a three-dimensional finite element model was performed with experimental tests. Finally, the mechanical properties, including impact energy, of ultrafine-grained commercially pure aluminum produced by this severe plastic deformation method have been examined. For this aim, an equal channel angular pressing die with a channel angle of 90°, an outer corner angle of 20°, and a channel diameter of 20 mm was designed and manufactured. Commercially pure aluminum billets were ECAPed up to four passes by route BC at ambient temperature. The results indicated a great improvement in hardness, yield strength, and ultimate tensile strength after the ECAP process. The hardness reaches 67 HV, from 21 HV, after the final stage of the process. Also, enhancements of about 330% and 285% in the YS and UTS values were obtained after the fourth pass compared to the as-received condition, respectively. On the other hand, the elongation to failure and the impact energy were reduced by 23% and 50%, respectively, after imposing four passes of the ECAP process.
Keywords: SPD, ECAP, FEM, pure Al, mechanical properties
Procedia PDF Downloads 179
16855 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co²⁺-Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network
Authors: Prateeksha Mahamallik, Anjali Pal
Abstract:
In the present study, Co(II)-adsolubilized surfactant-modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on the Co-SMA surface by a visible-light photo-Fenton process. The entire reaction proceeded on the solid surface, as MB was embedded on the Co-SMA surface. The reaction followed zero-order kinetics. Response surface methodology (RSM) and an artificial neural network (ANN) were used for modeling the decolorization of MB by the photo-Fenton process as a function of the dose of Co-SMA (10, 20 and 30 g/L), the initial concentration of MB (10, 20 and 30 mg/L), the concentration of H₂O₂ (174.4, 348.8 and 523.2 mM), and the reaction time (30, 45 and 60 min). The prediction capabilities of both methodologies (RSM and ANN) were compared on the basis of the correlation coefficient (R²), root mean square error (RMSE), standard error of prediction (SEP), and relative percent deviation (RPD). Due to its lower RMSE (1.27), SEP (2.06), and RPD (1.17) and higher R² (0.9966), ANN proved to be more accurate than RSM in predicting the decolorization efficiency.
Keywords: adsolubilization, artificial neural network, methylene blue, photo-Fenton process, response surface methodology
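The comparison metrics can be reproduced in a few lines. A minimal sketch follows; note that SEP and RPD are defined in more than one way in the literature, so the formulas below are one common choice and may differ in detail from those used in the paper:

```python
import numpy as np

def prediction_metrics(y, yhat):
    """R2, RMSE, SEP and RPD for observed y vs. predicted yhat."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    resid = y - yhat
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    sep = 100.0 * rmse / y.mean()                        # standard error of prediction, %
    rpd = 100.0 / len(y) * np.sum(np.abs(resid) / y)     # relative percent deviation
    return {"R2": r2, "RMSE": rmse, "SEP": sep, "RPD": rpd}

# Toy check with near-perfect predictions.
obs = [92.0, 85.0, 78.0, 96.0]
pred = [91.0, 86.5, 77.2, 95.0]
print(prediction_metrics(obs, pred))
```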
Procedia PDF Downloads 254
16854 Set-point Performance Evaluation of Robust Back-Stepping Control Design for a Nonlinear Electro-Hydraulic Servo System
Authors: Maria Ahmadnezhad, Seyedgharani Ghoreishi
Abstract:
Electro-hydraulic servo (EHS) systems have been used in industry in a wide number of applications. Their dynamics are highly nonlinear and also exhibit a large extent of model uncertainties and external disturbances. In this thesis, a robust back-stepping control (RBSC) scheme is proposed to overcome the problem of disturbances and system uncertainties effectively and to improve the set-point performance of EHS systems. In order to implement the proposed control scheme, the system uncertainties in EHS systems are considered as the total leakage coefficient and the effective oil volume. In addition, in order to obtain the virtual controls for stabilizing the system, the update rule for the system uncertainty term is derived from the Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, a computer simulation using MATLAB/Simulink software is executed. From the computer simulation, it was found that the RBSC system produces the desired set-point performance and is robust to the disturbances and system uncertainties of EHS systems.
Keywords: electro-hydraulic servo system, back-stepping control, robust back-stepping control, Lyapunov redesign
Procedia PDF Downloads 1004
16853 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra, where they would otherwise superimpose within a single energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results of both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
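For context, the experimental FEP efficiency against which the simulated values are compared is conventionally obtained from a calibration source as

```latex
\varepsilon_{FEP}(E) \;=\; \frac{N_{peak}(E)}{A \cdot I_{\gamma}(E) \cdot t_{live}}
```

where N_peak is the net peak area at energy E, A the source activity, I_γ the photon emission probability, and t_live the live measurement time.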
Procedia PDF Downloads 120
16852 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology
Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon
Abstract:
There are few effective guidelines on the selection of design parameters for springback of advanced high-strength steel sheet metal in the U-channel cold forming process. This paper presents the development of a predictive model for springback in the U-channel process on advanced high-strength steel sheet employing Response Surface Methodology (RSM). The experiments were performed on dual-phase steel sheet, DP590, in the U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp), and rolling direction (R), as input parameters at two levels by applying a full factorial design (2⁴). From a statistical analysis of variance (ANOVA), the results showed that blank holder force (BHF), clearance (C), and punch travel (Tp) display a significant effect on the springback of the flange angle (β2) and the wall opening angle (β1), while the rolling direction (R) factor is insignificant. The significant parameters were optimized in order to reduce the springback behavior using the Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed. The effect of the individual parameters and their response was also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
Keywords: advanced high strength steel, U-channel process, springback, design of experiment, optimization, response surface methodology (RSM)
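The 2⁴ full factorial design underlying the screening stage is easy to generate programmatically. The sketch below enumerates the 16 coded runs; the actual low/high physical levels of BHF, C, Tp, and R are not given here and would be substituted from the experimental plan:

```python
from itertools import product

# Coded two-level full factorial: 2^4 = 16 runs for the four input factors.
factors = ["BHF", "C", "Tp", "R"]
design = [dict(zip(factors, levels)) for levels in product((-1, +1), repeat=4)]

print(len(design))   # 16
print(design[0])     # {'BHF': -1, 'C': -1, 'Tp': -1, 'R': -1}
print(design[-1])    # {'BHF': 1, 'C': 1, 'Tp': 1, 'R': 1}
```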
Procedia PDF Downloads 541
16851 Near Shore Wave Manipulation for Electricity Generation
Authors: K. D. R. Jagath-Kumara, D. D. Dias
Abstract:
The sea waves carry thousands of GWs of power globally. Although there are a number of different approaches to harnessing offshore energy, they are likely to be expensive, practically challenging, and vulnerable to storms. Therefore, this paper considers using near-shore waves for generating mechanical and electrical power. It introduces two new approaches, wave manipulation and the use of a variable duct turbine, for intercepting very wide wave fronts and for coping with fluctuations of the wave height and the sea level, respectively. The first approach effectively allows capturing much more energy, yet with a much narrower turbine rotor. The second approach allows using a rotor with a smaller radius that captures the energy of higher wave fronts at higher sea levels while preventing it from totally submerging. To illustrate the effectiveness of the approach, the paper contains a description and the simulation results of a scale model of a wave manipulator. It then includes the results of testing a physical model of the manipulator and a single-duct, axial-flow turbine in a wave flume in the laboratory. The paper also includes comparisons of theoretical predictions, simulation results, and wave flume tests with respect to the incident energy, loss in wave manipulation, minimal loss, brake torque, and angular velocity.
Keywords: near-shore sea waves, renewable energy, wave energy conversion, wave manipulation
Procedia PDF Downloads 483
16850 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model
Authors: Zina Benouaret, Djamil Aissani
Abstract:
In this work, we introduce the qualitative and quantitative concepts of the strong stability method in a risk process modeling two lines of business of the same insurance company, or an insurance and a reinsurance company, which divide between them both claims and premiums in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is applied to estimate the error made when approximating a model with perturbed parameters by the considered model. In the stability bound obtained, all constants are written explicitly.
Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis
Procedia PDF Downloads 249
16849 The Design of Intelligent Passenger Organization System for Metro Stations Based on Anylogic
Authors: Cheng Zeng, Xia Luo
Abstract:
Passenger organization has always been an essential part of China's metro operation and management. Facing massive passenger flows, stations need to improve their degree of intelligence and automation through an appropriate integrated system. Based on the existing integrated supervisory control system (ISCS) and simulation software (Anylogic), this paper designs an intelligent passenger organization system (IPOS) for metro stations. Its primary functions include passenger information acquisition, data processing and computing, visualization management, decision recommendations, and decision response based on interlocking equipment. For this purpose, the logical structure and the intelligent algorithms employed are specifically devised. In addition, the structure diagram of the information acquisition and application module, the application of Anylogic, and the function process of the case library are all given by this research. Based on the secondary development of Anylogic and existing technologies such as video recognition, the IPOS is expected to improve the response speed and handling capacity of metro stations in the face of emergent passenger flows.
Keywords: anylogic software, decision-making support system, intellectualization, ISCS, passenger organization
Procedia PDF Downloads 176
16848 Use of Satellite Imaging to Understand Earth’s Surface Features: A Roadmap
Authors: Sabri Serkan Gulluoglu
Abstract:
With Geographic Information Systems (GIS), information about all natural and artificial resources on the earth can be obtained by taking advantage of satellite images acquired through remote sensing techniques. However, the determination of unknown resources, the mapping of their distribution, and the efficient evaluation of resources may not be possible with the original image. For this reason, some processing steps are needed, such as transformation, pre-processing, image enhancement, and classification, to provide the most accurate assessment numerically and visually. Many studies that present the phases of obtaining and processing satellite images have been examined in the literature study. The research showed that if the process steps on this subject are laid out as a common whole, the necessary and possible future studies may progress rapidly.
Keywords: remote sensing, satellite imaging, GIS, computer science, information
Procedia PDF Downloads 318
16847 Progressive Type-I Interval Censoring with Binomial Removal: Estimation and Its Properties
Authors: Sonal Budhiraja, Biswabrata Pradhan
Abstract:
This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on a test at time T0 = 0, with k inspections at pre-specified times T1 < T2 < ... < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, ..., k, with pk = 1. In this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval
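The censoring scheme is straightforward to simulate, which is also how the performance of the MLEs and the tolerance interval is typically assessed. The sketch below generates one realization under an assumed Weibull lifetime model; the parameter values are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def progressive_type1_interval(n, shape, scale, times, p):
    """One sample path of progressive Type-I interval censoring with
    binomial removal. Returns failures per interval and removals per time."""
    alive = scale * rng.weibull(shape, size=n)     # latent Weibull lifetimes
    failures, removals, t_prev = [], [], 0.0
    for t_i, p_i in zip(times, p):
        failures.append(int(np.sum((alive > t_prev) & (alive <= t_i))))
        alive = alive[alive > t_i]                 # survivors S_i at time T_i
        r_i = alive.size if p_i >= 1.0 else rng.binomial(alive.size, p_i)
        keep = rng.permutation(alive.size)[r_i:]   # drop r_i units at random
        alive = alive[keep]
        removals.append(int(r_i))
        t_prev = t_i
    return failures, removals

# k = 4 inspections; everything remaining is removed at T_k (p_k = 1).
print(progressive_type1_interval(100, 1.5, 2.0,
                                 times=[0.5, 1.0, 1.5, 2.0],
                                 p=[0.2, 0.2, 0.2, 1.0]))
```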
Procedia PDF Downloads 250
16846 Investigation of the Properties of Biochar Obtained by Dry and Wet Torrefaction in a Fixed and in a Fluidized Bed
Authors: Natalia Muratova, Dmitry Klimov, Rafail Isemin, Sergey Kuzmin, Aleksandr Mikhalev, Oleg Milovanov
Abstract:
We investigated the processing of poultry litter into biochar using dry torrefaction (DT) in a fixed and in a fluidized bed of quartz sand blown with nitrogen, as well as wet torrefaction (WT) in a fluidized bed in a medium of water steam, at a temperature of 300 °C. The torrefaction technology affects the duration of the heat treatment process and the characteristics of the biochar: the process of separating CO₂, CO, H₂, and CH₄ from a portion of fresh poultry litter during torrefaction is completed after 2400 seconds in a fixed bed, but after 480 seconds in a fluidized bed. During WT in a fluidized bed of quartz sand, this process ends 840 seconds after loading a portion of fresh litter, but in a fluidized bed of litter particles previously subjected to torrefaction, the process ends in 350–450 seconds. In terms of the (H/C) and (O/C) ratios, the litter obtained after DT and WT treatment corresponds to lignite. WT in a fluidized bed yields biochar in which the specific pore area is two times larger than that of biochar obtained after DT in a fluidized bed. Biochar obtained from the treatment of poultry litter in a fluidized bed using the DT or WT method is recommended for use not only as a biofuel but also as an adsorbent or a soil fertilizer.
Keywords: biochar, poultry litter, dry and wet torrefaction, fixed bed, fluidized bed
Procedia PDF Downloads 157
16845 Coal Preparation Plant: Technology Overview and New Adaptations
Authors: Amit Kumar Sinha
Abstract:
A coal preparation plant typically operates with multiple beneficiation circuits to process the individual size fractions of coal obtained from the mine, so that the targeted overall plant efficiency in terms of yield and ash is achieved. Conventional coal beneficiation plants in India and overseas generally operate with two methods of processing: coarse beneficiation with treatment in dense medium cyclones or baths, and fines beneficiation with treatment in flotation cells. This paper seeks to address the proven application of an intermediate circuit, alongside the coarse and fines circuits, in the Jamadoba New Coal Preparation Plant of capacity 2 Mt/y, to treat −0.5 mm +0.25 mm size particles in a reflux classifier. Previously, this size fraction was treated directly in the flotation cell, which had operational and metallurgical limitations that are discussed briefly in this paper. The paper also details test work performed on representative samples from TSL coal washeries to determine the top size of the intermediate and fines circuits, and discusses the overlapping process of the intermediate circuit and how it is suitable, process-wise, for beneficiating misplaced particles from the coarse and fines circuits. This paper also compares the separation efficiency (Ep) of various intermediate-circuit process equipment and seeks to validate the use of the reflux classifier over fine-coal DMCs or spirals. An overview of a modern coal preparation plant treating Indian coal, especially Washery Grade IV coal, with reference to the Jamadoba New Coal Preparation Plant commissioned in 2018, including the basis of equipment selection, the plant profile, the application of the reflux classifier in the intermediate circuit, and the process design criteria, is also outlined in this paper.
Keywords: intermediate circuit, overlapping process, reflux classifier
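The separation efficiency quoted here, Ep (écart probable), is conventionally read off the partition (Tromp) curve of each unit as half the density spread between the 25% and 75% partition points, with a lower Ep indicating a sharper separation:

```latex
E_p \;=\; \frac{\rho_{75} - \rho_{25}}{2}
```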
Procedia PDF Downloads 136
16844 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach
Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani
Abstract:
Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial in developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. A solid tumor is investigated as a porous medium for fluid flow simulation. Most studies use the Darcy model for porous media. In the Darcy model, fluid friction is neglected and a few simplifying assumptions are implemented. In this study, the effect of these assumptions is examined by considering the Brinkman model. A multi-scale mathematical method that calculates fluid flow to a solid tumor is used to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model of our previous studies is developed by considering two models of the momentum equation for porous media: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood from the vessels, blood flow through the vessels, and solute diffusion and convective transport in the extracellular matrix. The sprouting angiogenesis model is used for generating the capillary network, and then the fluid flow governing equations are implemented to calculate blood flow through the tumor-induced capillary network. Finally, the two porous media models are used for modeling fluid flow in normal and tumor tissues for three different tumor shapes. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity, and that the Brinkman model predicts a lower interstitial velocity than the Darcy model does.
Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery
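For reference, the two momentum models compared in the study take the following textbook forms, where v is the Darcy velocity, p the interstitial pressure, k the permeability, μ the fluid viscosity, and μ̃ the effective (Brinkman) viscosity; the term μ̃∇²v is the fluid-friction contribution that the Darcy model neglects:

```latex
\text{Darcy:}\quad \mathbf{v} = -\frac{k}{\mu}\,\nabla p
\qquad
\text{Brinkman:}\quad \nabla p = -\frac{\mu}{k}\,\mathbf{v} + \tilde{\mu}\,\nabla^{2}\mathbf{v}
```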
Procedia PDF Downloads 307
16843 Stochastic Richelieu River Flood Modeling and Comparison of Flood Propagation Models: WMS (1D) and SRH (2D)
Authors: Maryam Safrai, Tewfik Mahdi
Abstract:
This article presents the stochastic modeling of the Richelieu River flood in Quebec, Canada, which occurred in the spring of 2011. With the aid of the one-dimensional Watershed Modeling System (WMS v.10.1) and HEC-RAS (v.4.1) as a flood simulator, the delineation of the probabilistic flooded areas was considered. Based on the Monte Carlo method, WMS (v.10.1) delineated the probabilistic flooded areas with corresponding occurrence percentages. Furthermore, the results of this one-dimensional model were compared with the results of a two-dimensional model (SRH-2D) to evaluate the efficiency and precision of each applied model. Based on this comparison, the computational process in the two-dimensional model is longer and more complicated than in the brief one-dimensional one. Although two-dimensional models are more accurate than the one-dimensional method, according to existing modellers, the delineation of probabilistic flooded areas based on the Monte Carlo method is achievable via a one-dimensional modeler. The software applied in this case study responded well in verifying the research objectives. As a result, flood risk maps of the Richelieu River produced with the two applied models (1D, 2D) could elucidate the flood risk factors in hydrological, hydraulic, and managerial terms.
Keywords: flood modeling, HEC-RAS, model comparison, Monte Carlo simulation, probabilistic flooded area, SRH-2D, WMS
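The Monte Carlo delineation amounts to counting, per cell, the fraction of simulator runs in which the cell floods. The sketch below is illustrative only: run_model stands in for a WMS/HEC-RAS execution, and the sampled input distributions are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2011)

def flood_probability(run_model, n_trials=500):
    """Per-cell flooding probability over n_trials Monte Carlo runs.

    run_model(q, n) stands in for one hydraulic simulation with sampled
    discharge q and Manning roughness n; it returns a boolean flooded grid.
    """
    hits = None
    for _ in range(n_trials):
        q = rng.normal(1500.0, 200.0)       # assumed discharge distribution (m^3/s)
        n = rng.uniform(0.025, 0.045)       # assumed Manning's n range
        flooded = run_model(q, n)
        hits = flooded.astype(int) if hits is None else hits + flooded
    return hits / n_trials

# Toy stand-in model: higher discharge floods more of a 1D "valley" profile.
ground = np.linspace(0.0, 5.0, 50)          # bank elevation (m)
demo = lambda q, n: ground < (q / 600.0)    # crude stage-discharge proxy
print(flood_probability(demo, n_trials=200).round(2))
```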
Procedia PDF Downloads 140
16842 Emergentist Metaphorical Creativity: Towards a Model of Analysing Metaphorical Creativity in Interactive Talk
Authors: Afef Badri
Abstract:
Metaphorical creativity does not constitute a static property of discourse; it is an interactive, dynamic process created online. There has been a lack of research concerning online-produced metaphorical creativity. This paper intends to account for metaphorical creativity in online talk-in-interaction as a dynamic process that emerges as discourse unfolds. It brings together insights from the emergentist approach to the study of metaphor in verbal interactions and insights from the conceptual blending approach as a model for analysing online metaphorical constructions, in order to propose a model for studying metaphorical creativity in interactive talk. The model is based on three focal points. First, metaphorical creativity is a dynamic, emergent, and open-to-change process that evolves in real time as interlocutors constantly blend and re-blend previous metaphorical contributions. Second, it is not a product of isolated individual minds but a joint achievement that is co-constructed and co-elaborated by interlocutors. The third and most important point is that the emergent process of metaphorical creativity is tightly shaped by the contextual variables surrounding talk-in-interaction. It is grounded in the interlocutors’ framework of interpretation. It is constrained by preceding contributions in a way that creates textual cohesion in the verbal exchange, and it is also a goal-oriented process predefined by the communicative intention of each participant in a way that reveals the ideological coherence or incoherence of the entire conversation.
Keywords: communicative intention, conceptual blending, the emergentist approach, metaphorical creativity
Procedia PDF Downloads 259
16841 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylighting and fresh air to buildings is a significant challenge, but it also presents opportunities to reduce energy consumption in buildings by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to the occupant experience and minimize the difference between the predicted and measured performance of buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates the spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The window-to-wall ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Some fixed parameters are chosen according to building codes and standards, and the simulations are done in Seattle, USA. The simulation results are presented as evaluation grids using the thresholds for different metrics, such as Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth while others change significantly, so we can overlap these grids in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics' thresholds define the comfort zone, as sketched after this abstract. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts can be used for tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to the user experience. The results can be reflected in a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
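The grid-overlapping step reduces to a logical AND across the per-metric pass/fail masks. A minimal sketch with four of the eight metrics follows; the threshold values are assumptions drawn from common practice (e.g., DA ≥ 50%, PPD ≤ 10%), not the paper's exact criteria, and in practice the grids would come from the Ladybug/Honeybee simulations rather than random data:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 20, 40                      # evaluation grid over the room floor

# Stand-in metric grids; real ones are exported from the Grasshopper scripts.
grids = {
    "DA":  rng.uniform(0, 100, (ny, nx)),    # % of occupied hours above target lux
    "ASE": rng.uniform(0, 500, (ny, nx)),    # hours of direct sun above 1000 lx
    "DGP": rng.uniform(0, 1, (ny, nx)),      # daylight glare probability
    "PPD": rng.uniform(0, 100, (ny, nx)),    # % of people dissatisfied
}
passes = [                           # assumed pass criteria per metric
    grids["DA"] >= 50,
    grids["ASE"] <= 250,
    grids["DGP"] <= 0.40,
    grids["PPD"] <= 10,
]
comfort_zone = np.logical_and.reduce(passes)
print(f"comfort zone covers {comfort_zone.mean():.0%} of the room")
```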
Procedia PDF Downloads 183
16840 NanoSat MO Framework: Simulating a Constellation of Satellites with Docker Containers
Authors: César Coelho, Nikolai Wiegand
Abstract:
The advancement of nanosatellite technology has opened new avenues for cost-effective and faster space missions. The NanoSat MO Framework (NMF) from the European Space Agency (ESA) provides a modular and simpler approach to the development of flight software and operations of small satellites. This paper presents a methodology using the NMF together with Docker for simulating constellations of satellites. By leveraging Docker containers, the software environment of individual satellites can be easily replicated within a simulated constellation. This containerized approach allows for rapid deployment, isolation, and management of satellite instances, facilitating comprehensive testing and development in a controlled setting. By integrating the NMF lightweight simulator in the container, a comprehensive simulation environment was achieved. A significant advantage of using Docker containers is their inherent scalability, enabling the simulation of hundreds or even thousands of satellites with minimal overhead. Docker's lightweight nature ensures efficient resource utilization, allowing for deployment on a single host or across a cluster of hosts. This capability is crucial for large-scale simulations, such as in the case of mega-constellations, where multiple traditional virtual machines would be impractical due to their higher resource demands. This ability for easy horizontal scaling based on the number of simulated satellites provides tremendous flexibility to different mission scenarios. Our results demonstrate that leveraging Docker containers with the NanoSat MO Framework provides a highly efficient and scalable solution for simulating satellite constellations, offering not only significant benefits in terms of resource utilization and operational flexibility but also enabling testing and validation of ground software for constellations. The findings underscore the importance of taking advantage of already existing technologies in computer science to create new solutions for future satellite constellations in space.
Keywords: containerization, docker containers, NanoSat MO framework, satellite constellation simulation, scalability, small satellites
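At its core, scaling the simulated constellation is just starting N containers from the same image. The Python sketch below uses the Docker SDK to illustrate the idea; the image name and environment variable are hypothetical placeholders for a container that bundles the NMF flight software with its lightweight simulator:

```python
import docker

client = docker.from_env()
IMAGE = "nmf-satellite-sim:latest"   # hypothetical NMF + simulator image

# Launch a 100-satellite constellation; each container is one spacecraft.
constellation = [
    client.containers.run(IMAGE, detach=True, name=f"sat-{i:03d}",
                          environment={"SATELLITE_ID": str(i)})
    for i in range(100)
]
print(f"{len(constellation)} simulated satellites running")

# Tear the constellation down when the test campaign is over.
for sat in constellation:
    sat.stop()
    sat.remove()
```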
Procedia PDF Downloads 50
16839 Design and Simulation of Low Threshold Nanowire Photonic Crystal Surface Emitting Lasers
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Nanowire-based Photonic Crystal Surface Emitting Lasers (PCSELs) reported in the literature have been designed using triangular, square, or honeycomb patterns. Triangular- and square-pattern PCSELs have limited degrees of freedom in tuning the design parameters, which hinders the ability to design high quality factor (Q-factor) devices. Nanowire-based PCSELs designed using triangular and square patterns have been reported with lasing thresholds of 130 kW/cm² and 7 kW/cm², respectively. On the other hand, the honeycomb pattern gives more degrees of freedom in tuning the design parameters, which allows one to design high-Q-factor devices. A deformed honeycomb-pattern device was reported with a lasing threshold of 6.25 W/cm², corresponding to a simulated Q-factor of 5.84×10⁵. Despite this achievement, the design principles that can lead to the realization of even higher-Q-factor honeycomb-pattern PCSELs have not yet been investigated. In this work, we show that by deforming the honeycomb pattern and tuning the height and lattice constants of the nanowires, it is possible to achieve even higher-Q-factor devices. Considering three different band edge modes, we investigate how the resonance wavelength changes as the device is deformed, which is useful in designing high-Q-factor devices in different wavelength bands. We eventually establish the design and simulation of honeycomb PCSELs operating around the wavelength of 960 nm and in the O and C bands, with Q-factors up to 7×10⁷. We also investigate the Q-factors of the undeformed device and establish that the mode at the band edge close to 960 nm can attain the highest Q-factor of all the modes when the device is undeformed, and that the Q-factor degrades as the device is deformed. This work is a stepping stone towards the fabrication of very high-Q-factor, nanowire-based honeycomb PCSELs, which are expected to have very low lasing thresholds.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
Procedia PDF Downloads 11