Search results for: lumped parameter method
19357 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct a time-step control algorithm for the error correction method recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme lies in reusing the duplicated node points of the generalized Chebyshev polynomials of two different degrees, adding only the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously and offers good computational-cost performance compared to the original method. Two stiff problems are solved numerically to assess the effectiveness of the proposed scheme. Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
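The coupling of an error estimate to the step size described above can be illustrated with a generic controller. The sketch below is not the authors' Chebyshev-based scheme: it uses implicit Euler with step-doubling error estimation on a linear stiff test problem, purely to show how a local error estimate drives acceptance/rejection and step-size adaptation (all tolerances and names are illustrative):

```python
import math

# Implicit Euler on the linear stiff model problem
#   y' = lam*(y - g(t)) + g'(t),  exact solution y = g(t) when y(0) = g(0),
# solved in closed form at each step (no Newton iteration needed).
def implicit_euler_step(lam, g, dg, y, t, h):
    return (y + h * (-lam * g(t + h) + dg(t + h))) / (1.0 - h * lam)

def solve_adaptive(lam, g, dg, y0, t0, t_end, h0=1e-3, tol=1e-6):
    """Step-doubling error estimate drives a simple step-size controller."""
    t, y, h, accepted = t0, y0, h0, 0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        y_big = implicit_euler_step(lam, g, dg, y, t, h)
        y_half = implicit_euler_step(lam, g, dg, y, t, h / 2)
        y_small = implicit_euler_step(lam, g, dg, y_half, t + h / 2, h / 2)
        err = abs(y_big - y_small)          # local error estimate
        if err <= tol:                      # accept the step
            t, y, accepted = t + h, y_small, accepted + 1
        # first-order method: local error ~ h^2, so adapt with a square root
        h *= min(4.0, max(0.1, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y, accepted

# stiff test problem with lam = -1000 and slow solution y = cos(t)
y_end, n_steps = solve_adaptive(-1000.0, math.cos,
                                lambda s: -math.sin(s), 1.0, 0.0, 1.0)
print(abs(y_end - math.cos(1.0)), n_steps)
```

The paper's method replaces both the integrator and the error estimator, but the accept/reject loop and the step-size update play the same structural role.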
Procedia PDF Downloads 573
19356 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method
Authors: Lina Wu, Jia Liu, Ye Li
Abstract:
The goal of this project is to investigate constant properties (called the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for a p-energy functional have been applied in the calculus of variations method as computation techniques. Stokes' theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to estimate the lower and upper bounds of the derived p-harmonic stability inequality. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to remain in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. Thus, the possibility of a non-constant p-energy minimizing map is ruled out, and the constant property for a p-energy minimizing map is obtained. Our research establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid for a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and of the ellipsoid (the target space). It is also bounded by the curvature values of the ellipsoid (that is, by the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid generalizes mathematicians' results on a sphere.
Our result is also an extension of mathematicians' Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting. Keywords: Bochner formula, calculus of variations, Stokes' theorem, Cauchy-Schwarz inequality, first and second variation formulas, Liouville-type problem, p-harmonic map
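For orientation, the objects involved can be written in their standard textbook form; this is the general definition of the p-energy functional and its first variation, not the paper's specific construction on the half-ellipsoid:

```latex
E_p(u) \;=\; \frac{1}{p}\int_{\Omega} |du|^{p}\, dx, \qquad p \ge 2,
\qquad
\left.\frac{d}{dt}\right|_{t=0} E_p(u_t)
\;=\; \int_{\Omega} |du|^{p-2}\,\langle du,\, dV\rangle \, dx,
```

where $u_t$ is a variation with $u_0 = u$ and $V = \partial_t u_t|_{t=0}$; the map $u$ is called p-stable when the second variation is nonnegative for every admissible variation field $V$.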
Procedia PDF Downloads 274
19355 Backstepping Design and Fractional Differential Equation of Chaotic System
Authors: Ayub Khan, Net Ram Garg, Geeta Jain
Abstract:
In this paper, the backstepping method is proposed to synchronize two fractional-order chaotic systems. The simulation results show that this method can effectively synchronize the two chaotic systems. Keywords: backstepping method, fractional order, synchronization, chaotic system
Procedia PDF Downloads 458
19354 First Principle Studies on the Structural, Electronic and Magnetic Properties of Some BaMn-Based Double Perovskites
Authors: Amel Souidi, S. Bentata, B. Bouadjemi, T. Lantri, Z. Aziz
Abstract:
Perovskite materials which include magnetic elements are of interest due to the technological perspectives in the spintronics industry. In this work, we have investigated the structural, electronic and magnetic properties of the double perovskites Ba2MnXO6 with X = Mo and W by using the full-potential linearized augmented plane wave (FP-LAPW) method based on Density Functional Theory (DFT) [1, 2] as implemented in the WIEN2K [3] code. The exchange-correlation potential was included through the generalized gradient approximation (GGA) [4], as well as taking into account the on-site Coulomb repulsive interaction in the (GGA+U) approach. We have analyzed the structural parameters, charge and spin densities, and total and partial densities of states. The results show that the materials crystallize in the 225 space group (Fm-3m) and have lattice parameters of about 7.97 Å and 7.95 Å for Ba2MnMoO6 and Ba2MnWO6, respectively. The band structures reveal a metallic ferromagnetic (FM) ground state in Ba2MnMoO6 and a half-metallic (HM) ferromagnetic ground state in the Ba2MnWO6 compound, with total magnetic moments equal to 2.9951 μB (Ba2MnMoO6) and 4.0001 μB (Ba2MnWO6). The GGA+U calculations predict an energy gap in the spin-up bands of Ba2MnWO6. We therefore expect that this HM-FM material is promising for applications in spin-electronics technology. Keywords: double perovskites, electronic structure, first-principles, semiconductors
Procedia PDF Downloads 368
19353 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science in the measurement of psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model.
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results. Keywords: instrument development, item response theory, latent trait theory, psychometrics
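The structure of the simplest of these models can be sketched outside SAS. The toy below simulates 1-PL (Rasch) responses and recovers item difficulties by Newton iteration; to keep it self-contained, abilities are treated as known (possible only because the data are simulated, real IRT estimation, as in PROC IRT, estimates or integrates them out). All sample sizes and difficulties are illustrative:

```python
import math, random

random.seed(0)

def p_correct(theta, b):
    """1-PL (Rasch) item response function: P(correct | theta, b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# simulate subjects answering four dichotomous items of known difficulty
N, true_b = 5000, [-1.0, 0.0, 0.5, 1.5]
thetas = [random.gauss(0.0, 1.0) for _ in range(N)]
resp = [[1 if random.random() < p_correct(t, b) else 0 for b in true_b]
        for t in thetas]

# With abilities known, each item difficulty is a one-parameter
# maximum-likelihood problem, solved by Newton iteration.
def estimate_b(item):
    b = 0.0
    for _ in range(25):
        grad = sum(p_correct(t, b) - r[item] for t, r in zip(thetas, resp))
        hess = sum(p_correct(t, b) * (1.0 - p_correct(t, b)) for t in thetas)
        b += grad / hess        # Newton step maximizing the log-likelihood
    return b

est_b = [estimate_b(i) for i in range(len(true_b))]
print([round(b, 2) for b in est_b])
```

The 2-PL, 3-PL, and 4-PL models add a discrimination slope, a guessing floor, and a carelessness ceiling to the same logistic kernel.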
Procedia PDF Downloads 357
19352 Obtain the Stress Intensity Factor (SIF) in a Medium Containing a Penny-Shaped Crack by the Ritz Method
Authors: A. Tavangari, N. Salehzadeh
Abstract:
In crack growth analysis, the Stress Intensity Factor (SIF) is a fundamental prerequisite. In the present study, the mode I stress intensity factor of a three-dimensional penny-shaped crack is obtained in an isotropic elastic cylindrical medium of arbitrary dimensions, under arbitrary loading at the top of the cylinder, by a semi-analytical method based on the Rayleigh-Ritz method. This method, which is based on minimizing the total potential energy of the whole system, gives results very close to those of previous studies. Defining the displacements (elastic fields) by hypothetical functions in a defined coordinate system is the basis of this research, so to create the singularity conditions at the tip of the crack, the appropriate terms should be found. Keywords: penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method
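The energy-minimization machinery behind the Ritz method is easiest to see on a one-dimensional model problem. The sketch below, which is not the crack problem of the paper, solves -u'' = 1 on (0,1) with u(0)=u(1)=0 (exact solution u(x)=x(1-x)/2) by minimizing the potential energy over a sine trial basis; with this basis the minimization decouples mode by mode:

```python
import math

# Ritz solution of -u'' = 1 on (0,1), u(0)=u(1)=0, exact u(x) = x(1-x)/2.
# Trial space: u(x) = sum_i c_i sin(i*pi*x); minimizing the potential energy
# Pi(u) = 1/2 * int u'^2 dx - int u dx gives one equation per mode.
def quad(f, n=2000):
    """Midpoint-rule quadrature on (0, 1)."""
    h = 1.0 / n
    return sum(f((j + 0.5) * h) for j in range(n)) * h

modes = range(1, 8)
coeffs = []
for i in modes:
    stiff = quad(lambda x, i=i: (i * math.pi * math.cos(i * math.pi * x)) ** 2)
    load = quad(lambda x, i=i: math.sin(i * math.pi * x))
    coeffs.append(load / stiff)   # dPi/dc_i = 0  =>  c_i = load_i / stiff_i

u_ritz = lambda x: sum(c * math.sin(i * math.pi * x)
                       for i, c in zip(modes, coeffs))
print(u_ritz(0.5))   # compare to the exact midpoint value 0.125
```

In the paper, the trial functions are three-dimensional displacement fields containing the crack-tip singularity, and the minimization yields a coupled linear system instead of decoupled modes, but the principle is the same.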
Procedia PDF Downloads 366
19351 On a Negative Relation between Bacterial Taxis and Turing Pattern Formation
Authors: A. Elragig, S. Townley, H. Dreiwi
Abstract:
In this paper, we introduce a bacteria-leukocyte model with bacterial chemotaxis. We assume that bacteria develop a tactic defense mechanism as a response to leukocyte phagocytosis. We explore the effect of this tactic motion on the Turing space in two parameter spaces. A fine-tuning of bacterial chemotaxis shows a significant effect on developing a non-uniform steady state. Keywords: chemotaxis-diffusion driven instability, bacterial chemotaxis, mathematical biology, ecology
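The notion of a "Turing space" rests on a linear stability computation that can be sketched directly. The toy below, which uses a generic two-species reaction-diffusion system with illustrative Jacobian entries, not the bacteria-leukocyte model of the paper, checks when spatial perturbations ~exp(λt + ikx) grow even though the kinetics alone are stable:

```python
import math

# Turing analysis of a 2-species reaction-diffusion system: instability
# occurs when J - k^2 * diag(D) has an eigenvalue with positive real part
# for some wavenumber k, while J itself is stable (tr < 0, det > 0).
def growth_rate(J, D, k):
    a = J[0][0] - k * k * D[0]
    d = J[1][1] - k * k * D[1]
    b, c = J[0][1], J[1][0]
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0   # largest real eigenvalue
    return tr / 2.0                           # complex pair: real part

# illustrative activator-inhibitor Jacobian, stable without diffusion
J = [[0.5, 1.0], [-2.0, -1.5]]
D_equal = (1.0, 1.0)      # equal diffusion: no diffusion-driven instability
D_unequal = (1.0, 40.0)   # fast inhibitor: Turing instability appears

ks = [0.05 * i for i in range(1, 200)]
print(max(growth_rate(J, D_equal, k) for k in ks),
      max(growth_rate(J, D_unequal, k) for k in ks))
```

Adding a chemotaxis term modifies the k-dependent matrix with a cross-diffusion contribution, which is exactly how the tactic motion studied in the abstract shrinks or reshapes the Turing space.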
Procedia PDF Downloads 368
19350 Degradation of Polycyclic Aromatic Hydrocarbons-Contaminated Soil by Proxy-Acid Method
Authors: Reza Samsami
Abstract:
The aim of the study was the degradation of polycyclic aromatic hydrocarbons (PAHs) by the proxy-acid method. The amounts of PAHs were determined in a silty-clay soil sample from an aged oil refinery field in Abadan, Iran. The proxy-acid treatment method was investigated. The results show that the proxy-acid system is an effective method for the degradation of PAHs. The results also demonstrate that the number of fused aromatic rings does not have a significant effect on PAH removal by the proxy-acid method. Keywords: proxy-acid treatment, silty-clay soil, PAHs, degradation
Procedia PDF Downloads 267
19349 Water-Repellent Finishing on Cotton Fabric by SF₆ Plasma
Authors: We'aam Alali, Ziad Saffour, Saker Saloum
Abstract:
Low-pressure sulfur hexafluoride (SF₆) remote radio-frequency (RF) plasma, ignited in a hollow cathode discharge (HCD-L300) plasma system, has been shown to be a powerful method in cotton fabric finishing to achieve a water-repellent property. The plasma was ignited at an SF₆ flow rate of (200 cm), a low pressure of 0.5 mbar, and a radio frequency of 13.56 MHz with a power of 300 W. The contact angle has been measured as a function of the plasma exposure period using a water contact angle (WCA) measuring device, and the changes in the morphology, chemical structure, and mechanical properties (tensile strength and elongation at break) of the fabric have also been investigated using scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), and a tensile test device, respectively. In addition, the weight loss of the fabric and the washing fastness have been studied. It was found that the exposure period of the fabric to the plasma is an important parameter. Moreover, a good water-repellent cotton fabric can be obtained by treating it with SF₆ plasma for a short time (1 min) without degrading its mechanical properties. Regarding the modified morphology of the cotton fabric, it was found that grooves were formed on the surface of the fibers after treatment. Chemically, fluorine atoms were attached to the surface of the fibers. Keywords: cotton fabric, SEM, SF₆ plasma, water-repellency
Procedia PDF Downloads 81
19348 Critical Activity Effect on Project Duration in Precedence Diagram Method
Authors: Salman Ali Nisar, Koshi Suzuki
Abstract:
The Precedence Diagram Method (PDM), with its additional relationships between activities, i.e., start-to-start, finish-to-finish, and start-to-finish, provides a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have anomalous effects on the critical path. Researchers have proposed classifications of critical activity effects. In this paper, we study the classification of critical activity effects further and provide more detailed information. Furthermore, we determine the maximum amount of time for each class of critical activity effect, by which project managers can control the dynamic feature (shortening/lengthening) of critical activities and the project duration more efficiently. Keywords: construction project management, critical path method, project scheduling, precedence diagram method
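For readers unfamiliar with the baseline the abstract contrasts against, the classic CPM computation (finish-to-start relationships only, the case PDM generalizes) is a forward and backward pass over the activity network. The network and durations below are illustrative:

```python
# activity: (duration, list of predecessors); listed in topological order
acts = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
        "D": (2, ["B", "C"]), "E": (3, ["D"])}

# forward pass: earliest start (es) and earliest finish (ef)
es, ef = {}, {}
for a in acts:
    dur, preds = acts[a]
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

proj = max(ef.values())          # project duration

# backward pass: latest finish (lf) and latest start (ls)
ls, lf = {}, {}
for a in reversed(list(acts)):
    succs = [s for s in acts if a in acts[s][1]]
    lf[a] = min((ls[s] for s in succs), default=proj)
    ls[a] = lf[a] - acts[a][0]

critical = [a for a in acts if es[a] == ls[a]]   # zero total float
print(proj, critical)
```

In CPM, shortening a critical activity never lengthens the project; the anomalies the paper classifies arise precisely because PDM's start-to-start and finish-to-finish links break this monotonicity.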
Procedia PDF Downloads 511
19347 Cross-Cultural Analysis of the Impact of Project Atmosphere on Project Success and Failure
Authors: Omer Livvarcin, Mary Kay Park, Michael Miles
Abstract:
The current literature includes a few studies that mention the impact of relations between teams, the business environment, and experiences from previous projects. There is, however, limited research that treats the phenomenon of project atmosphere (PA) as a whole. This is especially true of research identifying parameters and sub-parameters which allow project management (PM) teams to build a project culture that ultimately fosters project success. This study's findings identify a number of key project atmosphere parameters and sub-parameters that affect project management success. One key parameter identified in the study is a cluster related to cultural concurrence, including artifacts such as policies and mores, values, perceptions, and assumptions. A second cluster centers on motivational concurrence, including such elements as project goals and team-member expectations, moods, morale, motivation, and organizational support. A third parameter cluster relates to experiential concurrence, with a focus on project and organizational memory, previous internal PM experience, and external environmental PM history and experience. A final cluster of parameters comprises those falling in the area of relational concurrence, including inter/intragroup relationships, role conflicts, and trust. International and intercultural project management data were collected and analyzed from the following countries: Canada, China, Nigeria, South Korea and Turkey. The cross-cultural nature of the data set suggests increased confidence that the findings will be generalizable across cultures and thus applicable for future international project management success.
The intent of identifying project atmosphere as a critical project management element is that a clear understanding of how its sub-parameters affect projects may significantly improve the odds of success of future international and intercultural projects. Keywords: project management, project atmosphere, cultural concurrence, motivational concurrence, relational concurrence
Procedia PDF Downloads 318
19346 Solution of Singularly Perturbed Differential Difference Equations Using Liouville Green Transformation
Authors: Y. N. Reddy
Abstract:
The class of differential-difference equations which have characteristics of both classes, i.e., delay/advance and singularly perturbed behaviour, is known as singularly perturbed differential-difference equations. The expressions 'positive shift' and 'negative shift' are also used for 'advance' and 'delay', respectively. In general, an ordinary differential equation in which the highest-order derivative is multiplied by a small positive parameter and which contains at least one delay/advance term is known as a singularly perturbed differential-difference equation. Singularly perturbed differential-difference equations arise in the modelling of various practical phenomena in bioscience, engineering, and control theory, specifically in variational problems, in describing the human pupil-light reflex, in a variety of models for physiological processes or diseases, and in first exit time problems in modelling the expected time for the generation of action potentials in nerve cells by random synaptic inputs in dendrites. In this paper, we envisage the use of the Liouville-Green transformation to find the solution of singularly perturbed differential-difference equations. First, using a Taylor series, the given singularly perturbed differential-difference equation is approximated by an asymptotically equivalent singular perturbation problem. Then the Liouville-Green transformation is applied to get the solution. Several model examples are solved, and the results are compared with other methods. It is observed that the present method gives better approximate solutions. Keywords: difference equations, differential equations, singular perturbations, boundary layer
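As background, the classical Liouville-Green (WKB) approximation for a singularly perturbed equation in normal form can be sketched as follows; this is the standard textbook statement, not the paper's derivation for the delay/advance case:

```latex
\varepsilon^{2} y'' = q(x)\, y, \qquad q(x) > 0,
\qquad
y(x) \;\approx\; q(x)^{-1/4}
\left[
  C_{1} \exp\!\left( \frac{1}{\varepsilon}\int^{x}\!\sqrt{q(s)}\, ds \right)
  +
  C_{2} \exp\!\left(-\frac{1}{\varepsilon}\int^{x}\!\sqrt{q(s)}\, ds \right)
\right].
```

The rapidly decaying exponential captures the boundary layer, while the slowly varying prefactor $q^{-1/4}$ matches the outer solution; reducing the shifted equation to this normal form is the role of the Taylor-series approximation step described above.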
Procedia PDF Downloads 199
19345 Effect of the Binary and Ternary Exchanges on Crystallinity and Textural Properties of X Zeolites
Authors: H. Hammoudi, S. Bendenia, K. Marouf-Khelifa, R. Marouf, J. Schott, A. Khelifa
Abstract:
The ionic exchange of the NaX zeolite by Cu2+ and/or Zn2+ cations is carried out progressively while following the evolution of some of its characteristics: crystallinity by X-ray diffraction, isotherm profiles, the RI criterion, isosteric adsorption heat, and microporous volume, using both the Dubinin–Radushkevich (DR) equation and the t-plot through the Lippens–de Boer method, which also makes it possible to determine the external surface area. Results show that the cationic exchange process, in the case of Cu2+ introduced at a higher degree, is accompanied by crystalline degradation for Cu(x)X, in contrast to the Zn2+-exchanged zeolite X. This degradation occurs without significant presence of mesopores, because the RI criterion values were found to be much lower than 2.2. A comparison between the binary and ternary exchanges shows that the curves of CuZn(x)X are clearly below those of Zn(x)X and Cu(x)X, whatever the examined parameter. On the other hand, the curves relating to CuZn(x)X tend towards those of Cu(x)X. This again confirms the sensitivity of the crystalline structure of CuZn(x)X to the introduction of Cu2+ cations. An original result is the distortion of the zeolitic framework of X zeolites at a middle exchange degree, when Cu2+ competes with another divalent cation, such as Zn2+, for the occupancy of sites distributed within the zeolitic cavities. In other words, the ternary exchange accentuates the crystalline degradation of X zeolites. An unexpected result is also the absence of correlation between crystal damage and the external surface area. Keywords: adsorption, crystallinity, ion exchange, zeolite
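The Dubinin–Radushkevich analysis mentioned above extracts the micropore volume from the linearized form ln W = ln W0 - (A/E)², with A = RT ln(p0/p). The sketch below fits that line to synthetic data; the values of W0, E, and the temperature are invented for illustration, not taken from the paper:

```python
import math

R, T = 8.314, 77.0                 # gas constant (J/mol K); 77 K assumed
W0_true, E_true = 0.30, 9000.0     # assumed micropore volume and energy

# synthetic adsorption data generated from the DR equation itself
rel_p = [0.001 * 2 ** i for i in range(9)]          # p/p0 from 1e-3 to ~0.26
A = [R * T * math.log(1.0 / p) for p in rel_p]      # adsorption potential
W = [W0_true * math.exp(-(a / E_true) ** 2) for a in A]

# linearized DR plot: least-squares of ln W against A^2
xs = [a * a for a in A]
ys = [math.log(w) for w in W]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
W0_fit = math.exp(my - slope * mx)   # intercept gives the micropore volume
E_fit = math.sqrt(-1.0 / slope)      # slope gives the characteristic energy
print(round(W0_fit, 3), round(E_fit))
```

On real isotherm data the fit is restricted to the low-relative-pressure (micropore-filling) region, and the t-plot provides an independent cross-check of the micropore volume.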
Procedia PDF Downloads 259
19344 Design and Optimization of Spoke Rotor Type Brushless Direct Current Motor for Electric Vehicles Using Different Flux Barriers
Authors: Ismail Kurt, Necibe Fusun Oyman Serteller
Abstract:
Today, with the reduction in semiconductor system costs, Brushless Direct Current (BLDC) motors have become widely preferred. Based on rotor architecture, BLDC structures are divided into interior permanent magnet (IPM) and surface permanent magnet (SPM) types. Permanent magnet (PM) motors in electric vehicles (EVs) are still predominantly based on IPM designs, as the rotors do not require sleeves, the PMs are better protected by the rotor cores, and the air-gap lengths can be much smaller. This study discusses the IPM rotor structure in detail, highlighting its higher torque levels, reluctance torque, wide speed range operation, and production advantages. IPM rotor structures are particularly preferred in EVs due to their high-speed capability, torque density and field weakening (FW) features. In FW operation, the motor becomes suitable for operation at torques lower than the rated torque but at speeds above the rated speed. Although V-type and triangular IPM rotor structures are generally preferred in EV applications, the spoke-type rotor structure offers distinct advantages, making it a competitive option for these systems. The flux barriers in the rotor significantly affect motor performance, providing notable benefits in both motor efficiency and cost. This study utilizes ANSYS/Maxwell simulation software to analyze the spoke-type IPM motor and examine its key design parameters. Through analytical and 2D analysis, preliminary motor design and parameter optimization have been carried out. During the parameter optimization phase, torque ripple, a common issue especially for IPM motors, has been investigated, along with the associated changes in motor parameters. Keywords: electric vehicle, field weakening, flux barrier, spoke rotor
Procedia PDF Downloads 8
19343 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers have relied more on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions for the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is best suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure. Keywords: chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability
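The Monte Carlo estimate of the probability of failure can be sketched on the simplest limit state, g = R - S (capacity minus load), with both variables normal. The moments below are illustrative, not the bridge data of the paper, and the normal case is chosen because it has a closed-form answer to check against:

```python
import math, random

random.seed(42)

# limit state g = R - S; failure when g < 0 (illustrative moments)
muR, sdR = 10.0, 1.5    # resistance (capacity)
muS, sdS = 6.0, 1.0     # load effect

N = 200_000
fails = sum(random.gauss(muR, sdR) - random.gauss(muS, sdS) < 0
            for _ in range(N))
pf_mc = fails / N

# exact answer for the normal case via the reliability index beta
beta = (muR - muS) / math.hypot(sdR, sdS)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))
print(pf_mc, pf_exact)
```

Accounting for spatial variability, as the paper does, amounts to replacing the single random variables by correlated random fields whose correlation length comes from the chloride-profile data, which generally changes the tail probability substantially.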
Procedia PDF Downloads 473
19342 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach
Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere
Abstract:
Considering environmental issues, the interest in applying sustainable materials in industry increases. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced Polylactic Acid (PLA) is gaining popularity. However, these thermoplastic-based composites show issues in adhesive bonding. This research focusses on analyzing the effects of the fibers near the bonding interphase. The research applies injection molded plate structures. A first important parameter concerns the fiber volume fraction, which directly affects the adhesion characteristics of the surface. This parameter is varied between 0 (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, thus adhesion. Therefore, this research work considers three different roughness conditions; different mechanical treatments yield roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy which is cured at 23 °C for 48 hours. In order to assure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions).
Numerical modelling (non-linear FEA) relates the effects of the considered parameters to the stiffness and strength of the different joints, obtained through the abovementioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different considered adhesion parameters. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already prominent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), it is noticed that the increase in surface energy is proportional to the surface roughness, to some extent. This becomes more pronounced when the fiber volume fraction increases. Secondly, ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line. Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint
Procedia PDF Downloads 128
19341 A Development of Holonomic Mobile Robot Using Fuzzy Multi-Layered Controller
Authors: Seungwoo Kim, Yeongcheol Cho
Abstract:
In this paper, a holonomic mobile robot is designed with omnidirectional wheels, and an adaptive fuzzy controller is presented for precise trajectory tracking. A kind of adaptive controller based on a fuzzy multi-layered algorithm is used to handle the large parametric uncertainty of the motor-controlled dynamic system of the 3-wheel omnidirectional mobile robot. The system parameters, such as the tracking force, are highly time-varying due to the kinematic structure of omnidirectional wheels. The fuzzy adaptive control method is able to solve the problems of classical adaptive controllers and conventional fuzzy adaptive controllers. The basic idea of the new adaptive control scheme is that an adaptive controller can be constructed as a parallel combination of robust controllers. This new adaptive controller uses a fuzzy multi-layered architecture which has several independent fuzzy controllers in parallel, each with a different robust stability area. Out of these independent fuzzy controllers, the most suited one is selected by a system identifier which observes variations in the controlled system parameter. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of the controlled system. Finally, the good performance of the holonomic mobile robot is confirmed through live tests of the tracking control task. Keywords: fuzzy adaptive control, fuzzy multi-layered controller, holonomic mobile robot, omnidirectional wheels, robustness and stability
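The selection mechanism described above, a bank of controllers plus an identifier that picks the best-matching one, can be sketched on a toy plant. The code below uses crisp proportional controllers and a scalar first-order plant instead of fuzzy controllers and the robot dynamics, so it illustrates only the supervisory structure; every number in it is invented for illustration:

```python
# Bank of controllers, each tuned for a different range of the uncertain
# plant gain b, plus an identifier that estimates b online and selects the
# best-matching controller.  Plant: x' = -x + b*u (scalar toy model).
def simulate(b_true, dt=0.01, steps=2000):
    bank = {1.0: 4.0, 4.0: 1.0, 16.0: 0.25}   # nominal b -> gain k (k*b ~ 4)
    x, ref, b_est = 0.0, 1.0, 1.0
    for _ in range(steps):
        nominal = min(bank, key=lambda b: abs(b - b_est))
        u = bank[nominal] * (ref - x)          # selected proportional law
        x_new = x + dt * (-x + b_true * u)     # explicit Euler plant update
        if abs(u) > 1e-6:
            # identifier: invert the plant model for b, then smooth it
            b_est = 0.9 * b_est + 0.1 * (((x_new - x) / dt + x) / u)
        x = x_new
    return x, min(bank, key=lambda b: abs(b - b_est))

x_final, selected = simulate(16.0)
print(round(x_final, 3), selected)
```

In the paper's scheme each bank member is a fuzzy controller with its own robust stability region, and the identifier is part of the fuzzy multi-layered architecture rather than a direct model inversion.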
Procedia PDF Downloads 360
19340 Influence of Disintegration of Sida hermaphrodita Silage on Methane Fermentation Efficiency
Authors: Marcin Zielinski, Marcin Debowski, Paulina Rusanowska, Magda Dudek
Abstract:
As a result of sonication, the destruction of complex biomass structures leads to an increase in the biogas yield from the conditioned material. First, the amount of organic matter released into the solution due to disintegration was determined. This parameter was determined from changes in the carbon content in the liquid phase of the conditioned substrate. The amount of carbon in the liquid phase increased with prolongation of the sonication time up to 16 min; a further increase in the duration of sonication did not cause a statistically significant increase in the amount of organic carbon in the liquid phase. The disintegrated material was then used in respirometric measurements to determine the impact of the conditioning process on methane fermentation effectiveness. A relationship between the amount of energy introduced into the lignocellulosic substrate and the amount of biogas produced has been demonstrated. A statistically significant increase in the amount of biogas was observed up to a sonication time of 16 min; a further increase in the energy input of the conditioning process did not significantly increase the biogas production from the treated substrate. At that point, the biogas production from the conditioned substrate was 17% higher than from the reference biomass. The ultrasonic disintegration method did not significantly affect the observed biogas composition: in all series, the methane content in the biogas produced from the conditioned substrate was similar to that obtained with the raw substrate sample (51.1%). Another method of substrate conditioning was hydrothermal depolymerization. This method consists in applying increased temperature and pressure to the substrate. These phenomena destroy the structure of the processed material and release organic compounds into the solution, which should lead to an increase in the amount of biogas produced from the treated biomass.
The hydrothermal depolymerization was conducted using an innovative microwave heating method; control measurements were performed using conventional heating. The obtained results indicate a relationship between the depolymerization temperature and the amount of biogas. The biogas production coefficients increased statistically significantly as the depolymerization temperature increased to 150°C. Raising the depolymerization temperature further to 180°C did not significantly increase the amount of biogas produced in the respirometric tests. As a result of hydrothermal depolymerization using microwaves at 150°C for 20 min, the biogas yield from the Sida silage was 780 L/kg VS, more than twice the 370 L/kg VS obtained from the same silage without depolymerization. The study showed that microwave heating makes it possible to effectively depolymerize the substrate. Significant differences occurred especially in the temperature range of 130-150°C. The pre-treatment of the Sida hermaphrodita silage (biogas substrate) did not significantly affect the quality of the biogas produced; the methane concentration was about 51.5% on average. The study was carried out in the framework of a project under the BIOSTRATEG program funded by the National Centre for Research and Development, No. 1/270745/2/NCBR/2015 'Dietary, power, and economic potential of Sida hermaphrodita cultivation on fallow land'. Keywords: disintegration, biogas, methane fermentation, Virginia fanpetals, biomass
Procedia PDF Downloads 310
19339 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchanges between soil, plant cover and atmosphere, in addition to the surface processes and the hydrological cycle. Aerodynamic roughness length, on the other hand, is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this regard, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on the process of soil evolution, its characterization and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or continuous plant cover, is motivated by significant research efforts in agronomy and biology; a major problem in this area is crop damage by wind, a growing field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description, in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have also introduced the multi-layer aspect of soil surface humidity, to take a volume component into account in the radar backscattering problem.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation because of the vegetation's backscattering: the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behavior of each layer. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model used to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and of the soil below.
Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
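The multi-scale surface description above can be illustrated in one dimension. The sketch below is an illustrative stand-in for the authors' 2D MLS implementation, not their code: the synthetic profile, the choice of scales, and the Haar filter pair are assumptions for the demo. It builds a profile as a superposition of a fine and a coarse Gaussian process and runs a Mallat-style pyramid, reporting the detail-coefficient energy per scale.

```python
import numpy as np

def haar_step(x):
    # One level of the Mallat pyramid with the Haar filter pair
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def multiscale_energies(profile, levels):
    # Energy of the detail coefficients per dyadic scale: a crude
    # one-dimensional stand-in for the multi-scale roughness description
    energies = []
    a = np.asarray(profile, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(float(np.sum(d ** 2)))
    return energies

rng = np.random.default_rng(1)
n = 1024
fine = rng.normal(0.0, 1.0, n)                        # small-scale roughness
coarse = np.repeat(rng.normal(0.0, 1.0, n // 64), 64) # large-scale undulation
energies = multiscale_energies(fine + coarse, levels=6)
```

Because the coarse component is constant over 64-sample blocks, its detail energy appears only at the deepest pyramid levels; this separation of contributions by scale is what the multi-scale Gaussian-process description exploits.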
Procedia PDF Downloads 269
19338 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum, at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, because of this high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since such profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined by ECMWF data available at only four synoptic times per day (UTC 00:00, 06:00, 12:00, 18:00) for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC); with the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired.
Subsequently, a group of the six DTC model parameters is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainties, these do not significantly affect the determination of three of the parameters: β (the angular frequency), td (the time at which Tg reaches its maximum), and ts (the starting time of attenuation). Furthermore, because of the large temperature fluctuations and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before to two hours after sunrise are excluded. With td, ts, and β known, a new DTC model (denoted DTC model 2) is accurately fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six DTC parameters is thereby generated, from which the Tg at any given time can be obtained. Finally, the method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably well without additional assumptions, and that the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
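The per-pixel fitting step can be sketched as follows. This is a minimal illustration rather than the authors' code: only the daytime cosine part of the two-part DTC model is fitted, a plain Gauss-Newton iteration stands in for Levenberg-Marquardt, and all numbers (times, temperatures, parameter values) are invented for the demo.

```python
import numpy as np

def dtc_day(t, T0, Ta, omega, tm):
    # Daytime half of the semi-empirical DTC model: a cosine hump with
    # residual temperature T0, amplitude Ta, width omega and peak time tm
    return T0 + Ta * np.cos((np.pi / omega) * (t - tm))

def fit_dtc(t, Tg, p0, n_iter=50):
    # Least-squares fit by plain Gauss-Newton with a numerical Jacobian
    # (a simple stand-in for the Levenberg-Marquardt fit used in the paper)
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = Tg - dtc_day(t, *p)
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (dtc_day(t, *(p + dp)) - dtc_day(t, *(p - dp))) / 2e-6
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# synthetic surface-leaving brightness temperatures Tg at 15-min cadence
t = np.arange(8.0, 16.0, 0.25)        # local solar time, hours
true_p = (290.0, 12.0, 11.0, 13.0)    # T0 (K), Ta (K), omega (h), tm (h)
Tg = dtc_day(t, *true_p)
p = fit_dtc(t, Tg, p0=(285.0, 10.0, 10.0, 12.5))
```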
Procedia PDF Downloads 268
19337 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila
Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores
Abstract:
This study focuses on applying an enhanced density-based spatial clustering algorithm with noise (DBSCAN) to fire risk assessment and warning in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrary-shaped clusters and for its resistance to noise. However, its performance diminishes on high-dimensional data, where it may treat noise points as relevant data points, and it depends on the parameters (eps and minPts) set by the user; choosing the wrong parameters can greatly affect the clustering result. To overcome these challenges, the study proposes three key enhancements: first, utilize multiple MinHash functions and locality-sensitive hashing to reduce the dimensionality of the data set; second, apply Jaccard similarity before the parameter epsilon, to ensure that only similar data points are considered neighbors; and third, use the concept of a Jaccard neighborhood together with the parameter minPts to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it produced fewer outliers, which facilitated a clearer identification of fire-prone areas; it achieved a high Silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and it achieved a notably low Davies-Bouldin index and a high Calinski-Harabasz score, highlighting its ability to form compact and well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended to assess the areas in Metro Manila that are most prone to fire risk.
Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires
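The Jaccard-neighborhood idea behind the second and third enhancements can be sketched as follows. This is a toy reading of the approach, not the authors' implementation: the MinHash/LSH stage is omitted, exact Jaccard similarity is computed directly, and two points count as neighbors when their similarity meets the epsilon-like threshold min_sim.

```python
def jaccard(a, b):
    # Jaccard similarity of two sets
    return len(a & b) / len(a | b)

def jaccard_dbscan(items, min_sim, min_pts):
    # DBSCAN-style clustering in which a point's neighborhood is every
    # item whose Jaccard similarity is at least min_sim; points whose
    # neighborhood is smaller than min_pts never seed a cluster
    n = len(items)
    nbrs = [[j for j in range(n) if jaccard(items[i], items[j]) >= min_sim]
            for i in range(n)]
    labels = [-1] * n                      # -1 = noise / unvisited
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue                       # not an unvisited core point
        labels[i] = cid                    # grow a new cluster from i
        stack = list(nbrs[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(nbrs[j]) >= min_pts:
                    stack.extend(nbrs[j])  # core point: keep expanding
        cid += 1
    return labels

# two groups of mutually similar sets plus one isolated item
items = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4},
         {7, 8, 9}, {7, 8, 10}, {7, 9, 10}, {100}]
labels = jaccard_dbscan(items, min_sim=0.4, min_pts=3)
```

The isolated item has no sufficiently similar neighbors, so it keeps the label -1 and is reported as noise.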
Procedia PDF Downloads 7
19336 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of crater detection from images of the lunar surface captured by a small space probe, using principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice; accordingly, the method has to be implemented in an FPGA. This paper compares the FPGA implementation with a generic computer in terms of the processing time of the PCA-based crater detection method.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
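Setting the FPGA aspect aside, the PCA stage can be illustrated as scoring flattened image patches by their reconstruction error after projection onto the leading eigenvectors of the patch set. The function name and the use of reconstruction error as the 'strength value' are assumptions for this demo, not the paper's exact computation.

```python
import numpy as np

def pca_strength(patches, k=3):
    # Score each flattened patch by its residual after projecting onto
    # the top-k principal components of the patch set (a hypothetical
    # stand-in for the paper's eigenvector-based strength value)
    X = patches - patches.mean(axis=0)
    cov = X.T @ X / len(X)
    w, V = np.linalg.eigh(cov)       # eigenvalues in ascending order
    basis = V[:, -k:]                # top-k eigenvectors
    proj = X @ basis @ basis.T       # projection onto the PCA subspace
    return np.linalg.norm(X - proj, axis=1)

rng = np.random.default_rng(2)
B = rng.normal(size=(3, 16))                 # a 3-D basis in patch space
crater_like = rng.normal(size=(50, 3)) @ B   # patches inside the subspace
random_patches = rng.normal(size=(50, 16))   # patches with no structure
s_low = pca_strength(crater_like)
s_high = pca_strength(random_patches)
```

Patches that lie in the learned low-dimensional subspace score near zero, while unstructured patches score high, which is the discrimination PCA provides.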
Procedia PDF Downloads 555
19335 MapReduce Logistic Regression Algorithms with RHadoop
Authors: Byung Ho Jung, Dong Hoon Lim
Abstract:
Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of a logistic regression within the MapReduce framework using RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. There are three learning algorithms for logistic regression: the gradient descent method, the cost minimization method, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while gradient descent and cost minimization need a learning rate to be picked manually. The experimental results demonstrate that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of the Newton-Raphson method with the gradient descent and cost minimization methods; the Newton-Raphson method appeared to be the most robust across all data tested.
Keywords: big data, logistic regression, MapReduce, RHadoop
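The Newton-Raphson iteration for logistic regression, which needs no learning rate, can be sketched in a few lines of NumPy. This is a single-machine toy on synthetic data, not the paper's RHadoop/MapReduce implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton(X, y, n_iter=10):
    # Newton-Raphson for logistic regression: each step solves
    # H @ delta = gradient, so no learning rate has to be tuned
    # (unlike gradient descent and cost minimization)
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        W = p * (1.0 - p)                        # IRLS weights
        grad = X.T @ (y - p)                     # score vector
        H = X.T @ (X * W[:, None])               # Fisher information
        beta += np.linalg.solve(H, grad)
    return beta

# toy data: label is 1 when x > 0, with a soft (noisy) boundary
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (x + 0.3 * rng.normal(size=200) > 0).astype(float)
beta = logistic_newton(x[:, None], y)
```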
Procedia PDF Downloads 285
19334 Contribution at Dimensioning of the Energy Dissipation Basin
Authors: M. Aouimeur
Abstract:
The environmental risks of a dam, and particularly the security of the valley downstream of it, form a very complex problem; integrated management and risk-sharing become more and more indispensable. Defining the concept of 'vulnerability' can help in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of an extreme flood at the dam site can cause the rupture of the structure and important damage downstream. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may also be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam as the extreme flood passes over the weir; it regulates, naturally and more safely, the discharge or the elevation of the water surface at the crest of the weir; and it reduces the flow velocity downstream of the dam so as to match the velocity in the river bed. The problem in dimensioning a classical dissipation basin lies in determining the parameters necessary for sizing this structure. This communication presents a simple graphical method, fast and complete, together with a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classical dissipation basin.
This graphical method takes into account the constraints imposed by the terrain and by practice, such as those related to the topography of the site, the preservation of the environmental equilibrium, and technical and economic considerations. The methodology is to impose the head loss DH dissipated by the hydraulic jump as a hypothesis (free design) in order to determine all the other parameters of the classical dissipation basin; the imposed DH can be equal to a selected value or to a certain percentage of the upstream total head created by the dam. With the dimensionless parameter DH+ = DH/k (k: critical depth), the elaborated graphical representation allows the other dimensionless parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classical dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment
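The quantities the method manipulates can be reproduced with the classical hydraulic-jump relations. The sketch below uses illustrative numbers, not data from the communication: it computes the sequent depth from the Bélanger equation, the head loss DH dissipated by the jump, and the dimensionless DH+ = DH/k used in the graphical representation.

```python
import math

def conjugate_depth(h1, Fr1):
    # Belanger equation: sequent (downstream) depth of a hydraulic jump
    return 0.5 * h1 * (math.sqrt(1.0 + 8.0 * Fr1 ** 2) - 1.0)

def jump_head_loss(h1, h2):
    # Head loss dissipated by the jump: DH = (h2 - h1)^3 / (4 h1 h2)
    return (h2 - h1) ** 3 / (4.0 * h1 * h2)

g = 9.81                        # m/s^2
h1 = 0.5                        # m, supercritical inflow depth
v1 = 12.0                       # m/s, inflow velocity
q = v1 * h1                     # m^2/s, unit discharge
Fr1 = v1 / math.sqrt(g * h1)    # inflow Froude number
h2 = conjugate_depth(h1, Fr1)
DH = jump_head_loss(h1, h2)
k = (q ** 2 / g) ** (1.0 / 3.0) # critical depth
DH_plus = DH / k                # dimensionless head loss DH+ = DH/k
```

For this inflow (Fr1 about 5.4), the jump dissipates roughly 4.1 m of head, i.e. DH+ of about 2.7 critical depths, which is the kind of value read directly off the graphical representation.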
Procedia PDF Downloads 583
19333 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on basic mathematics teaching, called meta-instruction, which reverses the learning path. Meta-instruction is a method of teaching in which the teaching trajectory starts from brain education and moves into learning; the research focuses on the behavior of the mind during learning. In this method, students are not instructed in mathematics but educated in it. Other goals of the research are to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute mathematics education for mathematics teaching, an indirect method of teaching. The research method is longitudinal, spanning four years, with statistical samples of students aged 6 to 11. The research focuses on improving children's mental abilities to explore mathematical rules and operations through play alone, with eight measurements (two examinations in each year). The results showed a significant difference between groups in remembering, understanding, and applying; moreover, educating in mathematics proved more effective than instructing for overall learning abilities.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 24
19332 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis
Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers
Abstract:
SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals, in order to monitor alterations in specimen geometry and construct true stress-strain curves. High-resolution 3D reconstructions of X-ray computed tomography (CT) images (0.82 μm spatial resolution) allowed a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, plasticity modeling was performed on the basis of the 2D and 3D images, and the results were compared to the experimental data. A specific 'two-parameter criterion' was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion demonstrated substantial agreement with the experimental results, thus enhancing our knowledge of ductile fracture behavior in this steel grade. The implementation of X-ray CT and fractography analysis provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality
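Stress triaxiality, one of the two quantities the criterion correlates, is the ratio of the mean (hydrostatic) stress to the von Mises equivalent stress. The sketch below gives the generic definition only; it is not tied to the paper's specimens or data.

```python
import numpy as np

def stress_triaxiality(sigma):
    # Triaxiality = hydrostatic stress / von Mises equivalent stress
    sm = np.trace(sigma) / 3.0
    dev = sigma - sm * np.eye(3)              # deviatoric stress tensor
    seq = np.sqrt(1.5 * np.sum(dev * dev))    # von Mises stress
    return sm / seq

# A smooth round bar in uniaxial tension has triaxiality exactly 1/3;
# notched geometries raise it, which is what varying the notch probes
uniaxial = np.diag([200.0, 0.0, 0.0])         # MPa
equibiaxial = np.diag([100.0, 100.0, 0.0])    # MPa
```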
Procedia PDF Downloads 92
19331 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the installation method, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile executed in soil; among them, the method of installation is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests on an open-ended and a closed-ended steel pile, each installed by two methods, 'with displacement' and 'without displacement', the effect of the end condition and the installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is Firoozkooh silty sand. The results of the experiments show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and that for a given installation method the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 153
19330 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
The capacity spectrum method (CSM), one of the methodologies for evaluating the seismic fragility of building structures, has long been recognized as the most convenient, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure to estimate seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the fragility curves obtained from the IDA method present less variation (fewer uncertainties) in the extensive and complete damage states. This is because the IDA method, unlike the CSM, properly captures the structural response beyond yielding and directly accounts for higher-mode effects. From these observations, the CSM may overestimate the seismic vulnerability of the studied structure in the extensive and complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
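One common way to turn IDA results into a fragility curve is to fit a lognormal CDF to the intensity measures at which each ground-motion record first reaches a damage state. The sketch below is illustrative only: the capacity values are invented, not results for the 5-story frame studied here.

```python
import math

def lognormal_fragility(im_capacities):
    # Fit P(damage | IM) = Phi((ln IM - ln theta) / beta) using the
    # log-mean / log-std of the per-record IM capacities from IDA
    logs = [math.log(x) for x in im_capacities]
    n = len(logs)
    mu = sum(logs) / n
    beta = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    theta = math.exp(mu)                 # median capacity
    def p_exceed(im):
        z = (math.log(im) - mu) / (beta * math.sqrt(2.0))
        return 0.5 * (1.0 + math.erf(z)) # standard normal CDF via erf
    return theta, beta, p_exceed

# hypothetical spectral accelerations (g) at which 8 records first
# reach the 'extensive' damage state in the IDA
caps = [0.62, 0.71, 0.80, 0.85, 0.90, 0.97, 1.10, 1.25]
theta, beta, frag = lognormal_fragility(caps)
```

Here theta is the median capacity and beta the lognormal dispersion; the smaller beta observed with IDA in the severe damage states is exactly the reduced variation the abstract refers to.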
Procedia PDF Downloads 423
19329 Choosing an Optimal Epsilon for Differentially Private Arrhythmia Analysis
Authors: Arin Ghazarian, Cyril Rakovski
Abstract:
Differential privacy has become the leading technique for protecting the privacy of individuals in a database while allowing useful analyses to be performed and their results to be shared. It puts a guarantee on the amount of privacy loss in the worst-case scenario. Differential privacy is not a toggle between full privacy and zero privacy: it controls the tradeoff between the accuracy of the results and the privacy loss using a single key parameter called epsilon.
Keywords: arrhythmia, cardiology, differential privacy, ECG, epsilon, medical data, privacy-preserving analytics, statistical databases
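For a numeric query, the epsilon guarantee is commonly realized with the Laplace mechanism: noise with scale sensitivity/epsilon is added to the true answer, so a smaller epsilon buys more privacy at the cost of accuracy. The sketch below is a toy count query, not the arrhythmia analysis itself; all numbers are invented.

```python
import random
import math

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # epsilon-differentially-private release of a numeric query via
    # Laplace noise with scale b = sensitivity / epsilon
    b = sensitivity / epsilon
    u = rng.random() - 0.5   # uniform on [-0.5, 0.5)
    # inverse-CDF sampling of the Laplace distribution
    return true_value - b * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

# a count query (e.g. number of records with a given arrhythmia);
# the sensitivity of a counting query is 1
rng = random.Random(42)
count = 1200
tight = [laplace_mechanism(count, 1, 5.0, rng) for _ in range(2000)]  # weak privacy, accurate
loose = [laplace_mechanism(count, 1, 0.1, rng) for _ in range(2000)]  # strong privacy, noisy
```

Both releases are unbiased, but the epsilon = 0.1 answers are far noisier, which is the accuracy/privacy tradeoff the abstract describes.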
Procedia PDF Downloads 153
19328 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and of using that parameter space to identify and remove outliers from the prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation highlights common strengths and weaknesses of the presented technique and offers a broad range of well-tested starting points for feature-set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta).
The user also feeds data-expansion parameters to fill out a large feature set for the model, which can contain more than 10,000 features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
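The outlier-removal idea, checking whether prediction points lie inside the parameter space spanned by the training data, can be sketched simply. The per-feature z-score gate below is a hypothetical stand-in for illustration, not FreqAI's actual outlier detection.

```python
import numpy as np

def inside_train_space(X_train, X_pred, k=3.0):
    # Flag prediction points whose standardized distance from the training
    # feature mean stays within k standard deviations in every dimension;
    # points outside this region are treated as outliers and skipped
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0) + 1e-12      # guard against zero variance
    z = np.abs((X_pred - mu) / sd)
    return (z <= k).all(axis=1)

rng = np.random.default_rng(3)
X_train = rng.normal(0.0, 1.0, size=(500, 4))     # dynamic training window
X_pred = np.vstack([np.zeros(4),                  # in-distribution point
                    np.full(4, 10.0)])            # far outside training space
mask = inside_train_space(X_train, X_pred)
```

In a live deployment, predictions would only be acted on where the mask is true, so the model is never asked to extrapolate far outside the region it was trained on.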
Procedia PDF Downloads 89