Search results for: optimal homotopy perturbation method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20789

19439 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies, such as NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. Correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violations of the assumptions underlying the Pearson correlation are frequent and may distort the actual correlation between genes and lead to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality of the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority of the method over existing correlation methods; rather, it offers another way of calculating correlation in addition to the existing ones. The method uses the binomial distribution, which has not previously been used for this purpose, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and pierced by outliers, to see if it can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
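
As a brief illustration of the idea, a minimal sketch is given below (not the authors' exact procedure): two expression vectors are dichotomised at their medians, the number of samples with matching outcomes (Ns) is counted, and a two-sided binomial test compares it against the count expected under independence. The median thresholding and the match probability of 0.5 are assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import binomtest

def binomial_association(x, y):
    x_b = x > np.median(x)          # Bernoulli outcome per sample, gene 1 (assumed thresholding)
    y_b = y > np.median(y)          # Bernoulli outcome per sample, gene 2
    n = len(x_b)
    ns = int(np.sum(x_b == y_b))    # Ns: samples with the same outcome for both genes
    p_match = 0.5                   # expected match probability under independence (assumption)
    result = binomtest(ns, n, p_match, alternative="two-sided")
    return ns, n * p_match, result.pvalue

rng = np.random.default_rng(0)
g1 = rng.normal(size=50)
g2 = g1 + rng.normal(scale=0.5, size=50)   # a correlated "gene" for the example
print(binomial_association(g1, g2))         # (Ns, Es, p-value)
```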

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 392
19438 Study of the Electromagnetic Resonances of a Cavity with an Aperture Using Numerical Method and Equivalent Circuit Method

Authors: Ming-Chu Yin, Ping-An Du

Abstract:

The shielding ability of a shielding cavity is greatly affected by its resonances, which include resonance modes and frequencies. In this paper, the equivalent circuit method and the numerical transmission line matrix (TLM) method are used to analyze the effect of aperture-cavity coupling on the electromagnetic resonances of a cavity with an aperture. Both theoretical and numerical results show that the resonance modes of a shielding cavity with an aperture can be considered as the combination of the inherent resonance modes of the cavity and of the aperture, with shifted resonance frequencies, and the reason for this shift is aperture-cavity coupling. Because aperture size is an important parameter of aperture-cavity coupling, the variation of the electromagnetic resonances of a shielding cavity with its aperture size is given, which will be useful for the design of shielding cavities.
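
The inherent cavity modes referred to above can be illustrated with a short sketch. Assuming an empty rectangular cavity with illustrative dimensions, the classical relation f_mnp = (c/2)·sqrt((m/a)² + (n/b)² + (p/d)²) gives the uncoupled resonance frequencies; the aperture-induced frequency shift discussed in the abstract is not modelled here.

```python
import itertools
import math

C0 = 299_792_458.0                 # speed of light in vacuum, m/s

def cavity_modes(a, b, d, max_index=2):
    """Uncoupled resonance frequencies of an empty rectangular cavity (a, b, d in metres)."""
    modes = []
    for m, n, p in itertools.product(range(max_index + 1), repeat=3):
        if sum(v > 0 for v in (m, n, p)) < 2:   # at least two non-zero indices for a physical mode
            continue
        f = 0.5 * C0 * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
        modes.append(((m, n, p), f))
    return sorted(modes, key=lambda t: t[1])

# Illustrative enclosure dimensions (m), not taken from the paper
for mode, f in cavity_modes(0.3, 0.2, 0.4)[:5]:
    print(mode, round(f / 1e9, 3), "GHz")
```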

Keywords: aperture-cavity coupling, equivalent circuit method, resonances, shielding equipment

Procedia PDF Downloads 424
19437 A Quantification Method of Attractiveness of Stations and an Estimation Method of Number of Passengers Taking into Consideration the Attractiveness of the Station

Authors: Naoya Ozaki, Takuya Watanabe, Ryosuke Matsumoto, Noriko Fukasawa

Abstract:

In the metropolitan areas of Japan, shopping areas have been set up in many stations, and escalators and elevators have been installed to make the stations barrier-free. Further, many areas around the stations are being redeveloped. Railway business operators want to know how much effect these circumstances have on the attractiveness of a station or on the number of passengers using it. We therefore performed a questionnaire survey of station users in the metropolitan areas to find factors that affect the attractiveness of stations. Then, based on the analysis of the survey, we developed a method to quantitatively evaluate the attractiveness of stations. We also developed a method for estimating the number of passengers based on a combination of the quantitatively evaluated attractiveness of the station and the residential and labor population around the station. Finally, we derived precise linear regression models estimating the attractiveness of the station and the number of passengers of the station.
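
A minimal sketch of the two-stage estimation idea is given below, with hypothetical survey-derived features and illustrative coefficients (not the authors' actual factors or data): a linear model first produces an attractiveness score, and a second linear regression then relates passenger numbers to attractiveness and the surrounding population.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations = 40

# Hypothetical explanatory variables (not the authors' actual survey factors)
shops = rng.integers(0, 50, n_stations)          # shopping-area size score
barrier_free = rng.integers(0, 2, n_stations)    # elevators/escalators present
redevelopment = rng.uniform(0, 1, n_stations)    # redevelopment intensity around the station
population = rng.integers(5_000, 200_000, n_stations)

# Stage 1: attractiveness score as a linear combination (coefficients illustrative)
X1 = np.column_stack([np.ones(n_stations), shops, barrier_free, redevelopment])
attractiveness = X1 @ np.array([1.0, 0.05, 0.8, 1.5]) + rng.normal(0, 0.2, n_stations)

# Stage 2: passengers regressed on attractiveness and residential/labour population
X2 = np.column_stack([np.ones(n_stations), attractiveness, population])
passengers = X2 @ np.array([500.0, 3_000.0, 0.4]) + rng.normal(0, 1_000, n_stations)
beta, *_ = np.linalg.lstsq(X2, passengers, rcond=None)
print("fitted coefficients (intercept, attractiveness, population):", beta)
```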

Keywords: attractiveness of the station, estimation method, number of passengers of the station, redevelopment around the station, renovation of the station

Procedia PDF Downloads 270
19436 Seismic Performance of Benchmark Building Installed with Semi-Active Dampers

Authors: B. R. Raut

Abstract:

The seismic performance of a 20-storey benchmark building with semi-active dampers is investigated under various earthquake ground motions. Semi-Active Variable Friction Dampers (SAVFD) and magnetorheological (MR) dampers are used in this study. A recently proposed predictive control algorithm is employed for the SAVFD, and a simple mechanical model based on a Bouc-Wen element with a clipped-optimal control algorithm is employed for the MR damper. A parametric study is carried out to ascertain the optimum parameters of the semi-active controllers, which yield the minimum performance indices of the controlled benchmark building. The effectiveness of the dampers is studied in terms of the reduction in structural responses and performance criteria. To minimize the cost of the dampers, the optimal location of the dampers, rather than providing dampers at all floors, is also investigated. The semi-active dampers installed in the benchmark building effectively reduce the earthquake-induced responses. A smaller number of dampers at appropriate locations also provides a comparable response of the benchmark building, thereby reducing the cost of the dampers significantly. The effectiveness of the two semi-active devices in mitigating seismic responses is cross-compared. Of the two semi-active devices, the majority of the performance criteria of the MR dampers are lower than those of the SAVFD installed in the benchmark building. Thus, the performance of the MR dampers is far better than that of the SAVFD in reducing displacement, drift, acceleration, and base shear of mid- to high-rise buildings against seismic forces.
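
For the MR damper, the clipped-optimal strategy widely used in the literature can be sketched as follows. The switching law below is the standard form (command voltage switched to its maximum only when the measured damper force must grow toward the controller's desired force); the maximum voltage and the force values are illustrative and not taken from the benchmark study.

```python
V_MAX = 10.0   # maximum command voltage (illustrative)

def clipped_optimal_voltage(f_desired, f_measured):
    """Heaviside switching law: v = V_max * H((f_desired - f_measured) * f_measured)."""
    return V_MAX if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Example pairs of (desired force from a nominal optimal controller, measured damper force)
for f_c, f in [(50.0, 20.0), (50.0, -20.0), (-50.0, -20.0), (10.0, 40.0)]:
    print(f_c, f, "->", clipped_optimal_voltage(f_c, f), "V")
```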

Keywords: benchmark building, control strategy, input excitation, MR dampers, peak response, semi-active variable friction dampers

Procedia PDF Downloads 266
19435 The Use of Ward Linkage in Cluster Integration with a Path Analysis Approach

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

Path analysis is an analytical technique to study the causal relationship between independent and dependent variables. In this study, cluster integration with the Ward linkage method was combined with path analysis for various numbers of clusters. The variables used are character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅) explaining on-time payment (y₂) through the variable willingness to pay (y₁). The purpose of this study was to compare Ward-linkage cluster integration with different numbers of clusters in path analysis to classify willingness to pay (y₁). The data used are primary data from questionnaires filled out by customers of Bank X, obtained using purposive sampling. The measurement method used is the average score method. The results showed that Ward-linkage cluster integration with path analysis on 2 clusters is the best method, based on a comparison of the coefficients of determination. Character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅) explain on-time payment (y₂) through willingness to pay (y₁) to the extent of 58.3%, while the remaining 41.7% is explained by variables outside the model.
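
The Ward-linkage clustering step can be sketched as below, assuming hypothetical questionnaire scores for the five credit factors; the path model subsequently fitted within each cluster is not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Hypothetical average scores per customer: character, capacity, capital, collateral, condition
scores = np.vstack([
    rng.normal(3.0, 0.5, size=(60, 5)),   # one group of customers
    rng.normal(4.2, 0.5, size=(60, 5)),   # another group
])

Z = linkage(scores, method="ward")        # Ward minimum-variance linkage
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
print("cluster sizes:", np.bincount(labels)[1:])
```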

Keywords: cluster integration, linkage, path analysis, compliant paying behavior

Procedia PDF Downloads 157
19434 Assessment of the Energy Balance Method in the Case of Masonry Domes

Authors: M. M. Sadeghi, S. Vahdani

Abstract:

Masonry dome structures were widely used for covering large spans in the past. The seismic assessment of these historical structures is very complicated due to the nonlinear behavior of the material, their rigidity, and their special stability configuration. The assessment method based on the energy balance concept, as well as the standard pushover analysis, is applied in order to evaluate the effectiveness of these methods for masonry dome structures. The Soltanieh dome building is used as an example to which the two methods are applied. The performance points are obtained by superimposing the capacity and demand curves in Acceleration-Displacement Response Spectra (ADRS) and energy coordinates and are compared with nonlinear time history analysis as the exact result. The results show good agreement between the dynamic analysis and the energy balance method, but the standard pushover method does not provide an acceptable estimation.

Keywords: energy balance method, pushover analysis, time history analysis, masonry dome

Procedia PDF Downloads 262
19433 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method

Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito

Abstract:

In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method. This method is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom and a patient study, the data points from the interpolated images were in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
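
The combination step can be sketched as follows, with illustrative array shapes (rotations × projection angles × detector bins); the tomographic reconstruction itself is not shown.

```python
import numpy as np

def interpolate_rotations(projections):
    """projections: array of shape (n_rotations, n_angles, n_bins); n_angles assumed even."""
    n_rot, n_angles, _ = projections.shape
    half = n_angles // 2
    frames = []
    for k in range(n_rot - 1):
        head = projections[k + 1, :half]     # first 180 degrees of the next rotation
        tail = projections[k, half:]         # second 180 degrees of the current rotation
        frames.append(np.concatenate([head, tail], axis=0))  # ordered by projection angle
    return np.stack(frames)                  # one intermediate 360-degree data set per rotation pair

data = np.random.rand(4, 120, 64)            # 4 rotations, 120 angles, 64 detector bins (illustrative)
print(interpolate_rotations(data).shape)     # (3, 120, 64)
```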

Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA.

Procedia PDF Downloads 481
19432 Effectiveness of Earthing System in Vertical Configurations

Authors: S. Yunus, A. Suratman, N. Mohamad Nor, M. Othman

Abstract:

This paper presents measurement results and simulation results obtained by the Finite Element Method (FEM) for the earth resistance (RDC) of interconnected vertical ground rod configurations. The soil resistivity was measured using the Wenner four-pin method, and RDC was measured using the Fall of Potential (FOP) method, as outlined in the standard. A Genetic Algorithm (GA) is employed to interpret the soil resistivity in terms of a two-layer soil model. The same soil resistivity data obtained by the Wenner four-pin method were used in the FEM simulation. This paper compares the RDC obtained by FEM simulation with the actual measurements at the field site. Good agreement was seen between the RDC obtained by measurement and by FEM. This shows that FEM is reliable software to be used for the design of earthing systems. It is also found that the parallel rod system has a better performance compared to a similar setup using a grid layout.
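
The interpretation of the Wenner four-pin measurements can be sketched with the simplified relation ρ = 2πaR (valid for probes driven to a shallow depth relative to the spacing); the spacings and resistances below are illustrative, not the site data.

```python
import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Apparent soil resistivity from a Wenner four-pin measurement (simplified form)."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

measurements = [(1.0, 25.0), (2.0, 14.0), (4.0, 8.5), (8.0, 5.2)]   # (spacing a in m, measured R in ohm)
for a, r in measurements:
    print(f"a = {a:4.1f} m  ->  rho = {wenner_resistivity(a, r):7.1f} ohm*m")
```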

Keywords: earthing system, earth electrodes, finite element method, genetic algorithm, earth resistances

Procedia PDF Downloads 95
19431 Method for Assessing Potential in Distribution Logistics

Authors: B. Groß, P. Fronia, P. Nyhuis

Abstract:

In addition to production, which is already frequently optimized, improving distribution logistics also opens up tremendous potential for increasing an enterprise's competitiveness. Here too, though, numerous interactions need to be taken into account; enterprises thus need to be able to identify and weigh different potentials against each other for economically efficient optimization. In order to be able to assess potentials, enterprises require a suitable method. This paper first briefly presents the need for this research before introducing the procedure that will be used to develop an appropriate method that not only considers interactions but can also be implemented quickly and easily.

Keywords: distribution logistics, evaluation of potential, methods, model

Procedia PDF Downloads 485
19430 Assessment of Water Quality Network in Karoon River by Dynamic Programming Approach (DPA)

Authors: M. Nasri Nasrabadi, A. A. Hassani

Abstract:

Karoon is one of the greatest and longest rivers of Iran. Because of the numerous industrial and agricultural centers along it and its use for drinking water, it occupies a strategic position in the west and southwest of Iran, and the optimal monitoring of its water quality is an essential and indispensable national issue. Due to financial constraints, water quality monitoring network design is an efficient way to manage water quality. The most crucial part is finding appropriate locations for monitoring stations. Considering the objectives of water usage, we evaluate the existing water quality sampling stations of this river. There are several methods for the assessment of existing monitoring stations, such as the Sanders method, multiple criteria decision making, and the dynamic programming approach (DPA); DPA was chosen in this study. The results showed that, based on the drinking water quality index, of the 20 existing monitoring stations, nine should be retained on the river: Gorgor-Band-Ghir in zone A; Dez-Band-Ghir in zone B; Teir, Pole Panjom, and Zargan in zone C; and Darkhoein, Hafar, Chobade, and Sabonsazi in zone D. In addition, the stations of the Dez River have the best conditions.

Keywords: DPA, karoon river, network monitoring, water quality, sampling site

Procedia PDF Downloads 358
19429 Variable Renewable Energy Droughts in the Power Sector – A Model-based Analysis and Implications in the European Context

Authors: Martin Kittel, Alexander Roth

Abstract:

The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. The power sector becomes increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaics, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability ('VRE droughts'), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector. Energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, a selection of relevant historical weather years representing different grades of VRE drought severity is provided. These weather years serve as input for the capacity expansion model for the European power sector used in this analysis (DIETER). We additionally conduct robustness checks varying policy-relevant assumptions on capacity expansion limits, interconnections, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts and flaw modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
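
As an illustration of how such events can be flagged in an hourly capacity-factor series, a minimal sketch is given below; the threshold and minimum duration are assumptions for the sketch, not the study's drought definition.

```python
import numpy as np

def drought_events(capacity_factor, threshold=0.1, min_hours=24):
    """Return (start hour, duration) of runs below `threshold` lasting at least `min_hours`."""
    below = capacity_factor < threshold
    events, start = [], None
    for t, b in enumerate(below):
        if b and start is None:
            start = t
        elif not b and start is not None:
            if t - start >= min_hours:
                events.append((start, t - start))
            start = None
    if start is not None and len(below) - start >= min_hours:
        events.append((start, len(below) - start))
    return events

rng = np.random.default_rng(3)
cf = rng.beta(2, 5, size=8760)          # synthetic hourly wind/PV capacity factor (illustrative)
cf[1000:1060] = 0.02                    # inject a 60-hour scarcity period
print(drought_events(cf))
```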

Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility

Procedia PDF Downloads 54
19428 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 75
19427 ISME: Integrated Style Motion Editor for 3D Humanoid Character

Authors: Ismahafezi Ismail, Mohd Shahrizal Sunar

Abstract:

The motion of a realistic 3D humanoid character is very important, especially for the industries developing computer animations and games. However, this type of motion involves very complex dimensional data, including body position, orientation, and joint rotation. Integrated Style Motion Editor (ISME) is a method used to alter the 3D humanoid motion capture data utilised in computer animation and games development. This study was therefore carried out with the purpose of demonstrating a method that is able to manipulate and deform different motion styles by integrating a Key Pose Deformation Technique and a Trajectory Control Technique. This motion editing method allows the user to generate new motions from the original motion capture data using a simple interface control. Unlike previous methods, our method produces a realistic humanoid motion style in real time.

Keywords: computer animation, humanoid motion, motion capture, motion editing

Procedia PDF Downloads 368
19426 A Numerical Study for Mixing Depth and Applicability of Partial Cement Mixing Method Utilizing Geogrid and Fixing Unit

Authors: Woo-seok Choi, Eun-sup Kim, Nam-Seo Park

Abstract:

The demand for new techniques in soft ground improvement continuously increases, as general soft ground methods like PBD and DCM have application problems in soft grounds with deep depth and wide distribution on the southern coast of Korea and in the southeast. In this study, a partial cement mixing method utilizing geogrid and fixing units (CMG) is suggested, and finite element analysis is performed to analyze the depth of surface soil and deep soil stabilization and to compare with the DCM method. In the results of the analysis, the displacements in the DCM method were lower than those in CMG, because the upper load is transferred to the deep soil not treated by cement in the CMG case. The differential settlement in the DCM method was higher than in CMG because of the load transfer effect of the surface soil treated by cement and geogrid. In conclusion, the CMG method has the advantage of economy and constructability in embankment roads, railways, etc., in which differential settlement is an important consideration.

Keywords: soft ground, geogrid, fixing unit, partial cement mixing, finite element analysis

Procedia PDF Downloads 363
19425 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor for many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which are modelled using seasonal dummies, but if there is a varying and changing seasonal pattern over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture the seasonal process. It is not suitable to use seasonal dummies for modeling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different alternative methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test the seasonal unit root at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating nonstationary versus stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, and this leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 314
19424 Improved Multi-Objective Particle Swarm Optimization Applied to Design Problem

Authors: Kapse Swapnil, K. Shankar

Abstract:

Aiming at optimizing the weight and deflection of a cantilever beam subjected to maximum stress and maximum deflection constraints, Multi-objective Particle Swarm Optimization (MOPSO) with a Utopia-point-based local search is implemented. The Utopia point is used to steer the search towards the Pareto optimal set. The elite candidates obtained during the iterations are stored in an archive according to non-dominated sorting, and the archive is truncated based on least crowding distance. Local search is also performed on the elite candidates, and the most diverse particle is selected as the global best. This method is implemented on standard test functions, and it is observed that the improved algorithm gives better convergence and diversity than NSGA-II in fewer iterations. Implementation on a practical structural problem shows that in 5 to 6 iterations the improved algorithm converges with better diversity, as evidenced by average improvements for the cantilever beam of 0.78% in weight and 9.28% in deflection compared to NSGA-II.
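
The role of the Utopia point can be illustrated with a short sketch: among archive members, the candidate closest to the Utopia point (the vector of the best objective values found so far) can be preferred as a guide for the swarm. The normalisation used below is an assumption of the sketch, not necessarily the paper's exact rule.

```python
import numpy as np

def closest_to_utopia(objectives):
    """objectives: (n_candidates, n_objectives) array, all objectives to be minimised."""
    utopia = objectives.min(axis=0)                        # best value attained per objective
    span = objectives.max(axis=0) - utopia + 1e-12         # normalise each objective (assumed choice)
    dist = np.linalg.norm((objectives - utopia) / span, axis=1)
    return int(np.argmin(dist))

# Illustrative archive of non-dominated (weight, deflection) pairs
archive = np.array([[1.2, 9.0], [2.0, 4.0], [3.5, 2.5], [6.0, 1.0]])
print("guide index:", closest_to_utopia(archive))
```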

Keywords: Utopia point, multi-objective particle swarm optimization, local search, cantilever beam

Procedia PDF Downloads 496
19423 Feasibility of Simulating External Vehicle Aerodynamics Using Spalart-Allmaras Turbulence Model with Adjoint Method in OpenFOAM and Fluent

Authors: Arpit Panwar, Arvind Deshpande

Abstract:

A study of external vehicle aerodynamics using the Spalart-Allmaras turbulence model with the adjoint method was conducted. The accessibility and ease of working with the Fluent module of ANSYS and with OpenFOAM were considered. The objective of the study was to understand and analyze the possibility of bringing high-level aerodynamic simulation to the average consumer vehicle. A form factor of the BMW M6 vehicle was designed in SolidWorks and analyzed in OpenFOAM and Fluent. The turbulence model, being a single-equation model, provides a much faster convergence rate when clubbed with the adjoint method. Fluent, being commercial software, still does not allow solving the Spalart-Allmaras turbulence model using the adjoint method; hence, the turbulence model was solved using the SIMPLE method in Fluent. OpenFOAM, being open source, provides flexibility in simulation but is not user-friendly; it does support solving the defined turbulence model with the adjoint method. The results generated from the simulation give acceptable values of drag when validated against the percentage error in drag values for a notchback vehicle model from an extensive simulation presented at the 6th ANSA and μETA conference, Greece. The success of this approach will allow us to bring more aerodynamic vehicle body design to all segments of the automobile market, not limiting it to just high-end sports cars.

Keywords: Spalart-Allmaras turbulence model, OpenFOAM, adjoint method, SIMPLE method, vehicle aerodynamic design

Procedia PDF Downloads 189
19422 An Ant Colony Optimization Approach for the Pollution Routing Problem

Authors: P. Parthiban, Sonu Rajak, N. Kannan, R. Dhanalakshmi

Abstract:

This paper deals with the Vehicle Routing Problem (VRP) with environmental considerations, which is called the Pollution Routing Problem (PRP). The objective is to minimize operational and environmental costs. It consists of routing a number of vehicles to serve a set of customers and determining the fuel consumption, driver wages, and speed on each route segment, while respecting capacity constraints and time windows. In this context, we present an Ant Colony Optimization (ACO) approach combined with a Speed Optimization Algorithm (SOA) to solve the PRP. The proposed solution method consists of two stages. Stage one solves a Vehicle Routing Problem with Time Windows (VRPTW) using ACO, and in the second stage an SOA is run on the resulting VRPTW solutions. Given a vehicle route, the SOA consists of finding the optimal speed on each arc of the route in order to minimize an objective function comprising fuel consumption costs and driver wages. The proposed algorithm was tested on benchmark problems; the preliminary results show that it is able to provide good solutions.
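
The second-stage speed optimisation on a single arc can be sketched as below: total cost on the arc is modelled as fuel (rising with speed) plus driver wages (falling with speed), and the optimum speed is found numerically. The convex fuel-consumption model and all coefficients are illustrative, not the comprehensive emissions model used in the PRP literature.

```python
from scipy.optimize import minimize_scalar

def arc_cost(speed_kmh, distance_km=100.0, fuel_price=1.5, wage_per_h=20.0,
             a=0.002, b=2.0):
    # Illustrative convex fuel model: litres per km = a*v + b/v
    fuel_litres = distance_km * (a * speed_kmh + b / speed_kmh)
    travel_hours = distance_km / speed_kmh
    return fuel_price * fuel_litres + wage_per_h * travel_hours

res = minimize_scalar(arc_cost, bounds=(40.0, 110.0), method="bounded")
print(f"optimal speed ~ {res.x:.1f} km/h, arc cost ~ {res.fun:.2f}")
```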

Keywords: ant colony optimization, CO2 emissions, combinatorial optimization, speed optimization, vehicle routing

Procedia PDF Downloads 304
19421 Spectrophotometric Determination of 5-Aminosalicylic Acid in Pharmaceutical Samples

Authors: Chand Pasha

Abstract:

A simple, accurate, and precise spectrophotometric method for the quantitative determination of 5-aminosalicylic acid is described. The method is based on the reaction of 5-aminosalicylic acid with nitrite in acid medium to form a diazonium ion, which is coupled with acetylacetone in basic medium to form an azo dye showing an absorption maximum at 470 nm. The method obeys Beer's law in the concentration range of 0.5-11.2 μg mL⁻¹ of 5-aminosalicylic acid with acetylacetone. The molar absorptivity and Sandell's sensitivity of the 5-aminosalicylic acid-acetylacetone azo dye are 2.672 × 10⁴ L mol⁻¹ cm⁻¹ and 5.731 × 10⁻³ μg cm⁻², respectively. The dye formed is stable for 10 hours. The optimum reaction conditions and other analytical parameters are evaluated. Interference due to foreign organic compounds has been investigated. The method has been successfully applied to the determination of 5-aminosalicylic acid in pharmaceutical samples.
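
The quantification step implied by Beer's law can be sketched as follows, using the molar absorptivity reported above and an assumed 1 cm path length.

```python
MOLAR_ABSORPTIVITY = 2.672e4   # L mol⁻¹ cm⁻¹, value reported in the abstract
PATH_LENGTH_CM = 1.0           # assumed cuvette path length
MOLAR_MASS_5ASA = 153.14       # g mol⁻¹ for 5-aminosalicylic acid

def concentration_ug_per_ml(absorbance):
    c_mol_per_l = absorbance / (MOLAR_ABSORPTIVITY * PATH_LENGTH_CM)   # Beer's law: c = A / (eps * l)
    return c_mol_per_l * MOLAR_MASS_5ASA * 1e3                         # g/L -> µg/mL

print(concentration_ug_per_ml(0.5))   # ~2.9 µg/mL, inside the reported linear range
```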

Keywords: spectrophotometry, diazotization, mesalazine, nitrite, acetylacetone

Procedia PDF Downloads 170
19420 Efficiency of Wood Vinegar Mixed with Some Plants Extract against the Housefly (Musca domestica L.)

Authors: U. Pangnakorn, S. Kanlaya

Abstract:

The efficiency of wood vinegar mixed with each of three plant extracts, citronella grass (Cymbopogon nardus), neem seed (Azadirachta indica A. Juss), and yam bean seed (Pachyrhizus erosus Urb.), was tested against second-instar larvae of the housefly (Musca domestica L.). Steam distillation was used for extraction of the citronella grass, while the neem and yam bean seeds were simply extracted by fermentation with ethyl alcohol. Toxicity was evaluated in the laboratory based on two larvicidal bioassay methods: the topical application method (contact poison) and the feeding method (stomach poison). Larval mortality was observed daily, and larval survival was recorded until the surviving larvae developed into pupae and adults. The study found that the treatment of wood vinegar mixed with citronella grass showed the highest larval mortality by the topical application method (50.0%) and by the feeding method (80.0%). However, the treatment of wood vinegar mixed with neem seed showed the longest pupal duration, 25 days and 32 days for the topical application and feeding methods, respectively. Additionally, the larval duration of treated M. domestica larvae was extended to 13 days for the topical application method and 11 days for the feeding method. Thus, the feeding method gave higher efficiency compared with the topical application method.

Keywords: housefly (Musca domestica L.), neem seed (Azadirachta indica), citronella grass (Cymbopogon nardus), yam bean seed (Pachyrhizus erosus), mortality

Procedia PDF Downloads 326
19419 A Kinetic Study of Radical Polymerisation of Acrylic Monomers in the Presence of the Liquid Crystal and the Electro-Optical Properties of These Mixtures

Authors: A. Bouriche, D. Merah, T. Bouchaour, L. Alachaher-Bedjaoui, U. Maschke

Abstract:

Intensive research continues in the field of liquid crystals (LCs) for their potential use in modern display applications. Nematic LCs have been most commonly used due to their large birefringence and their sensitivity to even weak perturbation forces induced by electric, magnetic, and optical fields. Polymer-dispersed liquid crystals (PDLCs), composed of micron-sized nematic LC droplets dispersed in a polymer matrix, are an important class of materials for applications in different domains of technology involving large-area display devices, optical switches, phase modulators, variable attenuators, polarisers, flexible displays, and smart windows. In this study, the composites are prepared from mixtures of monofunctional acrylic monomers (butyl acrylate (ABu), 2-ethylhexyl acrylate (2-EHA), 2-hydroxyethyl methacrylate (HEMA), and hydroxybutyl methacrylate (HBMA)) and two liquid crystals: 4-cyano-4'-n-pentyl-biphenyl (5CB) and E7, which is a eutectic mixture of four cyanoparaphenylenes. These mixtures are prepared by adding Darocur 1173 as a photoinitiator and 1,6-hexanediol diacrylate (HDDA) as a cross-linking agent, and finally they are exposed to UV irradiation. The polymerization kinetics of the monomer/LC mixtures were investigated with Fourier Transform Infrared (FTIR) spectroscopy. The electro-optical properties of the PDLC films were determined by measuring the dependence of the transmitted light on the applied voltage.

Keywords: acrylic monomers, PDLC films, liquid crystal, polymerisation

Procedia PDF Downloads 277
19418 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the long-term national climate goals of the German government for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily energy efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for the unrefurbished German residential building stock. Three distinct elements have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters that are differentiated according to significant parameters concerning their heat demand are used. The core differentiation of those quarters is made by the construction period of the buildings. The economic assessment, as the second crucial element, is executed with the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with debt financing; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. Core results of the approach can be differentiated into an economic and an ecological dimension. With an annual resolution, the investment and running costs of different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
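
The annuity-loan treatment of investment costs can be sketched with the standard annuity formula A = P·i / (1 − (1 + i)⁻ⁿ); the interest rate, lifetime, and investment figure below are illustrative.

```python
def annuity_payment(principal, interest_rate, years):
    """Constant annual payment that repays `principal` over `years` at rate `interest_rate`."""
    if interest_rate == 0:
        return principal / years
    return principal * interest_rate / (1.0 - (1.0 + interest_rate) ** -years)

invest = 250_000.0        # e.g. a small CHP unit for a quarter (illustrative figure)
print(f"annual payment: {annuity_payment(invest, 0.04, 20):,.0f} EUR/a")
```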

Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level

Procedia PDF Downloads 207
19417 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations

Authors: Luke Ukpebor, C. E. Abhulimen

Abstract:

Stiff initial value problems in ordinary differential equations are problems for which a typical solution decays rapidly and exponentially, and their numerical investigation is very tedious. Conventional numerical integration solvers cannot cope effectively with stiff problems, as they lack adequate stability characteristics. In this article, we develop a new family of four-step second-derivative exponentially fitted methods of order six for the numerical integration of stiff initial value problems of general first-order differential equations. In deriving our method, we employed the idea of breaking down the general multi-derivative multistep method into predictor and corrector schemes which possess free parameters that allow for automatic fitting to exponential functions. The stability analysis of the method is discussed, and the method is implemented on numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.

Keywords: A-stable, exponentially fitted, four-step, predictor-corrector, second derivative, stiff initial value problems

Procedia PDF Downloads 235
19416 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet-style services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented. It provides a mechanism for network functions to negotiate a new capability when network communication starts and defines how 5G application data are reduced according to the information negotiated with the peer network function. Without losing the advantages of 5G text-based payloads, this method demonstrates an excellent result in application payload size reduction and does not increase the usage quota of computing resources. Implementation of this method does not impact any standards or specifications and does not change any encoding or decoding functionality either. In a real 5G network, this method will contribute to network efficiency and eventually save considerable computing resources.
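
For a rough sense of how much text-based payload can shrink, the sketch below applies generic gzip compression to a hypothetical JSON message. This is a different, generic technique shown only for scale; it is not the negotiation-based mechanism proposed in the paper, and the message fields are illustrative rather than an actual 3GPP message.

```python
import gzip
import json

# Hypothetical service-based-interface style payload (illustrative fields only)
payload = {
    "nfInstanceId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "nfType": "SMF",
    "nfStatus": "REGISTERED",
    "ipv4Addresses": ["198.51.100.10", "198.51.100.11"],
    "allowedNfTypes": ["AMF", "PCF", "UDM"],
}

raw = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(raw)
print(len(raw), "bytes raw ->", len(compressed), "bytes gzip-compressed")
```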

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 152
19415 Determination of Starting Design Parameters for Reactive-Dividing Wall Distillation Column Simulation Using a Modified Shortcut Design Method

Authors: Anthony P. Anies, Jose C. Muñoz

Abstract:

A new shortcut method for the design of reactive dividing-wall columns (RDWC) is proposed in this work. The RDWC is decomposed into its thermodynamically equivalent configuration, namely the Petlyuk column, which consists of a reactive prefractionator and an unreactive main fractionator. The modified FUGK (Fenske-Underwood-Gilliland-Kirkbride) shortcut distillation method, which incorporates the effect of reaction on the Underwood equations and the Gilliland correlation, is used to design the reactive prefractionator. On the other hand, the conventional FUGK shortcut method is used to design the unreactive main fractionator. The shortcut method is applied to the synthesis of dimethyl ether (DME) through the liquid-phase dehydration of methanol, and the results were used as the starting design inputs for rigorous simulation in Aspen Plus V8.8. A mole purity of 99% DME in the distillate stream, 99% methanol in the side draw stream, and 99% water in the bottoms stream was obtained in the simulation, thereby making the proposed shortcut method applicable for the preliminary design of RDWCs.
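
One ingredient of the FUGK sequence, the Fenske equation for the minimum number of theoretical stages at total reflux, can be sketched as follows for a binary split; the compositions and relative volatility below are illustrative, not the DME column values.

```python
import math

def fenske_min_stages(x_dist, x_bottoms, alpha):
    """N_min = ln[(x_D/(1-x_D)) * ((1-x_B)/x_B)] / ln(alpha) for a binary separation."""
    return math.log((x_dist / (1 - x_dist)) * ((1 - x_bottoms) / x_bottoms)) / math.log(alpha)

print(fenske_min_stages(x_dist=0.99, x_bottoms=0.01, alpha=2.5))
```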

Keywords: aspen plus, dimethyl ether, petlyuk column, reactive-dividing wall column, shortcut method, FUGK

Procedia PDF Downloads 170
19414 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. The findings showed that the probability density function (pdf) is fairly complex, which creates problems in estimating the parameters. The problem in parameter estimation is that the estimators cannot be obtained in closed form; thus, numerical estimation is used to find them. In this study, we present a new approach to parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used for estimating α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The Monte Carlo technique is used to assess the estimators' performance. Sample sizes of 10, 30, and 100 were used, and the simulations were repeated 20 times in each case. We evaluated the effectiveness of the estimators by considering the mean squared errors and the bias. The findings revealed that the EM algorithm came closest to the actual values. Also, the maximum likelihood estimators via the conjugate gradient and quasi-Newton methods are less precise than the maximum likelihood estimators via the EM algorithm.
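
The acceptance-rejection data-generation step can be sketched generically as below, illustrated for an ordinary gamma target with a uniform proposal over a bounded interval; the mixture generalized gamma density itself is not coded here.

```python
import numpy as np
from scipy.stats import gamma

def acceptance_rejection(target_pdf, proposal_bounds, n_samples, rng):
    """Draw samples from target_pdf on [lo, hi] using a uniform proposal and rejection."""
    lo, hi = proposal_bounds
    grid = np.linspace(lo, hi, 1000)
    m = target_pdf(grid).max() * 1.05          # envelope constant for the uniform proposal
    samples = []
    while len(samples) < n_samples:
        x = rng.uniform(lo, hi)
        u = rng.uniform(0.0, m)
        if u <= target_pdf(x):                 # accept x with probability f(x)/m
            samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(4)
draws = acceptance_rejection(lambda x: gamma(a=2.0, scale=1.5).pdf(x), (0.0, 20.0), 500, rng)
print(draws.mean())   # should be close to a*scale = 3.0
```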

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 203
19413 Comparative Performance Analysis for Selected Behavioral Learning Systems versus Ant Colony System Performance: Neural Network Approach

Authors: Hassan M. H. Mustafa

Abstract:

This piece of research addresses an interesting comparative analytical study, which considers two diverse algorithmic computational intelligence approaches related closely to neural and non-neural systems. The first algorithmic intelligent approach is concerned with observed practical results obtained from three neural animal-learning experiments, namely Pavlov's and Thorndike's experimental work, besides a mouse's trials during its movement inside a figure-of-eight maze to reach an optimal solution for a reconstruction problem. Conversely, the second algorithmic intelligent approach originated from the observed activity results of the non-neural Ant Colony System (ACS), obtained after reaching an optimal solution while solving the Traveling Salesman Problem (TSP). Interestingly, the effect of increasing the number of agents (either neurons or ants) on learning performance is shown to be similar for both introduced systems. Finally, the performance of both intelligent learning paradigms is shown to be in agreement with the learning convergence process of the least mean square (LMS) error algorithm when applied to training some Artificial Neural Network (ANN) models. Accordingly, the adopted ANN modeling is a relevant and realistic tool to investigate observations and analyze performance for both selected computational intelligence (biological behavioral learning) systems.

Keywords: artificial neural network modeling, animal learning, ant colony system, traveling salesman problem, computational biology

Procedia PDF Downloads 451
19412 Application of Optical Method Based on Laser Device as Non-Destructive Testing for Calculation of Mechanical Deformation

Authors: R. Daïra, V. Chalvidan

Abstract:

We present the speckle interferometry method for determining the deformation of a workpiece. This holographic imaging method uses a CCD camera for the simultaneous digital recording of two states, object and reference. The reconstruction is obtained numerically. This method has the advantage of being simpler than the methods currently available, and it does not suffer from the faults of an in-line holographic configuration. Furthermore, it is entirely digital and avoids heavy analysis after recording the hologram. This work was carried out in the HOLO 3 laboratory (optical metrology laboratory in Saint-Louis, France), and it consists in controlling the deformation of an object qualitatively and quantitatively by using a CCD camera connected to a computer equipped with fringe analysis software.

Keywords: speckle, nondestructive testing, interferometry, image processing

Procedia PDF Downloads 479
19411 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study is focused on the development of prediction models for ozone concentration time series. The prediction model is built based on a chaotic approach. Firstly, the chaotic nature of the time series is detected by means of the phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting purposes. A traditional autoregressive linear prediction model is also built. Moreover, an improvement of the local linear approximation method is also performed. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the calculation of the mean absolute error, root mean squared error, and correlation coefficient shows that the one with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for ozone concentration time series.
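
The local linear approximation step can be sketched as follows: the nearest neighbours of the most recent delay vector are found in the reconstructed phase space, and an affine map fitted to their one-step evolution gives the forecast. The embedding dimension, delay, and neighbour count below are illustrative, not the values used in the study.

```python
import numpy as np

def local_linear_forecast(series, dim=3, delay=1, n_neighbors=10):
    """One-step-ahead forecast via a local linear (affine) model in delay-coordinate space."""
    n_vec = len(series) - (dim - 1) * delay - 1
    vectors = np.array([series[i:i + dim * delay:delay] for i in range(n_vec)])
    targets = series[(dim - 1) * delay + 1:(dim - 1) * delay + 1 + n_vec]
    query = series[len(series) - (dim - 1) * delay - 1::delay]      # most recent delay vector
    idx = np.argsort(np.linalg.norm(vectors - query, axis=1))[:n_neighbors]
    X = np.column_stack([np.ones(n_neighbors), vectors[idx]])       # affine local model
    coef, *_ = np.linalg.lstsq(X, targets[idx], rcond=None)
    return float(np.r_[1.0, query] @ coef)

x = np.sin(np.linspace(0.0, 60.0, 600)) + 0.05 * np.random.default_rng(5).normal(size=600)
print(local_linear_forecast(x))   # prediction of the next value of the noisy series
```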

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 310
19410 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type one with connections to obtain an adaptive 3D surface, by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of these procedures having limitations that make them applicable only in certain cases. In the paper, the design process stages and a shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived in order to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
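
One of the generative procedures named above, a Lindenmayer system, can be sketched in a few lines: rewriting rules are applied repeatedly to an axiom string, which a downstream interpreter (not shown) would turn into geometry. The rule set below is a classic branching example and is hypothetical for this column-type object.

```python
def l_system(axiom, rules, iterations):
    """Repeatedly rewrite the axiom string using the given production rules."""
    state = axiom
    for _ in range(iterations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# Classic branching rule set (illustrative, not the paper's shape-grammar rules)
rules = {"F": "FF", "X": "F[+X]F[-X]+X"}
print(l_system("X", rules, 3))
```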

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 356