Search results for: exponential time differencing method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32767

32467 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface

Authors: Suman Dutta, Manish Agrawal, C. S. Jog

Abstract:

Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and in wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation is used for the fluid and a displacement-based Lagrangian approach for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity and traction is enforced using the mortar method, in which the constraints are imposed in a weak sense via Lagrange multipliers. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions are mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
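
The generalized trapezoidal rule referred to above can be illustrated on a scalar model problem. This is a minimal sketch, not the paper's solver: the theta-method applied to y' = a·y, where theta = 0.5 recovers the second-order trapezoidal member of the family; the function name and all parameter values here are assumptions for illustration.

```python
# Generalized trapezoidal (theta) method for the scalar ODE y' = a*y.
# theta = 0 is explicit Euler, theta = 1 implicit Euler, theta = 0.5 the
# trapezoidal rule; for this linear problem the implicit solve reduces to
# a division, so each step is closed-form.
def theta_step(y, a, dt, theta):
    # (1 - theta*dt*a) * y_next = (1 + (1 - theta)*dt*a) * y
    return y * (1 + (1 - theta) * dt * a) / (1 - theta * dt * a)

y = 1.0
for _ in range(10):
    y = theta_step(y, a=-1.0, dt=0.1, theta=0.5)  # integrate to t = 1
```

With theta = 0.5 the result stays close to the exact value exp(-1) ≈ 0.368, consistent with the rule's second-order accuracy.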

Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method

Procedia PDF Downloads 274
32466 Real-Time Recognition of the Terrain Configuration to Improve Driving Stability for Unmanned Robots

Authors: Bongsoo Jeon, Jayoung Kim, Jihong Lee

Abstract:

Methods that measure or estimate ground shape with a laser range finder and a vision sensor (exteroceptive sensors) have a critical weakness: they need a pre-built database in order to classify the acquired data as a specific surface condition for driving. Also, ground information from exteroceptive sensors does not reflect the deflection of the ground surface caused by the movement of unmanned ground vehicles (UGVs). Therefore, this paper proposes a method of recognizing exact and precise ground shape using an Inertial Measurement Unit (IMU) as a proprioceptive sensor. The method first recognizes the attitude of a robot in real time using the IMU and then compensates the attitude data for angle errors through an analysis of vehicle dynamics. The method is verified by outdoor driving experiments with a real mobile robot.

Keywords: inertial measurement unit, laser range finder, real-time recognition of the ground shape, proprioceptive sensor

Procedia PDF Downloads 288
32465 Cognitive STAP for Airborne Radar Based on Slow-Time Coding

Authors: Fanqiang Kong, Jindong Zhang, Daiyin Zhu

Abstract:

Space-time adaptive processing (STAP) techniques have been motivated as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to the STAP technique, and cognitive STAP is discussed. The principle for improving the signal-to-clutter-plus-noise ratio (SCNR) based on slow-time coding is given, and the corresponding optimization algorithm based on cyclic and power-like algorithms is presented. Numerical examples show the effectiveness of the proposed method.

Keywords: space-time adaptive processing (STAP), airborne radar, signal-to-clutter ratio, slow-time coding

Procedia PDF Downloads 273
32464 SAP-Reduce: Staleness-Aware P-Reduce with Weight Generator

Authors: Lizhi Ma, Chengcheng Hu, Fuxian Wong

Abstract:

Partial reduce (P-Reduce) has set state-of-the-art performance for distributed machine learning in heterogeneous environments over the All-Reduce architecture. Dynamic P-Reduce based on the exponential moving average (EMA) approach predicts all the intermediate model parameters, which introduces unreliability: the approximation trick can lead to wrong model parameters on some nodes. In this paper, SAP-Reduce is proposed, a variant of the All-Reduce distributed training model with staleness-aware dynamic P-Reduce. SAP-Reduce directly uses an EMA-like algorithm to generate normalized weights. To demonstrate the effectiveness of the algorithm, experiments are set up on a number of deep learning models, comparing the single-step training acceleration ratio and the convergence time. It is found that SAP-Reduce, which simplifies dynamic P-Reduce, outperforms the intermediate-approximation variant. The empirical results show SAP-Reduce is 1.3×-2.1× faster than existing baselines.
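
As a rough illustration of the weight-generator idea, the sketch below assigns each worker a weight that decays exponentially with its staleness (how many steps it lags the freshest worker) and then normalizes the weights. The function name, the decay factor alpha, and this particular normalization are assumptions for illustration; this is not the SAP-Reduce implementation.

```python
# Hypothetical staleness-aware weight generation: fresher workers get
# exponentially larger weights, and the weights are normalized to sum to 1.
def ema_weights(staleness, alpha=0.5):
    raw = [alpha ** s for s in staleness]  # staleness 0 -> weight 1.0
    total = sum(raw)
    return [w / total for w in raw]

weights = ema_weights([0, 1, 3])  # worker 0 is the freshest
```

A weighted average of worker updates using such weights down-weights stale contributions while still aggregating all nodes.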

Keywords: collective communication, decentralized distributed training, machine learning, P-Reduce

Procedia PDF Downloads 34
32463 Effect of Precursor Aging Time on the Photocatalytic Activity of ZnO Thin Films

Authors: N. Kaneva, A. Bojinova, K. Papazova

Abstract:

Thin ZnO films are deposited on glass substrates via the sol-gel method and dip-coating. The films are prepared from zinc acetate dihydrate as a starting reagent. The as-prepared ZnO sol is then aged for different periods (0, 1, 3, 5, 10, 15, and 30 days), and nanocrystalline thin films are deposited from the various sols. The effect of the ZnO sol aging time on the structural and photocatalytic properties of the films is studied. The film surface is studied by Scanning Electron Microscopy. The effect of the aging time of the starting solution is studied with respect to the photocatalytic degradation of Reactive Black 5 (RB5) by UV-vis spectroscopy. The experiments are conducted under UV-light illumination and in complete darkness. The variation of the absorption spectra shows the degradation of RB5 dissolved in water, as a result of the reaction occurring on the surface of the films and promoted by UV irradiation. The initial dye concentrations (5, 10 and 20 ppm) and the aging time are varied during the experiments. The results show that increasing the aging time of the starting solution generally promotes the photocatalytic activity of ZnO. The thin films obtained from the sol aged for 30 days show the best photocatalytic degradation of the dye (97.22%) in comparison with the freshly prepared ones (65.92%). The samples and photocatalytic experimental results are reproducible. Moreover, all films exhibit substantial activity both under UV light and in darkness, which is promising for the development of new ZnO photocatalysts by the sol-gel method.

Keywords: ZnO thin films, sol-gel, photocatalysis, aging time

Procedia PDF Downloads 382
32462 Microwave-Assisted Extraction of Lycopene from Gac Arils (Momordica cochinchinensis (Lour.) Spreng)

Authors: Yardfon Tanongkankit, Kanjana Narkprasom, Nukrob Narkprasom, Khwanruthai Saiupparat, Phatthareeya Siriwat

Abstract:

Gac fruit (Momordica cochinchinensis (Lour.) Spreng) possesses high potential as a health food since it contains a high lycopene content. The objective of this study was to optimize the extraction of lycopene from gac arils using the microwave extraction method. Response surface methodology was used to find the conditions that optimize the extraction of lycopene from gac arils. The extraction parameters studied were extraction time (120-600 seconds), solvent-to-sample ratio (10:1, 20:1, 30:1, 40:1 and 50:1 mL/g) and microwave power (100-800 watts). The results showed that an extraction time of 360 seconds, a solvent-to-sample ratio of 30:1 mL/g and a microwave power of 450 watts were suggested, since this condition exhibited the highest lycopene content of 9.86 mg/gDW. It was also observed that the lycopene contents extracted from gac arils by the microwave method were higher than those obtained by the conventional method.

Keywords: conventional extraction, Gac arils, microwave-assisted extraction, Lycopene

Procedia PDF Downloads 391
32461 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3-D) model indexing based on a new descriptor using spherical harmonics. The purpose of the method is to minimize the processing time over the database of object models and the time needed to search for objects similar to a query object. First, we define the new descriptor using a new division of the 3-D object into a sphere. Then we define a new distance which is used in the search for similar objects in the database.

Keywords: 3D indexation, spherical harmonic, similarity of 3D objects, measurement similarity

Procedia PDF Downloads 435
32460 Tabu Search Algorithm for Ship Routing and Scheduling Problem with Time Window

Authors: Khaled Moh. Alhamad

Abstract:

This paper describes a tabu search heuristic for a ship routing and scheduling problem (SRSP). The method was developed to address the problem of loading cargoes for many customers using heterogeneous vessels. Constraints relate to the delivery time windows imposed by customers, the time horizon by which all deliveries must be made, and vessel capacities. The results of a computational investigation are presented. Solution quality and execution time are explored with respect to problem size and the parameters controlling the tabu search, such as tenure and neighbourhood size.
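
The core tabu search loop (best admissible neighbour, short-term memory of fixed tenure) can be sketched on a toy minimization problem. This is an illustrative skeleton only, not the SRSP code: the neighbourhood, objective, and tenure value are all assumptions.

```python
# Tabu search skeleton: move to the best neighbour not on the tabu list,
# recording visited solutions in a fixed-length (tenure) memory so the
# search cannot immediately cycle back.
from collections import deque

def tabu_search(f, x0, iters=50, tenure=5):
    best = current = x0
    tabu = deque(maxlen=tenure)  # short-term memory
    for _ in range(iters):
        neighbours = [n for n in (current - 1, current + 1) if n not in tabu]
        if not neighbours:
            break
        current = min(neighbours, key=f)  # best admissible move
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

# Toy objective with its minimum at x = 7
best = tabu_search(lambda x: (x - 7) ** 2, x0=0)
```

The tabu list lets the search accept non-improving moves (here, walking uphill past the minimum) without looping, which is what distinguishes it from plain hill climbing.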

Keywords: heuristic, scheduling, tabu search, transportation

Procedia PDF Downloads 507
32459 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for the quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as an indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2 in order to demonstrate the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP:Methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL.min-1) and in the oven temperature (30, 35, and 40 °C). The USP method allowed the quantification of the drug only in a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since the co-elution of the isomers can make peak integration unreliable.
Therefore, the optimization aimed to reduce the analysis time while improving peak resolution and TF. For the optimized method, analysis of the surface-response plot made it possible to confirm the ideal analytical conditions: 45 °C, 0.8 mL.min-1 and 80:20 USP-MP:Methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. This method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and reducing even further the consumption of solvents and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method, regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of isomers.
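
The 3³ full factorial design described above (three factors at three levels each) can be enumerated directly. The level values are taken from the abstract; the code itself is only an illustration of how such a design table is built.

```python
# Enumerate the 27 runs of a 3^3 full factorial design:
# methanol fraction of the mobile phase, flow rate, and oven temperature.
from itertools import product

methanol_pct = [10, 20, 30]     # organic modifier in USP-MP:Methanol (%)
flow_ml_min  = [0.8, 1.0, 1.2]  # flow rate (mL/min)
oven_temp_c  = [30, 35, 40]     # column oven temperature (deg C)

runs = list(product(methanol_pct, flow_ml_min, oven_temp_c))
```

Each tuple in `runs` is one experimental condition of the design; the tailing factor measured at each run would then feed the response-surface fit.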

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 639
32458 Comparing Numerical Accuracy of Solutions of Ordinary Differential Equations (ODE) Using Taylor's Series Method, Euler's Method and Runge-Kutta (RK) Method

Authors: Palwinder Singh, Munish Sandhir, Tejinder Singh

Abstract:

Ordinary differential equations (ODE) represent a natural framework for the mathematical modeling of many real-life situations in engineering, control systems, physics, chemistry, astronomy, etc. Such differential equations can be solved by analytical methods or by numerical methods. If the solution is calculated using analytical methods, it is done through calculus theories and thus requires a longer time to solve. In this paper, we compare the numerical accuracy of the solutions given by the three main types of one-step initial value solvers: Taylor’s Series Method, Euler’s Method and the Runge-Kutta Fourth Order Method (RK4). The comparison of accuracy is obtained by comparing the solutions of an ordinary differential equation given by these three methods. Furthermore, to verify the accuracy, we compare these numerical solutions with the exact solutions.
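
The comparison described above can be sketched on a test problem with a known exact solution. This is an illustrative reimplementation (the test equation dy/dt = -2y and step count are assumptions, not the paper's cases), showing Euler and classical RK4 side by side.

```python
# Compare Euler and RK4 on dy/dt = -2y, y(0) = 1, exact solution exp(-2t).
import math

def euler(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y += h * f(t, y)   # first-order update
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6  # fourth-order update
        t += h
    return y

f = lambda t, y: -2.0 * y
exact = math.exp(-2.0)  # y(1)
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 50) - exact)
err_rk4   = abs(rk4(f, 1.0, 0.0, 1.0, 50) - exact)
```

With the same step count, the RK4 error is orders of magnitude smaller than Euler's, reflecting its O(h⁴) versus O(h) global accuracy.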

Keywords: Ordinary differential equations (ODE), Taylor’s Series Method, Euler’s Method, Runge-Kutta Fourth Order Method

Procedia PDF Downloads 358
32457 Different Methods of Anthocyanin Extraction from Saffron

Authors: Hashem Barati, Afshin Farahbakhsh

Abstract:

The flowers of saffron contain anthocyanins. Generally, extraction of anthocyanins takes place at low temperatures (below 30 °C), preferably under vacuum (to minimize degradation) and in an acidic environment. In order to extract the anthocyanins, the dried petals were added to 30 ml of acidic ethanol (pH=2). The amount of petals, extraction time, temperature, and ethanol percentage were the variables selected. Total anthocyanin content was a function of both ethanol percentage and extraction time. To prepare SW with a pH of 3.5, different concentrations of 100, 400, 700, 1,000, and 2,000 ppm of sodium metabisulfite were added to aqueous sodium citrate. At the selected concentration, different extraction times of 20, 40, 60, 120, and 180 min were tested to determine the optimum extraction time. When the extraction time was extended from 20 to 60 min, the total recovered anthocyanins of the sulfur method changed from 650 to 710 mg/100 g. In the EW method, Cellubrix and Pectinex enzymes were added separately to the buffer solution at different concentrations of 1%, 2.5%, 5%, 7%, 10%, and 12.5% and held for a 2-hour reaction time at a temperature of 40 °C. There was a considerable and significant difference between the trends of anthocyanin (Acys) content of tepals extracted by the Pectinex enzyme at 5% concentration and by the AE solution.

Keywords: saffron, anthocyanins, acidic environment, acidic ethanol, pectinex enzymes, Cellubrix enzymes, sodium metabisulfite

Procedia PDF Downloads 514
32456 Analyzing the Changing Pattern of Nigerian Vegetation Zones and Its Ecological and Socio-Economic Implications Using Spot-Vegetation Sensor

Authors: B. L. Gadiga

Abstract:

This study assesses the major ecological zones in Nigeria with the view to understanding the spatial pattern of vegetation zones and the implications for conservation over a period of sixteen (16) years. Satellite images used for this study were acquired from SPOT-VEGETATION between 1998 and 2013. The annual NDVI images selected for this study were derived from the SPOT-4 sensor and were acquired in the same season (November) in order to reduce differences in spectral reflectance due to seasonal variations. The images were sliced into five classes based on the literature and knowledge of the area (i.e. <0.16 Non-Vegetated areas; 0.16-0.22 Sahel Savannah; 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah and >0.47 Forest Zone). Classification of the 1998 and 2013 images into forested and non-forested areas showed that the forested area decreased from 511,691 km2 in 1998 to 478,360 km2 in 2013. A differencing change detection method was performed on the 1998 and 2013 NDVI images to identify areas of ecological concern. The result shows that areas undergoing vegetation degradation cover an area of 73,062 km2 while areas witnessing some form of restoration cover an area of 86,315 km2. The result also shows that there is a weak correlation between rainfall and the vegetation zones. The non-vegetated areas have a correlation coefficient (r) of 0.0088, the Sahel Savannah belt 0.1988, the Sudan Savannah belt -0.3343, the Guinea Savannah belt 0.0328 and the Forest belt 0.2635. The low correlation can be associated with the encroachment of the Sudan Savannah belt into the forest belt of the south-eastern part of the country, as revealed by the image analysis. The degradation of the forest vegetation is therefore responsible for the serious erosion problems witnessed in the South-east. The study recommends constant monitoring of vegetation and strict enforcement of environmental laws in the country.
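
The NDVI slicing rule quoted in the abstract maps directly to a threshold classifier. The thresholds come from the text; the function and how boundary values are assigned are illustrative assumptions.

```python
# Classify an NDVI value into the five zones used in the study.
def classify_ndvi(ndvi):
    if ndvi < 0.16:
        return "Non-Vegetated"
    elif ndvi < 0.22:
        return "Sahel Savannah"
    elif ndvi < 0.40:
        return "Sudan Savannah"
    elif ndvi <= 0.47:
        return "Guinea Savannah"
    return "Forest"

zones = [classify_ndvi(v) for v in (0.10, 0.20, 0.35, 0.45, 0.60)]
```

Applied pixel-wise to the 1998 and 2013 NDVI rasters, such a rule produces the sliced class maps on which the change detection is based.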

Keywords: vegetation, NDVI, SPOT-vegetation, ecology, degradation

Procedia PDF Downloads 221
32455 Seismic Behavior of Concrete Filled Steel Tube Reinforced Concrete Column

Authors: Raghabendra Yadav, Baochun Chen, Huihui Yuan, Zhibin Lian

Abstract:

The pseudo-dynamic test (PDT) method is an advanced seismic test method that combines loading technology with computer technology. Large-scale models or full-scale seismic tests can be carried out by using this method. CFST-RC columns are used in civil engineering structures because of their better seismic performance. A CFST-RC column is composed of four CFST limbs which are connected with RC webs in the longitudinal direction and with steel tubes in the transverse direction. For this study, a CFST-RC pier is tested under four different earthquake time histories having a scaled PGA of 0.05g. From the experiment, the acceleration, velocity, displacement and load time histories are obtained. The dynamic magnification factors for acceleration due to the El Centro, Chi-Chi, Imperial Valley and Kobe ground motions are observed to be 15, 12, 17 and 14, respectively. The natural frequency of the pier is found to be 1.40 Hz. The results show that this type of pier has excellent static and earthquake-resistant properties.

Keywords: bridge pier, CFST-RC pier, pseudo dynamic test, seismic performance, time history

Procedia PDF Downloads 185
32454 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, this can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative errors and optimizing mapping and positioning. Through experimental verification, it is found that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 61
32452 LabVIEW-Based System for Fiber Link Event Detection

Authors: Bo Liu, Qingshan Kong, Weiqing Huang

Abstract:

With the rapid development of modern communication, diagnosing fiber-optic quality and faults in real time has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then a method based on the Gabor representation is used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average compared with using the wavelet threshold denoising method alone. The proposed system has a high score in event detection capability and accuracy. The maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
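
The soft-threshold rule at the heart of wavelet threshold denoising can be shown in isolation: coefficients below the threshold are zeroed and the survivors are shrunk toward zero. In a full pipeline this would be applied to the wavelet (or EMD) detail coefficients of the OTDR trace; here it operates on a plain list as an assumption-laden illustration, not the authors' implementation.

```python
# Soft thresholding: kill small (noise-like) coefficients, shrink the rest.
def soft_threshold(coeffs, thr):
    out = []
    for c in coeffs:
        if abs(c) <= thr:
            out.append(0.0)                        # below threshold -> zero
        else:
            sign = 1.0 if c > 0 else -1.0
            out.append(sign * (abs(c) - thr))      # shrink toward zero
    return out

denoised = soft_threshold([0.05, -0.3, 1.2, -0.02], thr=0.1)
```

Hard thresholding would instead keep the surviving coefficients unchanged; the soft variant trades a small bias for smoother reconstructions.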

Keywords: empirical mode decomposition, events detection, Gabor transform, optical time domain reflectometer, wavelet threshold denoising

Procedia PDF Downloads 123
32452 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is progressively a topic of growing interest in all fields of science and engineering. The changes currently experienced by the world, economic and environmental, have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the application of the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used. However, in the case of exceedances of significant wave data (H_s), the 2-parameter Weibull and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u; references that set the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. A comparison of the application of the Generalised Pareto, the 2-parameter Weibull and the Exponential distribution is presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions shows to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low occurrence events.
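
A minimal POT sketch, assuming synthetic data: keep the exceedances of H_s over a threshold u and fit the Exponential model by maximum likelihood, whose MLE rate is simply the reciprocal of the mean excess. A real study would compare this fit against the 2-parameter Weibull and Generalised Pareto alternatives; the function and toy values below are illustrative only.

```python
# Peak-Over-Threshold with an Exponential tail model:
# excesses x - u (for x > u) are fitted with rate = 1 / mean(excess).
def pot_exponential_fit(data, u):
    excesses = [x - u for x in data if x > u]
    if not excesses:
        raise ValueError("no exceedances above threshold u")
    mean_excess = sum(excesses) / len(excesses)
    return {"n_exceedances": len(excesses), "rate": 1.0 / mean_excess}

hs = [1.2, 2.5, 3.1, 0.9, 4.0, 2.8, 3.6, 1.5]  # toy H_s values (m)
fit = pot_exponential_fit(hs, u=2.0)
```

Sweeping u and re-fitting shows directly how the threshold choice changes both the number of exceedances and the fitted tail, which is the sensitivity the abstract discusses.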

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 273
32451 Viscoelastic Modeling of Hot Mix Asphalt (HMA) under Repeated Loading by Using Finite Element Method

Authors: S. A. Tabatabaei, S. Aarabi

Abstract:

Predicting the hot mix asphalt (HMA) response and performance is a challenging task because of the complex behavior of HMA under combined loading and environmental conditions. The behavior of HMA is a function of the loading temperature and also shows time- and rate-dependent behavior that directly affects the design criteria of the mixture; the velocity of the passing load determines the loading time and rate. Viscoelasticity describes the reaction of HMA to loading and environmental conditions such as temperature and moisture. This behavior has a direct effect on design criteria such as tensile strain and vertical deflection. In this paper, the computational framework for viscoelasticity and its implementation in a 3D HMA model are introduced for use in the finite element method. The model was subjected to various repeated loading conditions at constant temperature. The viscoelastic response of HMA is investigated under vehicle-speed loading, together with the sensitivity of the behavior to the range of speeds, and compared to HMA assumed to behave elastically, as in conventional design methods. The results show the importance of the loading time pulse, the unloading time and the various speeds for the design criteria, as well as the importance of the fading memory of the material in storing strain and stress due to repeated loading. The model was simulated with the ABAQUS finite element package.
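
The fading-memory idea can be illustrated with the internal-variable update for a single Maxwell element, the basic building block of viscoelastic material models. This is a hedged sketch, not the authors' ABAQUS implementation: the modulus E, relaxation time tau, and the one-step update rule are assumptions for illustration.

```python
# One Maxwell element: sigma' = E*eps' - sigma/tau.
# A common one-step update: decay the stored stress by exp(-dt/tau)
# (fading memory), then add the elastic response to the strain increment.
import math

def maxwell_stress(strain_history, dt, E=1000.0, tau=0.5):
    sigma, eps_prev = 0.0, 0.0
    out = []
    for eps in strain_history:
        d_eps = eps - eps_prev
        sigma = sigma * math.exp(-dt / tau) + E * d_eps
        eps_prev = eps
        out.append(sigma)
    return out

# Strain applied and then held constant: the stress relaxes toward zero.
hist = [0.001] * 50
stresses = maxwell_stress(hist, dt=0.1)
```

Under repeated load pulses, the same update naturally captures how short loading times and short rest periods leave residual stored stress, which is the memory effect the abstract highlights.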

Keywords: viscoelasticity, finite element method, repeated loading, HMA

Procedia PDF Downloads 398
32450 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics

Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier

Abstract:

A matched termination is an important passive waveguide component. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and ended by a shorting metallic plate. These terminations are used to dissipate the energy as heat. However, they may increase the size and the weight of the overall system. A new alternative solution consists in developing terminations based on 3D-printing of materials. Designing such terminations is very challenging since they should meet the requirements imposed by the system. These requirements include many parameters, such as the absorption and the power handling capability, in addition to the cost, the size and the weight, which have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries and allows the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled PolyLactic Acid (conductive PLA from ProtoPasta). Since the terminations have been characterized in the X-band (from 8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density.
This study shows that the concave exponential pyramidal shape has the better absorption level and the convex exponential pyramidal shape has the better dissipated power density level. These two loads have been printed in order to measure their properties. A good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, a material structuring study based on the honeycomb hexagonal structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing allows controlling mass, weight, absorption level and power behaviour.

Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing

Procedia PDF Downloads 21
32449 Method Development and Validation for Quantification of Active Content and Impurities of Clodinafop Propargyl and Its Enantiomeric Separation by High-Performance Liquid Chromatography

Authors: Kamlesh Vishwakarma, Bipul Behari Saha, Sunilkumar Sing, Abhishek Mishra, Sreenivas Rao

Abstract:

A rapid, sensitive and inexpensive method has been developed for the complete analysis of Clodinafop Propargyl. Clodinafop Propargyl enantiomers were separated on a chiral column, Chiralpak AS-H (250 mm × 4.6 mm, 5 µm), with a mobile phase of n-hexane:IPA (96:4) at a flow rate of 1.5 ml/min. The effluent was monitored by a UV detector at 230 nm. Quantification of the Clodinafop Propargyl content and impurities was done with reversed-phase HPLC. The present study describes an HPLC method using a simple mobile phase for the quantification of Clodinafop Propargyl and its impurities. The method was validated and found to be accurate, precise, convenient and effective. Moreover, the lower solvent consumption along with the short analytical run time leads to a cost-effective analytical method.

Keywords: Clodinafop Propargyl, method, validation, HPLC-UV

Procedia PDF Downloads 371
32448 Overhead Lines Induced Transient Overvoltage Analysis Using Finite Difference Time Domain Method

Authors: Abdi Ammar, Ouazir Youcef, Laissaoui Abdelmalek

Abstract:

In this work, an approach based on transmission line theory is presented. It is exploited for the calculation of the overvoltage created by direct impacts of lightning waves on a guard cable of an overhead high-voltage line. First, we show the theoretical developments leading to the propagation equation, its discretization by the finite difference time domain (FDTD) method, and the resulting linear algebraic equations, followed by the calculation of the linear parameters of the line. The second step consists of solving the transmission line system of equations by the FDTD method. This enables us to determine the spatio-temporal evolution of the induced overvoltage.
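
A one-dimensional FDTD update for the lossless telegrapher's equations (dV/dx = -L dI/dt, dI/dx = -C dV/dt) illustrates the discretization step described above. The line constants, grid, and Gaussian source below are illustrative assumptions, not the parameters computed in the paper.

```python
# Leapfrog FDTD on a staggered grid: node voltages V and branch currents I
# offset by half a cell; dt is chosen at the 1D stability limit, where the
# pulse propagates exactly one cell per step.
import math

nx, nt = 200, 150
L, C = 2.5e-7, 1.0e-10          # per-metre inductance (H/m), capacitance (F/m)
dx = 1.0                        # spatial step (m)
dt = dx * math.sqrt(L * C)      # "magic" time step (Courant number = 1)

V = [0.0] * nx
I = [0.0] * (nx - 1)

for n in range(nt):
    V[0] = math.exp(-((n - 30) / 10.0) ** 2)   # Gaussian source at the input
    for k in range(nx - 1):                    # currents from the voltage gradient
        I[k] -= dt / (L * dx) * (V[k + 1] - V[k])
    for k in range(1, nx - 1):                 # voltages from the current gradient
        V[k] -= dt / (C * dx) * (I[k] - I[k - 1])

peak_pos = max(range(nx), key=lambda k: abs(V[k]))  # pulse location after nt steps
```

After 150 steps the pulse that peaked at the source at step 30 sits about 120 cells down the line, since at the stability limit the scheme is dispersion-free.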

Keywords: lightning surge, transient overvoltage, eddy current, FDTD, electromagnetic compatibility, ground wire

Procedia PDF Downloads 83
32447 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation, and the phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects of (a) the thermal properties of the diseased tissues, (b) the initial guesses for the unknown thermal properties, (c) the data capture frequency, and (d) the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
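
The inverse step can be illustrated as below. This is a toy sketch only: two scalar parameters (standing in for the metabolic heat generation and blood perfusion rate) are recovered from probe temperatures by Levenberg-Marquardt, but the forward model here is a deliberately simple lumped relaxation law, not the paper's Pennes/enthalpy direct model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: T(t) = T_a + q_m * exp(-w_b * t). The names q_m, w_b
# and the arterial temperature T_a are illustrative assumptions.
def forward(params, t):
    q_m, w_b = params
    T_a = 37.0  # arterial temperature (deg C), assumed
    return T_a + q_m * np.exp(-w_b * t)

# synthetic "probe" data with a little measurement noise
t_obs = np.linspace(0.0, 60.0, 30)
true_params = np.array([4.0, 0.08])
rng = np.random.default_rng(0)
T_obs = forward(true_params, t_obs) + rng.normal(0.0, 0.01, t_obs.size)

# Levenberg-Marquardt fit of the unknown parameters from an initial guess
res = least_squares(
    lambda p: forward(p, t_obs) - T_obs,  # residuals: model minus probe data
    x0=[1.0, 0.01],
    method="lm",
)
# res.x is close to the true [4.0, 0.08]
```

In the paper the same minimization idea is applied with the full bioheat direct model in the residual, and the Broyden method accelerates the Jacobian updates.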

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 200
32446 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution and a low sidelobe level in order to form point-to-point interference in the targeted area. To eliminate the angle-distance coupling of the traditional FDA and make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. Simulation results show that the array pattern forms a dot-shaped beam with more concentrated energy, with improved resolution and sidelobe-level performance. However, the signal covariance matrix in the traditional adaptive beamforming algorithm is estimated from finite-snapshot data. When the number of snapshots is limited, this estimate is inaccurate, and the resulting estimation error distorts the beam so that the output pattern cannot form a dot-shaped beam; main-lobe deviation and high sidelobe levels also arise under limited snapshots. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam from limited snapshots, reduces the sidelobe level and improves the robustness of beamforming.
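
The eigenvalue-correction step can be sketched as follows. The correction rule used here (raising the noise-subspace eigenvalues to a power `alpha` around their mean) is one plausible reading of an "exponential correction index", not the paper's exact formula, and the snapshot data are synthetic.

```python
import numpy as np

# Hedged sketch: with few snapshots the sample covariance has a badly spread
# noise-subspace spectrum; compressing those small eigenvalues toward their
# mean reduces beam distortion in the subsequent LCMV weight computation.
def corrected_covariance(snapshots, n_interf, alpha=0.1):
    # snapshots: (n_sensors, n_snapshots) complex array
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    w, V = np.linalg.eigh(R)        # eigenvalues ascending
    w, V = w[::-1], V[:, ::-1]      # reorder to descending
    noise = w[n_interf:]            # noise-subspace eigenvalues
    # power-law ("exponential") compression of the noise-eigenvalue spread
    noise_corr = noise.mean() * (noise / noise.mean()) ** alpha
    w_corr = np.concatenate([w[:n_interf], noise_corr])
    return (V * w_corr) @ V.conj().T

rng = np.random.default_rng(1)
n_sensors, n_snap = 8, 10           # deliberately snapshot-starved estimate
X = (rng.standard_normal((n_sensors, n_snap))
     + 1j * rng.standard_normal((n_sensors, n_snap)))
R_corr = corrected_covariance(X, n_interf=2)
```

The corrected matrix stays Hermitian positive definite, so it can be inverted in the usual LCMV weight formula.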

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 130
32445 Relevance of Lecture Method in Modern Era: A Study from Nepal

Authors: Hari Prasad Nepal

Abstract:

Research on the lecture method confirms that this teaching method has been practised since the very beginnings of schooling. Many teachers, lecturers and professors are convinced that the lecture still represents the main tool of the contemporary instructional process. The central purpose of this study is to uncover the extent to which the lecture method is used in higher education. The study was carried out in the Nepalese context, employing a mixed-method research design. To obtain the primary data, this study used a questionnaire with closed and open-ended items; 120 teachers, lecturers and professors participated. The findings indicated that 75 percent of the respondents use the lecture method in their classroom teaching. The study reveals advantages of the lecture method: it is easy to practise, requires little preparation time, yields high pass rates and high student satisfaction, draws few complaints about instructors, and is appropriate for large classes and high-level students. In addition, the study reports the instructors' reflections and measures to improve the lecture method. This research concludes that the lecture method remains significantly applicable in colleges and universities in the Nepalese context; there are no significant changes in its application in the higher-education classroom despite the emergence of new learning approaches and strategies.

Keywords: instructors, learning approaches, learning strategies, lecture method

Procedia PDF Downloads 238
32444 Aging Time Effect on 58S Microstructure

Authors: Nattawipa Pakasri

Abstract:

58S (60SiO2-36CaO-4P2O5) three-dimensionally ordered macroporous bioactive glasses (3DOM-BGs) were synthesized by the sol-gel method using a dual-templating approach. The non-ionic surfactant Brij 56, used as one template component, produced mesopores, while spherical PMMA colloidal crystals, used as the other template component, yielded either three-dimensionally ordered macroporous products or shaped bioactive glass nanoparticles. For the bioactive glass aged for 12 h at room temperature, no structural transformation occurred and the 3DOM structure was produced (Figure a), since no shrinkage took place during the aging step. After 48 h of aging, the 3DOM structure remained only partially, and nanocubes with ∼120 nm edge lengths and nanosphere particles of ∼50 nm were obtained (Figure c, d). The octahedral and tetrahedral holes of the packed PMMA template give rise to the two final shapes of the 3DOM-BGs, rounded and cubic, respectively. Changing the aging time from 12 h to 24 h and 48 h affected the thickness of the interconnecting macropore network: the wall thickness gradually decreased with increasing aging time.

Keywords: three-dimensionally ordered macroporous bioactive glasses, sol-gel method, PMMA, bioactive glass

Procedia PDF Downloads 115
32443 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

The class of implicit systems is known to be more general than the class of explicit systems. To establish a control method for this generalized class, we adopt the model predictive control method, a kind of optimal feedback control with a performance index whose initial and terminal times move forward. However, model predictive control is inapplicable to systems whose state variables are not all exactly known, in other words, systems with limited measurable states. In fact, the state variables of systems are usually measured through outputs, so only limited parts of them can be used directly; moreover, output signals are disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes the process noise and sensor noise into consideration. To this end, we apply the model predictive control method and the unscented Kalman filter to solve the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish model predictive control with an unscented Kalman filter for nonlinear implicit systems.
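
The estimation side can be illustrated by the unscented transform at the core of the UKF: sigma points are propagated through a nonlinearity and the transformed mean and covariance are recovered from weighted sums. This is a generic textbook sketch (standard scaling parameters), not the paper's implicit-system formulation.

```python
import numpy as np

# Minimal unscented transform: 2n+1 sigma points with standard scaling.
def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(p) for p in pts])            # propagate sigma points
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

mean = np.array([1.0, 0.5])
cov = np.diag([0.01, 0.02])
# a quadratic nonlinearity, for which the UT mean is exact
y_mean, y_cov = unscented_transform(
    mean, cov, lambda x: np.array([x[0] * x[1], x[0] ** 2]))
```

In a full UKF, this transform is applied twice per step (prediction and measurement update); for implicit systems the propagation `f` would itself come from solving the implicit state equations.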

Keywords: optimal control, nonlinear systems, state estimation, Kalman filter

Procedia PDF Downloads 202
32442 A Static and Dynamic Slope Stability Analysis of Sonapur

Authors: Rupam Saikia, Ashim Kanti Dey

Abstract:

Sonapur is an intensely hilly region on the border of Assam and Meghalaya in North-East India, lying very near the Dauki seismic fault, which makes the region seismically active. Moreover, two earthquakes of magnitude 6.7 and 6.9 struck North-East India in January and April 2016. The slope considered in this study is adjacent to NH 44, which has long been the sole important link to the states of Manipur and Mizoram along with parts of Assam, and which has therefore seen considerable loss of life and property over past decades through several recorded incidents of landslides, road blocks, etc., mostly during the rainy season. Motivated by this, this paper reports a static and dynamic slope stability analysis of Sonapur carried out in MIDAS GTS NX. Since the slope is highly unreachable owing to the terrain and thick vegetation, in-situ testing was not feasible within the current scope, so disturbed soil samples were collected from the site to determine the strength parameters. The strength parameters were determined for varying relative density, with further variation in water content. The slopes were analyzed under plane-strain conditions for three slope heights of 5 m, 10 m and 20 m, each further categorized by slope angles of 30°, 40°, 50°, 60° and 70° to cover the possible range of steepness. Static analysis was first performed in the dry state; then, considering the worst case that can develop during the rainy season, the slopes were analyzed for the fully saturated condition and for partial degrees of saturation with a rising waterfront. Furthermore, dynamic analysis was performed, using the El Centro earthquake record (magnitude 6.7, peak ground acceleration 0.3569g at 2.14 s), for the slopes found safe in the static analysis under both dry and fully saturated conditions.
Among the conclusions: slopes inclined at 40° and above were found to be highly vulnerable for heights of 10 m and above, even under dry static conditions. The maximum horizontal displacement increased exponentially as the inclination rose from 30° to 70°. The vulnerability of the slopes increased further in the rainy season, as even slopes of minimal steepness (30°) and 20 m height were seen to be on the verge of failure. During dynamic analysis, slopes that were safe in the static analysis were found to be highly vulnerable. Lastly, as part of the study, a comparative study of the Strength Reduction Method (SRM) versus the Limit Equilibrium Method (LEM) was carried out and some of their advantages and disadvantages were identified.
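
The LEM side of such a comparison can be sketched with the simplest closed-form case, a dry infinite slope with an optional pseudo-static horizontal seismic coefficient. The soil parameters below are illustrative assumptions, not the measured Sonapur properties, and a full slope analysis would of course use circular or FEM-based slip surfaces.

```python
import math

# Factor of safety of a dry infinite slope (limit equilibrium), with an
# optional pseudo-static horizontal seismic coefficient k_h; all inputs
# are illustrative (c in kPa, gamma in kN/m^3, z in m).
def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, k_h=0.0):
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    W = gamma * z  # weight of the sliding column per unit plan area
    # normal and shear stresses on the slip plane; the inertial force
    # k_h*W reduces the normal component and adds to the driving shear
    normal = W * math.cos(beta) ** 2 - k_h * W * math.sin(beta) * math.cos(beta)
    shear = W * math.sin(beta) * math.cos(beta) + k_h * W * math.cos(beta) ** 2
    return (c + normal * math.tan(phi)) / shear

fs_static = infinite_slope_fs(c=10.0, phi_deg=30, gamma=18.0, z=5.0, beta_deg=40)
fs_seismic = infinite_slope_fs(c=10.0, phi_deg=30, gamma=18.0, z=5.0,
                               beta_deg=40, k_h=0.15)
```

Even this toy calculation reproduces the qualitative trend reported above: at 40° the static factor of safety is already marginal, and the pseudo-static seismic load lowers it further.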

Keywords: dynamic analysis, factor of safety, slope stability, strength reduction method

Procedia PDF Downloads 260
32441 Stability of Solutions of Semidiscrete Stochastic Systems

Authors: Ramazan Kadiev, Arkadi Ponossov

Abstract:

Semidiscrete systems contain both continuous and discrete components. This means that the dynamics is mostly continuous, but at certain instants it is exposed to abrupt influences. Such systems naturally appear in applications, for example in biological and ecological models as well as in control theory. Therefore, the study of semidiscrete systems has recently attracted the attention of many specialists. Stochastic effects are an important part of any realistic approach to modeling. For example, stochasticity arises in demographic and ecological population dynamics due to temporal changes in factors external to the system that affect the survival of the population. In control theory, random coefficients can model inaccuracies in measurements. It will be shown in the presentation how to incorporate such effects into semidiscrete systems. Stability analysis is an essential part of modeling real-world problems. In the presentation, it will be explained how sufficient conditions for the moment stability of solutions of linear semidiscrete stochastic equations can be derived, in terms of the coefficients, using a non-Lyapunov technique.
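
A semidiscrete stochastic system can be simulated as below: a scalar linear SDE integrated by Euler-Maruyama, interrupted by abrupt multiplicative jumps at prescribed instants. The coefficients are illustrative and chosen so that the usual moment stability condition a > sigma²/2 for the continuous part holds; the example does not reproduce the presentation's stability criteria.

```python
import numpy as np

# dx = -a*x dt + sigma*x dW between jump instants; at every jump instant
# the state is multiplied by a discrete factor (the "abrupt influence").
a, sigma = 1.0, 0.5
dt, n_steps = 1e-3, 5000
jump_every, jump_factor = 1000, 0.8   # assumed discrete-component schedule

rng = np.random.default_rng(42)
x = np.empty(n_steps + 1)
x[0] = 1.0
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))             # Brownian increment
    x[k + 1] = x[k] - a * x[k] * dt + sigma * x[k] * dW
    if (k + 1) % jump_every == 0:                 # discrete component
        x[k + 1] *= jump_factor
```

Here both the continuous drift and the jump factor are contracting, so sample paths decay; making either component expanding lets one probe where stability is lost.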

Keywords: abrupt changes, exponential stability, regularization, stochastic noises

Procedia PDF Downloads 188
32440 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model

Authors: Jai Heui Kim, Sotheara Veng

Abstract:

This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using the asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations for the value function and the optimal strategy can be derived for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy recovers the value function not only to leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result extends Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.
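
As a numerical reference point, the classical Merton benchmark that the paper extends can be computed directly: with constant volatility and CRRA (power) utility, the optimal fraction of wealth in the risky asset is pi* = (mu - r) / (gamma * sigma²). The market parameters below are assumed for illustration; the paper's contribution, the correction terms when sigma is a fast mean-reverting process with CEV-type elasticity, is not reproduced here.

```python
# Classical Merton optimal portfolio weight under CRRA utility.
mu = 0.08      # drift of the risky asset (assumed)
r = 0.02       # risk-free rate (assumed)
sigma = 0.2    # constant volatility (assumed)
gamma = 2.0    # coefficient of relative risk aversion (assumed)

# optimal constant fraction of wealth invested in the risky asset
pi_star = (mu - r) / (gamma * sigma**2)
print(pi_star)  # 0.75, i.e. 75% of wealth in the risky asset
```

Under the hybrid model, this constant weight acquires state-dependent correction terms of the order of the mean-reversion time scale.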

Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility

Procedia PDF Downloads 299
32439 R Software for Parameter Estimation of Spatio-Temporal Model

Authors: Budi Nurani Ruchjana, Atje Setiawan Abdullah, I. Gede Nyoman Mindra Jaya, Eddy Hermawan

Abstract:

In this paper, we propose an application package to estimate the parameters of a spatiotemporal model based on multivariate time series analysis, using the open-source software R. We build the packages mainly to estimate the parameters of the Generalized Space-Time Autoregressive (GSTAR) model. GSTAR is a combination of time series and spatial models whose parameters vary per location. We use the Ordinary Least Squares (OLS) method and the Mean Absolute Percentage Error (MAPE) to fit the model to real spatiotemporal phenomena. As case studies, we use oil production data from a volcanic layer at Jatibarang, Indonesia, and climate data such as rainfall in Indonesia. R is very user-friendly; it makes calculation easier and data processing accurate and fast. A limitation is that the R script for estimating the parameters of the spatiotemporal GSTAR model is still restricted to stationary time series models. Therefore, the R program under Windows can be developed further, both for theoretical studies and for applications.
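
The OLS estimation step for a GSTAR(1;1) model can be sketched as below, written in Python rather than the paper's R package: each location's series is regressed on its own lag and on the spatially weighted lag of its neighbours. The data and the weight matrix W are synthetic, and the paper's MAPE-based model checking is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_loc, n_t = 3, 200
# row-normalized spatial weight matrix (assumed neighbourhood structure)
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# simulate a GSTAR(1;1) process with known location-varying parameters:
# z_i(t) = phi0_i * z_i(t-1) + phi1_i * sum_j W_ij z_j(t-1) + noise
phi0_true = np.array([0.4, 0.3, 0.5])   # own-lag coefficients
phi1_true = np.array([0.2, 0.3, 0.1])   # spatial-lag coefficients
z = np.zeros((n_t, n_loc))
for t in range(1, n_t):
    z[t] = phi0_true * z[t - 1] + phi1_true * (W @ z[t - 1]) \
        + rng.normal(0.0, 0.1, n_loc)

# OLS per location: regressors are the own lag and the spatial lag
phi0_hat = np.empty(n_loc)
phi1_hat = np.empty(n_loc)
for i in range(n_loc):
    X = np.column_stack([z[:-1, i], (z[:-1] @ W.T)[:, i]])
    beta, *_ = np.linalg.lstsq(X, z[1:, i], rcond=None)
    phi0_hat[i], phi1_hat[i] = beta
```

Because the parameters are diagonal per location, the multivariate problem decouples into one small regression per site, which is what makes the OLS approach attractive.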

Keywords: GSTAR Model, MAPE, OLS method, oil production, R software

Procedia PDF Downloads 243
32438 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-dimension video/image to a high-dimension video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames becomes possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts caused by errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they are far from achieving real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions from edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
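
The two ingredients the method combines, Prewitt edge detection to split the image into uniform and edge regions and gradient orientation as the interpolation direction, can be sketched as below. The edge threshold is an assumed value, and the paper's actual slope-tracing interpolation and bilateral post-filter are not reproduced.

```python
import numpy as np
from scipy.ndimage import prewitt

# Synthetic test image: a vertical step edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# Prewitt gradients along each axis
gx = prewitt(img, axis=1)   # horizontal derivative
gy = prewitt(img, axis=0)   # vertical derivative
magnitude = np.hypot(gx, gy)

# crude edge/uniform split (threshold is an assumption for illustration)
edges = magnitude > 0.5 * magnitude.max()

# gradient orientation at each pixel; the interpolation direction for an
# edge pixel runs perpendicular to this angle
orientation = np.arctan2(gy, gx)
```

For the vertical step, the gradient at edge pixels points along +x (orientation 0), so the interpolation direction would follow the edge vertically, exactly the behaviour the slope-tracing step relies on.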

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 261