Search results for: piecewise linear inputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3937

1957 Microstructural Characterization and Mechanical Properties of Al-2Mn-5Fe Ternary Eutectic Alloy

Authors: Emin Çadirli, Izzettin Yilmazer, Uğur Büyük, Hasan Kaya

Abstract:

Al-2Mn-5Fe eutectic alloy (wt.%) was prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upward at a constant temperature gradient and at four different growth rates using the Bridgman method. The values of the eutectic spacing were measured from longitudinal and transverse sections of the samples. The dependence of the eutectic spacing on the growth rate was determined by linear regression analysis. The microhardness and tensile strength of the studied alloy were also measured on the directionally solidified samples. The dependence of the microhardness and tensile strength of the directionally solidified Al-2Mn-5Fe eutectic alloy on the growth rate was investigated, and the relationships between them were obtained experimentally by regression analysis. The results obtained in the present work were compared with previous experimental results reported for binary and ternary alloys.
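A minimal sketch (not from the paper) of how a power-law dependence of eutectic spacing on growth rate, λ = k·V⁻ⁿ, is typically extracted by linear regression on log-transformed data; the numerical values below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative growth rates V (um/s) and eutectic spacings lambda (um);
# placeholder values only -- the paper's actual measurements are not reproduced here.
V = np.array([8.3, 16.5, 41.5, 83.0])
lam = np.array([3.1, 2.2, 1.4, 1.0])

# Fit log(lambda) = log(k) - n*log(V), i.e. lambda = k * V**(-n)
slope, intercept = np.polyfit(np.log(V), np.log(lam), 1)
n, k = -slope, np.exp(intercept)
print(f"lambda ~ {k:.2f} * V^(-{n:.2f})")

# Coefficient of determination of the log-log fit
pred = intercept + slope * np.log(V)
ss_res = np.sum((np.log(lam) - pred) ** 2)
ss_tot = np.sum((np.log(lam) - np.log(lam).mean()) ** 2)
print("R^2 =", round(1 - ss_res / ss_tot, 3))
```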

Keywords: eutectic alloy, microhardness, microstructure, tensile strength

Procedia PDF Downloads 471
1956 Evaluation of Water Management Options to Improve the Crop Yield and Water Productivity for Semi-Arid Watershed in Southern India Using AquaCrop Model

Authors: V. S. Manivasagam, R. Nagarajan

Abstract:

Modeling the interactions between soil, water and crop growth is attaining major importance, considering future climate change and the water available for agriculture to meet the growing food demand. Progress in understanding the crop growth response during water stress periods through a crop modeling approach provides an opportunity for improving and sustaining future agricultural water use efficiency. An attempt has been made to evaluate the potential use of the crop modeling approach for assessing the minimal supplementary irrigation requirement for crop growth under water-limited conditions and its practical significance for the sustainable improvement of crop yield and water productivity. Among the numerous crop models, the water-driven AquaCrop model was chosen for the present study, considering its modeling approach and the impact of water stress on yield simulation. The study was carried out in the rainfed maize-growing area of the semi-arid Shanmuganadi watershed (a tributary of the Cauvery river system) located in southern India during the rabi cropping season (October-February). In addition to the actual rainfed maize growth simulation, irrigated maize scenarios were simulated for assessing the supplementary irrigation requirement under water-shortage conditions for the period 2012-2015. The simulation results for rainfed maize showed that an average maize yield of 0.5-2 t ha-1 was observed during deficit monsoon seasons (<350 mm), whereas 5.3 t ha-1 was reached during sufficient monsoon periods (>350 mm). Scenario results for the irrigated maize simulation during the deficit monsoon period revealed that 150-200 mm of supplementary irrigation ensured an irrigated maize yield of 5.8 t ha-1. Thus, the results clearly show that a minimal application of supplementary irrigation during the critical growth period, together with the deficit rainfall, increased the crop water productivity from 1.07 to 2.59 kg m-3 for the major soil types. Overall, AquaCrop was found to be very effective for sustainable irrigation assessment, considering the model's simplicity and minimal input requirements.

Keywords: AquaCrop, crop modeling, rainfed maize, water stress

Procedia PDF Downloads 261
1955 Optimization of the Jatropha curcas Supply Chain as a Criterion for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador

Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez

Abstract:

The unique flora and fauna of the Galapagos Islands have leveraged tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The necessary transport of fossil fuels from the continent has generated oil spillages and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government as an alternative to reduce the use of fossil fuels in the islands, considers the replacement of diesel in thermoelectric generators by Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in the Galapagos. Within this context, the present work aims to provide an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi-Ecuador. For this purpose, existing Jatropha collection points in Manabi were grouped into three regions: north (7 collection points), center (4 collection points) and south (9 collection points). Field work was carried out in every region in order to characterize the collection points, establish the local Jatropha supply and determine transportation costs. Data collection was complemented using GIS software, and an objective function was defined in order to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha price, transportation and oil extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from the collection points to the extraction plant were taken as the decision variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem. The supply chain was optimized using linear programming in order to maximize the profit. Finally, a sensitivity analysis was performed in order to find a profit-based criterion for the acceptance of future collection points in Manabi. The maximum profit reached a value of $4,616.93 per year, which represented a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the biggest collection share (69%), followed by the southern region (17%). The criteria for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in the profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for the Galapagos and sustained economic growth in the rural areas of Ecuador.
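A minimal linear-programming sketch of the kind of profit-maximization model described here, using SciPy; all profits per tonne and supply bounds are hypothetical placeholders, not the study's data.

```python
from scipy.optimize import linprog

regions = ["north", "center", "south"]
# Hypothetical net profit per tonne of Jatropha collected in each region
# (oil + residual-cake revenue minus purchase, transport and extraction costs).
profit_per_tonne = [90.0, 60.0, 45.0]
# Hypothetical minimum and maximum collectable tonnes per region per year.
bounds = [(10.0, 45.0), (2.0, 10.0), (3.0, 12.0)]

# linprog minimizes, so the profit coefficients are negated to maximize.
res = linprog(c=[-p for p in profit_per_tonne], bounds=bounds, method="highs")

for region, tonnes in zip(regions, res.x):
    print(f"{region}: collect {tonnes:.1f} t/year")
print(f"maximum profit: ${-res.fun:,.2f} per year")
```

Re-solving the model with one collection point (or region) removed at a time gives the profit variation used as the acceptance criterion mentioned in the abstract.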

Keywords: collection points, Jatropha curcas, linear programming, supply chain

Procedia PDF Downloads 424
1954 Failure Mechanism in Fixed-Ended Reinforced Concrete Deep Beams under Cyclic Load

Authors: A. Aarabzadeh, R. Hizaji

Abstract:

Reinforced Concrete (RC) deep beams are a special type of beam due to their geometry, boundary conditions, and behavior compared to ordinary shallow beams. For example, the assumption of a linear stress-strain distribution over the cross section is not valid. Few studies have been dedicated to fixed-ended RC deep beams, and most experimental studies have been carried out on simply supported deep beams. Given the recent tendency toward the application of deep beams, the possibility of using fixed-ended deep beams in structures has increased widely. Therefore, it seems necessary to investigate this structural element in more detail. In addition to the experimental investigation of a concrete deep beam under cyclic load, the different failure mechanisms of fixed-ended deep beams under this type of loading have been evaluated in the present study. The results show that the failure mechanisms of deep beams under cyclic loads are quite different from those under monotonic loads.

Keywords: deep beam, cyclic load, reinforced concrete, fixed-ended

Procedia PDF Downloads 354
1953 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation

Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis

Abstract:

The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magneto Resistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as their non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. The system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop architecture Flux-Gate sensor (calibrated and certified) was used for comparison purposes. Three different experimental setups were constructed for the purposes of this work, utilized for DC magnetic field measurements, AC magnetic field measurements and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply was used to provide the operating current to the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to examine the effective bandwidth not only of the proposed system but also of the Flux-Gate sensor. A voltage-controlled current source driven by a function generator was utilized for the Helmholtz coil excitation, and the results were observed on an oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed alone inside the shielding, and its response was captured by the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the Fluxgate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the Flux-Gate sensor. All relevant numerical data are presented in the paper.
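A small sketch (assumed, not the authors' code) of how a maximum-deviation-from-linearity figure such as the 0.36% quoted above can be computed from a measured field-vs-output sweep; the sweep values below are placeholders.

```python
import numpy as np

def max_linearity_deviation(applied_field, sensor_output):
    """Maximum deviation of a sensor sweep from its best-fit line,
    expressed as a percentage of the full-scale output span."""
    slope, offset = np.polyfit(applied_field, sensor_output, 1)
    ideal = slope * applied_field + offset
    full_scale = sensor_output.max() - sensor_output.min()
    return 100.0 * np.max(np.abs(sensor_output - ideal)) / full_scale

# Illustrative DC sweep (field in uT, output in V); values are placeholders.
field = np.linspace(-50, 50, 21)
output = 0.02 * field + 0.0005 * np.tanh(field / 30)  # slightly non-linear response
print(f"max deviation from linearity: {max_linearity_deviation(field, output):.3f} %")
```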

Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement

Procedia PDF Downloads 417
1952 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning, which allows grid-like inputs to be processed and passed through a neural network to be trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop. This allowed the user to create their own dataset of 12,000 images within three hours. These images were evenly distributed among seven classes. The defined classes include forward, backward, left, right, idle, and land; the drone also has a popular flip function, which was included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for the movements, a fist for land, and the universal “ok” sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18) to retrain the network for custom classification. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone’s movements were performed in half-meter distance increments at a constant speed. When combined with the drone control algorithm, the classification performed as desired, with negligible latency compared to the delay in the drone’s movement commands.
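A condensed transfer-learning sketch of the approach described above (PyTorch, pre-trained ResNet-18 re-headed for seven hand-sign classes). The dataset folder, epoch count and learning rate are placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 7  # forward, backward, left, right, idle, land, flip

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical dataset folder with one sub-directory per class.
train_set = datasets.ImageFolder("hand_signs/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Older torchvision versions use models.resnet18(pretrained=True) instead.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # short fine-tuning run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The predicted class index can then be mapped to an encoded movement command sent to the drone over Wi-Fi.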

Keywords: classification, computer vision, convolutional neural networks, drone control

Procedia PDF Downloads 206
1951 Correlation between Consumer Knowledge of the Circular Economy and Consumer Behavior towards Its Application: A Canadian Exploratory Study

Authors: Christopher E. A. Ramsey, Halia Valladares Montemayor

Abstract:

This study examined whether the dissemination of information about the circular economy (CE) has any bearing on the likelihood of the implementation of its concepts on an individual basis. Specifically, the goal of this research was to investigate the impact of consumer knowledge about the circular economy on consumer behavior in applying such concepts. Given that our current linear supply chains are unsustainable, it is of great importance that we understand which mechanisms are most effective in encouraging consumers to embrace the CE. The theoretical framework employed was the theory of planned behavior (TPB). TPB, with its analysis of how attitude, subjective norms, and perceived behavioral control affect intention, provided an adequate model for testing the effects of increased information about the CE on the implementation of its recommendations. The empirical research consisted of a survey distributed among university students, faculty, and staff at a Canadian university in British Columbia.

Keywords: circular economy, consumer behavior, sustainability, theory of planned behavior

Procedia PDF Downloads 113
1950 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification is one of the growing subfields in natural language processing (NLP). The task of native language identification (NLI) is mainly concerned with predicting the native language of an author from their writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the predefined models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within the corpus and across corpora.
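A minimal cross-corpus evaluation sketch of this setup with scikit-learn: train on a TOEFL-style corpus, evaluate on Reddit data, and compare a linear SVM with logistic regression. The tiny in-line texts are stand-ins for the real corpora, and character n-grams are used here only as a rough proxy for content-independent features; this is an assumed illustration, not the paper's feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stand-ins for the training (TOEFL) and external test (Reddit) corpora.
toefl_texts = ["I am agree with this statement ...", "In my country is common to ..."]
toefl_labels = ["KOR", "SPA"]
reddit_texts = ["Yesterday I have gone to the market ..."]
reddit_labels = ["KOR"]

feature_sets = [
    ("word n-grams (content-based)", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char n-grams (content-independent proxy)", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
]
for name, vec in feature_sets:
    for clf in (LinearSVC(), LogisticRegression(max_iter=1000)):
        model = make_pipeline(vec, clf)
        model.fit(toefl_texts, toefl_labels)          # train within-corpus
        acc = model.score(reddit_texts, reddit_labels)  # evaluate cross-corpus
        print(f"{name:42s} {clf.__class__.__name__:20s} cross-corpus accuracy: {acc:.2f}")
```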

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 129
1949 An Embedded High Speed Adder for Arithmetic Computations

Authors: Kala Bharathan, R. Seshasayanan

Abstract:

In this paper, a 1-bit Embedded Logic Full Adder (EFA) circuit at the transistor level is proposed, which reduces logic complexity and offers low power and high speed. The design is further extended up to 64 bits. To evaluate the performance of the EFA, 16-, 32- and 64-bit Linear and Square-Root Carry Select Adder/Subtractor (CSLAS) structures are also proposed. Realistic testing of the proposed circuits is done on an 8 x 8 modified Booth multiplier, and a comparison in terms of power and delay is made. The EFA is implemented for different multiplier architectures for comparison of the performance parameters. The overall delay for CSLAS is reduced to 78% of that of the conventional design. The circuit implementations are done in TSMC 28 nm CMOS technology using the Cadence Virtuoso tool. The EFA achieves power savings of up to 14% when compared to the conventional adder. The present implementation was found to offer significant improvement in terms of power and speed in comparison to other full adder circuits.

Keywords: embedded logic, full adder, pdp, xor gate

Procedia PDF Downloads 445
1948 Magnetoelectric Effect in Polyvinylidene Fluoride Beta Phase Thin Films

Authors: Belouadah Rabah, Guyomar Daneil, Guiffard Benoit

Abstract:

Magnetoelectric (ME) materials exhibit a dielectric polarization induced by a magnetic field, or a magnetization induced by an electric field. A strong ME effect requires the simultaneous presence of magnetic moments and electric dipoles. In the last decades, extensive research has been conducted on the ME effect in single-phase and composite materials. This article reports the results obtained with two samples: the first is a monolayer of bi-stretched PVDF, and the second is a multilayer of bi-stretched PVDF with polyurethane filled with magnetic Fe3O4 microparticles (PU+2% Fe3O4). Compared with a non-ME material such as alumina, a large ME polarization coefficient was obtained for the two samples. The piezoelectric properties of the PVDF and the elastic properties of PU+2% Fe3O4 give a larger linear ME coefficient for the multilayer PVDF/(PU+2% Fe3O4) than for the monolayer of PVDF.

Keywords: magnetoelectric effect, polymers, magnetic particles, composites, films

Procedia PDF Downloads 390
1947 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria

Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko

Abstract:

The broiler poultry subsector is dominated by small-scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence, many broiler farmers struggle to break even. This study was designed to examine income and input factors in small-scale deep-litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of small-scale deep-litter broiler farmers; estimated the costs and returns of broiler production in the area; analyzed input factors in broiler production in the area; and examined the marketability age and profitability of the enterprise. A multi-stage sampling technique was adopted in selecting 60 small-scale broiler farmers who use the deep-litter system from 6 communities, through the use of a structured questionnaire. The socio-economic characteristics of the broiler farmers and the profitability/marketability age of the birds were described using descriptive statistical tools such as frequencies, means and percentages. Gross margin analysis was used to analyze the costs and returns of broiler production, while a Cobb-Douglas production function was employed to analyze input factors in broiler production. The results of the study revealed that the costs of feed (P<0.1), deep-litter material (P<0.05) and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel and day-old chicks were not significant. Furthermore, the gross profit margin of farmers who market their broilers at the 8th week of rearing was 80.7%, compared with 78.7% and 60.8% for farmers who market at the 10th and 12th weeks of rearing, respectively. The business is, therefore, profitable, but to varying degrees. Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feed, drugs and the timber materials used as bedding, so as to widen the profit margin and encourage more farmers to go into the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.
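A short sketch of how a Cobb-Douglas production function of the kind used here is typically estimated: take logarithms so the model becomes linear in the elasticities and fit by ordinary least squares. The farm-level data below are randomly generated placeholders, not the survey data.

```python
import numpy as np

# Hypothetical data for 60 farms: gross return Y and input costs (feed, litter, medication).
# The Cobb-Douglas form Y = A * feed^b1 * litter^b2 * meds^b3 becomes linear after taking logs:
#   ln(Y) = ln(A) + b1*ln(feed) + b2*ln(litter) + b3*ln(meds)
rng = np.random.default_rng(0)
feed = rng.uniform(200, 600, 60)
litter = rng.uniform(20, 60, 60)
meds = rng.uniform(10, 40, 60)
Y = 5.0 * feed**0.6 * litter**0.15 * meds**0.1 * rng.lognormal(0, 0.05, 60)

X = np.column_stack([np.ones(60), np.log(feed), np.log(litter), np.log(meds)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA, b_feed, b_litter, b_meds = coef
print(f"elasticities: feed={b_feed:.2f}, litter={b_litter:.2f}, medication={b_meds:.2f}")
```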

Keywords: broilers, factor analysis, income, small scale

Procedia PDF Downloads 75
1946 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the Fourier spectrum of the original signal. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer’s wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, although the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
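A minimal sketch of the coupling idea behind EAWD: decompose a signal with EMD, take the dominant frequency of each IMF, and use those frequencies to place candidate boundaries for the Fourier-spectrum segmentation that EWT requires. The synthetic signal and the PyEMD package (third-party "EMD-signal" distribution) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from PyEMD import EMD  # third-party EMD implementation (assumed available)

# Synthetic non-stationary signal standing in for a geophysical record.
t = np.linspace(0, 10, 2000)
s = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(t.size)

imfs = EMD().emd(s)  # adaptive decomposition into Intrinsic Mode Functions

# Dominant frequency of each IMF, used to place the boundaries of the
# Fourier-spectrum segmentation required by EWT (the EAWD coupling idea).
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
dominant = sorted(freqs[np.argmax(np.abs(np.fft.rfft(imf)))] for imf in imfs)
boundaries = [(f1 + f2) / 2 for f1, f2 in zip(dominant, dominant[1:])]
print("IMF dominant frequencies (Hz):", np.round(dominant, 2))
print("candidate segment boundaries (Hz):", np.round(boundaries, 2))
```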

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 131
1945 Length-Weight and Length-Length Relationships for 14 Sparidae Species, from the Northeastern Mediterranean Sea Coast of Turkey

Authors: Hacer Yeldan, Erhan Akamca, Sedat Gündogdu

Abstract:

Length-weight and length-length relationships were estimated for 14 Sparidae species (Boops boops, Diplodus annularis, Diplodus cervinus, Diplodus puntazzo, Diplodus sargus, Diplodus vulgaris, Lithognathus mormyrus, Oblada melanura, Pagellus acarne, Pagellus erythrinus, Pagrus auriga, Pagrus caeruleostictus, Sarpa salpa, Sparus aurata) sampled from Iskenderun Bay, on the northeastern Mediterranean Sea coast of Turkey. Samples were collected from July 2014 to June 2015, using bottom trawl and trammel nets at three different depth ranges: 0-10 m, 10-20 m and 20-50 m. Length-length relationships were determined between the size measurements, relating standard length (SL) and fork length (FL) to total length (TL) for each fish species. The relationships between TL and FL, and between TL and SL, were all linear. The values of the exponent b of the length-weight relationships ranged between 2.685 and 3.473. The type of growth for the fish species was allometric.
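A small sketch of how the length-weight relationship W = a·L^b and its exponent b are usually estimated by linear regression on log-transformed data; the length and weight values are placeholders, not the survey measurements.

```python
import numpy as np

def length_weight_fit(total_length_cm, weight_g):
    """Fit W = a * L^b by linear regression on log-transformed data."""
    b, log_a = np.polyfit(np.log(total_length_cm), np.log(weight_g), 1)
    return np.exp(log_a), b

# Illustrative measurements for one species (placeholders, not the survey data).
L = np.array([10.2, 12.5, 14.8, 17.1, 19.6, 22.0])
W = np.array([14.0, 26.5, 44.8, 70.2, 106.0, 151.0])

a, b = length_weight_fit(L, W)
growth = "isometric" if abs(b - 3) < 0.05 else "allometric"
print(f"W = {a:.4f} * L^{b:.3f}  ->  {growth} growth")
```

The length-length relationships (TL vs. FL, TL vs. SL) follow the same pattern with a plain first-order polynomial fit.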

Keywords: Sparidae, Iskenderun bay, length-length, length-weight relationships

Procedia PDF Downloads 283
1944 Identification of Nonlinear Systems Using Radial Basis Function Neural Network

Authors: C. Pislaru, A. Shebani

Abstract:

This paper uses the radial basis function neural network (RBFNN) for the system identification of nonlinear systems. Five nonlinear systems are used to examine the ability of the RBFNN to model nonlinear systems; the five nonlinear systems are a dual tank system, a single tank system, a DC motor system, and two academic models. A feed-forward structure is considered in this work for modelling the nonlinear dynamic models. The K-means clustering algorithm is used to select the centers of the radial basis function network because it is reliable, offers fast convergence and can handle large data sets. The least mean square method is used to adjust the weights of the output layer, and the Euclidean distance method is used to set the width of the Gaussian functions.
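A compact sketch of this RBF-network identification recipe: K-means picks the centers, a distance-based heuristic sets the Gaussian width, and the output-layer weights are solved for. For brevity, batch least squares is used here in place of the iterative LMS update, and the toy input-output data stand in for the tank/DC-motor systems.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Toy nonlinear single-input/single-output system standing in for e.g. a tank model.
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, (400, 1))                             # input samples
y = np.sin(3 * u[:, 0]) + 0.05 * rng.standard_normal(400)    # nonlinear response

k = 15
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(u).cluster_centers_
# Width from the mean Euclidean distance between distinct centers (one common heuristic).
width = np.mean([np.linalg.norm(c1 - c2) for c1 in centers for c2 in centers if (c1 != c2).any()])

Phi = rbf_design_matrix(u, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights (batch least squares)
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```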

Keywords: system identification, nonlinear systems, neural networks, radial basis function, K-means clustering algorithm

Procedia PDF Downloads 465
1943 Sum Capacity with Regularized Channel Inversion in Multi-Antenna Downlink Systems under Equal Power Constraint

Authors: Attaullah Khawaja, Amna Shabbir

Abstract:

Channel inversion is one of the simplest techniques for multiuser downlink systems with single-antenna users. In this paper, regularized channel inversion under an equal power constraint in multiuser multiple-input multiple-output (MU-MIMO) broadcast channels is considered. The sum capacity with plain channel inversion, also known as Zero-Forcing Beamforming (ZFBF), and the optimum sum capacity using Dirty Paper Coding (DPC) have also been investigated. Analysis and simulations show that regularization enhances the system performance, enables linear growth in sum capacity, and works especially well in the low signal-to-noise ratio (SNR) regime.
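A minimal numerical sketch of the two precoders being compared: plain channel inversion W = Hᴴ(HHᴴ)⁻¹ versus regularized inversion W = Hᴴ(HHᴴ + αI)⁻¹ with the classical α = Kσ²/P, both scaled to meet a total transmit-power constraint. The antenna/user counts and SNR are illustrative, and DPC is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 4            # transmit antennas, single-antenna users
P, sigma2 = 1.0, 0.1   # total transmit power and noise variance
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def sum_rate(H, W, P, sigma2):
    W = W * np.sqrt(P / np.trace(W @ W.conj().T).real)   # enforce the power constraint
    G = H @ W                                            # effective gains, G[k, j] = h_k w_j
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return np.sum(np.log2(1 + sig / (interf + sigma2)))

W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                       # plain inversion (ZFBF)
alpha = K * sigma2 / P                                                  # classical regularization term
W_rci = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))  # regularized inversion

print("ZFBF sum rate:", round(sum_rate(H, W_zf, P, sigma2), 2), "bit/s/Hz")
print("RCI  sum rate:", round(sum_rate(H, W_rci, P, sigma2), 2), "bit/s/Hz")
```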

Keywords: broadcast channel, channel inversion, multiple antenna multiple-user wireless, multiple-input multiple-output (MIMO), regularization, dirty paper coding (DPC), sum capacity

Procedia PDF Downloads 523
1942 Microfluidic Paper-Based Electrochemical Biosensor

Authors: Ahmad Manbohi, Seyyed Hamid Ahmadi

Abstract:

A low-cost paper-based microfluidic device (PAD) for the multiplex electrochemical determination of glucose, uric acid, and dopamine in biological fluids was developed. Using wax printing, a PAD containing a central zone, six channels, and six detection zones was fabricated, and the electrodes were printed on the detection zones using a pre-made electrode template. For each analyte, two detection zones were used. The carbon working electrode was coated with chitosan-BSA (and enzymes for glucose and uric acid). To detect glucose and uric acid, enzymatic reactions were employed. These reactions involve enzyme-catalyzed redox reactions of the analytes and produce free electrons for electrochemical measurement. Calibration curves were linear (R² > 0.980) in the ranges of 0-80 mM for glucose, 0.09-0.9 mM for dopamine, and 0-50 mM for uric acid, respectively. Blood samples were successfully analyzed by the proposed method.

Keywords: biological fluids, biomarkers, microfluidic paper-based electrochemical biosensors, multiplex

Procedia PDF Downloads 279
1941 Modeling of Compaction Curves for CCA-Cement Stabilized Lateritic Soils

Authors: O. Ahmed Apampa, Yinusa A. Jimoh

Abstract:

The aim of this study was to develop an appropriate model for predicting the compaction behavior of lateritic soils and corn cob ash (CCA) stabilized lateritic soils. This was done by first adopting an equation earlier developed for fine-grained soils and subsequently adapted by others, and extending it to modified lateritic soil through the introduction of alpha and beta parameters, which are polynomial functions of the CCA binder input. The polynomial equations were determined with the MATLAB R2011 curve fitting tool, while the alpha and beta parameters were determined by standard linear programming techniques using the Solver function of Microsoft Excel 2010. The model so developed was a good fit, with a correlation coefficient R² value of 0.86. The paper concludes that it is possible to determine the optimum moisture content and the maximum dry density of CCA-stabilized soils from the compaction test of the unmodified soil, and recommends that this procedure be extended to other binder-stabilized lateritic soils to facilitate quick decision-making in roadworks.
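A sketch of the curve-fitting step described above: alpha and beta are expressed as polynomial functions of the CCA content and then used to scale the unmodified soil's compaction characteristics. The exact model equation is not given in the abstract, so the scaling form and all numbers below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical calibration data: alpha and beta values back-calculated for
# different CCA binder contents (placeholders, not the paper's values).
cca = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # % CCA
alpha = np.array([1.00, 1.04, 1.09, 1.15, 1.22])
beta = np.array([1.00, 0.98, 0.95, 0.91, 0.86])

# alpha(CCA) and beta(CCA) fitted as second-order polynomials (MATLAB curve-fitting step).
p_alpha = np.polyfit(cca, alpha, 2)
p_beta = np.polyfit(cca, beta, 2)

def predict_modified(omc_unmod, mdd_unmod, cca_content):
    """Scale the unmodified soil's optimum moisture content (%) and maximum dry
    density (Mg/m^3) by the fitted alpha/beta factors (illustrative model form)."""
    a = np.polyval(p_alpha, cca_content)
    b = np.polyval(p_beta, cca_content)
    return a * omc_unmod, b * mdd_unmod

print(predict_modified(omc_unmod=14.0, mdd_unmod=1.85, cca_content=6.0))
```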

Keywords: compaction, corn cob ash, lateritic soil, stabilization

Procedia PDF Downloads 527
1940 Fourier Galerkin Approach to Wave Equation with Absorbing Boundary Conditions

Authors: Alexandra Leukauf, Alexander Schirrer, Emir Talic

Abstract:

Numerical computation of wave propagation in a large domain usually requires significant computational effort. Hence, the considered domain must be truncated to a smaller domain of interest. In addition, special boundary conditions, which absorb the outward travelling waves, need to be implemented in order to describe the system domains correctly. In this work, the linear one-dimensional wave equation is approximated by utilizing the Fourier Galerkin approach. Furthermore, the artificial boundaries are realized with absorbing boundary conditions. Within this work, a systematic workflow for setting up the wave problem, including the absorbing boundary conditions, is proposed. As a result, a convenient modal system description with an effective absorbing boundary formulation is established. Moreover, the truncated model shows high accuracy compared to the global domain.

Keywords: absorbing boundary conditions, boundary control, Fourier Galerkin approach, modal approach, wave equation

Procedia PDF Downloads 391
1939 Inverse Dynamics of the Mould Base of Blow Molding Machines

Authors: Vigen Arakelian

Abstract:

This paper deals with the study of devices for the displacement of the mould base of blow-molding machines. The displacement of the mould in the studied case is carried out by a linear actuator, which ensures the descent of the mould base, and by extension springs, which return the latter to its initial position. The aim of this paper is to study the inverse dynamics of the device for displacement of the mould base of blow-molding machines and to determine its optimum parameters for a higher rate of production. In other words, it is necessary to solve the inverse dynamic problem to find the equation of motion linking the applied forces with the displacements. This makes it possible to determine the stiffness coefficient of the spring needed to return the mould base to its initial position in a given time. The obtained results are illustrated by a numerical example. It is shown that applying a spring with the determined stiffness returns the mould base of the blow-molding machine to its initial position in 0.1 s.
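A back-of-the-envelope sketch of how a return-spring stiffness can be sized for a given return time: if the mould base and spring are idealized as an undamped mass-spring system, the travel from the fully displaced position back to equilibrium takes a quarter of the natural period, t = (π/2)√(m/k). The mould-base mass below is a placeholder; only the 0.1 s return time comes from the abstract, and friction and actuator effects are neglected.

```python
import math

def spring_stiffness_for_return(mass_kg, return_time_s):
    """Stiffness for an undamped mass-spring system to travel from the fully
    displaced position back to equilibrium in a quarter of its natural period:
        t = (pi/2) * sqrt(m/k)  ->  k = m * (pi / (2*t))**2
    """
    return mass_kg * (math.pi / (2.0 * return_time_s)) ** 2

m = 20.0   # kg, hypothetical mould-base mass
t = 0.1    # s, return time quoted in the abstract
k = spring_stiffness_for_return(m, t)
print(f"required total spring stiffness ~ {k:.0f} N/m ({k/1000:.1f} kN/m)")
```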

Keywords: design, mechanisms, dynamics, blow-molding machines

Procedia PDF Downloads 148
1938 Non-linear Analysis of Spontaneous EEG After Spinal Cord Injury: An Experimental Study

Authors: Jiangbo Pu, Hanhui Xu, Yazhou Wang, Hongyan Cui, Yong Hu

Abstract:

Spinal cord injury (SCI) has a great negative impact on patients and society. Neurological loss in humans after SCI is a major clinical challenge. In contrast, neural regeneration can be seen in animals after SCI, and such regeneration can be retarded by blocking neural plasticity pathways, showing the importance of neural plasticity in functional recovery. Here we used sample entropy as an indicator of nonlinear dynamics in the brain to quantify plasticity changes in spontaneous EEG recordings of rats before and after SCI. The results showed that the entropy values increased after the injury during the one-week recovery period. The increasing tendency of the sample entropy values is consistent with that of the behavioral evaluation scores. This indicates the potential application of sample entropy analysis for the evaluation of neural plasticity in the spinal cord injury rat model.
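A compact reference implementation of sample entropy, the nonlinear measure used above, with the common choice of tolerance r = 0.2 times the signal's standard deviation; the test signals are stand-ins for EEG epochs, not the study's recordings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal, with r set to
    r_factor * std(x). Higher values indicate a more irregular signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(dim):
        # Count template pairs (i < j) whose Chebyshev distance is within r.
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Example: a regular oscillation vs. pure noise (stand-ins for EEG epochs).
t = np.linspace(0, 4, 1000)
print("regular signal:", round(sample_entropy(np.sin(2 * np.pi * 5 * t)), 3))
print("random signal :", round(sample_entropy(np.random.randn(1000)), 3))
```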

Keywords: spinal cord injury (SCI), sample entropy, nonlinear, complex system, firing pattern, EEG, spontaneous activity, Basso Beattie Bresnahan (BBB) score

Procedia PDF Downloads 460
1937 Structural Safety of Biocomposites under Cracking: A Fracture Analytical Approach Using the GF-Concept

Authors: Brandtner-Hafner Martin

Abstract:

Biocomposites have established themselves as a sustainable material class in industry. Their advantages include lower density, lower price, and easier recycling compared to conventional materials. There are now a variety of ways to measure their technical performance. One possibility is mechanical tests, which are widely used and standardized. However, these provide only very limited insight into damage capacity, which is particularly problematic under cracking conditions. To overcome such shortcomings, experimental tests were performed applying the fracture-energetic GF-concept to study the structural safety of the interface under crack opening (mode-I loading). Two different types of biocomposites, based on extruded henequen fibers (NFRP) and wood particles (WPC) in an HDPE matrix, were evaluated. The results show that the fracture energy values obtained are higher than those given in the literature. This suggests that alternatives to previous linear elastic testing methods are needed to perform authentic safety evaluations of green plastics.

Keywords: biocomposites, structural safety, GF-concept, fracture analysis

Procedia PDF Downloads 153
1936 Estimation of the Acute Toxicity of Halogenated Phenols Using Quantum Chemistry Descriptors

Authors: Khadidja Bellifa, Sidi Mohamed Mekelleche

Abstract:

Phenols, and especially halogenated phenols, represent a substantial part of the chemicals produced worldwide and are known as aquatic pollutants. Quantitative structure-toxicity relationship (QSTR) models are useful for understanding how chemical structure relates to the toxicity of chemicals. In the present study, the acute toxicities of 45 halogenated phenols to Tetrahymena pyriformis are estimated using no-cost semi-empirical quantum chemistry methods. QSTR models were established using the multiple linear regression technique, and the predictive ability of the models was evaluated by internal cross-validation, Y-randomization and external validation. Their structural chemical domain has been defined by the leverage approach. The results show that the best model is obtained with the AM1 method (R² = 0.91, R²CV = 0.90, SD = 0.20 for the training set and R² = 0.96, SD = 0.11 for the test set). Moreover, all of Tropsha's criteria for a predictive QSTR model are verified.
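A minimal sketch of the multiple-linear-regression-plus-validation workflow described here: fit an MLR model on quantum-chemistry descriptors and report R² together with a leave-one-out cross-validated R². The descriptor matrix below is randomly generated for illustration (logP and an electrophilicity index are plausible descriptors given the keywords), not the paper's data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical descriptors for 45 phenols: logP (hydrophobicity) and an
# electrophilicity index; y stands in for the measured toxicity endpoint.
rng = np.random.default_rng(42)
X = np.column_stack([rng.uniform(1, 5, 45), rng.uniform(1, 3, 45)])
y = 0.6 * X[:, 0] + 0.9 * X[:, 1] + rng.normal(0, 0.2, 45)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)

# Leave-one-out internal cross-validation (R^2_cv)
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R^2 = {r2:.2f},  R^2_cv (LOO) = {r2_cv:.2f}")
print("coefficients:", np.round(model.coef_, 2), "intercept:", round(model.intercept_, 2))
```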

Keywords: halogenated phenols, toxicity mechanism, hydrophobicity, electrophilicity index, quantitative structure-toxicity relationships

Procedia PDF Downloads 293
1935 Exact Solutions for Steady Response of Nonlinear Systems under Non-White Excitation

Authors: Yaping Zhao

Abstract:

In the present study, the exact solutions for the steady response of quasi-linear systems under non-white wide-band random excitation are considered by means of the stochastic averaging method. The nonlinearity of the systems contains power-law damping and the cross-product term of the power-law damping and the displacement. The drift and diffusion coefficients of the Fokker-Planck-Kolmogorov (FPK) equation after averaging are obtained by a succinct approach. After solving the averaged FPK equation, the joint probability density function and the marginal probability density function in the steady state are attained. In the solution process, the eigenvalue problem of the ordinary differential equation is handled by an integral equation method. Some new results are acquired, and a novel method to deal with problems in nonlinear random vibration is proposed.

Keywords: random vibration, stochastic averaging method, FPK equation, transition probability density

Procedia PDF Downloads 501
1934 Conventional Synthesis and Characterization of Zirconium Molybdate, Nd2Zr3(MoO4)9

Authors: G. Çelik Gül, F. Kurtuluş

Abstract:

Complex metal oxides containing rare earths have drawn much attention due to their physical, chemical and optical properties, which make them feasible in many areas such as non-linear optical materials and ion exchangers. We have carried out a systematic study to obtain a rare-earth-containing zirconium molybdate compound, characterize it, investigate its crystal system and calculate its unit cell parameters. After the successful synthesis of Nd2Zr3(MoO4)9, a member of the family of rare-earth-containing complex oxides, X-ray diffraction (XRD), High Score Plus/Rietveld refinement analysis, and Fourier transform infrared spectroscopy (FTIR) were performed to determine the crystal structure. Morphological properties and the elemental composition were determined by scanning electron microscopy (SEM) and energy dispersive X-ray (EDX) analysis. Thermal properties were observed via thermogravimetric-differential thermal analysis (TG/DTA).

Keywords: Nd₂Zr₃(MoO₄)₉, powder x-ray diffraction, solid state synthesis, zirconium molybdates

Procedia PDF Downloads 394
1933 Effect of Variable Fluxes on Optimal Flux Distribution in a Metabolic Network

Authors: Ehsan Motamedian

Abstract:

Finding all optimal flux distributions of a metabolic model is an important challenge in systems biology. In this paper, a new algorithm is introduced to identify all alternate optimal solutions of a large-scale metabolic network. The algorithm reduces the model to decrease the computations needed for finding the optimal solutions. The algorithm was implemented on the Escherichia coli metabolic model to find all optimal solutions for lactate and acetate production. There were more optimal flux distributions when acetate production was optimized. The model was reduced from 1076 to 80 variable fluxes for lactate, while it was reduced to 91 variable fluxes for acetate. These 11 additional variable fluxes resulted in about three times more optimal flux distributions. The variable fluxes came from 12 different metabolic pathways, and most of them belonged to the nucleotide salvage and extracellular transport pathways.
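A toy illustration (not the E. coli model or the authors' algorithm) of how alternate optima show up as "variable" fluxes: maximize a product flux by flux balance analysis, then fix that optimum and compute the attainable range of every reaction, a flux-variability-style check formulated as a series of linear programs.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: metabolites A, B, P; columns: reactions v1..v5):
#   v1: -> A,  v2: A -> P,  v3: A -> B,  v4: B -> P,  v5: P -> (secreted product)
S = np.array([[1, -1, -1,  0,  0],
              [0,  0,  1, -1,  0],
              [0,  1,  0,  1, -1]], dtype=float)
n_rxn = S.shape[1]
bounds = [(0, 10)] * n_rxn
obj = 4                                   # index of the product-secretion flux v5

# Flux balance analysis: maximize v5 subject to S v = 0 and the flux bounds.
c = np.zeros(n_rxn)
c[obj] = -1                               # linprog minimizes, so negate the objective
fba = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
v_opt = -fba.fun

# Variability at the optimum: fix the objective flux and min/max every reaction.
fixed = list(bounds)
fixed[obj] = (v_opt, v_opt)
for j in range(n_rxn):
    e = np.zeros(n_rxn)
    e[j] = 1
    lo = linprog(e, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=fixed, method="highs").x[j]
    hi = linprog(-e, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=fixed, method="highs").x[j]
    tag = "variable" if hi - lo > 1e-6 else "fixed"
    print(f"v{j + 1}: range [{lo:.1f}, {hi:.1f}]  ({tag})")
```

In this toy network the two alternative routes A→P and A→B→P remain free at the optimum, so v2, v3 and v4 are reported as variable fluxes, which is exactly the kind of degeneracy the paper's algorithm enumerates at genome scale.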

Keywords: flux variability, metabolic network, mixed-integer linear programming, multiple optimal solutions

Procedia PDF Downloads 430
1932 Current Status and a Forecasting Model of Community Household Waste Generation: A Case Study on Ward 24 (Nirala), Khulna, Bangladesh

Authors: Md. Nazmul Haque, Mahinur Rahman

Abstract:

The objective of this research is to determine the quantity of household waste generated and to forecast the future situation of Ward No 24 (Nirala). To do this, three core issues are focused on: (i) the capacity and service area of the dumping stations; (ii) the present waste generation amount per capita per day; (iii) the responsibility of the local authority for household waste collection. This research relied on field-survey-based data collection from all stakeholders and a GIS-based secondary analysis of waste collection points and their coverage. Existing studies, however, are mostly based on inherent forecasting approaches and cannot predict the amount of waste correctly. The findings of this study suggest that Nirala is a formal residential area introducing a better approach to waste collection - a self-controlled collection system. Here, a forecasting model is proposed for waste generation as Y = -2250387 + 1146.1 * X, where X = year.
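A direct implementation of the reported forecasting model, shown for a few future years; the unit of Y is not stated in the abstract and follows the study's survey data (assumed to be a waste quantity per unit time for the ward).

```python
def forecast_waste(year):
    """Household waste generation forecast for Ward 24 (Nirala) using the
    linear model reported above: Y = -2250387 + 1146.1 * X, where X is the year."""
    return -2250387 + 1146.1 * year

for year in (2025, 2030, 2035):
    print(year, round(forecast_waste(year), 1))
```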

Keywords: eco-friendly environment, household waste, linear regression, waste management

Procedia PDF Downloads 278
1931 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon

Authors: Allaw Kamel, Al-Chami Leila

Abstract:

It is important to reduce the human influence on natural resources by identifying appropriate land use. Moreover, it is essential to carry out scientific land evaluation. Such an analysis allows the main factors of agricultural production to be identified and enables decision makers to develop crop management in order to increase the land capability. The key is to match the type and intensity of land use with its natural capability. Therefore, in order to benefit from these areas and invest in them to obtain good agricultural production, they must be fully organized and managed. Lebanon suffers from unorganized agricultural land use. We take south Lebanon as the study area, as it has the most fertile ground and a variety of crops. The study aims to identify and locate the most suitable areas to cultivate thirteen types of permanent trees: apples, avocados, stone fruits in coastal regions and stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criteria for the selection of the best location to cultivate. Soil, rainfall, pH, temperature, and elevation are the main inputs used to create the final map. The input data for each factor are managed, visualized and analyzed using a Geographic Information System (GIS). GIS management tools are implemented to produce input maps capable of identifying suitable areas related to each index. The combination of the different index maps generates the final output map of the most suitable places to obtain the best permanent tree productivity. The output map is reclassified into three suitability classes: low, moderate, and high suitability. The results show different locations suitable for different kinds of trees. The results also reflect the importance of GIS in helping decision makers find the most suitable location for every tree in order to obtain higher productivity and a variety of crops.
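A small raster-style sketch of the overlay-and-reclassify step described above: combine the criterion layers into one suitability index and cut it into the three classes used in the study. The layer values, weights and class breaks are assumptions for illustration; in the study the layers come from GIS data on soil, rainfall, pH, temperature and elevation.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (4, 5)  # tiny stand-in for a real raster grid

# Hypothetical reclassified criterion rasters (1 = poor, 3 = good) for one crop.
layers = {name: rng.integers(1, 4, shape)
          for name in ("soil", "rainfall", "ph", "temperature", "elevation")}
weights = {"soil": 0.30, "rainfall": 0.25, "ph": 0.15,
           "temperature": 0.15, "elevation": 0.15}  # assumed weights, sum to 1

# Weighted overlay: combined suitability index in the range 1-3.
suitability = sum(weights[k] * layers[k].astype(float) for k in layers)

# Reclassify into the three suitability classes used in the study.
classes = np.digitize(suitability, bins=[1.67, 2.33])   # 0: low, 1: moderate, 2: high
print(np.array(["low", "moderate", "high"])[classes])
```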

Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location

Procedia PDF Downloads 138
1930 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of the adulteration of honey with high fructose corn syrup (HFCS) was investigated. First of all, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of high fructose corn syrup (HFCS) to samples of the honey pool. In total, 237 samples were used: 108 of them were authentic honey, and 129 samples corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. The adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, in MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By the use of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
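A minimal sketch of the SNV pretreatment and a PLS-DA classifier built by regressing on a class indicator, the core of the chemometric pipeline described here. The simulated spectra are placeholders, the potential-function density step and variable selection are omitted, and the number of latent variables is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Hypothetical NIR spectra (rows: samples, columns: wavenumbers) and labels
# (0 = authentic honey, 1 = HFCS-adulterated); real data are not reproduced here.
rng = np.random.default_rng(3)
X = rng.normal(size=(237, 300)) + np.linspace(0, 1, 300)
y = np.concatenate([np.zeros(108), np.ones(129)])
X[y == 1] += 0.05                      # small systematic shift for the adulterated class

Xp = snv(X)
pls = PLSRegression(n_components=5).fit(Xp, y)    # PLS-DA: regress on the class indicator
y_hat = (pls.predict(Xp).ravel() > 0.5).astype(int)
print("correct classification rate:", round(100 * np.mean(y_hat == y), 1), "%")
```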

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 119
1929 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is getting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude. For moving marine vessels, the utility and efficacy of GEO SAR are therefore still uncertain. This paper attempts to assess the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented GEO SAR simulator is a geometry-based radar imaging simulator, which focuses on geometrical quality rather than high radiometric fidelity. The inputs of this simulator are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and the SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting information on the small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since this simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the normal ‘stop and go’ model is not applicable to GEO SAR, the range model has to be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulation results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 171
1928 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and has become a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective power forecasting of renewable energy load enables decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by changing the layer cell counts and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to those reported in the literature.
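A condensed sketch of an LSTM forecaster of this kind (two LSTM layers with dropout, trained with ADAM against an MAE criterion). The synthetic hourly series, window length, layer sizes and epoch count are illustrative assumptions, not the study's 432-model configuration or its data.

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic hourly load series standing in for the 2016-2017 measurements.
t = np.arange(24 * 365)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + np.random.randn(t.size)

def make_windows(series, lookback=24):
    """Turn a series into (lookback-hour history, next-hour target) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            dropout=dropout, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)   # predict the next hour's load

X, y = make_windows(load)
model = LSTMForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # ADAM, as in the study
loss_fn = nn.L1Loss()                                       # MAE criterion

for epoch in range(5):                                       # shortened full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: MAE = {loss.item():.3f}")
```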

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 259