Search results for: Yield Improvement.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2039

209 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi

Abstract:

Noise is one of the most important and challenging factors affecting medical images. Image denoising refers to the improvement of a digital medical image that has been corrupted by noise such as Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise, namely impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients (ADP), in the sense that increasing the absorbed radiation, and consequently the ADP, enhances CT image quality. For this reason, noise reduction aimed at enhancing image quality without exposing patients to excess radiation is one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding with the BayesShrink and AdaptShrink methods, which were compared with each other. We also propose a new threshold in the wavelet domain that not only reduces noise but also retains edges; consequently, the proposed method preserves the significant modified coefficients, which results in good visual quality. Evaluations were carried out using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).

Keywords: Computed Tomography (CT), noise reduction, curvelet, contourlet, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Absorbed Dose to Patient (ADP).
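
As a rough illustration of the baseline thresholding and evaluation steps named in the abstract above (the proposed edge-preserving threshold itself is not given there), the sketch below applies BayesShrink-style soft thresholding in the wavelet domain and scores the result with PSNR and SSIM; it assumes PyWavelets and scikit-image, and the test image and function name are illustrative.

```python
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def bayes_shrink_denoise(noisy, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise std from finest diagonal band
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sigma_x = np.sqrt(max(np.mean(band ** 2) - sigma_n ** 2, 1e-12))
            t = sigma_n ** 2 / sigma_x                        # BayesShrink threshold per subband
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))            # stand-in image, not a CT slice
noisy = clean + rng.normal(0, 0.1, clean.shape)               # AWGN
denoised = bayes_shrink_denoise(noisy)
print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
```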

208 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computation in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system achieves high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution can be found. Some optimization algorithms exist, such as the genetic algorithm and ant colony optimization, but the large scale of distributed systems makes these traditional optimization algorithms difficult to apply. Heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that combines the perceptual ability of neural networks with the decision-making ability of reinforcement learning. Using this machine learning method, we try to identify important factors that influence the performance of distributed computing and to help the distributed system schedule its computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling and proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.

Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
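
The survey above does not fix a concrete environment, so the following minimal, dependency-free sketch only shows the state/action/reward interface a deep reinforcement learning scheduler would interact with, exercised here by a greedy baseline policy; the class, reward shaping and problem sizes are illustrative assumptions, and the recurrent policy network and training loop are not shown.

```python
import random

class SchedulingEnv:
    """Toy environment: assign each arriving job to one of n_machines machines."""
    def __init__(self, n_machines=3, n_jobs=8, seed=0):
        self.n_machines, self.n_jobs = n_machines, n_jobs
        self.rng = random.Random(seed)

    def reset(self):
        self.durations = [self.rng.randint(1, 10) for _ in range(self.n_jobs)]
        self.loads = [0] * self.n_machines            # accumulated busy time per machine
        self.next_job = 0
        return self._state()

    def _state(self):
        pending = self.durations[self.next_job] if self.next_job < self.n_jobs else 0
        return tuple(self.loads) + (pending,)

    def step(self, machine):
        old_makespan = max(self.loads)
        self.loads[machine] += self.durations[self.next_job]
        self.next_job += 1
        reward = -(max(self.loads) - old_makespan)    # penalise any makespan increase
        return self._state(), reward, self.next_job == self.n_jobs

env = SchedulingEnv()
state, done = env.reset(), False
while not done:                                       # greedy baseline in place of a learned policy
    action = min(range(env.n_machines), key=lambda m: state[m])
    state, reward, done = env.step(action)
print("greedy makespan:", max(env.loads))
```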

207 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the detection of microcalcifications, orthogonal complementary Golay sequences with low cross-correlation, chosen to minimize interference, are used as coded signals and compared to a tone-burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) with 256 channels, and a phased array transducer with 7.5 MHz center frequency; the experimental results are validated with the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation under very weak signal conditions.

Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.
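
The property exploited by coded excitation with complementary Golay pairs can be shown in a few lines: the aperiodic autocorrelations of the two codes sum to an ideal delta, which is what removes range sidelobes after matched filtering. The sketch below uses the standard recursive construction with an illustrative length of 8, not the parameters of the 7.5 MHz system described above.

```python
import numpy as np

def golay_pair(n_iter):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):                           # standard concatenation construction
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)                                  # length-8 complementary pair
acf = lambda s: np.correlate(s, s, mode="full")       # aperiodic autocorrelation
print(acf(a) + acf(b))                                # 2N at zero lag, 0 elsewhere: sidelobes cancel
```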

206 Influence of Transverse Steel and Casting Direction on Shear Response and Ductility of Reinforced Ultra-High Performance Concrete Beams

Authors: Timothy E. Frank, Peter J. Amaddio, Elizabeth D. Decko, Alexis M. Tri, Darcy A. Farrell, Cole M. Landes

Abstract:

Ultra-high performance concrete (UHPC) is a class of cementitious composites with a relatively large percentage of cement, generating high compressive strength. Additionally, UHPC contains dispersed fibers, which control crack width, carry the tensile load across narrow cracks, and limit spalling. These characteristics lend themselves to a wide range of structural applications when UHPC members are reinforced with longitudinal steel. Efficient use of fibers and longitudinal steel is required to keep the lifecycle cost of reinforced UHPC members competitive; this requires full utilization of both the compressive and tensile qualities of the reinforced cementitious composite. The objective of this study is to investigate the shear response of steel-reinforced UHPC beams to guide design decisions that keep initial costs reasonable, limit serviceability crack widths, and ensure a ductile structural response and failure path. Five small-scale reinforced UHPC beams were tested experimentally, with longitudinal steel, transverse steel, and casting direction varied. Results indicate that an increase in transverse steel in short-spanned reinforced UHPC beams provided additional shear capacity and increased the peak load achieved. Beams with very large longitudinal reinforcement ratios did not reach yield and therefore did not fully utilize the tensile properties of the longitudinal steel. Casting the UHPC beams from the end or from the middle affected load-carrying capacity and ductility, but image analysis determined that the fiber orientation was not significantly different. It is believed that the presence of transverse and longitudinal steel reinforcement minimized the effect of the different UHPC casting directions. Results support recent recommendations in the literature suggesting that a 1% fiber volume fraction is sufficient within UHPC to prevent spalling and provide compressive fracture toughness under extreme loading conditions.

Keywords: Fiber orientation, reinforced ultra-high performance concrete beams, shear, transverse steel.

205 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students’ satisfaction with such an error correction process. This is a quasi-experimental study with a single group, in which data were collected five times, before and after four experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback in the students’ texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were five writing tests of short texts and a questionnaire. Based on formative evaluation of the students’ writing ability prior to and after each of the four experiments, the research findings show higher student scores with a statistically significant difference at the 0.00 level. Moreover, in terms of the effect size of the process, the means of the students’ scores prior to and after the four experiments give d equal to 0.6801, 0.5093, 0.5071, and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students’ overall satisfaction with the participatory error correction process is at a high level (mean = 4.39, S.D. = 0.76).

Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.
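
The abstract reports effect sizes d but does not state the exact formula used, so the snippet below only shows one common convention (mean pre/post difference divided by the pooled standard deviation) with made-up scores, as a hedged illustration of how such values are obtained.

```python
import math

def cohens_d(pre, post):
    n1, n2 = len(pre), len(post)
    m1, m2 = sum(pre) / n1, sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd                      # standardized pre-to-post gain

pre_scores = [10, 12, 11, 13, 9]                      # made-up writing-test scores
post_scores = [13, 14, 13, 15, 12]
print(round(cohens_d(pre_scores, post_scores), 4))
```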

204 A Design of Elliptic Curve Cryptography Processor Based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, Daohong Yang

Abstract:

Data encryption is the foundation of today’s communication, and improving the speed of data encryption and decryption is always an important goal for high-speed applications. This paper proposes an elliptic curve crypto processor architecture based on the SM2 prime field. Regarding the hardware implementation, we optimized the algorithms in different stages of the structure. For modular multiplication on the finite field, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined structure in the algorithm implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. For the parallel scheduling of point operations on the elliptic curve, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we add mutual pre-computation to the point addition and point doubling operations to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms, and each 256-bit scalar multiplication operation takes 0.275 ms. The performance for handling scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).

Keywords: Elliptic curve cryptosystems, SM2, modular multiplication, point multiplication.
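
The hardware pipeline described above (Karatsuba-Ofman multiplier, fast reduction, radix-4 extended Euclid, parallel Jacobian point operations) cannot be reproduced meaningfully in a few lines, so the sketch below is only a functional software reference of the underlying point arithmetic: affine point addition/doubling and double-and-add scalar multiplication on a textbook toy curve. The curve parameters are placeholders and must be replaced by the SM2 recommended values for any real check.

```python
# Toy short Weierstrass curve y^2 = x^3 + ax + b (mod p); replace with the SM2
# recommended parameters for a real check. Requires Python 3.8+ for pow(x, -1, p).
p, a, b = 17, 2, 2
G = (5, 1)                                            # point on the toy curve

def point_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p      # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p             # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = None
    while k:                                          # double-and-add, least-significant bit first
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

print(scalar_mult(9, G))
```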

203 Comparative Study of Tensile Properties of Cast and Hot Forged Alumina Nanoparticle Reinforced Composites

Authors: S. Ghanaraja, Subrata Ray, S. K. Nath

Abstract:

Particle-reinforced metal matrix composites (MMCs) succeed in synergizing a metallic matrix with ceramic particle reinforcements to give improved strength, particularly at elevated temperatures, but agglomeration and porosity adversely affect the ductility of the matrix. The present study investigates the tensile properties of a cast and hot forged composite reinforced simultaneously with coarse and fine particles. Nano-sized alumina particles were generated by milling a mixture of aluminum and manganese dioxide powders. The milled particles, after drying, were added to molten metal and the resulting slurry was cast. The microstructure of the composites shows good distribution of both size categories of particles without significant clustering. The presence of nanoparticles along with coarser particles in a composite improves both strength and ductility considerably. The delay in debonding of the coarser particles to higher stress is due to the reduced mismatch in extension caused by increased strain hardening in the presence of the nanoparticles. However, addition of the powder mix beyond a limit results in deterioration of mechanical properties, possibly due to clustering of the nanoparticles. The porosity in the cast composite generally increases with increasing addition of powder mix, as observed during processing, and is reduced on forging. The base alloy and the nanocomposites show improvement in flow stress, which can be attributed to the lowering of porosity and to grain refinement as a consequence of forging.

Keywords: Aluminum, alumina, nanoparticle reinforced composites, porosity.

202 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality around a fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
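
The abstract does not give the numerical HVS weights, so the sketch below only shows the mechanical step the coder relies on: scaling each wavelet subband (here with the 9/7 filter, "bior4.4" in PyWavelets) by a perceptual weight before the coefficients are handed to an encoder such as SPIHT; the weights and image are made up.

```python
import numpy as np
import pywt

def perceptually_weight(image, weights, wavelet="bior4.4", level=3):   # bior4.4 = 9/7 filter
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0] * weights.get("LL", 1.0)]
    for lev, (cH, cV, cD) in enumerate(coeffs[1:], start=1):           # coarsest level first
        w = weights.get(lev, 1.0)                                      # one weight per level here
        out.append((cH * w, cV * w, cD * w))
    return out                                        # weighted coefficients handed to the encoder

img = np.random.rand(128, 128)                        # stand-in image
weights = {"LL": 1.0, 1: 1.2, 2: 1.0, 3: 0.6}         # made-up CSF-like weights
weighted = perceptually_weight(img, weights)
print(len(weighted) - 1, "detail levels weighted")
```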

201 A Frame Work for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. Fruit production, which generates seeds, depends on various parameters of the plant, such as shoot length, leaf number, root length and root number. While the plant is growing, some leaves may be lost and some new leaves may appear, so it is very difficult to use the number of leaves to measure the growth of the plant. It is also cumbersome to measure the number of roots and the growth in root length continuously at several time instances after a certain initial period, because roots grow deeper and deeper underground over time. In contrast, the shoot length increases over time and can be measured at different time instances, so the growth of the plant can be measured using shoot length data recorded at different times after plantation. Environmental parameters like temperature, rainfall, humidity and pollution also play a role in yield, and soil, crop and spacing management are taken care of to produce the maximum yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant and, based on error analysis (calculation of the average error), the model with the minimum error has been selected and can be used for the assessment of shoot length at maturity. Finally, all of these methods have been tested with other types of mustard plants, and the soft computing model with the minimum error across all types has been selected for calculating the predicted growth of shoot length. The shoot length at maturity of all types of mustard plants has then been calculated by applying the statistical method to the predicted shoot length data.

Keywords: Fuzzy time series, neural network, forecasting error, average error.
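
As a hedged illustration of the model-selection-by-average-error idea described above, the snippet below fits two simple candidate trend models (standing in for the statistical, fuzzy and neural models compared in the paper) to early shoot-length measurements and keeps the one with the smallest average absolute error; the data values are invented.

```python
import numpy as np

days = np.array([7, 14, 21, 28])
shoot_cm = np.array([3.1, 6.8, 11.2, 14.9])           # made-up shoot-length measurements

candidates = {
    "linear trend": np.poly1d(np.polyfit(days, shoot_cm, 1)),
    "quadratic trend": np.poly1d(np.polyfit(days, shoot_cm, 2)),
}
avg_error = {name: float(np.mean(np.abs(m(days) - shoot_cm))) for name, m in candidates.items()}
best = min(avg_error, key=avg_error.get)
print(avg_error)
print("selected:", best, "| predicted shoot length at day 60:",
      round(float(candidates[best](60)), 1), "cm")
```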

200 Advantages of Large Strands in Precast/Prestressed Concrete Highway Application

Authors: Amin Akhnoukh

Abstract:

The objective of this research is to investigate the advantages of using large-diameter 0.7 inch prestressing strands in pretensioning applications. The advantages of large-diameter strands are mainly realized in heavy construction applications. Bridges and tunnels are subjected to higher daily traffic with an exponential increase in trucks' ultimate weights, which raises the demand for higher structural capacity of bridges and tunnels. In this research, precast prestressed I-girders were considered as a case study. Flexural capacities of girders fabricated using 0.7 inch strands and different concrete strengths were calculated and compared to the capacities of 0.6 inch strand girders fabricated using equivalent concrete strengths. The effect of bridge deck concrete strength on the composite deck-girder section capacity was investigated due to its possible effect on the final section capacity. Finally, a comparison was made between the bridge cross-sections of girders designed using regular 0.6 inch strands and the large-diameter 0.7 inch strands. The research findings showed that the structural advantages of 0.7 inch strands allow for using fewer bridge girders, reduced material quantities, and lighter-weight members. The structural advantages of 0.7 inch strands are maximized when high strength concrete (HSC) is used in girder fabrication and concrete of minimum 5 ksi compressive strength is used in pouring bridge decks. The use of 0.7 inch strands in the bridge industry can partially contribute to the improvement of bridge conditions, minimize construction cost, and reduce the construction duration of a project.

Keywords: 0.7 Inch Strands, I-Girders, Pretension, Flexure Capacity

199 Analyzing the Impact of Spatio-Temporal Climate Variations on the Rice Crop Calendar in Pakistan

Authors: Muhammad Imran, Iqra Basit, Mobushir Riaz Khan, Sajid Rasheed Ahmad

Abstract:

The present study investigates the space-time impact of climate change on the rice crop calendar in tropical Gujranwala, Pakistan. The climate change impact was quantified through climatic variables, whereas the existing calendar of the rice crop was compared with the phenological stages of the crop, depicted through the time series of the Normalized Difference Vegetation Index (NDVI) derived from Landsat data for the decade 2005-2015. A local-maxima approach was applied to the NDVI time series to compute the rice phenological stages. Panel models with fixed and cross-section fixed effects were used to establish the relation between the climatic parameters and the NDVI time series across villages and across rice growing periods. Results show that the climatic parameters have a significant impact on the rice crop calendar. Moreover, the fixed effect model is a significant improvement over the cross-section fixed effect model (R-squared equal to 0.673 vs. 0.0338). We conclude that high inter-annual variability of the climatic variables causes high variability of NDVI and, thus, a shift in the rice crop calendar. Moreover, the inter-annual (temporal) variability of the rice crop calendar is high compared to the inter-village (spatial) variability. We suggest that local rice farmers adapt to this change in the rice crop calendar.

Keywords: Landsat NDVI, panel models, temperature, rainfall.
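
A minimal sketch of a two-way fixed-effects panel regression of NDVI on climatic variables, in the dummy-variable form, is given below using statsmodels; the column names, villages and synthetic data are placeholders rather than the Gujranwala data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = [{"village": v, "year": y,
         "temp": 25 + rng.normal(0, 2), "rain": 600 + rng.normal(0, 80)}
        for v in range(20) for y in range(2005, 2016)]
df = pd.DataFrame(rows)
df["ndvi"] = 0.4 + 0.01 * df["temp"] + 0.0002 * df["rain"] + rng.normal(0, 0.03, len(df))

# Two-way fixed effects via village and year dummies (within-estimator equivalent)
fe = smf.ols("ndvi ~ temp + rain + C(village) + C(year)", data=df).fit()
pooled = smf.ols("ndvi ~ temp + rain", data=df).fit()
print("pooled R2:", round(pooled.rsquared, 3), "| fixed-effects R2:", round(fe.rsquared, 3))
print(fe.params[["temp", "rain"]])
```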

198 Study on the Addition of Solar Generating and Energy Storage Units to a Power Distribution System

Authors: T. Costa, D. Narvaez, K. Melo, M. Villalva

Abstract:

The installation of micro-generators based on renewable energy in power distribution systems has increased in recent years, with the main renewable sources being solar and wind. Due to the intermittent nature of renewable energy sources, such micro-generators produce time-varying energy which, at certain times of the day, does not correspond to the peak energy consumption of end users. For this reason, the use of energy storage units connected to the grid contributes to the proper leveling of the buses' voltage levels according to Brazilian energy quality standards. In this work, the effect of the addition of a photovoltaic solar generator and an energy storage unit on the busbar voltages of an electric system is analyzed. The consumption profile is defined as the average hourly use of appliances in a common residence, and the generation profile is defined as a function of the solar irradiation available at a locality. The power summation method is validated against an analytical calculation and is used to calculate the magnitudes and angles of the voltages at the buses of an electrical system based on the IEEE standard, at each hour of the day and with defined load and generation profiles. The results show that bus 5 presents the worst voltage level at the power consumption peaks and stabilizes within the appropriate range with the inclusion of the energy storage during the night-time period. The solar generator maintains an improved voltage level during the period when it receives solar irradiation, with production peaking around 12 pm (without exceeding the appropriate maximum voltage levels).

Keywords: Energy storage, power distribution system, solar generator, voltage level.
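
The power summation idea can be illustrated with a backward/forward sweep on a small radial feeder: load currents are accumulated from the end of the feeder backwards, voltages are then updated forwards through the branch impedances, and a solar or storage injection appears simply as a negative load that lifts the local voltage. The feeder data below are illustrative, not the IEEE test system used in the paper.

```python
import numpy as np

# Series impedance of each branch of a 3-bus chain feeder (per unit, illustrative)
z = np.array([0.02 + 0.04j, 0.03 + 0.06j, 0.03 + 0.06j])

def solve_feeder(s_load, v_slack=1.0 + 0j, iters=30):
    """s_load: complex power drawn at buses 1..n (pu); a negative value is an injection."""
    v = np.full(len(s_load), v_slack, dtype=complex)
    for _ in range(iters):
        i_load = np.conj(s_load / v)                  # load currents at each bus
        i_branch = np.cumsum(i_load[::-1])[::-1]      # backward sweep: branch currents
        v = v_slack - np.cumsum(z * i_branch)         # forward sweep: voltage drops
    return v

base_load = np.array([0.10 + 0.05j, 0.15 + 0.07j, 0.20 + 0.10j])
pv_storage = np.array([0.0, 0.0, 0.15 + 0j])          # PV/storage injection at the end bus
print("|V| without support:", np.round(np.abs(solve_feeder(base_load)), 4))
print("|V| with injection :", np.round(np.abs(solve_feeder(base_load - pv_storage)), 4))
```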

197 Improvement of Ventilation and Thermal Comfort Using the Atrium Design for Traditional Folk Houses-Fujian Earthen Building

Authors: Ying-Ming Su

Abstract:

The Fujian earthen building, known in China as a classic of ecological building, was inscribed on the UNESCO World Heritage List in 2008, and its design strategy can be applied to modern architectural planning and design. This study chose two different cases of earthen buildings in Fujian (a round atrium, Er-Yi Building, and a double round atrium, Zhen-Chen Building) to compare the ventilation effects of different atrium forms. We adopted field measurements and computational fluid dynamics (CFD) simulations of temperature, humidity, and the wind environment to identify the relationship between the external environment, the atrium and comfort, and to confirm the relationship with the atrium H/W (height/width) ratio. Results indicate that, through the atrium convection effect, natural wind is guided to each surrounding space and indoor comfort is maintained. The smaller the atrium H/W ratio, i.e. the ratio between the height and the width of an atrium, the greater the wind speed generated within the street valley; moreover, this wind speed is very close to the reference wind speed. The field measurements verify that the value of H/W has a great influence on solar radiation heat and sunshine shadows. The ventilation efficiency is: Er-Yi Building (H/W = 0.2778) > Zhen-Chen Building (H/W = 0.3670). Comparing cases with the same shape but different H/W, the different-sized patios allow the airflow to revolve in the atriums and be brought into each interior space. The atrium settings meet the need for building ventilation and can adjust the humidity and temperature within the buildings, creating a good ventilation effect.

Keywords: Traditional folk houses, Atrium, Earthen building, Ventilation, Building microclimate, PET.

196 Influence of Sire Breed, Protein Supplementation and Gender on Wool Spinning Fineness in First-Cross Merino Lambs

Authors: A. E. O. Malau-Aduli, B. W. B. Holman, P. A. Lane

Abstract:

Our objectives were to evaluate the effects of sire breed, type of protein supplement, level of supplementation and sex on wool spinning fineness (SF), its correlations with other wool characteristics and prediction accuracy in F1 Merino crossbred lambs. Texel, Coopworth, White Suffolk, East Friesian and Dorset rams were mated with 500 purebred Merino dams at a ratio of 1:100 in separate paddocks within a single management system. The F1 progeny were raised on ryegrass pasture until weaning, before forty lambs were randomly allocated to treatments in a 5 x 2 x 2 x 2 factorial experimental design representing 5 sire breeds, 2 supplementary feeds (canola or lupins), 2 levels of supplementation (1% or 2% of liveweight) and sex (wethers or ewes). Lambs were supplemented for six weeks after an initial three weeks of adjustment, wool sampled at the commencement and conclusion of the feeding trial and analyzed for SF, mean fibre diameter (FD), coefficient of variation (CV), standard deviation, comfort factor (CF), fibre curvature (CURV), and clean fleece yield. Data were analyzed using mixed linear model procedures with sire fitted as a random effect, and sire breed, sex, supplementary feed type, level of supplementation and their second-order interactions as fixed effects. Sire breed (P<0.001), sex (P<0.004), sire breed x level of supplementation (P<0.004), and sire breed x sex (P<0.019) interactions significantly influenced SF. SF ranged from 22.7 ± 0.2μm in White Suffolk-sired lambs to 25.1 ± 0.2μm in East Friesian crossbred lambs. Ewes had higher SF than wethers. There were significant (P<0.001) correlations between SF and FD (0.93), CV (0.40), CF (-0.94) and CURV (-0.12). Its strong relationship with other wool quality traits enabled accurate predictions explaining up to about 93% of the observed variation. The interactions between sire breed genetics and nutrition will have an impact on the choices that dual-purpose sheep producers make when selecting sire breeds and protein supplementary feed levels to achieve optimal wool spinning fineness at the farmgate level. This will facilitate selective breeding programs being able to better account for SF and its interactions with other wool characteristics.

Keywords: Merino crossbred sheep, protein supplementation, sire breed, wool quality, wool spinning fineness
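
A minimal statsmodels sketch of the kind of mixed linear model described above, with sire as a random effect and breed, sex, feed type and level as fixed effects, is shown below; the data frame is synthetic and only mirrors the stated design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for i, breed in enumerate(["Texel", "Coopworth", "WhiteSuffolk", "EastFriesian", "Dorset"]):
    for sire in range(2):                             # sires form the grouping (random) factor
        for feed in ["canola", "lupins"]:
            for level in ["1pct", "2pct"]:
                for sex in ["wether", "ewe"]:
                    rows.append({"breed": breed, "sire": f"{breed}_{sire}", "feed": feed,
                                 "level": level, "sex": sex,
                                 "SF": 23 + 0.4 * i + (0.3 if sex == "ewe" else 0.0)
                                       + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Mixed linear model: fixed effects for breed, sex and the feed x level interaction,
# random intercept for sire.
m = smf.mixedlm("SF ~ breed + sex + feed * level", df, groups=df["sire"]).fit()
print(m.params.head())
```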

195 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and the derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon resulted in the best precision and recall. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall averages for all the images when using these values were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.

Keywords: Agricultural engineering, computer vision, image processing, flower detection.
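
A compressed sketch of the segmentation and scoring steps named above (global brightness split, hue-band segmentation around the reported 0.12-0.18 range, morphological clean-up, and precision/recall/F1) is given below using scikit-image; the image, ground-truth mask and all thresholds other than the hue band are placeholders.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.morphology import binary_opening, remove_small_objects

def detect_flowers(rgb, hue_lo=0.12, hue_hi=0.18):
    hsv = rgb2hsv(rgb)
    dark = hsv[..., 2].mean() < 0.5                   # crude global darker/lighter split
    sat_min = 0.25 if dark else 0.40                  # illustrative per-group saturation cue
    mask = (hsv[..., 0] > hue_lo) & (hsv[..., 0] < hue_hi) & (hsv[..., 1] > sat_min)
    mask = binary_opening(mask)                       # morphological clean-up
    return remove_small_objects(mask, min_size=20)    # size cue

def precision_recall_f1(pred, truth):
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    p, r = tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)
    return p, r, 2 * p * r / (p + r + 1e-9)

rgb = np.random.rand(120, 120, 3)                     # placeholder image, not a greenhouse photo
truth = np.zeros((120, 120), dtype=bool)
truth[40:60, 40:60] = True                            # placeholder ground-truth mask
print(precision_recall_f1(detect_flowers(rgb), truth))
```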

194 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small or large scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.

Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.

193 Effect of Cold, Warm or Contrast Therapy on Controlling Knee Osteoarthritis Associated Problems

Authors: Amal E. Shehata, Manal E. Fareed

Abstract:

Osteoarthritis (OA) is the most prevalent and by far the most common debilitating form of arthritis and can be defined as a degenerative condition affecting the synovial joint. Patients suffering from osteoarthritis often complain of a dull aching pain on movement. Physical agents such as heat or cold therapy can counteract the painful process when correctly indicated and used. Aim: This study was carried out to compare the effect of cold, warm and contrast therapy on controlling knee osteoarthritis-associated problems. Setting: The study was carried out in the orthopedic outpatient clinics of Menoufia University and Teaching Hospitals, Egypt. Sample: A convenience sample of 60 adult patients with unilateral knee osteoarthritis. Tools: Three tools were utilized to collect the data. Tool I: an interviewing questionnaire, comprising three parts covering sociodemographic data, medical data and adverse effects of the treatment protocol. Tool II: the Knee Injury and Osteoarthritis Outcome Score (KOOS), which consists of five main parts. Tool III: a 0-10 numeric pain rating scale. Results: The total knee symptom score decreased from moderate symptoms pre-intervention to mild symptoms after the warm and contrast methods of therapy, but contrast therapy had a more significant effect in reducing the knee symptoms and pain than the other methods. Conclusions: All three methods of therapy resulted in improvement in all knee symptoms and pain, but the most appropriate treatment protocol to relieve symptoms and pain was contrast therapy.

Keywords: Knee Osteoarthritis, Cold, Warm and Contrast Therapy.

192 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with an extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study, we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and improvement of the prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and of other key parameters is investigated through simulation results. An extended Kalman filter is employed in order to improve the sequential learning and the regulation of the learning rate of the feedforward neural networks. This training approach has vital features for training the network when the signals have a chaotic or non-stationary sequential pattern. Minimization of the variance in each step of the computation, and hence smoothing of the tracking, was observed in the results, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirmed satisfactory performance of both neural networks with the modified parameterization in tracking nonlinear signals.

Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
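
A compact NumPy echo state network with a leaking rate is sketched below on one-step-ahead prediction of a chaotic logistic-map series; for brevity the readout is fitted with ridge regression rather than the extended Kalman filter used in the paper, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Chaotic test signal: logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(1200)
x[0] = 0.3
for t in range(1199):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

n_res, leak, rho = 200, 0.3, 0.9                      # reservoir size, leaking rate, spectral radius
W_in = rng.uniform(-0.5, 0.5, n_res)                  # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # reservoir weights
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))       # rescale to the target spectral radius

def run_reservoir(u):
    states, s = np.zeros((len(u), n_res)), np.zeros(n_res)
    for t, ut in enumerate(u):
        s = (1 - leak) * s + leak * np.tanh(W_in * ut + W @ s)   # leaky state update
        states[t] = s
    return states

states = run_reservoir(x[:-1])                        # states[t] is driven by x[0..t]
S, y = states[100:999], x[101:1000]                   # training pairs after a 100-step washout
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)   # ridge readout
pred = states[999:] @ W_out                           # one-step-ahead predictions of x[1000:]
print("test MSE:", np.mean((pred - x[1000:]) ** 2))
```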

191 Inquiry on the Improvement Teaching Quality in the Classroom with Meta-Teaching Skills

Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin

Abstract:

When teachers reflect on and evaluate whether their teaching methods actually have an impact on students’ learning, they adjust their practices accordingly. This inevitably improves their students’ learning and performance. The meta-teaching approach can invigorate and create a passion for teaching, and thus helps to increase commitment to and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Teachers’ metacognitive thinking includes the use of metacognitive knowledge, which consists of different types of knowledge: declarative, procedural and conditional. The ability of the teachers to plan, monitor and evaluate the teaching process can also be determined. The study was conducted on 377 graduate teachers in the Klang Valley, Malaysia, selected using the stratified sampling method. The metacognitive teaching inventory, consisting of 24 items, is called InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed a high mean level for two components of metacognitive knowledge, declarative knowledge (mean = 4.16) and conditional knowledge (mean = 4.11), whereas the mean for procedural knowledge is 4.00 (moderately high). Similarly, the levels of knowledge in monitoring (mean = 4.11) and evaluating (mean = 4.00) indicate high scores, while planning (mean = 4.00) is moderately high among teachers. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teachers’ teaching in the classroom. Thus, the researchers recommend that further studies focus on training programs for teachers on metacognitive skills and on developing creative thinking among teachers.

Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching and regulation of cognitive.

190 Creep Behaviour of Heterogeneous Timber-UHPFRC Beams Assembled by Bonding: Experimental and Analytical Investigation

Authors: K. Kong, E. Ferrier, L. Michel

Abstract:

The purpose of this research was to investigate the creep behaviour of heterogeneous timber-UHPFRC beams. New developments have been made to further improve structural performance, such as strengthening the timber (glulam) beam by bonding a composite material combining an ultra-high performance fibre reinforced concrete (UHPFRC) layer internally reinforced with or without carbon fibre reinforced polymer (CFRP) bars. However, in the design of wooden structures, in addition to the criteria of strength and stiffness, deformability due to the creep of wood, especially in horizontal elements, is also a design criterion. Glulam, UHPFRC and CFRP may be an interesting material combination to address the creep behaviour of composite structures made of different materials with different rheological properties. In this paper, we describe an experimental and analytical investigation of the creep performance of glulam-UHPFRC-CFRP beams assembled by bonding. The experimental creep investigations were conducted in different environments, indoors and outdoors, under constant loading for approximately a year. The measured results are compared with numerical ones obtained by an analytical model developed to predict the creep response of the glulam-UHPFRC-CFRP beams based on the creep characteristics of the individual components. The results show that heterogeneous glulam-UHPFRC beams provide an improvement in both strength and stiffness, and can also effectively reduce the creep deflection of wooden beams.

Keywords: Carbon fibre-reinforced polymer (CFRP) bars, creep behaviour, glulam, ultra-high performance fibre reinforced concrete (UHPFRC).

189 Maize Tolerance to Natural and Artificial Infestation with Diabrotica virgifera virgifera Eggs

Authors: Snežana T. Tanasković, Sonja M. Gvozdenac, Branka D. Popović, Vesna M. Đurović, Matthias Erb

Abstract:

Western corn rootworm (WCR, Diabrotica virgifera ssp. virgifera, Coleoptera, Chrysomelidae) is economically the most important pest of maize worldwide. The natural WCR population is already very abundant in Serbian fields and keeps increasing each year. Tolerance is recognized by a larger root size and greater root regrowth, while severe larval injuries cause a lack of compensatory regrowth and lead to a reduction in plant growth and yield. The aim of this research was to evaluate the tolerance of the commercial Serbian maize hybrid NS 640 under natural WCR infestation and under artificial infestation, and to obtain information about its tolerance to WCR larval feeding in two consecutive years. Field experiments were conducted in 2015 and 2016 in Bečej (Vojvodina province, Serbia). In the experimental field, 96 plants were selected, marked and arranged in 48 pairs of two plants each. The first plant of each pair was artificially infested with 4 mL of WCR egg suspension in agar (550 eggs plant-1) in the root zone (D plant), while the second plant served as the control (C plant) and was injected with 4 mL of distilled water in the root zone. The experimental field was inspected weekly. Hybrid tolerance was assessed based on the root injury level and root mass. Root injury was rated using the 1-6 Node-Injury Scale during the last field inspection (September-October). Comparing the root injuries on D and C plants in 2015, more severe damage was recorded on D plants (12 plants rated 5 and 17 plants rated 6) compared to C plants (2 plants rated 5 and 8 plants rated 6). Also, the highest number of plants with healthy roots (rate 1) was registered in the control (25 plants), while only 4 D plants were rated as injury level 1. In 2016, the root injuries caused by WCR larvae on D and C plants did not differ significantly. The reason is the difference in climatic conditions between the years: 2015 was extremely dry and more suitable for WCR larval development and movement in the soil than 2016, so more severe damage appeared on the artificially infested plants (D plants). Root mass was strongly correlated with the level of root injury but did not differ significantly between D and C plants in either year.

Keywords: D. v. virgifera, maize, root injury, tolerance.

188 A Preliminary Analysis of Sustainable Development in the Belgrade Metropolitan Area

Authors: S. Zeković, M. Vujošević, T. Maričić

Abstract:

The paper provides a comprehensive analysis of sustainable development in the Belgrade Metropolitan Area (BMA, level NUTS 2), preliminarily evaluating three chosen components: 1) economic growth and developmental changes; 2) competitiveness; and 3) territorial concentration and industrial specialization. First, we identified the main results of developmental changes and economic growth by applying shift-share analysis at the metropolitan level. Second, the empirical evaluation of competitiveness in the BMA is based on the analysis of the absolute and relative values of eight indicators using the Spider method. The paper shows that consideration of the national share, industrial mix and metropolitan/regional share in the total shift-share of the BMA, as well as the economic/functional specialization of the BMA, indicates a very strong process of deindustrialization. The allocative component of the BMA's economic growth has a positive value, reflecting above-average sector productivity compared to the national average. Third, the important positive role of the metropolitan/regional component in the decomposition of the BMA's economic growth is highlighted as one of the key results. Finally, a comparative analysis of industrial territorial concentration in the BMA in relation to Serbia is based on the location quotient (LQ), or Balassa index, as a valid measure. The results indicate absolute and relative decreases in industrial territorial concentration as well as inefficient utilization of territorial capital in the BMA. The results are important for increasing regional competitiveness and improving territorial distribution in this area, as well as for improving sustainable metropolitan and sector policies, planning and governance at this level.

Keywords: Belgrade Metropolitan Area (BMA), Comprehensive analysis/evaluation, economic growth and competitiveness, sustainable development.
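
The location quotient (Balassa index) used above as the concentration measure is simply the regional employment share of a sector divided by its national share; the tiny example below uses invented figures.

```python
# LQ_i = (regional employment share of sector i) / (national employment share of sector i);
# LQ > 1 means above-average territorial concentration of the sector. Figures are invented.
regional = {"manufacturing": 120, "services": 610, "construction": 70}    # thousand employees
national = {"manufacturing": 520, "services": 1480, "construction": 200}

reg_total, nat_total = sum(regional.values()), sum(national.values())
lq = {s: round((regional[s] / reg_total) / (national[s] / nat_total), 3) for s in regional}
print(lq)      # an LQ below 1 for manufacturing would be consistent with deindustrialization
```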

187 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe consisting of an ordering of tasks to produce the desired product. In this research work, we focus on pharmaceutical production processes requiring the culture of a microorganism population (i.e. bacteria, yeasts or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a cell of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little, if at all. In this research work, we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of the backup culture tasks) and to at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.

Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.

186 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Moreover, the proposed method can correctly distinguish homogeneous image parcels.

Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.
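
The coupled-MRF refinement is beyond a few lines, so the sketch below only illustrates the marker-controlled watershed stage on a synthetic image with scikit-image; the markers, thresholds and test image are placeholders.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(4)
img = np.zeros((128, 128))
img[20:60, 20:60] = 1.0                               # bright "parcel"
img[70:110, 70:110] = 0.6                             # darker "parcel"
img += rng.normal(0, 0.05, img.shape)                 # noise (Gaussian stand-in for speckle)

gradient = sobel(img)                                 # edge strength guides the flooding
markers = np.zeros(img.shape, dtype=int)
markers[img < 0.2] = 1                                # background seed
markers[img > 0.8] = 2                                # bright-parcel seed
markers[(img > 0.4) & (img < 0.7)] = 3                # darker-parcel seed

labels = watershed(gradient, markers)
print("labels:", np.unique(labels), "| pixels per label:",
      [int(np.sum(labels == k)) for k in (1, 2, 3)])
```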

185 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to assess cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the MCG signal, which is buried in noise, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. This approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.
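
As a baseline for the method discussed above, the sketch below implements a minimal 1-D total-variation denoiser via majorization-minimization (first-order difference matrix, fixed regularization weight); the paper's improvements (second-order difference matrix, peak-window selection of non-zero entries, adaptive constraint parameters) are deliberately not reproduced, and the test signal is synthetic.

```python
import numpy as np

def tv_denoise_mm(y, lam=2.0, n_iter=50, eps=1e-8):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                    # first-order difference matrix, (n-1) x n
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)               # majorizer weights 1 / |D x_k|
        x = np.linalg.solve(np.eye(n) + lam * (D.T * w) @ D, y)   # (I + lam D^T diag(w) D) x = y
    return x

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(100), np.ones(100), 0.3 * np.ones(100)])   # piecewise-constant signal
noisy = clean + rng.normal(0, 0.15, clean.size)
den = tv_denoise_mm(noisy)
print("noisy RMSE   :", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 4))
print("denoised RMSE:", round(float(np.sqrt(np.mean((den - clean) ** 2))), 4))
```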

184 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

Various sources of energy are available worldwide, and among them crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, microbial enhanced oil recovery (MEOR) is a technique which improves oil recovery through the injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve unrecoverable oil held by high capillary forces in the cap rock. Bio-surfactant is a surface-active agent which can reduce the interfacial tension and the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results, and the method is eco-friendly and cost effective compared with other EOR techniques. In our research, on a laboratory scale, we produced bio-surfactant using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand-packed column designed to resemble an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was carried out to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery can be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced the surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

183 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection of Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAMs for the prediction of power consumption at the office building and nation-wide levels. However, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building in an education campus in Singapore, collected between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.

Keywords: Anomaly detection, digital twin, Generalised Additive Model, Power Consumption Model.

182 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach, coupled with hemstitching moves whenever a constraint is violated in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of, and within, the hypercube encompassing and neighbouring the result of the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, both procedures resume the search for the global optimum using this newly found point as the starting vector. The search is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those from the popular MS Excel Solver provided within the MS Office suite, as well as with other published results.
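As a rough illustration of the enumeration phase, the sketch below examines the 3^n integer combinations in a hypercube around a continuous (phase-one) solution and keeps the best feasible one. The choice of the three candidate integers per variable, and the toy objective and constraint, are assumptions made for illustration, since the abstract does not spell them out; the tunneling variant would replace this enumeration with a search for a point of equal objective value on the far side of the barrier.

```python
import itertools
import numpy as np

def enumerate_3n_neighborhood(x_cont, objective, constraints):
    """Phase-two sketch: examine 3^n integer combinations in the hypercube
    around a continuous (phase-one) solution and keep the best feasible one.

    Assumption (not stated in the abstract): the three candidate integer
    values per variable are round(x_i) - 1, round(x_i), round(x_i) + 1.
    Constraints are given in g(x) <= 0 form.
    """
    candidates_per_var = [
        (round(xi) - 1, round(xi), round(xi) + 1) for xi in x_cont
    ]
    best_x, best_f = None, np.inf
    for combo in itertools.product(*candidates_per_var):   # 3^n combinations
        x = np.array(combo, dtype=float)
        if all(g(x) <= 0 for g in constraints):
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f

# Toy usage: minimize a convex quadratic subject to x1 + x2 >= 4, over integers.
obj = lambda x: (x[0] - 1.3) ** 2 + (x[1] - 3.7) ** 2
cons = [lambda x: 4 - x[0] - x[1]]          # 4 - x1 - x2 <= 0
print(enumerate_3n_neighborhood([1.3, 3.7], obj, cons))
```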

Keywords: Constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm.

181 Complexity of Operation and Maintenance in Irrigation Network Management: A Case of the Dez Scheme in the Greater Dezful, Iran

Authors: Najaf Hedayat

Abstract:

Food and fibre production in arid and semi-arid regions has emerged as one of the major challenges for various socio-economic and political reasons, such as food security and self-sufficiency. Productive use of renewable water resources has risen to the top of the decision-making agenda. For this reason, efficient operation and maintenance (O&M) of modern irrigation and drainage schemes has become an indispensable part of agricultural policy making. The aim of this paper is to investigate the complexity of operating and maintaining such schemes, focusing mainly on the challenges that impede, and the opportunities that could enhance, sustainable food and fibre production. The methodology involved using secondary data complemented by routine observations and stakeholder views on issues that influence O&M in the Dez command area. The SPSS program was used as the analytical framework for data analysis and interpretation. Results indicate poor application efficiency in most croplands, much of which is attributed to deficient operation of the conveyance and distribution canals. These, in turn, are reportedly linked to inadequate maintenance of the pumping stations and hydraulic structures such as turnouts, flumes and other control systems, particularly in the secondary and tertiary canals. Results show that these deficiencies have been the major impediment to establishing a regular flow toward the farm gates, which subsequently undermines application efficiency and tillage operations at farm level. The cumulative impact of such deficiencies has been a major cause of the poor crop yield and quality that render the production system in these croplands uneconomic, and the present state might undermine the sustainability of the agricultural system in the command area. The overall conclusion is that present water management is unlikely to be responsive to the challenges the sector faces, and that, in the absence of coherent measures to shift the status quo in favour of more productive resource use, it would be hard to fulfil the objectives of the National Economic and Socio-cultural Development Plans.

Keywords: renewable water resources, Dez scheme, irrigation and drainage, sustainable crop production, O&M.

180 Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

Authors: Carolina B. Baguio

Abstract:

One purpose of robust estimation is to reduce the influence of outliers on the estimates; such outliers arise from gross errors or from contamination by long-tailed distributions. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of the distributional assumptions of the data. It is called adaptive when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to determine the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point and influence function. Specifically, it seeks the magnitude of the trimming proportion of the adaptive trimmed mean that yields efficient and robust estimates of the location parameter for data following a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means, defined as the tail length. Secondly, the asymptotic properties of the tail length and of the trimmed means are investigated. Finally, the efficiency of the adaptive trimmed means, in terms of the standard deviation, is compared for data-driven trimming proportions and for proportions fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the standard deviations of the derived tail lengths for samples of size 40 simulated from a Weibull distribution were computed over 100 iterations using a computer program written in Pascal. The findings reveal that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; that the tail-length measure and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; that the tail length is asymptotically distributed as the ratio of two independent normal random variables; and that the asymptotic variances decrease as the trimming proportions increase. The simulation study reveals empirically that the standard error of the adaptive trimmed mean based on the ratio of tail lengths is smaller, across different trimming proportions, than that of its counterpart with trimming proportions fixed a priori.
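As an illustration of the adaptive idea, the sketch below selects a trimming proportion from a Hogg-type tail-length statistic and then computes the corresponding trimmed mean with SciPy. The tail-length definition, the cut-off values and the mapping to trimming proportions are illustrative assumptions rather than the formulas derived in the paper; only the Weibull shape parameter of 1/2 and the sample size of 40 follow the abstract.

```python
import numpy as np
from scipy import stats

def upper_lower_means(x, frac):
    """Means of the largest and smallest `frac` fraction of the sorted sample."""
    x = np.sort(x)
    k = max(1, int(round(frac * x.size)))
    return x[-k:].mean(), x[:k].mean()

def tail_length(x):
    """Hogg-type tail-length statistic (a stand-in for the paper's
    ratio-of-trimmed-means definition, which the abstract does not give
    explicitly): spread of the extreme 5% relative to the middle 50%."""
    u05, l05 = upper_lower_means(x, 0.05)
    u50, l50 = upper_lower_means(x, 0.50)
    return (u05 - l05) / (u50 - l50)

def adaptive_trimmed_mean(x):
    """Pick the trimming proportion from the data via the tail length,
    then return the corresponding trimmed mean. Cut-offs are hypothetical."""
    q = tail_length(x)
    if q < 2.0:
        p = 0.05          # light tails: trim little
    elif q < 3.0:
        p = 0.15
    else:
        p = 0.25          # heavy tails: trim more
    return stats.trim_mean(x, proportiontocut=p), p

# Sample of size 40 from a Weibull distribution with shape 1/2, mirroring the
# paper's simulation setting.
rng = np.random.default_rng(1)
sample = rng.weibull(0.5, size=40)
print(adaptive_trimmed_mean(sample))
```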

Keywords: Adaptive robust estimate, asymptotic efficiency, breakdown point, influence function, L-estimates, location parameter, tail length, Weibull distribution.
