Search results for: CBR experiments
3001 Removal of Basic Yellow 28 Dye from Aqueous Solutions Using Plastic Wastes
Authors: Nadjib Dahdouh, Samira Amokrane, Elhadj Mekatel, Djamel Nibou
Abstract:
The removal of Basic Yellow 28 (BY28) dye from aqueous solutions by waste PMMA plastic was investigated. The characteristics of the waste PMMA were determined by SEM, FTIR and chemical composition analysis. The effects of solution pH, initial BY28 concentration C, solid/liquid ratio R, and temperature T were studied in batch experiments. The Freundlich and Langmuir models were applied to the adsorption process, and the equilibrium data were found to follow the Langmuir isotherm well. The kinetics of BY28 adsorption on PMMA were evaluated with the pseudo-first-order and pseudo-second-order models, both of which correlated well with the experimental data. The intraparticle diffusion model was also applied. The thermodynamic parameters of BY28 adsorption on PMMA, namely the enthalpy ∆H°, entropy ∆S° and free energy ∆G°, were determined. The negative values of the Gibbs free energy ∆G° indicated that the adsorption of BY28 by PMMA is spontaneous, the negative values of ∆H° revealed the exothermic nature of the process, and the negative values of ∆S° suggest the stability of BY28 on the surface of the waste PMMA.
Keywords: removal, waste PMMA, BY28 dye, equilibrium, kinetic study, thermodynamic study
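The Langmuir fit and thermodynamic analysis described above can be sketched numerically. The equilibrium data and the Langmuir constant below are hypothetical illustrations, not the study's measurements, and a full treatment would make the equilibrium constant dimensionless before taking its logarithm.

```python
import math

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) -- illustrative only.
qmax_true, KL_true = 50.0, 2.0
Ce = [0.5, 1.0, 2.0, 4.0, 8.0]
qe = [qmax_true * KL_true * c / (1 + KL_true * c) for c in Ce]

# Linearized Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax), fit y = a*x + b.
x, y = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b = my - a * mx
qmax, KL = 1 / a, a / b          # recovered Langmuir parameters

# Gibbs free energy of adsorption at 25 degrees C: dG = -R*T*ln(K)
R, T = 8.314, 298.15
dG = -R * T * math.log(KL)       # J/mol; negative value => spontaneous adsorption
```

With noiseless data the linearization recovers the parameters exactly; with real measurements, goodness-of-fit (R²) of this line against the Freundlich linearization is what distinguishes the two isotherms.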
Procedia PDF Downloads 151
3000 Graphic Calculator Effectiveness in Biology Teaching and Learning
Authors: Nik Azmah Nik Yusuff, Faridah Hassan Basri, Rosnidar Mansor
Abstract:
The purpose of this study is to determine the effectiveness of using graphic calculators (GC) with the Calculator-Based Laboratory 2 (CBL2) in the teaching and learning of Form Four biology for the topics Nutrition, Respiration and Dynamic Ecosystem. Sixty Form Four science-stream students participated in this study, divided equally into treatment and control groups. The treatment group used the GC with CBL2 during experiments, while the control group used ordinary conventional laboratory apparatus. The instruments in this study were a set of pre- and post-tests and a questionnaire. A t-test was used to compare the students' biology achievement, while descriptive statistics were used to analyze the questionnaire outcomes. The findings of this study indicated that the use of the GC with CBL2 in biology had a significant positive effect. The highest mean was 4.43, for the item stating that the GC with CBL2 saved time in collecting experimental results. The second highest mean was 4.10, for the item stating that the GC with CBL2 saved time in drawing and labelling graphs. The questionnaire outcomes also showed that the GC with CBL2 was easy to use and saved time. Thus, teachers should use the GC with CBL2 in support of the Malaysian Ministry of Education's efforts to encourage technology-enhanced lessons.
Keywords: biology experiments, Calculator-Based Laboratory 2 (CBL2), graphic calculators, Malaysia Secondary School, teaching/learning
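The achievement comparison described above can be sketched with a two-sample t statistic. The post-test scores below are hypothetical, and Welch's unequal-variance form is used here as one common choice; the study does not specify which variant was applied.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

treatment = [72, 85, 78, 90, 66, 81]   # hypothetical post-test scores
control   = [65, 70, 62, 74, 58, 69]
t, df = welch_t(treatment, control)     # large positive t favours the treatment group
```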
Procedia PDF Downloads 402
2999 JaCoText: A Pretrained Model for Java Code-Text Generation
Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri
Abstract:
Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged in automatic programming language code generation, the task of translating natural language instructions into source code. Although well-known pre-trained language generation models have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network that aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we draw on findings from the state of the art to (1) initialize our model from powerful pre-trained models, (2) explore additional pretraining on our Java dataset, (3) conduct experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during fine-tuning. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
Keywords: Java code generation, natural language processing, sequence-to-sequence models, transformer neural networks
Procedia PDF Downloads 283
2998 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis
Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov
Abstract:
Nowadays, most oil reserves in Russia and all over the world are hard to recover, so oil companies are searching for new sources of hydrocarbon production. One such source is high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes with a low prediction ratio: only one in five wells is drilled through an unconventional reservoir, while in the others the kerogen has low thermal maturity and the rocks are poorly petroliferous. There is therefore a need for tertiary methods of in-situ kerogen cracking and oil production. Laboratory hydrous pyrolysis experiments on Bazhenov formation rocks were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica 15-90 wt.%, clays 5-50 wt.%, carbonates 0-30 wt.%, kerogen 1-25 wt.%) and thermal maturities (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g weight were placed in the retort, covered with water and heated to temperatures from 250 to 400°C, with experiment durations from several hours to one week. After the experiments, the retort was cooled to room temperature; the generated hydrocarbons were extracted with hexane, separated from the solvent and weighed. The molecular composition of the synthesized oil was then investigated via GC-MS chromatography. The characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks.
The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of the initial organic matter content of the rocks. At higher temperatures and longer heating times, secondary cracking of the generated hydrocarbons occurs: the mass of produced oil decreases, and its composition contains more hydrocarbons that must be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals act as catalysts. Selecting the heating conditions allows synthesized oil with a specified composition to be produced. Kerogen investigations after heating showed that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil. This yield results from the formation of gaseous hydrocarbons due to secondary cracking and from the aromatization and coaling of kerogen. Future investigations will aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production. The evaluated trends could be applied to in-situ oil generation from shale rocks by thermal action.
Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil
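The strong temperature sensitivity of the oil yield described above is what first-order kerogen conversion under Arrhenius kinetics predicts. The sketch below uses illustrative kinetic parameters (pre-exponential factor A and activation energy Ea), not values derived from the Bazhenov experiments.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def conversion(T_K, t_s, A=1e13, Ea=220e3):
    """First-order kerogen conversion X = 1 - exp(-k t), with k = A exp(-Ea/RT)."""
    k = A * math.exp(-Ea / (R * T_K))   # rate constant, 1/s
    return 1 - math.exp(-k * t_s)

t_12h = 12 * 3600
x_300 = conversion(273.15 + 300, t_12h)   # conversion after 12 h at 300 C
x_350 = conversion(273.15 + 350, t_12h)   # markedly higher at 350 C
```

A 50°C increase raises the conversion by more than an order of magnitude for these parameters, mirroring the observation that heating below 300°C yields too little oil to be economic.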
Procedia PDF Downloads 113
2997 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection
Authors: Hussin K. Ragb, Vijayan K. Asari
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
Keywords: pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor
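A minimal sketch of the CSLBP texture operator described above, assuming 8 neighbours at radius 1, i.e. 4 center-symmetric pairs yielding a 4-bit code (0-15) per pixel; the radius and threshold are illustrative choices, not the paper's settings.

```python
def cslbp(img, r=1, t=0.0):
    """Center-Symmetric LBP: each pixel compares its 4 center-symmetric
    neighbour pairs, producing a 4-bit code in 0..15."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    # (dy, dx) of one neighbour in each pair; the partner is (-dy, -dx)
    pairs = [(-r, -r), (-r, 0), (-r, r), (0, r)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            code = 0
            for k, (dy, dx) in enumerate(pairs):
                if img[y + dy][x + dx] - img[y - dy][x - dx] > t:
                    code |= 1 << k
            out[y][x] = code
    return out

flat = [[7] * 4 for _ in range(4)]
codes = cslbp(flat)   # a flat region yields code 0 everywhere
```

Per the abstract, histograms of these codes over local regions, concatenated with oriented-phase histograms, would form the FST descriptor fed to the linear SVM.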
Procedia PDF Downloads 486
2996 A Laundry Algorithm for Colored Textiles
Authors: H. E. Budak, B. Arslan-Ilkiz, N. Cakmakci, I. Gocek, U. K. Sahin, H. Acikgoz-Tufan, M. H. Arslan
Abstract:
The aim of this study is to design a novel laundry algorithm for colored textiles that have a significant decoloring problem. For the experimental work, a bleached knitted single jersey fabric made of 100% cotton and dyed with reactive dyestuff was utilized: according to a survey, cotton textiles are the products most demanded by consumers in the textile market, and reactive dyestuffs are those most commonly used in the textile industry for dyeing cotton products. The fabric used in this study was therefore selected and purchased in accordance with the survey results. Samples cut from this fabric were dyed with different dyeing parameters using Remazol Brilliant Red 3BS dyestuff in a Gyrowash machine under laboratory conditions. From the alternative reactive-dyed cotton fabric samples, those with a high tendency to color loss were identified and examined. The parameters of the dyeing processes used for these samples were evaluated, and the dyeing process causing the highest tendency to color loss was selected, in order to reveal clearly the level of improvement in color loss achieved in this study. Afterwards, all of the untreated fabric samples were dyed with the selected process. When dyeing was completed, an experimental design for the laundering process was created with the Minitab® program, considering temperature, time and mechanical action as parameters. All washing experiments were performed in a domestic washing machine: 16 experiments with 8 different experimental conditions and 2 repeats for each condition. After each washing experiment, water samples from the main wash of the laundering process were measured with a UV spectrophotometer.
The values obtained were compared with the calibration curve of the dyestuff used in the dyeing process. The results of the washing experiments were statistically analyzed with the Minitab® program. Based on the results, the most suitable washing algorithm for domestic washing machines, in terms of temperature, time and mechanical action, was chosen to minimize fabric color loss. The laundry algorithm proposed in this study minimizes the color loss of colored textiles in washing machines by eliminating the negative effects of the laundering parameters on the color of textiles, without compromising proper basic cleaning action. Since fabric color loss is minimized with this washing algorithm, dyestuff residues will be lower in the grey water released from the laundering process. In addition, with this laundry algorithm it is possible to wash other types of textile products with a proper cleaning effect and minimized color loss.
Keywords: color loss, laundry algorithm, textiles, domestic washing process
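The comparison of wash-water absorbance against the dye calibration curve can be sketched with the Beer-Lambert relation (absorbance proportional to concentration). All numbers below are hypothetical standards, not the study's measurements.

```python
# Hypothetical calibration standards for the reactive dye -- illustrative only.
conc = [0.0, 5.0, 10.0, 20.0, 40.0]       # dye concentration, mg/L
absorb = [0.00, 0.11, 0.22, 0.44, 0.88]   # absorbance at the dye's lambda_max

# Least-squares slope through the origin (Beer-Lambert: A = k * C)
k = sum(a * c for a, c in zip(absorb, conc)) / sum(c * c for c in conc)

def conc_from_absorbance(a):
    """Estimate the dye concentration of a wash-water sample from its absorbance."""
    return a / k

sample_conc = conc_from_absorbance(0.33)   # estimated mg/L of dye in the main wash
```

Lower estimated concentrations in the grey water correspond to less color loss, which is the quantity the washing conditions were optimized against.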
Procedia PDF Downloads 357
2995 Effect of Electromagnetic Fields at 27 GHz on Sperm Quality of Mytilus galloprovincialis
Authors: Carmen Sica, Elena M. Scalisi, Sara Ignoto, Ludovica Palmeri, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo
Abstract:
Recently, a rise in the use of wireless internet technologies such as Wi-Fi and 5G routers/modems has been observed. These devices emit a considerable amount of electromagnetic radiation (EMR), which could interact with the male reproductive system through either thermal or non-thermal mechanisms. The aim of this study was to investigate the direct in vitro influence of 5G radiation on sperm quality in Mytilus galloprovincialis, considered an excellent model for reproduction studies. The experiments at 27 GHz were conducted using a non-commercial high-gain pyramidal horn antenna. To evaluate the specific absorption rate (SAR), a numerical simulation was performed. The resulting incident power density was significantly lower than the power density limit of 10 mW/cm² set by the international guidelines as a limit for non-thermal effects above 6 GHz. Regarding temperature measurements of the aqueous sample, an increase of 0.2°C compared to the control samples was recorded; this very small temperature increase could not interfere with the experiments. For the experiments, sperm samples taken from sexually mature males of Mytilus galloprovincialis were placed in artificial seawater (salinity 30 ± 1‰, pH 8.3) filtered with a 0.2 µm filter. After evaluating the number and quality of spermatozoa, sperm cells were exposed to electromagnetic fields at 27 GHz. The effect of exposure on sperm motility and quality was evaluated after 10, 20, 30 and 40 minutes with a light microscope, and the eosin test was used to verify the vitality of the gametes. All samples were run in triplicate, and statistical analysis was carried out using one-way analysis of variance (ANOVA) with Tukey's test for multiple comparisons of means to determine differences in sperm motility. A significant decrease (30%) in sperm motility was observed after 10 minutes of exposure, and after 30 minutes all spermatozoa were immotile and not vital.
Given the scarce literature data on this topic, these results could be useful for further studies concerning the wide diffusion of these new technologies.
Keywords: mussel, spermatozoa, sperm motility, millimeter waves
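The incident power density at the sample can be estimated with the standard far-field relation S = Pt·G/(4πr²). The transmit power, antenna gain and distance below are hypothetical, chosen only to illustrate a value well under the 10 mW/cm² guideline cited above.

```python
import math

def power_density_mw_cm2(p_watt, gain_dbi, r_m):
    """Far-field incident power density S = Pt*G / (4*pi*r^2), returned in mW/cm^2."""
    g = 10 ** (gain_dbi / 10)               # dBi -> linear gain
    s_w_m2 = p_watt * g / (4 * math.pi * r_m ** 2)
    return s_w_m2 * 0.1                     # 1 W/m^2 = 0.1 mW/cm^2

# Hypothetical setup: 100 mW transmit power, 20 dBi horn, sample 30 cm away
s = power_density_mw_cm2(0.1, 20.0, 0.3)
```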
Procedia PDF Downloads 166
2994 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to N-gram feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction.
These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally the unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
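A minimal sketch of the Smith-Waterman local alignment score at the core of this feature extraction. The scoring parameters (match/mismatch/gap) are illustrative choices, and here the alignment is applied to word tokens rather than protein residues.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # dynamic programming matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment never goes below zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Word-level similarity between two short clinical-style snippets (hypothetical)
score = smith_waterman("the patient is obese".split(), "obese patient record".split())
```

Documents of the same class would be expected to yield higher local alignment scores against each other, which is what makes the score usable as a similarity feature alongside TF-IDF.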
Procedia PDF Downloads 296
2993 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines
Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder
Abstract:
One of the main aims of current social robotic research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions, including facial expression, speech, gesture or text, and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that extends into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Embedding these emotional capabilities in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user's current mood and personality traits and adapting their behavior in the most appropriate manner. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, and its performance is contrasted with other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experimental results show that the detection accuracy of the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations.
The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.
Keywords: affective computing, emotion recognition, humanoid robot, human-robot interaction (HRI), social robots
Procedia PDF Downloads 234
2992 Aerodynamic Study of an Open Window Moving Bus with Passengers
Authors: Pawan Kumar Pant, Bhanu Gupta, S. R. Kale, S. V. Veeravalli
Abstract:
In many countries, buses are the principal means of transport, and a majority are naturally ventilated through open windows. The design of this ventilation has little scientific basis, and to address this problem a study has been undertaken involving both experiments and numerical simulations. The flow pattern inside and around an open-window bus with passengers has been investigated in detail. A full-scale three-dimensional numerical simulation has been used for (a) a bus with closed windows and (b) a bus with open windows. In both simulations, the bus carried 58 seated passengers. The bus dimensions were 2500 mm wide × 2500 mm high (exterior) × 10500 mm long, and its speed was set at 40 km/h. In both cases, the flow separates at the top front edge, forming a vortex, and reattaches close to mid-length. This attached flow separates once more as it leaves the bus. However, the strength and shape of the vortices at the top front and in the wake region differ between the two cases, as does the streamline pattern around the bus. For the bus with open windows, the dominant airflow inside the bus is from the rear to the front, and the air velocity at face level of the passengers was found to be one tenth of the free-stream velocity. These findings are in good agreement with flow visualization experiments performed in a water channel at 10 m/s, and with smoke/tuft visualizations in a wind tunnel with a free-stream velocity of approximately 40 km/h on a 1:25 scaled Perspex model.
Keywords: air flow, moving bus, open windows, vortex, wind tunnel
Procedia PDF Downloads 231
2991 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes
Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang
Abstract:
The mixing process of two liquid layers in a cylindrical container involves the denser upper liquid rushing into the lighter lower liquid and the lower liquid rising into the upper one, while the two layers interact with each other: forming vortices, spreading or dispersing into each other, and entraining or mixing with each other. It is a complex, rapidly evolving process constituted of flow instability, turbulent mixing and other multiscale physical phenomena. In order to explore the mechanism of the process and make further investigations, experiments on the interfacial instability and mixing behavior of two liquid layers bounded in different volumes were carried out, applying planar laser-induced fluorescence (PLIF) and high-speed camera (HSC) techniques. According to the results, the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than RTI play key roles in the mixing of the two liquid layers. The results show that the velocity at which the upper liquid invades the lower liquid does not depend on the upper liquid's volume (height). Compared with the cases where the upper and lower containers are of identical diameter, when the lower liquid volume increases to a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process.
In the experiments on miscible liquid layers, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid's volume, and when the lower liquid volume increases to a larger geometric space, the action of the bounding wall on the falling and rising flow decreases and the interfacial mixing effects also attenuate. It is therefore concluded that the weight of the upper, heavier liquid is not the reason for the fast evolution of the interfacial instability between the two liquid layers, and that the action of the bounding wall is limited to the unstable and mixing flow. Numerical simulations of the immiscible liquid layers' interfacial instability using the VOF method show a typical flow pattern that agrees with the experiments; however, the calculated instability development is much slower than the experimental measurement. A numerical simulation of the miscible liquids' mixing, which applies Fick's diffusion law to the components' transport equation, shows a much faster mixing rate than the experiments at the liquids' interface in the initial stage. It can be presumed that interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
Keywords: interfacial instability and mixing, two liquid layers, planar laser-induced fluorescence (PLIF), high-speed camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations
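The miscible-interface simulation mentioned above applies Fick's diffusion law to the component transport equation. A minimal 1D explicit finite-difference sketch, with illustrative parameters rather than the study's geometry or diffusivity, is:

```python
# Explicit FTCS solution of Fick's second law, dc/dt = D * d2c/dx2, for an
# initially sharp interface between two miscible liquids (illustrative values).
D, dx, dt, nx, steps = 1e-9, 1e-4, 2.0, 50, 500   # SI units
r = D * dt / dx ** 2                               # stability requires r <= 0.5

# Heavy-liquid mass fraction: 1 in the top half, 0 in the bottom half.
c = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]

for _ in range(steps):
    new = c[:]                     # end values held fixed (Dirichlet boundaries)
    for i in range(1, nx - 1):
        new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
    c = new
# c now shows the sharp step smeared into a smooth diffusion profile.
```

Since pure Fickian diffusion smears the interface much faster than observed here in the initial stage, terms accounting for interfacial tension (as in the Cahn-Hilliard Navier-Stokes formulation named in the keywords) would be the natural refinement.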
Procedia PDF Downloads 248
2990 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel
Authors: Nixon Kuruvila, H. V. Ravindra
Abstract:
Wire-cut electrical discharge machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with the machining parameters was mathematically modeled by regression analysis. The objective functions are dimensional accuracy (DA) and material removal rate (MRR). Experiments were designed as per Taguchi's L16 orthogonal array (OA), wherein pulse-on duration, pulse-off duration, current, bed speed and flushing rate were considered the important input parameters. The matrix experiments were conducted on oil hardened non-shrinking steel (OHNS) of 40 mm thickness. The results of the study reveal that, among the machining parameters, a lower pulse-off duration is preferable for achieving good overall performance. Regarding MRR, OHNS should be eroded with a medium pulse-off duration and a higher flushing rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for the optimization of process parameters in this study.
Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)
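In Taguchi analysis, each response is typically converted to a signal-to-noise (S/N) ratio before optimum factor levels are picked from the orthogonal array. A sketch with hypothetical replicate values follows; the "larger-the-better" form suits MRR and the "smaller-the-better" form suits dimensional error, which is one common way to treat these two objectives.

```python
import math

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for responses to maximize, e.g. MRR."""
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for responses to minimize, e.g. dimensional error."""
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

# Hypothetical replicate measurements for one L16 trial (not the study's data)
mrr = [8.2, 8.5, 8.1]            # mm^3/min
dim_error = [0.012, 0.015, 0.011]  # mm
sn_mrr = sn_larger_is_better(mrr)
sn_da = sn_smaller_is_better(dim_error)
```

Averaging these S/N values per factor level across the 16 trials gives the response table from which the optimum pulse-on, pulse-off, current, bed speed and flushing rate levels are read off.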
Procedia PDF Downloads 408
2989 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (sweet spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the sweet spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work began with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct way to identify when the hammer does not fire and drilling occurs without percussion, with the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody, which allows the drilling of shallow holes, a few centimetres deep, by successive discrete impacts from a piston.
After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are reset to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 mm outside diameter and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.
Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
Procedia PDF Downloads 87
2988 Failure Mechanisms in Zirconium Alloys during Wear and Corrosion
Authors: Bharat Kumar, Deepak Kumar, Vijay Chaudhry
Abstract:
Zirconium alloys are used as core components of nuclear reactors due to their high wear resistance, good corrosion properties, and good mechanical stability at high temperatures. Water flowing inside the pressure tubes past the fuel claddings produces vibration of these core components, resulting in the wear of some of them, while other components are exposed to coolant water containing LiOH, which results in their corrosion. The present work simulates some of these conditions to determine the failure mechanisms involved and the effect of various parameters on them. Friction and wear experiments were performed varying the surrounding environment (room temperature, high temperature, and water-submerged), duration, frequency, and displacement amplitude. Electrochemical corrosion experiments were performed varying the concentration of LiOH in water. The worn and corroded surfaces were analyzed using scanning electron microscopy (SEM) to determine the wear and corrosion mechanisms, and energy dispersive X-ray spectroscopy (EDS) and Raman spectroscopy to characterize the tribo-oxide layer formed during wear and the oxide layer formed during corrosion. Wear increases with frequency and amplitude, and corrosion increases with LiOH concentration in water.
Keywords: zirconium alloys, wear, oxide layer, corrosion, EIS, linear polarization
Procedia PDF Downloads 66
2987 Hazardous Waste Management at Chemistry Section in Dubai Police Forensic Lab
Authors: Adnan Lanjawi
Abstract:
This paper investigates the management of hazardous waste in the chemistry section of the Dubai Police forensic laboratory. Chemicals are the main contributor to the accumulation of hazardous waste in the section, as they are required for analyses such as those of explosives, drugs, inorganic substances and fire debris. This has negative effects on the environment and on employees' health and safety. The research examines the quantity of chemicals held, their labelling, the storage room and the equipment used. The aim is to reduce the need for disposal by considering alternative options such as elimination, substitution and recycling. Data were collected by interviewing the section's top managers, who have been working in the lab for more than 20 years, and by observing how employees carry out experiments. A survey was also conducted to assess employees' knowledge of hazardous waste. The management of hazardous chemicals in the chemistry section needs to be improved. The main findings show that about 110 bottles of reference substances were due to be disposed of in 2014. These bottles had been bought for about 100,000 UAE Dirhams (£17,600), which indicates that the purchasing of substances is not well organised. There is no categorisation programme in place, which makes waste control very difficult. In addition, the findings show that chemicals are segregated in alphabetical order, whereas the efficient way is to separate them according to their nature and properties. The research also suggests technologies and experimental procedures that reduce the need for solvents and chemicals in sample preparation.
Keywords: control, hazard, laboratories, waste
Procedia PDF Downloads 408
2986 Fuel Oxidation Reactions: Pathways and Reactive Intermediates Characterization via Synchrotron Photoionization Mass Spectrometry
Authors: Giovanni Meloni
Abstract:
Recent results are presented from experiments carried out at the Advanced Light Source (ALS) at the Chemical Dynamics Beamline of Lawrence Berkeley National Laboratory using multiplexed synchrotron photoionization mass spectrometry. The reaction mixture and a buffer gas (He) are introduced through individually calibrated mass flow controllers into a quartz slow-flow reactor held at constant pressure and temperature. The gaseous mixture effuses through a 650 μm pinhole into a 1.5 mm skimmer, forming a molecular beam that enters a differentially pumped ionizing chamber. The molecular beam is orthogonally intersected by tunable synchrotron radiation produced by the ALS in the 8-11 eV energy range. Resultant ions are accelerated, collimated, and focused into an orthogonal time-of-flight mass spectrometer. Reaction species are identified by their mass-to-charge ratios and photoionization (PI) spectra. Comparison of experimental PI spectra with literature and/or simulated curves is routinely done to confirm the identity of a given species. With the aid of electronic structure calculations, potential energy surface scans are performed, and Franck-Condon spectral simulations are obtained. Examples of these experiments are discussed, ranging from the characterization of new intermediates to the elucidation of reaction mechanisms and the identification of biofuel oxidation pathways.
Keywords: mass spectrometry, reaction intermediates, synchrotron photoionization, oxidation reactions
Procedia PDF Downloads 71
2985 Causes Analysis of Vacuum Consolidation Failure to Soft Foundation Filled by Newly Dredged Mud
Authors: Bao Shu-Feng, Lou Yan, Dong Zhi-Liang, Mo Hai-Hong, Chen Ping-Shan
Abstract:
For soft foundations filled with newly dredged mud and improved by the Vacuum Preloading Technology (VPT), the soil strength increased only slightly, the effective improvement depth was small, and the ground bearing capacity remained low. To analyse the causes in depth, several comparative single-well model experiments of VPT were conducted in the laboratory. It was concluded that: (1) the serious clogging and poor drainage performance of the vertical drains were mainly caused by the high content of fine soil particles and strongly hydrophilic minerals in the dredged mud, by too fast a loading rate at the early stage of vacuum preloading (namely, rapidly reaching -80 kPa), and by the too-small characteristic opening size of the filter of the existing vertical drains; (2) the drainage efficiency of the drainage system was further reduced, which in turn weakened the vacuum pressure in the soil and the soil improvement effect, by the large partial and friction losses of vacuum pressure caused by the large curvature of the vertical drains and the large transfer resistance of vacuum pressure in the horizontal drains.
Keywords: newly dredged mud, single-well model experiments of vacuum preloading technology, poor drainage performance of vertical drains, poor soil improvement effect, causes analysis
Procedia PDF Downloads 285
2984 Experimental Investigation and Analysis of Wear Parameters on Al/Sic/Gr: Metal Matrix Hybrid Composite by Taguchi Method
Authors: Rachit Marwaha, Rahul Dev Gupta, Vivek Jain, Krishan Kant Sharma
Abstract:
Metal matrix hybrid composites (MMHCs) are now gaining usage in the aerospace, automotive and other industries because of their inherent properties such as high strength-to-weight ratio, hardness and wear resistance, good creep behaviour, light weight, design flexibility and low wear rate. An Al alloy base matrix reinforced with silicon carbide (10%) and graphite (5%) particles was fabricated by the stir casting process. The wear and frictional properties of the metal matrix hybrid composites were studied by performing dry sliding wear tests using a pin-on-disc wear test apparatus. Experiments were conducted based on the plan of experiments generated through Taguchi's technique, with an L9 orthogonal array selected for the analysis of data. The influence of applied load, sliding speed and track diameter on wear rate, as well as on the coefficient of friction during the wearing process, was investigated using ANOVA. The smaller-the-better characteristic was chosen as the objective of the model to analyse dry sliding wear resistance. Results show that track diameter has the highest influence, followed by load and sliding speed.
Keywords: Taguchi method, orthogonal array, ANOVA, metal matrix hybrid composites
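The smaller-the-better signal-to-noise ratio used in such Taguchi analyses is SN = -10·log10(mean(y²)) per run. A minimal sketch of the computation, using illustrative wear measurements (not the paper's data) for a hypothetical L9 plan with three repeats per run:

```python
import numpy as np

# Hypothetical wear-rate measurements: 9 L9 runs x 3 repeats
# (illustrative numbers only, not the paper's data).
wear = np.array([
    [2.1, 2.3, 2.2], [3.0, 2.9, 3.1], [4.2, 4.0, 4.1],
    [2.8, 2.7, 2.9], [3.5, 3.6, 3.4], [2.5, 2.4, 2.6],
    [4.8, 4.9, 4.7], [3.2, 3.3, 3.1], [2.0, 2.1, 1.9],
])

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for the smaller-the-better characteristic."""
    return -10.0 * np.log10(np.mean(y ** 2, axis=-1))

sn = sn_smaller_is_better(wear)          # one S/N value per run
best_run = int(np.argmax(sn))            # highest S/N = least wear
```

Factor effects are then obtained by averaging `sn` over the rows where each factor level appears, which is how the ranking of track diameter, load and sliding speed is produced.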
Procedia PDF Downloads 329
2983 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies
Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi
Abstract:
There is a need for a PET tracer that enables the diagnosis of alpha-synucleinopathies (Parkinson's disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) and the tracking of their progression in living subjects over time. Alpha-synuclein aggregates (a-syn), which are present at all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed some promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors are synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments such as log D, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields reached 30%-72% in some cases. Molecular docking revealed possible binding sites in a-syn and the free energy of binding to those sites (-28.9 to -66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (> 100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values range from 2.34 to 2.88, which correlated with a free protein fraction of 0.28%-0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6 %ID/g - 7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g were obtained at 120 min p.i.). Analyses of mouse brains 20 min p.i. revealed almost no radiometabolites in the brain in most cases.
It can be concluded that the in silico study presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.
Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development
Procedia PDF Downloads 234
2982 Effects of Initial Moisture Content on the Physical and Mechanical Properties of Norway Spruce Briquettes
Authors: Miloš Matúš, Peter Križan, Ľubomír Šooš, Juraj Beniak
Abstract:
The moisture content of densified biomass is a limiting parameter influencing the quality of this solid biofuel. It influences its calorific value, density, mechanical strength and dimensional stability, as well as affecting its production process. This paper deals with experimental research into the effect of the moisture content of the densified material on the final quality of biofuel in the form of logs (briquettes or pellets). Experiments based on the single-axis densification of spruce sawdust were carried out with a hydraulic piston press (piston and die), where the densified logs were produced at room temperature. The effect of moisture content on the qualitative properties of the logs, including density, change of moisture, expansion and physical changes, and compressive and impact resistance, was studied. The results show the moisture range required for producing good-quality logs. The experiments were evaluated and the moisture content of the tested material was optimized to achieve the best quality of the solid biofuel. The dense logs also have a high energy content per unit volume. The research results could be used to develop and optimize industrial technologies and machinery for biomass densification to achieve high-quality solid biofuel.
Keywords: biomass, briquettes, densification, fuel quality, moisture content, density
Procedia PDF Downloads 427
2981 25 Years of the Neurolinguistic Approach: Origin, Outcomes, Expansion and Current Experiments
Authors: Steeve Mercier, Joan Netten, Olivier Massé
Abstract:
The traditional lack of success of most Canadian students in the regular French program in attaining the ability to communicate spontaneously led to the conceptualization of a modified program. This program, called Intensive French, introduced and evaluated as an experiment in several school districts, formed the basis for the creation of a more effective approach for the development of skills in a second/foreign language and literacy: the Neurolinguistic Approach (NLA). The NLA expresses a major change in the understanding of how communication skills are developed: learning to communicate spontaneously in a second language depends on the reuse of structures in a variety of cognitive situations to express authentic messages, rather than on knowledge of the way a language functions. Put differently, it prioritises the acquisition of implicit competence over the learning of grammatical knowledge. This is achieved by the adoption of a literacy-based approach and an increase in the intensity of instruction. Besides having strong empirical support from numerous experiments, the NLA has a sound theoretical foundation, as it conforms to research in neurolinguistics. The five pedagogical principles that define the approach will be explained, as well as the differences between the NLA and the paradigm on which most current resources and teaching strategies are based. It is now 25 years since the original research occurred, and the use of the NLA, as will be shown, has expanded widely. With some adaptations, it is used for other languages and in other milieus. In Canada, classes are offered in Mandarin, Ukrainian, Spanish and Arabic, amongst others. It has also been used in several indigenous communities, for instance to restore the use of Mohawk, Cree and Dene. Its use has expanded throughout the world, as in China, Japan, France, Germany, Belgium, Poland and Russia, as well as Mexico.
The Intensive French program originally focussed on students in grades 5 or 6 (ages 10-12); nowadays, programs based on the approach also include adults, particularly immigrants entering new countries. With the increasing interest in inclusion and cultural diversity, there is a demand for language learning amongst pre-school and primary children that can be successfully addressed by the NLA. Other current experiments target trilingual schools and work with the Inuit communities of Nunavik in the province of Quebec.
Keywords: neuroeducation, neurolinguistic approach, literacy, second language acquisition, plurilingualism, foreign language teaching and learning
Procedia PDF Downloads 71
2980 Contribution to the Development of a New Design of Dentist's Gowns: A Case Study of Using Infra-Red Technology and Pressure Sensors
Authors: Tran Thi Anh Dao, M. Arnold, L. Schacher, D. C. Adolphe, G. Reys
Abstract:
During tooth extraction or implant surgery, dentists are in contact with numerous infectious germs from patients' saliva and blood. For that reason, dentists' gowns have to play their role of protection from contamination. In addition, dentists' apparel should be not only protective but also comfortable and breathable, because dentists perform many operations and treatments on patients throughout the day with high concentration and intensity. However, this type of protective garment has not been studied scientifically, even though dentists are facing new risks and are eager for comfortable personal protective equipment. For that reason, we have proposed new designs of dentist's gown, intended to diminish the heat accumulation that is considered an important factor in reducing the level of comfort experienced by users. Experiments using infra-red technology were carried out to compare the breathability of a traditional gown and a new design with open zones. Another experiment using pressure sensors was carried out to study ergonomic aspects through the flexibility of movement of the sleeves. The sleeve design considered most comfortable and flexible will be chosen for the next step. The results from the two experiments provide valuable information for the development of a new design of dentists' gowns that achieves maximum levels of cooling and comfort for the human body.
Keywords: garment, dentists, comfort, design, protection, thermal
Procedia PDF Downloads 218
2979 A Study of Two Disease Models: With and Without Incubation Period
Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle
Abstract:
The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models were studied: one with an incubation period and another without. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states of both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically, and a computer program in MATLAB was developed to run the numerical experiments. From the results, we are able to show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of both models are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall infected population for the model with an incubation period is higher than that for the model without one. We also established that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
Keywords: asymptotic stability, Hartman-Grobman stability criterion, incubation period, Routh-Hurwitz criterion, Runge-Kutta method
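The numerical scheme described above can be sketched as follows. The classical fourth-order Runge-Kutta step is exact to the method, while the SIR/SEIR right-hand sides and the rates β, σ, γ are illustrative stand-ins for the paper's model and its NCDC-fitted parameters:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

beta, sigma, gamma = 0.4, 0.2, 0.1   # illustrative rates, not NCDC values

def sir(t, y):                        # model without incubation period
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def seir(t, y):                       # model with incubation (latent class E)
    S, E, I, R = y
    return np.array([-beta * S * I, beta * S * I - sigma * E,
                     sigma * E - gamma * I, gamma * I])

h, steps = 0.1, 1000
y_sir = np.array([0.99, 0.01, 0.0])
y_seir = np.array([0.99, 0.0, 0.01, 0.0])
for n in range(steps):
    y_sir = rk4_step(sir, n * h, y_sir, h)
    y_seir = rk4_step(seir, n * h, y_seir, h)
```

Since each right-hand side sums to zero, the total population is conserved exactly by the scheme, which is a useful sanity check on any implementation.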
Procedia PDF Downloads 173
2978 Modeling of Surface Roughness in Hard Turning of DIN 1.2210 Cold Work Tool Steel with Ceramic Tools
Authors: Mehmet Erdi Korkmaz, Mustafa Günay
Abstract:
Nowadays, grinding is frequently replaced with hard turning to reduce set-up time and achieve higher accuracy. This paper focuses on the mathematical modeling of average surface roughness (Ra) in the hard turning of AISI L2 grade (DIN 1.2210) cold work tool steel with ceramic tools. The steel was hardened to 60±1 HRC after the heat treatment process. Cutting speed, feed rate, depth of cut and tool nose radius were chosen as the cutting conditions. Uncoated ceramic cutting tools were used in the machining experiments, which were performed on a CNC lathe according to a Taguchi L27 orthogonal array. Ra values were calculated by averaging three roughness values obtained from three different points of the machined surface. The influences of the cutting conditions on surface roughness were evaluated both statistically and experimentally. The analysis of variance (ANOVA) with a 95% confidence level was applied for the statistical analysis of the experimental results. Finally, mathematical models were developed using artificial neural networks (ANN). The ANOVA results show that feed rate is the dominant factor affecting surface roughness, followed by tool nose radius and cutting speed.
Keywords: ANN, hard turning, DIN 1.2210, surface roughness, Taguchi method
Procedia PDF Downloads 371
2977 Diversity Indices as a Tool for Evaluating Quality of Water Ways
Authors: Khadra Ahmed, Khaled Kheireldin
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of the signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as to blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
Keywords: planktons, diversity indices, water quality index, water ways
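The CSLBP operator mentioned above can be sketched concretely (a minimal illustration, not the authors' implementation): each interior pixel is encoded by thresholding the differences of its four center-symmetric neighbour pairs, yielding a 4-bit code, and on a flat image every code is zero, which is the robustness property noted in the text.

```python
import numpy as np

def cslbp(img, threshold=0.01):
    """Center-Symmetric LBP: for each interior pixel, compare the 4
    opposite (center-symmetric) neighbour pairs, producing a 4-bit code."""
    img = np.asarray(img, dtype=np.float64)
    # 8-neighbourhood offsets ordered so offs[i] and offs[i + 4] are opposite
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit in range(4):
        (dy1, dx1), (dy2, dx2) = offs[bit], offs[bit + 4]
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((a - b) > threshold).astype(np.uint8) << bit
    return codes
```

The descriptor is then formed by histogramming these codes (only 16 bins, versus 256 for plain LBP) over local regions and concatenating the histograms with the phase-congruency features.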
Procedia PDF Downloads 516
2976 Determination of Forced Convection Heat Transfer Performance in Lattice Geometric Heat Sinks
Authors: Bayram Sahin, Baris Gezdirici, Murat Ceylan, Ibrahim Ates
Abstract:
In this experimental study, the effects of heat transfer and flow characteristics on lattice geometric heat sinks, where high rates of heat removal are required, were investigated. The design parameters were the Reynolds number, the height of the heat sink (H), and the horizontal (Sy) and vertical (Sx) distances between heat sinks. In the experiments, the Reynolds number ranged from 4000 to 20000; the heat sink heights (H) were 20 mm and 40 mm; the distances (Sy) between the heat sinks in the flow direction were 45 mm, 32 mm and 23.3 mm; and the distances (Sx) between the heat sinks perpendicular to the flow direction were 23.3 mm, 12.5 mm and 6 mm. A total of 90 experiments were conducted, targeting the maximum Nusselt number and minimum friction coefficient. The experimental results show that heat sinks in lattice geometry have a significant effect on heat transfer enhancement. Under the different experimental conditions, the highest increase in Nusselt number was 283% while the lowest was 66%, compared with the straight-channel results. The lowest increase in the friction factor was 173%, also relative to the straight channel. The increase in heat sink height and flow velocity increased the level of turbulence in the channel, leading to higher Nusselt number and friction factor values.
Keywords: forced convection, heat transfer enhancement, lattice geometric heat sinks, pressure drop
Procedia PDF Downloads 188
2975 Reduction Conditions of Briquetted Solid Wastes Generated by the Integrated Iron and Steel Plant
Authors: Gökhan Polat, Dicle Kocaoğlu Yılmazer, Muhlis Nezihi Sarıdede
Abstract:
Iron oxides are the main input for producing iron in integrated iron and steel plants. During the production of iron from iron oxides, some wastes with high iron content occur. These main wastes can be classified as basic oxygen furnace (BOF) sludge, flue dust and rolling scale. Recycling these wastes is of great importance both for its environmental benefits and for the reduction of production costs. In this study, recycling experiments were performed on basic oxygen furnace sludge, flue dust and rolling scale, which contain 53.8%, 54.3% and 70.2% iron, respectively. These wastes were mixed together with coke as a reducer, and the mixtures were pressed into cylindrical briquettes under compacting forces ranging from 1 ton to 6 tons. Both the stoichiometric amount of coke and twice that amount were used, to investigate the effect of coke quantity on the reduction properties of the waste mixtures. The briquettes were then reduced at 1000°C and 1100°C for 30, 60, 90, 120 and 150 min in a muffle furnace. From the results of the reduction experiments, the effects of compacting force, temperature and time on the reduction ratio of the wastes were determined. It was found that a compacting force of 1 ton, a reduction time of 150 min and a temperature of 1100°C are the optimum conditions to obtain a reduction ratio higher than 75%.
Keywords: coke, iron oxide wastes, recycling, reduction
Procedia PDF Downloads 339
2974 Moderate Electric Field and Ultrasound as Alternative Technologies to Raspberry Juice Pasteurization Process
Authors: Cibele F. Oliveira, Debora P. Jaeschke, Rodrigo R. Laurino, Amanda R. Andrade, Ligia D. F. Marczak
Abstract:
Raspberry is well known as a good source of phenolic compounds, mainly anthocyanins. Some studies have pointed out the importance of consuming these bioactive compounds, which is related to a decrease in the risk of cancer and cardiovascular diseases. The most consumed raspberry products are juices, yogurts, ice creams and jellies and, to ensure the safety of these products, raspberry is commonly pasteurized for enzyme and microorganism inactivation. Despite being efficient, the pasteurization process can lead to degradation reactions of the bioactive compounds, decreasing the products' health benefits. Therefore, the aim of the present work was to evaluate the application of moderate electric field (MEF) and ultrasound (US) technologies to the pasteurization of raspberry juice and to compare the results with the conventional pasteurization process. For this, the phenolic compound and anthocyanin contents and the physical-chemical parameters (pH, color changes, titratable acidity) of the juice were evaluated before and after the treatments. Moreover, microbiological analyses of aerobic mesophilic microorganisms, molds and yeasts were performed on the samples before and after the treatments to verify the potential of these technologies to inactivate microorganisms. All the pasteurization processes were performed in triplicate for 10 min, using a cylindrical Pyrex® vessel with a water jacket. The conventional pasteurization was performed at 90 °C using a hot water bath connected to the extraction cell. The US-assisted pasteurization was performed using 423 and 508 W cm-2 (75 and 90% of ultrasound intensity). It is important to mention that during US application the temperature was kept below 35 °C; for this, the water jacket of the extraction cell was connected to a water bath with cold water. The MEF-assisted pasteurization experiments were performed similarly to the US experiments, using 25 and 50 V.
Control experiments were performed at the maximum temperature of the US and MEF experiments (35 °C) to evaluate only the effect of the aforementioned technologies on the pasteurization. The results showed that the phenolic compound concentration in the juice was not affected by US and MEF application. However, the US-assisted pasteurization performed at the highest intensity decreased the anthocyanin content by 33% (compared to the in natura juice). This result was possibly due to the cavitation phenomenon, which can lead to the formation and accumulation of free radicals in the medium; these radicals can react with anthocyanins, decreasing the content of these antioxidant compounds in the juice. The physical-chemical parameters did not present statistical differences between samples before and after the treatments. The microbiological analyses showed that all the pasteurization treatments decreased the microorganism content by two logarithmic cycles. However, as the initial values were lower than 1000 CFU mL-1, it was not possible to verify the efficacy of each treatment. Thus, MEF and US were considered potential alternative technologies for the pasteurization process, since under the right conditions their application decreased the microorganism content of the juice and did not affect the phenolic and anthocyanin contents or the physical-chemical parameters. However, more studies are needed regarding the influence of MEF and US processes on microorganism inactivation.
Keywords: MEF, microorganism inactivation, anthocyanin, phenolic compounds
Procedia PDF Downloads 240
2973 MLProxy: SLA-Aware Reverse Proxy for Machine Learning Inference Serving on Serverless Computing Platforms
Authors: Nima Mahmoudi, Hamzeh Khazaei
Abstract:
Serving machine learning inference workloads on the cloud is still a challenging task at the production level. The optimal configuration of the inference workload to meet SLA requirements while optimizing the infrastructure costs is highly complicated due to the complex interaction between batch configuration, resource configurations, and the variable arrival process. Serverless computing has emerged in recent years to automate most infrastructure management tasks. Workload batching has revealed the potential to improve the response time and cost-effectiveness of machine learning serving workloads. However, it has not yet been supported out of the box by serverless computing platforms. Our experiments have shown that for various machine learning workloads, batching can hugely improve the system's efficiency by reducing the processing overhead per request. In this work, we present MLProxy, an adaptive reverse proxy to support efficient machine learning serving workloads on serverless computing systems. MLProxy supports adaptive batching to ensure SLA compliance while optimizing serverless costs. We performed rigorous experiments on Knative to demonstrate the effectiveness of MLProxy. We showed that MLProxy could reduce the cost of serverless deployment by up to 92% while reducing SLA violations by up to 99%, results that can be generalized across state-of-the-art model serving frameworks.
Keywords: serverless computing, machine learning, inference serving, Knative, Google Cloud Run, optimization
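The adaptive-batching trade-off described above can be illustrated with a toy dispatcher: requests accumulate until either the batch is full or waiting any longer would risk violating the SLA of the oldest queued request. This is a hypothetical sketch under simplified assumptions (fixed per-batch inference time, single model), not MLProxy's actual code:

```python
import time
from collections import deque

class AdaptiveBatcher:
    """Toy SLA-aware batcher: dispatch when the batch is full, or when
    deferring further would break the SLA for the oldest queued request."""

    def __init__(self, max_batch, sla_s, infer_s):
        self.max_batch = max_batch
        self.sla_s = sla_s      # response-time SLA per request (seconds)
        self.infer_s = infer_s  # assumed per-batch inference time (seconds)
        self.queue = deque()    # (arrival_time, request) pairs

    def submit(self, request, now=None):
        arrival = now if now is not None else time.monotonic()
        self.queue.append((arrival, request))

    def ready(self, now=None):
        if not self.queue:
            return False
        now = now if now is not None else time.monotonic()
        oldest_wait = now - self.queue[0][0]
        return (len(self.queue) >= self.max_batch
                or oldest_wait + self.infer_s >= self.sla_s)

    def drain(self):
        batch = [req for _, req in self.queue]
        self.queue.clear()
        return batch
```

A caller polls `ready()` in its event loop and forwards `drain()` as one batched inference call; larger batches amortize the per-request overhead, while the SLA check bounds the added queueing delay.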
Procedia PDF Downloads 178
2972 Evaluation of Botanical Plant Powders against Zabrotes subfasciatus (Boheman) (Coleoptera: Bruchidae) in Stored Local Common Bean Varieties
Authors: Fikadu Kifle Hailegeorgis
Abstract:
Common bean is one of the most important sources of protein in Ethiopia and other developing countries. However, the Mexican bean weevil, Zabrotes subfasciatus (Boheman), is a major cause of losses in stored common beans. Studies were conducted at the Melkassa Agricultural Research Center to evaluate the efficacy of botanical powders of Jatropha curcas (L.), neem (Azadirachta indica) and Parthenium hysterophorus (L.) on local common bean varieties against Z. subfasciatus. Twenty local common bean varieties were evaluated twice against Z. subfasciatus in a completely randomized design with three replications, at a rate of 0.2 g/250 g of seed in each experiment. Malathion-treated and untreated seeds were used as standard checks. The results indicated that RAZ White and Round Yellow were highly resistant varieties in both experiments, while Batu and Black were highly susceptible. Jatropha seed powder was the most effective against Z. subfasciatus; Parthenium seed powder and neem leaf powder also gave promising results. Common beans treated with botanicals had a significantly (p<0.05) higher germination percentage than untreated seed. In general, the results indicate that using resistant bean varieties (RAZ White and Round Yellow) together with Jatropha seed powder gave the best control of Z. subfasciatus.
Keywords: botanicals, malathion, resistant varieties, Z. subfasciatus
Procedia PDF Downloads 57