Search results for: maximum input
4939 Improving the Growth Performance of Beetal Goat Kids Weaned at Various Stages with Various Levels of Dietary Protein in Starter Ration under High Input Feeding System
Authors: Ishaq Kashif, Muhammad Younas, Muhammad Riaz, Mubarak Ali
Abstract:
Poor feeding management during the pre-weaning period is one of the factors resulting in compromised growth of Beetal kids fattened for meat. The main reasons for this are the limited milk offered to kids and inadequate feeding management. This study was planned to find the most appropriate protein level suiting the age at weaning while shifting animals to a high input feeding system. A total of 42 Beetal male kids aged 30 (±10), 60 (±10) and 90 (±10) days were selected, with 16 in each age group, and designated G30, G60 and G90, respectively. The weights of the animals were 8±2 kg (G30), 12±2 kg (G60) and 16±2 kg (G90), respectively. All animals were weaned by introducing the total mixed feed gradually and withdrawing milk during an adjustment period of two weeks. A pelleted starter ration (total mixed feed) with three dietary protein levels, designated R1 (16% CP), R2 (20% CP) and R3 (26% CP), was introduced; the control group was reared on fodder (maize). The starter rations were iso-caloric and were offered for six weeks. Treatments were arranged as a two-factor factorial (3×3) plus control under a completely randomized design. Data were collected on average daily feed intake (ADFI), average daily gain (ADG), gain-to-intake ratio, Kleiber ratio (KR), body measurements and blood metabolites of the kids. The data were analysed using the aov function in R. The statistical analysis showed that starter feed protein level and age at weaning had a significant interaction for ADG (P < 0.001), KR (P < 0.001), ADFI (P < 0.05) and blood urea nitrogen (P < 0.05), while the interaction was non-significant for serum creatinine and feed conversion. Trend analysis revealed a significant quadratic interaction for ADG (P < 0.05) between protein level and age at weaning. Animals weaned at 30 or 60 days had better ADG on the R2 diet (46.8 g/day and 87.06 g/day, respectively), while animals weaned at 90 days had the best ADG (127 g/day) with R1. It is concluded that animals weaned at 30 or 60 days required 20% CP for better growth performance, while animals weaned at 90 days performed better with 16% CP.
Keywords: average daily gain, starter protein levels, weaning age, gain to intake ratio
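As a rough illustration of the analysis described above (the authors used R's aov function), the following Python sketch fits the same kind of two-factor model with statsmodels; the file name and the column names (adg, protein, wean_age) are hypothetical placeholders, not the study's actual data.

```python
# Illustrative two-factor (protein level x weaning age) ANOVA, analogous to R's aov().
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("beetal_growth.csv")           # hypothetical data file
model = smf.ols("adg ~ C(protein) * C(wean_age)", data=df).fit()
print(anova_lm(model, typ=2))                   # main effects and interaction table
```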
Procedia PDF Downloads 249
4938 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts among any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow value is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from the single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabouts capacity models generally lead also to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single input capacity and ultimately achieve the related performance indicators. Put simply; the main objective is to calculate the average delay of each single roundabout entrance to apply the most common Highway Capacity Manual, or HCM, criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and it has made a brief introduction to the model combination technique with some practical instances. The successive section is deserved to summarize the TRRL old rotary capacity model and the most recent HCM-7th modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, especially set-up and linked to the concept of roundabout total capacity, i.e., the value reached due to a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, by which is estimated its respective level of service. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general urban or rural, and its main geometrical patterns. Finally, conclusion remarks are drawn, and a discussion on some further research developments has opened.Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
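To make the link between entry capacity and level of service concrete, the sketch below evaluates one common form of the HCM control-delay equation for a roundabout entry; the coefficients and adjustments used in the paper's HCM-7th implementation may differ, so this is an illustrative approximation with hypothetical flow and capacity values.

```python
import math

def control_delay(v_entry, capacity, T=0.25):
    """Approximate HCM control delay (s/veh) for one roundabout entry.
    v_entry and capacity in veh/h; T is the analysis period in hours (0.25 = 15 min)."""
    x = v_entry / capacity                      # volume-to-capacity ratio
    return (3600.0 / capacity
            + 900.0 * T * ((x - 1.0)
                           + math.sqrt((x - 1.0) ** 2
                                       + (3600.0 / capacity) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))                # adjustment for yield control

print(round(control_delay(650, 900), 1))        # hypothetical entry flow and capacity
```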
Procedia PDF Downloads 86
4937 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study
Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod
Abstract:
Introduction Using electron beam is widespread in radiotherapy. The main criteria in radiation therapy is to irradiate the tumor volume with maximum prescribed dose and minimum dose to vital organs around it. Using abutting fields is common in radiotherapy. The main problem in using abutting fields is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution for this problem is using of a spoiler to broaden the penumbra and uniform dose in the junction region. The goal of this research was to compare titanium and aluminum effects as a spoiler for dose uniformity achievement in the junction region of oblique electron fields with Monte Carlo simulation. Dose uniformity in the junction region depends on density, scattering power, thickness of the spoiler and the angle between two fields. Materials and Methods In this study, Monte Carlo model of Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacture provided specifications. BEAMnrc and EGSnrc user code were used to simulate the treatment head in electron mode (simulation of beam model). The resulting phase space file was used as a source for dose calculations for 10×10 cm2 field size at SSD=100 cm in a 30×30×45 cm3 water phantom using DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, MEPHYSTO mc2 software platform and a Semi-Flex Chamber-31010 with sensitive volume of 0.125 cm3 (PTW, Freiburg, Germany) were used for dose distribution measurements. Moreover, the electron field size was 10×10 cm2 and SSD=100 cm. Validation of developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of electron beam model). Simulation of spoilers (using SLAB component module) placed at the end of the electron applicator, was done using previously validated phase space file for a 5 MeV nominal energy and 10×10 cm2 field size (simulation of spoiler). An in-house routine was developed in order to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of dose distribution in abutting electron fields). Results Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region in which the difference was 2%. The difference between calculated and measured profile was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium and chromium in dose uniformity achievement in abutting normal electron fields with equivalent thicknesses to 5mm PMMA was evaluated. Comparing R90 and uniformity index of different materials, aluminum was chosen as the optimum spoiler. Titanium has the maximum surface dose. Thus, aluminum and titanium had been chosen to use for dose uniformity achievement in oblique electron fields. Using the optimum beam spoiler, junction dose decreased from 160% to 110% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees and from 180% to 100% for 60 degrees oblique abutting fields. Using Titanium spoiler, junction dose decreased from 160% to 120% for 15 degrees, 180% to 120% for 30 degrees, 160% to 120% for 45 degrees and 180% to 110% for 60 degrees. 
In addition, the surface penumbra width for 15-degree fields increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees it increased from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm, and for 60 degrees from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of abutting electron fields. The dose at the junction of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45° and 60° decreased by about 40%, 60%, 40% and 70%, respectively, for titanium and by about 50%, 60%, 40% and 80% for aluminum. However, despite the significant decrease obtained with the titanium spoiler, the maximum dose in the junction region could not be reduced below 110%.
Keywords: abutting fields, electron beam, radiation therapy, spoilers
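The in-house routine for combining the two simulated fields is not specified beyond summing the individual dose distributions; the sketch below shows the basic idea on hypothetical one-dimensional lateral profiles (the profile shapes, grid and junction overlap are made up for illustration and do not reproduce the Monte Carlo results).

```python
import numpy as np

# Hypothetical lateral dose profiles (percent of prescribed dose) on a common grid.
x = np.linspace(-100, 100, 401)                       # lateral position, mm
field_a = 100 / (1 + np.exp((x - 0) / 4))             # idealised field edge at x = 0
field_b = 100 / (1 + np.exp((-x - 2) / 4))            # abutting field, slight overlap

combined = field_a + field_b                          # simple dose superposition
print(f"max junction dose: {combined.max():.1f} % at x = {x[combined.argmax()]:.1f} mm")
```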
Procedia PDF Downloads 176
4936 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network
Authors: Tzu-Yang Lin, Chuan-Ching Sue
Abstract:
Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) that concerns a single class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. Particularly, we can derive an analytical model to determine the optimal sleep time of ONUs in every cycle without violating the maximum class delay constraint. The bandwidth allocation considering such optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the different class delay constraints do not violate the maximum class delay constraint, packets with a more relaxed delay constraint will be treated as those with the most stringent delay constraint and may be sent early. This means that the ONU will waste energy in active mode to send packets in advance which did not need to be sent at the current time. Accordingly, we proposed a QoS-aware GBA using a novel intra-ONU scheduling to control the packets to be sent according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but with different packet delay constraints, we can modify the intra-ONU scheduling to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose the switchable ONU architecture in which the ONU can switch the architecture according to the sleep time length, thus improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes or with different delay constraints do not violate their respective delay constraints and consume less power than the original GBA.Keywords: Passive Optical Networks, PONs, Optical Network Unit, ONU, energy efficiency, delay constraint
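As a conceptual sketch of the proposed intra-ONU scheduling, the snippet below orders queued packets by their individual delay deadlines and sends only those that would otherwise violate their constraint before the next cycle; the packet fields, cycle length, grant window and queue contents are hypothetical and not taken from the paper.

```python
import heapq

CYCLE = 2e-3   # hypothetical polling cycle length, seconds

def select_packets(queue, now, window):
    """Pick packets whose deadlines fall before the end of the next cycle.
    queue: list of (deadline_s, packet_id, size_bytes); window: bytes granted this cycle."""
    heapq.heapify(queue)                     # earliest deadline first
    sent, used = [], 0
    while queue and queue[0][0] <= now + CYCLE and used + queue[0][2] <= window:
        deadline, pid, size = heapq.heappop(queue)
        sent.append(pid)
        used += size
    return sent                              # remaining packets wait, so the ONU can sleep longer

packets = [(0.0031, "p1", 800), (0.0005, "p2", 400), (0.0018, "p3", 1200)]
print(select_packets(packets, now=0.0, window=2000))   # -> ['p2', 'p3']
```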
Procedia PDF Downloads 284
4935 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
The empirical study examined the allocative efficiency of small holder tuber crop farmers in North central, Nigeria. Data used for the study were obtained from primary source using a multi-stage sampling technique with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis and Tobit regression model were used to analyze the data. The DEA result on the classification of the farmers into efficient and inefficient farmers showed that 17.67% of the sampled tuber crop farmers in the study area were operating at frontier and optimum level of production with mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve on their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as the year of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased in the study area, the allocative inefficiency of the farmers decreased. The results on effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmer acquired more farming experience. The allocative efficiency index of farmers that belonged to cooperative society was 0.23 while their counterparts without cooperative society had index value of 0.21. The result also showed that allocative efficiency increased from 0.43 as farmer acquired high formal education and decreased to 0.16 with farmers with non-formal education. The efficiency level in the allocation of resources increased with more contact with extension services as the allocative efficeincy index increased from 0.16 to 0.31 with frequency of extension contact increasing from zero contact to maximum of twenty contacts per annum. These results confirm that increase in year of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size leads to increases efficiency. The results further show that the age of the farmers had 32% input to the efficiency but reduces to an average of 15%, as the farmer grows old. It is therefore recommended that enhanced research, extension delivery and farm advisory services should be put in place for farmers who did not attain optimum frontier level to learn how to attain the remaining 74.39% level of allocative efficiency through a better production practices from the robustly efficient farms. This will go a long way to increase the efficiency level of the farmers in the study area.Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
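For readers unfamiliar with DEA, the sketch below solves the input-oriented envelopment linear program that underlies efficiency scores of the kind reported above (allocative efficiency is then obtained by combining such scores with input price data); the input/output matrices are tiny made-up examples, not the survey data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented constant-returns (CCR) efficiency of decision-making unit o.
    X: inputs (m x n), Y: outputs (s x n); columns are farms (DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_in = np.hstack([-X[:, [o]], X])                # sum(lam*x) <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])        # sum(lam*y) >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

X = np.array([[2.0, 3.0, 4.0], [1.0, 2.0, 1.5]])     # hypothetical inputs
Y = np.array([[5.0, 6.0, 7.0]])                       # hypothetical output
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```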
Procedia PDF Downloads 289
4934 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually training on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in the case of these low-resource scenarios, building an ASR model is considered as a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary tasks given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems by using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. A DNN with a one-hidden-layer network was initialized; increasing the number of hidden layers in training, to a five. A refinement, which consisted of the weight matrix plus bias term and a Stochastic Gradient Descent (SGD) training were also performed. The objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics, regarding the lattice concept (based on the graph cost, the acoustic cost and a combination of both), was performed as selection technique. The performance of the ASR system will be calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the DNN proposed system was also made under the same conditions. Results showed that the semi-supervised ASR-model based on DNNs outperformed the GMM-model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
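Since the system is scored with the Word Error Rate, a minimal reference implementation of WER (the usual Levenshtein edit distance over words) is sketched below; it is generic and not tied to the authors' toolkit or corpus.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

print(word_error_rate("el gato duerme en casa", "el gato duerme en la casa"))  # 0.2
```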
Procedia PDF Downloads 339
4933 Plasma-Assisted Decomposition of Cyclohexane in a Dielectric Barrier Discharge Reactor
Authors: Usman Dahiru, Faisal Saleem, Kui Zhang, Adam Harvey
Abstract:
Volatile organic compounds (VOCs) are atmospheric contaminants predominantly derived from petroleum spills, solvent usage, agricultural processes, automobile, and chemical processing industries, which can be detrimental to the environment and human health. Environmental problems such as the formation of photochemical smog, organic aerosols, and global warming are associated with VOC emissions. Research showed a clear relationship between VOC emissions and cancer. In recent years, stricter emission regulations, especially in industrialized countries, have been put in place around the world to restrict VOC emissions. Non-thermal plasmas (NTPs) are a promising technology for reducing VOC emissions by converting them into less toxic/environmentally friendly species. The dielectric barrier discharge (DBD) plasma is of interest due to its flexibility, moderate capital cost, and ease of operation under ambient conditions. In this study, a dielectric barrier discharge (DBD) reactor has been developed for the decomposition of cyclohexane (as a VOC model compound) using nitrogen, dry, and humidified air carrier gases. The effect of specific input energy (1.2-3.0 kJ/L), residence time (1.2-2.3 s) and concentration (220-520 ppm) were investigated. It was demonstrated that the removal efficiency of cyclohexane increased with increasing plasma power and residence time. The removal of cyclohexane decreased with increasing cyclohexane inlet concentration at fixed plasma power and residence time. The decomposition products included H₂, CO₂, H₂O, lower hydrocarbons (C₁-C₅) and solid residue. The highest removal efficiency (98.2%) was observed at specific input energy of 3.0 kJ/L and a residence time of 2.3 s in humidified air plasma. The effect of humidity was investigated to determine whether it could reduce the formation of solid residue in the DBD reactor. It was observed that the solid residue completely disappeared in humidified air plasma. Furthermore, the presence of OH radicals due to humidification not only increased the removal efficiency of cyclohexane but also improves product selectivity. This work demonstrates that cyclohexane can be converted to smaller molecules by a dielectric barrier discharge (DBD) non-thermal plasma reactor by varying plasma power (SIE), residence time, reactor configuration, and carrier gas.Keywords: cyclohexane, dielectric barrier discharge reactor, non-thermal plasma, removal efficiency
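The two headline quantities in this abstract, specific input energy and removal efficiency, follow from simple definitions; the helper below illustrates them with hypothetical plasma power, gas flow rate and outlet concentration chosen only to reproduce the reported 3.0 kJ/L and 98.2% figures.

```python
def specific_input_energy(power_w: float, flow_l_per_min: float) -> float:
    """SIE in kJ/L = discharge power divided by gas flow rate."""
    return power_w / (flow_l_per_min / 60.0) / 1000.0

def removal_efficiency(c_in_ppm: float, c_out_ppm: float) -> float:
    """Percentage of cyclohexane removed across the reactor."""
    return (c_in_ppm - c_out_ppm) / c_in_ppm * 100.0

print(round(specific_input_energy(power_w=25.0, flow_l_per_min=0.5), 2))  # 3.0 kJ/L
print(round(removal_efficiency(520.0, 9.4), 1))                           # 98.2 %
```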
Procedia PDF Downloads 136
4932 Influence of Sewage Sludge on Agricultural Land Quality and Crop
Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu
Abstract:
Since the accumulation of large quantities of sewage sludge is producing serious environmental problems, numerous environmental specialists are looking for solutions to solve this problem. The sewage sludge obtained by treatment of municipal wastewater may be used as fertiliser on agricultural soils because such sludge contains large amounts of nitrogen, phosphorus and organic matter. In many countries, sewage sludge is used instead of chemical fertilizers in agriculture, this being the most feasible method to reduce the increasingly larger quantities of sludge. The use of sewage sludge on agricultural soils is allowed only with a strict monitoring of their physical and chemical parameters, because heavy metals exist in varying amounts in sewage sludge. Exceeding maximum permitted quantities of harmful substances may lead to pollution of agricultural soil and may cause their removal aside because the plants may take up the heavy metals existing in soil and these metals will most probably be found in humans and animals through food. The sewage sludge analyzed for the present paper was extracted from the Wastewater Treatment Station (WWTP) Galati, Romania. The physico-chemical parameters determined were: pH (upH), total organic carbon (TOC) (mg L⁻¹), N-total (mg L⁻¹), P-total (mg L⁻¹), N-NH₄ (mg L⁻¹), N-NO₂ (mg L⁻¹), N-NO₃ (mg L⁻¹), Fe-total (mg L⁻¹), Cr-total (mg L⁻¹), Cu (mg L⁻¹), Zn (mg L⁻¹), Cd (mg L⁻¹), Pb (mg L⁻¹), Ni (mg L⁻¹). The determination methods were electrometrical (pH, C, TSD) - with a portable HI 9828 HANNA electrodes committed multiparameter and spectrophotometric - with a Spectroquant NOVA 60 - Merck spectrophotometer and with specific Merck parameter kits. The tests made pointed out the fact that the sludge analysed is low heavy metal falling within the legal limits, the quantities of metals measured being much lower than the maximum allowed. The results of the tests made to determine the content of nutrients in the sewage sludge have shown that the existing nutrients may be used to increase the fertility of agricultural soils. Other tests were carried out on lands where sewage sludge was applied in order to establish the maximum quantity of sludge that may be used so as not to constitute a source of pollution. The tests were made on three plots: a first batch with no mud and no chemical fertilizers applied, a second batch on which only sewage sludge was applied, and a third batch on which small amounts of chemical fertilizers were applied in addition to sewage sludge. The results showed that the production increases when the soil is treated with sludge and small amounts of chemical fertilizers. Based on the results of the present research, a fertilization plan has been suggested. This plan should be reconsidered each year based on the crops planned, the yields proposed, the agrochemical indications, the sludge analysis, etc.Keywords: agricultural use, crops, physico–chemical parameters, sewage sludge
Procedia PDF Downloads 290
4931 Connected Objects with Optical Rectenna for Wireless Information Systems
Authors: Chayma Bahar, Chokri Baccouch, Hedi Sakli, Nizar Sakli
Abstract:
The harvesting and transport of optical and radiofrequency signals is a topical subject with multiple challenges. In this paper, we present an optical rectenna system: a hybrid solar-cell antenna for 5G mobile communication networks, together with its rectifying circuit. A parametric study is carried out to follow the influence of load resistance and input power on the performance of the optical rectenna system. The proposed solar-cell antenna structure operates in the 2.45 GHz band of the future 5G standard.
Keywords: antenna, IoT, optical rectenna, solar cell
Procedia PDF Downloads 178
4930 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
Progressing Cavity Pump (PCP) is a type of positive displacement pump that is being awarded greater importance as capable artificial lift equipment in the heavy oil field. The most commonly PCP used is driven single lobe pump that consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by the experimental and Computational Fluid Dynamic (CFD) approach from the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flowrate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flowrate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once a steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model implementation was developed on Star- CCM+ using an Overset Mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimens. The simulations have a good agreement with the experimental data due to Mean Squared Error (MSE) in under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and it is possible to show a directly proportional relationship between the rotational speed of the rotor and the flow rate calculated. The maximum production rates for the different speeds for water were 3.8 GPM, 4.3 GPM, and 6.1 GPM; also, for the oil tested were 1.8 GPM, 2.5 GPM, 3.8 GPM, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation around of 30% of the pressure rise, between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids exist a diminution due to the viscosity.Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
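For context, the Grid Convergence Index quoted above is commonly computed with Roache's formula from solutions on two grids; the sketch below uses that common form with made-up pressure-rise values, an assumed refinement ratio and an assumed formal order of accuracy, which may differ from the procedure actually followed in the paper.

```python
def grid_convergence_index(f_fine: float, f_coarse: float, r: float = 2.0,
                           p: float = 2.0, fs: float = 1.25) -> float:
    """Roache GCI (as a fraction): fs * |relative difference| / (r**p - 1).
    r is the grid refinement ratio, p the assumed order of accuracy."""
    eps = abs((f_coarse - f_fine) / f_fine)
    return fs * eps / (r ** p - 1.0)

# Hypothetical pressure-rise predictions on fine and coarse meshes (bar):
print(f"GCI = {grid_convergence_index(2.05, 2.11) * 100:.1f} %")
```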
Procedia PDF Downloads 128
4929 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees from a given forest. This kind of information is lacking in many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood from Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by alternating four minimum logging diameters and five cutting cycles following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹ with an overall mean of 0.59 cm year⁻¹. The population of J. procera tree followed a reverse J-shape diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) around 91 years, with a diameter size of 50 cm. The simulation analysis revealed that 40 cm MLD and a 15-year cutting cycle are the best minimum logging diameter and cutting cycle. This combination showed the largest harvestable volume of wood potential, volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and has a large amount of harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure and diameter growth data from tree ring analysis enables the exact prediction of the harvestable volume of wood. The developed model supplied an idea about the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.Keywords: logging, growth model, cutting cycle, minimum logging diameter
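The culmination ages quoted for the current and mean annual increments follow from their definitions (CAI ≈ dV/dt, MAI = V/t); the snippet below shows how they can be located numerically from a volume-age series, using a purely hypothetical logistic growth curve rather than the Chilimo ring-width data.

```python
import numpy as np

age = np.arange(1, 151)                                  # years
volume = 120.0 / (1.0 + np.exp(-(age - 60) / 18.0))      # hypothetical volume curve, m3

cai = np.gradient(volume, age)                           # current annual increment, m3/yr
mai = volume / age                                       # mean annual increment, m3/yr

print("CAI culminates at age", age[np.argmax(cai)])
print("MAI culminates at age", age[np.argmax(mai)])      # also where CAI crosses MAI
```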
Procedia PDF Downloads 89
4928 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon
Authors: Allaw Kamel, Al-Chami Leila
Abstract:
It is important to reduce the human influence on natural resources by identifying an appropriate land use. Moreover, it is essential to carry out the scientific land evaluation. Such kind of analysis allows identifying the main factors of agricultural production and enables decision makers to develop crop management in order to increase the land capability. The key is to match the type and intensity of land use with its natural capability. Therefore; in order to benefit from these areas and invest them to obtain good agricultural production, they must be organized and managed in full. Lebanon suffers from the unorganized agricultural use. We take south Lebanon as a study area, it is the most fertile ground and has a variety of crops. The study aims to identify and locate the most suitable area to cultivate thirteen type of permanent trees which are: apples, avocados, stone fruits in coastal regions and stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criterion for selection of the best location to cultivate. Soil, rainfall, PH, temperature, and elevation are main inputs to create the final map. Input data of each factor is managed, visualized and analyzed using Geographic Information System (GIS). Management GIS tools are implemented to produce input maps capable of identifying suitable areas related to each index. The combination of the different indices map generates the final output map of the suitable place to get the best permanent tree productivity. The output map is reclassified into three suitability classes: low, moderate, and high suitability. Results show different locations suitable for different kinds of trees. Results also reflect the importance of GIS in helping decision makers finding a most suitable location for every tree to get more productivity and a variety in crops.Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location
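The map-combination step described above amounts to a weighted overlay of normalised factor rasters followed by reclassification into the three suitability classes; a minimal raster-level sketch is given below (the weights, class thresholds and tiny arrays are hypothetical, and a real workflow would read the GIS layers rather than hard-coded values).

```python
import numpy as np

def normalise(raster):
    """Rescale a factor raster to a 0-1 suitability index."""
    return (raster - raster.min()) / (raster.max() - raster.min())

# Tiny hypothetical rasters for rainfall, soil pH and temperature:
rain = normalise(np.array([[600., 800.], [900., 1100.]]))
ph   = normalise(np.array([[5.5, 6.5], [7.0, 8.0]]))
temp = normalise(np.array([[14., 18.], [20., 24.]]))

suitability = 0.4 * rain + 0.3 * ph + 0.3 * temp          # hypothetical weights
classes = np.digitize(suitability, [0.33, 0.66]) + 1      # 1 = low, 2 = moderate, 3 = high
print(classes)
```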
Procedia PDF Downloads 141
4927 Analysis of Force Convection in Bandung Triga Reactor Core Plate Types Fueled Using Coolod-N2
Authors: K. A. Sudjatmi, Endiah Puji Hastuti, Surip Widodo, Reinaldy Nazar
Abstract:
A possible halt in the production of TRIGA fuel elements by their manufacturer should be anticipated by the operating agency of the TRIGA reactor by replacing the cylindrical fuel elements with plate-type fuel elements that are available on the market. To this end, calculations were performed for U3Si2-Al fuel with a uranium enrichment of 19.75% and a loading of 2.96 gU/cm3. The maximum power at which the plate-fuelled BANDUNG TRIGA reactor can be operated in free-convection cooling mode is 600 kW. In this study, a thermal-hydraulic characteristic calculation model was developed for a reactor core power of 2 MW. The plate-fuelled BANDUNG TRIGA reactor core is composed of 16 fuel elements, 4 control elements and one irradiation facility located right in the middle of the core. The core is cooled using the already available pump with a flow rate of 900 gpm. The forced-convection cooling mode, with downward flow at 10%, 20%, 30% and so on up to 100% of the coolant flow rate, was analysed using the COOLOD-N2 code. The calculations showed that at 2 MW power, with an inlet coolant temperature of 37 °C and 50% of the coolant flow rate, the coolant, maximum cladding and meat temperatures were 64.96 °C, 124.81 °C and 125.08 °C respectively, with DNBR (departure from nucleate boiling ratio) = 1.23 and OFIR (onset of flow instability ratio) = 1.00. The results are expected to be used as a reference for determining the power and cooling rate of the plate-fuelled BANDUNG TRIGA reactor core.
Keywords: TRIGA, COOLOD-N2, plate type fuel element, forced convection, thermal hydraulic characteristic
Procedia PDF Downloads 300
4926 Status of the European Atlas of Natural Radiation
Authors: G. Cinelli, T. Tollefsen, P. Bossew, V. Gruber, R. Braga, M. A. Hernández-Ceballos, M. De Cort
Abstract:
In 2006, the Joint Research Centre (JRC) of the European Commission started the project of the 'European Atlas of Natural Radiation'. The Atlas aims at preparing a collection of maps of Europe displaying the levels of natural radioactivity caused by different sources (indoor and outdoor radon, cosmic radiation, terrestrial radionuclides, terrestrial gamma radiation, etc). The overall goal of the project is to estimate, in geographical resolution, the annual dose that the public may receive from natural radioactivity, combining all the information from the different radiation components. The first map which has been developed is the European map of indoor radon (Rn) since in most cases Rn is the most important contribution to exposure. New versions of the map are realised when new countries join the project or when already participating countries send new data. We show the latest status of this map which currently includes 25 European countries. Second, the JRC has undertaken to map a variable which measures 'what earth delivers' in terms of Rn. The corresponding quantity is called geogenic radon potential (RP). Due to the heterogeneity of data sources across the Europe there is need to develop a harmonized quantity which at the one hand adequately measures or classifies the RP, and on the other hand is suited to accommodate the variety of input data used to estimate this target quantity. Candidates for input quantities which may serve as predictors of the RP, and for which data are available across Europe, to different extent, are Uranium (U) concentration in rocks and soils, soil gas radon and soil permeability, terrestrial gamma dose rate, geological information and indoor data from ground floor. The European Geogenic Radon Map gives the possibility to characterize areas, on European geographical scale, for radon hazard where indoor radon measurements are not available. Parallel to ongoing work on the European Indoor Radon, Geogenic Radon and Cosmic Radiation Maps, we made progress in the development of maps of terrestrial gamma radiation and U, Th and K concentrations in soil and bedrock. We show the first, preliminary map of the terrestrial gamma dose rate, estimated using the data of ambient dose equivalent rate available from the EURDEP system (about 5000 fixed monitoring stations across Europe). Also, the first maps of U, Th, and K concentrations in soil and bedrock are shown in the present work.Keywords: Europe, natural radiation, mapping, indoor radon
Procedia PDF Downloads 291
4925 Adsorption of Heavy Metals Using Chemically-Modified Tea Leaves
Authors: Phillip Ahn, Bryan Kim
Abstract:
Copper is perhaps the most prevalent heavy metal used in the manufacturing industries, from food additives to metal-mechanic factories. Common methodologies to remove copper are expensive and produce undesired by-products. A good decontaminating candidate should be environment-friendly, inexpensive, and capable of eliminating low concentrations of the metal. This work suggests chemically modified spent tea leaves of chamomile, peppermint and green tea in their thiolated, sulfonated and carboxylated forms as candidates for the removal of copper from solutions. Batch experiments were conducted to maximize the adsorption of copper (II) ions. Effects such as acidity, salinity, adsorbent dose, metal concentration, and presence of surfactant were explored. Experimental data show that maximum adsorption is reached at neutral pH. The results indicate that Cu(II) can be removed up to 53%, 22% and 19% with the thiolated, carboxylated and sulfonated adsorbents, respectively. Maximum adsorption of copper on TPM (53%) is achieved with 150 mg and decreases with the presence of salts and surfactants. Conversely, sulfonated and carboxylated adsorbents show better adsorption in the presence of surfactants. Time-dependent experiments show that adsorption is reached in less than 25 min for TCM and 5 min for SCM. Instrumental analyses determined the presence of active functional groups, thermal resistance, and scanning electron microscopy, indicating that both adsorbents are promising materials for the selective recovery and treatment of metal ions from wastewaters. Finally, columns were prepared with these adsorbents to explore their application in scaled-up processes, with very positive results. A long-term goal involves the recycling of the exhausted adsorbent and/or their use in the preparation of biofuels due to changes in materials’ structures.Keywords: heavy metal removal, adsorption, wastewaters, water remediation
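The removal percentages reported above, and the adsorption capacity usually reported alongside them, come from simple mass-balance formulas; the helper below illustrates both with hypothetical batch-experiment numbers (initial and equilibrium concentrations, solution volume and adsorbent mass are made up).

```python
def removal_percent(c0_mg_l: float, ce_mg_l: float) -> float:
    """Percentage of Cu(II) removed from solution."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100.0

def adsorption_capacity(c0_mg_l: float, ce_mg_l: float,
                        volume_l: float, mass_g: float) -> float:
    """q_e in mg of metal per g of adsorbent: q_e = (C0 - Ce) * V / m."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Hypothetical batch run: 50 mg/L Cu(II), 0.05 L of solution, 150 mg of thiolated leaves
print(round(removal_percent(50.0, 23.5), 1))                   # ~53 %
print(round(adsorption_capacity(50.0, 23.5, 0.05, 0.150), 2))  # mg/g
```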
Procedia PDF Downloads 290
4924 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution, leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as; Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution is that its tendency to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub shape) or reversed “bathtub” shape hazard rate functions which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated in a real-life data set and compared to its sub-models; Weibull, log-logistic, and BurrXII distributions and other parametric survival distributions with 3-parmaeters; like the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3- parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and performance of Gibbs sampling for the data set are also carried out.Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
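To make the hazard-shape discussion concrete, the sketch below evaluates the standard two-parameter log-logistic hazard (one of the key sub-models of the proposed distribution) and fits it by maximum likelihood to hypothetical uncensored survival times; the generalized three-parameter form, censoring handling and Bayesian analysis of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def loglogistic_hazard(t, alpha, beta):
    """h(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)."""
    z = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1.0 + z)

def neg_log_likelihood(params, t):
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    z = (t / alpha) ** beta
    log_pdf = np.log(beta / alpha) + (beta - 1) * np.log(t / alpha) - 2 * np.log1p(z)
    return -np.sum(log_pdf)

times = np.array([2.1, 3.5, 4.0, 5.2, 6.8, 7.7, 9.4, 12.0])   # hypothetical data
fit = minimize(neg_log_likelihood, x0=[5.0, 1.5], args=(times,), method="Nelder-Mead")
alpha_hat, beta_hat = fit.x
print(alpha_hat, beta_hat, loglogistic_hazard(5.0, alpha_hat, beta_hat))
```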
Procedia PDF Downloads 202
4923 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufactures it is of great interest to know in advance, how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full-scale. Especially the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up to characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plain parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness especially when applied in the height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly are approaching toward constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
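The displacement thickness, momentum thickness and form factor discussed above follow from standard integrals of the measured velocity profile; a numerical version is sketched below on a hypothetical 1/7-power-law profile (the PIV data themselves are not reproduced).

```python
import numpy as np

def boundary_layer_parameters(y, u, u_edge):
    """Displacement thickness, momentum thickness and form factor from a velocity profile."""
    ratio = u / u_edge
    delta_star = np.trapz(1.0 - ratio, y)             # displacement thickness
    theta = np.trapz(ratio * (1.0 - ratio), y)        # momentum thickness
    return delta_star, theta, delta_star / theta      # form factor H

y = np.linspace(0.0, 0.05, 200)                       # wall-normal distance, m (hypothetical)
u = 60.0 * (y / 0.05) ** (1.0 / 7.0)                  # hypothetical 1/7-power-law profile, m/s
print(boundary_layer_parameters(y, u, u_edge=60.0))
```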
Procedia PDF Downloads 305
4922 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N
Authors: Oindrila Nath, S. Sridharan
Abstract:
Long-term (2002-2012) temperature and ozone measurements by Sounding of Atmosphere by Broadband Emission Radiometry (SABER) instrument onboard Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite zonally averaged over 10°N-15°N are used to study their long-term changes and their responses to solar cycle, quasi-biennial oscillation and El Nino Southern Oscillation. The region is selected to provide more accurate long-term trends and variabilities, which were not possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5K/decade in the stratosphere and that of 3K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere although there is a small positive trend in stratosphere at 25 km. Other than this no significant ozone trend is observed in stratosphere. Negative ozone-QBO response (0.02ppmv/QBO), positive ozone-solar cycle (0.91ppmv/100SFU) and negative response to ENSO (0.51ppmv/SOI) have been found more in mesosphere whereas positive ozone response to ENSO (0.23ppmv/SOI) is pronounced in stratosphere (20-30 km). The temperature response to solar cycle is more positive (3.74K/100SFU) in the upper mesosphere and its response to ENSO is negative around 80 km and positive around 90-100 km and its response to QBO is insignificant at most of the heights. Composite monthly mean of ozone volume mixing ratio shows maximum values during pre-monsoon and post-monsoon season in middle stratosphere (25-30 km) and in upper mesosphere (85-95 km) around 10 ppmv. Composite monthly mean of temperature shows semi-annual variation with large values (~250-260 K) in equinox months and less values in solstice months in upper stratosphere and lower mesosphere (40-55 km) whereas the SAO becomes weaker above 55 km. The semi-annual variation again appears at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), less temperature (~170-190 K) prevails in all the months except during September, when the temperature is slightly more. The height profiles of amplitudes of semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv respectively in upper mesosphere (80-100 km), whereas SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in lower and middle mesosphere around 60-85 km. The phase profiles of both SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E) and the results obtained will be presented during the meeting.Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature
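Trend and response coefficients of the kind quoted above are typically obtained by regressing the monthly series onto a linear trend, a solar proxy (e.g. F10.7), a QBO index and the SOI, together with annual and semi-annual harmonics; a generic least-squares sketch is shown below with entirely synthetic inputs standing in for the SABER series and the proxy indices.

```python
import numpy as np

months = np.arange(132)                                  # 2002-2012, monthly
rng = np.random.default_rng(0)
ozone = 5 + 0.01 * months + rng.normal(0, 0.3, months.size)        # synthetic series (ppmv)
f107, qbo, soi = (rng.normal(size=months.size) for _ in range(3))  # synthetic proxies

t = months / 12.0
X = np.column_stack([
    np.ones_like(t), t, f107, qbo, soi,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),        # annual oscillation (AO)
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),        # semi-annual oscillation (SAO)
])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
trend_per_decade = coef[1] * 10.0
ao_amp, sao_amp = np.hypot(coef[5], coef[6]), np.hypot(coef[7], coef[8])
print(trend_per_decade, ao_amp, sao_amp)
```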
Procedia PDF Downloads 410
4921 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum
Authors: Dunwen Zuo, Yongfang Deng, Bo Song
Abstract:
An attempt is made here to measure the forces of three directions, under conditions of different feed speeds, different tilt angles of tool and without or with the pin on the tool, by using octagonal ring dynamometer in the AA2024 aluminum FSJ (Friction Stir Joining) process, and investigate how four main factors influence forces in the FSJ process. It is found that, high feed speed lead to small feed force and small lateral force, but high feed speed leads to large feed force in the stable joining stage of process. As the rotational speed increasing, the time of axial force drop from the maximum to the minimum required increased in the push-up process. In the stable joining stage, the rotational speed has little effect on the feed force; large rotational speed leads to small lateral force and axial force. The maximum axial force increases as the tilt angle of tool increases at the downward movement stage. At the moment of start feeding, as tilt angle of tool increases, the amplitudes of the axial force increasing become large. In the stable joining stage, with the increase of tilt angle of tool, the axial force is increased, the lateral force is decreased, and the feed force almost unchanged. The tool with pin will decrease axial force in the downward movement stage. The feed force and lateral force will increase, but the axial force will reduced in the stable joining stage by using the tool with pin compare to by using the tool without pin.Keywords: FSJ, force factor, AA2024 aluminum, friction stir joining
Procedia PDF Downloads 491
4920 Isolation, Characterization and Optimization of Alkalophilic and Thermotolerant Lipase from Bacillus subtilis Strain
Authors: Indu Bhushan Sharma, Rashmi Saraswat
Abstract:
The thermotolerant, solvent stable and alkalophilic lipase producing bacterial strain was isolated from the water sample of the foothills of Trikuta Mountain in Kakryal (Reasi district) in Jammu and Kashmir, India. The lipase-producing microorganisms were screened using tributyrin agar plates. The selected microbe was optimized for maximum lipase production by subjecting to various carbon and nitrogen sources, incubation period and inoculum size. The selected strain was identified as Bacillus subtilis strain kakrayal_1 (BSK_1) using 16S rRNA sequence analysis. Effect of pH, temperature, metal ions, detergents and organic solvents were studied on lipase activity. Lipase was found to be stable over a pH range of 6.0 to 9.0 and exhibited maximum activity at pH 8. Lipolytic activity was highest at 37°C and the enzyme activity remained at 60°C for 24hrs, hence, established as thermo-tolerant. Production of lipase was significantly induced by vegetable oil and the best nitrogen source was found to be peptone. The isolated Bacillus lipase was stimulated by pre-treatment with Mn2+, Ca2+, K+, Zn2+, and Fe2+. Lipase was stable in detergents such as triton X 100, tween 20 and Tween 80. The 100% ethyl acetate enhanced lipase activity whereas, lipase activity were found to be stable in Hexane. The optimization resulted in 4 fold increase in lipase production. Bacillus lipases are ‘generally recognized as safe’ (GRAS) and are industrially interesting. The inducible alkaline, thermo-tolerant lipase exhibited the ability to be stable in detergents and organic solvents. This could be further researched as a potential biocatalyst for industrial applications such as biotransformation, detergent formulation, bioremediation and organic synthesis.Keywords: bacillus, lipase, thermotolerant, alkalophilic
Procedia PDF Downloads 255
4919 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications
Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero
Abstract:
Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches concerning the concentration in aqueous solutions systematically ranging from 0.1 to 10 mg/mL. Our results demonstrated a non-linear fluorescence quenching as the concentration increases for both of the NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. Furthermore, this behaviour is theoretically explained as an electronic recombination process that modulates the intensity in the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the fluorophores' separation distance. Thus, the concentration level is simulated as follows, a small distance between nanodiamonds would be considered a highly concentrated system, whereas a large distance would mean a low concentrated one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is focused on the concentration of 0.5 mg/mL, which our studies demonstrate the optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water giving the possibility to internalize it in cells without harming the living media. To this end, not only can we track nanodiamonds on the surface or inside the cell with excellent precision due to their fluorescent intensity, but also, we can perform thermometry tests transforming a fluorescence contrast image into a temperature contrast image.Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry
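The FRET picture invoked above relates transfer efficiency to fluorophore separation through the Förster radius; the helper below evaluates the standard relation E = 1 / (1 + (r/R0)^6) with hypothetical distances, mirroring the idea that a higher concentration corresponds to a smaller separation.

```python
def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    """Forster resonance energy transfer efficiency for donor-acceptor distance r."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Hypothetical Forster radius and separations standing in for high/low concentration:
for r in (3.0, 5.0, 8.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r, r0_nm=5.0):.3f}")
```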
Procedia PDF Downloads 405
4918 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV
Authors: Joyce K. Anastasi, Bernadette Capili
Abstract:
Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms that are not curative and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV. It is estimated that DSP affects 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, viral protein-produced neurotoxicity, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients’ perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From our needs assessment, we learned from the patients’ perspective detailed descriptions of their symptoms; physical functioning with DSP; self-care remedies tried, and desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the need assessment. We developed and completed a feasibility study that provided us with invaluable information that informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for recruitment and retention of study participants.Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS
Procedia PDF Downloads 85
4917 Using AI Based Software as an Assessment Aid for University Engineering Assignments
Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth
Abstract:
As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually found themselves on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and decrease ever-expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s, and educators quickly found that the process of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process. It may be argued that the impersonal nature of VLEs and their limited functionality were the leading factors contributing to this reported dissatisfaction. To this day, often faced with the prospect of assigning colossal engineering cohorts their homework and assessments, educators may frequently choose assessment formats optimised for automation, such as multiple-choice quizzes and numerical answer input boxes, so that automated grading software embedded in the VLEs can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining confidence in using, solving and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this can become challenging if the majority of assignment formats students engage with are multiple-choice questions, and educators end up with a reduced perspective of their students' ability to manipulate equations. Artificial intelligence (AI) has in recent times been shown to be an important consideration for many technologies. In our paper, we explore the use of new AI-based software designed to work in conjunction with current VLEs. Using our experience with the software, we discuss its potential to solve a selection of problems, ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software's potential to increase learning efficiency through its features, which claim to allow more customised and higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms, and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks of the software, including the extent to which optical character recognition (OCR) could play a part in the perpetuation of errors and create disagreements between student intent and submitted assignment answers. It is the intention of the authors that this study will be useful as an example of the implementation of AI in a practical assessment scenario, and as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments. Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)
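As a side note on how equation-based assessment can move beyond multiple choice, the sketch below uses symbolic comparison to accept any algebraically equivalent form of a submitted expression; it is a generic Python/SymPy illustration, not the AI software examined in the paper, and the expressions are invented for the example.

```python
import sympy as sp

x = sp.symbols('x')

expected = sp.diff(x**2 * sp.sin(x), x)        # reference answer: d/dx [x^2 sin x]
submitted = 2*x*sp.sin(x) + x**2*sp.cos(x)     # a form a student might enter

# simplify(expected - submitted) == 0 accepts any algebraically equivalent answer,
# avoiding the brittleness of exact string matching.
print(sp.simplify(expected - submitted) == 0)  # True
```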
Procedia PDF Downloads 123
4916 Production of Organic Solvent Tolerant Hydrolytic Enzymes (Amylase and Protease) by Bacteria Isolated from Soil of a Dairy Farm
Authors: Alok Kumar, Hari Ram, Lebin Thomas, Ved Pal Singh
Abstract:
Organic solvent tolerant amylases and proteases of microbial origin are in great demand for their application in the transglycosylation of water-insoluble flavonoids and in peptide-synthesising reactions in organic media. Most amylases and proteases are unstable in the presence of organic solvents. In the present work, two different bacterial strains, M-11 and VP-07, were isolated from a soil sample of a dairy farm in Delhi, India, for the efficient production of extracellular amylase and protease through their screening on starch agar (SA) and skimmed milk agar (SMA) plates, respectively. Both strains (M-11 and VP-07) were identified based on morphological, biochemical and 16S rRNA gene sequencing methods. After analysis through the Ez-Taxon software, the strains M-11 and VP-07 were found to have maximum pairwise similarities of 98.63% and 100% with Bacillus subtilis subsp. inaquosorum BGSC 3A28 and Bacillus anthracis ATCC 14578 and were therefore identified as Bacillus sp. UKS1 and Bacillus sp. UKS2, respectively. A time course study of enzyme activity and bacterial growth showed that both strains exhibited typical sigmoid growth behaviour, and maximum production of amylase (180 U/ml) and protease (78 U/ml) by these strains (UKS1 and UKS2) commenced during the stationary phase of growth at 24 and 20 h, respectively. Thereafter, both amylase and protease were tested for their tolerance towards organic solvents and were found to be active as well as stable in p-xylene (130% and 115%), chloroform (110% and 112%), isooctane (119% and 107%), benzene (121% and 104%), n-hexane (116% and 103%) and toluene (112% and 101%, respectively). Owing to such properties, these enzymes can be exploited for their potential application in industries for organic synthesis. Keywords: amylase, enzyme activity, industrial applications, organic solvent tolerant, protease
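For readers who want the arithmetic behind the residual-activity figures, relative activity in a solvent is simply the activity after solvent exposure expressed as a percentage of the control; the solvent reading in the sketch below is hypothetical, with only the control values taken from the abstract.

```python
# Relative activity (%) = activity after solvent exposure / control activity * 100.
control_amylase = 180.0    # U/ml, maximum amylase production reported above
control_protease = 78.0    # U/ml, maximum protease production reported above

def relative_activity(activity_in_solvent, control):
    return 100.0 * activity_in_solvent / control

# Hypothetical reading: an amylase activity of 234 U/ml in p-xylene
# would correspond to 130% relative activity.
print(round(relative_activity(234.0, control_amylase)))  # 130
```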
Procedia PDF Downloads 344
4915 Soil Enzyme Activity as Influenced by Post-emergence Herbicides Applied in Soybean [Glycine max (L.) Merrill]
Authors: Uditi Dhakad, Baldev Ram, Chaman K. Jadon, R. K. Yadav, D. L. Yadav, Pratap Singh, Shalini Meena
Abstract:
A field experiment was conducted during Kharif 2021 at the Agricultural Research Station, Kota, to evaluate the effect of different post-emergence herbicides applied to soybean [Glycine max (L.) Merrill] on soil enzyme activity, viz. dehydrogenase, phosphatase, and urease. The soil of the experimental site was clay loam (Vertisol) in texture and slightly alkaline in reaction, with a pH of 7.7. The soil was low in organic carbon (0.49%), medium in available nitrogen (210 kg/ha) and phosphorus (23.5 kg P2O5/ha), and high in potassium (400 kg K2O/ha). The results showed no significant adverse effect on soil dehydrogenase, urease, and phosphatase activity from the application of post-emergence herbicides compared with the untreated control. Two hand weedings at 20 and 40 DAS registered the maximum dehydrogenase activity (0.329 μg TPF/g soil/d), closely followed by the herbicide mixtures and sole herbicides, while pre-emergence application of pendimethalin + imazethapyr 960 g a.i./ha and pendimethalin 1.0 kg a.i./ha significantly reduced dehydrogenase activity compared to the control. Urease activity was not much affected under the different weed control treatments and the weedy check; the treatments were statistically non-significant, with values ranging between 1.16-1.25 μg NH4-N/g soil/d. Phosphatase activity was also not influenced significantly by the various weed control treatments, though the maximum phosphatase activity (30.17 μg pnp/g soil/hr) was observed under two hand weedings, followed by fomesafen + fluazifop-p-butyl 220 g a.i./ha. Herbicidal weed control measures did not influence the total bacteria, fungi, and actinomycetes populations. Keywords: dehydrogenase, phosphatase, post-emergence, soil enzymes, urease
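A one-way ANOVA of enzyme activity across treatments, of the kind routinely used for such trials, could be run along the following lines; the replicate values are invented for illustration and are not the experimental data.

```python
from scipy import stats

# Hypothetical dehydrogenase activity replicates (ug TPF/g soil/d) per treatment;
# illustrative values only, not the field data reported above.
hand_weeding  = [0.33, 0.32, 0.33]
herbicide_mix = [0.31, 0.30, 0.32]
weedy_check   = [0.29, 0.30, 0.29]

f_stat, p_value = stats.f_oneway(hand_weeding, herbicide_mix, weedy_check)
# p >= 0.05 would indicate no significant treatment effect on the enzyme.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```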
Procedia PDF Downloads 105
4914 An Empirical Analysis of Farmers Field Schools and Effect on Tomato Productivity in District Malakand Khyber Pakhtunkhwa-Pakistan
Authors: Mahmood Iqbal, Khalid Nawab, Tachibana Satoshi
Abstract:
Farmer Field School (FFS) constantly aims to assist farmers in discovering and learning about field ecology and integrated crop management. The study was conducted to examine the change in productivity of the tomato crop in the study area, to determine the increase in per-acre yield of the crop, and to find out the reduction in per-acre input cost. A study of the tomato crop was conducted in ten villages, namely Jabban, Bijligar Colony, Palonow, Heroshah, Zara Maira, Deghar Ghar, Sidra Jour, Anar Thangi, Miangano Korona and Wartair of district Malakand. From each village, 15 respondents were selected randomly on the basis of equal allocation, giving a sample size of 150 respondents. The research was based on primary as well as secondary data. Primary data were collected from farmers, while secondary data were taken from the Agriculture Extension Department Dargai, District Malakand. An interview schedule was prepared, and each farmer was interviewed personally. The study was based on a comparison of the cost, yield and income of tomato before and after FFS. A paired t-test and the Statistical Package for Social Sciences (SPSS) were used for the analysis. The outcomes of the study show that the integrated pest management project has brought a positive change in the attitude of farmers of the project area through the FFS approach. In district Malakand, 66.0% of the respondents were in the age group of 31-50 years; 11.3% of respondents had primary-level education, 12.7% middle level, 28.7% matric level, 3.3% intermediate level and 2.0% graduate level, while 42.0% of respondents were illiterate and had no formal education. The average landholding size of farmers was 6.47 acres. The costs of seed, crop protection from insect pests and crop protection from diseases were reduced by Rs. 210.67, Rs. 2584.43 and Rs. 3044.16, respectively; the costs of fertilizers and farmyard manure increased by Rs. 1548.87 and Rs. 1151.40, respectively, while tomato yield increased by 1585.03 kg/acre, from 7663.87 to 9248.90 kg/acre. The role of FFS, initiated by the integrated pest management project through the Department of Agriculture Extension, in the development of agriculture was worth mentioning: it has enhanced the tomato crop yield and the income of farmers through the FFS approach. On the basis of the results of the research, the integrated pest management project should expand its developmental activities for maximum participation of the rural masses through the participatory FFS approach. Keywords: agriculture, Farmers field schools, extension education, tomato
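The before/after comparison described above rests on a paired t-test; a minimal sketch of that analysis is shown below, using invented per-farmer yields rather than the actual survey data for the 150 respondents.

```python
from scipy import stats

# Hypothetical per-farmer tomato yields (kg/acre) before and after FFS participation.
yield_before = [7500, 7800, 7600, 7900, 7700]
yield_after  = [9100, 9300, 9200, 9400, 9250]

mean_gain = sum(a - b for a, b in zip(yield_after, yield_before)) / len(yield_before)
t_stat, p_value = stats.ttest_rel(yield_after, yield_before)

print(f"mean gain = {mean_gain:.0f} kg/acre")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```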
Procedia PDF Downloads 613
4913 Assessing Storage of Stability and Mercury Reduction of Freeze-Dried Pseudomonas putida within Different Types of Lyoprotectant
Authors: A. A. M. Azoddein, Y. Nuratri, A. B. Bustary, F. A. M. Azli, S. C. Sayuti
Abstract:
Pseudomonas putida is a potential strain for biological treatment to remove mercury contained in the effluent of the petrochemical industry, owing to its mercury reductase enzyme, which is able to reduce ionic mercury to elemental mercury. Freeze-dried P. putida allows easy, inexpensive shipping and handling and high stability of the product. This study aimed to freeze-dry P. putida cells with the addition of lyoprotectants. A lyoprotectant was added to the cell suspension prior to freezing. The dried P. putida obtained was then mixed with synthetic mercury. The viability of P. putida recovered after freeze-drying was significantly influenced by the type of lyoprotectant. Among the lyoprotectants, Tween 80/sucrose was found to be the best: it was able to recover more than 78% (6.2E+09 CFU/ml) of the original cells (7.90E+09 CFU/ml) after freeze-drying and to retain 5.40E+05 viable cells after 4 weeks of storage at 4°C without vacuum. Polyethylene glycol (PEG) pre-treated cells and broth pre-treated cells recovered more than 64% (5.0E+09 CFU/ml) and >0.1% (5.60E+07 CFU/ml), respectively, after freeze-drying; freeze-dried P. putida cells in PEG and broth did not survive after 4 weeks of storage. Freeze-drying also did not substantially change the growth pattern of P. putida, but an extension of the lag time by 1 hour was found after 3 weeks of storage. Additional time is therefore required for freeze-dried P. putida cells to recover before introducing them to more demanding conditions such as mercury solution. The maximum mercury reduction of PEG pre-treated freeze-dried cells immediately after freeze-drying and after 3 weeks of storage was 56.78% and 17.91%, respectively. The maximum mercury reduction of Tween 80/sucrose pre-treated freeze-dried cells immediately after freeze-drying and after 3 weeks of storage was 26.35% and 25.03%, respectively. Freeze-dried P. putida was found to have lower mercury reduction compared to fresh P. putida grown on agar. Results from this study may be beneficial as an initial reference before commercializing freeze-dried P. putida. Keywords: Pseudomonas putida, freeze-dry, PEG, Tween 80/sucrose, mercury, cell viability
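The recovery percentage is simply the viable count after freeze-drying expressed as a fraction of the count before; the short sketch below reproduces the roughly 78% figure quoted for the Tween 80/sucrose treatment from the CFU values in the abstract.

```python
# Percentage recovery = CFU after freeze-drying / CFU before freeze-drying * 100.
cfu_before = 7.90e9   # CFU/ml before freeze-drying (from the abstract)
cfu_after  = 6.20e9   # CFU/ml recovered with Tween 80/sucrose (from the abstract)

recovery_pct = 100.0 * cfu_after / cfu_before
print(f"recovery = {recovery_pct:.1f}%")   # ~78.5%, consistent with the >78% reported
```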
Procedia PDF Downloads 355
4912 Implementation of 4-Bit Direct Charge Transfer Switched Capacitor DAC with Mismatch Shaping Technique
Authors: Anuja Askhedkar, G. H. Agrawal, Madhu Gudgunti
Abstract:
The Direct Charge Transfer Switched Capacitor (DCT-SC) DAC is the internal DAC used in a Delta-Sigma (ΔΣ) DAC, which works on the oversampling concept. The switched-capacitor DAC mainly suffers from mismatch among its capacitors; this mismatch causes nonlinearity between output and input. The Dynamic Element Matching (DEM) technique is used to average out the capacitor mismatch, and there are many variants according to the element selection logic. In this paper, the Data Weighted Averaging (DWA) technique is used for mismatch shaping, and a 4-bit DCT-SC DAC with DWA-DEM is implemented using the WINSPICE simulation software in 180 nm CMOS technology. The DNL of the DAC with DWA is ±0.03 LSB, and the INL is ±0.02 LSB. Keywords: Σ-Δ DAC, DCT-SC-DAC, mismatch shaping, DWA, DEM
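For readers unfamiliar with DWA, the behavioural sketch below shows the core element-selection idea: a pointer rotates through the unit capacitors so that each is used equally often on average, first-order shaping the mismatch error. It assumes a thermometer-coded 4-bit DAC with 15 unit elements and is only a behavioural illustration, not the transistor-level WINSPICE design reported here.

```python
def dwa_select(codes, n_elements=15):
    """Data Weighted Averaging element selection for a thermometer-coded DAC.

    Each input code switches on `code` unit elements starting at a rotating
    pointer, so usage of every element evens out over time (first-order
    mismatch shaping).
    """
    pointer = 0
    for code in codes:                       # code = number of elements to enable (0..n_elements)
        selected = [(pointer + k) % n_elements for k in range(code)]
        pointer = (pointer + code) % n_elements
        yield selected

for sel in dwa_select([5, 7, 3, 10]):
    print(sel)
```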
Procedia PDF Downloads 350
4911 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants' default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower-intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, the French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt), and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically, there was no significant difference between the pre-test and post-test scores for the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants' processing strategies. Gaze plots from pre- and post-tests display participants' fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences in both primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language. Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
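One way to probe the reported tripling of variance on the cumulative target feature is a test for equality of variances; the sketch below applies Levene's test to invented pre/post scores chosen only to show a growing spread, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical interpretation scores for the cumulative target feature
# (French causative with Faire); illustrative values only.
pre  = [4, 5, 5, 6, 4, 5, 6, 5]
post = [3, 8, 4, 9, 2, 7, 9, 5]

print("pre-test variance :", round(np.var(pre, ddof=1), 2))
print("post-test variance:", round(np.var(post, ddof=1), 2))

# Levene's test asks whether the spread differs between pre- and post-test,
# one way of checking for heterogeneous treatment effects across learners.
w_stat, p_value = stats.levene(pre, post)
print(f"Levene W = {w_stat:.2f}, p = {p_value:.3f}")
```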
Procedia PDF Downloads 280
4910 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations
Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik
Abstract:
The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. A work simulation of the motor (applied to a Fiat Panda car) under selected driving conditions, such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h, is considered by the authors. The drive power demand, taking into account the resistance to motion, was determined for the selected driving conditions. The parameters of the motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected driving conditions. The calculated power losses are then used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions and the losses calculated using the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body and cooling system components. The losses generated in individual motor components and their impact on the limitation of the motor's operating parameters are described by the authors. Particular attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented results show how the individual motor power losses, generated under different load conditions while driving, affect its thermal state. Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor
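The drive power demand for the scenarios named above follows from the usual road-load balance of rolling resistance, grade, aerodynamic drag and inertia; the sketch below uses rough placeholder parameters for a small car of the Fiat Panda class, not the authors' vehicle data or FEM results.

```python
import math

m, g, rho = 1200.0, 9.81, 1.2          # vehicle mass (kg), gravity (m/s^2), air density (kg/m^3)
c_rr, c_d, area = 0.012, 0.33, 2.1     # rolling resistance, drag coefficient, frontal area (m^2)

def drive_power(v, slope=0.0, accel=0.0):
    """Required mechanical power (W) at speed v (m/s), road slope (rise/run) and acceleration (m/s^2)."""
    theta = math.atan(slope)
    f_roll  = m * g * c_rr * math.cos(theta)   # rolling resistance
    f_grade = m * g * math.sin(theta)          # grade resistance
    f_aero  = 0.5 * rho * c_d * area * v**2    # aerodynamic drag
    f_inert = m * accel                        # inertial force during acceleration
    return (f_roll + f_grade + f_aero + f_inert) * v

print(f"20% slope at 60 km/h       : {drive_power(60/3.6, slope=0.20)/1000:.1f} kW")
print(f"steady 140 km/h top speed  : {drive_power(140/3.6)/1000:.1f} kW")
print(f"~2.6 m/s^2 accel at 50 km/h: {drive_power(50/3.6, accel=2.6)/1000:.1f} kW")
```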
Procedia PDF Downloads 175