Search results for: maximum input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6139

5059 Geometrical Fluid Model for Blood Rheology and Pulsatile Flow in Stenosed Arteries

Authors: Karan Kamboj, Vikramjeet Singh, Vinod Kumar

Abstract:

Treating blood as a non-Newtonian Carreau fluid, this numerical study investigates pulsatile blood flow in a narrowed, tapered artery containing multiple mild stenoses in the presence of periodic body acceleration. Asymptotic solutions for the flow rate, pressure gradient, velocity profile, wall shear stress and longitudinal impedance to flow are obtained by applying a double perturbation technique to the resulting nonlinear boundary value problem. The blood velocity is observed to increase with the angle of tapering of the artery, the body acceleration and the power-law index, whereas the opposite behaviour is observed for the longitudinal impedance to flow and the wall shear stress when the same parameters increase. The wall shear stress also increases markedly with the maximum depth of the stenosis but decreases significantly with increasing pulsatile Reynolds number. Estimates of the increase in the longitudinal resistance to flow grow overall with the maximum depth of the stenosis and the Weissenberg number. In addition, the mean blood velocity increases noticeably with the angle of tapering of the artery and with the body acceleration parameter.
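
For reference, a standard form of the Carreau constitutive law mentioned above (the paper's exact notation and parameter values are not reproduced here) relates the apparent viscosity to the shear rate through the zero- and infinite-shear viscosities, a relaxation time λ and the power-law index n, with the Weissenberg number built from λ and a characteristic shear rate:

```latex
\mu(\dot{\gamma}) = \mu_{\infty} + \left(\mu_{0}-\mu_{\infty}\right)
\left[\,1+(\lambda\dot{\gamma})^{2}\right]^{\frac{n-1}{2}},
\qquad \mathrm{We} = \lambda\,\dot{\gamma}_{c}.
```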

Keywords: geometry of artery, pulsatile blood flow, multiple stenoses

Procedia PDF Downloads 99
5058 Study on the Strength and Durability Properties of Ternary Blended Concrete

Authors: Athira Babu, M. Nazeer

Abstract:

Concrete is the most common and versatile construction material used in any type of civil engineering structure. The durability and strength characteristics of concrete make it more desirable than any other construction material. However, the manufacture and use of concrete produce a wide range of environmental and social consequences; cement, the major component of concrete, accounts for roughly 5% of global CO2 emissions. In order to improve the environmental friendliness of concrete, suitable substitutes are added to it. The present study deals with GGBS and silica fume as supplementary cementitious materials. Strength and durability studies were conducted on this ternary blended concrete. Several mixes were adopted with varying percentages of silica fume, i.e., 5%, 10% and 15%. A binary mix with 50% GGBS was also prepared, and the GGBS content was kept constant for the rest of the mixes. There is an improvement in compressive strength with the addition of silica fume. Maximum workability, split tensile strength, modulus of elasticity, flexural strength and impact resistance are obtained for the GGBS binary blend. For the durability studies, maximum sulphate resistance, carbonation resistance and resistance to chloride ion penetration are obtained for the ternary blended concrete. Partial replacement of cement with GGBS and silica fume reduces the environmental effects and produces economical and eco-friendly concrete. The study showed that for strength characteristics the binary blended concrete performed better, while for durability the ternary blend performed better.

Keywords: concrete, GGBS, silica fume, ternary blend

Procedia PDF Downloads 482
5057 Effect of Geometry on the Aerodynamic Performance of Darrieus H Type Vertical Axis Wind Turbine

Authors: Belkheir Noura, Rabah Kerfah, Boumehani Abdellah

Abstract:

The influence of solidity variations on the aerodynamic performance of an H-type vertical axis wind turbine is studied in this paper. The wind turbine model used is a three-blade rotor with the symmetrical NACA0021 airfoil and a chord length of 0.265 m. Numerical investigations were carried out for different solidities by changing the radius and the blade number. A two-dimensional model of the wind turbine is employed. The Reynolds-averaged Navier–Stokes equations, closed with the k-ω SST turbulence model, are solved, and the moving-mesh capability of a computational fluid dynamics (CFD) solver is used. For each value of the solidity, the aerodynamic performance and the characteristics of the flow field are studied at several values of the tip speed ratio, from λ = 0.5 to λ = 3, with an incoming wind speed of 8 m/s. The results show that increasing the number of blades reduces the maximum value of the power coefficient of the wind turbine. Also, a VAWT with lower solidity obtains its maximum Cp at a higher tip speed ratio. The effects of changing the radius and the blade number on aerodynamic performance are almost the same. Finally, for validation, the computational results were compared with experimental data from the literature. In conclusion, studying the influence of solidity on the performance of the wind turbine provides a reference for the design of H-type vertical axis wind turbines.

Keywords: wind energy, darrieus h type vertical axis wind turbine, computational fluid dynamics, solidity

Procedia PDF Downloads 96
5056 Hybrid Wind Solar Gas Reliability Optimization Using Harmony Search under Performance and Budget Constraints

Authors: Meziane Rachid, Boufala Seddik, Hamzi Amar, Amara Mohamed

Abstract:

Today’s energy industry seeks maximum benefit with maximum reliability. In order to achieve this goal, design engineers depend on reliability optimization techniques. This work uses the harmony search (HS) meta-heuristic optimization method to solve the design optimization problem of wind-solar-gas power systems. We consider the case where redundant electrical components are chosen to achieve a desirable level of reliability. The electrical power components of the system are characterized by their cost, capacity and reliability. Reliability is considered in this work as the ability to satisfy the consumer demand, which is represented as a piecewise cumulative load curve; this definition of the reliability index is widely used for power systems. The proposed meta-heuristic seeks the optimal design of series-parallel power systems in which a multiple choice of wind generators, transformers and lines is allowed from a list of products available on the market. Our approach has the advantage of allowing electrical power components with different parameters to be allocated in electrical power systems. To allow fast reliability estimation, a universal moment generating function (UMGF) method is applied. A computer program has been developed to implement the UMGF and the HS algorithm. An illustrative example is presented.
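
As a rough illustration of the harmony search loop described above, the Python sketch below improvises redundancy counts for a few hypothetical component versions; the component data, budget, demand and fitness function are placeholders, and a real evaluation would use the UMGF-based reliability index rather than this toy objective.

```python
import random

# Hypothetical component catalogue: (cost, capacity, availability) per version.
CATALOG = [(10, 20, 0.95), (15, 30, 0.97), (25, 45, 0.99)]
BUDGET = 120     # investment budget (placeholder units)
DEMAND = 90      # peak of the cumulative load curve (placeholder)

def fitness(design):
    """Toy objective: availability-weighted capacity, capped at the demand,
    with a penalty for exceeding the budget. A real evaluation would compute
    the reliability index with the universal moment generating function."""
    cost = sum(n * CATALOG[i][0] for i, n in enumerate(design))
    capacity = sum(n * CATALOG[i][1] * CATALOG[i][2] for i, n in enumerate(design))
    return min(capacity, DEMAND) - 10 * max(0, cost - BUDGET)

def harmony_search(hms=10, hmcr=0.9, par=0.3, iterations=2000, max_units=5):
    dims = len(CATALOG)
    memory = [[random.randint(0, max_units) for _ in range(dims)] for _ in range(hms)]
    for _ in range(iterations):
        new = []
        for d in range(dims):
            if random.random() < hmcr:                     # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:                  # pitch adjustment
                    value = min(max_units, max(0, value + random.choice([-1, 1])))
            else:                                          # random selection
                value = random.randint(0, max_units)
            new.append(value)
        worst = min(range(hms), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                            # replace the worst harmony
    return max(memory, key=fitness)

print(harmony_search())   # numbers of units chosen for each component version
```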

Keywords: reliability optimization, harmony search algorithm (HS), universal moment generating function (UMGF)

Procedia PDF Downloads 576
5055 Effect of Different Spacings on Growth, Yield and Fruit Quality of Peach in the Sub-Tropics of India

Authors: Harminder Singh, Rupinder Kaur

Abstract:

Peach is primarily a temperate fruit, but its low-chilling cultivars are grown quite successfully in the sub-tropical climate as well. The area under peach cultivation is expanding rapidly in the sub-tropics of northern India due to higher returns on a unit area basis and the availability of suitable peach cultivars and their production technology. Information on the use of different training systems for peach in the sub-tropics is inadequate. In this investigation, conducted at Punjab Agricultural University, Ludhiana (Punjab), India, trees of the Shan-i-Punjab peach were planted at four different spacings, i.e., 6.0 x 3.0 m, 6.0 x 2.5 m, 4.5 x 3.0 m and 4.5 x 2.5 m, and were trained to the central leader system. The total radiation interception and penetration in the upper and lower canopy parts were higher in the 6.0 x 3.0 m and 6.0 x 2.5 m plantings than at the other spacings. Average radiation interception was maximum in the upper part of the tree canopy, and it decreased significantly with canopy depth at all spacings. Trees planted at wider spacings produced more vegetative growth (tree height, tree girth, tree spread and canopy volume) and reproductive growth (flower bud density, number of fruits and fruit yield) per tree, but productivity was maximum in the closely planted trees. Fruits harvested from the wider-spaced trees were superior in fruit quality (size, weight, colour, TSS and acidity) and matured earlier than those harvested from closely spaced trees.

Keywords: quality, radiation, spacings, yield

Procedia PDF Downloads 188
5054 Urban Flood Risk Mapping: A Review

Authors: Sherly M. A., Subhankar Karmakar, Terence Chan, Christian Rau

Abstract:

Floods are one of the most frequent natural disasters, causing widespread devastation, economic damage and threat to human lives. Hydrologic impacts of climate change and intensification of urbanization are two root causes of increased flood occurrences, and recent research trends are oriented towards understanding these aspects. Due to rapid urbanization, population of cities across the world has increased exponentially leading to improperly planned developments. Climate change due to natural and anthropogenic activities on our environment has resulted in spatiotemporal changes in rainfall patterns. The combined effect of both aggravates the vulnerability of urban populations to floods. In this context, an efficient and effective flood risk management with its core component as flood risk mapping is essential in prevention and mitigation of flood disasters. Urban flood risk mapping involves zoning of an urban region based on its flood risk, which depicts the spatiotemporal pattern of frequency and severity of hazards, exposure to hazards, and degree of vulnerability of the population in terms of socio-economic, environmental and infrastructural aspects. Although vulnerability is a key component of risk, its assessment and mapping is often less advanced than hazard mapping and quantification. A synergic effort from technical experts and social scientists is vital for the effectiveness of flood risk management programs. Despite an increasing volume of quality research conducted on urban flood risk, a comprehensive multidisciplinary approach towards flood risk mapping still remains neglected due to which many of the input parameters and definitions of flood risk concepts are imprecise. Thus, the objectives of this review are to introduce and precisely define the relevant input parameters, concepts and terms in urban flood risk mapping, along with its methodology, current status and limitations. The review also aims at providing thought-provoking insights to potential future researchers and flood management professionals.

Keywords: flood risk, flood hazard, flood vulnerability, flood modeling, urban flooding, urban flood risk mapping

Procedia PDF Downloads 590
5053 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks

Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain

Abstract:

With the increasing demand for high-speed computation, power consumption, heat dissipation and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue that requires reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology and quantum computing. The bit-loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit-error issue requires fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs and number of gates, and a design of an online testable D flip-flop is proposed for the first time. We hope this work can be extended to build complex reversible sequential circuits.

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic

Procedia PDF Downloads 545
5052 A Transfer Function Representation of Thermo-Acoustic Dynamics for Combustors

Authors: Myunggon Yoon, Jung-Ho Moon

Abstract:

In this paper, we present a transfer function representation of a general one-dimensional combustor. The input of the transfer function is a heat rate perturbation of a burner and the output is a flow velocity perturbation at the burner. This paper considers a general combustor model composed of multiple cans with different cross sectional areas, along with a non-zero flow rate.

Keywords: combustor, dynamics, thermoacoustics, transfer function

Procedia PDF Downloads 381
5051 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out for arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques and contained different iron oxides and iron-silicate-bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions, slag dose and distribution coefficient were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when the batch experiments were conducted at initial metal concentrations of 5-10 mg/L, a slag dose of 10 g/L, a contact time of 10 hours, a shaking speed of 170 rpm and 25 °C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate the adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of the nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at this pH were under-saturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values. The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the potential of the adsorbent for the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be used about five times before disposal in a landfill or use as a stabilization material. It highlighted recycled slags as a potential reactive adsorbent in the field of remediation engineering and explored the benefits of using renewable waste products for the water treatment industry.

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 146
5050 Battery/Supercapacitor Emulator for Chargers Functionality Testing

Authors: S. Farag, A. Kuperman

Abstract:

In this paper, the design of a solid-state battery/supercapacitor emulator based on a dc-dc boost converter is described. The emulator mimics the charging behavior of any storage device according to a predefined behavior set by the user. The device is operated by a two-level control structure: a high-level emulating controller and a low-level input voltage controller. Simulation and experimental results are shown to demonstrate the emulator operation.

Keywords: battery, charger, energy, storage, super capacitor

Procedia PDF Downloads 400
5049 Synergistic Effect of Platelet-Rich Plasma with Hyaluronic Acid Injection Following Arthrocentesis to Reduce Pain and Improve Function in Temporomandibular Joint (TMJ) Osteoarthritis

Authors: Ayman Hegab

Abstract:

Increasing evidence supports the use of platelet-rich plasma (PRP) combined with hyaluronic acid (HA) for the treatment of knee osteoarthritis, where it effectively promotes cartilage repair. This study aimed to determine whether injection of PRP+HA following arthrocentesis reduces pain and improves maximum incisal opening. This was a single-blind, prospective, randomized controlled study. The patients were selected based on the Hegab classification: Group I, patients treated with arthrocentesis followed by a single PRP injection; Group II (control), patients treated with arthrocentesis followed by a single HA injection; and Group III, patients treated with arthrocentesis followed by a single PRP+HA combination injection. The primary predictor variable was the medication used for injection. The primary outcome variables were the maximum voluntary mouth opening and pain index scores; the secondary outcome variable was joint sounds. All outcome variables were assessed and compared among the three groups at baseline and at 1-, 3-, 6-, and 12-month intervals. Other variables, including patients’ age and sex, were evaluated in relation to the patient outcomes. Injecting PRP+HA showed statistically significant improvement in the primary and secondary treatment outcomes over PRP or HA injection alone throughout the study period (P<0.005). Injection of PRP+HA following arthrocentesis had significant long-term clinical efficacy regarding pain relief, which is the main concern of both the patient and the clinician.

Keywords: TMJ, HA, PRP, osteoarthritis

Procedia PDF Downloads 9
5048 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three neural networks, one for number segmentation, one for number detection and one for number recognition, all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels, and the segmentation network was trained to move in those directions which had dark pixels. To this end, its sixteen outputs were arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south” and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered black pixels. During testing the network scans for the first dark pixel; from there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed to the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground-truth bounds of a number were known during training, the detection network was trained to output “number not found” until the bounds were met, and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of the numbers 0 to 9. This network was activated only when the detection network voted in favor of a detected number. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters.
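
As a rough sketch of the kind of recognition network described above (the layer sizes below are illustrative assumptions, not the authors' architecture), a minimal PyTorch CNN with a 28x28 greyscale input and 10 class outputs could look like this; the detection head would be identical apart from a 2-way output, and the segmentation head would expose 16 outputs for the go / don't-go decisions over the eight neighborhoods.

```python
import torch
import torch.nn as nn

class DigitRecognizer(nn.Module):
    """Minimal CNN with a 28x28 greyscale input and 10 class outputs
    (layer sizes are illustrative, not the authors' architecture)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

# The detection head would be identical apart from a 2-way output
# ("number detected" / "not detected"); the segmentation head would expose
# 16 outputs for the go / don't-go decisions over the eight neighbourhoods.
model = DigitRecognizer()
print(model(torch.randn(1, 1, 28, 28)).shape)   # torch.Size([1, 10])
```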

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 161
5047 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology used involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. In addition, the results of this study suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
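
A minimal sketch of how the agreement metrics quoted above can be computed from paired observed and predicted ET₀ series is given below; the sample values are made up for illustration, and the two-sample form of the Kolmogorov-Smirnov comparison is an assumption about how the study paired its data.

```python
import numpy as np
from scipy import stats

def evaluate_et0(observed, predicted):
    """Agreement metrics for paired daily ET0 values (mm/day)."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = predicted - observed
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)
    ks_stat, _ = stats.ks_2samp(observed, predicted)   # two-sample K-S statistic
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "KS": ks_stat}

# Hypothetical observed vs. predicted ET0 series
obs = [3.1, 4.2, 5.0, 2.8, 3.9, 4.6]
pred = [3.0, 4.3, 4.9, 2.9, 4.0, 4.5]
print(evaluate_et0(obs, pred))
```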

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 49
5046 Studies on Irrigation and Nutrient Interactions in Sweet Orange (Citrus sinensis Osbeck)

Authors: S. M. Jogdand, D. D. Jagtap, N. R. Dalal

Abstract:

Sweet orange (Citrus sinensis Osbeck) is one of the most important commercially cultivated fruit crops in India, ranking second among citrus fruits after mandarin. Irrigation and fertigation are of vital importance in sweet orange orchards and are considered the most critical cultural operations. The soil acts as the reservoir of water and applied nutrients, and the interaction between irrigation and fertigation determines the ultimate quality and production of fruits. The increasing cost of fertilizers and the scarcity of irrigation water force farmers to use irrigation and nutrients optimally. The experiment was conducted with the objective of studying irrigation and nutrient interactions in sweet orange in order to optimize the use of both factors, and was carried out in medium to deep soil. The irrigation level I3, drip irrigation at 90% ER (effective rainfall), and the fertigation level F3, 80% RDF (recommended dose of fertilizer), recorded the significantly highest plant height, plant spread, canopy volume, number of fruits, weight of fruit, and fruit yield in kg/plant and t/ha, followed by F2, fertigation with 70% RDF. The interaction effect of irrigation and fertigation on growth was also significant, and the maximum plant height, E-W spread, N-S spread, canopy volume, number of fruits, weight of fruit and yield in kg/plant and t/ha were recorded in T9, i.e., I3F3 (drip irrigation at 90% ER and fertigation with 80% RDF), followed by I3F2 (drip irrigation at 90% ER and fertigation with 70% RDF).

Keywords: sweet orange, fertigation, irrigation, interactions

Procedia PDF Downloads 180
5045 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application to Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively by using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
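
To make the binomial thinning operator and the induced cross-correlation concrete, the sketch below simulates a bivariate INARMA(1,1)-style recursion with Poisson innovations that share a common component; the coefficients and the exact recursion are illustrative assumptions and may differ from the authors' BINARMA(1,1) specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(alpha, x):
    """Binomial thinning: alpha ∘ x is the sum of x independent Bernoulli(alpha) draws."""
    return rng.binomial(x, alpha)

def simulate_bivariate_inarma(n=500, a=(0.4, 0.3), b=(0.2, 0.25), lam=(2.0, 1.5), lam0=0.8):
    """Bivariate INARMA(1,1)-style recursion with cross-correlated Poisson
    innovations built from a shared component W0 (illustrative assumptions only)."""
    x = np.zeros((n, 2), dtype=int)
    r_prev = np.zeros(2, dtype=int)
    for t in range(1, n):
        w0 = rng.poisson(lam0)                                  # shared innovation term
        r = np.array([rng.poisson(lam[0]) + w0, rng.poisson(lam[1]) + w0])
        for j in range(2):
            x[t, j] = thin(a[j], x[t - 1, j]) + r[j] + thin(b[j], r_prev[j])
        r_prev = r
    return x

series = simulate_bivariate_inarma()
print(np.corrcoef(series[:, 0], series[:, 1])[0, 1])            # induced cross-correlation
```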

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 129
5044 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis

Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork

Abstract:

The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB), and for evaluation in shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring success in treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of levofloxacin (LFX) and moxifloxacin (MXF) and assess the percent probability of target attainment (PTA) as defined by the ratio of the area under the plasma concentration-time curve over 24-h (AUC0-24) and the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC) in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course and the drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MIC of the patients' pretreatment clinical isolates was determined. PopPK and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention was also investigated. LFX and MXF both fit in a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
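
The probability of target attainment used above can be illustrated with a simple Monte Carlo calculation: draw AUC0-24 values from a between-subject distribution and count how often AUC0-24/MIC reaches the target ratio. Every number in the sketch below (distribution parameters, MIC, target ratio) is a placeholder, not a study estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_target_attainment(auc_geo_mean, auc_cv, mic, target_ratio, n=10000):
    """Fraction of simulated subjects whose AUC0-24/MIC reaches the target ratio.
    AUCs are drawn from a log-normal between-subject distribution; all inputs
    below are illustrative placeholders."""
    sigma = np.sqrt(np.log(1.0 + auc_cv ** 2))
    auc = rng.lognormal(mean=np.log(auc_geo_mean), sigma=sigma, size=n)
    return float(np.mean(auc / mic >= target_ratio))

# e.g. a levofloxacin-like scenario at the critical MIC of 0.5 mg/L
print(prob_target_attainment(auc_geo_mean=80.0, auc_cv=0.35, mic=0.5, target_ratio=146.0))
```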

Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, ethiopia

Procedia PDF Downloads 120
5043 Relationship between the Response of Resistive Sensors to Chosen Volatile Organic Compounds (VOCs) and Their Concentration

Authors: Marek Gancarz, Agnieszka Nawrocka, Robert Rusinek, Marcin Tadla

Abstract:

Volatile organic compounds (VOCs) are fungal metabolites in gaseous form produced during improper storage of agricultural commodities (e.g. grain, food). Spoilt commodities produce a wide range of VOCs including alcohols, esters, aldehydes, ketones, alkanes, alkenes, furans, phenols, etc. The characteristic VOCs and odours can be determined by using an electronic nose (e-Nose), which contains a matrix of different kinds of sensors, e.g. resistive sensors. The aim of the present study was to determine the relationship between the response of the resistive sensors to the chosen volatiles and their concentration. Based on the literature, volatiles characteristic of cereals were chosen: ethanol, 3-methyl-1-butanol and hexanal. Analysis of the sensor signals shows that the signal shape differs for different substances. Moreover, each VOC signal gives information about the maximum of the normalized sensor response (R/Rmax), an impregnation time (tIM) and a cleaning time at half maximum of R/Rmax (tCL). These three parameters can be regarded as a ‘VOC fingerprint’. Seven resistive sensors (TGS2600-B00, TGS2602-B00, TGS2610-C00, TGS2611-C00, TGS2611-E00, TGS2612-D00, TGS2620-C00) produced by Figaro USA Inc., and one (AS-MLV-P2) produced by AMS AG, Austria, were used. Two of the seven Figaro sensors (TGS2611-E00, TGS2612-D00) did not react to the chosen VOCs. The most responsive sensor was AS-MLV-P2. The research was supported by the National Centre for Research and Development (NCBR), Grant No. PBS2/A8/22/2013.
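
A simple way to pull the three 'VOC fingerprint' parameters out of a sampled response curve is sketched below; the definitions (peak of the baseline-normalised response, rise time to the peak, and decay time back to half of the peak) are a simplified reading of the abstract rather than the authors' exact procedure, and the synthetic signal is only for illustration.

```python
import numpy as np

def voc_fingerprint(t, r):
    """Return (peak of the baseline-normalised response, impregnation time tIM,
    cleaning time tCL back to half of the peak). Normalising to the baseline
    value r[0] is an assumption made for this sketch."""
    t, r = np.asarray(t, float), np.asarray(r, float)
    r_norm = r / r[0]
    i_peak = int(np.argmax(r_norm))
    peak = r_norm[i_peak]
    t_im = t[i_peak] - t[0]                                  # rise time to the maximum
    half_level = 1.0 + 0.5 * (peak - 1.0)                    # halfway back towards baseline
    after = np.where(r_norm[i_peak:] <= half_level)[0]
    t_cl = t[i_peak + after[0]] - t[i_peak] if after.size else float("nan")
    return peak, t_im, t_cl

time = np.linspace(0, 120, 241)                              # seconds
signal = 1.0 + 2.0 * np.exp(-((time - 40.0) / 20.0) ** 2)    # synthetic response curve
print(voc_fingerprint(time, signal))
```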

Keywords: agricultural commodities, organic compounds, resistive sensors, volatile

Procedia PDF Downloads 369
5042 Modeling and Optimization of Sinker Electric Discharge Machining Process Parameters on AISI 4140 Alloy Steel by the Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique is used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design-of-experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors such as pulse-on time, gap voltage, flushing pressure, input current and duty cycle on the material removal and surface roughness is carried out using a central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models by using the Student's t-test, and the F-test is applied for goodness of fit. The CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process.
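
The second-order polynomial (response surface) fit described above can be sketched in a few lines; the two-factor design points and MRR values below are made-up placeholders used only to show the mechanics of fitting the quadratic model and searching it for the setting with the highest predicted response.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Illustrative two-factor slice of a central composite design: peak current Ip (A)
# and pulse-on time Ton (us). MRR values are made-up placeholders, not study data.
X = np.array([[4, 50], [4, 200], [12, 50], [12, 200], [8, 125],
              [2.3, 125], [13.7, 125], [8, 19], [8, 231], [8, 125]])
y = np.array([2.1, 3.0, 4.8, 7.9, 5.2, 1.8, 6.5, 3.4, 6.1, 5.3])   # MRR, mm^3/min

quad = PolynomialFeatures(degree=2)          # terms: 1, x1, x2, x1^2, x1*x2, x2^2
model = LinearRegression().fit(quad.fit_transform(X), y)

# Search the fitted surface on a grid and report the setting with the highest
# predicted material removal rate.
grid = np.array([[ip, ton] for ip in np.linspace(2.3, 13.7, 25)
                           for ton in np.linspace(19, 231, 25)])
pred = model.predict(quad.transform(grid))
print(grid[np.argmax(pred)], pred.max())
```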

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 341
5041 Changes in Some Morphological Characters of Dill Under Cadmium Stress

Authors: A. M. Daneshian Moghaddam, A. H. Hosseinzadeh, A. Bandehagh

Abstract:

To investigate the effect of cadmium heavy metal stress on five ecotypes of dill, this experiment was conducted in the greenhouse of Tabriz University and the laboratories of Shabestar Islamic Azad University with three replications. After the plants were established, cadmium treatments (concentrations of 0, 300 and 600 µmol) were applied. The essential oil of the samples was measured by hydro-distillation using a Clevenger apparatus. The variables used in this study include fresh and dry weight of roots and aerial parts, plant height, stem diameter, and root length. The results showed that the different concentrations of the heavy metal had a statistically significant effect (p < 0.01) on fresh weight, dry weight, plant height and root length, but no significant effect on essential oil percentage and root length. The dill ecotypes differed significantly in essential oil percentage, fresh plant weight, plant height and root length, but not in plant dry weight. The interaction between Cd concentration and dill ecotype had no significant effect on any trait except root length. The maximum fresh weight (4.98 g) and the minimum (3.13 g) were obtained in the control and at the 600 ppm Cd concentration, respectively. The highest fresh weight among ecotypes (4.78 g) was obtained in the Birjand ecotype. The maximum plant dry weight (1.2 g) was obtained in the control. The highest plant height (32.54 cm) was obtained in the control, and plant height was significantly reduced as the applied cadmium concentration increased from zero to 300 and 600 ppm.

Keywords: pollution, essential oil, ecotype, dill, heavy metals, cadmium

Procedia PDF Downloads 428
5040 Examining the Presence of Heterotrophic Aerobic Bacteria (HAB) and Sulphate Reducing Bacteria (SRB) in Some Types of Water from the City of Tripoli, Libya

Authors: Abdulsalam. I. Rafida, Marwa. F. Elalem, Hasna. E. Alemam

Abstract:

This study aimed at testing various types of water in some areas of the city of Tripoli, Libya for the presence of heterotrophic aerobic bacteria (HAB) and anaerobic sulphate reducing bacteria (SRB). The water samples under investigation included rainwater accumulating on the ground, sewage water (from the city sewage treatment station and sulphate water from natural therapy swimming sites), and sea water (i.e. sea water exposed to pollution by untreated sewage water, and unpolluted sea water from specific locations). A total of 20 samples were collected, distributed as follows: rain water (8 samples), sewage water (6 samples), and sea water (6 samples). An up-to-date estimation method was used, based on ready-made solutions (the BART™ test for HAB and the BART™ test for SRB). With the exception of one rain water sample, the results indicated that the target bacteria were present in all samples. Regarding HAB, the samples showed a maximum average of 7.0 x 10⁶ cfu/ml for sewage and rain water and a minimum average of 1.8 x 10⁴ cfu/ml for unpolluted sea water collected from a specific location. As for SRB, a maximum average of 7.0 x 10⁵ cfu/ml was found for sewage and rain water and a minimum average of 1.8 x 10⁴ cfu/ml for sewage and sea water. The above results highlight the relationship between pollution and the presence of bacteria in water, particularly water collected from specific locations, and also the presence of bacteria resulting from the use of water, provided that a suitable environment exists for their growth.

Keywords: heterotrophic aerobic bacteria (HAB), sulphate reducing bacteria (SRB), water, environmental sciences

Procedia PDF Downloads 491
5039 Optimal 3D Deployment and Path Planning of Multiple UAVs for Maximum Coverage and Autonomy

Authors: Indu Chandran, Shubham Sharma, Rohan Mehta, Vipin Kizheppatt

Abstract:

Unmanned aerial vehicles are increasingly being explored as the most promising solution for disaster monitoring, assessment, and recovery. Current relief operations heavily rely on intelligent robot swarms to capture the damage caused, provide timely rescue, and create road maps for the victims. To perform these time-critical missions, efficient path planning that ensures quick coverage of the area is vital. This study aims to develop a technically balanced approach that provides maximum coverage of the affected area in minimum time using the optimal number of UAVs. A coverage trajectory is designed through area decomposition and task assignment. To perform an efficient and autonomous coverage mission, a solution to a TSP-based optimization problem using meta-heuristic approaches is designed to allocate waypoints to UAVs of different flight capacities. The study exploits multi-agent simulations such as PX4-SITL and QGroundControl through the ROS framework and visualizes the dynamics of UAV deployment on different search paths in a 3D Gazebo environment. Through detailed theoretical analysis and simulation tests, we illustrate the optimality and efficiency of the proposed methodologies.
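
As a minimal baseline for the TSP-style waypoint allocation mentioned above, the sketch below orders a handful of 3D coverage waypoints with a greedy nearest-neighbour heuristic; the coordinates are placeholders, and the study itself relies on meta-heuristic solvers rather than this simple rule.

```python
import numpy as np

def nearest_neighbour_tour(waypoints, start=0):
    """Greedy nearest-neighbour ordering of coverage waypoints, a simple baseline
    for the TSP-style allocation (the study uses meta-heuristic solvers)."""
    pts = np.asarray(waypoints, dtype=float)
    unvisited = set(range(len(pts))) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - pts[current]))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

# Placeholder 3D waypoints (x, y, altitude in metres) from a decomposed search area
waypoints = [(0, 0, 30), (100, 0, 30), (100, 80, 30), (0, 80, 30), (50, 40, 50)]
print(nearest_neighbour_tour(waypoints))   # visiting order of the waypoint indices
```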

Keywords: area coverage, coverage path planning, heuristic algorithm, mission monitoring, optimization, task assignment, unmanned aerial vehicles

Procedia PDF Downloads 215
5038 A Research on Determining the Viability of a Job Board Website for Refugees in Kenya

Authors: Prince Mugoya, Collins Oduor Ondiek, Patrick Kanyi Wamuyu

Abstract:

Refugee Job Board Website is a web-based application that provides a platform for organizations to post jobs specifically for refugees. Organizations upload job opportunities, and refugees can view them on the website. The website also allows refugees to input their skills and qualifications. The methodology used to develop this system is the waterfall (traditional) methodology. Software development tools include Brackets, which will be used to code the website, and phpMyAdmin to manage the database in which all the data are stored.

Keywords: information technology, refugee, skills, utilization, economy, jobs

Procedia PDF Downloads 165
5037 An Assessment of Bathymetric Changes in the Lower Usuma Reservoir, Abuja, Nigeria

Authors: Rayleigh Dada Abu, Halilu Ahmad Shaba

Abstract:

Siltation is a serious problem that affects public water supply infrastructure such as dams and reservoirs. It is a major problem which threatens the performance and sustainability of dams and reservoirs: it reduces the capacity for flood control and potable water supply, changes the water stage, and reduces water quality and recreational benefits. The focus of this study is the Lower Usuma reservoir. At completion the reservoir had a gross storage capacity of 100 × 10⁶ m³ (100 million cubic metres), a maximum operational level of 587.440 m a.s.l., a maximum depth of 49 m and a catchment area of 241 km² at the dam site, with a designed production capacity of 10,000 cubic metres per hour. The reservoir is 1,300 m long and feeds the treatment plant mainly by gravity. The reservoir became operational in 1986, and no survey has been conducted since to determine its current storage capacity and rate of siltation. A hydrographic survey of the reservoir by the integrated acoustic echo-sounding technique was conducted in November 2012 to determine the level and rate of siltation. The results obtained show that the reservoir has lost 12.0 metres of depth to siltation in 26 years of operation, indicating a 24.5% loss of the installed storage capacity. The present bathymetric survey provides baseline information for future work on siltation depth and annual rates of storage capacity loss for the Lower Usuma reservoir.

Keywords: sedimentation, lower Usuma reservoir, acoustic echo sounder, bathymetric survey

Procedia PDF Downloads 515
5036 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of the Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the Central Laboratory of the Paris Police Prefecture (LCPP) postulates that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and the evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, collected in the city of Hamburg and on its wind tunnel mock-up, has been used. We have performed a comparison of FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python codes to automatically convert building and topographic data to the FDS input file format. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS demonstrates good agreement with the measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE are within the acceptable tolerances.
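
The three validation metrics named above have standard definitions, sketched below for paired observed and predicted concentrations; the sample values are invented for illustration, and the COST ES1006 protocol may add pairing or thresholding rules that are not reproduced here.

```python
import numpy as np

def dispersion_metrics(obs, pred):
    """FB, NMSE and FAC2 with their usual definitions (the COST ES1006 protocol
    may add pairing and thresholding rules that are not reproduced here)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

# Hypothetical normalised concentrations at a few measurement points
observed = [1.2, 0.8, 0.35, 2.1, 0.6]
modelled = [1.0, 0.9, 0.50, 1.7, 0.4]
print(dispersion_metrics(observed, modelled))
```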

Keywords: numerical simulations, atmospheric dispersion, cost ES1006 action, CFD model, cute experiments, wind tunnel data, numerical results

Procedia PDF Downloads 133
5035 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays

Authors: E. Velasco

Abstract:

Meeting high standards of writing in a Second Language (L2) is extremely important for many students who wish to undertake studies at universities in both English and non-English speaking countries. University lecturers in English speaking countries continue to express dissatisfaction with the apparent poor quality of essay writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can perhaps lead to recommendations to support their quest for improving argumentative strategies. Employing Corpus Linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good quality tertiary-level essays and published papers, and these gaps are linked to prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts, when compared to skilled L1-English and L2-English writers, so differences in frequencies of adjuncts have little to do with the writers’ L1, and differences are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that overuse is not the only way of giving cohesion to argumentative essays and there are other alternatives to cohesion (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays, and to other people’s work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on use of punctuation, functions and/or syntax with specific conjunctive adjuncts such as for example, for that reason, although, despite and nevertheless.

Keywords: argumentative essays, colombian english as a second language (esl) learners, conjunctive adjuncts, corpus linguistics

Procedia PDF Downloads 85
5034 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning

Authors: Leigh Ann Wilson, Melanie Borrego

Abstract:

The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of their work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year for each applicable signature assignment. The commitment for calibration teams consists of attending two virtual meetings lasting up to three hours in duration. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.

Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments

Procedia PDF Downloads 280
5033 A Comprehensive Analysis of the Phylogenetic Signal in Ramp Sequences in 211 Vertebrates

Authors: Lauren M. McKinnon, Justin B. Miller, Michael F. Whiting, John S. K. Kauwe, Perry G. Ridge

Abstract:

Background: Ramp sequences increase translational speed and accuracy when rare, slowly-translated codons are found at the beginnings of genes. Here, the results of the first analysis of ramp sequences in a phylogenetic construct are presented. Methods: Ramp sequences were compared from 211 vertebrates (110 Mammalian and 101 non-mammalian). The presence and absence of ramp sequences were analyzed as a binary character in a parsimony and maximum likelihood framework. Additionally, ramp sequences were mapped to the Open Tree of Life taxonomy to determine the number of parallelisms and reversals that occurred, and these results were compared to what would be expected due to random chance. Lastly, aligned nucleotides in ramp sequences were compared to the rest of the sequence in order to examine possible differences in phylogenetic signal between these regions of the gene. Results: Parsimony and maximum likelihood analyses of the presence/absence of ramp sequences recovered phylogenies that are highly congruent with established phylogenies. Additionally, the retention index of ramp sequences is significantly higher than would be expected due to random chance (p-value = 0). A chi-square analysis of completely orthologous ramp sequences resulted in a p-value of approximately zero as compared to random chance. Discussion: Ramp sequences recover comparable phylogenies as other phylogenomic methods. Although not all ramp sequences appear to have a phylogenetic signal, more ramp sequences track speciation than expected by random chance. Therefore, ramp sequences may be used in conjunction with other phylogenomic approaches.

Keywords: codon usage bias, phylogenetics, phylogenomics, ramp sequence

Procedia PDF Downloads 162
5032 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has been an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of the expansion joints according to various bridge design codes is rather inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; this model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels; in the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
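
For readers unfamiliar with how fragility curves are extracted from time-history results, the sketch below fits a lognormal fragility function P(damage | PGA) = Phi(ln(PGA/theta)/beta) to binary exceedance outcomes by maximum likelihood; the damage data are synthetic placeholders, not results from the bridge models above.

```python
import numpy as np
from scipy import stats, optimize

def fit_fragility(pga, damaged):
    """Fit a lognormal fragility curve P(damage | PGA) = Phi(ln(pga/theta)/beta)
    to binary damage outcomes by maximum likelihood."""
    pga = np.asarray(pga, float)
    damaged = np.asarray(damaged, int)

    def neg_loglik(params):
        theta, beta = params
        p = stats.norm.cdf(np.log(pga / theta) / beta)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

    result = optimize.minimize(neg_loglik, x0=[0.4, 0.5],
                               bounds=[(0.05, 2.0), (0.1, 1.5)])
    return result.x                     # median capacity theta (g) and dispersion beta

# Synthetic placeholder data: 4 analyses per PGA level from 0.1 g to 1.0 g
rng = np.random.default_rng(2)
pga_levels = np.repeat(np.arange(0.1, 1.05, 0.05), 4)
damaged = (pga_levels + 0.15 * rng.standard_normal(pga_levels.size)) > 0.5
print(fit_fragility(pga_levels, damaged))
```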

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 435
5031 Design and Optimization of Sustainable Buildings by Combined Cooling, Heating and Power System (CCHP) Based on Exergy Analysis

Authors: Saeed Karimi, Ali Behbahaninia

Abstract:

In this study, the design and optimization of a combined cooling, heating and power (CCHP) system for a sustainable building are dealt with. Sustainable buildings are environmentally responsible and help save energy while reducing waste, pollution and environmental degradation. CCHP systems are widely used to save energy sources: electricity, cooling and heating are generated using just one primary energy source. Selecting the size of the components based on the maximum demand of the users would lead to an increase in the total cost of energy and equipment for the building complex. For this purpose, a system was designed in which the prime mover (gas turbine), heat recovery boiler and absorption chiller are sized below the maximum demand. The shortfall in the months with peak consumption is supplied with the help of an electrical absorption chiller and an auxiliary boiler (and the national electricity network). In this study, the optimum capacities of each piece of equipment are determined by a thermoeconomic method, such that the annual capital cost and energy consumption are the lowest. The design was done for a gas turbine prime mover, and finally, the optimum designs were investigated using exergy analysis and compared with a traditional energy supply system.

Keywords: sustainable building, CCHP, energy optimization, gas turbine, exergy, thermo-economic

Procedia PDF Downloads 93
5030 Impact of Climate on Productivity of Major Cereal Crops in Sokoto State, Nigeria

Authors: M. B. Sokoto, L. Tanko, Y. M. Abdullahi

Abstract:

The study aimed at examining the impact of climatic factors (rainfall, minimum and maximum temperature) on the productivity of major cereals in Sokoto State, Nigeria. Secondary data from 1997-2008 were used for the annual yields of the major cereal crops (maize, millet, rice and sorghum, t ha⁻¹). Climate data were collected from the Sokoto Energy Research Centre (SERC) for the period under review. The data collected were analyzed using descriptive statistics, correlation and regression analysis. The results of the research reveal variation in the trend of the climatic factors and also variation in cereal output. Average temperature has a negative effect on crop yields. Similarly, rainfall is not significant in explaining the effect of climate on cereal crop production. The study has revealed to some extent the effect of climatic variables, such as rainfall, relative humidity, and maximum and minimum temperature, on major cereal production in Sokoto State, which will assist in planning ahead for cereal production in the area. Other factors, such as soil fertility, correct timing of planting and good cultural practices (such as spacing of stands), protection of crops from weeds, pests and diseases, and planting of high-yielding varieties should also be taken into consideration for increased cereal yields.

Keywords: cereals, climate, impact, major, productivity

Procedia PDF Downloads 390