Search results for: conventional computing
3520 Improvement in Drying Characteristics of Raisin by Carbonic Maceration – Process Optimization
Authors: Nursac Akyol, Merve S. Turan, Mustafa Ozcelik, Erdogan Kucukoner, Erkan Karacabey
Abstract:
Traditional raisin production is a long drying process under sunlight. During this procedure, grapes are exposed to various environmental effects in addition to the adverse effects of the long drying period. Thus, there is a need to develop an alternative method applicable in place of the traditional one. To this end, a combination of a potential pretreatment (carbonic maceration, CM) with conventional oven drying was examined. CM was used in raisin production (grape drying) as a pretreatment process before oven drying. Pressure, temperature and time were examined as application parameters of CM. In conventional oven drying, the temperature is a process variable. The aim was to find out how the CM and conventional drying processes affect the drying characteristics of grapes as well as their physical and chemical properties. For this purpose, the response surface method was used to determine both the effects of the variables and the optimum pretreatment and drying conditions. The optimum CM conditions for raisin production were a pressure of 0.3 MPa, an application temperature of 4°C and an application time of 8 hours. The optimized drying temperature was 77°C. The results showed that applying CM before the drying process improved the drying characteristics. Drying took only 389 minutes for grapes pretreated by CM under optimum conditions, versus 495 minutes for the control group dried only by the conventional drying process. According to these results, a 21% decrease was achieved in the time required for raisin production. Also, the samples dried under optimum conditions had physical properties similar to those of the control group. Raisins dried under optimum conditions were in better condition in terms of some of the bioactive contents compared to the control groups. In light of all results, CM has important potential in the industrial drying of grape samples. The current study was financially supported by TUBITAK, Turkey (Project no: 116R038).
Keywords: drying time, pretreatment, response surface methodology, total phenolic
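As an illustration of the response-surface step, the sketch below fits a reduced quadratic model of drying time to the three CM variables and minimizes it within the experimental region; all design points and drying times are hypothetical stand-ins, not the study's data.

```python
# Minimal response-surface sketch, assuming a reduced quadratic model of
# drying time vs CM pressure (MPa), temperature (C) and time (h).
# All numbers are hypothetical, not the study's measurements.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.1, 4, 4], [0.1, 20, 12], [0.3, 4, 8], [0.3, 12, 8],
              [0.3, 20, 4], [0.5, 4, 12], [0.5, 12, 4], [0.5, 20, 8]])
y = np.array([470., 455., 392., 410., 445., 430., 452., 440.])  # minutes

def features(x):
    p, t, h = x
    return np.array([1, p, t, h, p * p, t * t, h * h])  # linear + squared terms

A = np.array([features(row) for row in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)            # least-squares fit

res = minimize(lambda x: features(x) @ beta, x0=[0.3, 12, 8],
               bounds=[(0.1, 0.5), (4, 20), (4, 12)])
print("estimated optimum (pressure, temperature, time):", res.x.round(2))
```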
Procedia PDF Downloads 138
3519 A Study of the Implications for the Health and Wellbeing of Energy-Efficient House Occupants: A UK-Based Investigation of Indoor Climate and Indoor Air Quality
Authors: Patricia Kermeci
Abstract:
Policies aimed at reducing both carbon dioxide emissions and energy consumption within the residential sector have contributed to a growing number of energy-efficient houses being built in several countries. Many of these energy-efficient houses rely on very well insulated and highly airtight structures, ventilated mechanically. Although energy-efficient houses are indeed more energy efficient than conventional houses, concerns have been raised over the quality of their indoor air and, consequently, the possible adverse health and wellbeing effects for their occupants. Using a longitudinal study design over three weather seasons (winter, spring and summer), this study investigated the indoor climate and indoor air quality of different rooms (bedroom, living room and kitchen) in five energy-efficient houses and four conventional houses in the UK. Occupants kept diaries of their activities during the studied periods, and interviews were conducted to investigate possible behavioural explanations for the findings. The data were compared with reviews of epidemiological, toxicological and other health-related published literature, revealing four main findings. First, the indoor environment quality of energy-efficient houses cannot be treated as a holistic entity, as different rooms presented dissimilar indoor climate and indoor air quality; such differences might contribute to the health and wellbeing of occupants in different ways. Second, the indoor environment quality of energy-efficient houses can vary with weather season, leaving occupants at a lower or higher risk of adverse health and wellbeing effects during different seasons. Third, one cannot assume that even identical energy-efficient houses provide a similar indoor environment quality. Fourth, the practices and behaviours of the occupants of energy-efficient houses likely determine whether they enjoy a healthier indoor environment when compared with their control houses. In conclusion, it is vital to understand occupants' practices and behaviours in order to explain the ways they might contribute to the indoor climate and indoor air quality in energy-efficient houses.
Keywords: energy-efficient house, health and wellbeing, indoor environment, indoor air quality
Procedia PDF Downloads 230
3518 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
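As background for the recovery-threshold notion used above, the sketch below implements the classic polynomial-coded matrix multiplication scheme (not the paper's PSGPD construction): with X split into m row blocks and Y into n column blocks, any m·n worker results suffice to decode W.

```python
# Hedged sketch of polynomial-coded matrix multiplication, illustrating the
# recovery-threshold idea; this is the well-known polynomial-code scheme,
# NOT the paper's private/secure PSGPD construction.
import numpy as np

m, n, workers = 2, 2, 6          # X -> m row blocks, Y -> n column blocks
X = np.random.randn(4, 4)
Y = np.random.randn(4, 4)
Xb = np.split(X, m, axis=0)      # row blocks of X
Yb = np.split(Y, n, axis=1)      # column blocks of Y
pts = np.arange(1, workers + 1, dtype=float)   # evaluation point per worker

# Encode: Xe(a) = sum_j Xb[j] a^j, Ye(a) = sum_k Yb[k] a^(k*m)
enc_X = [sum(Xb[j] * a**j for j in range(m)) for a in pts]
enc_Y = [sum(Yb[k] * a**(k * m) for k in range(n)) for a in pts]

# Each worker multiplies its coded blocks; any m*n results suffice
products = [ex @ ey for ex, ey in zip(enc_X, enc_Y)]
threshold = m * n                 # recovery threshold
ok = list(range(threshold))       # pretend the remaining workers straggle

# Decode: entry-wise polynomial interpolation of degree m*n - 1
V = np.vander(pts[ok], threshold, increasing=True)   # Vandermonde system
coeffs = np.linalg.solve(V, np.stack([products[i].ravel() for i in ok]))
blocks = coeffs.reshape(threshold, *products[0].shape)
W = np.block([[blocks[j + k * m] for k in range(n)] for j in range(m)])
assert np.allclose(W, X @ Y)
```

Here the recovery threshold is m·n = 4 of 6 workers, so the master can ignore up to two stragglers in this toy setup.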
Procedia PDF Downloads 122
3517 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study
Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar
Abstract:
Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. While intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, with the main criteria being class 3 obesity or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust's health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared with NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had data available for BMI or allowing BMI to be estimated. Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, with an average age of 60.4 years. Conclusions: Based on this Trust's experience, around 4% of obese patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefited more from bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients' written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis of outcomes following fundoplication in all patients to assess the incidence of recurrent disease, will be undertaken to strengthen these conclusions.
Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines
Procedia PDF Downloads 60
3516 Analysis of Process Methane Hydrate Formation That Includes the Important Role of Deep-Sea Sediments with Analogy in Kerek Formation, Sub-Basin Kendeng, Central Java, Indonesia
Authors: Yan Bachtiar Muslih, Hangga Wijaya, Trio Fani, Putri Agustin
Abstract:
Energy demand in Indonesia increases by 5-6% a year, while production of conventional energy decreases by 3-5% a year; this means that in 20-40 years conventional energy will not be able to meet all of Indonesia's energy demand. One possible solution is to use an unconventional energy source: gas hydrate. Gas hydrate forms by a biogenic process and is stable under conditions of extreme depth and low temperature. It can form in two settings, polar conditions and deep-sea conditions. This research focuses on gas hydrate associated with methane, forming methane hydrate under deep-sea conditions, usually at depths between 150 and 2000 m. The research examines the process of methane hydrate formation, i.e., the biogenic process, and the important role of deep-sea sediments that together produce accumulations of methane hydrate. Methane hydrate usually accumulates in fine sediment in the deep-sea environment under high pressure and low temperature; these conditions also typically cause methane hydrate to appear as white nodules. The methodology of this research consists of geological fieldwork and laboratory analysis. The fieldwork will yield 10-15 samples taken at random from Kerek Formation outcrops, to reconstruct the deep-sea environment conditions that influence methane hydrate formation, as well as measured stratigraphy of the Kerek Formation outcrops, which will help reconstruct processes in the deep-sea sediment such as energy flow and sediment supply. The laboratory analysis will then examine all data obtained from the fieldwork. The results of this research can be used in exploration for methane hydrate in other prospective deep-sea environments in Indonesia.
Keywords: methane hydrate, deep-sea sediment, Kerek Formation, Sub-Basin of Kendeng, Central Java, Indonesia
Procedia PDF Downloads 462
3515 Wheat Dihaploid and Somaclonal Lines Screening for Resistance to P. nodorum
Authors: Lidia Kowalska, Edward Arseniuk
Abstract:
Glume and leaf blotch is a disease of wheat caused by the necrotrophic fungus Parastagonospora nodorum. It is a serious pathogen in many wheat-growing areas throughout the world. Use of resistant cultivars is the most effective and economical means to control the disease. Plant breeders and pathologists have worked intensively to incorporate resistance to the pathogen into new cultivars. Conventional methods of breeding for resistance can be supported by biotechnological ones, i.e., somatic embryogenesis and androgenesis. Therefore, an effort was undertaken to compare genetic variation in P. nodorum resistance among winter wheat somaclones, dihaploids and conventional varieties. For this purpose, a population of 16 somaclonal and 4 dihaploid wheat lines from six crosses was used to assess resistance to P. nodorum under field conditions. Lines were grown in disease-free (fungicide-protected) and inoculated micro plots in 2 replications of a split-plot design in a single environment. The plant leaves were inoculated three times with a mixture of P. nodorum isolates. Spore concentrations were adjusted to 4 × 10⁶ viable spores per milliliter. Disease severity was rated on a scale where >90% denotes susceptible and <10% resistant. Disease ratings of plant leaves showed statistically significant differences among all lines tested. Higher resistance to P. nodorum was observed more often on leaves of somaclonal lines than on dihaploid ones. On average, disease severity reached 15% on leaves of somaclones and 30% on leaves of dihaploids. Some genotypes showed low leaf infection, e.g., dihaploid D-33 (disease severity 4%) and somaclone S-1 (disease severity 2%). The results from this study show that dihaploid and somaclonal variation might be successfully used as an additional source of wheat resistance to the pathogen and could be recommended for use in commercial breeding programs. The reported results demonstrate that biotechnological methods may be used effectively in breeding wheat for resistance to fungal necrotrophic pathogens.
Keywords: glume and leaf blotch, somaclonal variation, androgenic variation, wheat, resistance breeding
Procedia PDF Downloads 120
3514 Design of Cloud Service Brokerage System Intermediating Integrated Services in Multiple Cloud Environment
Authors: Dongjae Kang, Sokho Son, Jinmee Kim
Abstract:
Cloud service brokering is a new service paradigm that provides interoperability and portability of applications across multiple cloud providers. In this paper, we designed a cloud service brokerage system, 'any broker', supporting integrated service provisioning and SLA-based service life-cycle management. For the system design, we introduce the system concept and overall architecture, details of the main components, and use cases of primary operations in the system. These features ease the concerns of cloud service providers and customers, and support a new open cloud service market to increase cloud service profit and promote the cloud service ecosystem in the cloud computing area.
Keywords: cloud service brokerage, multiple clouds, integrated service provisioning, SLA, network service
Procedia PDF Downloads 488
3513 Comparative Study of Scheduling Algorithms for LTE Networks
Authors: Samia Dardouri, Ridha Bouallegue
Abstract:
Scheduling is the process of dynamically allocating physical resources to User Equipment (UE) based on scheduling algorithms implemented at the LTE base station. Various algorithms have been proposed by network researchers, as the implementation of the scheduling algorithm is left open in the Long Term Evolution (LTE) standard. This paper makes an attempt to study and compare the performance of the PF, MLWDF and EXP/PF scheduling algorithms. The evaluation considers a single cell with an interference scenario for different flows, such as best effort, video and VoIP, in pedestrian and vehicular environments, using the LTE-Sim network simulator. The comparative study is conducted in terms of system throughput, fairness index, delay, packet loss ratio (PLR) and total cell spectral efficiency.
Keywords: LTE, multimedia flows, scheduling algorithms, mobile computing
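To make the compared metrics concrete, below is a minimal sketch of the Proportional Fair (PF) rule, which assigns each resource block (RB) to the user maximizing instantaneous rate over average throughput; the rates and parameters are hypothetical, not LTE-Sim output.

```python
# Minimal sketch of the Proportional Fair (PF) scheduling metric.
# Per-RB rates are random placeholders, not simulator data.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_rbs, beta = 4, 6, 0.1          # beta: throughput averaging factor
avg_thr = np.ones(n_users)                 # running average throughput (bit/s)

for tti in range(100):                     # one scheduling decision per TTI
    inst_rate = rng.uniform(1e5, 1e6, size=(n_users, n_rbs))  # per-RB rates
    served = np.zeros(n_users)
    for rb in range(n_rbs):
        metric = inst_rate[:, rb] / avg_thr        # PF metric
        u = int(np.argmax(metric))                 # winning user for this RB
        served[u] += inst_rate[u, rb]
    # Exponential moving average of each user's throughput
    avg_thr = (1 - beta) * avg_thr + beta * served

print("long-run average throughputs:", avg_thr.round(0))
```

MLWDF and EXP/PF extend this same rate-over-throughput ratio with head-of-line delay terms, which is why they behave differently for the delay-sensitive video and VoIP flows in the study.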
Procedia PDF Downloads 383
3512 Development of Map of Gridded Basin Flash Flood Potential Index: GBFFPI Map of QuangNam, QuangNgai, DaNang, Hue Provinces
Authors: Le Xuan Cau
Abstract:
Flash floods occur over short rainfall intervals, from 1 hour to 12 hours, in small and medium basins. Flash floods typically have two characteristics: large water flow and high flow velocity. A flash flood occurs at a hill-valley site (a strip of low-lying terrain) in a catchment with a large enough contributing area, a steep basin slope, and heavy rainfall. The risk of flash floods is determined through the Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash Flood Potential Index (FFPI) is determined from a terrain slope flash flood index, a soil erosion flash flood index, a land cover flash flood index, a land use flash flood index, and a rainfall flash flood index. To determine GBFFPI, each cell in a map can be considered the outlet of a water accumulation basin. The GBFFPI of the cell is the basin-average value of FFPI over the corresponding water accumulation basin. Based on GIS, a tool was developed to compute GBFFPI using the ArcObjects SDK for .NET. The maps of GBFFPI are built in two types: GBFFPI including the rainfall flash flood index (real-time flash flood warning) or GBFFPI excluding the rainfall flash flood index. The GBFFPI tool can be used to identify high flash flood potential sites in a large region as quickly as possible. GBFFPI improves on the conventional FFPI: it takes into account the basin response (the interaction of cells) and identifies true flash flood sites (strips of low-lying terrain) more accurately, while the conventional FFPI considers each cell in isolation and does not account for the interaction between cells. The GBFFPI map of QuangNam, QuangNgai, DaNang, and Hue was built and exported to Google Earth. The resulting map demonstrates the scientific basis of GBFFPI.
Keywords: ArcObjects SDK for .NET, basin average value of FFPI, gridded basin flash flood potential index, GBFFPI map
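The basin-averaging step can be illustrated on a toy grid: every cell is treated as the outlet of its accumulation basin, and its GBFFPI is the mean FFPI of all cells draining to it. The FFPI values and the D8-style routing below are invented for illustration.

```python
# Toy sketch of GBFFPI = basin-average FFPI; grid and routing are made up.
import numpy as np

ffpi = np.array([[3., 5., 4.],
                 [2., 6., 5.],
                 [1., 4., 7.]])
# Simple D8-style routing: each cell drains into one neighbour;
# outlet cells drain into themselves.
down = {(0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 1),
        (1, 0): (2, 0), (1, 1): (2, 1), (1, 2): (2, 1),
        (2, 0): (2, 0), (2, 1): (2, 2), (2, 2): (2, 2)}

acc_sum = np.zeros_like(ffpi)   # summed FFPI over each cell's basin
acc_cnt = np.zeros_like(ffpi)   # number of contributing cells

for start in down:              # every cell contributes to all cells downstream
    cell = start
    while True:
        acc_sum[cell] += ffpi[start]
        acc_cnt[cell] += 1
        nxt = down[cell]
        if nxt == cell:
            break
        cell = nxt

gbffpi = acc_sum / acc_cnt      # basin-average FFPI per cell
print(gbffpi.round(2))
```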
Procedia PDF Downloads 381
3511 Biochemical Characterization of CTX-M-15 from Enterobacter cloacae and Designing a Novel Non-β-Lactam-β-Lactamase Inhibitor
Authors: Mohammad Faheem, M. Tabish Rehman, Mohd Danishuddin, Asad U. Khan
Abstract:
The worldwide dissemination of CTX-M type β-lactamases is a threat to human health. Previously, we reported the spread of the blaCTX-M-15 gene in different clinical strains of Enterobacteriaceae from the hospital settings of Aligarh in north India. In view of the varying resistance pattern against cephalosporins and other β-lactam antibiotics, we sought to understand the correlation between MICs and the catalytic activity of CTX-M-15. In this study, steady-state kinetic parameters and MICs were determined on E. coli DH5α transformed with the blaCTX-M-15 gene cloned from an Enterobacter cloacae (EC-15) strain of clinical background. The effect of conventional β-lactamase inhibitors (clavulanic acid, sulbactam and tazobactam) on CTX-M-15 was also studied. We found that tazobactam is the best of these inhibitors against CTX-M-15. The inhibition characteristics of tazobactam are defined by its very low IC50 value (6 nM), high affinity (Ki = 0.017 µM) and better acylation efficiency (k+2/K' = 0.44 µM⁻¹s⁻¹). It forms an acyl-enzyme covalent complex, which is quite stable (k+3 = 0.0057 s⁻¹). Since increasing resistance has been reported against conventional β-lactam antibiotic-inhibitor combinations, we aspire to design a β-lactamase inhibitor containing a non-β-lactam core. For this, we screened the ZINC database and performed molecular docking to identify a potential non-β-lactam based inhibitor (ZINC03787097). The MICs of cephalosporin antibiotics in combination with this inhibitor gave promising results. Steady-state kinetics and molecular docking studies showed that ZINC03787097 is a reversible inhibitor which binds non-covalently to the active site of the enzyme through hydrogen bonds and hydrophobic interactions. Though its IC50 (180 nM) is much higher than tazobactam's, it has good affinity for CTX-M-15 (Ki = 0.388 µM). This study concludes that the ZINC03787097 compound can be used as a seed molecule to design a more efficient non-β-lactam β-lactamase inhibitor that could evade pre-existing bacterial resistance mechanisms.
Keywords: ESBL, non-β-lactam β-lactamase inhibitor, bioinformatics, biomedicine
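To illustrate how an IC50 such as the 6 nM value reported for tazobactam is typically estimated, the sketch below fits a four-parameter logistic curve to hypothetical residual-activity data (not the study's measurements).

```python
# Hedged sketch of IC50 estimation by a four-parameter logistic fit.
# Concentration/activity values are hypothetical, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    # Residual enzyme activity (%) as a function of inhibitor concentration
    return bottom + (top - bottom) / (1 + (conc / ic50) ** hill)

conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])        # nM, hypothetical
activity = np.array([97, 92, 80, 55, 38, 22, 9, 4.0])     # % residual, hypothetical

popt, _ = curve_fit(logistic4, conc, activity, p0=[0, 100, 6, 1])
print(f"fitted IC50 = {popt[2]:.1f} nM")
```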
Procedia PDF Downloads 238
3510 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics, derived from petrochemical refining and manufacturing, in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which is the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to increase composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging technology (HSI) coupled with machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE) and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly raise recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
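As a minimal illustration of the per-pixel classification step, the sketch below trains a logistic-regression model on synthetic spectra standing in for the real HSI dataset; the band count and the class signatures are invented for this example.

```python
# Hedged sketch of spectral classification with logistic regression.
# Synthetic spectra stand in for the real hyperspectral dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
bands, per_class = 224, 200
classes = ["PET", "PP", "LDPE", "HDPE", "PBAT", "PLA", "PHA"]

# Each polymer gets a smooth random "signature"; pixels are noisy copies
signatures = rng.normal(size=(len(classes), bands)).cumsum(axis=1)
X = np.vstack([sig + rng.normal(scale=2.0, size=(per_class, bands))
               for sig in signatures])
y = np.repeat(np.arange(len(classes)), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```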
Procedia PDF Downloads 79
3509 Development of a New Wearable Device for Automatic Guidance Service
Authors: Dawei Cai
Abstract:
In this paper, we present a new wearable device that provides an automatic guidance service for visitors. By combining the position information from NFC and the orientation information from a 6-axis acceleration and terrestrial magnetism sensor, the direction of the wearer's head can be calculated. We developed an algorithm to calculate the device orientation based on the data from the acceleration and terrestrial magnetism sensor. If a visitor wants an explanation of an exhibit in front of him, all he has to do is lift up his mobile device. The identification program will automatically identify the status based on the information from the NFC and MEMS sensors, and start playing the explanation content for him. This service may be convenient for elderly people, people with disabilities, or children.
Keywords: wearable device, ubiquitous computing, guide system, MEMS sensor, NFC
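A standard way to compute such a heading from an accelerometer/magnetometer pair is tilt compensation; the sketch below follows the common textbook formulation and is not necessarily the authors' algorithm (axis conventions vary by sensor).

```python
# Hedged sketch of a tilt-compensated compass heading from 6-axis data.
import math

def heading_deg(ax, ay, az, mx, my, mz):
    # Roll and pitch estimated from the gravity vector
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector into the horizontal plane
    mxh = (mx * math.cos(pitch) + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    myh = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-myh, mxh)) % 360

# Device held level, magnetic field pointing north with a downward component
print(heading_deg(0.0, 0.0, 9.81, 22.0, 0.0, -40.0))  # ~0 deg (north)
```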
Procedia PDF Downloads 425
3508 Transmission Line Protection Challenges under High Penetration of Renewable Energy Sources and Proposed Solutions: A Review
Authors: Melake Kuflom
Abstract:
European power networks use multiple overhead transmission lines to construct a highly duplicated system that delivers reliable and stable electrical energy to the distribution level. The transmission line protection applied in the existing GB transmission network normally comprises independent unit differential and time-stepped distance protection schemes, referred to as main-1 and main-2 respectively, with overcurrent protection as a backup. The increasing penetration of renewable energy sources, commonly referred to as 'weak sources', into the power network has resulted in a decline of fault level. Traditionally, the fault level of the GB transmission network has been strong; hence the fault current contribution has been more than sufficient to ensure the correct operation of the protection schemes. However, numerous conventional coal and nuclear generators have been, or are about to be, shut down due to the societal requirement for CO2 emission reduction, and this has reduced the fault level on some transmission lines; an adaptive transmission line protection is therefore required. Generally, greater utilization of renewable energy sources generated from wind or direct solar energy reduces CO2 emissions and can increase system security and reliability, but it reduces the fault level, which has an adverse effect on protection. Consequently, the effectiveness of conventional protection schemes under low fault levels needs to be reviewed, particularly for future GB transmission network operating scenarios. This paper evaluates the transmission line challenges under high penetration of renewable energy sources and provides alternative viable protection solutions based on the problems observed. The paper considers the assessment of renewable energy sources (RES) based on fully rated converter technology. The DIgSILENT PowerFactory software tool is used to model the network.
Keywords: fault level, protection schemes, relay settings, relay coordination, renewable energy sources
Procedia PDF Downloads 206
3507 A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard
Authors: Xin-Yu Shih, Yue-Qu Liu, Hong-Ru Chou
Abstract:
This paper presents a low-area and fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. Besides, a special processing element is developed to make reconfigurable computing possible, while a first-in first-out (FIFO) scheduling design technique is proposed for hardware-friendly FIFO resource arrangement. In a chip synthesis via TSMC 40 nm CMOS technology, the hardware circuit occupies a core area of only 0.2325 mm² and dissipates 233.5 mW at a maximum operating frequency of 250 MHz.
Keywords: reconfigurable, fast Fourier transform (FFT), single-path delay feedback (SDF), 3GPP-LTE
Procedia PDF Downloads 278
3506 Split Monotone Inclusion and Fixed Point Problems in Real Hilbert Spaces
Authors: Francis O. Nwawuru
Abstract:
The convergence analysis of split monotone inclusion problems and fixed point problems of certain nonlinear mappings is investigated in the setting of real Hilbert spaces. An inertial extrapolation term in the spirit of Polyak is incorporated to speed up the rate of convergence. Under standard assumptions, strong convergence of the proposed algorithm is established without computing the resolvent operator or invoking the Yosida approximation method. The stepsize involved in the algorithm does not depend on the spectral radius of the linear operator. Furthermore, applications of the proposed algorithm to some related optimization problems are also considered. Our result complements and extends numerous results in the literature.
Keywords: fixed point, Hilbert space, monotone mapping, resolvent operators
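For illustration, a generic Polyak-type inertial extrapolation step is written out below; the symbols θₙ, αₙ and T are assumed for this sketch and need not match the paper's exact iteration.

```latex
% Generic Polyak-type inertial step (notation assumed for illustration):
\[
  w_n = x_n + \theta_n \,(x_n - x_{n-1}), \qquad 0 \le \theta_n < 1,
\]
\[
  x_{n+1} = (1 - \alpha_n)\, w_n + \alpha_n\, T w_n ,
\]
```

where T denotes the fixed-point mapping of the scheme; the term θₙ(xₙ − xₙ₋₁) reuses the previous displacement as momentum, which is what accelerates convergence.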
Procedia PDF Downloads 52
3505 A Practical Technique of Airless Tyres' Mold Manufacturing
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
Unlike pneumatic tyres, airless or flat-proof tyres (also known as tweels) are designed with a poly-composite compound tread around a hub of flexible spokes. The main advantage of this design is its robustness: airless tyres cannot deflate or blow out at highway speeds like conventional tyres, so the driver does not have to worry about carrying a spare tyre. A summary of the study on manufacturing airless tyres' molds is given. Moreover, we propose some advantages and disadvantages of using tweel tyres.
Keywords: airless tyres, tweel, non-pneumatic tyres, manufacturing
Procedia PDF Downloads 501
3504 Investigations on Utilization of Chrome Sludge, Chemical Industry Waste, in Cement Manufacturing and Its Effect on Clinker Mineralogy
Authors: Suresh Vanguri, Suresh Palla, Prasad G., Ramaswamy V., Kalyani K. V., Chaturvedi S. K., Mohapatra B. N., Sunder Rao TBVN
Abstract:
The utilization of industrial waste materials and by-products in the cement industry helps conserve natural resources and avoids the problems arising from waste dumping. The use of non-carbonate materials as raw mix components in clinker manufacturing is identified as one of the key areas for reducing Green House Gas (GHG) emissions. Chrome sludge is a waste material generated from the manufacturing process of sodium dichromate. This paper presents studies on the use of chrome sludge in clinker manufacturing and its impact on the development of clinker mineral phases and on cement properties. Chrome sludge was found to contain substantial amounts of CaO, Fe2O3 and Al2O3 and was therefore used to replace some conventional sources of alumina and iron in the raw mix. Different mixes were prepared by varying the chrome sludge content from 0 to 5%, and the mixes were evaluated for burnability. Laboratory-prepared clinker samples were evaluated for qualitative and quantitative mineralogy using X-ray Diffraction (XRD) studies. Optical microscopy was employed to study the distribution of clinker phases, their granulometry and mineralogy. Since chrome sludge also contains considerable amounts of chromium, studies were conducted on the leachability of heavy elements in the chrome sludge as well as in the resultant cement samples. Estimation of heavy elements, including chromium, was carried out using ICP-OES. Further, the chromium valence states, Cr(III) and Cr(VI), were studied using conventional chemical analysis methods coupled with UV-VIS spectroscopy. Assimilation of chromium into the clinker phases was investigated using SEM-EDXA studies. Bulk cement was prepared from the clinker to study the effect of chrome sludge on cement properties such as setting time, soundness and strength development against a control cement. Studies indicated that chrome sludge can be successfully utilized, and its content needs to be optimized based on raw material characteristics.
Keywords: chrome sludge, leaching, mineralogy, non-carbonate materials
Procedia PDF Downloads 217
3503 Experimental Determination of Shear Strength Properties of Lightweight Expanded Clay Aggregates Using Direct Shear and Triaxial Tests
Authors: Mahsa Shafaei Bajestani, Mahmoud Yazdani, Aliakbar Golshani
Abstract:
Artificial lightweight aggregates have a wide range of applications in industry and engineering. Nowadays, the use of this material in geotechnical activities, especially as backfill in retaining walls, has been growing due to specific characteristics that make it a competent alternative to conventional geotechnical materials. In practice, a material with lower weight but higher shear strength parameters would be ideal as backfill behind retaining walls because of the important role these parameters play in decreasing the overall active lateral earth pressure. In this study, two types of Lightweight Expanded Clay Aggregates (LECA) produced in the Leca factory are investigated. LECA is made in a rotary kiln by heating natural clay at temperatures up to 1200 °C, producing quasi-spherical aggregates of different sizes, ranging from 0 to 25 mm. The loose bulk density of these aggregates is between 300 and 700 kg/m³. The purpose of this research is to determine the stress-strain behavior, shear strength parameters, and energy absorption of LECA materials. Direct shear tests were conducted at five normal stresses of 25, 50, 75, 100, and 200 kPa. In addition, conventional triaxial compression tests were performed at confining pressures of 50, 100, and 200 kPa to examine stress-strain behavior. The experimental results show a high internal friction angle and even a considerable amount of nominal cohesion, despite the granular structure of LECA. These desirable properties, along with the intrinsically low density of these aggregates, make LECA a very suitable material for geotechnical applications. Furthermore, the results demonstrate that lightweight aggregates may have high energy absorption, making them an excellent alternative material for seismic isolation.
Keywords: expanded clay, direct shear test, triaxial test, shear properties, energy absorption
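A sketch of how such parameters are typically extracted from the direct shear results is shown below: a linear Mohr-Coulomb fit of peak shear stress against normal stress. Only the normal stress levels follow the abstract; the peak shear values are hypothetical.

```python
# Hedged sketch of a Mohr-Coulomb fit to direct shear test results.
# Normal stresses match the abstract; shear values are hypothetical.
import numpy as np

normal = np.array([25., 50., 75., 100., 200.])    # applied normal stress (kPa)
shear = np.array([32., 51., 70., 88., 165.])      # hypothetical peak shear (kPa)

# tau = c + sigma_n * tan(phi)  ->  ordinary linear least squares
slope, cohesion = np.polyfit(normal, shear, 1)
phi = np.degrees(np.arctan(slope))
print(f"friction angle = {phi:.1f} deg, nominal cohesion = {cohesion:.1f} kPa")
```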
Procedia PDF Downloads 166
3502 Extraction of Phycocyanin from Spirulina platensis by Isoelectric Point Precipitation and Salting Out for Scale Up Processes
Authors: Velasco-Rendón María Del Carmen, Cuéllar-Bermúdez Sara Paulina, Parra-Saldívar Roberto
Abstract:
Phycocyanin is a blue pigment protein with fluorescent activity produced by cyanobacteria. It has recently been studied to determine its anticancer, antioxidant and anti-inflammatory potential. In 2014 it was approved as a Generally Recognized As Safe (GRAS) protein pigment for the food industry. Therefore, phycocyanin shows potential for the food, nutraceutical, pharmaceutical and diagnostics industries. Conventional phycocyanin extraction uses buffer solutions and ammonium sulphate followed by chromatography or ATPS for protein separation; these further purification steps are time-consuming, energy-intensive and not suitable for scale-up processing. This work presents an alternative to conventional methods that also allows large-scale application with commercially available equipment. The extraction was performed by exposing the dry biomass to mechanical cavitation and salting out with NaCl, an edible reagent. Isoelectric point precipitation was also used, by addition of HCl and neutralization with NaOH. The results were measured and compared in terms of phycocyanin concentration, purity and extraction yield. The best extraction condition was salting out with 0.20 M NaCl after 30 minutes of cavitation, giving a concentration in the supernatant of 2.22 mg/ml, a purity of 3.28 and a recovery from crude extract of 81.27%. Mechanical cavitation presumably increased the solvent-biomass contact, making the crude extract visibly dark blue after centrifugation. Compared to other systems, our process has fewer purification steps, similar concentrations in the phycocyanin-rich fraction and higher purity. The contaminants present in our process are edible NaCl or low pH, which can be neutralized. It can also be adapted to a semi-continuous process with commercially available equipment. These characteristics make this process an appealing alternative for extraction of phycocyanin as a pigment for the food industry.
Keywords: extraction, phycocyanin, precipitation, scale-up
Procedia PDF Downloads 438
3501 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
The main concern about deep-fat-fried food materials is their high final oil content, absorbed during the frying process and/or during the cooling period, since a diet high in oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partial drying of the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying, and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that of directly fried ones. For the predrying and frying processes, the controlled variables were oven temperature and weight loss, and frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture and taste. The properties of the directly fried zucchini slices with the highest sensory scores were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization. The properties determined after sensory evaluation were selected as targets, while oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. As a result, predrying is suggested as one option to reduce the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.
Keywords: healthy process, optimization, response surface methodology, oil uptake, conventional oven
Procedia PDF Downloads 366
3500 Analyzing the Impact of DCF and PCF on WLAN Network Standards 802.11a, 802.11b, and 802.11g
Authors: Amandeep Singh Dhaliwal
Abstract:
Networking solutions, particularly wireless local area networks, have revolutionized technological advancement. Wireless Local Area Networks (WLANs) have gained great popularity as they provide location-independent network access between computing devices. A number of access methods are used in wireless networks, among which DCF and PCF are the fundamental ones. This paper emphasizes the impact of the DCF and PCF access mechanisms on the performance of the IEEE 802.11a, 802.11b and 802.11g standards. Performance is evaluated across these three standards using the above-mentioned access mechanisms, on the basis of various parameters, viz. throughput, delay and load. Analysis revealed superior throughput performance with low delays for the 802.11g standard compared to the 802.11a/b standards, using both the DCF and PCF access methods.
Keywords: DCF, IEEE, PCF, WLAN
Procedia PDF Downloads 425
3499 Re-Conceptualizing the Indigenous Learning Space for Children in Bangladesh Placing Built Environment as Third Teacher
Authors: Md. Mahamud Hassan, Shantanu Biswas Linkon, Nur Mohammad Khan
Abstract:
Over the last three decades, the primary education system in Bangladesh has experienced significant improvement, but it has failed to cope with various social and cultural aspects, which presents many challenges for children, families, and the public school system. Neglecting our own contextual learning environment, much attention has regrettably been paid to the physical outcome-focused model, which is nothing but mere infrastructural development, and too little to an environment that suits children's psychology and improves their social, emotional, physical, and moral competency. In South Asia, the symbol of education was never the little red house of colonial architecture but 'a Guru sitting under a tree', whereas a responsive and inclusive design approach could help create more innovative learning environments. Such an approach incorporates how the built, natural, and cultural environment shapes the learner; in turn, learners shape the learning. This research will be conducted to: i) identify the major issues and drawbacks of government policy for primary education development programs; ii) explore and evaluate the morphology of the conventional model of school; and iii) propose an alternative model, in a collaborative design process with the stakeholders, for maximizing the relationship between the physical learning environment and learners by treating 'the built environment' as 'the third teacher'. Based on observation, this research will try to find out to what extent built and natural environments can be utilized as a teaching tool for a more optimal learning environment. It should also be evident that there is a significant gap between state policy, predetermined educational specifications, and the implementation process in response to stakeholders' involvement. The outcome of this research will contribute to a people-place-sensitive design approach through a more thoughtful and responsive architectural process.
Keywords: built environment, conventional planning, indigenous learning space, responsive design
Procedia PDF Downloads 107
3498 A Levinasian Perspective on the Field of Applied Ethics
Authors: Payman Tajalli, Steven Segal
Abstract:
Applied ethics is the area of ethics looked upon most favorably as the most appropriate and useful for educational purposes; after all, if ethics found no application, would any investment of time, effort and finance by educational institutions be warranted? Current approaches to ethics in business and management often entail appealing to various types of moral theories, and to this end almost every major philosophical approach has been enlisted. In this paper, we look at ethics through the philosophy of Emmanuel Levinas to argue that since ethics is 'first philosophy', it can be neither rule-based nor rule-governed, not something that can be worked out first and then applied to a given situation; hence the overwhelming emphasis on 'applied ethics' as a field of study in business and management education is unjustified. True ethics is not applied ethics. This assertion does not mean that teaching ethical theories and philosophies should be abandoned; rather, it is the acceptance of the fact that an increase in cognitive awareness of such theories, ethical models and frameworks, or the mastering of techniques and procedures for ethical decision making, will not effect the desired ethical transformation in our students. Levinas himself argued for an ethics without a foundation, not one that requires us to go 'beyond good and evil' as Nietzsche contended, but rather an ethics which necessitates going 'before good and evil'. Such an ethics does not provide us with a set of methods or techniques or a decision tree that enables us to determine the rightness of an action and what we ought to do; rather, it is about a way of being, an ethical posture or approach one takes in the intersubjective relationship with the other, that holds the promise of ethical conduct. Ethics in this Levinasian sense is one of infinite and unconditional responsibility for the other person in relationship, an ethics which is not subject to negotiation, calculation or reciprocity, and as such it can be neither applied nor taught through conventional pedagogy, with its focus on knowledge transfer from teacher to student; to this end, Levinas offers a non-maieutic, non-conventional approach to pedagogy. The paper concludes that, from a Levinasian perspective on ethics and education, we may need to guide our students to move away from the clear and objective professionalism of management and applied ethics towards a murkier individual spiritualism. For Levinas, this is 'the Copernican revolution' in ethics.
Keywords: business ethics, ethics education, Levinas, maieutic teaching, ethics without foundation
Procedia PDF Downloads 323
3497 Design and Tooth Contact Analysis of Face Gear Drive with Modified Tooth Surface in Helicopter Transmission
Authors: Kazumasa Kawasaki, Isamu Tsuji, Hiroshi Gunbara
Abstract:
A face gear drive is composed of a spur or helical pinion in mesh with a face gear, transferring power and motion between intersecting or skew axes. Due to the peculiar suitability of the face gear drive for shunt and confluence drives, it shows potential advantages for application in helicopter transmissions. The advantage of such applications is the possibility of torque splitting, which is significant where a pinion drives two face gears to provide an accurate division of power and motion. This mechanism greatly reduces weight and cost compared to a conventional design, and has therefore led to revived interest; the face gear drive has been utilized in substitution for bevel and hypoid gears in limited cases. The face gear drive with a spur or helical pinion is newly designed in order to determine an effective meshing area under the design parameters and specific design dimensions. The face gear has two unique dimensions, which control the face width of the tooth and the outside and inside diameters of the face gear. On the other hand, it is necessary to modify the tooth surfaces of the face gear drive in order to avoid the influence of alignment errors on the tooth contact patterns in practical use. Conventionally, the pinion tooth surfaces are modified for this purpose. However, it is hard to control the tooth contact pattern intentionally and to adjust the position of the pinion axis in meshing of the gear pair. Therefore, a method of modifying the tooth surfaces of the face gear is proposed. Moreover, based on tooth contact analysis, the tooth contact pattern and transmission errors of the designed face gear drive are analyzed, and the influence of alignment errors on the tooth contact patterns and transmission errors is investigated. These results showed that the tooth contact patterns and transmission errors are controllable, and a face gear drive that is insensitive to alignment errors can be obtained.
Keywords: alignment error, face gear, gear design, helicopter transmission, tooth contact analysis
Procedia PDF Downloads 437
3496 A Common Automated Programming Platform for Knowledge Based Software Engineering
Authors: Ivan Stanev, Maria Koleva
Abstract:
A common platform for automated programming (CPAP) is defined in detail. Two versions of CPAP are described: a cloud-based version (including the set of components for classic programming and the set of components for combined programming) and a KBASE-based version (including the set of components for automated programming and the set of components for ontology programming). Four KBASE products (a module for automated programming of robots, an intelligent product manual, an intelligent document display, and an intelligent form generator) are analyzed, and CPAP's contributions to automated programming are presented.
Keywords: automated programming, cloud computing, knowledge based software engineering, service oriented architecture
Procedia PDF Downloads 344
3495 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna
Authors: Gurkirandeep Kaur, Rana Pratap Yadav
Abstract:
This experimental study examines the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna using an automated measuring system. In this study, a 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 W and gas pressure between 0.01 and 0.05 mbar. The study reveals that a single-structure plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube, achieved by altering plasma properties such as working pressure, operating frequency, input RF power, and discharge tube dimensions, i.e., length, radius, and thickness. It is also reported that plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automated radiation measuring system has been fabricated and is presented in detail. This system combines a controller, DC servo motors, a vector network analyzer (VNA), and a computing device to evaluate the radiation intensity, directivity, gain and efficiency of the plasma antenna. In this system, the controller is connected to multiple motors that move aluminum shafts in both the elevation and azimuthal planes, while the radiation from the plasma monopole antenna is measured by the VNA, which is in turn wired to the computing device to display the radiation in polar-plot form. The radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provide a wide range of radiation patterns in which desired radiation parameters such as beamwidth, direction of radiation, radiation intensity and antenna efficiency can be achieved with a single monopole. Due to this wide selectivity in radiation pattern, the antenna can meet the demand for wider bandwidth and high data speeds in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the far-field radiation pattern of any kind of antenna system.
Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave
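The automated scan itself reduces to a simple loop: step the antenna in azimuth, read the VNA, plot. In the sketch below, set_azimuth() and read_s21_db() are hypothetical stand-ins for the real motor-controller and VNA drivers, which the abstract does not name.

```python
# Hedged sketch of an automated azimuthal pattern scan.
# set_azimuth() and read_s21_db() are HYPOTHETICAL driver stand-ins.
import numpy as np
import matplotlib.pyplot as plt

def set_azimuth(angle_deg):
    """Hypothetical stand-in for the servo-motor controller command."""
    pass

def read_s21_db(angle_deg):
    """Hypothetical stand-in for the VNA |S21| reading; returns a toy
    pattern here (a real VNA reading would not need the angle)."""
    return -30 + 10 * np.cos(np.radians(angle_deg))

angles = np.arange(0, 360, 5)
pattern = []
for a in angles:
    set_azimuth(a)                  # rotate the antenna under test
    pattern.append(read_s21_db(a))  # record received power (dB)

ax = plt.subplot(projection="polar")
ax.plot(np.radians(angles), pattern)
ax.set_title("Azimuthal radiation pattern (toy data)")
plt.show()
```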
Procedia PDF Downloads 119
3494 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulties of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
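To make the WoE-matching idea concrete, the snippet below contrasts classic count-based WoE with a score-implied WoE derived from a model's mean predicted risk per bin; the data and the "model" are synthetic stand-ins, not D&B's.

```python
# Hedged sketch: count-based WoE vs a score-implied WoE per bin.
# Synthetic data; the true probability stands in for an ML model's score.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                        # one explanatory variable
p_bad = 1 / (1 + np.exp(-(0.8 * x - 1.5)))       # ML-style risk score
bad = rng.random(5000) < p_bad                   # observed outcomes

edges = np.quantile(x, np.linspace(0, 1, 6))     # 5 equal-frequency bins
idx = np.clip(np.digitize(x, edges[1:-1]), 0, 4)
pop_odds = (~bad).sum() / bad.sum()              # population good/bad odds

for b in range(5):
    m = idx == b
    # Classic WoE from observed good/bad counts in the bin
    woe_counts = np.log(((~bad & m).sum() / (bad & m).sum()) / pop_odds)
    # Score-matched WoE: use the model's mean predicted risk in the bin
    s = p_bad[m].mean()
    woe_score = np.log(((1 - s) / s) / ((1 - p_bad.mean()) / p_bad.mean()))
    print(f"bin {b}: WoE(counts) = {woe_counts:+.2f}, WoE(score) = {woe_score:+.2f}")
```

The score-based variant needs no bad observations in a bin, which is exactly what makes it useful for the sparse cases mentioned above.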
Procedia PDF Downloads 134
3493 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity
Authors: Maxim Glushenkov, Alexander Kronberg
Abstract:
The isobaric expansion (IE) process is an alternative to the conventional gas/vapor expansion, accompanied by a pressure decrease, that is typical of all state-of-the-art heat engines. Eliminating the expansion stage accompanied by useful work means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter, and a comparison of the experimental results with a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30–85 °C and a heat sink temperature of around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at a heat source temperature of 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates at useful powers up to 1 kW with an oil pumping flow rate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5–6%. This efficiency is remarkably high considering such a low temperature difference (10–65 °C) and low power (< 1 kW). The engine's observed performance is in good agreement with the predictions of the model. The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially in the low temperature (< 100 °C) and low power (< 500 kW) range where other known technologies are not economical. Thus low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be harnessed in various energy conversion processes.
Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery
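As a quick sanity check on the reported numbers (not a calculation from the paper), the fragment below compares the best reported efficiency with the Carnot limit for the same source and sink temperatures.

```python
# Compare the reported 6% efficiency (85 C source, ~20 C sink) with Carnot.
t_hot, t_cold = 85 + 273.15, 20 + 273.15          # kelvin
carnot = 1 - t_cold / t_hot
print(f"Carnot limit: {carnot:.1%}")              # ~18.1%
print(f"fraction of Carnot at 6%: {0.06 / carnot:.0%}")  # ~33%
```

Reaching roughly a third of Carnot at sub-kilowatt scale is what makes the result notable for low-grade-heat conversion.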
Procedia PDF Downloads 226
3492 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysian Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. The current system could be considered unsatisfactory in power tracking performance, and there is significant room for improvement. Hence, a new core power control design is very important to improve the current tracking and regulation of reactor power by controlling the movement of control rods to suit the demands of highly sensitive nuclear reactor power control. In this paper, a Model Predictive Control (MPC) law is applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was formulated with a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal, and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Both tracking and regulation performance of the conventional controller and TFMPC were compared using MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
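The point kinetics core model the paper builds on can be sketched compactly. The one-delayed-group simulation below uses illustrative parameter values, not the RTP's actual kinetics data.

```python
# Hedged sketch of one-delayed-group point kinetics; parameters are
# illustrative textbook values, not the RTP's.
beta, Lambda, lam = 0.007, 4e-5, 0.08   # delayed fraction, generation time (s), decay (1/s)
dt, steps = 1e-3, 20000                 # 20 s of simulated time
n = 1.0                                 # relative neutron density (power)
c = beta / (Lambda * lam)               # equilibrium precursor concentration
rho = 0.001                             # small step reactivity insertion

for _ in range(steps):                  # explicit Euler integration
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    n += dt * dn
    c += dt * dc

print(f"relative power after {steps * dt:.0f} s: {n:.2f}")
```

The prompt jump followed by a slow stable-period rise that this reproduces is exactly the dynamics the TFMPC must track while moving the control rods.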
Procedia PDF Downloads 241
3491 Short-Term Impact of a Return to Conventional Tillage on Soil Microbial Attributes
Authors: Promil Mehra, Nanthi Bolan, Jack Desbiolles, Risha Gupta
Abstract:
Agricultural practices affect the soil's physical and chemical properties, which in turn influence soil microorganisms as a function of the soil biological environment. On the return to conventional tillage (CT) from a continuing no-till (NT) cropping system, very little information is available on the impact of the intermittent tillage on soil biochemical properties from a short-term (2-year) study period. Therefore, the contributions made by different microorganisms (fungi, bacteria) were also investigated in order to identify the effective changes in soil microbial activity under a South Australian dryland farming system. This study was conducted to understand the impact of microbial dynamics on soil organic carbon (SOC) under NT and CT systems treated with different levels of mulching (0, 2.5 and 5 t/ha). Our results from the incubation experiment demonstrated that the cumulative CO2 emitted from the CT system was 34.5% higher than from the NT system. Respiration from the surface layer (0-10 cm) was significantly (P<0.05) higher, by 8.5% and 15.8% under CT and by 8% and 18.9% under NT, relative to the 10-20 and 20-30 cm layers respectively. Further, dehydrogenase enzyme activity (DHA) and microbial biomass carbon (MBC) were both significantly (P<0.05) lower under CT, i.e., by 7.4%, 7.2% and 6.0% (DHA) and 19.7%, 15.7% and 4% (MBC) across the different mulching levels (0, 2.5, 5 t/ha) respectively. In general, it was found that in both tillage systems the enzyme activity and MBC decreased with increasing depth (0-10, 10-20 and 20-30 cm) and with increasing mulching rate (0, 2.5 and 5 t/ha). From the perspective of microbial stress, stress was 28.6% higher under the CT system than under the NT system. The activities of different microorganisms, fungi and bacteria, were determined by substrate-induced inhibition respiration using the antibiotics cycloheximide (16 mg/g of soil) and streptomycin sulphate (14 mg/g of soil), trapping the CO2 in an alkali (0.5 M NaOH) solution. The microbial activities were confirmed by a plating technique, where it was found that bacterial activities were 46.2% and 38.9% higher than fungal activity under the CT and NT systems. In conclusion, changes in the relative abundance and activity of different microorganisms (bacteria and fungi) under different tillage systems could significantly affect C cycling and storage due to their unique structures and differential interactions with soil physical properties.
Keywords: tillage, soil respiration, MBC, fungal-bacterial activity
Procedia PDF Downloads 261