Search results for: Monitoring of Web service composition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2688

168 A Refined Application of QFD in SCM, A New Approach

Authors: Nooshin La'l Mohamadi

Abstract:

As customers in the new century express increasingly global demands, networks of interconnected businesses have been established, and the management of such networks has become a major key to gaining competitive advantage. Supply chain management encompasses these managerial activities. Within a supply chain, quality plays a critical role. QFD is a widely used tool that not only brings quality to the final products or service packages required by the end customer or the retailer, but can also establish a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation depends considerably on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, for every company, each wholesaler carries a specific importance ratio that can strongly influence the figures calculated in the House of Quality (HOQ) in QFD. Moreover, given the competitiveness of today's marketplace, it is widely recognized that consumers' expressed demands are highly volatile over production periods. Such instability and proneness to change are clearly observable, and accounting for them during the analysis of the HOQ is both influential and undoubtedly required. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process to account for the wholesalers' importance and simultaneously introduces a mortality coefficient for the reliability and stability of consumers' expressed demands over time. The paper then elaborates on the relevant contributory factors and approaches for calculating these coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.

Keywords: Analytic Network Process, Quality Function Deployment, QFD flaws, Supply Chain Management

167 Prioritizing the Most Important Information from Contractors’ BIM Handover for Firefighters’ Responsibilities

Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob

Abstract:

The fire service is responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters' responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. Insufficient knowledge about a building's indoor environment impedes firefighters' capabilities and leads to lost property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. Therefore, the purpose of this research is to rank the information firefighters require, from the most important to the least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that "the location of exit doors, windows, corridors, elevators, and stairs", "material of building elements", and "building data" are the three most important items specified by firefighters. The results also imply that 2D models of the architectural, structural and wayfinding information are more understandable than the 3D model, while a 3D model of the MEP system conveys more information than the 2D model. Furthermore, color in visualization can help firefighters understand the building information more easily and quickly. Sufficient internal consistency of all responses was demonstrated by developing the Pearson correlation matrix and obtaining a Cronbach's alpha of 0.916. Therefore, the results of this study are reliable and can be applied to the population.
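
As a small illustration of the internal-consistency check mentioned above, the sketch below computes Cronbach's alpha from a respondents-by-items rating matrix; the response values are invented, not the survey data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each survey item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return n_items / (n_items - 1) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: 6 firefighters rating the importance of 4 information items (1-5 scale).
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 5, 5],
    [5, 5, 4, 4],
    [3, 3, 4, 3],
    [4, 5, 5, 4],
    [5, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```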

Keywords: BIM, building fire response, ranking, visualization.

166 Dislocation Modelling of the 1997-2009 High-Precision Global Positioning System Displacements in Darjiling- Sikkim Himalaya, India

Authors: Kutubuddin Ansari, Malay Mukul, Sridevi Jade

Abstract:

We used high-precision Global Positioning System (GPS) data to geodetically constrain the motion of stations in the Darjiling-Sikkim Himalayan (DSH) wedge and examine the deformation at the Indian-Tibetan plate boundary using IGS (International GPS Service) fiducial stations. A high-precision GPS-based displacement and velocity field was measured in the DSH between 1997 and 2009. To obtain additional insight north of the Indo-Tibetan border and in the Darjiling-Sikkim-Tibet (DaSiT) wedge, published velocities from four stations, J037, XIGA, J029 and YADO, were also included in the analysis. India-fixed velocities, or the back-slip, were computed in the DaSiT wedge relative to the pole of rotation of the Indian Plate (latitude 52.97 ± 0.22°, longitude −0.30 ± 3.76°, angular velocity 0.500 ± 0.008°/Myr). Dislocation modelling was carried out with the back-slip to find the finite rectangular dislocation, i.e. the causative fault, that best reproduces the observed back-slip, using a forward-modelling approach based on dislocation theory. To find the best possible solution, three different models were attempted: slip along a single thrust fault, then along two thrust faults, and finally along three thrust faults, to simulate the back-slip in the DaSiT wedge. The three-fault case best fits the measured displacements and is taken as the preferred solution.
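
For readers unfamiliar with the "India-fixed" step, the sketch below shows one way to predict a site's rigid-plate velocity from an Euler pole via v = ω × r and subtract it from an observed velocity to obtain the residual (back-slip); the site coordinates and observed velocity are placeholders, not values from the study.

```python
import numpy as np

R_EARTH = 6371e3  # mean Earth radius, m

def plate_velocity_enu(site_lat, site_lon, pole_lat, pole_lon, omega_deg_per_myr):
    """East/North velocity (mm/yr) at a site due to rigid rotation about an Euler pole."""
    lat, lon = np.radians([site_lat, site_lon])
    plat, plon = np.radians([pole_lat, pole_lon])
    # Rotation vector (rad/yr) and site position vector (m) in Earth-centred coordinates.
    omega = np.radians(omega_deg_per_myr) / 1e6 * np.array(
        [np.cos(plat) * np.cos(plon), np.cos(plat) * np.sin(plon), np.sin(plat)])
    r = R_EARTH * np.array(
        [np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
    v_xyz = np.cross(omega, r) * 1e3  # m/yr -> mm/yr
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    return np.dot(v_xyz, east), np.dot(v_xyz, north)

# Indian plate pole quoted above; the site location and observed velocity are illustrative only.
ve_plate, vn_plate = plate_velocity_enu(27.3, 88.6, 52.97, -0.30, 0.500)
ve_obs, vn_obs = 38.0, 35.0  # hypothetical ITRF velocity, mm/yr
print(f"predicted plate motion: E={ve_plate:.1f}, N={vn_plate:.1f} mm/yr")
print(f"India-fixed residual : E={ve_obs - ve_plate:.1f}, N={vn_obs - vn_plate:.1f} mm/yr")
```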

Keywords: Global Positioning System, Darjiling-Sikkim Himalaya, Dislocation modelling.

165 Development of the Maturity Sensor Prototype and Method of Its Placement in the Structure

Authors: Ye. B. Utepov, A. S. Tulebekova, A. B. Kazkeyev

Abstract:

Maturity sensors are used to determine concrete strength by a non-destructive method. The placement of the maturity sensors determines the number required for a given frame of a monolithic building. This paper proposes a cheap prototype of an embedded wireless sensor for monitoring concrete structures, as well as an alternative strategy for placing sensors based on the transitional boundaries of the temperature distribution during concrete curing, which were determined by building a heat map of the temperature distribution in which unknown values are calculated by the inverse distance weighting method. The developed prototype can simultaneously measure temperature and relative humidity over a smartphone-controlled time interval. It implements the maturity method to assess the in-situ strength of concrete, which is considered an alternative to the traditional shock impulse and compression testing methods used in Kazakhstan. The prototype was tested in laboratory and field conditions. The tests were aimed at studying the effect of internal and external temperature and relative humidity on concrete's strength gain. Based on an experimentally poured concrete slab with randomly integrated maturity sensors, it was determined that the transition boundaries take elliptical forms. The temperature distribution over the largest diameter of the ellipses was plotted, resulting in upright and inverted parabolas. As a result, the distance between the closest opposite crossing points of the parabolas is accepted as the maximum permissible step for setting the maturity sensors. The proposed placement strategy can be applied to sensors that measure various continuous phenomena, such as relative humidity. Prototype testing also revealed the inconvenience of Bluetooth, owing to its weak signal and the inability to access multiple prototypes simultaneously. For this reason, further prototype upgrades are planned in future work.
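
The heat map mentioned above relies on inverse distance weighting to estimate temperatures between sensor locations; a minimal sketch of that interpolation is given below, with invented sensor positions and readings rather than the experimental data.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: estimate values at query points from scattered sensor readings."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    estimates = []
    for q in np.asarray(xy_query, float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-9):                 # query point coincides with a sensor
            estimates.append(values[np.argmin(d)])
            continue
        w = 1.0 / d**power                   # closer sensors get larger weights
        estimates.append(np.sum(w * values) / np.sum(w))
    return np.array(estimates)

# Hypothetical maturity sensors embedded in a slab: (x, y) in metres, curing temperature in degC.
sensors = [(0.5, 0.5), (2.0, 0.8), (1.2, 2.1), (3.0, 2.5)]
temps = [41.0, 44.5, 39.8, 43.2]
grid = [(x, y) for x in np.linspace(0, 3.5, 8) for y in np.linspace(0, 3.0, 7)]
heat_map = idw_interpolate(sensors, temps, grid).reshape(8, 7)
print(heat_map.round(1))
```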

Keywords: Heat map, placement strategy, temperature and relative humidity, wireless embedded sensor.

164 The Reason of Principles of Construction Engineering and Management Being Necessary for Contracting Firms and Their Projects Managers

Authors: Mamoon Mousa Atout

Abstract:

The construction industry is growing continuously, not only in the Middle East region but almost all over the world. Over the last fifteen years, a large expansion and an increase in different types of projects have been observed. Many infrastructure projects have been developed: high-rise buildings, large shopping malls, power substations, roads, bridges, schools, universities, and many new cities with full and complete facilities. The growth of these projects has been accomplished through many international and local contracting organizations. The senior management of these organizations depends on qualified and experienced teams who are aware of the implications of project management, construction management, engineering management and resource management from tendering until final completion of the project. This research aims to find out why the principles of construction engineering and management are necessary for contracting firms and their managers. Principles of construction management help contracting organizations to accomplish and deliver projects without delay. This can be maintained by establishing detailed guidelines for updating the adopted construction management system through qualified and experienced project managers. The research focuses on the benefits of other essential skills in project planning, monitoring and control. Defining the roles and responsibilities of contractor project managers during tendering and execution is part of the investigated factors to be analyzed. Other skills, such as optimizing and utilizing the available project resources to deliver the project within time, cost and quality, are also investigated to find out how these factors affect the performance of contracting firms, project managers and projects. The conclusions of the research will inform senior management teams and contractors' project managers about the benefits of implementing a construction management system, its effect on performance, knowledge of contract values, and the optimal profit margin of the firm.

Keywords: Construction management, contracting firms, project managers, planning processes, roles and responsibilities.

163 Production and Application of Organic Waste Compost for Urban Agriculture in Emerging Cities

Authors: Alemayehu Agizew Woldeamanuel, Mekonnen Maschal Tarekegn, Raj Mohan Balakrishina

Abstract:

Composting is one of the conventional techniques adopted for organic waste management, but the practice is very limited in emerging cities even though most of the waste generated is organic. This paper aims to examine the viability of composting for organic waste management in the emerging city of Addis Ababa, Ethiopia by addressing the composting practice, the quality of compost and the application of compost in urban agriculture. The study collects data through compost laboratory testing and a survey of urban farm households, and applies descriptive analysis to the state of compost production and application, physicochemical analysis to the compost samples, and regression analysis to the urban farmers' willingness to pay for compost. The findings indicate that composting is practised at a small scale, most producers use unsorted feedstock materials, aerobic composting is dominant, and the maturation period ranges from four to ten weeks. The carbon content of the compost ranges from 30.8 to 277.1, depending on the type of feedstock applied, and this exceeds the ideal proportions for the C:N ratio. The total nitrogen, pH, organic matter and moisture content are relatively optimal. The levels of heavy metals measured for Mn, Cu, Pb, Cd and Cr6+ in the compost samples are also insignificant. In the urban agriculture sector, chemical fertilizer is the dominant soil input in crop production, but vegetable producers use a combination of fertilizer and other organic inputs, including compost. The willingness to pay for compost depends on income, household size, gender, type of soil inputs, monitoring of soil fertility, the main product of the farm, farming method and farm ownership. Finally, this study recommends collaboration among stakeholders along the waste value chain, awareness creation on the benefits of composting, and addressing the challenges faced by both compost producers and users.

Keywords: Composting, emerging city, organic waste management, urban agriculture.

162 Heat Transfer Analysis of a Multiphase Oxygen Reactor Heated by a Helical Tube in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

In the thermochemical water splitting process by the Cu-Cl cycle, oxygen gas is produced by an endothermic thermolysis process at a temperature of 530 °C. The oxygen production reactor is a three-phase reactor involving cuprous chloride molten salt, copper oxychloride solid reactant and oxygen gas. To achieve optimal performance, the oxygen reactor requires accurate control of the heat transferred to the molten salt and the decomposing solid particles within the thermolysis reactor. In this paper, a scale-up analysis of an oxygen reactor heated by an internal helical tube is performed from the perspective of heat transfer. A heat balance of the oxygen reactor is investigated to analyze the size of the reactor that provides the required heat input for different rates of hydrogen production. It is found that the helical tube wall and the service side constitute the largest thermal resistances of the oxygen reactor system. In the analysis of this paper, the Cu-Cl cycle is assumed to be heated by two types of nuclear reactor: the HTGR and the CANDU SCWR. It is concluded that using the CANDU SCWR requires a heat transfer rate 3-4 times higher than that required when using the HTGR. The effect of the reactor aspect ratio is also studied, and it is found that increasing the aspect ratio decreases the number of reactors, while the rate of this decrease diminishes as the aspect ratio increases. Comparisons between the results of this study and previous results of material balances in the oxygen reactor show that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
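
As a rough illustration of the heat-balance reasoning described above, the sketch below treats the helical tube as thermal resistances in series (inner convection, wall conduction, outer convection) and estimates how many reactors a given thermal demand would require; every numerical value is a placeholder, not a figure from the paper.

```python
import math

# Placeholder inputs for one oxygen reactor heated by an internal helical tube.
h_inner = 5000.0          # W/m^2.K, convection inside the helical tube (service side)
h_outer = 800.0           # W/m^2.K, convection from tube wall to the molten CuCl
k_wall = 16.0             # W/m.K, tube wall conductivity
r_i, r_o = 0.013, 0.016   # m, tube inner/outer radii
L_tube = 40.0             # m, developed helical tube length
dT = 650.0 - 530.0        # K, service fluid temperature minus thermolysis temperature

# Series thermal resistances: inner film, cylindrical wall, outer film.
R_inner = 1.0 / (h_inner * 2 * math.pi * r_i * L_tube)
R_wall = math.log(r_o / r_i) / (2 * math.pi * k_wall * L_tube)
R_outer = 1.0 / (h_outer * 2 * math.pi * r_o * L_tube)
R_total = R_inner + R_wall + R_outer

Q_per_reactor = dT / R_total     # W transferred by one reactor
Q_required = 2.5e6               # W, hypothetical thermolysis heat demand
n_reactors = math.ceil(Q_required / Q_per_reactor)
print(f"Q per reactor = {Q_per_reactor/1e3:.0f} kW, reactors needed = {n_reactors}")
```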

Keywords: Heat transfer, Cu-Cl cycle, hydrogen production, oxygen, clean energy.

161 Morphological Interaction of Porcine Oocyte and Cumulus Cells Study on in vitro Oocyte Maturation Using Electron Microscopy

Authors: M. Areekijseree, W. Pongsawat, M. Pumipaiboon, C. Thepsithar, S. Sengsai, T. Chuen-Im

Abstract:

The morphological interaction of porcine cumulus-oocyte complexes (pCOCs) was investigated under in vitro conditions using electron microscopy (SEM and TEM). A total of 1,923 oocytes were round in shape, surrounded by a zona pellucida with layers of cumulus cells, and ranged between 59.29-202.14 μm in size. They were classified into intact-, multi-, and partial cumulus cell layer oocytes and completely denuded oocytes, at percentages of 22.80%, 32.70%, 18.60%, and 25.90%, respectively. The pCOCs classified as intact- and multi-cumulus cell layer oocytes were further cultured at 37°C with 5% CO2, a 95% air atmosphere and high humidity for 44 h in M199 with Earle's salts supplemented with 10% HTFCS, 2.2 mg/mL NaHCO3, 1 M Hepes, 0.25 mM pyruvate, 15 μg/mL porcine follicle-stimulating hormone, 1 μg/mL LH, 1 μg/mL estradiol with ethanol, and 50 μg/mL gentamycin sulfate. In the electron microscopy study, cumulus cells were found to attach their processes to, and secrete substance from their sac-shaped ends into, the zona pellucida of the oocyte, and they also communicated with neighboring cells through their microvilli at the beginning of the incubation period. It is believed that the cumulus cells communicate with the oocyte by inserting microvilli through this gap into the oocyte cytoplasm before secreting substance, through the sac-shaped end of the microvilli, to inhibit primary oocyte development at prophase I. Morphological changes of the complexes were observed after culturing for 24-44 h. One hundred percent of the cumulus layers were expanded and cumulus cells were peeling off from the oocyte surface. In addition, the round cumulus cells transformed themselves into either an elongated or a columnar shape, with no communication between neighboring cumulus cells. After 44 h of incubation, the diameter of oocytes surrounded by cumulus cells was larger than at 0 h of incubation. The effect of hormones in the culture medium is exerted through their receptors present in the porcine oocyte. It is likely that all morphological changes of the complexes after hormone treatment served to allow maturation of the oocyte. This study demonstrates that the combination of hormones in M199 can promote porcine follicle activation under 44 h in vitro conditions. This culture system should be useful for studying the regulation of early follicular growth and development, especially because these follicles represent a large source of oocytes that could be used in vitro for cell technology.

Keywords: Cumulus cells, electron microscopy (SEM and TEM), in vitro, porcine oocyte.

160 Optimizing Organizational Performance: The Critical Role of Headcount Budgeting in Strategic Alignment and Financial Stability

Authors: Shobhit Mittal

Abstract:

Headcount budgeting stands as a pivotal element in organizational financial management, extending beyond traditional budgeting to encompass strategic resource allocation for workforce-related expenses. This process is integral to maintaining financial stability and fostering a productive workforce, requiring a comprehensive analysis of factors such as market trends, business growth projections, and evolving workforce skill requirements. It demands a collaborative approach, primarily involving the Human Resources (HR) and finance departments, to align workforce planning with an organization's financial capabilities and strategic objectives. The dynamic nature of headcount budgeting necessitates continuous monitoring and adjustment in response to economic fluctuations, business strategy shifts, technological advancements, and market dynamics. Its significance in talent management is also highlighted, aligning financial planning with talent acquisition and retention strategies to ensure a competitive edge in the market. The consequences of incorrect headcount budgeting are explored, showing how it can lead to financial strain, operational inefficiencies, and hindered strategic objectives. Case studies such as IBM's strategic workforce rebalancing and Microsoft's shift for long-term success underscore the importance of aligning headcount budgeting with organizational goals. These examples illustrate that effective headcount budgeting transcends its role as a financial tool, emerging as a strategic element crucial for an organization's success. This necessitates continuous refinement and adaptation to align with evolving business goals and market conditions, highlighting its role as a key driver of organizational success and sustainability.

Keywords: Strategic planning, fiscal budget, headcount planning, resource allocation, financial management, decision-making, operational efficiency, risk management, headcount budget.

159 Plasma Spraying of 316 Stainless Steel on Aluminum and Investigation of Coat/Substrate Interface

Authors: P. Abachi, T. W. Coyle, P. S. Musavi Gharavi

Abstract:

By applying a coating onto a structural component, the corrosion and/or wear resistance requirements of the surface can be fulfilled. Since the adhesion of the coating influences the mechanical integrity of the coat/substrate interface during service, it should be examined accurately. In the present work, the tensile bonding strength of a 316 stainless steel plasma-sprayed coating on an aluminum substrate was determined using tensile adhesion test (TAT) specimens. The interfacial fracture toughness was determined using a four-point bend specimen containing a saw notch and a modified chevron-notched short-bar (SB) specimen. The coating microstructure and fractured specimen surfaces were examined using scanning electron and optical microscopy. The investigation of the coated surface after the tensile adhesion test indicates that the failure mechanism is mostly cohesive and rarely of the adhesive type. The calculated value of the critical strain energy release rate suggests a relatively good interface condition. It seems that the four-point bending test offers a potentially more sensitive means for evaluating the mechanical integrity of coating/substrate interfaces than is possible with the tensile test. The fracture toughness value reported for the modified chevron-notched short-bar specimen test cannot be taken as an absolute value, because its calculation is based on the minimum stress intensity coefficient value suggested for the fracture toughness determination of homogeneous parts in the ASTM E1304-97 standard.

Keywords: Bonding strength, four-point bend test, interfacial fracture toughness, modified chevron-notched short-bar specimen, plasma sprayed coating.

158 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: A. El Harraj, N. Raissouni

Abstract:

The detection of moving objects in video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination-change invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For the experimental tests, we used a standard dataset to evaluate the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
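
A compact sketch of the described pipeline (CLAHE for illumination invariance, a K=5 GMM background model, then morphological clean-up) is shown below using OpenCV; the parameter values and the input video path are illustrative assumptions, not the authors' settings.

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))        # illumination-invariance step
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
bg_model.setNMixtures(5)                                           # K = 5 Gaussians per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("scene.avi")   # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)                                   # limited per-tile histogram equalization
    fg_mask = bg_model.apply(enhanced)                             # GMM foreground mask
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)    # erosion then dilation: remove speckle noise
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)   # fill small holes in the detected blobs
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```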

Keywords: Video surveillance, background subtraction, Contrast Limited Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.

157 Geochemistry of Natural Radionuclides Associated with Acid Mine Drainage (AMD) in a Coal Mining Area in Southern Brazil

Authors: Juliana A. Galhardi, Daniel M. Bonotto

Abstract:

Coal is an important non-renewable energy source and can be associated with radioactive elements. In Figueira city, Paraná state, Brazil, high uranium activity has been recorded near the coal mine that supplies a local thermoelectric power plant. In this context, the radon activity (Rn-222, produced by Ra-226 decay in the natural U-238 series) was evaluated in groundwater, river water and effluents produced by the acid mine drainage in the coal reject dumps. The samples were collected in August 2013 and February 2014 and analyzed at LABIDRO (Laboratory of Isotope and Hydrochemistry), UNESP, Rio Claro city, Brazil, using an alpha spectrometer (AlphaGuard) adjusted to evaluate the mean radon activity concentration in five cycles of 10 minutes. No radon activity concentration exceeded 100 Bq.L-1, a critical value previously established by the World Health Organization. The average radon activity concentration in groundwater was higher than in surface water and in the effluent samples, possibly due to the accumulation of uranium and radium in the aquifer layers, which favors radon trapping. The lower value in the river waters may indicate dilution, and the intermediate value in the effluents may indicate radon absorption in the coal particles of the reject dumps. The results also indicate that the radon activities in the effluents increase with sample acidification, possibly due to higher radium leaching and the subsequent radon transport to the drainage flow. The water samples of the Laranjinha River and the Ribeirão das Pedras stream, which respectively supply Figueira city and receive the mining effluent, exhibited higher pH values upstream of the mine, reflecting the acid mine drainage discharge. The transport of these radionuclides indicates the importance of monitoring their activity concentrations in natural waters due to the risks that radioactivity can pose to human health.

Keywords: Radon, radium, acid mine drainage, coal

156 A Novel Approach to Allocate Channels Dynamically in Wireless Mesh Networks

Authors: Y. Harold Robinson, M. Rajaram

Abstract:

Wireless mesh networking is rapidly gaining popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This increasing popularity is based on two basic facts: ease of deployment and an increase in network capacity, expressed in bandwidth per footage; WMNs do not rely on any fixed infrastructure. Many efforts have been devoted to maximizing the throughput of multi-channel multi-radio wireless mesh networks. Current approaches are based purely on either static or dynamic channel allocation. In this paper, we use a hybrid multi-channel multi-radio wireless mesh networking architecture, where static and dynamic interfaces are built into the nodes. The proposed Dynamic Adaptive Channel Allocation protocol (DACA) considers the optimization of both throughput and delay in the channel allocation. Channel assignment is made codependent with the routing problem in the wireless mesh network and is based on the traffic flow on every link. Temporal and spatial relationships make it necessary to recompute the channel assignment every time the traffic pattern changes in the mesh network. A path metric which captures the available path bandwidth is proposed, together with an efficient routing protocol based on this metric that provides both static and dynamic links. The consistency property guarantees that each node makes an appropriate packet-forwarding decision and balances the control overhead of the network, so that a data packet traverses the right path.

Keywords: Wireless mesh network, spatial time division multiple access, hybrid topology, timeslot allocation.

155 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and tomorrow's Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables several features such as higher sustainability, better quality of service, and affordable electricity tariffs. While load forecasting is comparatively easy yet effective at larger geographic scales, in Smart Micro Grids the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of Gain is introduced in this paper, which innovatively serves as an indicator of short-term prediction capability over a consistent time span. Two models, 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare these three techniques.
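
As a rough illustration of how such a comparison can be set up (not the GreenCom implementation), the sketch below trains a linear regression and a small neural network on lagged loads plus a temperature feature and reports the mean absolute error for 1-hour-ahead forecasting on synthetic data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                   # 60 synthetic days of hourly load
load = 2.0 + np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)
temperature = 15 + 8 * np.sin(2 * np.pi * hours / 24 - 1.0)  # illustrative weather covariate

def make_dataset(load, temp, n_lags=24):
    X, y = [], []
    for t in range(n_lags, len(load) - 1):
        X.append(np.r_[load[t - n_lags:t], temp[t + 1]])     # lagged loads + next-hour temperature
        y.append(load[t + 1])                                # 1-hour-ahead target
    return np.array(X), np.array(y)

X, y = make_dataset(load, temperature)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("linear regression", LinearRegression()),
                    ("neural network", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.3f}")
```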

Keywords: Short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, Gain.

154 Productivity Effect of Urea Deep Placement Technology: An Empirical Analysis from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, Ignatius Tindjina, Stella Obanyi, Tara N. Wood

Abstract:

This study examined the effect of Urea Deep Placement (UDP) technology on the output of irrigated rice farmers in the northern region of Ghana. A multi-stage sampling technique was used to select 142 rice farmers from the Golinga and Bontanga irrigation schemes, around Tamale. A treatment effect model was estimated in two stages: first, to determine the factors that influenced farmers' decision to adopt the UDP technology, and second, to determine the effect of the adoption of the UDP technology on the output of rice farmers. The significant variables that influenced rice farmers' adoption of the UDP technology were the sex of the farmer, land ownership, off-farm activity, extension service, farmer group participation and training. The results also revealed that farm size and the adoption of UDP technology significantly influenced the output of rice farmers in the northern region of Ghana. In addition to the potential of the technology to improve yields, it also presents an employment opportunity for women and youth, who are engaged in the deep placement of Urea Super Granules (USG), as well as in the transplantation of rice. It is recommended that the government of Ghana work closely with the International Fertilizer Development Center (IFDC) to embed the UDP technology in the national agricultural programmes and policies. The study also recommends effective collaboration between the government, through the Ministry of Food and Agriculture (MoFA), and the IFDC to train agricultural extension agents on UDP technology in the rice-producing areas of the country.
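
One common way to operationalize a two-stage treatment effect model is a probit adoption equation followed by an outcome regression that includes the adoption dummy and an inverse Mills ratio correction; the sketch below illustrates that Heckman-style variant on synthetic data and is not necessarily the exact specification estimated by the authors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic stand-in data: one row per rice farmer (variable names and values are illustrative only).
rng = np.random.default_rng(6)
n = 142
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "extension": rng.integers(0, 2, n),
    "training": rng.integers(0, 2, n),
    "farm_size": rng.gamma(2.0, 1.5, n),          # hectares
})
latent = -0.5 + 0.8 * df["extension"] + 0.9 * df["training"] + rng.standard_normal(n)
df["adopt_udp"] = (latent > 0).astype(int)
df["rice_output"] = 2.0 + 1.1 * df["farm_size"] + 0.8 * df["adopt_udp"] + rng.standard_normal(n)

# Stage 1: probit model of the decision to adopt the UDP technology.
Z = sm.add_constant(df[["sex", "extension", "training"]])
probit = sm.Probit(df["adopt_udp"], Z).fit(disp=0)
index = Z @ probit.params                         # linear prediction z'gamma

# Inverse Mills ratio: Heckman-type correction for self-selection into adoption.
imr = np.where(df["adopt_udp"] == 1,
               norm.pdf(index) / norm.cdf(index),
               -norm.pdf(index) / (1 - norm.cdf(index)))

# Stage 2: output regressed on farm size, the adoption dummy and the correction term.
X = sm.add_constant(df[["farm_size", "adopt_udp"]].assign(imr=imr))
outcome = sm.OLS(df["rice_output"], X).fit()
print(outcome.params.round(3))
```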

Keywords: Northern Ghana, output, irrigation rice farmers, treatment effect model, urea deep placement.

153 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as the healthy subject) and sudden cardiac death (SCD subject) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD) and the mean of the RR intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF) and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values of all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified, with an accuracy of 95%, that the proposed algorithm can identify a patient's mortality risk one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and appropriate treatment is given by the doctor.
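
The time-domain part of such an analysis reduces to simple statistics over the RR-interval series, followed by k-NN classification; a minimal sketch with invented RR data standing in for the real recordings is given below.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD and mean RR (all in ms) from a series of RR intervals."""
    rr = np.asarray(rr_ms, float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                 # overall variability
        "rmssd": np.sqrt(np.mean(diffs**2)),    # short-term, beat-to-beat variability
    }

# Toy feature matrix: each row = [mean RR, SDNN, RMSSD] for one subject (values invented).
rng = np.random.default_rng(1)
healthy = np.column_stack([rng.normal(850, 40, 20), rng.normal(140, 20, 20), rng.normal(45, 8, 20)])
scd     = np.column_stack([rng.normal(700, 40, 20), rng.normal(60, 15, 20), rng.normal(15, 5, 20)])
X = np.vstack([healthy, scd])
y = np.r_[np.zeros(20), np.ones(20)]            # 0 = normal sinus rhythm, 1 = SCD

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
test_subject = hrv_time_domain(rng.normal(710, 30, 300))   # simulated RR series, ms
features = [[test_subject["mean_rr"], test_subject["sdnn"], test_subject["rmssd"]]]
print("predicted class:", "SCD risk" if clf.predict(features)[0] == 1 else "healthy")
```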

Keywords: Early stage prediction, heart rate variability, linear and non linear analysis, sudden cardiac death.

152 ZMP Based Reference Generation for Biped Walking Robots

Authors: Kemalettin Erbatur, Özer Koca, Evrim Taşkıran, Metin Yılmaz, Utku Seven

Abstract:

The last fifteen years have witnessed fast improvements in the field of humanoid robotics. The human-like robot structure is more suitable for human environments, with superior obstacle avoidance properties when compared with wheeled service robots. However, walking control for bipedal robots is a challenging task due to their complex dynamics. Stable reference generation plays a very important role in control. The Linear Inverted Pendulum Model (LIPM) and the Zero Moment Point (ZMP) criterion are applied in a number of studies for stable walking reference generation for biped walking robots. This paper follows this main approach too. We propose a natural and continuous ZMP reference trajectory for a stable and human-like walk. The ZMP reference trajectories move forward under the sole of the support foot when the robot body is supported by a single leg. The robot center of mass (CoM) trajectory is obtained from the predefined ZMP reference trajectories by a Fourier series approximation method. The Gibbs phenomenon, a problem common to Fourier approximations of discontinuous functions, is avoided by employing continuous ZMP references. Also, these ZMP reference trajectories possess pre-assigned single and double support phases, which are very useful in experimental tuning work. The ZMP-based reference generation strategy is tested via three-dimensional full-dynamics simulations of a 12-degrees-of-freedom biped robot model. Simulation results indicate that the proposed reference trajectory generation technique is successful.
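
Under the LIPM the ZMP and CoM are linked by p(t) = c(t) − (z_c/g)·c̈(t), so each Fourier harmonic of the CoM equals the corresponding ZMP harmonic divided by (1 + (z_c/g)ω²). The sketch below illustrates this conversion for an assumed smooth periodic ZMP reference; the waveform and constants are illustrative, not the robot parameters used in the paper.

```python
import numpy as np

z_c, g = 0.80, 9.81          # assumed constant CoM height (m) and gravity
T = 1.6                      # assumed walking period (s)
w0 = 2 * np.pi / T
t = np.linspace(0.0, 2 * T, 1000)

# Illustrative continuous, periodic lateral ZMP reference (m).
p_zmp = 0.06 * np.sin(w0 * t) + 0.01 * np.sin(3 * w0 * t)

# Fourier coefficients of the ZMP reference over one period.
n_harmonics = 10
mask = t < T
coeffs = [np.trapz(p_zmp[mask] * np.exp(-1j * k * w0 * t[mask]), t[mask]) * 2 / T
          for k in range(1, n_harmonics + 1)]

# LIPM: p = c - (z_c/g) * c_ddot, so each CoM harmonic is scaled by 1 / (1 + (z_c/g) * (k*w0)^2).
c_com = np.zeros_like(t)
for k, P_k in enumerate(coeffs, start=1):
    scale = 1.0 / (1.0 + (z_c / g) * (k * w0) ** 2)
    c_com += np.real(P_k * scale * np.exp(1j * k * w0 * t))

print(f"ZMP peak-to-peak: {np.ptp(p_zmp):.3f} m; CoM peak-to-peak: {np.ptp(c_com):.3f} m")
```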

Keywords: Biped robot, Linear Inverted Pendulum Model, Zero Moment Point, Fourier series approximation.

151 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims to develop an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are the recognition and management of context information. Measuring many parameters during the transportation period and properly controlling driver work have become a problem. The number of vehicles passing a given point per time unit can be evaluated for drivers in some situations. The collected data are mainly used to establish new trips. The flow of the data is more complex in urban areas. Herein, the movement of freight is reported in detail, including information at street level. When traffic density is extremely high, as in congestion cases, and the traffic speed is very low, data transmission reaches its peak. Different data sets are generated, which depend on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks and mode-based delivery networks; the last one includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above-stated network types to another, more data may be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers by including the assessment of the multi-component infrastructure needed for the delivery of freight according to the network type. The construction of such a methodology is required to evaluate data flow conditions and overloads, and to minimize the time gaps in data reporting. The results obtained show the potential of the proposed methodological approach to support management and decision-making processes, with the functionality of incorporating networking specifics, by helping to minimize overloads in data reporting.

Keywords: Transportation networks, freight delivery, data flow, monitoring, e-services.

150 Sustainability Analysis and Quality Assessment of Rainwater Harvested from Green Roofs: A Review

Authors: Mst. Nilufa Sultana, Shatirah Akib, Muhammad Aqeel Ashraf, Mohamed Roseli Zainal Abidin

Abstract:

Most people today are aware that global climate change is not just a scientific theory but also a fact with worldwide consequences. Global climate change is due to rapid urbanization, industrialization, high population growth and the current vulnerability of climatic conditions. Water is becoming scarce as a result of global climate change. To mitigate the problems arising from global climate change and its drought effects, harvesting rainwater from green roofs, an environmentally friendly and versatile technology, is becoming one of the preferred measures and is gaining attention in Malaysia. This paper addresses the sustainability of green roofs and examines the quality of water harvested from green roofs in comparison to rainwater, together with the factors that affect the quality of such water, for example roofing materials, climatic conditions, rainfall frequency and the first flush. A green roof was installed at the Humid Tropic Centre (HTC), the site of the monitoring programme of the urban Stormwater Management Manual for Malaysia (MSMA) Eco-Hydrological Project in Kuala Lumpur, and the rainwater was harvested and evaluated on the basis of four parameters, i.e. conductivity, dissolved oxygen (DO), pH and temperature. These parameters were found to fall between Class I and Class III of the Interim National Water Quality Standards (INWQS) and the Water Quality Index (WQI). Some preliminary treatment, such as disinfection and filtration, would likely improve these parameters to Class I. This review clearly indicates that there is a need for more research addressing other microbiological and chemical quality parameters to ensure that the harvested water is suitable for use as potable water for domestic purposes. The change in all physical, chemical and microbiological parameters with respect to storage time will be a major focus of future studies in this field.

Keywords: Green roofs, INWQS, MSMA-SME, Rainwater harvesting.

149 Nanoparticles-Protein Hybrid Based Magnetic Liposome

Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek

Abstract:

Liposomes play an important role in medical and pharmaceutical science, e.g. as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, consisting of a spherical lipid bilayer and an aqueous inner compartment. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues; these liposome preparations contain encapsulated drug components and finely dispersed magnetic particles. Liposomes are useful in terms of biocompatibility, biodegradability and low toxicity, and their biodistribution can be controlled by changing the size, lipid composition and physical characteristics. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to continuously release the entrapped substrate, thus being useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles. They are applied as contrast agents for magnetic resonance imaging (MRI). The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology. A green-synthesized magnetite nanoparticle-protein hybrid was produced by treating iron(III)/iron(II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, which include flavonoids, phenolic compounds, cardiac glycosides, proteins and sugars, act as reducing as well as stabilizing agents and prevent agglomeration. The magnetite nanoparticle-protein hybrid was trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and the vesicles were shown to be driven under a magnetic field, confirming the formation of magnetic liposomes. Chemical characterization of the stealth magnetic liposomes was performed by breaking the liposomes and releasing the magnetic nanoparticles. The presence of iron was confirmed by colour complex formation with KSCN and a UV-Vis study using a Cary 60 spectrophotometer (Agilent). These magnet-driven liposomes based on a nanoparticle-protein hybrid can serve as smart vesicles for targeted drug delivery.

Keywords: Nanoparticles-Protein Hybrid, Magnetic Liposome.

148 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from the outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, it was built on psychological experiments, and thus its relation to brain activity has so far been unclear. To verify the validity of the MHP and propose our own model from the viewpoint of neuroscience, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, first, experiments were conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as N100 and P300 were measured by using EEG. By comparing the cycle times predicted by the MHP and the latencies of the ERPs, it was found that N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, by conducting an additional experiment, it was revealed that P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed by experiments using Japanese Hiragana characters, i.e. Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite the difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information processing processes in the brain. Based on these results, our model was proposed, which reflects response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of N100; the cognitive processor, from N100 to P300; and the decision-action processor, from P300 to the response. Using our model, an application system which reflects brain activity can be established.
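
A minimal way to extract the ERP latencies used in such a comparison is to average the epochs and locate the N100 trough and P300 peak in their conventional time windows, as sketched below on synthetic epoched EEG rather than the recorded data.

```python
import numpy as np

fs = 500                                   # sampling rate, Hz
t = np.arange(-0.1, 0.8, 1 / fs)           # epoch time axis: -100 ms to 800 ms around the stimulus

# Synthetic epochs (trials x samples): an N100-like trough plus a P300-like peak plus noise.
rng = np.random.default_rng(2)
erp = -4e-6 * np.exp(-((t - 0.10) / 0.02) ** 2) + 6e-6 * np.exp(-((t - 0.32) / 0.06) ** 2)
epochs = erp + 2e-6 * rng.standard_normal((40, t.size))

def peak_latency(evoked, t, t_min, t_max, polarity):
    """Latency (s) of the most negative/positive deflection inside a search window."""
    window = (t >= t_min) & (t <= t_max)
    idx = np.argmin(evoked[window]) if polarity == "neg" else np.argmax(evoked[window])
    return t[window][idx]

evoked = epochs.mean(axis=0)               # average over trials
n100 = peak_latency(evoked, t, 0.08, 0.15, "neg")
p300 = peak_latency(evoked, t, 0.25, 0.50, "pos")
print(f"N100 latency ~ {n100*1e3:.0f} ms, P300 latency ~ {p300*1e3:.0f} ms")
```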

Keywords: Brain activity, EEG, information processing model, model human processor.

147 Why Are Entrepreneurs Resistant to E-tools?

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Latvia ranks fourth in the world in terms of broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a social media phenomenon, as 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other parameters prove that consumers and companies are actively using the Internet.

However, after analyzing in a number of studies how enterprises employ the e-environment, namely e-environment tools, the authors arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have addressed the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).

The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.

The theoretical and methodological background of the research is formed by scientific research and publications, material from the mass media and professional literature, and statistical information from official institutions, as well as information collected by the authors during the survey.

Keywords: E-environment, e-environment tools, technology acceptance model, factors.

146 Analysis of the Operational Performance of Three Unconventional Arterial Intersection Designs: Median U-Turn, Superstreet and Single Quadrant

Authors: Hana Naghawi, Khair Jadaan, Rabab Al-Louzi, Taqwa Hadidi

Abstract:

This paper aims to evaluate and compare the operational performance of three Unconventional Arterial Intersection Designs (UAIDs), the Median U-Turn, the Superstreet, and the Single Quadrant Intersection, using real traffic data. For this purpose, the heavily congested signalized intersection of Wadi Saqra in Amman was selected. The effect of implementing each of the proposed UAIDs was evaluated not only on the isolated Wadi Saqra signalized intersection but also on the arterial road, including both surrounding intersections. The operational performance of the isolated intersection was based on the level of service (LOS), expressed in terms of control delay and the volume-to-capacity ratio. On the other hand, the measures used to evaluate the operational performance on the arterial road included traffic progression, stopped delay per vehicle, number of stops and travel speed. The analysis was performed using the SYNCHRO 8 microscopic software. The simulation results showed that all three selected UAIDs outperformed the conventional intersection design in terms of control delay, but only the Single Quadrant Intersection design improved the main intersection LOS from F to B. The results also indicated that the Single Quadrant Intersection design resulted in an increase in average travel speed of 52% and a decrease in average stopped delay of 34% on the selected corridor when compared to the corridor with the conventional intersection design. On the basis of these results, it can be concluded that the Median U-Turn and the Superstreet do not perform best under heavy traffic volumes.

Keywords: Median U-turn, single quadrant, superstreet, unconventional arterial intersection design.

145 An Intelligent Combined Method Based on Power Spectral Density, Decision Trees and Fuzzy Logic for Hydraulic Pumps Fault Diagnosis

Authors: Kaveh Mollazade, Hojat Ahmadi, Mahmoud Omid, Reza Alimardani

Abstract:

Recently, the issue of machine condition monitoring and fault diagnosis as part of the maintenance system has become global, due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. The aim of this work is to investigate the effectiveness of a new fault diagnosis method based on the power spectral density (PSD) of vibration signals in combination with decision trees and a fuzzy inference system (FIS). To this end, a series of studies was conducted on an external gear hydraulic pump. After a test under normal conditions, a number of different machine defect conditions were introduced for three working levels of pump speed (1000, 1500, and 2000 rpm), corresponding to (i) journal bearing with inner face wear (BIFW), (ii) gear with tooth face wear (GTFW), and (iii) journal bearing with inner face wear plus gear with tooth face wear (B&GW). Features were extracted from the PSD values of the vibration signal using descriptive statistical parameters. The J48 algorithm was used as a feature selection procedure to select pertinent features from the data set. The output of the J48 algorithm was employed to produce the crisp if-then rules and membership function sets. The structure of the FIS classifier was then defined based on the crisp sets. In order to evaluate the proposed PSD-J48-FIS model, the data sets obtained from the vibration signals of the pump were used. Results showed that the total classification accuracy for the 1000, 1500, and 2000 rpm conditions was 96.42%, 100%, and 96.42%, respectively. The results indicate that the combined PSD-J48-FIS model has potential for the fault diagnosis of hydraulic pumps.
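
The feature extraction and selection steps can be prototyped in a few lines: estimate each record's PSD with Welch's method, summarize it with descriptive statistics, and let a decision tree rank the features. The sketch below uses simulated vibration signals and scikit-learn's CART tree as a stand-in for WEKA's J48, and omits the fuzzy inference stage.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew
from sklearn.tree import DecisionTreeClassifier

FS = 10_000  # Hz, assumed vibration sampling rate

def psd_features(signal):
    """Descriptive statistics of the power spectral density of one vibration record."""
    _, pxx = welch(signal, fs=FS, nperseg=1024)
    return [pxx.mean(), pxx.std(), pxx.max(), skew(pxx), kurtosis(pxx)]

# Simulated records for two conditions (normal vs. a worn-bearing defect); real data would be measured.
rng = np.random.default_rng(3)
def record(defect):
    t = np.arange(0, 1.0, 1 / FS)
    base = np.sin(2 * np.pi * 160 * t)                        # shaft-related component
    fault = 0.6 * np.sin(2 * np.pi * 2300 * t) if defect else 0.0
    return base + fault + 0.3 * rng.standard_normal(t.size)

X = np.array([psd_features(record(defect)) for defect in [0] * 30 + [1] * 30])
y = np.array([0] * 30 + [1] * 30)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
names = ["mean", "std", "max", "skew", "kurtosis"]
print("feature importances:", dict(zip(names, tree.feature_importances_.round(2))))
print("training accuracy:", tree.score(X, y))
```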

Keywords: Power Spectral Density, Machine Condition Monitoring, Hydraulic Pump, Fuzzy Logic.

144 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Authors: Florin Pop

Abstract:

Simulation is a very powerful method for high-performance and high-quality design in distributed systems, and may now be the only one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard and often even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time-consuming. Scalability, reliability and fault-tolerance become important requirements for distributed systems in order to support distributed computation. A distributed system with such characteristics is called dependable. Large environments, like the Cloud, offer unique advantages, such as low cost, dependability and QoS satisfaction for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while at the same time considering the independent capabilities of resources. This analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation. The simulator is MONARC, a powerful tool for large-scale distributed systems simulation. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and support empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.

Keywords: Scheduling, Simulation, Performance Evaluation, QoS, Distributed Systems, MONARC

143 Game-Theory-Based on Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical in pace with the increasing growth of value-added multimedia services in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-plus-noise ratio can be noticeably improved through the spectrum allocation scheme and that the users' downlink quality of service can be satisfied. Moreover, the average spectrum efficiency of the cellular network can be significantly improved, as the simulation results show.
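
As a loose sketch of how such a non-cooperative channel-selection game can be iterated to an equilibrium, consider the best-response loop below; the negative-log interference term, the topology and the stopping rule are assumptions made for illustration, not the utility function defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_fbs, n_channels = 8, 4
pos = rng.uniform(0, 100, size=(n_fbs, 2))           # random femto base station positions, m
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
channel = rng.integers(0, n_channels, size=n_fbs)    # initial random channel choices

def interference(i, j, dist, d_max=150.0):
    """Assumed negative-log interference term: larger when co-channel stations are closer."""
    return -np.log(dist[i, j] / d_max)

def utility(i, ch, channel, dist):
    """Payoff of station i on channel ch: interference comes only from stations sharing that channel."""
    return -sum(interference(i, j, dist) for j in range(n_fbs) if j != i and channel[j] == ch)

# Best-response dynamics: each player switches to its best channel until no one wants to deviate.
for _ in range(50):
    changed = False
    for i in range(n_fbs):
        best = max(range(n_channels), key=lambda ch: utility(i, ch, channel, dist))
        if best != channel[i]:
            channel[i], changed = best, True
    if not changed:
        break

print("equilibrium channel assignment:", channel.tolist())
```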

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

142 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members

Authors: Sami W. Tabsh

Abstract:

The strength of reinforced concrete depends on the member dimensions and the material properties. The properties of the concrete and steel materials are not constants but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction, and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths below the nominal values, beyond those accounted for in the structural design codes, are assessed. The limit states considered are flexure, shear and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on these outcomes, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the target reliability embedded in structural design codes results in lower structural safety for beams than for columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
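
As an illustration of how a reliability index of the kind reported above can be estimated, the following is a minimal sketch of a Monte Carlo computation of beta for a simple resistance-versus-load limit state. The distributions, bias factors and coefficients of variation are invented placeholders, not the probabilistic resistance and load models compiled in the paper, and the sketch does not reproduce the ACI 318-11 limit states themselves.

```python
# Minimal sketch: estimate a reliability index beta by Monte Carlo simulation
# for a generic limit state R >= D + L. All statistical parameters are
# illustrative placeholders, not the models used in the paper.
import math
import random
from statistics import NormalDist

def reliability_index(n=200_000, seed=1):
    rng = random.Random(seed)
    # nominal design values (illustrative): resistance sized for 1.2 D_n + 1.6 L_n
    Dn, Ln = 1.0, 0.5
    Rn = 1.2 * Dn + 1.6 * Ln
    # assumed resistance model: lognormal with bias 1.10 and COV 0.12
    bias, cov = 1.10, 0.12
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = math.log(bias * Rn) - 0.5 * sigma_ln ** 2
    failures = 0
    for _ in range(n):
        R = rng.lognormvariate(mu_ln, sigma_ln)   # resistance
        D = rng.gauss(1.05 * Dn, 0.10 * Dn)       # dead load effect
        L = rng.gauss(1.00 * Ln, 0.25 * Ln)       # live load effect (simplified as normal)
        if R < D + L:
            failures += 1
    pf = max(failures, 1) / n                     # guard against zero observed failures
    return -NormalDist().inv_cdf(pf)              # beta = -Phi^{-1}(P_f)

if __name__ == "__main__":
    print(f"estimated reliability index beta = {reliability_index():.2f}")
```

Re-running such a simulation with different live-to-dead load ratios or with degraded material statistics is the kind of sensitivity study summarized in the abstract.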

Keywords: Code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2562
141 The Flashnews as a Commercial Session of Political Marketing: The Content Analysis of the Embedded Political Narratives in Non-Political Media Products

Authors: Zsolt Szabolcsi

Abstract:

Political communication in Hungary has undergone a significant change in the 2010s. One element of this transformation is the Flashnews. This media product was launched in March 2015, and since then 40-50 blocks have been broadcast daily on 5 channels. Flashnews blocks are condensed news sessions containing a summary of political narratives. Each block starts with the narrator's introduction, then usually four news topics are presented, and finally the narrator concludes the block. A block lasts only one minute and therefore offers only a fleeting glimpse of the main narratives of political communication at the time. Beyond its rapid pace, what makes the Flashnews difficult to avoid is that the blocks always occupy the first position in the commercial break of a non-political media product. Although it is only one minute long, its significance is high. The content of the Flashnews reflects the main governmental narratives, and the Flashnews is therefore part of the agenda-setting capacity of political communication. It reaches media consumers who have limited knowledge of and interest in politics and whose use of media products is not politically related. For this audience, the Flashnews pops up in the same way as commercials do. Owing to its structure and appearance, the impact of the Flashnews appears to be similar to that of commercials embedded in the breaks of media products: it activates existing knowledge constructs, builds up associational links, and maintains their presence in a way of which the recipient is not aware. The research aims to examine the extent to which the Flashnews and the main news narratives are identical in content. This aim is pursued through a content analysis of the two news products, examining the Flashnews and the evening news during major sport events from 2016 to 2018. The initial hypothesis of the research is that the Flashnews contributes to the news management techniques used for the effective articulation of political narratives on public service media channels.

Keywords: Flashnews, political communication, political marketing, news management.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 567
140 Biodegradation of PCP by the Rhizobacteria Isolated from Pentachlorophenol-tolerant Crop Species

Authors: Avita K. Marihal, K.S. Jagadeesh, Sarita Sinha

Abstract:

Pentachlorophenol (PCP) is a polychlorinated aromatic compound that is widespread in industrial effluents and is considered a serious pollutant. Among the variety of industrial effluents encountered, those from the tanning industry are particularly important and have a serious pollution potential. PCP is also formed unintentionally in the effluents of the paper and pulp industries. It is highly persistent in soils and is lethal to a wide variety of beneficial microorganisms and insects, as well as to human beings and animals. The natural processes that break down toxic chemicals in the environment have therefore become the focus of much attention in the effort to develop safe and environment-friendly deactivation technologies. Microbes and plants are among the most important biological agents that remove and degrade waste materials and enable their recycling in the environment. The present investigation was carried out with the aim of developing a microbial system for the bioremediation of PCP-polluted soils. A number of plant species were evaluated for their ability to tolerate different concentrations of PCP in the soil. The experiment was conducted for 30 days under pot culture conditions. The toxic effect of PCP on plants was studied by monitoring seed germination, plant growth and biomass. As the concentration of PCP was increased to 50 ppm, the inhibition of seed germination, plant growth and biomass also increased. Although PCP had a negative effect on all plant species tested, maize and groundnut showed the maximum tolerance to PCP; other tolerant crops included wheat, safflower, sunflower and soybean. From the rhizosphere soil of the tolerant seedlings, as many as twenty-seven PCP-tolerant bacteria were isolated: 8 from soybean, 3 from sunflower, 8 from safflower, 2 from maize, and 3 each from groundnut and wheat. They were screened for their PCP-degradation potential. HPLC analyses of PCP degradation revealed that the isolate MAZ-2 degraded PCP completely; the isolate MAZ-1 was the next best, with 90 per cent PCP degradation. These strains hold promise for use in the bioremediation of PCP-polluted soils.

Keywords: Biodegradation, pentachlorophenol, rhizobacteria.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1999
139 Ingenious Eco-Technology for Transforming Food and Tanneries Waste into a Soil Bio-Conditioner and Fertilizer Product Used for Recovery and Enhancement of the Productive Capacity of the Soil

Authors: Petre Voicu, Mircea Oaida, Radu Vasiu, Catalin Gheorghiu, Aurel Dumitru

Abstract:

The present work deals with the way in which food and tannery waste can be used in agriculture. In the absence of efficient recycling technologies, appreciable quantities of residual organic waste currently accumulate and are put to use only very rarely, and then only after long storage in landfills. The main disadvantages of the long storage of organic waste are the unpleasant smell, the high content of pathogenic agents, and the high water content. The release of such enormous quantities demands solutions that avoid environmental pollution. The approach practiced by us and presented in this paper consists of processing this waste in dedicated installations, testing it in pilot experimental plots, and later applying it to agricultural land without harming the quality of the soil, the agricultural crops, or the environment. The current crisis of raw materials and energy also raises particular problems in the field of organic waste valorization, an activity that takes place with low energy consumption. At the same time, the composition of these wastes recommends them as useful secondary resources in agriculture. The transformation of food scraps and other concentrated organic residues thus acquires a new orientation, in which these materials are seen as important secondary resources. The utilization of food and tannery waste in agriculture is also encouraged by the growing shortage of chemical fertilizers and the continuous increase in their price, at a time when soils require increased amounts of fertilizer in order to obtain high, stable, and profitable production. The approach also takes into account the need to maintain and increase the humus content of the soil, as an essential factor of its fertility, a source and reserve of nutrients and microelements, and an important factor in increasing the soil's buffering capacity; it likewise supports a more sparing use of chemical fertilizers and improves soil structure and water permeability, with positive effects on the quality of agricultural operations and on preventing an excess and/or deficit of soil moisture.

Keywords: Organic residue, food and tannery waste, fertilizer, soil.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 139