Search results for: Complex-valued signal processing
Paper Count: 2483

113 Innovative Waste Management Practices in Remote Areas

Authors: Dolores Hidalgo, Jesús M. Martín-Marroquín, Francisco Corona

Abstract:

Municipal waste consists of a variety of items that are discarded every day by the population. It is usually collected by municipalities and includes waste generated by households, commercial activities (local shops) and public buildings. The composition of municipal waste varies greatly from place to place, being mostly related to levels and patterns of consumption, rates of urbanization, lifestyles, and local or national waste management practices. Each year, a huge amount of resources is consumed in the EU and, consequently, a huge amount of waste is produced. The environmental problems arising from the management and processing of these waste streams are well known, and include impacts on land, water and air. The situation in remote areas is even worse. Difficult access when climatic conditions are adverse, remoteness from centralized municipal treatment systems and dispersion of the population are all factors that make remote areas a real municipal waste treatment challenge. Furthermore, the scope of the problem increases significantly because of the total lack of awareness of the existing risks in these areas, together with the poor implementation of a culture of waste minimization and responsible recycling. The aim of this work is to analyze the existing situation in remote areas with reference to the production of municipal waste and to evaluate the efficiency of different management alternatives. Ideas for improving waste management in remote areas include, for example: implementing self-management systems for the organic fraction; establishing door-to-door collection models; promoting small-scale treatment facilities; and adjusting the corresponding rates of waste generation.

Keywords: Door-to-door collection, islands, isolated areas, municipal waste, remote areas, rural communities.

112 Corrosion Analysis and Interfacial Characterization of Al – Steel Metal Inert Gas Weld - Braze Dissimilar Joints by Micro Area X-Ray Diffraction Technique

Authors: S. S. Sravanthi, Swati Ghosh Acharyya

Abstract:

Automotive lightweighting is of major importance at present due to its contribution to improved fuel economy and reduced environmental pollution. Various arc welding technologies are being employed in the production of automobile components with reduced weight. The present study is of practical importance since it involves the preferential substitution of zinc-coated mild steel with a lightweight alloy such as 6061 aluminium by means of the Gas Metal Arc Welding (GMAW) – brazing technique at different processing parameters. However, the fabricated joints showed the generation of an Al–Fe layer at the interfacial regions, which was confirmed by scanning electron microscopy and energy dispersive spectroscopy. These Al–Fe compounds not only affect the mechanical strength, but also predominantly deteriorate the corrosion resistance of the joints. Hence, it is essential to understand the phases formed in this layer and their crystal structure. The micro area X-ray diffraction technique has been used exclusively for this study. Moreover, crevice corrosion analysis at the joint interfaces was carried out by exposing the joints to 5 wt.% FeCl3 solution at regular time intervals as per ASTM G 48-03. The joints showed decreased crevice corrosion resistance with increased heat intensity. The inner surfaces of the welds showed severe oxide cracking and remarkable weight loss when exposed to concentrated FeCl3. The weight loss increased with decreased filler wire feed rate and increased heat intensity.

Keywords: Automobiles, welding, corrosion, lap joints, Micro XRD.

111 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions

Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang

Abstract:

Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization; the different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, to the authors' best knowledge, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip flow model, implemented through Ansys Fluent User Defined Functions (UDFs), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied through a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and hydrodynamics. Furthermore, the existing Gorosko correlation that predicts bed expansion is not applicable under reduced pressure conditions.

Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
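
Where the gas remains in the continuum regime, a quick order-of-magnitude check of the minimum fluidization velocity can be made from a standard correlation. The sketch below is a minimal illustration, assuming the Wen & Yu (1966) form and ideal-gas scaling of density with pressure; the particle properties are hypothetical, and the correlation deliberately ignores the slip flow correction that is the paper's subject, so it should not be read as reproducing the authors' model.

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g, g=9.81):
    """Estimate minimum fluidization velocity (m/s) from the Wen & Yu (1966)
    correlation: Re_mf = sqrt(33.7**2 + 0.0408*Ar) - 33.7."""
    Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu_g**2   # Archimedes number
    Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
    return Re_mf * mu_g / (rho_g * d_p)

# Illustrative values only: 200 um sand-like particles in air, with gas density
# scaled linearly with absolute pressure (ideal gas at fixed temperature).
for p_bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    rho_g = 1.2 * p_bar / 1.01   # kg/m^3, ideal-gas scaling
    print(f"{p_bar:5.2f} bar -> Umf ~ {u_mf_wen_yu(200e-6, 2600.0, rho_g, 1.8e-5):.4f} m/s")
```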

110 Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Authors: Dr. H. B. Kekre, Sudeep D. Thepade

Abstract:

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image of the desired view from a set of partial images. The paper presents a solution to one of the problems of panorama formation, where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale images and fuse them to get a grayscale panorama. But in a multihued world, a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and transferring them to the grayscale parts of the panorama. So first the grayscale image parts are colored with the help of the color image parts, and then all parts are fused to construct the panorama image. The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, the color palette is prepared using pixel windows of some size taken from the color image parts. Then the grayscale image part is divided into pixel windows of the same size. For every window of the grayscale image part the palette is searched for equivalent color values, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search by using Kekre's fast search technique. After coloring the grayscale image pieces, the next job is the fusion of all these pieces to obtain the panoramic view. The correlation coefficient is used for similarity estimation between partial images.

Keywords: Panoramic View, Similarity Estimate, Color Transfer, Color Palette, Kekre's Fast Search, Kekre's LUV
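
As a rough illustration of the palette idea, the sketch below builds a palette of (luminance, mean colour) pairs from windows of a colour part and colours each grayscale window from its nearest-luminance entry. It is a simplification under stated assumptions: plain RGB rather than Kekre's LUV, a sorted-key binary search standing in for Kekre's fast search, and an invented brightness-preserving scaling; it is not the authors' exact method.

```python
import numpy as np

def build_palette(color_img, w=2):
    """Collect (luminance key, mean RGB) pairs from w x w windows of a color part."""
    H, W, _ = color_img.shape
    entries = []
    for y in range(0, H - w + 1, w):
        for x in range(0, W - w + 1, w):
            block = color_img[y:y+w, x:x+w].reshape(-1, 3).mean(axis=0)
            lum = block @ np.array([0.299, 0.587, 0.114])   # luminance key
            entries.append((lum, block))
    keys = np.array([e[0] for e in entries])
    rgbs = np.array([e[1] for e in entries])
    order = np.argsort(keys)          # sorted keys allow binary search
    return keys[order], rgbs[order]

def colorize(gray_img, keys, rgbs, w=2):
    """Color each w x w grayscale window with the nearest-luminance palette RGB."""
    out = np.zeros(gray_img.shape + (3,))
    for y in range(0, gray_img.shape[0] - w + 1, w):
        for x in range(0, gray_img.shape[1] - w + 1, w):
            g = gray_img[y:y+w, x:x+w].mean()
            i = np.clip(np.searchsorted(keys, g), 0, len(keys) - 1)  # fast lookup
            scale = g / max(keys[i], 1e-6)   # preserve the window's own brightness
            out[y:y+w, x:x+w] = np.clip(rgbs[i] * scale, 0, 255)
    return out.astype(np.uint8)

# Example: color a random grayscale part from a random color part
rng = np.random.default_rng(0)
color_part = rng.integers(0, 256, (64, 64, 3)).astype(float)
gray_part = rng.integers(0, 256, (64, 64)).astype(float)
keys, rgbs = build_palette(color_part)
print(colorize(gray_part, keys, rgbs).shape)   # (64, 64, 3)
```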

109 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar level, the effects of which might culminate in diabetes mellitus, is becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device, such as a wristwatch, that serves as a monitoring tool to alert those living with high blood glucose to the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage (including a bias), 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language and run for 1000 epochs to bring the errors down to the barest minimum. The internal circuitry of the device comprises the compatible hardware that matches the nature of each of the input neurons. Light emitting diodes (LEDs) of red, green, and yellow are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional methods of carrying out blood tests.

Keywords: Accu-Chek, diabetes, neural network, pattern recognition.
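
For illustration only, the snippet below wires up the stated 8-15-10 topology as a plain feed-forward pass. The paper's implementation is in Java with backpropagation over 1000 epochs; the random weights, sigmoid activation, and binary input pattern here are assumptions, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the described 8-15-10 topology
# (8 inputs incl. bias, 15 hidden, 10 output "symptom" neurons).
W1 = rng.normal(scale=0.5, size=(8, 15))
W2 = rng.normal(scale=0.5, size=(15, 10))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass through the 8-15-10 network."""
    h = sigmoid(x @ W1)
    return sigmoid(h @ W2)

x = rng.integers(0, 2, size=8).astype(float)  # binary (XOR-style) input pattern
print(forward(x))  # 10 output activations, thresholded to flag symptom classes
```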

108 Effects of Reclaimed Agro-Industrial Wastewater for Long-Term Irrigation of Herbaceous Crops on Soil Chemical Properties

Authors: E. Tarantino, G. Disciglio, G. Gatta, L. Frabboni, A. Libutti, A. Tarantino

Abstract:

Worldwide, about two-thirds of industrial and domestic wastewater effluent is discharged without treatment, which can cause contamination and eutrophication of the water. For Mediterranean countries in particular, irrigation with treated wastewater would mitigate water stress and support the agricultural sector. Changing global weather patterns will make the situation worse, due to increased susceptibility to drought, which can cause major environmental, social, and economic problems. The study was carried out in the open field in an intensive agricultural area of the Apulian region in Southern Italy, where freshwater resources are often scarce. As well as providing a water resource, irrigation with treated wastewater represents a significant source of nutrients for soil–plant systems. However, the use of wastewater might have further effects on the soil. This study thus investigated the long-term impact of irrigation with reclaimed agro-industrial wastewater on the chemical characteristics of the soil. Two crops (processing tomato and broccoli) were cultivated in succession in Stornarella (Foggia) over four years, from 2012 to 2016, using two types of irrigation water: groundwater and tertiary treated agro-industrial wastewater that had undergone an activated sludge process, sedimentation filtration, and UV radiation. Chemical analyses were performed on the irrigation waters and soil samples. The treated wastewater was characterised by high levels of several chemical parameters, including TSS, EC, COD, BOD5, Na+, Ca2+, Mg2+, NH4-N, PO4-P, K+, SAR and CaCO3, as compared with the groundwater. However, despite these higher levels, the mean content of several chemical parameters in the soil did not show relevant differences between the two irrigation treatments.

Keywords: Agro-industrial wastewater, broccoli, long-term re-use, tomato.

107 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage

Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves

Abstract:

Apple bruise damage from harvesting, handling, transporting and sorting is considered to be the major source of reduced fruit quality, resulting in loss of profits for the entire fruit industry. The three factors which can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, prediction of the level of damage, stress distribution and deformation of the fruit under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and depends on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple fruit were determined using mechanical tests. Preliminary results showed different values for the Young's modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple fruit, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits and vegetables during processing, which will allow the introduction of new export destinations and consequently an increase in the economic profits of this sector.

Keywords: Apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit bruise damage, electronic device.
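
To give a feel for the numbers involved in a drop impact, the sketch below estimates the peak contact force from an energy balance under a Hertzian contact assumption. Every parameter (apple mass, radius, effective modulus) is an illustrative guess rather than a value from the paper, and real apples deviate from Hertz theory once bruising starts.

```python
import math

def hertz_peak_force(m, h, R, E_star, g=9.81):
    """Peak contact force for a sphere of mass m dropped from height h onto a
    rigid plane, via the Hertz energy balance mgh = (8/15)*E*sqrt(R)*d_max**2.5."""
    d_max = (15.0 * m * g * h / (8.0 * E_star * math.sqrt(R))) ** 0.4
    return (4.0 / 3.0) * E_star * math.sqrt(R) * d_max ** 1.5

# Illustrative values: 0.2 kg apple, 4 cm contact radius, effective modulus ~3 MPa.
for h in (0.25, 0.5, 1.0):
    print(f"h = {h:4.2f} m -> peak force ~ {hertz_peak_force(0.2, h, 0.04, 3e6):.1f} N")
```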

106 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.

Keywords: Fault detection, inverse simulation, rover, ground robot.
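
The core of the idea fits in a few lines: run the plant model backwards to recover the input that explains the measured output, then compare with the commanded input. The sketch below does this for a hypothetical first-order toy plant; all dynamics, gains, and the injected fault size are assumptions for illustration, and the paper's rover model is far richer.

```python
import numpy as np

# Toy first-order plant: v[k+1] = v[k] + dt*(K*u[k] - c*v[k]).
dt, K, c = 0.1, 2.0, 0.5

def step(v, u):
    return v + dt * (K * u - c * v)

def inverse_step(v, v_next):
    """Invert the model: recover the input that takes v to v_next."""
    return ((v_next - v) / dt + c * v) / K

# Commanded input with a step "actuator fault" injected at k = 50.
u_cmd = np.ones(100) * 0.8
u_act = u_cmd.copy()
u_act[50:] -= 0.3                       # loss of actuator effectiveness

v = np.zeros(101)
for k in range(100):
    v[k+1] = step(v[k], u_act[k])       # measured response

u_inv = np.array([inverse_step(v[k], v[k+1]) for k in range(100)])
residual = u_cmd - u_inv                # InvSim input residual
print(f"residual before fault: {residual[:50].mean():+.3f}")
print(f"residual after  fault: {residual[50:].mean():+.3f}")  # clear step change
```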

105 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only – which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they perform best, compared with a public dataset on sentiment analysis that is further used to cross-examine the models' results.

Keywords: Deep learning, data mining, gender prediction, MOOCs.
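
As a point of reference for the sequence-only baseline discussed above, a minimal PyTorch LSTM comment classifier might look like the sketch below. The vocabulary size, dimensions, and two-class head are assumptions; the syntax-aware models (tree-LSTM, SPINN, SATA) are not shown.

```python
import torch
import torch.nn as nn

class GenderLSTM(nn.Module):
    """Minimal LSTM baseline for comment-based gender prediction."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)          # two classes: female / male

    def forward(self, token_ids):
        x = self.emb(token_ids)                  # (batch, seq, emb)
        _, (h_n, _) = self.lstm(x)               # final hidden state
        return self.out(h_n[-1])                 # class logits

model = GenderLSTM(vocab_size=20000)
dummy = torch.randint(1, 20000, (4, 50))         # 4 comments, 50 tokens each
print(model(dummy).shape)                        # torch.Size([4, 2])
```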

104 The Effect of Porous Alkali Activated Material Composition on Buffer Capacity in Bioreactors

Authors: G. Bumanis, D. Bajare

Abstract:

With the demand for primary energy continuously growing, the search for renewable and efficient energy sources has been high on the agenda of our society. One of the most promising energy sources is biogas technology. Residues from the dairy industry and milk processing can be used in biogas production; however, low efficiency and high cost impede the wide application of this technology. One of the main problems is the management and conversion of organic residues through the anaerobic digestion process, which is characterized by an acidic environment due to the low pH of whey (<6), so that an additional pH control system is required. The low buffering capacity of whey is responsible for the rapid acidification in biological treatments; therefore, alkali activated material is a promising solution to this problem. Alkali activated material is formed from SiO2- and Al2O3-rich materials under a highly alkaline solution. After the material structure forming process is completed, free alkalis remain in the structure of the material, where they are available for leaching and can provide buffer capacity potential. In this research, porous alkali activated material was investigated. A highly porous material structure ensures gradual leaching of alkalis over time, which is important in the biogas digestion process. The mixture composition and the SiO2/Na2O and SiO2/Al2O3 ratios were studied to test the buffer capacity potential of the alkali activated material. This research has proved that by changing the molar ratio of the components it is possible to obtain a material with different buffer capacity, and this novel material was seen to have considerable potential for use in processes where buffer capacity and pH control are vitally important.

Keywords: Alkaline material, buffer capacity, biogas production.

103 Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Development, Verification and Clinical Trials

Authors: Eun-Geun Kim, Won-Pil Park, Dae-Gon Woo, Chang-Yong Ko, Yong-Heum Lee, Dohyung Lim, Tae-Min Shin, Han-Sung Kim, Gyoun-Jung Lee

Abstract:

Functional gastrointestinal disorders affect millions of people of all ages, regardless of race and sex. There are, however, few diagnostic methods for functional gastrointestinal disorders, because functional disorders show no evidence of organic or physical causes. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when palpating the abdominal regions overlaying the gastrointestinal tract. The aim of this study is, therefore, to develop a diagnostic system for functional gastrointestinal disorders based on an ultrasound technique which can quantify this characteristic related to the rigidity of the gastrointestinal tract wall. An ultrasound system was designed, consisting of a transmitter, an ultrasonic transducer, a receiver, TGC, and a CPLD, and verified via a phantom test. For the phantom test, ten soft-tissue specimens were harvested from porcine tissue. Five of them were then treated chemically to mimic the rigid condition of the gastrointestinal tract wall induced by functional gastrointestinal disorders. Additionally, the specimens were tested mechanically to identify whether the mimicking was reasonable. The customized ultrasound system was finally verified through application to human subjects with and without functional gastrointestinal disorders (Patient and Normal Groups). It was identified from the mechanical test that the chemically treated specimens were more rigid than the normal specimens. This finding compared favorably with the result obtained from the phantom test. The phantom test also showed that the ultrasound system described the specimen geometric characteristics well and detected the alteration in the specimens. The maximum amplitude of the ultrasonic reflective signal in the rigid specimens (0.2±0.1 Vp-p) at the interface between the fat and muscle layers was explicitly higher than that in the normal specimens (0.1±0.0 Vp-p). Clinical tests using our customized ultrasound system on human subjects showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall for the patient group (2.6±0.3 Vp-p) were generally higher than those in the normal group (0.1±0.2 Vp-p). Here, the maximum reflective signal appeared at approximately 20 mm depth from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These results suggest that the newly designed diagnostic system based on the ultrasound technique may be adequate for diagnosing functional gastrointestinal disorders.

Keywords: Functional Gastrointestinal Disorders, Diagnostic System, Phantom Test, Ultrasound System.

102 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel

Authors: F. M. Pisano, M. Ciminello

Abstract:

Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, that can progressively compromise the structure's integrity. The occurrence of any local change in material properties that can degrade the structure's performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, which are in charge of comparing the structure's states before and after damage occurs. SHM looks for any "anomalous" response collected by means of sensor networks, which is then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts from which information about the current health condition of the structure under investigation must be extracted. Visual Analytics can support the processing of the monitored measurements, offering data navigation and exploration tools that leverage the native human capability of understanding images faster than text and tables. Herein, the enrichment of an SHM system through the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis (PCA) based algorithm.

Keywords: Interactive dashboards, optical fibers, structural health monitoring, visual analytics.
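
A common way to cast PCA-based SHM as anomaly detection is to learn the principal subspace of baseline (healthy) sensor data and flag measurements whose reconstruction error is abnormally large. The sketch below is a minimal, self-contained version of that idea on synthetic strain data; the sensor count, threshold rule, and injected anomaly are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def fit_pca(X_healthy, n_keep=2):
    """Learn the baseline subspace from healthy-structure measurements."""
    mu = X_healthy.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_healthy - mu, full_matrices=False)
    return mu, Vt[:n_keep]                 # mean and principal directions

def residuals(X, mu, P):
    """Reconstruction error of each snapshot w.r.t. the healthy subspace."""
    Xc = X - mu
    return np.linalg.norm(Xc - (Xc @ P.T) @ P, axis=1)

rng = np.random.default_rng(1)
baseline = rng.normal(size=(200, 16))      # 16 strain sensors, healthy state
mu, P = fit_pca(baseline)

test = rng.normal(size=(50, 16))
test[40:, 5] += 6.0                        # local anomaly on sensor 5
r = residuals(test, mu, P)
base_r = residuals(baseline, mu, P)
limit = base_r.mean() + 3 * base_r.std()   # simple 3-sigma outlier rule
print(np.where(r > limit)[0])              # flags snapshots 40..49
```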

101 Data Hiding in Images in Discrete Wavelet Domain Using PMM

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Over the last two decades, due to the hostile environment of the internet, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial domain data hiding techniques, the information is embedded directly in the image plane itself. In transform domain data hiding techniques, the image is first changed from the spatial domain to some other domain and the secret information is then embedded, so that it remains more secure from any attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively low robustness. In contrast, algorithms in transform domains such as the DCT and DWT have a certain robustness against some multimedia processing. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a gray scale image. The proposed approach works by converting the gray level image to the transform domain using a discrete integer wavelet technique through a lifting scheme. The approach performs a 2-D lifting wavelet decomposition through the Haar lifted wavelet of the cover image and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV, and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.

Keywords: Cover Image, Pixel Mapping Method (PMM), Stego Image, Integer Wavelet Transform.
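
To make the wavelet-domain workflow concrete, the sketch below hides a few bits in the Haar DWT detail coefficients of a cover image using PyWavelets. Note the simplifications: plain LSB embedding stands in for the authors' pixel mapping method, pywt's built-in 'haar' filter bank replaces an explicit lifting implementation, and extraction is omitted.

```python
import numpy as np
import pywt

def embed(cover, bits):
    """Hide a bit sequence in the Haar wavelet detail coefficients (simplified
    LSB embedding standing in for the pixel mapping method, PMM)."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
    flat = cH.flatten()
    coef = np.round(flat[:len(bits)]).astype(int)
    flat[:len(bits)] = (coef & ~1) | bits          # set LSB of detail coeffs
    cH = flat.reshape(cH.shape)
    stego = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
    return np.clip(np.round(stego), 0, 255).astype(np.uint8)

cover = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0])
stego = embed(cover, secret)
print(np.abs(stego.astype(int) - cover.astype(int)).max())  # small distortion
```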

100 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under a Head of Line (HoL) priority discipline) and finite capacity buffers with thresholds. In this paper, the external traffic is modelled using a Poisson process and the service times are modelled using an exponential distribution. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. The network behaves as follows. A task which finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. In the opposite case, a "no two priority services in succession" procedure (preventing a possible overflow in the first buffer) is applied to fed-back tasks. Using an open Markovian queuing schema with blocking, priority feedback service and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate; it is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculation of the main measures of effectiveness. Consequently, efficient expressions of low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback and thresholds can provide accurate performance estimates for linked-in-series networks.

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-based networks.
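
The computational heart of such a model is solving the steady-state equations of the underlying Markov chain. A minimal sketch of that step is shown below for a hypothetical three-state generator; the full blocking/feedback/threshold state graph of the paper would simply produce a larger Q matrix handled the same way.

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a CTMC generator Q: pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])         # append the normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy birth-death generator as a stand-in for the paper's state graph.
lam, mu = 1.0, 1.5
Q = np.array([[-lam,        lam,  0.0],
              [  mu, -(lam+mu),   lam],
              [ 0.0,         mu,  -mu]])
pi = steady_state(Q)
print(pi, pi.sum())                          # e.g. blocking probability = pi[-1]
```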

99 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

A pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the Haridwar town of Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which covers the enumeration of the various types of water footprints of an agricultural entity from its production to its processing stages. Paddy, the most water demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2474.12 m3/ton. Most of the wastewater irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating the soil quality and not requiring any external nutritional supplements. The percentage increases in GHG emissions under irrigation with treated municipal wastewater, as compared to the control plots, were 0.4%–8.6% (CH4), 1.1%–9.2% (CO2), and 0.07%–5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d and 400.21 mg/m2/d, respectively, under the water-stagnant condition. This study highlighted a successful possibility of the reuse of wastewater for non-potable purposes, offering the potential to exploit this resource so as to replace or reduce the existing use of fresh water sources in the agriculture sector.

Keywords: Greenhouse gases, nutrients, water footprint, wastewater irrigation.

98 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A prime example is specialized industrial controllers that are operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controllers' Human Machine Interfaces (HMIs). We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them for typical factory HMIs and realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.

Keywords: Human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics.
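
A bare-bones version of the OCR step might look like the sketch below, using OpenCV for pre-processing and pytesseract as the Tesseract front end. The file name, region coordinates, and digits-only character whitelist are illustrative assumptions; the paper's pipeline additionally segments regions automatically and applies the meta-data context check afterwards.

```python
import cv2
import pytesseract

def read_hmi_value(frame, roi):
    """OCR one streaming-data region of an HMI image.

    frame: BGR image from the shop-floor camera; roi: (x, y, w, h) box that
    was identified as a streaming-data region during pre-processing."""
    x, y, w, h = roi
    crop = frame[y:y+h, x:x+w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # Binarize to cope with uneven factory lighting
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Restrict the character set: HMI counters are digits and punctuation
    config = "--psm 7 -c tessedit_char_whitelist=0123456789.-"
    return pytesseract.image_to_string(binary, config=config).strip()

frame = cv2.imread("hmi_snapshot.png")       # hypothetical captured frame
print(read_hmi_value(frame, (120, 80, 200, 40)))
```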

97 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards has moved towards a trend of slimness. This change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were assessed with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and could be applied comprehensively to touch devices and input interfaces that interact with people.

Keywords: Input performance, mobile device, slim keyboard, tactile feedback.
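
For reference, the Taguchi signal-to-noise ratio for a larger-is-better response (such as typing accuracy) is S/N = -10*log10(mean(1/y_i^2)); the factor level with the highest S/N is taken as optimal. The sketch below computes it for hypothetical accuracy scores across the three shape levels (the numbers are invented for illustration, not the study's data).

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for larger-is-better responses:
    S/N = -10 * log10(mean(1 / y_i**2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical accuracy scores (%) for the three keyboard shape levels
for shape, scores in {"circle": [88, 85, 90],
                      "rectangle": [91, 89, 92],
                      "L-shaped": [95, 93, 96]}.items():
    print(f"{shape:>9}: S/N = {sn_larger_is_better(scores):.2f} dB")
```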

96 Improving Air Temperature Prediction with Artificial Neural Networks

Authors: Brian A. Smith, Ronald W. McClendon, Gerrit Hoogenboom

Abstract:

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and a set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than that of the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.

Keywords: Decision support systems, frost protection, fruit, time-series prediction, weather modeling.
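
The evaluation protocol described above (30 instantiations per model, each from different initial weights, summarized by MAE) is easy to reproduce in miniature. The sketch below trains a small single-hidden-layer network from 30 seeds on synthetic data and reports the spread of test MAEs; the network size, learning rate, and data are invented for illustration, not the paper's Ward-style architecture.

```python
import numpy as np

def train_and_mae(seed, X_tr, y_tr, X_te, y_te, hidden=40, epochs=200, lr=1e-3):
    """Train one single-hidden-layer network from a given random seed and
    return its test mean absolute error (MAE)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X_tr.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    for _ in range(epochs):                       # full-batch gradient descent
        H = np.tanh(X_tr @ W1)
        err = H @ W2 - y_tr
        W2 -= lr * H.T @ err / len(y_tr)
        W1 -= lr * X_tr.T @ ((err @ W2.T) * (1 - H**2)) / len(y_tr)
    pred = np.tanh(X_te @ W1) @ W2
    return np.mean(np.abs(pred - y_te))

# 30 networks with different initial weights, as in the paper's protocol.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, :1] * 2 + rng.normal(scale=0.1, size=(500, 1))   # synthetic target
maes = [train_and_mae(s, X[:400], y[:400], X[400:], y[400:]) for s in range(30)]
print(f"MAE mean = {np.mean(maes):.3f}, std = {np.std(maes):.3f}")
```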

95 On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain particle trajectories in the device and thereby calculate the signal reported by each electrometer. Based on the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information content about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, according to Shannon, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained via the various configurations of detecting rings, the best configuration, which gave the best predictions of the size distributions of the injected particles, was the modified configuration; it was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.

Keywords: Aerosol Nano-Particle, CFD, Electrical Mobility Spectrometer, Von Neumann entropy.
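
One simple way to attach a Von Neumann entropy to an instrument transfer matrix T is to form a density-like matrix rho = T T^T / Tr(T T^T) and compute S = -sum(lambda_i * log2(lambda_i)) over its eigenvalues. The sketch below does exactly that for two hypothetical 4-channel configurations; this construction of rho is an illustrative assumption (the paper does not spell out its normalization), but it reproduces the qualitative behaviour described: a discriminating configuration carries more entropy than an uninformative one.

```python
import numpy as np

def von_neumann_entropy(T):
    """Von Neumann entropy S = -Tr(rho log2 rho) of a density-like matrix
    built from the instrument transfer matrix T: rho = T T^T / Tr(T T^T)."""
    rho = T @ T.T
    rho /= np.trace(rho)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Two hypothetical 4-channel responses to 3 particle size classes:
# nearly diagonal (channels discriminate sizes) vs. uniform (no discrimination).
T_sharp = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.8, 0.1],
                    [0.0, 0.1, 0.8],
                    [0.0, 0.0, 0.1]])
T_flat = np.full((4, 3), 0.25)
print(von_neumann_entropy(T_sharp))   # > 1 bit: informative configuration
print(von_neumann_entropy(T_flat))    # ~0 bits: rank-1, uninformative
```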

94 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem: Case Study in Sidoarjo, East Java

Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin

Abstract:

Indonesia, as an archipelagic nation, has a very long coastline with significant potential for marine resources, including mangrove ecosystems. The Lapindo mudflow disaster in Sidoarjo, East Java, resulted in mudflow being discharged into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it contains harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density and assess the impact of the Lapindo mud disaster on the mangrove ecosystem, along with efforts to sustain its continuity. The mapping of the coastal mangrove conditions in Sidoarjo was carried out using remote sensing products, specifically Landsat 7 ETM+ images, taken during dry months in 2002, 2006, 2009, and 2014. The density of mangroves was determined using NDVI, which utilizes band 3 (the red channel) and band 4 (the near IR channel). Image processing to generate NDVI was performed using ENVI 5.1 software. The NDVI results were used to assess mangrove density on a scale from 0 to 1. The growth of mangrove ecosystems, both in terms of area and density, showed a significant increase from year to year. The development of mangrove ecosystems was influenced by the deposition of Lapindo mud in the estuaries of the Porong and Brantas rivers, where the silt provided a suitable medium for the growth of the mangrove ecosystem, leading to an increase in its density. The rise in density was supported by public awareness to mitigate heavy metal contamination, allowing for mangrove breeding near the affected areas.

Keywords: Archipelagic nation, Mangrove, Lapindo mudflow disaster, NDVI.
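
The NDVI computation referenced above is compact enough to show directly: NDVI = (NIR - Red) / (NIR + Red), with Landsat 7 ETM+ band 3 as red and band 4 as near infrared. The sketch below applies it to a hypothetical 2x2 patch; the three density classes and their cut-offs are illustrative assumptions, not the paper's classification.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense vegetation."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def density_class(v):
    """Illustrative three-class density labeling of NDVI values."""
    return np.select([v < 0.2, v < 0.5], ["sparse", "moderate"], default="dense")

# Hypothetical band values for a 2 x 2 pixel patch
band3 = np.array([[40, 60], [30, 80]], dtype=np.uint8)   # red
band4 = np.array([[120, 70], [150, 85]], dtype=np.uint8) # near infrared
v = ndvi(band3, band4)
print(v.round(2))
print(density_class(v))
```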

93 Assets Integrity Management in Oil and Gas Production Facilities Through Corrosion Mitigation and Inspection Strategy: A Case Study of Sarir Oilfield

Authors: Iftikhar Ahmad, Youssef Elkezza

Abstract:

The Sarir oilfield is located in North Africa and has oil and gas production facilities. The assets of the Sarir oilfield can be divided into the five following categories: (i) well bores and wellheads; (ii) vessels such as separators, desalters, and gas processing facilities; (iii) pipelines, including all flow lines, trunk lines, and shipping lines; (iv) storage tanks; and (v) other assets such as turbines and compressors. The nature of the petroleum industry recognizes the potential human, environmental and financial consequences that can result from failing to maintain the integrity of wellheads, vessels, tanks, pipelines, and other assets. The importance of effective asset integrity management increases as the industry's infrastructure continues to age. The primary objective of asset integrity management (AIM) is to maintain assets in a fit-for-service condition while extending their remaining life in the most reliable, safe, and cost-effective manner. Corrosion management is one of the important aspects of successful asset integrity management. It covers corrosion mitigation, monitoring, inspection, and risk evaluation. External corrosion on pipelines, well bores, buried assets, and the bottoms of tanks is controlled with a combination of coatings and cathodic protection, while the external corrosion on surface equipment, wellheads, and storage tanks is controlled by coatings. Periodic cleaning of the pipelines by pigging helps prevent internal corrosion. Further, internal corrosion of pipelines is prevented by chemical treatment and controlled operations. This paper describes the integrity management system used in the Sarir oilfield for its oil and gas production facilities, based on standard practices of corrosion mitigation and inspection.

Keywords: Assets integrity management, corrosion prevention in oilfield assets, corrosion management in oilfield, corrosion prevention and inspection activities.

92 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) adopt different approaches. RNNs are known to be well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.

Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.
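
The CNN-then-Bi-LSTM encoding pipeline can be sketched compactly in PyTorch. The snippet below is a simplified stand-in: mean pooling and the concatenation-style relation vector [p; h; |p-h|; p*h] are common conventions assumed here for illustration, whereas the paper computes relation vectors at both the phrasal and sentence levels and uses them as attention over the Bi-LSTM outputs.

```python
import torch
import torch.nn as nn

class CNNBiLSTMEncoder(nn.Module):
    """Sketch of the described pipeline: a convolutional layer extracts phrase
    (n-gram) features, and a Bi-LSTM encodes the phrase sequence."""
    def __init__(self, vocab, emb=100, phrase=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, phrase, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(phrase, hidden, batch_first=True, bidirectional=True)

    def forward(self, ids):
        x = self.embed(ids).transpose(1, 2)                  # (batch, emb, seq)
        phrases = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq, phrase)
        states, _ = self.bilstm(phrases)
        sent = states.mean(dim=1)                            # pooled sentence vector
        return phrases, sent

enc = CNNBiLSTMEncoder(vocab=10000)
premise = torch.randint(1, 10000, (2, 20))
hypothesis = torch.randint(1, 10000, (2, 20))
_, p = enc(premise)
_, h = enc(hypothesis)
relation = torch.cat([p, h, torch.abs(p - h), p * h], dim=-1)  # relation vector
print(relation.shape)                                          # (2, 1024)
```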

91 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach's alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is proven to be plain, tectonically little affected and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of the hypothesis testing also proved the bias and erratic nature of the acquired database. The low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the acquired database. A database of very low quality needs excessive static correction or, in some cases, reacquisition of the data is suggested, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further be utilized as a guideline to establish database quality assessment models to make much more informed decisions in the hydrocarbon exploration field.

Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.
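
The statistical battery named above is straightforward to assemble with NumPy and SciPy. Below is a minimal sketch on synthetic data: NRMSE normalized by the observed range (one of several common normalizations; the paper does not state which it uses), Cronbach's alpha from item and total variances, and a t-test as the null hypothesis check.

```python
import numpy as np
from scipy import stats

def nrmse(observed, predicted):
    """Normalized RMSE: RMSE divided by the observed range."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/total variance).
    items: (n_observations, k_items) array, e.g. repeated trace measurements."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
obs = rng.normal(size=100)
pred = obs + rng.normal(scale=0.3, size=100)      # hypothetical model output
print(f"NRMSE = {nrmse(obs, pred):.3f}")
print(f"alpha = {cronbach_alpha(rng.normal(size=(50, 6))):.3f}")  # ~0 for noise
t, p = stats.ttest_ind(obs, pred)                  # null hypothesis test
print(f"t = {t:.3f}, p = {p:.3f}")
```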

90 Radon-222 Concentration and Potential Risk to Workers of Al-Jalamid Phosphate Mines, North Province, Saudi Arabia

Authors: El-Said. I. Shabana, Mohammad S. Tayeb, Maher M. T. Qutub, Abdulraheem A. Kinsara

Abstract:

Usually, phosphate deposits contain 238U and 232Th in addition to their decay products. Due to their different pathways in the environment, the 238U/232Th activity concentration ratio is usually found to be greater than unity in phosphate sediments. The presence of these radionuclides creates a potential need to control the exposure of workers in the mining and processing activities of the phosphate minerals, in accordance with IAEA safety standards. The greatest dose to workers comes from exposure to radon, especially 222Rn from the uranium series, and this has to be controlled. In this regard, radon (222Rn) was measured in the atmosphere (indoor and outdoor) of the Al-Jalamid phosphate-mines working area using a portable radon-measurement instrument (RAD7), for the purpose of radiation protection. Radon was measured at 61 sites inside the open phosphate mines, in the phosphate upgrading facility (the offices and rooms of the workers, and some open-air sites) and in the dwellings of the workers' residence village that lies about 3 km from the mines working area. The obtained results indicated that the average indoor radon concentration was about 48.4 Bq/m3. Inside the upgrading facility, the average outdoor concentrations were 10.8 and 9.7 Bq/m3 in the concentrate piles and crushing areas, respectively. It was 12.3 Bq/m3 in the atmosphere of the open mines. These values are comparable with the global average values. Based on the average values, the annual effective dose due to radon inhalation was calculated and risk estimates were made. The average annual effective dose to workers due to radon inhalation was estimated at 1.32 mSv. The potential excess risk of lung cancer mortality that could be attributed to radon, when considering lifetime exposure, was estimated at 53.0x10-4. The results are discussed in detail.

Keywords: Dosimetry, environmental monitoring, phosphate deposits, radiation protection, radon-222.
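
For orientation, the standard UNSCEAR-style conversion from a radon concentration to an annual effective dose is E = C_Rn × F_eq × T × DCF, with an equilibrium factor F_eq around 0.4 and a dose conversion factor of roughly 9 nSv per Bq·h/m3. The sketch below applies it to the reported indoor mean; the occupancy time and coefficients are generic assumptions, so the result differs from the authors' 1.32 mSv figure, which reflects their own site-specific parameters.

```python
def radon_annual_dose_msv(c_rn, occupancy_h=2000, f_eq=0.4, dcf_nsv=9.0):
    """Annual effective dose (mSv) from radon inhalation, following the common
    UNSCEAR-style formula E = C_Rn * F_eq * T * DCF, with DCF in
    nSv per (Bq h m^-3). All parameter values here are illustrative."""
    return c_rn * f_eq * occupancy_h * dcf_nsv * 1e-6   # nSv -> mSv

# Example: workplace concentration of 48.4 Bq/m3 (the reported indoor mean)
# and a 2000-hour working year.
print(f"E ~ {radon_annual_dose_msv(48.4):.2f} mSv/y")
```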

89 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of these oscillations can be achieved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution for improving the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and their tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the use of the model reference adaptive control approach. The control signal, which ensures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. The adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term; the σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations and laboratory realizations. Damping of the synchronous generator oscillations across the entire operating range was investigated. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will help in the development of the model reference power system stabilizer, which should be able to replace the conventional stabilizers in power systems.

Keywords: Power system, stability, oscillations, power system stabilizer, model reference adaptive control.
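
The shape of such an adaptive law is easy to convey on a scalar toy plant: the gains are driven by the tracking error against the reference model, with a σ-leakage term keeping them bounded. Everything in the sketch below (plant, reference model, the gains gamma and sigma, the square-wave reference) is an illustrative assumption, not the synchronous-generator design of the paper.

```python
# Minimal MRAC sketch with sigma-modification (illustrative gains and plant).
# Plant: x' = a*x + b*u (a, b unknown); reference model: xm' = -am*xm + bm*r.
dt, am, bm = 0.01, 2.0, 2.0
a, b = -0.5, 1.0                     # true (unknown to the controller) plant
gamma, sigma = 5.0, 0.05             # adaptation gain and sigma leakage

x = xm = 0.0
kx, kr = 0.0, 0.0                    # adaptive feedback / feedforward gains
for k in range(5000):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0        # square-wave reference
    u = kx * x + kr * r
    e = x - xm                                    # tracking error
    # sigma-modified adaptive laws: the leakage term keeps the gains bounded
    kx += dt * (-gamma * e * x - sigma * kx)
    kr += dt * (-gamma * e * r - sigma * kr)
    x += dt * (a * x + b * u)
    xm += dt * (-am * xm + bm * r)
print(f"final |e| = {abs(e):.4f}, kx = {kx:.2f}, kr = {kr:.2f}")
```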

88 Application of Ultrasonic Assisted Machining Technique for Glass-Ceramic Milling

Authors: S. Y. Lin, C. H. Kuan, C. H. She, W. T. Wang

Abstract:

In this study, the ultrasonic assisted machining (UAM) technique is applied in a side-surface milling experiment on glass-ceramic workpiece material. A diamond-coated tungsten carbide cutting tool is used in conjunction with two kinds of cooling/lubrication media: water-soluble (WS) cutting fluid and minimum quantity lubricant (MQL). Full factorial process parameter combinations are planned for the milling experiments to investigate the effect of the process parameters on cutting performance, searching for the better process parameter combinations for which the edge-indentation and the surface roughness are acceptable. In the machining experiments, an ultrasonic oscillator was used to excite the cutting tool along the radial direction, producing a very small amplitude vibration at a frequency of 20 kHz to assist the machining process. After processing, a toolmaker's microscope was used to examine the side-surface morphology, edge-indentation and cutting tool wear under the different combinations of cutting parameters, and the experimental results were analyzed and discussed. The results show that the parameters mainly leading to edge-indentation of the glass-ceramic are the cutting depth and the feed rate; in order to reduce edge-indentation, a lower cutting depth and feed rate should be used. The water-soluble cutting fluid provides a better cooling effect in the primary cutting area and may effectively reduce the edge-indentation and improve the surface morphology of the glass-ceramic. The use of the ultrasonic assisted technique can effectively enhance the cleanness of the surface finish and reduce cutting tool wear and edge-indentation.

Keywords: Glass-ceramic, ultrasonic assisted machining, cutting performance, edge-indentation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2750
87 Modified Scaling-Free CORDIC Based Pipelined Parallel MDC FFT and IFFT Architecture for Radix 2^2 Algorithm

Authors: C. Paramasivam, K. B. Jayanthi

Abstract:

An innovative approach to developing modified scaling-free CORDIC based two-parallel pipelined Multipath Delay Commutator (MDC) FFT and IFFT architectures for the radix-2^2 FFT algorithm is presented. Multipliers and adders are the most important data paths in FFT and IFFT architectures; multipliers occupy a large area and consume considerable power. In order to reduce the area and power overhead, a modified scaling-free CORDIC based complex multiplier is utilized in the proposed design. Conventionally, twiddle factor values are stored in a RAM block. In the proposed work, a modified scaling-free CORDIC based twiddle factor generator unit generates the twiddle factors, and efficient switching units are used. In addition, the four-point FFT operations are performed without complex multiplication, which helps to reduce area and power in the last two stages of the pipelined architectures. The design proposed in this paper is based on the multipath delay commutator method and can be extended to any radix-2^n based FFT/IFFT algorithm to improve the throughput. The work is synthesized with Synopsys Design Compiler using a TSMC 90-nm library. The proposed method proves to be better than the reference design in terms of area, throughput and power consumption. A comparative analysis of the proposed design on a Xilinx FPGA platform is also discussed in the paper.
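
To illustrate the idea of generating twiddle factors with CORDIC instead of reading them from RAM, the following Python sketch uses the classic rotation-mode CORDIC (with explicit gain compensation, i.e. not the authors' scaling-free variant) to approximate W_N^k = e^{-j2πk/N} with shift-and-add style iterations:

```python
# Illustrative rotation-mode CORDIC twiddle factor generator. This is
# the classic scaled CORDIC, not the modified scaling-free variant of
# the paper; the gain K is compensated at the end.
import math

def cordic_twiddle(theta, iterations=16):
    """Approximate cos(theta) + j*sin(theta) for |theta| <= pi/2."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # inverse CORDIC gain
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0                  # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return complex(x * K, y * K)

N, k = 16, 3                                  # twiddle W_16^3
w = cordic_twiddle(-2 * math.pi * k / N)      # = exp(-j*2*pi*k/N)
print(w, complex(math.cos(2*math.pi*k/N), -math.sin(2*math.pi*k/N)))
```

In hardware, the multiplications by 2^-i become wire shifts, which is why CORDIC-based twiddle generation saves multiplier area.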

Keywords: Coordinate Rotation Digital Computer (CORDIC), complex multiplier, Fast Fourier Transform (FFT), Inverse Fast Fourier Transform (IFFT), Multipath Delay Commutator (MDC), modified scaling-free CORDIC, pipelining, parallel processing, radix-2^2.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1760
86 Bacteriological Quality of Commercially Prepared Fermented Ogi (Akamu) Sold in Some Parts of South Eastern Nigeria

Authors: Alloysius C. Ogodo, Ositadinma C. Ugbogu, Uzochukwu G. Ekeleme

Abstract:

Food poisoning and infection by bacteria are of public health significance to both developing and developed countries. Samples of ogi (akamu) prepared from white and yellow varieties of maize sold in Uturu and Okigwe were analyzed, together with laboratory-prepared ogi, for bacterial quality using standard microbiological methods. The analyses showed that for the white and yellow varieties the total bacterial counts (cfu/g) were 4.0 × 10^7 and 3.9 × 10^7 for the laboratory-prepared ogi, while the commercial ogi had 5.2 × 10^7 and 4.9 × 10^7, 4.9 × 10^7 and 4.5 × 10^7, and 5.4 × 10^7 and 5.0 × 10^7 for the Eke-Okigwe, Up-gate and Nkwo-Achara markets, respectively. The staphylococcal counts ranged from 2.0 × 10^2 to 5.0 × 10^2 and from 1.0 × 10^2 to 4.0 × 10^2 for the white and yellow varieties from the different markets, while no staphylococcal growth was recorded in the laboratory-prepared ogi. The laboratory-prepared ogi showed no coliform growth, while the commercially prepared ogi had counts of 0.5 × 10^3 to 1.6 × 10^3 for the white variety and 0.3 × 10^3 to 1.1 × 10^3 for the yellow variety. Lactic acid bacterial counts of 3.5 × 10^6 and 3.0 × 10^6 were recorded for the laboratory ogi, while those of the commercially prepared ogi ranged from 3.2 × 10^6 to 4.2 × 10^6 (white variety) and from 3.0 × 10^6 to 3.9 × 10^6 (yellow variety). The bacterial isolates from the commercial and laboratory-fermented ogi showed that Lactobacillus sp., Leuconostoc sp. and Citrobacter sp. were present in all samples; Micrococcus sp. and Klebsiella sp. were isolated from the Eke-Okigwe and ABSU-Up-gate market varieties, respectively; E. coli and Staphylococcus sp. were present in the Eke-Okigwe and Nkwo-Achara markets; and Salmonella sp. was isolated from all three markets. Hence, there is a risk of contracting food-borne diseases from commercially prepared ogi, and sanitary measures are needed in the production of fermented cereals to minimize food-borne pathogens during processing and storage.

Keywords: Bacterial quality, fermentation, maize, Ogi.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3410
85 Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm

Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour

Abstract:

In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main design constraint is the limited energy of the sensors. In fact, protocols that minimize sensor power consumption receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, a technique that combines related data and prevents the transmission of redundant packets, can effectively reduce the number of transmitted packets. Since processing information consumes less power than transmitting it, data aggregation is of great importance and is used in many protocols [5]. One data aggregation technique is to use a data aggregation tree, but finding an optimal data aggregation tree for collecting data in a network with a single sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes into a single packet, so the number of packets transmitted in the network decreases; therefore less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to tackle this NP-hard problem; one such optimization method is simulated annealing. In this article, we propose a new method for building the data aggregation tree in wireless sensor networks using the simulated annealing algorithm, and we evaluate its efficiency against a genetic algorithm.
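
As a rough sketch of the approach, the following Python fragment uses simulated annealing to search over aggregation trees rooted at a single sink, modeling the energy cost of a link as the squared distance between nodes. The topology, cost model and cooling schedule are illustrative assumptions, not the authors' exact formulation:

```python
# Simulated annealing over data aggregation trees rooted at a sink.
# Energy cost of a link is modeled as squared distance (a common
# radio-energy proxy); all parameters here are assumptions.
import math, random

random.seed(1)
N = 30
nodes = [(random.random(), random.random()) for _ in range(N)]  # node 0 = sink

def cost(parent):
    return sum(math.dist(nodes[i], nodes[parent[i]]) ** 2 for i in range(1, N))

# initial tree: each node i picks a parent with a smaller index, which
# guarantees every path ends at the sink (no cycles possible)
parent = [0] + [random.randrange(i) for i in range(1, N)]
best, best_cost = parent[:], cost(parent)

T = 1.0
while T > 1e-4:
    i = random.randrange(1, N)
    candidate = parent[:]
    candidate[i] = random.randrange(i)        # move: reassign i's parent
    delta = cost(candidate) - cost(parent)
    if delta < 0 or random.random() < math.exp(-delta / T):
        parent = candidate                    # accept (possibly uphill)
        if cost(parent) < best_cost:
            best, best_cost = parent[:], cost(parent)
    T *= 0.999                                # geometric cooling

print(f"aggregation-tree energy cost: {best_cost:.3f}")
```

The occasional acceptance of uphill moves is what lets the search escape local minima that a greedy tree construction would get stuck in.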

Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1640
84 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is a widely used technique in today's digital world. CBIR, commonly expanded as Content-Based Image Retrieval, is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from the digital images. In this paper, two research works based on CBIR are presented. The first provides an automated and interactive approach to the analysis of CBIR techniques. Here CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, multi-resolution image transforms such as the Contourlet, Ridgelet and Shearlet transforms are used to extract texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the label of a new input image using the trained classifier, assigning it to one of four classes: 1) normal brain, 2) benign tumour, 3) malignant tumour or 4) severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. These two approaches contribute to the medical field by providing better retrieval performance and tumour stage identification.
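
A minimal sketch of this supervised pipeline is given below, assuming scikit-learn is available. Because real Contourlet/Ridgelet/Shearlet transforms require dedicated libraries, a placeholder block-statistics extractor stands in for the texture features, and the dataset is synthetic:

```python
# Sketch of the supervised CBIR pipeline: texture features ->
# classifier -> 4-class prediction. The feature extractor and the
# data are placeholders, not the authors' transforms or dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

CLASSES = ["normal", "benign", "malignant", "severe"]

def texture_features(img):
    """Placeholder for Contourlet/Ridgelet/Shearlet texture features:
    per-block mean and standard deviation of intensities."""
    blocks = img.reshape(4, 16, 4, 16)        # 64x64 image -> 4x4 blocks
    return np.concatenate([blocks.mean(axis=(1, 3)).ravel(),
                           blocks.std(axis=(1, 3)).ravel()])

rng = np.random.default_rng(0)
# synthetic stand-in dataset: 200 "slices" whose intensity shifts with class
labels = rng.integers(0, 4, size=200)
images = rng.random((200, 64, 64)) + labels[:, None, None] * 0.2
X = np.array([texture_features(im) for im in images])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
for clf in (GaussianNB(), KNeighborsClassifier(), SVC()):
    score = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{type(clf).__name__}: test accuracy {score:.2f}")
```

Comparing the three classifiers on held-out data mirrors the paper's first stage of identifying the best feature/classifier pair before building the stage-identification tool.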

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 708