Search results for: source topic detection

104 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity and weight gain are associated with an increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate transaminase (AST)-to-platelet (PLT) ratio index (APRI) and fibrosis-4 (FIB-4) were primarily introduced as formulas capable of differentiating hepatitis from cirrhosis. However, to the best of our knowledge, their status in children is not clear. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with values found in children with normal body mass index (N-BMI). A total of 68 children examined in the outpatient clinics of the Pediatrics Department in Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. In the first group, 35 children with N-BMI, whose age- and sex-dependent BMI values lie between the 15th and 85th percentiles, were evaluated. The second group comprised 33 OB children whose BMI values were between the 95th and 99th percentiles. Anthropometric measurements and routine biochemical tests were performed. Using these parameters, values for the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data. The statistical significance level was accepted as p < 0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group. However, there was no statistically significant difference between the N-BMI and OB groups in terms of APRI and FIB-4. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r = 0.336 and p = 0.065 in the N-BMI group. On the other hand, they were r = 0.707 and p = 0.001 in the OB group. Associations of these two indices with TRG showed that this parameter was strongly correlated (p < 0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. TRG are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis, which in turn can lead to liver fibrosis. TRG are also independent risk factors for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, together with the detection of strong correlations of TRG with these indices in OB children, indicates the possible onset of a tendency towards the development of fatty liver in OB children. This finding also points out the potential risk for cardiovascular pathologies in OB children. The difference between the APRI-FIB-4 correlations in the N-BMI and OB groups (no correlation vs. strong correlation) may indicate the importance of including age and alanine transaminase, in addition to AST and PLT, in the formula designed for FIB-4.
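
For reference, the two indices above are conventionally computed from routine laboratory values. The short Python sketch below uses the commonly published definitions of APRI and FIB-4 with purely illustrative input values; it is not a formula variant or data taken from this study.

    def apri(ast_u_l, ast_upper_limit_normal_u_l, platelets_10e9_l):
        # APRI = (AST / upper limit of normal AST) x 100 / platelet count (10^9/L)
        return (ast_u_l / ast_upper_limit_normal_u_l) * 100.0 / platelets_10e9_l

    def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
        # FIB-4 = (age x AST) / (platelet count x sqrt(ALT))
        return (age_years * ast_u_l) / (platelets_10e9_l * alt_u_l ** 0.5)

    # Illustrative values only (not study data)
    print(apri(30, 40, 250))      # ~0.30
    print(fib4(10, 30, 25, 250))  # ~0.24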

Keywords: APRI, FIB-4, obesity, triglycerides.

103 Determinants of the Income of Household Level Coir Yarn Labourers in Sri Lanka

Authors: G. H. B. Dilhari, A. A. D. T. Saparamadu

Abstract:

Sri Lanka is one of the prominent coir-producing countries. Coir is one of the by-products of the coconut, and the coir industry is considered one of the traditional industries in Sri Lanka. Because of the inherent nature of the coir industry, labourers play a significant role in the coir production process. This study analyzed the determinants of the income of household-level coir yarn labourers. The study was conducted in the Kumarakanda Grama Niladhari division. Simple random sampling was used to select a sample of 100 household-level coir yarn labourers, and a structured questionnaire, personal interviews, and discussions were used to gather the required data. The obtained data were statistically analyzed using the Statistical Package for the Social Sciences (SPSS) software. Mann-Whitney U and Kruskal-Wallis tests were performed for mean comparisons. The findings revealed that the household-level coir yarn industry is dominated by female workers and that only a few workers engage in this industry as their main occupation. In addition, elderly participation in the industry is higher than that of younger workers, and most workers engage in the industry as a source of extra income. Level of education, method of engagement, satisfaction, engagement in the industry by the next generation, support from the government, method of government support, working hours per day, engagement as a main job, number of completed units per day, suffering from job-related diseases and the type of those diseases were related to the income level of household-level coir yarn labourers. The recommendations for the industry to flourish in the future include technological transformation of coir yarn production, strengthening the raw material base and regulating the raw material supply, the introduction of new technologies, markets and training programmes, the establishment of a labourers’ association, the initiation of micro-credit schemes and better attention to job-related diseases.
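
As an illustration of the non-parametric comparisons mentioned above (Mann-Whitney U and Kruskal-Wallis), the following Python sketch uses scipy as a stand-in for SPSS; the income figures and groupings are hypothetical placeholders, not the study's data.

    from scipy import stats

    # Hypothetical monthly incomes for two engagement methods (placeholder values)
    full_time = [12500, 9800, 15200, 11000, 13400]
    part_time = [8700, 10100, 9500, 12200, 8900]
    u_stat, p_u = stats.mannwhitneyu(full_time, part_time, alternative="two-sided")

    # Hypothetical incomes grouped by three education levels (placeholder values)
    primary = [9000, 9500, 8800]
    secondary = [11000, 12500, 10700]
    tertiary = [14000, 13200, 15500]
    h_stat, p_h = stats.kruskal(primary, secondary, tertiary)

    print(u_stat, p_u)   # Mann-Whitney U test between the two groups
    print(h_stat, p_h)   # Kruskal-Wallis test across the three groups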

Keywords: Coir, Income, Sri Lanka.

102 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths, chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by virtually nonexistent quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were then applied to the image of late October 2013, and the results were validated by comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. According to the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations in 2014 were below the detection limit of the instrument used; hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations. Our results are consistent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than those for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.

Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality.

101 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study proposes a method for estimating the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data so that they satisfy suitable conditions, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or of the whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (linear variable differential transformer), can be categorized as contact-type sensors which must be installed on the structural members, and they suffer from various limitations, such as the need for separate space for the network cables and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system, a LiDAR (light detection and ranging) technique which can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds consisting of many points with local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is essential for their analysis. Through the formation of averaged lattices and the application of CSSI to the raw data, a method which can estimate the displacement of a simple beam was developed. The developed method can be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured by TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
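
A minimal Python sketch of the displacement-to-stress chain described above is given below, assuming Euler-Bernoulli beam behaviour (stress = E · c · curvature, with curvature approximated by the second derivative of the fitted deflection). The scipy smoothing spline, the synthetic deflection data and the beam properties are illustrative assumptions, not the authors' lattice-averaging procedure or test data.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Assumed simply supported beam and TLS-like noisy deflection samples (metres)
    x = np.linspace(0.0, 3.0, 60)                      # positions along the beam
    w = 0.002 * np.sin(np.pi * x / 3.0)                # assumed true deflection shape
    w_noisy = w + np.random.normal(0.0, 5e-5, x.size)  # scanner-like noise

    # Cubic smoothing spline (k=3); the smoothing factor s is a tuning assumption
    spline = UnivariateSpline(x, w_noisy, k=3, s=len(x) * (5e-5) ** 2)

    kappa = spline.derivative(n=2)(x)   # curvature ~ second derivative of deflection
    E, c = 210e9, 0.05                  # assumed Young's modulus (Pa) and half-depth (m)
    sigma = E * c * kappa               # bending stress distribution along the beam (Pa)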

Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.

100 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

Nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus “GEOSCAN-02M”. Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a time duration of 10 ns and an energy of 260 mJ. This energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material (the optoacoustic generator), pulses of longitudinal ultrasonic waves are excited with a time duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression and a rarefaction phase. The amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic-shaped specimens of the Karelian gabbro with a rib length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction one increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structure defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease of the rarefaction-phase amplitude and a 2.1-fold increase of its duration. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen. The compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.

99 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotic research is to improve the robots’ abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions—including facial expression, speech, gesture or text—and using this information for improved human computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. To leverage these emotional capabilities by embedding them in humanoid robots is the foundation of the concept Affective Robots, which has the objective of making robots capable of sensing the user’s current mood and personality traits and adapt their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches with both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments’ results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.

Keywords: Affective computing, emotion recognition, humanoid robot, Human-Robot-Interaction (HRI), social robots.

98 Implementing an Intuitive Reasoner with a Large Weather Database

Authors: Yung-Chien Sun, O. Grant Clark

Abstract:

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation included two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the “target variables” whose values were predicted by the intuitive reasoner. A “complex” situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called “Strength of Belief”, which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees and multi-polynomial regression, for the discrete and the continuous target variables respectively. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning: fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach which had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets as compared with conventional methods.

Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.

97 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS and no other a priori information is known about the environment, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of performance benefit via simulation, and hardware implementation thus verifying its veracity. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a Vector-Nav VN-200™ IMU and Kinect™ camera from Microsoft.

Keywords: Autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion.

96 Performance Analysis of Organic Rankine Cycle Technology to Exploit Low-Grade Waste Heat to Power Generation in Indian Industry

Authors: Bipul Krishna Saha, Basab Chakraborty, Ashish Alex Sam, Parthasarathi Ghosh

Abstract:

The demand for energy is continuously increasing with time. Since conventional energy resources are gradually being depleted, significant interest is being placed on searching for alternative energy resources and minimizing the wastage of energy in various fields. In this perspective, low-grade waste heat from several industrial sources can be reused to generate electricity. The present work aims to further the adoption of Organic Rankine Cycle (ORC) technology in the Indian industrial sector. The paper focuses on extending the previously reported idea to the next level through a comparative review with three different working fluids, using practical data from an Indian industrial plant. For a comprehensive study on the Aspen Hysys® v8.6 simulation platform, waste heat data were collected from an operating coke oven gas plant in India. A parametric analysis of non-regenerative ORC and regenerative ORC is executed using the working fluids R-123, R-11 and R-21 for a subcritical ORC system. The primary goal is to determine the optimal working fluid, considering various system parameters such as turbine work output, obtained system efficiency, irreversibility rate and second-law efficiency over the applied range of heat source temperatures (160 °C to 180 °C). Selection of the turbo-expander is one of the most crucial tasks for low-temperature applications in an ORC system, and the present work attempts to make suitable recommendations for the appropriate configuration of the turbine. In a nutshell, this study justifies the feasibility of integrating ORC technology in the Indian context and also identifies the appropriate parameters of all components integrated in the ORC system for building an ORC prototype.

Keywords: Organic Rankine cycle, regenerative organic Rankine cycle, waste heat recovery, Indian industry.

95 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Existing quantitative research on the main control factors of lost circulation considers only a few factors and relies on a single data source. Using the Unmanned Intervention Algorithm to find the main control factors of lost circulation allows all measurable parameters to be adopted. The degree of lost circulation is characterized by the loss rate, which serves as the objective function. Geological, engineering and fluid data are used as layers, and 27 factors, such as wellhead coordinates and Weight on Bit (WOB), are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the method of undetermined coefficients is used to solve for the coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force and drilling time were selected as the main control factors by the elimination method, the contribution rate method and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, with errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of the three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force and drilling time is obtained through quantitative calculation. The minimum loss rate of lost circulation wells in the Shunbei area is 10 m3/h. It can be seen that the human-controllable main control factors can only slow down the losses, but cannot fundamentally eliminate them. This is consistent with the characteristics of the karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
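
To make the screening procedure above concrete, a minimal Python sketch of an ordinary least squares fit with t- and F-statistics is shown below; statsmodels is used here as a generic tool, and the predictors and data are random placeholders, not the Shunbei well data or the authors' 27-factor model.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 60
    # Placeholder predictors standing in for funnel viscosity, final shear force, drilling time
    X = rng.normal(size=(n, 3))
    loss_rate = 12 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n)

    model = sm.OLS(loss_rate, sm.add_constant(X)).fit()
    print(model.tvalues)                  # t-statistics used to screen individual factors
    print(model.fvalue, model.rsquared)   # overall F-statistic and goodness of fit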

Keywords: Drilling fluid, loss rate, main controlling factors, Unmanned Intervention Algorithm.

94 Application of Various Methods for Evaluation of Heavy Metal Pollution in Soils around Agarak Copper-Molybdenum Mine Complex, Armenia

Authors: K. A. Ghazaryan, H. S. Movsesyan, N. P. Ghazaryan

Abstract:

The present study was aimed at assessing the heavy metal pollution of the soils around the Agarak copper-molybdenum mine complex and the related environmental risks. This mine complex is located in the south-east part of Armenia, and the present study was conducted in 2013. The soils of the five riskiest sites of this region were studied: the surroundings of the open mine, the sites adjacent to the processing plant of the Agarak copper-molybdenum mine complex, the surroundings of the active Darazam tailing dump, the recultivated tailing dump of “ravine-2”, and the recultivated tailing dump of “ravine-3”. The mountain cambisol was the main soil type in the study sites. The level of soil contamination by heavy metals was assessed by the contamination factor (Cf), degree of contamination (Cd), geoaccumulation index (I-geo) and enrichment factor (EF). The distribution pattern of trace metals in the soil profile according to the Cf, Cd, I-geo and EF values shows that the soil is heavily polluted. In almost all studied sites, Cu, Mo, Pb, and Cd were the main polluting heavy metals, and this was conditioned by the activity of the Agarak copper-molybdenum mine complex. It is necessary to state that the pollution problem is pressing, as some parts of this highly polluted region are inhabited and agriculture is highly developed there; therefore, heavy metals can be transferred into human bodies through food chains and have a direct influence on public health. Since the induced pollution can pose serious threats to public health, further investigations on soil and vegetation pollution are recommended. Finally, calculating Cf on the basis of distance from the pollution source and the wind direction can provide more reasonable results.
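
For readers unfamiliar with the four indices named above, the Python sketch below implements their commonly published forms; the concentration values, the assumed background levels and the choice of Fe as the reference element are illustrative only and are not taken from this study.

    import math

    def contamination_factor(c_sample, c_background):
        return c_sample / c_background

    def degree_of_contamination(cf_values):
        return sum(cf_values)

    def igeo(c_sample, c_background):
        # Geoaccumulation index with the conventional 1.5 background correction
        return math.log2(c_sample / (1.5 * c_background))

    def enrichment_factor(c_metal, c_ref, b_metal, b_ref):
        # (metal/reference) in the sample divided by (metal/reference) in the background
        return (c_metal / c_ref) / (b_metal / b_ref)

    # Illustrative numbers only (mg/kg): Cu in soil vs. an assumed background, Fe as reference
    print(contamination_factor(180.0, 45.0), igeo(180.0, 45.0),
          enrichment_factor(180.0, 30000.0, 45.0, 38000.0))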

Keywords: Agarak copper-molybdenum mine complex, heavy metals, soil contamination, enrichment factor, Armenia.

93 Effect of Loop Diameter, Height and Insulation on a High Temperature CO2 Based Natural Circulation Loop

Authors: S. Sadhu, M. Ramgopal, S. Bhattacharyya

Abstract:

Natural circulation loops (NCLs) are buoyancy driven flow systems without any moving components. NCLs have vast applications in geothermal, solar and nuclear power industry where reliability and safety are of foremost concern. Due to certain favorable thermophysical properties, especially near supercritical regions, carbon dioxide can be considered as an ideal loop fluid in many applications. In the present work, a high temperature NCL that uses supercritical carbon dioxide as loop fluid is analysed. The effects of relevant design and operating variables on loop performance are studied. The system operating under steady state is modelled taking into account the axial conduction through loop fluid and loop wall, and heat transfer with surroundings. The heat source is considered to be a heater with controlled heat flux and heat sink is modelled as an end heat exchanger with water as the external cold fluid. The governing equations for mass, momentum and energy conservation are normalized and are solved numerically using finite volume method. Results are obtained for a loop pressure of 90 bar with the power input varying from 0.5 kW to 6.0 kW. The numerical results are validated against the experimental results reported in the literature in terms of the modified Grashof number (Grm) and Reynolds number (Re). Based on the results, buoyancy and friction dominated regions are identified for a given loop. Parametric analysis has been done to show the effect of loop diameter, loop height, ambient temperature and insulation. The results show that for the high temperature loop, heat loss to surroundings affects the loop performance significantly. Hence this conjugate heat transfer between the loop and surroundings has to be considered in the analysis of high temperature NCLs.

Keywords: Conjugate heat transfer, heat loss, natural circulation loop, supercritical carbon dioxide.

92 Accumulation of Pollutants, Self-purification and Impact on Peripheral Urban Areas: A Case Study in Shantytowns in Argentina

Authors: N. Porzionato, M. Mantiñan, E. Bussi, S. Grinberg, R. Gutierrez, G. Curutchet

Abstract:

This work sets out to debate the tensions involved in the processes of contamination and self-purification in the urban space, particularly in the streams that run through the Buenos Aires metropolitan area. For much of their course, those streams are piped; their waters do not come into contact with the outdoors until they have reached deeply impoverished urban areas with high levels of environmental contamination. These are peripheral zones that, until thirty years ago, were marshlands and fields. They are now densely populated areas largely lacking in urban infrastructure. The Cárcova neighborhood, where this project is underway, is in the José León Suárez section of General San Martín county, Buenos Aires province. A stretch of José León Suarez canal crosses the neighborhood. Starting upstream, this canal carries pollutants due to the sewage and industrial waste released into it. Further downstream, in the neighborhood, domestic drainage is poured into the stream. In this paper, we formulate a hypothesis diametrical to the one that holds that these neighborhoods are the primary source of contamination, suggesting instead that in the stretch of the canal that runs through the neighborhood the stream’s waters are actually cleaned and the sediments accumulate pollutants. Indeed, the stretches of water that runs through these neighborhoods act as water processing plants for the metropolis. This project has studied the different organic-load polluting contributions to the water in a certain stretch of the canal, the reduction of that load over the course of the canal, and the incorporation of pollutants into the sediments. We have found that the surface water has considerable ability to self-purify, mostly due to processes of sedimentation and adsorption. The polluting load is accumulated in the sediments where that load stabilizes slowly by means of anaerobic processes. In this study, we also investigated the risks of sediment management and the use of the processes studied here in controlled conditions as tools of environmental restoration.

Keywords: Bioremediation, pollutants, sediments, urban streams.

91 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: The Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at minimum 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as feed source at a concentration of 20 g/m3 syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas respectively. Helium, the least dense of the three gases, would simulate higher temperatures, whereas air, the densest gas, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs. The lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperatures. The larger cyclone can be assumed to achieve slightly higher efficiencies at elevated temperatures. However, both design methods led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible. At higher temperatures, however, these general tendencies are expected to be amplified so that the difference between the two design methods will become more obvious. Though the design specifications were met for both designs, the smaller cyclone is recommended as default particle separator for the plasma system due to its robust nature.

Keywords: Cyclone, design, plasma, renewable energy, solid separation, waste processing.

90 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. In addition, the investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are estimated in the EM step, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results obtained on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
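
A minimal sketch of the MFCC-GMM modeling pipeline named above can be written with librosa and scikit-learn, as shown below; the file paths, frame settings and model sizes are assumptions, and k-means initialization stands in for the LBG/VQ codebook step rather than reproducing the authors' MATLAB implementation.

    import librosa
    from sklearn.mixture import GaussianMixture

    def train_speaker_model(wav_path, n_components=16):
        y, sr = librosa.load(wav_path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13 features
        # init_params="kmeans" plays the role of an LBG/VQ-style codebook initialization
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              init_params="kmeans", max_iter=200)
        return gmm.fit(mfcc)

    def identify(wav_path, models):
        # models: dict mapping speaker name -> trained GaussianMixture
        y, sr = librosa.load(wav_path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
        # Pick the speaker whose GMM gives the highest average log-likelihood
        return max(models, key=lambda name: models[name].score(mfcc))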

Keywords: Feature Extraction, Speaker Modeling, Feature Matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian mixture model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), Background Noise Statistical Modeling, Closed-Set Text-Independent Speaker Identification System (CISI).

89 Geochemistry of Natural Radionuclides Associated with Acid Mine Drainage (AMD) in a Coal Mining Area in Southern Brazil

Authors: Juliana A. Galhardi, Daniel M. Bonotto

Abstract:

Coal is an important non-renewable energy source and can be associated with radioactive elements. In Figueira city, Paraná state, Brazil, high uranium activity was recorded near the coal mine that supplies a local thermoelectric power plant. In this context, the radon activity (Rn-222, produced by Ra-226 decay in the U-238 natural series) was evaluated in groundwater, river water and effluents produced from the acid mine drainage in the coal reject dumps. The samples were collected in August 2013 and in February 2014 and analyzed at LABIDRO (Laboratory of Isotope and Hydrochemistry), UNESP, Rio Claro city, Brazil, using an alpha spectrometer (AlphaGuard) adjusted to evaluate the mean radon activity concentration in five cycles of 10 minutes. No radon activity concentration exceeded 100 Bq.L-1, a critical value previously established by the World Health Organization. The average radon activity concentration in groundwater was higher than in surface water and in effluent samples, possibly due to the accumulation of uranium and radium in the aquifer layers, which favors radon trapping. The lower value in the river waters can indicate dilution, and the intermediate value in the effluents may indicate radon absorption in the coal particles of the reject dumps. The results also indicate that the radon activities in the effluents increase with sample acidification, possibly due to higher radium leaching and the subsequent radon transport to the drainage flow. The water samples of the Laranjinha River and the Ribeirão das Pedras stream, which, respectively, supply Figueira city and receive the mining effluent, exhibited higher pH values upstream of the mine, reflecting the acid mine drainage discharge. The transport of these radionuclides indicates the importance of monitoring their activity concentrations in natural waters due to the risks that radioactivity can pose to human health.

Keywords: Radon, radium, acid mine drainage, coal

88 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government in order to lessen the dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods were seen to be a promising source of bioethanol since they contain a significant amount of fermentable sugars. The study was conducted to establish the complete procedure for processing rain tree pods for village-level hydrous bioethanol production. The production process covered collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and the moisture content was determined to be in the range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods to 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours, after which the juice was extracted using an extractor with a capacity of 64.10 L/hour. A total of 150 L of rain tree pod juice was extracted and subjected to fermentation using a village-level anaerobic bioreactor. Fermentation with yeast (Saccharomyces cerevisiae) can speed up the process, producing more ethanol in a shorter period of time; fermentation without yeast also produces ethanol, but at a lower volume and with a slower fermentation process. Distillation of 150 L of fermented broth was carried out for six hours at a feedstock temperature of 85 °C to 95 °C and a column head temperature of 74 °C to 95 °C (vapor state of ethanol). The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation over a five-day duration, and the lowest, 11.63 L, was found without yeast fermentation over a three-day duration. In general, the results suggested that rain tree pods have very good potential as a feedstock for bioethanol production. Fermentation of rain tree pod juice can be done with or without yeast.

Keywords: Fermentation, hydrous bioethanol, rain tree pods, village level.

87 Scenario and Decision Analysis for Solar Energy in Egypt by 2035 Using Dynamic Bayesian Network

Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary

Abstract:

Bayesian networks are now considered a promising tool in the field of energy, with different applications. In this study, the aim was to define the states of a previously constructed Bayesian network related to solar energy in Egypt and the factors affecting its market share, depending on the data distribution type followed by each factor, and using either the Z-distribution approach or Chebyshev’s inequality theorem. Later on, the separate and conditional probabilities of the states of each factor in the Bayesian network were derived, either from the collected and scraped historical data or from estimations and past studies. Results showed that the constructed model can be used for scenario and decision analysis concerning the forecast of the total market share of solar energy in Egypt by 2035 and its use as a stable renewable source for generating any type of energy needed. It also showed that whenever the use of solar energy increases, the total costs decrease. Furthermore, we identified different scenarios, such as the best, worst, 50/50, and most likely ones, in terms of the expected changes in the percentage of the solar energy market share. The best scenario showed an 85% probability that the market share of solar energy in Egypt will exceed 10% of the total energy market, while the worst scenario showed only a 24% probability of this happening. Furthermore, we applied policy analysis to check the effect of changing the states of the controllable (decision) variable, acting as different scenarios, to show how this would affect the target nodes in the model. Additionally, the best environmental and economic scenarios were developed to show how other factors would have to behave in order to affect the model positively. Additional evidence and derived probabilities were added for the dynamic weather nodes, whose states depend on time, during the process of converting the Bayesian network into a dynamic Bayesian network.
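
For reference, the Chebyshev route mentioned above relies on the textbook inequality P(|X − μ| ≥ kσ) ≤ 1/k², which bounds how much probability mass can lie more than k standard deviations from the mean without assuming any particular distribution (for example, at most 1/9 for k = 3), whereas the Z-distribution approach places state boundaries at standard-normal quantiles. How exactly the cited model maps these bounds to node states is not specified in the abstract.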

Keywords: Bayesian network, Chebyshev, decision variable, dynamic Bayesian network, Z-distribution

86 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate managing, measuring, debugging and controlling the network dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive methods. The active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while the passive measurement method monitors the traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for the collection of traffic statistics and the monitoring of network traffic. Although there has been previous work focusing on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes over a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests that use the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and the Wireshark interface. The statistics displayed on the CLI and Wireshark interfaces include the type of protocol, the number of bytes and the number of packets, among others. In addition, the module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics, and the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified and updated about any changes that may happen in the entire network within a very short time. Therefore, the aim of this study is to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
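
A minimal sketch of such a POX module is given below: it polls the connected switches for flow statistics every five seconds and separates web (HTTP, TCP port 80) from non-web flows when the reply arrives. It follows the publicly documented POX flow-statistics pattern; the handler and variable names are illustrative, it is not the authors' actual module, and attribute details may differ slightly between POX releases.

    from pox.core import core
    import pox.openflow.libopenflow_01 as of
    from pox.lib.recoco import Timer

    log = core.getLogger()

    def _request_stats():
        # Send a flow-statistics request to every connected OpenFlow switch
        # (older POX releases expose the same set as core.openflow._connections.values())
        for connection in core.openflow.connections:
            connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

    def _handle_flow_stats(event):
        web_bytes = web_packets = other_bytes = other_packets = 0
        for f in event.stats:
            is_http = (f.match.nw_proto == 6 and 80 in (f.match.tp_src, f.match.tp_dst))
            if is_http:
                web_bytes += f.byte_count
                web_packets += f.packet_count
            else:
                other_bytes += f.byte_count
                other_packets += f.packet_count
        log.info("web: %s B / %s pkts, non-web: %s B / %s pkts",
                 web_bytes, web_packets, other_bytes, other_packets)

    def launch():
        core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
        Timer(5, _request_stats, recurring=True)   # poll every five seconds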

Keywords: Mininet, OpenFlow, POX controller, SDN.

85 A Preliminary X-Ray Study on Human-Hair Microstructures for a Health-State Indicator

Authors: Phannee Saengkaew, Weerasak Ussawawongaraya, Sasiphan Khaweerat, Supagorn Rugmai, Sirisart Ouajai, Jiraporn Luengviriya, Sakuntam Sanorpim, Manop Tirarattanasompot, Somboon Rhianphumikarakit

Abstract:

We present a preliminary x-ray study of human-hair microstructures as a health-state indicator, in particular for cancer cases. Using uncomplicated and low-cost x-ray techniques, the human-hair microstructure was analyzed by wide-angle x-ray diffraction (XRD) and small-angle x-ray scattering (SAXS). The XRD measurements exhibited simple reflections at d-spacings of 28 Å, 9.4 Å and 4.4 Å, corresponding respectively to the periodic distance of the protein matrix of the human-hair macrofibrils and to the diameter and repeat spacing of the polypeptide alpha helices of the protofibrils of the human-hair microfibrils. Compared to the normal cases, the unhealthy cases, including the breast- and ovarian-cancer cases, showed higher normalized ratios of the x-ray diffraction peaks at 9.4 Å and 4.4 Å. This likely resulted from altered distributions of the microstructures caused by molecular alteration. For elemental analysis by x-ray fluorescence (XRF), the normalized quantitative ratios of zinc (Zn)/calcium (Ca) and iron (Fe)/calcium (Ca) were determined. Analogously, both the Zn/Ca and Fe/Ca ratios of the unhealthy cases were higher than those of the normal cases. Combining the structural analysis by XRD with the elemental analysis by XRF showed that the modified fibrous microstructures of the hair samples were related to their altered elemental compositions. Therefore, these microstructural and elemental analyses of hair samples will be beneficial in association with the diagnosis of cancer and genetic diseases. Such a method could lower the risk of such diseases through early diagnosis. However, a high-intensity x-ray source, a high-resolution x-ray detector, and more hair samples are needed to develop this x-ray technique further, and its efficiency would be enhanced by including skin and fingernail samples along with the human-hair analysis.

Keywords: Human-hair analysis, XRD, SAXS, breast cancer, health-state indicator

84 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up analysis of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The type of multiphase reactor used in this paper is a continuous stirred tank reactor (CSTR) heated by a half-pipe jacket. The thermal resistance of each section in the jacketed reactor system is studied to examine its effect on the heat balance of the reactor. It is found that the dominant contribution to the system thermal resistance is from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source for the oxygen reactor: the CANDU Super Critical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that the heat transfer rate provided for the CANDU-SCWR has to be 3-4 times higher than for the HTGR. The effect of the reactor aspect ratio is also examined, and it is found that increasing the aspect ratio decreases the number of reactors, while the rate of this decrease diminishes as the aspect ratio increases. Finally, a comparison between the heat balance results and existing mass balance results is performed, and it is found that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
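
The heat-balance argument above amounts to summing the thermal resistances in series between the jacket fluid and the reactor contents. The Python sketch below illustrates that generic calculation; the geometry, heat-transfer coefficients and temperature difference are assumed values for illustration and are not the paper's design numbers.

    import math

    h_jacket, h_slurry = 1500.0, 800.0       # assumed convective coefficients, W/m^2.K
    k_wall = 16.0                            # assumed wall conductivity (stainless steel), W/m.K
    r_in, r_out, height = 0.50, 0.52, 1.5    # assumed reactor radii and wetted height, m

    A_in = 2 * math.pi * r_in * height
    A_out = 2 * math.pi * r_out * height
    R_jacket = 1.0 / (h_jacket * A_out)                                 # jacket-side convection
    R_wall = math.log(r_out / r_in) / (2 * math.pi * k_wall * height)   # cylindrical wall conduction
    R_slurry = 1.0 / (h_slurry * A_in)                                  # reaction-side convection

    R_total = R_jacket + R_wall + R_slurry
    dT = 50.0                                # assumed jacket-to-contents temperature difference, K
    Q = dT / R_total                         # achievable heat input, W
    print(R_jacket, R_wall, R_slurry, Q)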

Keywords: Clean energy, Cu-Cl cycle, heat transfer, sustainable energy.

83 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting the data differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the data set constructed for each time point, which contains all the information required for freight moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data reporting event cases. The Grubbs test is often used as it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except when the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that offer better possibilities of extracting the best solution. For freight delivery management, genetic algorithm structures are used as a more effective technique. Accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
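
Since the Grubbs statistic is central to the validation step described above, a minimal Python implementation of the standard single-outlier, two-sided test is sketched below (alpha = 0.01 corresponds to the 99% confidence level mentioned; the data vector is a placeholder, not fuel-consumption data from the study).

    import numpy as np
    from scipy import stats

    def grubbs_is_outlier(x, alpha=0.01):
        x = np.asarray(x, dtype=float)
        n = x.size
        g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)         # Grubbs statistic
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)               # two-sided critical t value
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
        return g > g_crit, g, g_crit

    # Placeholder readings with one suspicious value
    print(grubbs_is_outlier([31.2, 29.8, 30.5, 30.1, 44.9, 30.7]))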

Keywords: Multi-objective decision support, analysis, data validation, freight delivery, multi-modal transportation, genetic programming methods.

82 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre

Authors: Sudip Dasgupta, Debosmita Pani, Kanchan Maji

Abstract:

Calcium phosphate cement (CPC), due to its high bioactivity and optimum bioresorbability, shows excellent bone regeneration capability. Despite this, it has limited applications as a bone implant because its macro-porous microstructure causes poor mechanical strength. The reinforcement of apatitic CPCs with a biocompatible glass fibre phase is an attractive area of research for improving their mechanical strength. Here, we study the setting behaviour of Si-doped and undoped α-tricalcium phosphate (α-TCP) based CPC and its reinforcement with the addition of E-glass fibre. α-TCP powders were prepared by solid-state sintering of CaCO3 and CaHPO4, and tetraethyl orthosilicate (TEOS) was used as the silicon source to synthesize Si-doped α-TCP powders. Both the initial and final setting times of the developed cement were delayed by the Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of the CPC in simulated body fluid (SBF) for 0 hours to 10 days. As Si incorporation in the crystal lattice stabilized the TCP phase, Si-doped CPC showed a slightly slower rate of conversion into the HA phase compared to undoped CPC. SEM images of the microstructure of the hardened CPC showed a smaller HA grain size in undoped CPC because of its premature setting and faster hydrolysis in SBF compared to Si-doped CPC. Premature setting generated micro- and macro-porosity in the undoped CPC structure, which resulted in lower mechanical strength compared to Si-doped CPC. It was found that the addition of 10 wt% E-glass fibre into Si-doped α-TCP increased the average diametral tensile strength (DTS) of the CPC from 8 MPa to 15 MPa, as the fibres resist crack propagation by deflecting the crack tip. Our study shows that biocompatible E-glass fibre, in optimum proportion in the CPC matrix, can enhance the mechanical strength of CPC without affecting its biocompatibility.
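
For context, the diametral tensile strength values quoted above are conventionally obtained from the failure load of a cylindrical specimen compressed across its diameter using the standard relation DTS = 2P / (π D t), where P is the fracture load and D and t are the specimen diameter and thickness; this is the textbook formula, not one quoted by the authors.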

Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength.

81 Verifying the Supremacy of Volume Modulated Arc Therapy Over Intensity Modulated Radiation Therapy: Pelvis Malignancies’ Perspective

Authors: M. Umar Farooq, T. Ahmad Afridi, M. Zia-Ul-Islam Arsalan, U. Hussain Haider, S. Ullah

Abstract:

Cancer, a leading fatal disease worldwide, can be treated with various techniques, including radiation therapy, which involves the use of ionizing radiation to target cancer cells. On the basis of source placement, radiation therapy is of two types, i.e., brachytherapy and External Beam Radiotherapy (EBRT). EBRT has evolved from 2-D conventional therapy to 3-D conformal radiotherapy (3D-CRT) and then Intensity-Modulated Radiotherapy (IMRT). IMRT improves dose conformity and the sparing of organs at risk. Volumetric Modulated Arc Therapy (VMAT) is a modern technique that delivers treatment in arcs with rotation of the gantry. In this report, a dosimetric comparison was performed between IMRT and VMAT. This study was conducted in the Radiotherapy Department of the Institute of Nuclear Medicine and Oncology Lahore (INMOL). Ten patients with prostate carcinoma were selected for this study to compare the methods. Simulation of these patients was done with the help of a CT simulator. All target volumes and organs were delineated by the oncologists. Then suitable fields/arcs were applied to cover the volumes effectively. This was followed by the optimization of plans for both techniques for every patient. Finally, a comparison of evaluation parameters, e.g., conformity index (CI), volume coverage, homogeneity index (HI), organ doses, and monitor units (MUs), was performed. We obtained better target conformity indices with VMAT (CI = 1.16) than with IMRT (CI = 1.24). VMAT was better in organ sparing too. VMAT also requires fewer MUs (733) than IMRT (2149). From this study, it is concluded that VMAT is a better treatment technique than IMRT. This technique will enhance treatment efficiency, as it takes less time to obtain the required results, and a much lower scatter dose will be delivered to the patient.

Keywords: 2-D Conventional Radiotherapy, 3-D Conformal Radiotherapy, Intensity Modulated Radiotherapy, Prostate Carcinoma, Radiotherapy, Volumetric Modulated Arc Therapy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 371
80 Language Politics and Identity in Translation: From a Monolingual Text to Multilingual Text in Chinese Translations

Authors: Chu-Ching Hsu

Abstract:

This paper focuses on how government-led language policies and political changes in Taiwan shape the choice of languages in translations, and on which translation strategies the translator employs to reveal his or her language ideology behind the power struggles and decision-making. Framed by Lefevere's theoretical concept of translation as rewriting, and carried out as a diachronic, chronological study, the paper specifically investigates the language ideology and the translator's idiolect in Chinese translations of Anglo-American novels. The examples drawn on to explore these issues are taken from different Chinese renditions of Mark Twain's English-language novel The Adventures of Huckleberry Finn, in which several dialogues are written in the colloquial language and dialect of the American state of Mississippi. Adopting a corpus methodology, many examples are extracted from the translated texts and the source text to illuminate how translators in Taiwan deal with the dialectal features encoded in Twain's work, and how different Chinese translations have been used by Taiwanese translators to conform to the language policies and to express their language identity textually in different periods of the past five decades, from the 1960s onward. The findings of this study suggest that the use of Taiwanese dialect and language patterns in translations is indeed related to the mother-tongue movement and the language ideology of the translator, as well as to the issue of language identity raised on the island of Taiwan. Furthermore, this study confirms that changes of political power in Taiwan have significantly affected language policy, whether assimilationist, pluralist or multiculturalist, and have turned Taiwan from a monolingual into a multilingual society, in which language ideology and identity are revealed not only in people's daily communication but also in written translations.

Keywords: Language politics and policies, literary translation, mother-tongue, multiculturalism, translator’s ideology.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1126
79 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties of a greenhouse, which include varying lighting conditions, shadowing and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on the hue, saturation and value channels is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, the period of the day, the different cameras and the thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon gave the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. With these values, the precision and recall averaged over all images were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
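
A minimal sketch of the HSV segmentation and blob-counting step is given below, assuming OpenCV and NumPy, OpenCV's 0-179 hue scale (so the reported hue band 0.12-0.18 maps to roughly 22-32), and hypothetical saturation/value bounds and minimum blob area; it omits the adaptive darker/lighter split and is not the authors' implementation.

import cv2
import numpy as np

def count_yellow_flowers(image_path, min_area=150):
    img = cv2.imread(image_path)                      # BGR image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Normalised hue 0.12-0.18 -> roughly 22-32 on OpenCV's 0-179 scale (yellow);
    # the saturation/value lower bounds are assumed, not taken from the paper.
    mask = cv2.inRange(hsv, (22, 60, 60), (32, 255, 255))
    # Morphological opening removes speckle noise before counting blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be flowers (min_area is assumed).
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area)

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# e.g. f1_score(0.74, 0.75) is approximately 0.745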

Keywords: Agricultural engineering, computer vision, image processing, flower detection.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2367
78 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction

Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai

Abstract:

Distant-talking voice-based HCI systems suffer from performance degradation due to the mismatch between the acoustic speech observed at runtime and the acoustic model obtained during training. The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustic characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing the instantaneous variation in voice power and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian mixture models (GMMs), pre-trained to capture both the speech power and the room properties, are used to predict the best-matching distance of the speech source. Pre-computed statistical priors corresponding to that distance are then selected to correct the statistics of the generic model, which was frozen during training. The model statistics are thus post-conditioned to match the power of the instantaneous speech acoustics at runtime, which improves the likelihood of predicting the correct speech command at farther distances. We experimented using real data recorded inside two rooms. The experimental evaluation shows that voice recognition performance using our method is more robust to changes in distance than the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.
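
A rough sketch of the distance-selection idea, assuming scikit-learn's GaussianMixture, log-power feature vectors, and a simple additive prior per distance; it is an illustrative reconstruction under these assumptions, not the authors' system.

import numpy as np
from sklearn.mixture import GaussianMixture

def train_distance_gmms(features_by_distance, n_components=4):
    # features_by_distance: dict mapping a known distance (metres) to an array
    # of shape (frames, dims) of log-power features recorded at that distance.
    return {d: GaussianMixture(n_components=n_components).fit(x)
            for d, x in features_by_distance.items()}

def predict_distance(gmms, runtime_features):
    # Pick the distance whose GMM gives the highest average log-likelihood
    # for the runtime frames.
    return max(gmms, key=lambda d: gmms[d].score(runtime_features))

def compensate_means(frozen_means, priors, distance):
    # Shift the frozen acoustic-model means by the pre-computed offset
    # (prior) associated with the selected distance.
    return frozen_means + priors[distance]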

Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1885
77 Wind Energy Status in Turkey

Authors: Mustafa Engin Başoğlu, Bekir Çakir

Abstract:

Since a large part of electricity is generated from fossil-based resources, energy is an important agenda item for countries. In this context, renewable energy sources are an alternative to conventional sources, given the depletion of fossil resources and growing awareness of climate change and global warming. Solar, wind and hydropower are the main renewable energy sources. Among them, wind energy is a promising source for Turkey, since the installed capacity of wind power increased approximately eightfold between 2008 and November 2014. Furthermore, the signing of the Kyoto Protocol can be regarded as a milestone for Turkey's energy policy. The Turkish government announced Vision 2023 (energy targets for 2023) in the 2010-2014 Strategic Plan prepared by the Ministry of Energy and Natural Resources (MENR). The energy targets in this plan can be summarized as follows: renewable energy sources will provide 30% of total electricity generation by 2023; the installed capacity of wind energy will reach 20 GW by 2023; other renewable energy sources such as solar, hydropower and geothermal will be encouraged with new incentive mechanisms; and dependence on foreign energy will be reduced for sustainability and energy security. Moreover, since Turkey is surrounded by sea on three sides, its wind energy potential is well suited to wind power applications. As of November 2014, the total installed capacity of wind power plants was 3.51 GW, with a further 1.16 GW under construction. The Turkish government also encourages locally manufactured equipment. In this context, MILRES, a project funded by the private sector, universities and TUBİTAK, is an important project aimed at promoting the use of wind energy in electricity generation. Within this project, a 500 kW wind turbine has been produced and will be installed at the beginning of 2015. After that, using the experience gained in the first phase of the project, a 2.5 MW wind turbine will be manufactured at industrial scale.

Keywords: Wind energy, wind speed, Vision 2023, MILRES (national wind energy system), wind energy potential, Turkey.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3272
76 64 bit Computer Architectures for Space Applications – A study

Authors: Niveditha Domse, Kris Kumar, K. N. Balasubramanya Murthy

Abstract:

More recent satellite projects/programs make extensive use of real-time embedded systems. 16-bit processors that meet the MIL-STD-1750 standard architecture have been used in on-board systems, and most space applications have been written in Ada. Looking forward, 32-bit and 64-bit processors are needed in spacecraft computing, and an effort to study and survey 64-bit architectures for space applications is therefore desirable. This will also drive significant technology development in terms of VLSI and software tools for Ada (as the legacy code is in Ada). There are several basic requirements for a special-purpose processor of this kind. They include radiation-hardened (RadHard) devices, very low power dissipation, compatibility with existing operational systems, scalable architectures for higher computational needs, reliability, higher memory and I/O bandwidth, predictability, real-time operating system support, and manufacturability. Further considerations include the selection of FPGA devices and EDA tool chains, the design flow, partitioning of the design, pin count, performance evaluation, timing analysis, etc. This project comprises a brief study of the 32- and 64-bit processors readily available on the market, and the design/fabrication of a 64-bit RISC processor, named RISC MicroProcessor, with the added functionality of an extended double-precision floating point unit and a 32-bit signal processing unit acting as co-processors. In this paper, we emphasize the ease and importance of using open cores (OpenSparc T1 Verilog RTL) and open-source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE are also used for synthesis where appropriate.

Keywords: RISC MicroProcessor, RPC – RISC Processor Core, PBX – Processor to Block Interface part of the Interconnection Network, BPX – Block to Processor Interface part of the Interconnection Network, FPU – Floating Point Unit, SPU – Signal Processing Unit, WB – Wishbone Interface, CTU – Clock and Test Unit

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2248
75 Study of the Energy Efficiency of Buildings under Tropical Climate with a View to Sustainable Development: Choice of Material Adapted to the Protection of the Environment

Authors: Guarry Montrose, Ted Soubdhan

Abstract:

In the context of sustainable development and climate change, adapting buildings to the climatic context in hot climates is a necessity if we want to improve living conditions in housing and reduce the risks to the health and productivity of occupants caused by thermal discomfort in buildings. A wide variety of efficient solutions exist, but at high cost. In developing countries, and especially tropical countries, we need technology with a very limited cost that is affordable for everyone, energy efficient, and protective of the environment. Biosourced insulation is a product based on plant fibers, animal products or products from recyclable paper or clothing. Its development meets the objectives of maintaining biodiversity, reducing waste and protecting the environment. In tropical or hot countries, the aim is to protect the building from solar thermal radiation, a source of discomfort. This work is in line with the logic of energy control and environmental protection: the approach is to make building occupants comfortable, reduce carbon dioxide (CO2) emissions and decrease energy consumption (energy efficiency). We chose to study the thermo-physical properties of banana leaves and sawdust, especially their thermal conductivities; direct measurements were made using the flash method and the hot plate method. We also measured the heat flow on both sides of each sample by the hot box method. The results of these experiments show that these materials are very effective when used as insulation. We also conducted a building thermal simulation under Design Builder software, using banana leaves as one of the materials, with the air-conditioning load and CO2 release used as performance indicators. When the air-conditioned building cell is protected on the roof by banana leaves, with the material integrated into the walls and solar protection of the glazing, it saves up to 64.3% of energy and avoids 57% of CO2 emissions.
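
At steady state, the hot plate measurement mentioned above reduces to Fourier's law, k = (Q / A) * e / (T_hot - T_cold), with Q the heat flow through a sample of area A and thickness e. The sketch below uses hypothetical values, not the paper's measurements.

def thermal_conductivity(q_watts, area_m2, thickness_m, t_hot, t_cold):
    heat_flux = q_watts / area_m2                        # W/m^2 through the sample
    return heat_flux * thickness_m / (t_hot - t_cold)    # W/(m.K)

# Hypothetical sample: 3 W through a 0.25 m^2, 2 cm thick panel with a 10 K gradient
print(thermal_conductivity(3.0, 0.25, 0.02, 35.0, 25.0))  # -> 0.024 W/(m.K)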

Keywords: Plant fibers, tropical climates, sustainable development, waste reduction.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 552