Search results for: single error upset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6283


5413 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India

Authors: Mamta Rana, K. K. Singh, Nisha Kumari

Abstract:

Simulation models, with their interacting soil-weather-plant-atmosphere system, are important tools for assessing crops under changing climatic conditions. The CERES-Wheat and CERES-Rice models (DSSAT v4.6) were calibrated and evaluated for Haryana, India, one of the country's major wheat- and rice-producing states. Simulation runs were made under irrigated conditions with three N-P-K fertilizer application doses to estimate crop yield and other growth parameters along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated LAI against remote-sensing-derived LAI; the remote sensing LAI product provides spatial, timely, and accurate assessment of the crop. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
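The validation measure used above is a simple relative deviation; a minimal sketch (the function name and sample values are illustrative, not from the paper):

```python
def error_percentage(observed, simulated):
    # Per-cent deviation of the simulated value from the observed one,
    # the measure used to validate yield and yield components.
    return abs(observed - simulated) / observed * 100.0

# e.g. an observed yield of 100 units against a simulated 92 units
err = error_percentage(100.0, 92.0)   # 8.0, i.e. within the 10% threshold
```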

Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient

Procedia PDF Downloads 297
5412 The Role of MAOA Gene in the Etiology of Autism Spectrum Disorder in Males

Authors: Jana Kisková, Dana Gabriková

Abstract:

The monoamine oxidase A gene (MAOA) is a candidate gene implicated in many neuropsychiatric disorders, including autism spectrum disorder (ASD). This meta-analytic review evaluates the relationship between ASD and MAOA markers such as the 30 bp variable number tandem repeat in the promoter region (uVNTR) and single nucleotide polymorphisms (SNPs), using findings from recently published studies. In Caucasian males, the risk of developing ASD appears to increase with the presence of the 4-repeat allele in the promoter region of the MAOA gene, whereas no differences were found between autistic patients and controls in Egyptian, West Bengali, and Korean populations. Some studies point to the importance of specific haplotype groups of SNPs and to the interaction of MAOA with other genes (e.g., FOXP2 or SRY). The results of existing studies are insufficient, and further research is needed.

Keywords: autism spectrum disorder, MAOA, uVNTR, single nucleotide polymorphism

Procedia PDF Downloads 381
5411 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers already in the literature that measured the impact of public services (e.g., hospitals, schools) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that bounds the administrative district, within the city, to which they belong. Such an assumption introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district. The case study reported here investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three nesting levels of the Italian administrative units: the (20) regions, the (110) provinces, and the (8,094) municipalities. To carry out this study, two decisions had to be made: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process it. The latter aspect involves: b.1) designing algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) coding them in a programming language; b.3) executing them; and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the nesting hierarchy of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry), province(id, name, regionId, geometry), region(id, name, geometry). The adoption of DBMS technology allows us to implement steps a) and b) easily.
In particular, step b) is simplified dramatically by calling spatial operators and built-in spatial User Defined Functions within SQL queries against the Geo DB. The major findings from our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of citizens to the centroid of the administrative unit of reference is a few kilometres (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries.
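The maximum measurement error defined above can be sketched in pure Python for a district polygon given as a vertex list. The study itself works through PostGIS spatial operators (e.g., ST_Centroid); this standalone sketch uses a simple vertex-average centroid and assumes a convex boundary, so the worst case is attained at a vertex:

```python
import math

def centroid(poly):
    # Simple vertex-average centroid of a polygon given as (x, y) pairs
    # (PostGIS's ST_Centroid computes the true area-weighted centroid).
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return sum(xs) / len(poly), sum(ys) / len(poly)

def max_distance_error(poly):
    # Greatest distance between the centroid and the polygon boundary:
    # for a convex boundary this is attained at a vertex, and it bounds
    # the error made by assimilating every dwelling to the centroid.
    cx, cy = centroid(poly)
    return max(math.hypot(x - cx, y - cy) for x, y in poly)

# 4x4 square district: centroid (2, 2), worst-case error = sqrt(8)
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
worst = max_distance_error(square)
```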

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 367
5410 Optimal Replacement Period for a One-Unit System with Double Repair Cost Limits

Authors: Min-Tsai Lai, Taqwa Hariguna

Abstract:

This paper presents a periodical replacement model for a system, considering the concept of single and cumulative repair cost limits simultaneously. The failures are divided into two types. Minor failure can be corrected by minimal repair and serious failure makes the system breakdown completely. When a minor failure occurs, if the repair cost is less than a single repair cost limit L1 and the accumulated repair cost is less than a cumulative repair cost limit L2, then minimal repair is executed, otherwise, the system is preventively replaced. The system is also replaced at time T or at serious failure. The optimal period T minimizing the long-run expected cost per unit time is verified to be finite and unique under some specific conditions.
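The replacement rule described above can be sketched as a Monte Carlo simulation of one policy; every rate, cost, and distribution below is an illustrative assumption, not a value from the paper, which instead derives the optimal T analytically:

```python
import random

def simulate_policy(T, L1, L2, minor_rate, serious_rate,
                    repair_cost, replace_cost, horizon=2000.0, seed=1):
    # Monte Carlo estimate of the long-run expected cost per unit time for
    # the double repair-cost-limit policy: minimal repair while the single
    # cost stays below L1 and the accumulated cost below L2, replacement
    # at age T or at a serious failure.
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    t = acc = 0.0                      # system age and accumulated repair cost
    while total_time < horizon:
        dt_minor = rng.expovariate(minor_rate)
        dt_serious = rng.expovariate(serious_rate)
        dt = min(dt_minor, dt_serious, T - t)
        t += dt
        if t >= T or dt == dt_serious:
            total_cost += replace_cost          # scheduled or failure replacement
            total_time += t
            t = acc = 0.0
        else:
            c = rng.uniform(*repair_cost)       # minor failure: sample repair cost
            if c < L1 and acc + c < L2:
                total_cost += c                 # minimal repair
                acc += c
            else:
                total_cost += replace_cost      # preventive replacement
                total_time += t
                t = acc = 0.0
    return total_cost / total_time

cost_rate = simulate_policy(T=10.0, L1=5.0, L2=20.0, minor_rate=0.3,
                            serious_rate=0.05, repair_cost=(1.0, 8.0),
                            replace_cost=50.0)
```

Sweeping T over a grid of candidate values and picking the minimizer of `simulate_policy` approximates the optimal period the paper characterizes analytically.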

Keywords: repair-cost limit, cumulative repair-cost limit, minimal repair, periodical replacement policy

Procedia PDF Downloads 359
5409 The Biomechanical Assessment of Balance and Gait for Stroke Patients and the Implications in the Diagnosis and Rehabilitation

Authors: A. Alzahrani, G. Arnold, W. Wang

Abstract:

Background: Stroke commonly occurs in middle-aged and elderly populations, and early diagnosis of stroke is still difficult. Patients who have suffered a stroke have different balance and gait patterns from healthy people. Advanced motion analysis techniques are routinely used in the clinical assessment of cerebral palsy. However, so far, little research has been done on the direct diagnosis of early stroke using motion analysis. Objectives: The aim of this study was to investigate whether patients with stroke have different balance and gait from healthy people, and which biomechanical parameters could be used to predict and diagnose patients at potential risk of stroke. Methods: Thirteen patients with stroke were recruited as subjects, and their gait and balance were analysed. Twenty age-matched healthy subjects participated as a control group. All subjects' gait and balance data were collected using Vicon Nexus® to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test. Then, they were asked to walk along a 10 m walkway at their comfortable speed. Participants performed 6 trials of single-leg balance for each side and 10 trials of walking. From the recorded trials, three good trials were analysed using the Vicon Plug-in-Gait model to obtain gait parameters (e.g., walking speed, cadence, stride length) and joint parameters (e.g., joint angles, forces, moments). Results: The temporal-spatial variables of the stroke subjects were compared with those of the healthy subjects, and a significant difference (p < 0.05) was found between the groups. Step length, speed, and cadence were lower in stroke subjects than in the healthy group.
The stroke patient group showed significantly decreased gait speed (mean ± SD: 0.85 ± 0.33 m/s), cadence (96.71 ± 16.14 steps/min), and step length (0.509 ± 0.17 m) compared with the healthy group, whose gait speed was 1.2 ± 0.11 m/s, cadence 112 ± 8.33 steps/min, and step length 0.648 ± 0.43 m. Moreover, patients with stroke showed significant differences in ankle, hip, and knee joint kinematics in the sagittal and coronal planes. The single-leg balance test also differed significantly between groups: single-leg stance time was shorter in stroke patients (5.97 ± 6.36 s) than in the healthy group (14.36 ± 10.20 s). Conclusion: Our results show significant differences between stroke patients and healthy subjects in various aspects of gait analysis and the balance test. As a consequence of these findings, biomechanical parameters such as joint kinematics, gait parameters, and the single-leg stance balance test could be used in clinical practice to predict and diagnose patients at high risk of further stroke.

Keywords: gait analysis, kinetics, kinematics, single-leg stance, stroke

Procedia PDF Downloads 136
5408 The Effects of Kicking Leg Preference on the Bilateral Balance Ability Asymmetries in Collegian Football Players

Authors: Mehmet Yildiz, Mehmet Kale

Abstract:

The primary aim of the present study was to identify bilateral balance asymmetries when comparing the dominant leg (DL) with the non-dominant leg (NDL) in collegiate soccer players. The secondary aim was to compare the inter-limb asymmetry index (ASI) by kicking preference (right-dominant vs. left-dominant). 34 right-leg-dominant (RightDL) (age: 21.12±1.85, height: 174.50±5.18, weight: 69.42±6.86) and 23 left-leg-dominant (LeftDL) (age: 21.70±2.03, height: 176.2±6.27, weight: 68.73±5.96) collegiate football players were tested for bilateral static and dynamic balance. Balance ability was assessed by measuring centre-of-pressure deviation on a single leg. Single-leg static and dynamic balance scores and the inter-limb asymmetry index (ASI) were determined. Student's t-tests were used to compare dominant and non-dominant leg balance scores and to compare the inter-limb asymmetry indices of RightDL and LeftDL players. The results showed significant differences in the dynamic balance scores in favour of the non-dominant leg (DL: 738±211 vs. NDL: 606±226, p < 0.01). Also, LeftDL players had a significantly higher inter-limb asymmetry index than RightDL players for both static (RightDL: -7.07±94.91 vs. LeftDL: -183.19±354.05, p < 0.01) and dynamic (RightDL: 1.73±49.65 vs. LeftDL: 27.08±23.34, p < 0.05) balance scores. In conclusion, bilateral dynamic balance asymmetries may be affected by predominant use of a single leg in mobilization workouts. Because of their higher inter-limb asymmetry index, left-leg-dominant players should be screened and trained to minimize balance asymmetry.
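The abstract does not give the exact formula for the inter-limb asymmetry index; one common definition, sketched below, expresses the dominant/non-dominant difference as a percentage of the mean of the two scores (the study's actual formula may differ):

```python
def asymmetry_index(dominant, nondominant):
    # One common inter-limb asymmetry index, in per cent: the difference
    # between limbs relative to their mean score.
    return 100.0 * (dominant - nondominant) / ((dominant + nondominant) / 2.0)

# dynamic balance scores reported above: DL 738 vs. NDL 606
asi = asymmetry_index(738.0, 606.0)   # ≈ 19.64 %
```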

Keywords: bilateral balance, asymmetries, dominant leg, leg preference

Procedia PDF Downloads 416
5407 Antihyperglycemic Effect of Aqueous Extract of Foeniculum vulgare Miller in Diabetic Mice

Authors: Singh Baljinder, Sharma Navneet

Abstract:

Foeniculum vulgare Miller is a medicinal and aromatic plant belonging to the family Apiaceae (Umbelliferae); it is a hardy, perennial, umbelliferous herb with yellow flowers and feathery leaves. The aim was to study the control of blood glucose in alloxan-induced diabetic mice. Extraction was performed by the continuous hot percolation method using a Soxhlet apparatus, with 95% ethanol as solvent. Male albino mice weighing about 20-25 g, obtained from Guru Angad Dev University of Veterinary Science, Ludhiana, were used for the study. Diabetes was induced by a single i.p. injection of 125 mg/kg of alloxan monohydrate in sterile saline (11). After 48 h, animals with a serum glucose level above 200 mg/dl (diabetic) were selected for the study. Blood samples were collected from the mice by the retro-orbital puncture (ROP) technique. Serum glucose levels were determined by the glucose oxidase and peroxidase method. A single oral dose of aqueous fennel extract (25, 50, or 100 mg/kg, p.o.) in diabetic Swiss albino mice reduced the serum glucose level after 45 min, with the maximum reduction seen at 100 mg/kg. At all doses except 25 mg/kg, the extract caused a significant decrease in blood glucose. It may be concluded that the aqueous extract of fennel decreased the serum glucose level and improved glucose tolerance, owing to the presence of an aldehyde moiety. The aqueous extract of fennel thus has antihyperglycemic activity, as it lowers the serum glucose level in diabetic mice.

Keywords: Foeniculum vulgare Miller, antihyperglycemic, diabetic mice, Umbelliferae

Procedia PDF Downloads 279
5406 Thermal Transformation and Structural Study of Se90Te7Cu3 Chalcogenide Glass

Authors: Farid M. Abdel-Rahim

Abstract:

In this study, Se90Te7Cu3 chalcogenide glass was prepared using the melt quenching technique. The amorphous nature of the as-prepared samples was confirmed by scanning electron microscopy (SEM). Results of differential scanning calorimetry (DSC) under non-isothermal conditions on the bulk material are reported and discussed. They show that these glasses exhibit a single-stage glass transition and a single-stage crystallization at all heating rates. The glass transition temperature (Tg), the onset crystallization temperature (Tc), and the peak crystallization temperature (Tp) were found to depend on the composition and heating rate. The activation energy for glass transition (Et), the activation energy of the amorphous-crystalline transformation (Ec), the crystallization reaction rate constant (Kp), and the constants n and m related to the crystallization mechanism of the bulk samples have been determined using different formulations.

Keywords: chalcogenides, heat treatment, DSC, SEM, glass transition, thermal analysis

Procedia PDF Downloads 388
5405 Control Scheme for Single-Stage Boost Inverter for Grid-Connected Photovoltaic

Authors: Mohammad Reza Ebrahimi, Behnaz Mahdaviani

Abstract:

The growth of renewable sources such as photovoltaics is a response to environmental pollution. Because photovoltaic arrays generate power at low voltage, the generated voltage must first be stepped up. Distributed generation usually injects its power into the AC grid, so after the voltage boost an inverter is needed to convert DC power to AC power. This conventionally requires two converters in series, which increases cost and complexity and lowers efficiency. In this paper, a single-stage inverter is utilized to boost and invert in one stage. Its control is simpler, and its initial cost is lower compared with conventional double-stage inverters. A simple control scheme is used to regulate active power while minimizing total harmonic distortion (THD) in the injected current. Simulations in MATLAB demonstrate better outputs compared with conventional approaches.
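The THD that the control scheme minimizes is the ratio of the harmonic content of the injected current to its fundamental; a minimal sketch from harmonic amplitudes (the sample values are illustrative, not simulation results from the paper):

```python
import math

def thd(amplitudes):
    # Total harmonic distortion: RMS of harmonics 2..N over the fundamental,
    # given the amplitude spectrum [fundamental, 2nd harmonic, 3rd, ...].
    fundamental, harmonics = amplitudes[0], amplitudes[1:]
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

ratio = thd([10.0, 3.0, 4.0])   # sqrt(9 + 16) / 10 = 0.5
```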

Keywords: maximum power point tracking, boost inverter, control strategy, three phase inverter

Procedia PDF Downloads 365
5404 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and colour parameters of the grain, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2, and N-3, were constructed and their performances compared. The best-fit model was N-1, whose structure consisted of an input layer with 11 parameters, 2 hidden layers, and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and colour parameters of the grain were used as input parameters, and variety as the output parameter. R2, Root Mean Square Error, and Mean Error for the N-1 model were found to be 99.99%, 0.00074, and 0.009%, respectively. All results obtained by the N-1 model were quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
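The N-1 topology (11 inputs, 2 hidden layers, 1 output) can be sketched as a plain forward pass; the hidden-layer widths, random weights, and tanh activation below are assumptions for illustration, not the authors' trained network:

```python
import math
import random

def init_layer(n_in, n_out, rng):
    # Random weights and zero biases for one fully connected layer.
    w = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

def forward(x, layers):
    # Plain forward pass with tanh activations at every layer.
    for w, b in layers:
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(w, b)]
    return x

rng = random.Random(0)
# 11 inputs -> two hidden layers (widths assumed) -> 1 output
net = [init_layer(11, 8, rng), init_layer(8, 8, rng), init_layer(8, 1, rng)]
y = forward([0.1] * 11, net)
```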

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 175
5403 Design of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang

Abstract:

Internet Service Providers face endless demands for higher bandwidth and data throughput as new services and applications require more capacity, and users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time division and wavelength division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NG-PON2). According to the Full Service Access Network (FSAN) group, hybrid Time and Wavelength Division Multiplexing (TWDM) is considered the best solution for implementing NG-PON2. To co-exist with or replace current PON technologies, multiple TWDM wavelengths can be deployed simultaneously. In our design, 8 pairs of wavelengths are multiplexed and transmitted over 40 km of optical fiber, and on the receiving side they are distributed among 256 users, showing that the solution is reliable for implementation at an acceptable data rate. From the results, it can be concluded that the overall performance, quality factor, and bandwidth of the network are increased, and the bit error rate is minimized by this approach.
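The reported link between quality factor and bit error rate follows the standard Gaussian-noise approximation for an optical receiver; a minimal sketch (the paper's exact receiver model is not specified here):

```python
import math

def ber_from_q(q):
    # Gaussian-noise approximation relating Q factor to bit error rate:
    # BER ≈ 0.5 * erfc(Q / sqrt(2)); a higher Q means a lower BER.
    return 0.5 * math.erfc(q / math.sqrt(2.0))
```

For example, a Q factor of 6 corresponds to a BER around 1e-9, which is why raising Q is equivalent to driving the error rate down.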

Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing

Procedia PDF Downloads 64
5402 A New Approach for Improving Accuracy of Multi Label Stream Data

Authors: Kunal Shah, Swati Patel

Abstract:

Many real-world problems involve data that can be considered as multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as the learner must be able to adapt to change using limited time and memory. Classification is used to predict the class of an unseen instance as accurately as possible. Multi-label classification is a variant of single-label classification in which a set of labels is associated with a single instance; it is used by modern applications such as text classification, functional genomics, image classification, and music categorization. This paper introduces the task of multi-label classification, methods for multi-label classification, and evaluation measures for multi-label classification. A comparative analysis of multi-label classification methods was also carried out, first through theoretical study and then through simulation on various data sets.
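The binary relevance method listed in the keywords decomposes a multi-label problem into one independent binary classifier per label; a toy sketch (the labels, features, and threshold classifiers are invented for illustration):

```python
def binary_relevance_predict(x, classifiers):
    # Binary relevance applies one independent binary classifier per label
    # and returns the per-label decisions for instance x.
    return {label: clf(x) for label, clf in classifiers.items()}

# toy per-label classifiers: thresholds on individual features
classifiers = {
    "sports":   lambda x: x[0] > 0.5,
    "politics": lambda x: x[1] > 0.5,
}
labels = binary_relevance_predict([0.9, 0.1], classifiers)
```

In a streaming setting, each per-label classifier would additionally be updated incrementally as labelled instances arrive, so the ensemble can track concept drift.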

Keywords: binary relevance, concept drift, data stream mining, MLSC, multiple window with buffer

Procedia PDF Downloads 577
5401 Integrated Location-Allocation Planning in Multi Product Multi Echelon Single Period Closed Loop Supply Chain Network Design

Authors: Santhosh Srinivasan, Vipul Garhiya, Shahul Hamid Khan

Abstract:

Environmental performance, along with social performance, is becoming a vital factor for industries seeking to achieve global standards, and with a good environmental policy global industries differentiate themselves from their competitors. This paper concentrates on a multi-stage, multi-product, multi-period manufacturing network. Single-objective mathematical models minimizing total cost are considered for the entire forward and reverse supply chains. Five different problems, obtained by varying the number of facilities, are considered for illustration. M-MOGA, the Shuffled Frog Leaping Algorithm (SFLA), and CPLEX are used to find the optimal solution of the mathematical model.

Keywords: closed loop supply chain, genetic algorithm, random search, multi period, green supply chain

Procedia PDF Downloads 387
5400 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE

Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao

Abstract:

For impact monitoring of distributed structures, the traditional positioning methods are based on time differences and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it in the blackboard module; (2) the triangulation agent gets its initial parameters by accessing the initial point; (3) the triangulation agent constantly accesses the blackboard module to update its initial parameters and also logs its calculated point onto the blackboard; (4) when the subsequent calculated point and the initial calculated point agree within the allowable error, the coordination fusion process is finished. This paper presents a multi-agent collaboration method built on the JADE framework. The JADE platform consists of several agent containers, with agents running in each container. Thanks to JADE's management and debugging tools, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
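The four fusion steps above can be sketched as a loop over a shared blackboard; the agents below are stand-in functions rather than real JADE agents, and the convergence behaviour is invented for illustration:

```python
class Blackboard:
    # Minimal shared blackboard: agents post and read named entries.
    def __init__(self):
        self.entries = {}
    def post(self, key, value):
        self.entries[key] = value
    def read(self, key):
        return self.entries.get(key)

def fuse(arc_agent, triangulation_agent, board, tol=0.5, max_iter=50):
    # Steps (1)-(4) from the abstract: the arc agent posts an initial
    # point, the triangulation agent repeatedly refines it via the
    # blackboard until two successive estimates agree within `tol`.
    board.post("point", arc_agent())                  # step (1)
    prev = board.read("point")                        # step (2)
    for _ in range(max_iter):
        new = triangulation_agent(prev)               # step (3)
        board.post("point", new)
        if abs(new[0] - prev[0]) + abs(new[1] - prev[1]) < tol:
            return new                                # step (4)
        prev = new
    return prev

# stand-in agents: the arc method gives a rough point, triangulation
# pulls it halfway toward the (unknown) true impact point (8, 8)
arc_agent = lambda: (10.0, 10.0)
triangulation_agent = lambda p: ((p[0] + 8.0) / 2.0, (p[1] + 8.0) / 2.0)
fused = fuse(arc_agent, triangulation_agent, Blackboard())
```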

Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE

Procedia PDF Downloads 171
5399 Big Brain: A Single Database System for a Federated Data Warehouse Architecture

Authors: X. Gumara Rigol, I. Martínez de Apellaniz Anzuola, A. Garcia Serrano, A. Franzi Cros, O. Vidal Calbet, A. Al Maruf

Abstract:

Traditional federated architectures for data warehousing work well when corporations have existing regional data warehouses and there is a need to aggregate data at a global level. Schibsted Media Group has been maturing from a decentralised organisation into a more globalised one, and needed to build some of the regional data warehouses for its brands at the same time as the global one. In this paper, we present the architectural alternatives studied and why a custom federated approach was the recommendation taken forward to implementation. Although the data warehouses are logically federated, the implementation uses a single database system, which brings many advantages: cost reduction, improved data access for global users, a common data model enabling consumers to perform detailed analysis across different geographies, and a flexible layer for local specific needs in the same place.

Keywords: data integration, data warehousing, federated architecture, Online Analytical Processing (OLAP)

Procedia PDF Downloads 231
5398 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)

Authors: N. E Okoligwe, Okezie A. Ihugba

Abstract:

Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, the relationship between electricity consumption and economic growth manifests differently across countries, according to previous studies. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. To do this, the paper tests the validity of the modernization or dependency hypothesis by employing econometric tools such as the Augmented Dickey-Fuller (ADF) test, the Johansen co-integration test, the Error Correction Mechanism (ECM), and the Granger causality test on time series data from 1971-2012. Granger causality is found to run neither from electricity consumption to real GDP nor from GDP to electricity consumption during the study period. The null hypothesis is accepted at the 5 per cent level of significance, the probability values (0.2251 and 0.8251) being greater than 5 per cent. A likely explanation is that both variables are determined by other factors, such as growth in the urban population, the unemployment rate, and the number of Nigerians who benefit from increases in GDP; moreover, the increase in electricity demand is not determined by the increase in GDP (income) over the period of study, because electricity demand has always been greater than consumption. Consequently, policy makers in Nigeria should give priority, in the early stages of reconstruction, to building capacity additions and infrastructure development in the electric power sector, as this would foster sustainable economic growth in Nigeria.
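The Granger test underlying this result compares a restricted autoregression of one series on its own lag with an unrestricted model that adds the other series' lag. A self-contained one-lag sketch on synthetic data is shown below; the paper itself uses standard econometric software, and the toy series here are invented so that x visibly leads y:

```python
def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gaussian elimination
    # (adequate for the tiny systems used here).
    k, n = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

def rss(X, y, b):
    # Residual sum of squares of the fitted linear model.
    return sum((yi - sum(bj * xj for bj, xj in zip(b, xi))) ** 2
               for xi, yi in zip(X, y))

def granger_f(y, x):
    # One-lag Granger F-statistic for "x Granger-causes y": compare the
    # restricted model y_t ~ y_{t-1} with y_t ~ y_{t-1} + x_{t-1}.
    n = len(y) - 1
    yy = y[1:]
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    rss_r = rss(Xr, yy, ols(Xr, yy))
    rss_u = rss(Xu, yy, ols(Xu, yy))
    return (rss_r - rss_u) / (rss_u / (n - 3))

# synthetic series in which lagged x clearly helps predict y
x = [0.0, 1.0, 0.5, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0]
y = [0.0] + [0.8 * x[t - 1] + 0.05 * (t % 3) for t in range(1, 10)]
f_stat = granger_f(y, x)
```

A large F-statistic (compared against the F distribution's critical value) rejects the null of no causality; in the paper's data the null could not be rejected in either direction.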

Keywords: economic growth, electricity consumption, error correction mechanism, granger causality test

Procedia PDF Downloads 301
5397 Up-Flow Sponge Submerged Biofilm Reactor for Municipal Sewage Treatment

Authors: Saber A. El-Shafai, Waleed M. Zahid

Abstract:

An up-flow submerged biofilm reactor packed with sponge was investigated for sewage treatment. The reactor was operated for two cycles as a single aerobic reactor (1-1 at 3.5 L/L·day HLR and 1-2 at 3.8 L/L·day HLR) and for four cycles as a single anaerobic/aerobic reactor: 2-1 and 2-2 at low HLR (3.7 and 3.5 L/L·day) and 2-3 and 2-4 at high HLR (5.1 and 5.4 L/L·day). During the aerobic cycles, 50% effluent recycling significantly reduced the system performance except for phosphorus. In the case of the anaerobic/aerobic reactor, effluent recycling significantly improved system performance at low HLR, while at high HLR only phosphorus removal was improved. Excess sludge production was limited to 0.133 g TSS/g COD, with a better sludge volume index (SVI) in the anaerobic/aerobic cycles (54.7 versus 58.5 mL/g).

Keywords: aerobic, anaerobic/aerobic, up-flow, submerged biofilm, sponge

Procedia PDF Downloads 292
5396 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion

Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang

Abstract:

For pilot design in the sparse channel estimation model of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion, and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power sum and high-power variance criteria, which estimates the channel more accurately. First, the pilot insertion positions are designed according to the high-power variance criterion under an equal-power condition. Then, according to the high-power sum criterion, the pilot power allocation is converted into a cone programming problem and solved. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Under the same conditions, channel estimation in the constructed MIMO-OFDM system using the optimized pilot achieves a bit error rate performance gain of 6-7 dB over the traditional pilot.

Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation

Procedia PDF Downloads 142
5395 Special Single Mode Fiber Tests of Polarization Mode Dispersion Changes in a Harsh Environment

Authors: Jan Bohata, Stanislav Zvanovec, Matej Komanec, Jakub Jaros, David Hruby

Abstract:

Despite the rapid development of new optical networks, optical communication infrastructures still comprise thousands of kilometres of aging optical cables. Many of them are located in harsh environments, which contribute to increased attenuation or induced birefringence of the fibers, leading to an increase in polarization mode dispersion (PMD). In this paper, we report experimental results from environmental optical cable tests and characterization in a climate chamber, focusing on the evaluation of optical network reliability in a harsh environment. For this purpose, a special thermal chamber was adopted, targeting large temperature changes between -60 °C and 160 °C with defined humidity. A 230-meter single-mode optical cable with six tubes and a total of 72 single-mode optical fibers was spliced together to form one fiber link, which was then tested in the climate chamber. The main emphasis was placed on changes in polarization mode dispersion (PMD), which were evaluated by three different PMD measurement methods (the general interferometry technique, scrambled state-of-polarization analysis, and a polarization optical time domain reflectometer) in order to fully validate the obtained results. Moreover, attenuation and chromatic dispersion (CD), as well as PMD, were monitored using a 17 km long single-mode optical cable. The results imply a strong dependence of PMD on thermal changes: PMD exceeded 200% of its nominal value during exposure to extreme temperatures, and insertion losses of more than 20 dB were experienced in the optical system. The derived statistics are provided in the paper, together with an evaluation of optical system reliability, which could be a crucial tool for optical network designers.
The environmental tests are further placed in the context of our previously published results from long-term monitoring of fundamental parameters of an optical cable placed in a harsh environment in a special outdoor testbed. Finally, we provide a correlation between the short-term and long-term monitoring campaigns and the statistics necessary for optical network safety and reliability.

Keywords: optical fiber, polarization mode dispersion, harsh environment, aging

Procedia PDF Downloads 376
5394 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by performing the parameterization at the CAD level, we can impose higher level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy, avoiding the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
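The knot-placement search described above can be caricatured as a toy experiment: a minimal differential evolution loop (pure Python, with a hypothetical one-dimensional adjoint sensitivity field and Gaussian basis functions standing in for the real CAD parameterization, not the authors' CAD kernel) searching for the two knot locations that maximize a simple parametric-effectiveness score:

```python
import math, random

random.seed(7)

GRID = [i / 200 for i in range(201)]          # parameter positions along the curve
# hypothetical adjoint sensitivity field: two localized shape changes
TARGET = [math.exp(-((x - 0.3) / 0.05) ** 2)
          - 0.5 * math.exp(-((x - 0.7) / 0.08) ** 2) for x in GRID]

def effectiveness(knots, width=0.06):
    """Fraction of the adjoint-dictated shape change reproducible by control
    points at `knots` (least-squares fit of two local Gaussian bases)."""
    b0 = [math.exp(-((x - knots[0]) / width) ** 2) for x in GRID]
    b1 = [math.exp(-((x - knots[1]) / width) ** 2) for x in GRID]
    # 2x2 normal equations (B B^T) c = B y, solved by Cramer's rule
    a11 = sum(v * v for v in b0); a22 = sum(v * v for v in b1)
    a12 = sum(u * v for u, v in zip(b0, b1))
    r1 = sum(u * y for u, y in zip(b0, TARGET))
    r2 = sum(v * y for v, y in zip(b1, TARGET))
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return 0.0
    c1 = (a22 * r1 - a12 * r2) / det
    c2 = (a11 * r2 - a12 * r1) / det
    sse = sum((y - c1 * u - c2 * v) ** 2 for y, u, v in zip(TARGET, b0, b1))
    sst = sum(y * y for y in TARGET)
    return 1.0 - sse / sst

def de_optimize(pop=20, gens=80, f=0.7, cr=0.9):
    """Minimal differential evolution over the two knot locations in [0, 1]."""
    P = [[random.random(), random.random()] for _ in range(pop)]
    fit = [effectiveness(p) for p in P]
    for _ in range(gens):
        for i in range(pop):
            xa, xb, xc = random.sample([p for j, p in enumerate(P) if j != i], 3)
            # mutate + crossover, clipped to the parameter interval
            trial = [min(1.0, max(0.0, xa[k] + f * (xb[k] - xc[k])))
                     if random.random() < cr else P[i][k] for k in range(2)]
            ft = effectiveness(trial)
            if ft > fit[i]:                  # greedy selection
                P[i], fit[i] = trial, ft
    best = max(range(pop), key=lambda i: fit[i])
    return P[best], fit[best]

knots, eff = de_optimize()
```

In this sketch the optimized knots migrate toward the two regions where the sensitivity field is concentrated, which is exactly the behavior the paper seeks from optimally placed knots.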

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 106
5393 Gaussian Particle Flow Bernoulli Filter for Single Target Tracking

Authors: Hyeongbok Kim, Lingling Zhao, Xiaohong Su, Junjie Wang

Abstract:

The Bernoulli filter is a precise Bayesian filter for single target tracking based on random finite set theory. The standard Bernoulli filter often underestimates the number of targets. This study proposes a Gaussian particle flow (GPF) Bernoulli filter employing particle flow to migrate particles from prior to posterior positions to improve the performance of the standard Bernoulli filter. By employing the particle flow filter, the computational speed of the Bernoulli filter is significantly improved. In addition, the GPF Bernoulli filter provides a more accurate estimation than the standard Bernoulli filter. Simulation results confirm the improved tracking performance and computational speed in two- and three-dimensional scenarios compared with other algorithms.
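The particle flow idea can be illustrated in its simplest setting: for a scalar linear-Gaussian problem, the exact (Daum-Huang) flow migrates prior samples to the posterior by integrating an ODE in the pseudo-time λ from 0 to 1. The sketch below assumes this textbook case with an identity measurement model; it is not the paper's full GPF Bernoulli filter:

```python
import random

random.seed(0)

def particle_flow_update(particles, prior_mean, prior_var, z, meas_var, steps=1000):
    """Migrate prior particles to the posterior with the exact (Daum-Huang)
    flow dx/dlam = A(lam)*x + b(lam); scalar state, identity measurement."""
    dlam = 1.0 / steps
    for s in range(steps):
        lam = s * dlam
        A = -0.5 * prior_var / (lam * prior_var + meas_var)
        b = (1.0 + 2.0 * lam * A) * ((1.0 + lam * A) * prior_var * z / meas_var
                                     + A * prior_mean)
        # forward-Euler step of the flow ODE for every particle
        particles = [x + dlam * (A * x + b) for x in particles]
    return particles

# prior N(0, 1), measurement z = 2 with noise variance 1
prior = [random.gauss(0.0, 1.0) for _ in range(2000)]
post = particle_flow_update(prior, 0.0, 1.0, 2.0, 1.0)
post_mean = sum(post) / len(post)   # analytic posterior mean is 1.0
```

Because the flow transports every particle deterministically, no resampling step is needed, which is the source of the speed advantage the abstract reports.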

Keywords: Bernoulli filter, particle filter, particle flow filter, random finite sets, target tracking

Procedia PDF Downloads 84
5392 Experimental Investigation on the Effect of Bond Thickness on the Interface Behaviour of Fibre Reinforced Polymer Sheet Bonded to Timber

Authors: Abbas Vahedian, Rijun Shrestha, Keith Crews

Abstract:

The bond mechanism between timber and fibre reinforced polymer (FRP) is relatively complex and is influenced by a number of variables including bond thickness, bond width, bond length, material properties, and geometries. This study investigates the influence of bond thickness on the interface behaviour, failure mode, and bond strength of externally bonded FRP-to-timber interfaces. In the present study, 106 single shear joint specimens have been investigated. Experimental results showed that additional layers of FRP increase the ultimate load-carrying capacity of the interface; conversely, this increase led to a decrease in the interface slip. Moreover, samples with more layers of FRP may fail in a brittle manner without noticeable warning that collapse is imminent.

Keywords: fibre reinforced polymer, FRP, single shear test, bond thickness, bond strength

Procedia PDF Downloads 220
5391 Using the Point Analysis Algorithm (SANN) for Drought Analysis

Authors: Khosro Shafie Motlaghi, Amir Reza Salemian

Abstract:

In arid and semi-arid regions such as Iran, evapotranspiration accounts for the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this study, the long-term trends of reference evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was conducted in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year statistical records were investigated using three methods: minimum square error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO-Penman method. The results show that the evapotranspiration trend was positive for 24 percent of the stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the temperature trend was positive for 22 percent of the stations, negative for 19 percent, and without any trend for 64 percent. The rainfall trend analysis showed that the amount of rainfall at most stations did not exhibit a meaningful trend. The results of the Mann-Kendall method were similar to those of the minimum square error method. Based on these results, it can be concluded that in future years some regions will face increases in temperature and evapotranspiration.
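For reference, the Mann-Kendall test used above can be sketched in a few lines. The series below is a synthetic 30-year ET0 record, not the study's data, and the implementation omits the tie correction for simplicity:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: returns the S statistic and the normal
    approximation Z score (no tie correction, a simplifying assumption)."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)     # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# 30-year toy ET0 record: slight upward drift plus year-to-year oscillation
et0 = [1200 + 3 * t + ((-1) ** t) * 15 for t in range(30)]
s_stat, z_score = mann_kendall(et0)
```

A |Z| above 1.96 flags a trend significant at the 5% level, which is the kind of station-by-station classification (positive trend / negative trend / no trend) the abstract reports.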

Keywords: analysis, algorithm, SANN, ET0

Procedia PDF Downloads 292
5390 Association of Single Nucleotide Polymorphisms in Leptin and Leptin Receptors with Oral Cancer

Authors: Chiung-Man Tsai, Chia-Jui Weng

Abstract:

Leptin (LEP) and the leptin receptor (LEPR) both play a crucial role in the mediation of physiological reactions and carcinogenesis and may serve as candidate biomarkers of oral cancer. The present case-control study aimed to examine the effects of the single nucleotide polymorphisms (SNPs) LEP -2548 G/A (rs7799039), LEPR K109R (rs1137100), and LEPR Q223R (rs1137101), with or without interaction with environmental carcinogens, on the risk of oral squamous cell carcinoma (OSCC). The three SNPs were genotyped in 567 patients with oral cancer and 560 healthy controls in Taiwan. All three genetic polymorphisms exhibited insignificant (P > .05) effects on oral cancer risk. However, patients with the polymorphic allele of LEP -2548 had a significantly lower risk of advanced clinical stage (A/G, AOR = 0.670, 95% CI = 0.454–0.988, P < .05; A/G+G/G, AOR = 0.676, 95% CI = 0.467–0.978, P < .05) compared to patients with the ancestral homozygous A/A genotype. Additionally, an interesting finding was that the impact of the LEP -2548 G/A SNP on oral carcinogenesis in subjects without tobacco consumption (A/G, AOR = 2.078, 95% CI: 1.161–3.720, p = 0.014; A/G+G/G, AOR = 2.002, 95% CI: 1.143–3.505, p = 0.015) was higher than in subjects with tobacco consumption. These results suggest that the genetic polymorphisms LEP -2548 G/A (rs7799039), LEPR K109R (rs1137100), and LEPR Q223R (rs1137101) were not associated with susceptibility to oral cancer; that the LEP -2548 G/A SNP was associated with the clinicopathological development of oral cancer; and that a population without tobacco consumption but with the polymorphic LEP -2548 G/A gene may have a significantly increased risk of oral cancer.
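As background for the confidence intervals quoted above, a crude (unadjusted) odds ratio and its Woolf-method confidence interval can be computed from a 2x2 table as sketched below. The counts are hypothetical, and the paper's AORs come from adjusted logistic regression models rather than this simple formula:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table (Woolf's method):
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: A/G genotype vs. A/A among advanced-stage patients/controls
or_, lo, hi = odds_ratio_ci(60, 90, 100, 100)
significant = not (lo <= 1.0 <= hi)   # CI excluding 1 indicates significance
```

An interval such as 0.454–0.988 that excludes 1 is what makes the stage-related association in the abstract statistically significant at the 5% level.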

Keywords: carcinogen, leptin, leptin receptor, oral squamous cell carcinoma, single nucleotide polymorphism

Procedia PDF Downloads 181
5389 Investigating the Dose Effect of Electroacupuncture on Mice Inflammatory Pain Model

Authors: Wan-Ting Shen, Ching-Liang Hsieh, Yi-Wen Lin

Abstract:

Electroacupuncture (EA) has been reported effective for many kinds of pain and is a common treatment for acute or chronic pain. However, to date, there are limited studies examining the effect of acupuncture dosage. In our experiment, after injecting mice with Complete Freund's Adjuvant (CFA) to induce inflammatory pain, two groups of mice were administered two different 15 min EA treatments at 2 Hz. The first group received EA at a single acupuncture point (ST36, Zusanli) in both legs (two points in total), whereas the second group received EA at two acupuncture points in both legs (four points in total), and the analgesic effects were compared. It was found that the double-point treatment (ST36, Zusanli and SP6, Sanyinjiao) was significantly superior to the single-point treatment (ST36, Zusanli) when evaluated using the electronic von Frey test (mechanical) and Hargreaves' test (thermal). Through this study, it is expected that more novel physiological mechanisms of acupuncture analgesia will be discovered.

Keywords: anti-inflammation, dose effect, electroacupuncture, pain control

Procedia PDF Downloads 168
5388 A Single Stage Cleft Rhinoplasty Technique for Primary Unilateral Cleft Lip and Palate 'The Gujrat Technique'

Authors: Diaa Othman, Muhammad Adil Khan, Muhammad Riaz

Abstract:

Without an early intervention to correct the unilateral complete cleft lip and palate deformity, the nasal architecture can progress to an exaggerated cleft nose deformity. We present the results of a modified unilateral cleft rhinoplasty procedure, 'the Gujrat technique', to correct this deformity. Ninety pediatric and adult patients with non-syndromic unilateral cleft lip underwent primary and secondary composite cleft rhinoplasty using the Gujrat technique as a single stage operation over a 10-year period. The technique involved an open rhinoplasty with Tennison lip repair and employed a combination of three autologous cartilage grafts, seven cartilage-molding sutures, and a prolene mesh graft for alar base support. Post-operative evaluation of nasal symmetry was undertaken using the validated computer program 'SymNose'. Functional outcome and patient satisfaction were assessed using the NOSE scale and ROE (rhinoplasty outcome evaluation) questionnaires. The single-group study design used the non-parametric matched-pairs Wilcoxon signed-rank test (p < 0.001) and showed overall good to excellent functional and aesthetic outcomes, including nasal projection and tip definition, and higher scores on the digital SymNose grading system. Objective assessment of the Gujrat cleft rhinoplasty technique demonstrates its aesthetic appeal and functional versatility. Overall, it is a simple and reproducible technique, with no significant complications.

Keywords: cleft lip and palate, congenital rhinoplasty, nasal deformity, secondary rhinoplasty

Procedia PDF Downloads 197
5387 Error Analysis of Pronunciation of French by Sinhala Speaking Learners

Authors: Chandeera Gunawardena

Abstract:

The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using the random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all spoke the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during that time had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the recording was completed, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala-speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).

Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French

Procedia PDF Downloads 205
5386 Quantitative Characterization of Single Orifice Hydraulic Flat Spray Nozzle

Authors: Y. C. Khoo, W. T. Lai

Abstract:

The single orifice hydraulic flat spray nozzle was evaluated with two global imaging techniques to characterize various aspects of the resulting spray: high resolution flow visualization and Particle Image Velocimetry (PIV). A CCD camera with 29 million pixels was used to capture shadowgraph images revealing ligament formation and collapse as well as droplet interaction. Quantitative analysis was performed to obtain sizing information for the droplets and ligaments. The camera was then combined with a PIV system to evaluate the overall velocity field of the spray, from nozzle exit to droplet discharge. PIV images were further post-processed to determine the inclusion angle of the spray. The results from these investigations provided significant quantitative understanding of the spray structure and, on that basis, a detailed understanding of the spray behavior.

Keywords: spray, flow visualization, PIV, shadowgraph, quantitative sizing, velocity field

Procedia PDF Downloads 373
5385 Investigating Breakdowns in Human Robot Interaction: A Conversation Analysis Guided Single Case Study of a Human-Robot Communication in a Museum Environment

Authors: B. Arend, P. Sunnen, P. Caire

Abstract:

In a single case study, we show how a conversation analysis (CA) approach can shed light on the sequential unfolding of human-robot interaction. Relying on video data, we are able to show that CA allows us to investigate the respective turn-taking systems of humans and a NAO robot in their dialogical dynamics, thus pointing out relevant differences. Our fine-grained video analysis identifies breakdowns, and how they are overcome, when humans and a NAO robot engage in a multimodally uttered multi-party communication during a sports guessing game. Our findings suggest that interdisciplinary work opens up the opportunity to gain new insights into the challenging issues of human-robot communication in order to provide resources for developing mechanisms that enable complex human-robot interaction (HRI).

Keywords: human robot interaction, conversation analysis, dialogism, breakdown, museum

Procedia PDF Downloads 298
5384 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer sufficiently representative of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
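The measure-compare-correct loop described above can be caricatured with a one-dimensional lookup table. The grid, values, and smoothing gain below are illustrative placeholders, not the authors' Cessna Citation X model:

```python
class AdaptiveTable:
    """1-D lookup table of fuel flow vs. altitude, corrected in flight by
    exponentially smoothing the prediction error into the nearest cell."""

    def __init__(self, altitudes, fuel_flows, gain=0.5):
        self.alts = altitudes          # sorted altitude grid (ft)
        self.vals = list(fuel_flows)   # FCOM-derived initial values (kg/h)
        self.gain = gain               # smoothing factor for corrections

    def _cell(self, alt):
        # nearest grid cell (real APMs interpolate; nearest-cell keeps it short)
        return min(range(len(self.alts)), key=lambda i: abs(self.alts[i] - alt))

    def predict(self, alt):
        return self.vals[self._cell(alt)]

    def update(self, alt, measured):
        """Blend one in-flight sensor sample into the table."""
        i = self._cell(alt)
        self.vals[i] += self.gain * (measured - self.vals[i])

# FCOM-style initial model, deliberately biased high at FL400
table = AdaptiveTable([30000, 35000, 40000], [1650.0, 1540.0, 1430.0])
true_flow = 1400.0                       # actual degraded-aircraft fuel flow
err_before = abs(table.predict(40000) - true_flow) / true_flow
for _ in range(10):                      # ten cruise samples at FL400
    table.update(40000, true_flow)
err_after = abs(table.predict(40000) - true_flow) / true_flow
```

After a handful of samples the cell converges onto the measured behavior, mirroring the rapid error reduction (12% to 0.3%) that the abstract reports for the real system.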

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 258