Search results for: root mean square error
2893 The Influence of Using Soft Knee Pads on Static and Dynamic Balance among Male Athletes and Non-Athletes
Authors: Yaser Kazemzadeh, Keyvan Molanoruzy, Mojtaba Izady
Abstract:
Balance is a key component of motor skill, needed to maintain postural control and to execute complex skills. The present study was designed to evaluate the impact of soft knee pads on the static and dynamic balance of male athletes. For this aim, thirty young athletes from different sports with 3 years of professional training background and thirty healthy non-athletic young men (age: 24.5 ± 2.9 and 24.3 ± 2.4 years, weight: 77.2 ± 4.3 and 80.9 ± 6.3 kg, height: 175 ± 2.84 and 172 ± 5.44 cm, respectively) were selected as subjects. Subjects then performed, under two conditions (without knee pads and with soft knee pads made of neoprene), the Balance Error Scoring System (BESS) test to assess static balance and the star test to assess dynamic balance. Data were analyzed with t-tests and one-way ANOVA at a significance level of α = 0.05. The results showed that the use of soft knee pads significantly reduced the error rate in the static balance test (p ≤ 0.05). The soft knee pads also decreased the score of the athlete group and increased the score of the non-athletic group in the star test (p ≤ 0.05). These findings indicate that knee pads affect static and dynamic balance in athletes and non-athletes in different ways, and may increase performance in sports that rely on static balance while decreasing performance in sports that rely on dynamic balance. Keywords: static balance, dynamic balance, soft knee, athletic men, non athletic men
Procedia PDF Downloads 290
2892 Developing a Health Promotion Program to Prevent and Solve Problem of the Frailty Elderly in the Community
Authors: Kunthida Kulprateepunya, Napat Boontiam, Bunthita Phuasa, Chatsuda Kankayant, Bantoeng Polsawat, Sumran Poontong
Abstract:
Frailty is the thin line between good health and illness. The syndrome is most common in elderly people who transition from strong to vulnerable. Frailty can be prevented, and healthy recovery promoted, before it progresses into disability. This research and development study aimed to analyze the frailty situation of the elderly in the community, develop a program, and evaluate the effect of a health promotion program to prevent and solve the problem of frailty among the elderly. The research consisted of 3 phases: 1) analysis of the frailty situation, 2) development of a model, and 3) evaluation of the effectiveness of the model. Samples of 328 and 122 elderly people were drawn using the multi-stage random sampling method. The research instrument was a frailty questionnaire based on five symptoms, the main characteristics being muscle weakness, slow walking, low physical activity, fatigue, and unintentional weight loss; three or more symptoms were taken as the criterion for frailty. Data were analyzed by descriptive statistics and the dependent t-test. The findings comprised three parts. First, 23.05% of the elderly were frail and 56.70% pre-frail. Second, a health promotion program to prevent and solve the problem of frailty in the elderly was developed, combining the Nine-Square Exercise, Elastic Band Exercise, and Elastic Coconut Shell. Third, the effectiveness of the model was evaluated by comparing the elderly's get up and go test: the average time was 14.42 before using the program and 8.57 after, statistically significant at the .05 level. In conclusion, the findings can be used to develop guidelines to promote the health of the frail elderly. Keywords: elderly, fragile, nine-square exercise, elastic coconut shell
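The before/after comparison of get up and go times described above uses a dependent (paired) t-test. A minimal sketch of the test statistic, with hypothetical per-subject times in seconds (the individual data are not given in the abstract):

```python
import math

def paired_t(before, after):
    """Paired (dependent) t-test statistic for before/after measurements.

    Returns t with len(before) - 1 degrees of freedom.
    """
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical get up and go times (seconds) before/after the program
before = [15.1, 14.0, 14.8, 13.9, 14.3]
after = [8.9, 8.2, 8.7, 8.4, 8.6]
t = paired_t(before, after)
```

A large positive t (compared against the t distribution with n - 1 degrees of freedom) would correspond to the significant improvement reported.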
Procedia PDF Downloads 105
2891 Human’s Sensitive Reactions during Different Geomagnetic Activity: An Experimental Study in Natural and Simulated Conditions
Authors: Ketevan Janashia, Tamar Tsibadze, Levan Tvildiani, Nikoloz Invia, Elguja Kubaneishvili, Vasili Kukhianidze, George Ramishvili
Abstract:
This study considers the possible effects of geomagnetic activity (GMA) on humans on Earth, through experiments on specific sensitive reactions in humans both in natural conditions during different GMA and under simulation of different GMA in the lab. Measurements of autonomic nervous system (ANS) responses to different GMA, via heart rate variability (HRV) indices and the stress index (SI), and their comparison with the K-index of GMA are presented and discussed. The results of the experiments indicate an intensification of the sympathetic part of the ANS as a stress reaction of the human organism when it is exposed to a high level of GMA, in natural as well as simulated conditions. Aim: We tested the hypothesis that the GMF, when disturbed, can affect the human ANS, causing specific sensitive stress reactions depending on the initial type of ANS. Methods: The study focuses on the effects of different GMA on the ANS by comparing the HRV indices and stress index (SI) of n = 78 healthy male volunteers, 18-24 years old. Experiments were performed in natural conditions on days of low (K = 1-3) and high (K = 5-7) GMA, as well as in the lab by simulating different GMA using a device for geomagnetic storm (GMS) compensation and simulation. Results: In comparison with days of low GMA (K = 1-3), the initial values of HRV shifted towards intensification of the sympathetic part (SP) of the ANS during days of GMSs (K = 5-7), with statistically significant p-values: HR (heart rate, p = 0.001), SDNN (standard deviation of all normal-to-normal intervals, p = 0.0001), RMSSD (the square root of the arithmetical mean of the sum of the squares of differences between adjacent NN intervals, p = 0.0001). In comparison with conditions during the GMS compensation mode (K = 0, B = 0-5 nT), the ANS balance was observed to shift during exposure to simulated GMSs with intensities in the range of natural GMSs (K = 7, B = 200 nT).
However, the initial values of the ANS resulted in different dynamics of its variation depending on the GMA level. In the case of an initial balanced regulation type (HR > 80), significant intensification of the SP was observed, with p-values: HR (p = 0.0001), SDNN (p = 0.047), RMSSD (p = 0.28), LF/HF (p = 0.03), SI (p = 0.02); while in the case of an initial parasympathetic regulation type (HR < 80), an insignificant shift towards intensification of the parasympathetic part (PP) was observed. Conclusions: The results indicate an intensification of the SP as a stress reaction of the human organism when it is exposed to a high level of GMA in both natural and simulated conditions. Keywords: autonomic nervous system, device of magneto compensation/simulation, geomagnetic storms, heart rate variability
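The SDNN and RMSSD indices used above follow directly from their definitions. A minimal sketch with hypothetical NN intervals (not the study's recordings):

```python
import math

def sdnn(nn):
    """Standard deviation of all NN (normal-to-normal) intervals, in ms."""
    m = sum(nn) / len(nn)
    return math.sqrt(sum((x - m) ** 2 for x in nn) / (len(nn) - 1))

def rmssd(nn):
    """Square root of the mean of squared successive NN differences, in ms."""
    diffs = [b - a for a, b in zip(nn, nn[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical NN intervals in milliseconds
nn = [812.0, 790.0, 805.0, 830.0, 798.0, 815.0]
sdnn_ms = sdnn(nn)
rmssd_ms = rmssd(nn)
```

RMSSD reflects short-term (beat-to-beat) variability and is the index whose reduction is associated here with sympathetic intensification.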
Procedia PDF Downloads 141
2890 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) in need of a data-based, analytical approach to stocking the right movies for local audiences and retaining more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian-kernel support vector machine (SVM) classification model and a logistic regression model were built to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian-kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customers’ preferences with a small data set and to design prediction tools for these enterprises. Keywords: computational social science, movie preference, machine learning, SVM
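The authors' Gaussian-kernel SVM is not reproduced here; the toy sketch below only illustrates how the Gaussian (RBF) kernel at the heart of such a model turns feature distance into similarity. The customer features, genre labels, and nearest-similarity rule are all made-up illustrations, not the paper's classifier:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def predict(train, labels, x, gamma=0.5):
    """Toy kernel classifier: label of the training point most similar to x.

    A real SVM would instead learn weighted support vectors; this just
    shows the kernel acting as a similarity measure.
    """
    sims = [rbf_kernel(t, x, gamma) for t in train]
    return labels[sims.index(max(sims))]

# Hypothetical customer feature vectors (age, weekly visits) and genre labels
train = [(22, 4), (25, 5), (60, 1), (65, 2)]
labels = ["action", "action", "drama", "drama"]
```

With these made-up points, a 24-year-old frequent visitor lands near the "action" cluster and an older infrequent visitor near "drama".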
Procedia PDF Downloads 260
2889 Antimicrobial, Antioxidant and Cytotoxic Activities of Cleoma viscosa Linn. Crude Extracts
Authors: Suttijit Sriwatcharakul
Abstract:
Ethanolic crude extracts from the leaf, stem, pod, and root of the wild spider flower, Cleoma viscosa Linn., a weed, were analyzed for growth inhibition of 6 bacterial species (Salmonella typhimurium TISTR 5562, Pseudomonas aeruginosa ATCC 27853, Staphylococcus aureus TISTR 1466, Staphylococcus epidermidis ATCC 1228, Escherichia coli DMST 4212, and Bacillus subtilis ATCC 6633) at an initial crude extract concentration of 50 mg/ml. The agar well diffusion results showed that the extracts inhibit only the gram-positive species: S. aureus, S. epidermidis, and B. subtilis. The minimum inhibitory concentration study with the gram-positive strains revealed that the leaf crude extract gave the best result, inhibiting growth at the lowest concentrations among the plant parts: 0.78, 0.39, and below 0.39 mg/ml for S. aureus, S. epidermidis, and B. subtilis, respectively. Determination of total phenolic compounds in the crude extracts showed the highest phenolic content, 10.41 mg GAE/g dry weight, in the leaf crude extract. Free radical scavenging efficacy, analyzed by the DPPH radical scavenging assay, gave IC50 values of 8.32, 12.26, 21.62, and 35.99 mg/ml for the leaf, stem, pod, and root crude extracts, respectively. Cytotoxicity of the crude extracts on a human breast adenocarcinoma cell line, studied by MTT assay, showed that the pod extract was the most cytotoxic (CC50 = 32.41 µg/ml). Both antioxidant activity and cytotoxicity increased with extract concentration. According to these bioactivity results, the leaf crude extract of Cleoma viscosa Linn. is the most interesting plant part for further work exploring the beneficial uses of this weed. Keywords: antimicrobial, antioxidant activity, Cleoma viscosa Linn., cytotoxicity test, total phenolic compound
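The DPPH assay above reports IC50 values (the concentration giving 50% radical scavenging). A minimal sketch of how percent inhibition and a linearly interpolated IC50 could be computed from absorbance readings; the absorbances and concentrations below are hypothetical, not the study's data:

```python
def percent_inhibition(a_control, a_sample):
    """DPPH radical scavenging: % inhibition from absorbance readings."""
    return (a_control - a_sample) / a_control * 100.0

def ic50_linear(concs, inhibitions):
    """Estimate IC50 by linear interpolation between the two dose points
    bracketing 50% inhibition (assumes inhibition rises with concentration)."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the dose range")

# Hypothetical dose-response: concentrations (mg/ml) vs % inhibition
ic50 = ic50_linear([5.0, 10.0, 20.0], [20.0, 40.0, 80.0])
```

Real IC50 estimation usually fits a sigmoidal dose-response curve; linear interpolation is only a first approximation.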
Procedia PDF Downloads 272
2888 Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid
Authors: Houda Jalali, Hassan Abbassi
Abstract:
In this numerical work, natural convection and entropy generation of an Al2O3–water nanofluid in a square cavity have been studied. A two-dimensional, steady, laminar natural convection in a differentially heated square cavity of length L, filled with a nanofluid, is investigated numerically. The horizontal walls are considered adiabatic. The vertical walls at x = 0 and x = L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The resolution is performed with the CFD code FLUENT in combination with GAMBIT as mesh generator. The simulations cover Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁶, solid volume fractions from 1% to 5%, a fixed particle size of dp = 33 nm, and temperatures from 20 to 70 °C. We used models of the thermophysical properties of nanofluids, based on experimental measurements, to study the effect of adding solid particles to water on natural convection heat transfer and entropy generation, namely models of thermal conductivity and dynamic viscosity that depend on solid volume fraction, particle size, and temperature. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (less than 40 °C), adding Al2O3 nanosolids to water decreases heat transfer and entropy generation instead of producing the expected increase, whereas at high temperature, heat transfer and entropy generation increase with the addition of nanosolids. This behavior is due to the contradictory effects of the viscosity and thermal conductivity of the nanofluid. These effects are discussed in this work. Keywords: entropy generation, heat transfer, nanofluid, natural convection
Procedia PDF Downloads 277
2887 Effect of Crown Gall and Phylloxera Resistant Rootstocks on Grafted Vitis Vinifera CV. Sultana Grapevine
Authors: Hassan Mahmoudzadeh
Abstract:
The bacterium Agrobacterium vitis causes crown and root gall disease, an important disease of grapevine, Vitis vinifera L. Phylloxera is likewise one of the most important pests in viticulture. Grapevine rootstocks were developed to provide increased resistance to soil-borne pests and diseases, but rootstock effects on some traits remain unclear. The interaction between rootstock, scion, and environment can induce different responses in grapevine physiology. 'Sultana' (Vitis vinifera L.) is one of the most valuable raisin grape cultivars in Iran. Thus, the aim of this study was to determine the rootstock effect on the growth characteristics, yield components, and quality of 'Sultana' grapevine grown in the Urmia viticulture region. The experimental design was randomized complete blocks, with four treatments, four replicates, and 10 vines per plot. The results show that all evaluated variables were significantly affected by the rootstock. Sultana/110R and Sultana/Nazmieh were, among the combinations, influenced by the year and had significantly higher yields per vine (13.25 and 12.14 kg/vine, respectively), higher than Sultana/5BB (10.56 kg/vine) and Sultana/Spota (10.25 kg/vine). The number of clusters per burst bud and per vine and the cluster weight were affected by the rootstock as well. Pruning weight per vine, yield per pruning weight, leaf area per vine, and leaf area index, variables related to grapevine physiology, were also affected by the rootstocks. In general, the rootstocks adapted well to the environment where the experiment was carried out, giving vigor and high yield to the Sultana grapevine, which means that they may be used by grape growers in this region. In sum, the study found the best rootstocks for 'Sultana' to be Nazmieh and 110R in terms of root and shoot growth.
However, the choice of the right rootstock depends on various aspects, such as soil characteristics, climate conditions, grape varieties and even clones, and production purposes. Keywords: grafting, vineyards, grapevine, susceptibility
Procedia PDF Downloads 126
2886 An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions
Authors: Alireza Gholami, Amir H. D. Markazi
Abstract:
In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not limited to being diagonal, i.e., the plant may be over/under-actuated, provided that controllability and observability are preserved. An observer is constructed to estimate the state tracking error directly, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured output is not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms. Keywords: adaptive algorithm, fuzzy systems, membership functions, observer
Procedia PDF Downloads 206
2885 Mapping Vulnerabilities: A Social and Political Study of Disasters in Eastern Himalayas, Region of Darjeeling
Authors: Shailendra M. Pradhan, Upendra M. Pradhan
Abstract:
Disasters are perennial features of human civilization. The recurring earthquakes, floods, and cyclones, among others, that result in massive loss of lives and devastation are a grim reminder of the fact that, despite all our success stories of development and progress in science and technology, human society is perennially at risk from disasters. The apparent threat of climate change and global warming only worsens our disaster risks. Darjeeling hills, situated along the Eastern Himalayan region of India, and famous for its three Ts – tea, tourism and toy-train – is also equally notorious for its disasters. The recurring landslides and earthquakes, the cyclone Aila, and the Ambootia landslide, considered the largest landslide in Asia, are strong evidence of the vulnerability of Darjeeling hills to natural disasters. Given its geographical location along the Hindu-Kush Himalayas, the region is marked by rugged topography, a geophysically unstable structure, high seismicity, and a fragile landscape, making it prone to disasters of different kinds and magnitudes. Most of the studies on disasters in Darjeeling hills are, however, scientific and geographical in orientation, focusing on the underlying geological and physical processes to the neglect of social and political conditions. This has created a tendency among researchers and policy-makers to endorse and promote a particular type of discourse that does not consider the social and political aspects of disasters in Darjeeling hills. Disaster, this paper argues, is a complex phenomenon and a result of diverse factors, both physical and human. The hazards caused by physical and geological agents, and the vulnerabilities produced by and rooted in the political, economic, social, and cultural structures of a society, together result in disasters. In this sense, disasters are as much a result of political and economic conditions as of the physical environment.
The human aspect of disasters, therefore, compels us to address the intricate social and political challenges that ultimately determine our resilience and vulnerability to disasters. Set within the above milieu, the aims of the paper are twofold: a) to provide a political and sociological account of disasters in Darjeeling hills; and b) to identify and address the root causes of its vulnerabilities to disasters. In situating disasters in Darjeeling hills, the paper adopts the Pressure and Release (PAR) model, which provides theoretical insight into the study of the social and political aspects of disasters and a frame for examining the myriad related issues therein. The PAR model conceptualises risk as a complex combination of vulnerabilities, on the one hand, and hazards, on the other. Disasters, within the PAR framework, occur when hazards interact with vulnerabilities. The root causes of vulnerability, in turn, can be traced to social and political structures such as legal definitions of rights, gender relations, and other ideological structures and processes. In this way, the PAR model helps the present study identify and unpack the root causes of vulnerabilities and disasters in Darjeeling hills that have largely remained neglected in dominant discourses, thereby providing a more nuanced and sociologically sensitive understanding of disasters. Keywords: Darjeeling, disasters, PAR, vulnerabilities
Procedia PDF Downloads 273
2884 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere system, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models (v4.6, DSSAT) were calibrated and evaluated for Haryana, India, one of the major wheat- and rice-producing states. The simulation runs were made under irrigated conditions and three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated and remote-sensing-derived LAI; the LAI product from remote sensing offers the advantage of spatial, timely, and accurate crop assessment. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for the wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region. Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient
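The validation criterion above (error percentage between observed and simulated data below 10%) can be sketched as follows; the observed and simulated yield values are hypothetical, not the study's data:

```python
def percent_error(observed, simulated):
    """Absolute percentage error between an observed and a simulated value."""
    return abs(observed - simulated) / observed * 100.0

# Hypothetical observed vs simulated grain yields (kg/ha)
pairs = [(4200.0, 4050.0), (3900.0, 4120.0), (4500.0, 4310.0)]
errors = [percent_error(o, s) for o, s in pairs]

# Validation criterion from the abstract: every error below 10%
acceptable = all(e < 10.0 for e in errors)
```

The same per-variable check would be applied to anthesis date, physiological maturity, and yield components.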
Procedia PDF Downloads 305
2883 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on citizens’ needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption “introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district.”. The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in a permanent support. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry), province(id, name, regionId, geometry), region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily.
In particular, step "b)" is simplified dramatically by calling spatial operators and built-in spatial User Defined Functions within SQL queries against the Geo DB. The major findings of our experiments can be summarized as follows. The approximation that, on average, follows from assimilating the residence of the citizens to the centroid of the administrative unit of reference is a few kilometers (4.9 km) at the municipal level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world. Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
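The maximum measurement error discussed above is the greatest distance between a district's centroid and its border. A minimal sketch in pure Python (the study itself uses PostGIS spatial operators), checking only the polygon's boundary vertices of a toy district, which is exact for convex polygons:

```python
import math

def polygon_centroid(pts):
    """Area-weighted centroid of a simple polygon (shoelace formula).

    pts: list of (x, y) vertices in order, not repeating the first vertex.
    """
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

def max_centroid_distance(pts):
    """Upper bound on the dwelling-location error: greatest centroid-to-vertex distance."""
    cx, cy = polygon_centroid(pts)
    return max(math.hypot(x - cx, y - cy) for x, y in pts)

# Toy square district, 4 km on a side
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
max_error_km = max_centroid_distance(square)
```

In PostGIS the same quantity would be computed with its centroid and distance operators directly inside the SQL queries mentioned above.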
Procedia PDF Downloads 371
2882 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach
Authors: Adel A. Ghobbar
Abstract:
The airline industry continuously faces the challenge of safely increasing the service life of aircraft with limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components, offering the finest customer service. The component owner and maintenance provider offers an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since EF units have a significant impact on customer satisfaction, their reliability needs to be increased at minimal cost, which is the goal of this paper. By identifying the reliability of early failure (EF) units with respect to No Fault Found (NFF) units, and in particular by a root cause analysis with an integrated cost analysis of EF units using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is derived. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components that need to be repaired from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. Once the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in the proposed failure solution for EF. This leads to specific EF maintenance improvements, which can be set up to decrease the EF units and, as a result, increase reliability. The EFs investigated over a ten-year period showed a significant reliability impact, accounting for 32% of the 23,339 unscheduled failures.
The EFs thus comprise almost one-third of the entire population. Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model
Procedia PDF Downloads 127
2881 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, Insilico Screening, and Molecular Dynamics Simulation
Authors: Virendra Nath, Vipin Kumar
Abstract:
Background: Type II diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. Undesirable effects of the current medications have prompted researchers to develop more potent drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor family and play a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis and lipid and glucose metabolism. Aims: PPARα/γ agonistic hits were screened by hierarchical virtual screening, followed by molecular dynamics simulation and knowledge-based structure-activity relationship (SAR) analysis using approved PPARα/γ dual agonists. Methods: The PPARα/γ agonistic activity of compounds was searched using Maestro through structure-based virtual screening and molecular dynamics (MD) simulation. Virtual screening of nuclear-receptor ligands was done, and the binding modes and protein-ligand interactions of the newer entity(s) were investigated. Further, binding energy prediction and stability studies using MD simulation of the PPARα and γ complexes were performed with the most promising hit, along with a structural comparison of approved PPARα/γ agonists against the screened hit for knowledge-based SAR. Results and Discussion: The in silico approach identified the nine most promising hits, which had better predicted binding energies than the reference drug compound (tesaglitazar). In this study, the key amino acid residues of the binding pockets of both targets, PPARα/γ, were recognized as essential and were found to be involved in the key interactions with the most promising dual hit (ChemDiv-3269-0443).
Stability studies using molecular dynamics (MD) simulation of the PPARα and γ complexes were performed with the most promising hit, and the root mean square deviation (RMSD) was found to be stable at around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the 20-nanosecond (ns) MD simulation. Knowledge-based SAR studies of PPARα/γ agonists were carried out using the 2D structures of approved drugs such as aleglitazar and tesaglitazar, towards the successful design and synthesis of PPARγ agonistic candidate compounds with anti-hyperlipidemic potential. Keywords: computational, diabetes, PPAR, simulation
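The RMSD reported above measures how far a structure drifts from a reference snapshot during the MD run. A minimal sketch of unaligned RMSD between two coordinate sets (production MD analyses first superimpose the structures; the coordinates here are made up):

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length sets of 3D
    coordinates (e.g., in Å). No superposition/alignment is applied."""
    n = len(coords_a)
    s = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(s / n)

# Hypothetical reference vs later-frame coordinates of two atoms
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
frame = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0)]
drift = rmsd(ref, frame)
```

A trajectory whose per-frame RMSD plateaus near 2 Å, as reported above, is conventionally read as an equilibrated, stable complex.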
Procedia PDF Downloads 103
2880 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City constructions collapse after an earthquake, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization based on RSSI, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated by the signal propagation model. The results show that localization based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (average of mean, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between known nodes and the target node across the five scenes is about 3.06 m. Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
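A common form of the signal propagation model above is the log-distance path-loss model, whose distance estimates can then feed a weighted centroid estimate. The parameters below (reference RSSI at 1 m, path-loss exponent) and the 1/d weighting are illustrative assumptions, not the models fitted or the exact improved centroid algorithm used in the study:

```python
def rssi_to_distance(rssi, rssi_d0=-40.0, n=2.7, d0=1.0):
    """Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).

    Inverted to recover distance d (meters) from a measured RSSI (dBm).
    rssi_d0 and n are environment-dependent and would be fitted per scene.
    """
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

def weighted_centroid(anchors, dists):
    """Distance-weighted centroid: nearer anchors (smaller distance) weigh more."""
    ws = [1.0 / d for d in dists]
    x = sum(w * ax for w, (ax, ay) in zip(ws, anchors)) / sum(ws)
    y = sum(w * ay for w, (ax, ay) in zip(ws, anchors)) / sum(ws)
    return x, y

# Hypothetical anchor positions (m) and filtered RSSI readings (dBm)
anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
readings = [-55.0, -62.0, -60.0]
dists = [rssi_to_distance(r) for r in readings]
estimate = weighted_centroid(anchors, dists)
```

In practice the filtered (mean/median/Gaussian) RSSI values would replace the raw readings before the distance conversion.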
Procedia PDF Downloads 300
2879 Of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang
Abstract:
Internet service providers face endless demand for higher bandwidth and data throughput as new services and applications require more capacity, and users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time-division and wavelength-division multiplexing. The main focus of this research is to use a hybrid of time-division and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps passive optical network (PON) that meets the requirements of Next Generation PON Stage 2 (NG-PON2). According to the Full-Service Access Network (FSAN) group, hybrid time- and wavelength-division multiplexing (TWDM) is the best solution for implementing NG-PON2. To coexist with or replace the current PON technologies, many TWDM wavelengths can be deployed simultaneously. In the proposed design, 8 pairs of wavelengths are multiplexed and transmitted over 40 km of optical fiber, and on the receiving side they are distributed among 256 users, showing that the solution is reliable to implement at an acceptable data rate. From the results, it can be concluded that the overall performance, Q factor, and bandwidth of the network are increased, and the bit error rate is minimized by the integration of this approach. Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing
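The bit error rate and Q factor mentioned above are linked by the standard Gaussian-noise approximation BER = ½·erfc(Q/√2); a minimal sketch:

```python
import math

def q_to_ber(q):
    """Approximate bit error rate from the Q factor under Gaussian noise:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# A Q factor of about 6 corresponds to the commonly cited BER of ~1e-9
ber_at_q6 = q_to_ber(6.0)
```

This is why simulation tools report Q factor and BER together: raising Q (cleaner eye diagram) drives the BER down exponentially.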
Procedia PDF Downloads 70
2878 Evaluation of Forming Properties on AA 5052 Aluminium Alloy by Incremental Forming
Authors: A. Anbu Raj, V. Mugendiren
Abstract:
Sheet metal forming is a vital manufacturing process used in the automobile, aerospace, and agricultural industries, among others. Incremental forming is a promising process providing a short and inexpensive way of forming complex three-dimensional parts without using a die. The aim of this research is to study the forming behaviour of AA 5052 aluminium alloy using incremental forming, and also to study the forming limit diagram (FLD) of a cone shape in AA 5052 aluminium alloy at room temperature and at various annealing temperatures. Initially, the surface roughness and wall thickness obtained through incremental forming of AA 5052 aluminium alloy sheet at room temperature are optimized by controlling the effects of the forming parameters. A central composite design (CCD) was utilized to plan the experiment. The step depth, feed rate, and spindle speed were considered as input parameters in this study, and the surface roughness and wall thickness were used as output responses. Process performance, namely average thickness and surface roughness, was evaluated, and the optimized settings were taken as those giving minimum surface roughness and maximum wall thickness. The optimal results are determined based on response surface methodology and analysis of variance. The forming limit diagram is constructed for AA 5052 aluminium alloy at room temperature and at various annealing temperatures using the optimized process parameters from the response surface methodology. The cone has higher formability and a better wall thickness distribution than the square pyramid. Finally, the FLDs of the cone and square pyramid shapes at room temperature and at the various annealing temperatures are compared experimentally and simulated with Abaqus software.
Keywords: incremental forming, response surface methodology, optimization, wall thickness, surface roughness
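A face-centered central composite design for the three factors, and the full second-order response-surface fit of the kind the study optimizes, can be sketched as follows. Factor levels are coded to ±1, and the coefficient values in the test are illustrative, not the study's:

```python
import itertools
import numpy as np

def ccd_face_centered(k=3):
    """Face-centered CCD: 2^k factorial points, 2k axial points, center runs."""
    factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.vstack([sign * np.eye(k)[i] for i in range(k) for sign in (-1, 1)])
    center = np.zeros((3, k))
    return np.vstack([factorial, axial, center])   # 8 + 6 + 3 = 17 runs for k=3

def quadratic_terms(X):
    """Full second-order model: intercept, linear, pure quadratic, interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

def fit_response_surface(X, y):
    coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
    return coef
```

The fitted surface is then minimized (for roughness) or maximized (for wall thickness) over the coded design space, which is what the ANOVA/RSM step in the abstract does.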
Procedia PDF Downloads 338
2877 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin; however, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A freely accessible database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Of these, EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors (root mean square and zero-crossing number), spectral parameters (peak frequency, mean frequency, and median frequency), wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures (maximal Lyapunov exponent, sample entropy, and correlation dimension). Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term- and preterm-labor groups. Any future work on classification should therefore focus on combining multiple techniques, with mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators of preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
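The classical time-domain and spectral features named above are straightforward to compute from a filtered recording. A minimal sketch follows; the sampling rate and the test signal are placeholders, not parameters of the EHG database:

```python
import numpy as np

def rms(x):
    """Root mean square of the signal."""
    return np.sqrt(np.mean(x ** 2))

def zero_crossings(x):
    """Number of sign changes between consecutive samples."""
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

def spectral_frequencies(x, fs):
    """Peak, mean, and median frequency from the one-sided power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    peak = freqs[np.argmax(psd)]
    mean = np.sum(freqs * psd) / np.sum(psd)
    cum = np.cumsum(psd)
    median = freqs[np.searchsorted(cum, cum[-1] / 2)]
    return peak, mean, median
```

The wavelet-packet, AR, and nonlinear measures (Lyapunov exponent, sample entropy, correlation dimension) need more machinery and are omitted from this sketch.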
Procedia PDF Downloads 571
2876 The Effect of Multiple Environmental Conditions on Acacia senegal Seedling’s Carbon, Nitrogen, and Hydrogen Contents: An Experimental Investigation
Authors: Abdelmoniem A. Attaelmanan, Ahmed A. H. Siddig
Abstract:
This study was conducted in light of continual global climate change, with projected increases in aridity, changes in soil fertility, and pollution. Plant growth and development largely depend on the combination of water and nutrients available in the soil, and changes in climate and atmospheric chemistry can seriously affect these growth factors. Plant carbon (C), nitrogen (N), and hydrogen (H) play a fundamental role in the maintenance of ecosystem structure and function. Hashab (Acacia senegal), which produces gum arabic, supports dryland ecosystems in tropical zones through its potential to restore degraded soils; it is hence ecologically and economically important for the dry areas of sub-Saharan Africa. The study aims at investigating the effects of water stress (simulated drought) and poor soil type on Acacia senegal C, N, and H contents. Seven-day-old seedlings were assigned to the treatments in a split-plot design for four weeks: the main plot was irrigation interval (well-watered and water-stressed), and the subplot was soil type (silt and sand soils). Seedling C%, N%, and H% were measured using a CHNS-O analyzer, applying the standard test method. Irrigation interval and soil type had no effect on whole-seedling or leaf C%, N%, and H%; irrigation interval affected stem C% and H%; both irrigation interval and soil type affected root N%; and an interaction effect of water and soil was found on leaf and root N%. In synthesis, combining well-watered irrigation with soil rich in N and other nutrients would yield the greatest seedling C, N, and H contents, which would enhance growth and biomass accumulation and can play a crucial role in ecosystem productivity and services in dryland regions.
Keywords: Acacia senegal, Africa, climate change, drylands, nutrients biomass, Sub-Saharan, Sudan
Procedia PDF Downloads 116
2875 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE
Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao
Abstract:
For the impact monitoring of distributed structures, the traditional positioning methods are based on time difference and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it on the blackboard module; (2) the triangulation agent gets its initial parameters by accessing the initial point; (3) the triangulation agent constantly accesses the blackboard module to update its initial parameters, and it also logs its calculated point onto the blackboard; (4) when the subsequent calculated point and the initial calculated point are within the allowable error, the whole coordination fusion process is finished. This paper presents a multi-agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with agents running in each container. Thanks to JADE's management and debugging tools, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE
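JADE agents are written in Java, but the coordination loop in steps (1)-(4) is framework-independent; a minimal sketch, with the two agents reduced to plain callables and a hypothetical convergence tolerance, is:

```python
class Blackboard:
    """Shared store that all agents read from and write to."""
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)

def fuse(initial_estimate, refine, board, tol=1e-3, max_iter=50):
    """(1) the arc agent posts an initial point; (2)-(4) the triangulation
    agent reads it, refines it, and re-posts until successive points agree
    to within tol (the 'allowable error')."""
    board.post("point", initial_estimate())
    for _ in range(max_iter):
        prev = board.read("point")
        new = refine(prev)
        board.post("point", new)
        if abs(new[0] - prev[0]) + abs(new[1] - prev[1]) < tol:
            return new
    return board.read("point")
```

In JADE itself, each callable would be an agent behaviour and the blackboard a shared agent or tuple space, but the fixed-point fusion logic is the same.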
Procedia PDF Downloads 178
2874 Evaluation of Demographic Factors for Suicide Attempts at Hazrat Abolfazl Hospital of Minab City during the Years 1389-1390
Authors: Zahra khaksari, Mahboobeh Mehrabi
Abstract:
Suicide, one of the biggest health problems in communities, is now among the top ten causes of death in the world. This study aimed to investigate the risk factors of attempted suicide in Minab city over two years. It was a descriptive, cross-sectional study; the subjects were cases admitted to Hazrat Abolfazl hospital of Minab city from the beginning of 1389 to the end of 1390 (Iranian calendar), and their case records were reviewed over this period. Descriptive statistics were applied to analyze the data. During this two-year period, 275 patients attempted suicide, of whom 65 percent were female, and most were 15-24 years old; 60% were single and 70% were from rural areas. 51% of the attempts occurred in 1390, and the largest share (37 percent) occurred in summer. The most common method of attempting suicide was medication poisoning (74%), and the rate of suicide leading to death was 1.5%. The chi-square test showed significant relationships between suicide attempt and both gender and residence status, meaning that more of the women attempting suicide were from rural areas. The chi-square test also showed significant relationships between gender and method, and between gender and outcome: more women than men attempted suicide with drugs, while males were more often successful, and those who attempted with organophosphates or hanging were more often successful. The findings of the current study showed that most of the suicide attempters were female, rural dwellers, 15-24 years old, and single; serious attention must be paid to the problems of this group. The findings also point to extending education by professionals and health centers, and psychological therapy focused specifically on this topic.
Keywords: suicide attempt, Minab, risk factors, suicide
Procedia PDF Downloads 355
2873 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)
Authors: N. E Okoligwe, Okezie A. Ihugba
Abstract:
Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, the relationship between electricity consumption and economic growth manifests differently in different countries, according to previous studies. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. To do this, it tests the validity of the modernization (dependence) hypothesis by employing various econometric tools, namely the Augmented Dickey-Fuller (ADF) test, the Johansen cointegration test, the Error Correction Mechanism (ECM), and the Granger causality test, on time-series data from 1971-2012. Granger causality is found to run neither from electricity consumption to real GDP nor from GDP to electricity consumption over the period of study. The null hypothesis is accepted at the 5 percent level of significance because the probability values (0.2251 and 0.8251) are greater than 5 percent: both variables are probably determined by other factors, such as growth in the urban population, the unemployment rate, and the number of Nigerians who benefit from increases in GDP, while the increase in electricity demand is not determined by the increase in GDP (income) over the period of study, because electricity demand has always been greater than consumption. Consequently, policymakers in Nigeria should give priority, in the early stages of reconstruction, to building capacity additions and infrastructure development in the electric power sector, as this would foster sustainable economic growth in Nigeria.
Keywords: economic growth, electricity consumption, error correction mechanism, granger causality test
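A pairwise Granger causality test of the kind reported above can be sketched with ordinary least squares alone: fit the target series on its own lags (restricted model) and on its own lags plus lags of the other series (unrestricted model), then form the F-statistic from the two residual sums of squares. The synthetic series in the test are illustrative, not the Nigerian data:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic of H0: the p lags of x add no explanatory power for y.
    Compare against an F(p, df) critical value at the chosen level."""
    n = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    xlags = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    X_r = np.hstack([ones, ylags])           # restricted model
    X_u = np.hstack([ones, ylags, xlags])    # unrestricted model
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df = (n - p) - X_u.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

In practice one would also run unit-root (ADF) and cointegration pre-tests, as the paper does, before interpreting the causality result.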
Procedia PDF Downloads 309
2872 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion
Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang
Abstract:
For the pilot design of the sparse channel estimation model in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, the observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion, and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power-sum and high-power-variance criteria, which can estimate the channel more accurately. First, the pilot insertion positions are designed according to the high-power-variance criterion under the condition of equal power. Then, according to the high-power-sum criterion, the pilot power allocation is converted into a cone programming problem, and the power allocation is carried out. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Compared with the traditional pilot under the same conditions, the constructed MIMO-OFDM system using the optimal pilot for channel estimation obtains a gain of 6-7 dB in communication bit error rate performance.
Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation
Procedia PDF Downloads 149
2871 Stock Market Integration of Emerging Markets around the Global Financial Crisis: Trends and Explanatory Factors
Authors: Najlae Bendou, Jean-Jacques Lilti, Khalid Elbadraoui
Abstract:
In this paper, we examine the stock market integration of emerging markets around the global financial turmoil of 2007-2008. Following Pukthuanthong and Roll (2009), we measure the integration of 46 emerging countries using the adjusted R-square from the regression of each country's daily index returns on global factors extracted from the covariance matrix computed from dollar-denominated daily index returns of 17 developed countries. Our sample surrounds the global financial crisis, ranging from 2000 to 2018. We analyze the results using four cohorts of emerging countries: East Asia & Pacific and South Asia, Europe & Central Asia, Latin America & Caribbean, and Middle East & Africa. We find that the level of integration of emerging countries increases at the onset of the crisis and during the booming phase of the business cycle; it reaches a maximum in the middle of the crisis and then tends to revert to its pre-crisis level. This pattern is common to the four geographic zones investigated in this study. Finally, we investigate the determinants of the stock market integration of the emerging countries in our sample using panel regressions. Our results suggest that the degree of stock market integration of these countries should be put into perspective with certain macroeconomic factors, such as the size of the equity market, the school enrollment rate, the level of international liquidity, the volume of stocks traded, the level of tax revenue, and import and export volumes.
Keywords: correlations, determinants of integration, diversification, emerging markets, financial crisis, integration, markets co-movement, panel regressions, r-square, stock markets
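The Pukthuanthong-Roll integration measure described above — principal components of developed-market returns as global factors, then a per-country adjusted R-square — can be implemented compactly. The synthetic returns in the test stand in for real index data:

```python
import numpy as np

def global_factors(developed_returns, k=3):
    """Top-k principal component series of developed-market returns (T x N)."""
    X = developed_returns - developed_returns.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]
    return X @ vecs[:, order]                   # T x k factor scores

def adjusted_r2(y, F):
    """Adjusted R-square of regressing one country's returns on the factors."""
    T, k = F.shape
    X = np.hstack([np.ones((T, 1)), F])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return 1 - (1 - r2) * (T - 1) / (T - k - 1)
```

A highly integrated market loads strongly on the global factors and scores near 1; a segmented market scores near 0, which is what the crisis-period rise and reversion in the abstract tracks.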
Procedia PDF Downloads 183
2870 Status of Mangrove Wetlands and Implications for Sustainable Livelihood of Coastal Communities on the Lagos Coast (West Africa)
Authors: I. Agboola Julius, Christopher A. Kumolu-Johnson, O. Kolade Rafiu, A. Saba Abdulwakil
Abstract:
This work elucidates mangrove diversity, trends of change, the factors responsible for loss over the years, and the implications for the sustainable livelihoods of locals in four villages (Ajido (L1), Tarkwa Bay (L2), University of Lagos (L3), and Ikosi (L4)) along the coast of Lagos, Nigeria. Primary data were collected through field survey, questionnaires, interviews, and a review of the existing literature. Field observation and data analysis reveal that mangrove diversity is low and varies on a spatial scale: Margalef's diversity index (D) was 0.368, 0.269, 0.326, and 0.333, respectively, for L1, L2, L3, and L4; the Shannon-Wiener index (H) was estimated at 1.003, 1.460, 1.160, and 1.046, and species evenness (E) at 0.913, 0.907, 0.858, and 0.015, respectively, for the four villages. Simpson's index of diversity was 0.632, 0.731, 0.647, and 0.667, and Simpson's reciprocal index 2.717, 3.717, 3.060, and 3.003, respectively, for the four villages. A chi-square test was used to analyze the impact of mangrove loss on the sustainable livelihood of coastal communities; the calculated chi-square (X²) value (5) was higher than the tabulated value (4.30), suggesting that the loss of mangrove wetlands impacted local communities' livelihoods at the four villages. Analyses of the causes and trends of mangrove wetland loss over the years suggest that urbanization, fuel wood, and agricultural activities are the major causes. The current degradation observed in mangrove wetlands on the Lagos coast suggests a reduction in mangrove biodiversity and associated fauna, with potential cascading effects on higher trophic levels such as fisheries. Low fish catches, reduced income, and increasing cases of natural disaster have culminated in threats to the sustainable livelihoods of local communities along the coast of Lagos.
Keywords: mangroves, Lagos coast, fisheries, management
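The indices reported above follow standard formulas computed from species abundance counts; a minimal sketch is below (the counts in the test are made up, not the Lagos survey data):

```python
import math

def diversity_indices(abundances):
    """Margalef (D), Shannon (H), Pielou evenness (E), and Simpson indices
    from a list of per-species abundance counts."""
    n = sum(abundances)                  # total individuals
    s = len(abundances)                  # species richness
    p = [a / n for a in abundances]      # relative abundances
    shannon = -sum(pi * math.log(pi) for pi in p if pi > 0)
    simpson_lambda = sum(pi ** 2 for pi in p)        # Simpson's dominance
    return {
        "margalef": (s - 1) / math.log(n),
        "shannon": shannon,
        "evenness": shannon / math.log(s),
        "simpson_diversity": 1 - simpson_lambda,     # 1 - lambda
        "simpson_reciprocal": 1 / simpson_lambda,    # 1 / lambda
    }
```

For a perfectly even community, evenness is exactly 1 and the Simpson reciprocal equals the species count, which makes a convenient sanity check.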
Procedia PDF Downloads 647
2869 Error Analysis of Pronunciation of French by Sinhala Speaking Learners
Authors: Chandeera Gunawardena
Abstract:
The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using a random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all spoke the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during that time had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording commenced, the subjects were asked to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, each recording took approximately fifteen minutes, and each subject was required to read at normal speed. After the recording was completed, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala-speaking learners have problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors that occur because of interference from their second language (English).
Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French
Procedia PDF Downloads 210
2868 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly, to protecting electronic components through the formation of a "Faraday cage." The reliability requirements for a conductive adhesive vary widely depending on the application and the expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive, and that degradation depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature/high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures onto field performance, the systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor, where the degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by capacitance values. Under such conditions, using established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time (√t) dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate the nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation in such varied applications.
Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
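The consequence of a √t law for the acceleration factor can be made concrete. If degradation follows D(t) = k√t (diffusion-limited) and failure occurs when D reaches a critical value D_crit, then the time to failure is (D_crit/k)², so the test-to-field acceleration factor is the *square* of the rate ratio rather than the ratio itself, as a linear model would give. The rate values below are illustrative, not the paper's data:

```python
def time_to_failure_sqrt(k, d_crit):
    """Diffusion-limited law D(t) = k*sqrt(t): solve k*sqrt(t_f) = d_crit."""
    return (d_crit / k) ** 2

def acceleration_factor(k_test, k_field, d_crit=1.0):
    """AF = t_field / t_test under the sqrt-t law; independent of d_crit."""
    return time_to_failure_sqrt(k_field, d_crit) / time_to_failure_sqrt(k_test, d_crit)

# Under a linear law D(t) = k*t the same rate ratio gives AF = k_test/k_field,
# so assuming linearity here understates the true AF by exactly that ratio.
ratio = 3.0
af_sqrt = acceleration_factor(k_test=ratio, k_field=1.0)   # (3)^2 = 9
af_linear = ratio                                           # 3
```

This is the sense in which a linear-degradation assumption can misestimate field performance depending on where the product lifetime falls on the √t curve.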
Procedia PDF Downloads 135
2867 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches yield different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification, and subproblem A, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to characterize unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We use two different methodologies to approach the problem: in the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that is aleatory, but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, may not be precisely known but can be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive. This is why the sampling-based methodology has a high probability of underestimating the output bounds, and why an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
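A distributional p-box of type (iii) is usually propagated with a double loop: an outer sweep over the epistemic interval of the distribution parameter and an inner aleatory sampling loop, yielding an interval of output statistics rather than a single value. A minimal sketch follows; the model, the intervals, and the 95th-percentile output metric are all hypothetical, not the challenge-problem definitions:

```python
import numpy as np

def response(x1, x2):
    """Hypothetical computational model standing in for the challenge model."""
    return x1 ** 2 + np.sin(x2)

def double_loop_bounds(mu_interval, n_epistemic=11, n_aleatory=2000, q=95, seed=0):
    """Outer loop over the epistemic interval of mu (type iii parameter),
    inner aleatory Monte Carlo; returns the interval of the q-th percentile."""
    rng = np.random.default_rng(seed)
    lo, hi = np.inf, -np.inf
    for mu in np.linspace(*mu_interval, n_epistemic):
        x1 = rng.normal(mu, 1.0, n_aleatory)     # aleatory with epistemic mean
        x2 = rng.uniform(0, np.pi, n_aleatory)   # purely aleatory input
        p = np.percentile(response(x1, x2), q)
        lo, hi = min(lo, p), max(hi, p)
    return lo, hi
```

Because the outer grid and inner sample are finite, the returned interval tends to be too narrow — exactly the underestimation of output bounds that motivates the optimization-based (PBO) alternative in the study.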
Procedia PDF Downloads 240
2866 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises
Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov
Abstract:
We have conducted the optimal synthesis of a root-mean-square objective filter to estimate the state vector for the case in which, within an observation channel with memory, anomalous noises with unknown mathematical expectation complement the regular noises. The synthesis has been carried out for continuous-time linear stochastic systems.
Keywords: mathematical expectation, filtration, anomalous noise, memory
Procedia PDF Downloads 247
2865 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has seen continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight-planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (increased fuel consumption for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and to correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, starting from tables initialized with the FCOM data, only a few iterations were needed to reduce the average relative error of the fuel flow prediction from 12% to 0.3%. Similarly, the maximum error deviation of the FCOM engine fan speed prediction was reduced from 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
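The core of the adaptive-lookup-table idea — blend each in-flight measurement into the bracketing table nodes so the tabulated model converges toward the measured behaviour — can be sketched in one dimension. The altitude grid, the linear "true" fuel-flow law, and the smoothing gain below are illustrative; the study's tables span several flight-condition variables:

```python
import numpy as np

def update_table(grid, table, alt_meas, ff_meas, gain=0.3):
    """Blend each (altitude, fuel-flow) measurement into the two bracketing
    table nodes, weighted by proximity, with an exponential-smoothing gain."""
    table = table.copy()
    for alt, ff in zip(alt_meas, ff_meas):
        i = int(np.clip(np.searchsorted(grid, alt) - 1, 0, len(grid) - 2))
        w = (alt - grid[i]) / (grid[i + 1] - grid[i])     # position in segment
        pred = (1 - w) * table[i] + w * table[i + 1]      # interpolated model
        err = ff - pred                                   # measurement residual
        table[i]     += gain * (1 - w) * err              # correct both nodes
        table[i + 1] += gain * w * err
    return table
```

Each pass pulls the interpolated prediction toward the sensed value, so a systematic model bias (here, a constant offset) decays over successive flights, mirroring the 12% → 0.3% fuel-flow error reduction reported.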
Procedia PDF Downloads 264
2864 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh
Authors: Md. Qamruzzaman, Wei Jianguo
Abstract:
Over the past decades, it has been observed that financial development, through financial innovation, not only accelerates the development of an efficient and effective financial system but also acts as a catalyst in the economic development process. In this study, we explore how financial innovation drives economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. The cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; the estimation shows that long-run growth is affected by capital flows from non-bank financial institutions and by inflation in the economy, whereas changes in the growth rate have no long-run impact on capital flows in the economy or on the level of inflation. Growth and market capitalization, as well as market capitalization and capital flows, confirm the feedback hypothesis. The variance decomposition suggests that any innovation in the financial sector can cause fluctuation in GDP in both the long run and the short run. Financial innovation promotes efficiency and lowers the cost of financial transactions in the financial system, and can thus boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial market and the capital market should be regulated, with the implementation of rules and regulations to create a conducive environment.
Keywords: financial innovation, economic growth, GDP, financial institution, VECM
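The error-correction logic underlying a VECM can be illustrated in its single-equation (Engle-Granger two-step) form: a levels regression gives the long-run relation, and its lagged residual enters the regression in differences, where a negative coefficient measures the speed of adjustment back to equilibrium. This is a simplified sketch of the idea, not the Johansen/VECM estimation the study uses, and the series in the test are synthetic:

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def error_correction_model(y, x):
    """Engle-Granger two-step ECM:
    step 1: y_t = c + beta*x_t + u_t          (long-run levels relation)
    step 2: dy_t = a + b*dx_t + g*u_{t-1}     (g < 0 expected: adjustment)"""
    T = len(y)
    levels = ols(np.column_stack([np.ones(T), x]), y)
    ect = y - levels[0] - levels[1] * x            # error-correction term
    dy, dx = np.diff(y), np.diff(x)
    X = np.column_stack([np.ones(T - 1), dx, ect[:-1]])
    a, b, g = ols(X, dy)
    return {"long_run_beta": levels[1], "short_run": b, "ec_speed": g}
```

In the full VECM every variable gets such an equation, and the Johansen procedure determines how many independent long-run (cointegrating) relations to include.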
Procedia PDF Downloads 272