Search results for: transmission error
2897 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement
Authors: Sai Sankalp Vemavarapu
Abstract:
This paper discusses the application of machine learning in the field of transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids us in assessing current road conditions and taking appropriate measures to avoid inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic modulus of multi-layered pavement. The proposed DE global optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem in which the cost function to be minimized is defined as the root mean square error between measured and computed deflections. The optimal solution, which in this case is the elastic modulus, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the optimum value are reported so that the results are reproducible whenever the need arises. The algorithm's performance in varied scenarios was analyzed by changing the input parameters. The prediction was well within the permissible error, demonstrating the effectiveness of DE.
Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation
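A minimal sketch of the DE back-calculation loop described above, assuming a toy forward model; a real implementation would call a layered-elastic response code, and the sensor offsets, bounds, and "true" moduli below are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

offsets = np.array([1.0, 1.3, 1.8, 2.5])   # normalized FWD sensor offsets

def forward_model(moduli):
    # Placeholder for the layered-elastic forward response model: it maps
    # layer moduli (MPa) to surface deflections at the sensor offsets.
    # Note: this toy model depends only on the total stiffness, so the
    # split between layers is not identifiable here.
    return 1.0e8 / (np.sum(moduli) * offsets)

true_moduli = np.array([3000.0, 800.0, 250.0])
measured = forward_model(true_moduli)      # stands in for measured deflections

def rmse(moduli):
    # Cost function: root mean square error in measured vs computed deflections.
    return np.sqrt(np.mean((measured - forward_model(moduli)) ** 2))

bounds = [(1000, 5000), (200, 1500), (50, 500)]  # search space per layer (MPa)
result = differential_evolution(rmse, bounds, seed=1)
print("back-calculated moduli (MPa):", result.x.round(1), "RMSE:", result.fun)
```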
Procedia PDF Downloads 164
2896 Evaluation of CERES Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere system, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models of DSSAT v4.6 were calibrated and evaluated for Haryana, India, one of the country's major wheat- and rice-producing states. The simulation runs were made under irrigated conditions with three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best-fit match between the simulated and observed anthesis, physiological maturity, and final grain yield was achieved. The model was validated by plotting the simulated LAI against remote-sensing-derived LAI; the LAI product from remote sensing provides the advantage of spatially explicit, timely, and accurate assessment of the crop. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient
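A minimal sketch of the validation criterion just described: the calibrated model is considered usable when the observed-versus-simulated error percentage stays below 10%. The yield figures are hypothetical placeholders, not the Haryana data.

```python
def error_percentage(observed, simulated):
    # Percent error between observed and simulated values.
    return abs(observed - simulated) / observed * 100.0

observed_yield, simulated_yield = 4650.0, 4390.0  # kg/ha (hypothetical)
err = error_percentage(observed_yield, simulated_yield)
print(f"error = {err:.1f}% -> {'acceptable' if err < 10 else 'recalibrate'}")
```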
Procedia PDF Downloads 305
2895 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers already published in the literature which measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the (8,094) municipalities. To carry out this study, it must be decided: a) how to store the huge amount of (spatial and descriptive) input data, and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution; and, eventually, b.4) archiving the results in permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps a) and b) easily. In particular, step b) is simplified dramatically by calling spatial operators and built-in spatial User Defined Functions within SQL queries against the Geo DB. The major findings from our experiments can be summarized as follows. The approximation that, on average, descends from assimilating the residence of citizens to the centroid of the administrative unit of reference is of a few kilometers (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
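A minimal sketch of the kind of SQL query described above, assuming the Geo DB schema given in the abstract. It computes, per municipality, the maximum distance between the unit's centroid and its border using standard PostGIS functions; the connection parameters are hypothetical, and distance units depend on the geometry's SRID.

```python
import psycopg2

conn = psycopg2.connect("dbname=geodb user=postgres")
query = """
    SELECT name,
           ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) AS max_err
    FROM municipality
    ORDER BY max_err DESC;
"""
with conn, conn.cursor() as cur:
    cur.execute(query)
    for name, max_err in cur.fetchmany(10):
        # Largest centroid-to-border distances, i.e., worst-case errors.
        print(f"{name}: max centroid-to-border distance = {max_err:.1f}")
```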
Procedia PDF Downloads 371
2894 Drug Susceptibility and Genotypic Assessment of Mycobacterial Isolates from Pulmonary Tuberculosis Patients in North East Ethiopia
Authors: Minwuyelet Maru, Solomon Habtemariam, Endalamaw Gadissa, Abraham Aseffa
Abstract:
Background: Tuberculosis is a major public health problem in Ethiopia. The burden of TB is aggravated by the emergence and expansion of drug-resistant tuberculosis, and different lineages of Mycobacterium tuberculosis (M. tuberculosis) have been reported in many parts of the country. Describing the strains of mycobacterial isolates and their drug susceptibility patterns is therefore necessary. Method: Sputum samples were collected from smear-positive pulmonary TB patients aged ≥ 7 years between October 1, 2012, and September 30, 2013, and mycobacterial strains were isolated on Lowenstein-Jensen (LJ) media. Each strain was characterized by deletion typing and spoligotyping. Drug sensitivity was determined with the indirect proportion method using Middlebrook 7H10 media, and associations with possible risk factors for drug resistance were assessed. Result: A total of 144 smear-positive pulmonary tuberculosis patients were enrolled. The age of participants ranged from 7 to 78 years, with a mean age of 29.22 (±10.77) years. In this study, 82.2% (n=97) of the isolates were sensitive to the four first-line anti-tuberculosis drugs, and resistance to any of the four drugs tested was 17.8% (n=21). The highest frequency of any resistance was observed for isoniazid, 13.6% (n=16), followed by streptomycin, 11.8% (n=14). No significant association of isoniazid resistance with HIV, sex, or history of previous TB treatment was observed, but there was a significant association with age, highest between 31 and 35 years of age (p=0.01). The majority, 88.9% (n=128), of participants were new cases, and only 11.1% (n=16) had a history of previous TB treatment. No MDR-TB was isolated from new cases, while 2 MDR-TB strains (13.3%) were isolated from re-treatment cases, which was significantly associated with previous TB treatment (p<0.01). Thirty-two different spoligotype patterns were identified, and 74.1% of isolates were grouped into 13 clusters. The dominant strains were SIT 25, 18.1% (n=21); SIT 53, 17.2% (n=20); and SIT 149, 8.6% (n=10). Lineage 4 is the predominant lineage, followed by lineage 3 and lineage 7, comprising 65.5% (n=76), 28.4% (n=33), and 6% (n=7), respectively. The majority of strains from lineages 3 and 4 were SIT 25 (63.6%) and SIT 53 (26.3%), respectively, whereas SIT 343 was the dominant strain of lineage 7 (71.4%). Conclusion: The wide spread of lineage 3 and lineage 4 of the modern lineages and the high number of strain clusters indicate high ongoing transmission. The high proportion of resistance to any of the first-line anti-tuberculosis drugs may be a potential source of the emergence of MDR-TB. The wide spread of SIT 25 and SIT 53, which tend to transmit easily, and the higher isoniazid resistance in the working and mobile age group (31-35 years) may increase the risk of transmission of drug-resistant strains.
Keywords: tuberculosis, drug susceptibility, strain diversity, lineage, Ethiopia, spoligotyping
Procedia PDF Downloads 375
2893 Convergence of Media in the New Era
Authors: Mohamad Reza Asariha
Abstract:
The development and expansion of modern communication technologies at an extraordinary speed has caused fundamental changes in all economic, social, cultural, and political areas of the world. The development of satellite and cable technologies, in addition to increasing the production and distribution of worldwide programs, made their economic justification more appealing. The shift in developed countries from an industrial economy to an information and service economy brought about unprecedented developments in the standards of world trade and, as a result, caused the expansion of media organizations into foreign markets. The growth of economic investment in many Asian countries, along with worldwide demand for media goods, created new markets, and the media, both on the domestic scene of nations and in the international field, are of great significance, with an effective and compelling presence in the equation of gaining, maintaining, and expanding economic power and wealth in the world. Moreover, technological advances and technological integration are critical factors in media structural change. This structural change took place under the influence of digitalization, that is, the process that broke the boundaries between electronic media services. Until now, the regulation of mass media was totally dependent on the particular modes of information transmission that were generally used. Digitization made it possible for any content to be easily transmitted through different electronic transmission modes, and this media convergence has had clear effects on media policies and the way mass media are regulated.
Keywords: media, digital era, digital ages, media convergence
Procedia PDF Downloads 74
2892 Systems for Air Renewal Inside Bus Bodies: Importance in the Prevention of Disease Transmission
Authors: Giovanni Matheus Rech, Gilberto Zan, Filipe P. Aguiar
Abstract:
The current pandemic scenario raises questions that would previously have gone unnoticed. One of these issues is the quality of the air we breathe in the diverse environments in which we find ourselves every day. It is plausible to suppose that, at times like this, there is apprehension regarding the possibility of contamination through the airways by pathological agents such as viruses and bacteria. However, the renewal of indoor air, combined with a properly sanitized air conditioning system, is an important tool for the prevention of viral diseases, as is the case with COVID-19. The bus is an example of an environment where air renewal is applied to improve indoor air quality, helping to reduce the possibility of spreading pathological agents. Together with other measures, such as alcohol gel dispensers, curtains to separate passengers, more frequent cleaning of the environment, and the mandatory use of masks, it helps to reduce the transmission of pathologies such as COVID-19. Given the dependence of a large part of the population on public transport, there are standards and devices dedicated to promoting air quality, ensuring greater comfort and safety for users. This paper presents such standards and recommendations for improving indoor air quality, as well as the equipment responsible for renewing the air in a bus body. Experimental measurement of the flow rates of the renewal devices present in the bus body allows the average volume of external air admitted by each type of body to be quantified. In this way, it was possible to compare, in terms of airflow per person, the values for a bus with those of a series of other environments, following the air renewal recommendations described in the Brazilian standard ABNT NBR 16401.
Keywords: air quality, air renewal, buses, COVID-19
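The airflow-per-person comparison described above amounts to simple arithmetic; the sketch below makes it concrete. All figures are hypothetical placeholders, not measured values from the paper or limits taken from ABNT NBR 16401.

```python
measured_flow_m3_h = 900.0   # total external air admitted by renewal devices
passengers = 46              # bus occupancy (hypothetical)

flow_per_person = measured_flow_m3_h / passengers
print(f"External airflow per person: {flow_per_person:.1f} m3/h")

reference_m3_h = 27.0        # hypothetical per-person reference value
print("meets reference" if flow_per_person >= reference_m3_h else "below reference")
```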
Procedia PDF Downloads 151
2891 Classification of Barley Varieties by Artificial Neural Networks
Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran
Abstract:
In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, the physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and the colour parameters of the grain, were determined, and it was found that these properties were statistically significant with respect to varieties. Three ANN models, N-1, N-2, and N-3, were constructed, and their performances were compared. It was determined that the best-fit model was N-1. In the N-1 model, the structure was designed with 11 input neurons, 2 hidden layers, and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and the colour parameters of the grain were used as input parameters, and the varieties as the output parameter. R², Root Mean Square Error, and Mean Error for the N-1 model were found to be 99.99%, 0.00074, and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
Keywords: physical properties, artificial neural networks, barley, classification
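A minimal sketch of the classification setup described above, assuming data of the kind the paper uses: 11 physical grain features per sample and one of 8 variety labels. It uses scikit-learn's MLPClassifier rather than the authors' original ANN tool, and the data are random dummies.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 11))      # 11 features: kernel weight, diameter, ...
y = rng.integers(0, 8, size=400)    # 8 barley varieties (dummy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Two hidden layers, mirroring the N-1 model's 11-input / 2-hidden / 1-output shape.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```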
Procedia PDF Downloads 178
2890 Design of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang
Abstract:
Internet service providers face endless demands for higher bandwidth and data throughput, as new services and applications require higher bandwidth and users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time division and wavelength division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NG-PON2). According to the Full Service Access Network (FSAN) group, hybrid Time and Wavelength Division Multiplexing (TWDM) is the best solution for implementing NG-PON2. To co-exist with or replace current PON technologies, many TWDM wavelengths can be deployed simultaneously. By utilizing 8 pairs of wavelengths that are multiplexed, transmitted over optical fiber for 40 km, and, on the receiving side, distributed among 256 users, we show that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that the overall performance, quality factor, and bandwidth of the network are increased, and the bit error rate is minimized, by the integration of this approach.
Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing
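A quick check of the capacity figures quoted above. The per-wavelength rate of 10 Gbps is an assumption consistent with the 80 Gbps aggregate over 8 wavelength pairs, not a value stated directly in the abstract.

```python
wavelength_pairs = 8
rate_per_wavelength_gbps = 10.0   # assumed per-channel rate
users = 256

aggregate_gbps = wavelength_pairs * rate_per_wavelength_gbps
per_user_mbps = aggregate_gbps * 1000 / users
print(f"aggregate = {aggregate_gbps:.0f} Gbps, per user = {per_user_mbps:.1f} Mbps")
# -> aggregate = 80 Gbps, per user = 312.5 Mbps
```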
Procedia PDF Downloads 70
2889 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE
Authors: Yu Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao
Abstract:
For the impact monitoring of distributed structures, the traditional positioning methods are based on time difference and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it in the blackboard module; (2) the triangulation agent gets its initial parameters by accessing the initial point; (3) the triangulation agent constantly accesses the blackboard module to update its initial parameters, and also logs its calculated point onto the blackboard; (4) when the subsequent calculated point and the initial calculated point are within the allowable error, the whole coordination fusion process is finished. This paper presents a multi-agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with an agent running in each container. Because of the excellent management and debugging tools of JADE, it is very convenient to handle complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE
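A framework-agnostic sketch of the blackboard fusion loop in steps (1)-(4) above. JADE itself is a Java framework; this Python mock-up only illustrates the coordination pattern, not the JADE API, and the update rule and tolerance are toy assumptions.

```python
blackboard = {}

def arc_agent():
    # Step 1: the four-point arc agent posts its initial estimate.
    blackboard["point"] = (1.20, 0.80)

def triangulation_agent(tolerance=1e-3, max_rounds=50):
    # Steps 2-4: refine the shared estimate until successive points agree.
    for _ in range(max_rounds):
        x, y = blackboard["point"]
        new_point = (0.5 * x + 0.55, 0.5 * y + 0.45)  # toy refinement rule
        blackboard["point"] = new_point
        if abs(new_point[0] - x) < tolerance and abs(new_point[1] - y) < tolerance:
            break  # within allowable error: fusion finished
    return blackboard["point"]

arc_agent()
print("fused impact position:", triangulation_agent())
```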
Procedia PDF Downloads 178
2888 Human Immunodeficiency Virus Infection/AIDS Abandoned Children in Kenya
Authors: Ruth Muturi Wanjiku
Abstract:
HIV/AIDS is a significant health concern in Kenya, where an estimated 1.5 million people live with the disease; this abstract considers its impact on unborn and young children. Unfortunately, many of these individuals are unaware of their HIV status, and the disease continues to spread among the population, including to unborn children. HIV/AIDS can be transmitted from an infected mother during pregnancy, childbirth, or breastfeeding. However, with early testing and treatment, the risk of mother-to-child transmission can be significantly reduced. Therefore, it is crucial for pregnant women to get tested and receive appropriate medical care. For young children, HIV/AIDS education is critical to preventing the spread of the disease. It is essential to teach children about the importance of safe sex practices, avoiding risky behaviors such as sharing needles, and getting tested regularly. Additionally, children should be taught about the stigma surrounding HIV/AIDS and encouraged to treat individuals living with the disease with compassion and respect. In conclusion, HIV/AIDS is a significant health concern in Kenya that affects individuals of all ages. For unborn children, early testing and treatment are critical to reducing the risk of mother-to-child transmission. For young children, education about HIV/AIDS and safe sex practices is essential to preventing the spread of the disease and reducing stigma. It is essential to promote awareness and encourage individuals to get tested and seek medical care if they believe they may be infected with HIV.
Keywords: AIDS, HIV, children, pregnant
Procedia PDF Downloads 70
2887 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)
Authors: N. E. Okoligwe, Okezie A. Ihugba
Abstract:
Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, the relationship between electricity consumption and economic growth manifests differently in different countries, according to previous studies. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. In an attempt to do this, the paper tests the validity of the modernization or dependence hypothesis by employing various econometric tools, such as the Augmented Dickey-Fuller (ADF) and Johansen cointegration tests, the Error Correction Mechanism (ECM), and the Granger causality test, on time series data from 1971 to 2012. Granger causality is found not to run from electricity consumption to real GDP, or from GDP to electricity consumption, during the period of study. The null hypothesis is accepted at the 5 per cent level of significance, the probability values (0.2251 and 0.8251) being greater than that level. Both variables are probably determined by other factors, such as growth in the urban population, the unemployment rate, and the number of Nigerians who benefit from the increase in GDP; the increase in electricity demand is not determined by the increase in GDP (income) over the period of study because electricity demand has always been greater than consumption. Consequently, policy makers in Nigeria should place priority, in the early stages of reconstruction, on building capacity additions and infrastructure development of the electric power sector, as this would foster sustainable economic growth in Nigeria.
Keywords: economic growth, electricity consumption, error correction mechanism, granger causality test
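A minimal sketch of the Granger causality step described above, using statsmodels on dummy series standing in for electricity consumption and real GDP; the paper's actual 1971-2012 Nigerian series are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 42  # annual observations, 1971-2012
gdp = np.cumsum(rng.normal(2.0, 1.0, n))    # real GDP (dummy, I(1))
elec = np.cumsum(rng.normal(1.5, 1.0, n))   # electricity consumption (dummy)

# First-difference to stationarity, then test H0: electricity consumption
# does not Granger-cause GDP (statsmodels tests column 2 causing column 1).
data = pd.DataFrame({"d_gdp": np.diff(gdp), "d_elec": np.diff(elec)})
results = grangercausalitytests(data[["d_gdp", "d_elec"]], maxlag=2)
```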
Procedia PDF Downloads 309
2886 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion
Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang
Abstract:
For the pilot design of the sparse channel estimation model in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, the observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion, and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power sum and high-power variance criteria, which estimates the channel more accurately. First, the pilot insertion positions are designed according to the high-power variance criterion under the condition of equal power. Then, according to the high-power sum criterion, the pilot power allocation is converted into a cone programming problem, and the power allocation is carried out. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Compared with traditional pilots, under the same conditions, the constructed MIMO-OFDM system using the optimal pilot for channel estimation achieves a bit-error-rate performance gain of 6-7 dB.
Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation
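A toy illustration of the final selection step, in which candidate pilot patterns are scored by a weighted combination of a high-power-sum term and a high-power-variance term. The candidate generation, the weights, and the sign convention are all illustrative assumptions; the paper's actual formulation, including the cone-programming power allocation, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
# 100 candidate pilot power-allocation patterns over 16 pilot subcarriers.
candidates = [rng.uniform(0.5, 1.5, size=16) for _ in range(100)]

def score(powers, w_sum=0.7, w_var=0.3):
    # Weighted combination of the two criteria; weights are assumptions.
    return w_sum * powers.sum() - w_var * powers.var()

best = max(candidates, key=score)
print("best pattern:", np.round(best, 2), "score:", round(score(best), 3))
```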
Procedia PDF Downloads 149
2885 Design of a Permanent Magnet Based Focusing Lens for a Miniature Klystron
Authors: Kumud Singh, Janvin Itteera, Priti Ukarde, Sanjay Malhotra, P. P. Marathe, Ayan Bandyopadhay, Rakesh Meena, Vikram Rawat, L. M. Joshi
Abstract:
Application of permanent magnet technology to high-frequency miniature klystron tubes to be utilized for space applications improves the efficiency and operational reliability of these tubes. Nevertheless, the task of generating magnetic focusing forces to eliminate beam divergence, once the beam crosses the electrostatic focusing regime and enters the drift region in the RF section of the tube, poses several challenges. Building a high-quality magnetic focusing lens to meet the beam optics requirements in the cathode gun and RF interaction regions is considered one of the critical issues for these high-frequency miniature tubes. In this paper, electromagnetic design and particle trajectory studies in combined electric and magnetic fields, carried out to optimize the magnetic circuit using 3D finite element method (FEM) analysis software, are presented. A rectangular configuration of the magnet was constructed to accommodate apertures for the input and output waveguide sections and to facilitate coupling of electromagnetic fields into the input klystron cavity and out from the output klystron cavity through coupling loops. Prototype lenses have been built and tested after integration with the klystron tube. We discuss the design requirements and challenges, and the results from beam transmission of the prototype lens.
Keywords: beam transmission, Brillouin, confined flow, miniature klystron
Procedia PDF Downloads 445
2884 Numerical and Simulation Analysis of Composite Friction Materials Using Single Plate Clutch Pad in Agricultural Tractors
Authors: Ravindra Raju, Vidhu Kampurath
Abstract:
For a smooth transition of power from the engine to the transmission system, a clutch is used. In agricultural tractors, friction clutches are widely used in power transmission applications. To transmit the maximum torque in friction clutches, the selection of materials is one of the important tasks. The materials presently used for friction discs are asbestos, ceramics, etc. In this study, the analysis is performed using composite materials, which are considered due to their high strength-to-weight ratio. Composite materials such as Kevlar 49 and Kevlar 29U were used in the study. The paper presents a systematic approach to optimize the structural and thermal characteristics of the clutch friction pad. A single plate clutch is modeled using Creo 2.0 software and analyzed using ANSYS. The thermal analysis considers the reduction of the heat generated between the friction surfaces and of the temperature rise during the steady-state period. The structural analysis is done to minimize the stresses developed as a result of the loading contact between the friction surfaces. Also, a modal analysis is done to optimize the natural frequency of the friction plate, to avoid resonance with the engine frequency range. The analysis is carried out in ANSYS Workbench to identify the most appropriate friction material for the clutch. From the analyzed results, the stress, strain/total deformation values, and natural frequencies were compared for all the composite materials, and the best one was selected. For the purposes of the study, the specifications of the clutch were obtained from the MF1035 (47 kW) tractor model.
Keywords: ANSYS, clutch, composite materials, creo
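As background for why the friction material, through its coefficient of friction, governs the transmittable torque, the standard single-plate clutch relation under the uniform-wear assumption is shown below. This is textbook theory, not a formula taken from the paper.

```latex
% T: transmittable torque, \mu: coefficient of friction of the pad material,
% W: axial clamping force, n: number of friction surfaces (n = 2 for a
% single plate with both faces active), R_o, R_i: outer and inner pad radii.
T \;=\; n\,\mu\,W\,R_m,
\qquad
R_m \;=\; \frac{R_o + R_i}{2} \quad \text{(uniform-wear mean radius)}
```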
Procedia PDF Downloads 299
2883 The Prevalence of Blood-Borne Viral Infections among Autopsy Cases in Jordan
Authors: Emad Al-Abdallat, Faris G. Bakri, Azmi Mahafza, Rayyan Al Ali, Nidaa Ababneh, Ahmed Idhair
Abstract:
Background: Morgues are high-risk areas for the spread of infection from cadavers to staff during postmortem examination. Infection can spread from corpses to workers by the airborne route, by direct contact, or through needle and sharp-object injuries. Objective: Knowledge about the prevalence of these infections among autopsy cases is prudent to appreciate any risk of transmission and to further enforce safety measures. Method: A total of 242 autopsies performed during the study period were tested for hepatitis B surface antigen, hepatitis C virus antibody, and human immunodeficiency virus antibody; positive tests were subsequently confirmed by polymerase chain reaction. Ages ranged from 3 days to 94 years (median 75.5 years; mean 45.3 years, SD 21.9). There were 172 (71%) males. Results: The cause of death was considered natural in 137 (56.6%) cases, accidental in 89 (36.8%), homicidal in 9 (3.7%), suicidal in 4 (1.7%), and unknown in 3 (1.2%). Hepatitis B surface antigen was positive in 5 (2.1%) cases. Hepatitis C virus antibody was detected in 5 (2.1%) cases, and hepatitis C virus polymerase chain reaction was positive in 2 of them (0.8%). HIV antibody was not detected in any of the cases. Conclusions: Autopsies can be associated with exposure to blood-borne viruses. There is a low prevalence of infection with these viruses in our autopsy cases; however, the risk of transmission remains a threat. Healthcare workers in forensic departments should adhere to standard precautions.
Keywords: autopsy, hepatitis B virus, hepatitis C virus, human immunodeficiency virus, Jordan
Procedia PDF Downloads 380
2882 Usage of the Point Analysis Algorithm (SANN) in Drought Analysis
Authors: Khosro Shafie Motlaghi, Amir Reza Salemian
Abstract:
In arid and semi-arid regions like our country, evapotranspiration accounts for the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this study, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was conducted in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year statistical records were investigated with three methods: least squares error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO Penman method. The results over the statistical period show that the evapotranspiration trend is positive for 24 percent of the stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the temperature trend is positive for 22 percent of the stations, negative for 19 percent, and without any trend for 64 percent. The rainfall trend results show that the amount of rainfall at most stations did not exhibit a meaningful trend. The results of the Mann-Kendall method were similar to those of the least squares error method. Regarding the acquired results, we can conclude that in future years some regions will face increases in temperature and evapotranspiration.
Keywords: analysis, algorithm, SANN, ET0
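A minimal sketch of the Mann-Kendall trend test used above, hand-rolled (no-ties variance) for a single annual ET0 series; the data are dummies, not the Iranian station records.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    x = np.asarray(series)
    n = len(x)
    # S statistic: signs of all pairwise later-minus-earlier differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity corr.
    p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
    return z, p

rng = np.random.default_rng(3)
et0 = 1400 + 2.0 * np.arange(30) + rng.normal(0, 20, 30)  # 30-year dummy series
z, p = mann_kendall(et0)
print(f"Z = {z:.2f}, p = {p:.4f} -> {'trend' if p < 0.05 else 'no trend'}")
```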
Procedia PDF Downloads 296
2881 Error Analysis of Pronunciation of French by Sinhala Speaking Learners
Authors: Chandeera Gunawardena
Abstract:
The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using the random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all speak the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during it had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list comprising 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the recording was completed, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala-speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).
Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French
Procedia PDF Downloads 210
2880 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis; in the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that might be aleatory, although sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
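A minimal sketch contrasting the two propagation routes named in the title on a toy model y = f(x1, x2): Monte Carlo sampling versus a first-order (Taylor) error analysis. The model and input distributions are dummies, not the challenge problem's.

```python
import numpy as np

def f(x1, x2):
    return x1 ** 2 + 3.0 * x2

mu = np.array([1.0, 2.0])      # input means
sigma = np.array([0.1, 0.2])   # input standard deviations

# Sampling-based propagation.
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(100_000, 2))
y = f(samples[:, 0], samples[:, 1])
print("MC:          mean = %.4f, std = %.4f" % (y.mean(), y.std()))

# First-order error analysis: sigma_y^2 = sum_i (df/dx_i)^2 sigma_i^2.
grad = np.array([2.0 * mu[0], 3.0])          # analytic gradient at the mean
std_fo = np.sqrt(np.sum((grad * sigma) ** 2))
print("First order: mean = %.4f, std = %.4f" % (f(*mu), std_fo))
```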
Procedia PDF Downloads 240
2879 Two-Phase Flow Study of Airborne Transmission Control in Dental Practices
Authors: Mojtaba Zabihi, Stephen Munro, Jonathan Little, Ri Li, Joshua Brinkerhoff, Sina Kheirkhah
Abstract:
The Occupational Safety and Health Administration (OSHA) identified dental workers as being at the highest risk of contracting COVID-19. This is because aerosol-generating procedures (AGPs) during dental practice generate aerosols (< 5 µm) and droplets. These particles travel at varying speeds, in varying directions, and for varying durations. If these particles bear infectious viruses, their spread causes airborne transmission of the virus in the dental room, exposing dentists, hygienists, dental assistants, and even other dental clinic clients to the risk of infection. Computational fluid dynamics (CFD) simulations of two-phase flows based on a discrete phase model (DPM) are carried out to study the spread of aerosols and droplets in a dental room. The simulation includes momentum, heat, and mass transfer between the particles and the airflow. Two simulations are conducted and compared. One focuses on the effects of room ventilation in winter and summer on the particles' travel. The other focuses on controlling the spread of aerosols and droplets: a suction collector is added near the source of the aerosols and droplets, creating a flow sink in order to remove the particles. The effects of the suction flow on aerosol and droplet travel are studied. The suction flow can remove aerosols and also reduce the spread of droplets.
Keywords: aerosols, computational fluid dynamics, COVID-19, dental, discrete phase model, droplets, two-phase flow
Procedia PDF Downloads 265
2878 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in a new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
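A minimal sketch of the adaptive-lookup-table idea described above: a 1-D fuel-flow table indexed by altitude is nudged toward in-flight measurements. The table values, the measurements, and the blending gain are hypothetical, not Citation X data.

```python
import numpy as np

altitudes = np.array([10000.0, 20000.0, 30000.0, 40000.0])    # ft
fuel_flow_table = np.array([3200.0, 2600.0, 2100.0, 1700.0])  # lb/h (FCOM-like seed)

def update(alt, measured, gain=0.3):
    """Blend a measured fuel flow into the nearest table cell."""
    i = int(np.argmin(np.abs(altitudes - alt)))
    error = measured - fuel_flow_table[i]      # prediction vs measurement
    fuel_flow_table[i] += gain * error         # move the cell toward reality
    return error

# Simulated cruise samples at 30,000 ft on a degraded airframe (+6% burn).
for _ in range(12):
    update(30000.0, 2100.0 * 1.06)

print("corrected cell:", round(fuel_flow_table[2], 1))  # converges toward ~2226 lb/h
```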
Procedia PDF Downloads 264
2877 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh
Authors: Md. Qamruzzaman, Wei Jianguo
Abstract:
Over the past decades, it has been observed that financial development, through financial innovation, not only accelerated the development of an efficient and effective financial system but also acted as a catalyst in the economic development process. In this study, we explore how financial innovation causes economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. The cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; the estimates reveal that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, but changes in the growth rate do not have any impact on capital flow in the economy or on the level of inflation in the long run. In contrast, growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause GDP fluctuations in both the long run and the short run. Financial innovation promotes efficiency and reduces the cost of financial transactions in the financial system, and can thus boost the economic development process. The study offers two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial market and capital market should be regulated, with the implementation of rules and regulations, to create a conducive environment.
Keywords: financial innovation, economic growth, GDP, financial institution, VECM
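A minimal sketch of the VECM estimation step described above, using statsmodels on dummy series standing in for GDP, capital flow, and inflation; the Bangladeshi 1990-2014 data are not reproduced here, and the lag order and deterministic term are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(1)
n = 25  # annual observations, 1990-2014
common = np.cumsum(rng.normal(0.0, 1.0, n))   # shared stochastic trend
data = pd.DataFrame({
    "gdp": common + rng.normal(0, 0.3, n),
    "capital_flow": 0.8 * common + rng.normal(0, 0.3, n),
    "inflation": rng.normal(5, 1, n),
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=1)  # Johansen-based rank
# Ensure at least one cointegrating relation so the sketch always runs.
model = VECM(data, k_ar_diff=1, coint_rank=max(rank.rank, 1), deterministic="co")
print(model.fit().summary())
```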
Procedia PDF Downloads 272
2876 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects remains a persistent problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with similar types of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, a correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are then used as input neurons for the neural network models. For constructing the neural network models, the FANN Tool application is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA
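A minimal sketch of the final modelling step described above: a neural network regressor mapping (budgeted cost, deck concrete quantity) to actual cost. FANN is a C library; scikit-learn is used here instead, and the training data are dummies, not the 39 Greek bridge projects.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
budget = rng.uniform(1.0, 10.0, 39)        # budgeted cost (hypothetical units)
concrete = rng.uniform(100.0, 2000.0, 39)  # deck concrete quantity (m3)
actual = budget * 1.12 + 0.0004 * concrete + rng.normal(0, 0.1, 39)

X = StandardScaler().fit_transform(np.column_stack([budget, concrete]))
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, actual)
print("training MSE:", np.mean((model.predict(X) - actual) ** 2))
```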
Procedia PDF Downloads 134
2875 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit
Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with the radian between the gap. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the entangled pair appears to have a black hole outside the gap that increases until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with black holes becoming smaller; moving faster and further still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, by which high data transmission capability and capacity can be obtained. The black and white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit is equal to the speed of light. The black hole forms when the particle speed has increased from slower than to equal to the light speed, while the white hole is established when the particle comes down from faster than light. They are bound by the entangled pair, signal and idler, ⟨Signal│Idler⟩, and the virtual ones for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to cloning by thieves. We start by creating wave-particle behavior when the particle speed is equal to that of light; the black hole is in the middle of the entangled pair, which is the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier. A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses. The transmitted information is safely inside the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information, with the required destination, is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.
Keywords: cloning free, time machine, teleportation, two-level system
Procedia PDF Downloads 75
2874 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease-outbreak control efforts. This study uses environmental data from two U.S. federal government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values, so that the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber regressor, the Support Vector Machine, the Gradient Boosting Regressor (GBR), and the Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). The goal is to select the optimization strategy with the fewest errors, lowest cost, and greatest productivity or potential. Optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search integrated with a Genetic Algorithm is introduced for input feature selection, and it shows an important improvement in the models' predictive accuracy. The predictive models built on the Huber regressor perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
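A minimal sketch of the model-comparison stage described above: Huber, SVR, and gradient boosting regressors scored by MAE and RMSE on dummy weekly-case data; the NOAA/CDC features are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                # temperature, precipitation, ...
y = 20 + 4 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 2, 300)  # weekly cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("Huber", HuberRegressor()), ("SVR", SVR()),
                    ("GBR", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f}, RMSE={rmse:.2f}")
```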
Procedia PDF Downloads 65
2873 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix
Authors: Tsui-Tsai Lin
Abstract:
Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to significant performance degradation. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of CFO estimation for cooperative OFDM systems without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for a cooperative OFDM decode-and-forward (DF) communication system without a CP over multipath fading channels. Specifically, using a block-type pilot, the CFO estimate is first derived in accordance with the least squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys the advantage of substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration
Procedia PDF Downloads 265
2872 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter
Authors: Jisun Lee, Jay Hyoun Kwon
Abstract:
As an alternative way to compensate for INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical dataset as well as to improve the stability of positioning. The main process for compensating the INS error using a geophysical database was constructed on the basis of the Extended Kalman Filter (EKF). In detail, two types of combination method, centralized and decentralized filtering, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated by simulation, supposing that the aircraft flies with a precise geophysical database and sensors above nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database, to check the improvement due to the combination of heterogeneous geophysical data. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results from the combination of gravity gradient and terrain data. It was also found that the centralized filter generally showed more stable results; this is because the weight allocation in the decentralized filter could not be optimized, due to the local inconsistency of the geophysical data. In the future, switching between geophysical datasets or combining different navigation algorithms will be necessary to obtain more robust navigation results.
Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain
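A minimal sketch of an EKF measurement update of the kind described above: a 1-D along-track position is corrected by comparing a measured terrain height with a terrain-database lookup. The terrain profile, noise levels, and geometry are dummies, not the paper's setup.

```python
import numpy as np

terrain = lambda x: 500.0 + 80.0 * np.sin(x / 1000.0)  # terrain DB h(x), m
slope = lambda x: 0.08 * np.cos(x / 1000.0)            # dh/dx, linearization

x_true = 5000.0                  # true position (m)
x_est, P = 4800.0, 200.0 ** 2    # INS-indicated position and its variance
R = 1.0 ** 2                     # terrain-height measurement noise variance

rng = np.random.default_rng(0)
for _ in range(15):
    z = terrain(x_true) + rng.normal(0.0, 1.0)   # measured terrain height
    H = slope(x_est)                             # linearized measurement model
    K = P * H / (H * P * H + R)                  # Kalman gain
    x_est += K * (z - terrain(x_est))            # innovation-driven correction
    P *= (1.0 - K * H)
print(f"corrected position: {x_est:.0f} m (truth {x_true:.0f} m)")
```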
Procedia PDF Downloads 349
2871 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer
Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo
Abstract:
The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from human fatigue, which leads to unsteady walking steps, its field capacity is small: it barely covers about 0.2 hectare per hour. Its small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and it is often desired to spray a large farm within hours or a few days for an even effect and uniformity, and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid the re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful: human error in taking equally spaced swaths usually results in overdosage at overlaps and in unsprayed areas due to errors at swath edges. Large-farm spraying therefore requires boom equipment with a larger swath, which reduces swath-overlap errors and assures spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, wherefore farmers find it difficult to engage tractors for cultivation and would avoid considering the employment of a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country; the development of boom sprayers drawn by work animals therefore implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an animal-drawn knapsack boom sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the appropriate-technology demand of the country. It is hoped that the introduction of this novel contrivance shall enhance crop protection practice and lead to greater crop and food production in Nigeria.
Keywords: boom, knapsack, farm, sprayer, wheel axle
Procedia PDF Downloads 283
2870 The Factors That Constitute the Interaction between Teachers and Students: An Empirical Study on the Notion of Framing
Authors: Tien-Hui Chiang
Abstract:
The code theory proposed by Basil Bernstein indicates that framing can be viewed as the core element in constituting the phenomenon of cultural reproduction, because it is able to regulate the transmission of pedagogical information. Strong framing increases the social-relation boundary between a teacher and pupils, which obstructs information transmission, so that in order to improve underachieving students' academic performance, teachers need to reduce the strength of framing. Weak framing enables them to transform academic knowledge into commonsense knowledge in daily-life language. This study posits that most teachers deliver strong framing due to beliefs confined mainly within the aspect of instrumental rationality, which deprives them of their critical minds. This situation can make them view the normal-distribution bell curve of students' academic performance as a natural outcome. In order to examine the interplay between framing, instrumental rationality, and pedagogical action, questionnaires were completed by over 5,000 primary school teachers in Henan province, China, selected through stratified sampling. The statistical results show that most teachers employed psychological concepts to measure students' academic performance and, in turn, educational inequity was legitimized as a natural outcome in this efficiency-led approach. Such efficiency-led mindsets made them act as agents practicing the mechanism of social control and, in turn, sustaining the phenomenon of cultural reproduction.
Keywords: code, cultural reproduction, framing, instrumental rationality, social relation and interaction
Procedia PDF Downloads 151
2869 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores
Authors: Natheer Alatawneh
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in different configurations, ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention during the core-loss calculation process to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6, and 0.8. The measured data were applied to a 6/4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose the high margin of error that may occur during loss calculations if the flux-pattern issue is neglected: the error in different parts of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.
Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses
Procedia PDF Downloads 252
2868 Video Compression Using Contourlet Transform
Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan
Abstract:
Video compression is used for channels with limited bandwidth and for storage devices of limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is one such method, but it suffers from problems such as blocking artifacts, noise, and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach, better than cosine transforms at balancing compression and quality, but its ability to recognize curve curvature is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we used the contourlet transform for video image compression. The contourlet transform can preserve the details of the image better than the previous transforms because it is multi-scale and oriented, and it can recognize discontinuities such as edges. With this approach, less data is lost than in previous approaches. The contourlet transform captures discrete-space structure and is useful for representing two-dimensional smooth images; it produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean squared error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean squared error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform
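A minimal sketch of the evaluation metrics used above: MSE and PSNR between an original frame and its reconstruction after lossy compression. The frames are random dummies; no actual contourlet codec is implemented here.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two frames.
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB for 8-bit frames.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
recon = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(frame, recon):.2f}, PSNR = {psnr(frame, recon):.2f} dB")
```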
Procedia PDF Downloads 443