Search results for: refractive errors
361 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on the control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while a related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide the given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the space distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in the glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters, which provided a reliable coincidence of the simulation and experimental data, were found. This ensured the adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
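A minimal sketch of the calibration idea described above, under stated assumptions: `simulate_profiles` is only a hypothetical stand-in for the authors' numerical decrystallization model, and the two dimensionless parameters are recovered by a simple grid search that minimizes the mismatch between simulated and measured profiles of diffusant concentration and grain size across the (temperature, time) conditions.

```python
import numpy as np

def simulate_profiles(beta, gamma, T, t, x):
    """Hypothetical stand-in for the numerical decrystallization model:
    returns (concentration profile, mean grain-size profile) over depth x.
    The real model solves the coupled ion-exchange diffusion and
    grain-dissolution equations."""
    c = np.exp(-x / (gamma * np.sqrt(t)))     # placeholder diffusion-like shape
    r = np.clip(1.0 - beta * c, 0.0, None)    # placeholder grain-size response
    return c, r

def misfit(beta, gamma, measurements):
    """Sum of squared differences over all (T, t) processing conditions."""
    total = 0.0
    for (T, t, x, c_meas, r_meas) in measurements:
        c_sim, r_sim = simulate_profiles(beta, gamma, T, t, x)
        total += np.sum((c_sim - c_meas) ** 2) + np.sum((r_sim - r_meas) ** 2)
    return total

def fit_parameters(measurements, betas, gammas):
    """Grid search for the (beta, gamma) pair that best reproduces the data."""
    best = min(((misfit(b, g, measurements), b, g) for b in betas for g in gammas))
    return best[1], best[2]
```

The grid search would be repeated per temperature to obtain the temperature dependences of the parameters mentioned above.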
Procedia PDF Downloads 269
360 Tuning of Kalman Filter Using Genetic Algorithm
Authors: Hesham Abdin, Mohamed Zakaria, Talaat Abd-Elmonaem, Alaa El-Din Sayed Hafez
Abstract:
The Kalman filter algorithm is an estimator known as the workhorse of estimation. It has an important application in missile guidance, especially when accurate target data are lacking due to noise or uncertainty. In this paper, a Kalman filter is used as a tracking filter in a simulated target-interceptor scenario with noise. It estimates the position, velocity, and acceleration of the target in the presence of noise. These estimates are needed for both proportional navigation and differential geometry guidance laws. A Kalman filter performs well at low noise, but large noise causes considerable errors that lead to performance degradation. Therefore, a new technique is required to overcome this defect, using tuning factors to tune the Kalman filter to adapt to increasing noise. The values of the tuning factors are between 0.8 and 1.2; they take one value for the first half of the range and a different value for the second half, and they are multiplied by the estimated values. These factors have their optimum values and are altered as the target heading changes. A genetic algorithm updates these selections to increase the maximum effective range, which was previously reduced by noise. The results show that the selected factors have other benefits, such as decreasing the minimum effective range that was increased earlier due to noise. In addition, the selected factors decrease the miss distance for all ranges in this direction of the target and expand the effective range, which leads to an increased probability of kill.
Keywords: proportional navigation, differential geometry, Kalman filter, genetic algorithm
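A compact sketch of the tuning idea, assuming a simple one-dimensional constant-velocity Kalman filter (the paper's interceptor model is not reproduced): two tuning factors, one per half of the range and both constrained to 0.8-1.2, multiply the estimated value, and a genetic-algorithm-style search (reduced here to mutation and selection of the best pair) picks the factors that minimize the estimation error. Names such as `run_filter` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_filter(factors, z, split):
    """1-D constant-velocity Kalman filter; the position estimate is scaled by a
    tuning factor that switches value halfway through the engagement."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])        # state transition (dt = 1)
    H = np.array([[1.0, 0.0]])                    # position measurement
    Q, R = np.eye(2) * 0.01, np.array([[25.0]])   # process / measurement noise
    x, P = np.zeros(2), np.eye(2) * 100.0
    est = []
    for k, zk in enumerate(z):
        x, P = F @ x, F @ P @ F.T + Q             # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + (K @ (zk - H @ x)).ravel()        # update
        P = (np.eye(2) - K @ H) @ P
        f = factors[0] if k < split else factors[1]
        est.append(f * x[0])                      # tuned position estimate
    return np.array(est)

# Toy target: true position plus noisy measurements
truth = np.arange(200, dtype=float)
z = truth + rng.normal(0.0, 5.0, truth.size)

# GA-style search: mutate the best factor pair within [0.8, 1.2] each generation
best, best_err = np.array([1.0, 1.0]), np.inf
for gen in range(50):
    trials = np.clip(best + rng.normal(0.0, 0.05, (20, 2)), 0.8, 1.2)
    for f in trials:
        err = np.mean((run_filter(f, z, truth.size // 2) - truth) ** 2)
        if err < best_err:
            best, best_err = f.copy(), err
print("selected tuning factors:", best)
```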
Procedia PDF Downloads 508
359 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets
Authors: Song Hyok Choe
Abstract:
This study presented a method to determine the optimum design parameters based on a mathematical model of the operating process in a manual electromagnetic hammer using solenoid electromagnets. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of the manual electromagnetic hammer proposed in this paper were encoded and analyzed in Matlab. On the other hand, a measurement experiment was carried out using a measuring device to check the accuracy of the developed mathematical model. The relative errors of the analytical results for the measured stroke distance of the impact piston, the peak value of the forward stroke current, and the peak value of the reverse stroke current were −4.65%, 9.08%, and 9.35%, respectively. Finally, it was shown that the mathematical model of the operating process of an electromagnetic hammer is relatively accurate, and it can be used to determine the design parameters of the electromagnetic hammer. Therefore, the design parameters that can provide the required impact energy in the manual electromagnetic hammer were determined using the developed mathematical model. The proposed method will be used for the further design and development of various types of percussion rock drills.
Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling
Procedia PDF Downloads 44
358 An Historical Revision of Change and Configuration Management Process
Authors: Expedito Pinto De Paula Junior
Abstract:
Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems and air traffic control are becoming increasingly more complex and/or highly integrated, as defined in SAE-ARP-4754A (Society of Automotive Engineers - Certification Considerations for Highly-Integrated or Complex Aircraft Systems standard). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for its better implementation in the hardware and software lifecycle. The sense of work organization, in all fields of development, is directly related to the order and interrelation of the parties, changes over time, and the record of these changes. Generally, it is observed that engineers, administrators and managers invest more time in technical activities than in the organization of work. Moreover, these professionals are focused on solving complex problems with a purely technical bias. The CCM process is fundamental for the development, production and operation of new products, especially in safety-critical systems. The objective of this paper is to open a discussion about the historical revision of CCM standards around the world, in order to understand and reflect on the importance of this process across the years, its contribution to technology evolution, the maturity of organizations in the system lifecycle project, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.
Keywords: changes, configuration management, historical, revision
Procedia PDF Downloads 200
357 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM
Authors: Jaechoul Shin, Juhwan Hwang
Abstract:
When the BIM method is applied to civil engineering, particularly in the area of free-formed structures, a comparatively high rate of construction productivity can be expected, as in the building engineering area. In this research, we examined quantity calculation errors by applying the method to earthwork and bridge construction (e.g., a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders with an inverted-Tee bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction, and implemented a BIM-based 3D modeling quantity survey. We confirmed the high reliability of the BIM-based method in structural work, in which errors occurred in the range of -6% to +5%. In particular, rock-type quantity calculation errors in the range of -14% to +13% of the earthwork quantity reveal the problems of the existing 2D-CAD-based quantity calculation and the need for its improvement, and demonstrate the benefit and applicability of the BIM method in civil engineering. In addition, the routine method for earthwork quantity has an error tolerance as negligible as that of structural work, but the significant error in the rock-type quantity shows that the reliability of 2D-based volume calculation could be a problem. By estimating the quantity of earthwork based on 3D-BIM, the proposed method achieves better reliability than the routine method. Considering the benefits of integrating information at the design, construction, and maintenance levels, the effectiveness of introducing BIM design in civil engineering and the possibility of its application were confirmed.
Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork
Procedia PDF Downloads 441
356 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the frequently used tools to detect atrial fibrillation (AF), which might degenerate into life-threatening stroke, in clinical practice. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73-0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Thanks to the progress of artificial intelligence (AI), we developed a deep ECG model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73; (2) 88 12-lead ECG reports whose computer interpretation generated by the ECG device was AF were used as the test dataset. The cardiologists confirmed that 68 of the 88 reports were AF, and the others were not; the precision of AF detection by the ECG device was about 0.77; (3) A parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead ECGs and chest-lead ECGs. Results: The results indicated that this model has better performance on AF detection than the traditional computer interpretation of the ECG device in the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can generate impacts on clinical applications.
Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
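A minimal PyTorch sketch of a parallel two-branch, 4-layer 1-D CNN in the spirit described above; the channel counts (6 limb leads, 6 chest leads), sequence length, layer widths, and pooling are assumptions, since the abstract does not specify the architecture details.

```python
import torch
import torch.nn as nn

def branch(in_ch):
    """Four stacked 1-D convolution blocks for one group of ECG leads."""
    layers, ch = [], in_ch
    for out_ch in (16, 32, 64, 128):
        layers += [nn.Conv1d(ch, out_ch, kernel_size=7, padding=3),
                   nn.ReLU(), nn.MaxPool1d(4)]
        ch = out_ch
    layers.append(nn.AdaptiveAvgPool1d(1))
    return nn.Sequential(*layers)

class ParallelAFNet(nn.Module):
    """Limb-lead and chest-lead branches processed in parallel, then fused."""
    def __init__(self):
        super().__init__()
        self.limb, self.chest = branch(6), branch(6)
        self.head = nn.Linear(256, 2)   # AF vs. non-AF

    def forward(self, limb_leads, chest_leads):
        a = self.limb(limb_leads).flatten(1)
        b = self.chest(chest_leads).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

# Example: a batch of 4 ten-second recordings sampled at 500 Hz (assumed).
model = ParallelAFNet()
logits = model(torch.randn(4, 6, 5000), torch.randn(4, 6, 5000))
print(logits.shape)   # torch.Size([4, 2])
```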
Procedia PDF Downloads 113
355 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error values compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
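A short numpy sketch of the classical Nadaraya-Watson kernel regression estimator that underlies the compensation step, shown with a Gaussian kernel; the paper's improved weights are not reproduced, this is only the standard form used to impute the survey variable from the auxiliary variable for non-respondents.

```python
import numpy as np

def nadaraya_watson(x_grid, x_obs, y_obs, bandwidth):
    """Classical estimator m(x) = sum K_h(x - x_i) y_i / sum K_h(x - x_i)."""
    u = (x_grid[:, None] - x_obs[None, :]) / bandwidth
    K = np.exp(-0.5 * u ** 2)                 # Gaussian kernel (unnormalized is fine)
    return (K @ y_obs) / K.sum(axis=1)

# Toy second-stage sample with random non-response
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                   # auxiliary variable (fully observed)
y = np.sin(x) + rng.normal(0, 0.3, x.size)    # survey variable
respond = rng.random(x.size) > 0.3            # ~30% random non-response

# Impute the non-respondents from the respondents, then estimate the mean
y_hat = y.copy()
y_hat[~respond] = nadaraya_watson(x[~respond], x[respond], y[respond], bandwidth=0.5)
print("estimated finite population mean:", y_hat.mean())
```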
Procedia PDF Downloads 137
354 Learner's Difficulties Acquiring English: The Case of Native Speakers of Rio de La Plata Spanish Towards Justifying the Need for Corpora
Authors: Maria Zinnia Bardas Hoffmann
Abstract:
Contrastive Analysis (CA) is the systematic comparison between two languages. It stems from the notion that errors are caused by interference of the L1 system in the acquisition process of an L2. CA represents a useful tool to understand the nature of learning and acquisition. Also, this particular method promises a path to understand the nature of the underlying cognitive processes, even when other factors such as intrinsic motivation and teaching strategies were found to best explain students' problems in acquisition. The study of CA is justified not only by the need to gain a deeper understanding of the nature of SLA, but also as an invaluable source of clues, at a cognitive level, about those general processes involved in rule formation and abstract thought. It is relevant for cross-disciplinary studies and the fields of Computational Thought, Natural Language Processing, Applied Linguistics, Cognitive Linguistics and Math Theory. That being said, this paper also addresses its own set of constraints and limitations. Finally, this paper: (a) aims at identifying some of the difficulties students may find in their learning process due to the nature of their specific variety of L1, Rio de la Plata Spanish (RPS), and (b) represents an attempt to discuss the necessity of specific models to approach CA.
Keywords: second language acquisition, applied linguistics, contrastive analysis, applied contrastive analysis, English language department, meta-linguistic rules, cross-linguistics studies, computational thought, natural language processing
Procedia PDF Downloads 150
353 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy
Authors: Hao Wang, Shengchun Wang, Weidong Wang
Abstract:
To address the influence of train vibration and environmental noise on the measurement of track wear, we proposed a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist and achieved high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram was proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitting curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window was proposed to realize the automatic segmentation of the small circular arcs. Finally, we fitted two circle centers as matching reference points based on the small circular arcs on both sides and realized the alignment of the measured profile to the standard design profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment, and the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
Keywords: curvature entropy, profile registration, rail wear, structured light, train-running
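A brief numpy sketch of the curvature-based segmentation idea: fit a polynomial to the profile points, evaluate the curvature κ = y'' / (1 + y'²)^(3/2) of the fitted curve, and score each sliding window by the Shannon entropy of its curvature distribution. The window size, polynomial degree, binning, and toy profile are assumptions, not the paper's parameters.

```python
import numpy as np

def curvature_entropy_profile(x, y, degree=7, window=40, bins=16):
    """Fit the profile, then score each sliding window by the entropy of its
    curvature distribution (higher entropy ~ stronger arc content)."""
    coeffs = np.polyfit(x, y, degree)                  # polynomial fit of the profile
    d1 = np.polyval(np.polyder(coeffs, 1), x)          # first derivative y'
    d2 = np.polyval(np.polyder(coeffs, 2), x)          # second derivative y''
    kappa = d2 / (1.0 + d1 ** 2) ** 1.5                # curvature of the fitted curve

    scores = []
    for i in range(len(x) - window):
        hist, _ = np.histogram(kappa[i:i + window], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-np.sum(p * np.log(p)))          # Shannon entropy of curvatures
    return kappa, np.array(scores)

# Toy profile: a straight segment joined to a circular-arc-like segment
x = np.linspace(0, 10, 400)
y = np.where(x < 5, 0.1 * x, 0.5 + np.sqrt(np.maximum(4 - (x - 5) ** 2, 0)) - 2)
kappa, entropy = curvature_entropy_profile(x, y)
print("window with maximum curvature entropy starts at index", entropy.argmax())
```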
Procedia PDF Downloads 258
352 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of excitation pulses may lead to controller computation errors in the control loop, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solutions for such simulations, but they pose limitations such as the limited availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence have the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run with the highest priority in an uninterrupted manner, reducing network latency for a distributed architecture, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
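A hedged sketch of two of the strategies mentioned above (highest scheduling priority and memory locking to avoid page faults) using the standard Linux interfaces available from Python; this is not the paper's implementation, it requires root or the CAP_SYS_NICE capability, and the policy, priority, and core values are illustrative.

```python
import ctypes
import os

# 1) Run the time-critical process under the SCHED_FIFO real-time policy
#    at a high static priority (1..99 on Linux).
param = os.sched_param(80)                      # illustrative priority
os.sched_setscheduler(0, os.SCHED_FIFO, param)  # 0 = calling process

# 2) Lock all current and future pages in RAM so page faults cannot add
#    non-deterministic latency to the control loop.
MCL_CURRENT, MCL_FUTURE = 1, 2                  # values from <sys/mman.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    raise OSError(ctypes.get_errno(), "mlockall failed")

# 3) Pin the process to a single CPU core to avoid migration overhead.
os.sched_setaffinity(0, {2})                    # illustrative core number

print("running with FIFO priority", os.sched_getparam(0).sched_priority)
```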
Procedia PDF Downloads 145
351 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple's Index, Myers' Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are age-sex attributed. However, a variety of irregularities and misstatements are noted with respect to age-related data, and less so for sex data because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis are downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple's index, Myers' blended index, and the age-sex accuracy index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lower index of about 152.65, while females had the higher index of about 156.07. For Myers' blended index, the preferences were at digits '0' and '5', while the avoidances were at digits '1' and '3' for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
Keywords: age heaping, digit preference/avoidance, summary indices, Whipple's index, Myers' index, age-sex accuracy index
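A small sketch of the standard Whipple's index computation used above: the ratio of the population reporting ages ending in 0 or 5 within ages 23-62 to one-fifth of the total population in that range, multiplied by 100; single-year age counts are assumed to be available, and the toy counts below are invented for illustration only.

```python
def whipples_index(counts_by_age):
    """counts_by_age: dict mapping single-year age -> reported population count.
    Returns the classical Whipple's index over ages 23-62 (100 = no heaping,
    500 = all ages reported on digits 0 or 5)."""
    ages = range(23, 63)
    total = sum(counts_by_age.get(a, 0) for a in ages)
    heaped = sum(counts_by_age.get(a, 0) for a in ages if a % 5 == 0)
    return 100.0 * heaped / (total / 5.0)

# Toy example with mild preference for ages ending in 0 or 5
counts = {age: (1500 if age % 5 == 0 else 1000) for age in range(0, 100)}
print(round(whipples_index(counts), 2))   # > 100 indicates digit preference
```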
Procedia PDF Downloads 474
350 Implications of Climate Change and World Uncertainty for Gender Inequality: Global Evidence
Authors: Kashif Nesar Rather, Mantu Kumar Mahalik
Abstract:
The discourse surrounding climate change has gained considerable traction, with a discernible emphasis on its nuanced and consequential impact on gender inequality. Concurrently, escalating global tensions are contributing to heightened uncertainty, potentially exerting influence on gender disparities. Within this framework, this study attempts to empirically investigate the implications of climate change and world uncertainty for gender inequality for a balanced panel of 100 economies between 1995 and 2021. The estimated models also control for the effects of globalisation, economic growth, and education expenditure. The panel cointegration tests establish a significant long-run relationship between the variables of the study. Furthermore, the PMG-ARDL (panel mean group autoregressive distributed lag) estimation technique confirms that both climate change and world uncertainty perpetuate global gender inequalities. Additionally, the results establish that globalisation, economic growth, and education expenditure exert a mitigating influence on gender inequality, signifying their role in diminishing gender disparities. These findings are further confirmed by the FGLS (Feasible Generalized Least Squares) and DKSE (Driscoll-Kraay Standard Errors) regression methods. Potential policy implications for mitigating the detrimental gender ramifications stemming from climate change and rising world uncertainties are also discussed.
Keywords: gender inequality, world uncertainty, climate change, globalisation, ecological footprint
Procedia PDF Downloads 36
349 Analysis of Cascade Control Structure in Train Dynamic Braking System
Authors: B. Moaveni, S. Morovati
Abstract:
In recent years, the increasing usage of railway transportation, especially in developing countries, has drawn more attention to the control systems of railway vehicles. Consequently, designing and implementing modern control systems to improve the operating performance of trains and locomotives has become one of the main concerns of researchers. The dynamic braking system is an important safety system which controls the amount of braking torque generated by the traction motors, to keep the adhesion coefficient between the wheel-sets and the rail within the optimum bound. The adhesion force has an important role in controlling the braking distance and preventing the wheels from slipping during the braking process. The cascade control structure is one of the best control methods for a wide range of industrial plants in the presence of disturbances and errors. This paper presents a cascade control structure based on two simple forward controllers with two feedback loops to control the slip ratio and the braking torque. In this structure, the inner loop controls the angular velocity, and the outer loop controls the longitudinal velocity of the locomotive, whose dynamics are slower than those of the angular velocity. By controlling the torque of the DC traction motors, this control structure tries to track the desired velocity profile to achieve the predefined braking distance and to control the slip ratio. Simulation results are employed to show the effectiveness of the introduced methodology in the dynamic braking system.
Keywords: cascade control, dynamic braking system, DC traction motors, slip control
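A minimal sketch of the cascade structure described above: the outer loop compares the longitudinal (train) velocity with the desired braking profile and produces an angular-velocity reference for the wheel, and the inner, faster loop tracks that reference and outputs the traction-motor torque command. The gains, the first-order toy plant, and the numerical values are illustrative assumptions, not the paper's locomotive model.

```python
# Cascade braking-control sketch: outer longitudinal-velocity loop feeding an
# inner wheel angular-velocity loop that commands traction-motor torque.
dt, radius = 0.01, 0.5                  # time step (s), wheel radius (m), illustrative
kp_outer = 1.5                          # outer-loop proportional gain
kp_inner, kd_inner = 300.0, 10.0        # inner-loop PD gains
v, omega, prev_err = 20.0, 40.0, 0.0    # initial train and wheel speeds

for k in range(2000):
    v_ref = max(20.0 - 1.0 * k * dt, 0.0)            # desired braking velocity profile
    # Outer (slow) loop: produce an angular-velocity reference for the wheel
    omega_ref = (v_ref + kp_outer * (v_ref - v)) / radius
    # Inner (fast) loop: PD law on wheel speed -> motor torque command
    err = omega_ref - omega
    torque = kp_inner * err + kd_inner * (err - prev_err) / dt
    prev_err = err
    # Toy plant: wheel responds to torque, train follows the wheel through adhesion
    omega += torque / 600.0 * dt
    v += 0.8 * (omega * radius - v) * dt

print(f"v = {v:.2f} m/s after {2000 * dt:.0f} s (target {v_ref:.2f} m/s)")
```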
Procedia PDF Downloads 363
348 Use of a New Multiplex Quantitative Polymerase Chain Reaction Based Assay for Simultaneous Detection of Neisseria Meningitidis, Escherichia Coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae
Authors: Nastaran Hemmati, Farhad Nikkhahi, Amir Javadi, Sahar Eskandarion, Seyed Mahmuod Amin Marashi
Abstract:
Neisseria meningitidis, Escherichia coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae cause 90% of bacterial meningitis cases. Almost all infected people die or have irreversible neurological complications. Therefore, it is essential to have a diagnostic kit with the ability to quickly detect these fatal infections. The project involved 212 patients from whom cerebrospinal fluid samples were obtained. After total genome extraction and multiplex quantitative polymerase chain reaction (qPCR), the presence or absence of each infectious agent was determined by comparison with standard strains. The specificity, sensitivity, positive predictive value, and negative predictive value calculated were 100%, 92.9%, 50%, and 100%, respectively. Thus, due to the high specificity and sensitivity of the designed primers, they can be used instead of bacterial culture, which takes at least 24 to 48 hours. The remarkable benefit of this method is associated with the speed (up to 3 hours) at which the procedure can be completed. It is also worth noting that this method can reduce the unintentional personnel errors which may occur in the laboratory. On the other hand, as this method simultaneously identifies four common agents that cause bacterial meningitis, it could be used as an auxiliary diagnostic technique in laboratories, particularly in cases of emergency medicine.
Keywords: cerebrospinal fluid, meningitis, quantitative polymerase chain reaction, simultaneous detection, diagnosis testing
Procedia PDF Downloads 114
347 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linear relationships of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which includes the difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on numerically simulated data obtained using SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis
Procedia PDF Downloads 222
346 Apollo Clinical Excellence Scorecard (ACE@25): An Initiative to Drive Quality Improvement in Hospitals
Authors: Anupam Sibal
Abstract:
Whatever is measured tends to improve. With a view to objectively measuring and improving clinical quality across the Apollo Group Hospitals, the ACE @ 25 (Apollo Clinical Excellence @ 25) initiative was launched in Jan 09. ACE @ 25 is a clinically balanced scorecard incorporating 25 clinical quality parameters involving complication rates, mortality rates, one-year survival rates, and average length of stay after major procedures like liver and renal transplant, CABG, TKR, THR, TURP, PTCA, endoscopy, large bowel resection and MRM, covering all major specialties. Also included are hospital-acquired infection rates, pain satisfaction, and medication errors. Benchmarks have been chosen from the world's best hospitals. There are weighted scores for outcomes color coded green, orange and red. The cumulative score is 100. Data is reported monthly by 43 Group Hospitals online on the Lighthouse platform. Action-taken reports for parameters falling in red are submitted quarterly and reviewed by the board. An audit team audits the data at all locations every six months. Scores are linked to the appraisal of the medical head, and there is an "ACE @ 25" Champion Award for the highest scorer. Scores for different parameters varied from green to red at the start of the initiative. Most hospitals showed an improvement in scores over the last four years for parameters where they had shown scores in red or orange at the start of the initiative. The overall score for the group has shown an increase from 72 in 2010 to 81 in 2015.
Keywords: benchmarks, clinical quality, lighthouse, platform, scores
Procedia PDF Downloads 300
345 Development of Residual Power Series Methods for Efficient Solutions of Stiff Differential Equations
Authors: Gebreegziabher Hailu
Abstract:
This paper presents the development of residual power series methods (RPSM) aimed at efficiently solving stiff differential equations, which pose significant challenges in numerical analysis due to their rapid changes in solution behavior. The RPSM is a numerical approach that generates polynomial-based approximate solutions without the need for linearization, discretization, or perturbation techniques, making it straightforward to implement and less prone to computational errors. We introduce an approach that utilizes power series expansions combined with residual minimization techniques to enhance convergence and stability. By analyzing the theoretical foundations of stiffness, we delve into the formulation of the residual power series method, detailing how it effectively captures the dynamics of stiff systems while maintaining computational efficiency. Numerical experiments demonstrate the method's superiority in terms of accuracy and computational cost when compared to traditional methods like implicit Runge-Kutta or multistep techniques. We also explore adaptive strategies within our framework to automatically adjust parameters based on the stiffness characteristics of the problem at hand. Ultimately, our findings contribute to the broader toolkit for tackling stiff differential equations, offering a robust alternative that promises to streamline computational workflows in various applied mathematics and engineering contexts.
Keywords: residual power series methods, stiff differential equations, numerical approach, Runge-Kutta methods
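A small worked sketch of the residual power series idea for the scalar stiff test problem y' = -λy, y(0) = 1 (an assumed example, since the paper's test problems are not given): the solution is written as a truncated power series, and each coefficient is fixed by forcing the corresponding derivative of the residual to vanish at t = 0, which for this linear equation reduces to the recursion c_{k+1} = -λ c_k / (k+1).

```python
import math

def rpsm_coefficients(lam, order):
    """Residual power series coefficients for y' + lam*y = 0, y(0) = 1.
    Setting the k-th derivative of the residual R = y' + lam*y to zero at t = 0
    gives (k+1) c_{k+1} + lam c_k = 0, i.e. c_{k+1} = -lam * c_k / (k + 1)."""
    c = [1.0]
    for k in range(order):
        c.append(-lam * c[k] / (k + 1))
    return c

def evaluate(coeffs, t):
    return sum(ck * t ** k for k, ck in enumerate(coeffs))

lam, t = 50.0, 0.1                      # moderately stiff decay rate
coeffs = rpsm_coefficients(lam, order=30)
approx, exact = evaluate(coeffs, t), math.exp(-lam * t)
print(f"RPSM: {approx:.6e}   exact: {exact:.6e}   error: {abs(approx - exact):.2e}")
```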
Procedia PDF Downloads 21
344 Problems in English into Thai Translation Normally Found in Thai University Students
Authors: Anochao Phetcharat
Abstract:
This research aims to study problems of basic translation knowledge, particularly from English into Thai. The researcher used 38 second-year non-English speaking students of Suratthani Rajabhat University as samples. The samples were required to translate an A4-sized article from English into Thai, assigned as a part of BEN0202 Translation for Business, a required subject for the Business English Department, which was also taught by the researcher. After completion of the translation, numerous problems were found, and the researcher grouped them into 4 major types. The most commonly occurring problems in English-Thai translation work are the lack of knowledge of parts of speech, the use of word-by-word translation, misspellings, as well as poor knowledge of English language structure. However, this research is currently under the process of data analysis and shall be completed by the beginning of August. The researcher, nevertheless, predicts that all the above-mentioned problems will support the researcher's hypotheses, which are: 1) the lack of knowledge of parts of speech causes mistranslation problems; 2) employing the word-by-word translation technique hugely results in mistranslation problems; 3) misspellings yield mistranslation problems; and 4) poor knowledge of English language structure also brings about translation errors. The research also predicts that, of all the aforementioned problems, the following ones are found the most, respectively: poor knowledge of English language structure, word-by-word translation employment, the lack of knowledge of parts of speech, and misspellings.
Keywords: problem, student, Thai, translation
Procedia PDF Downloads 435
343 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE
Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao
Abstract:
For the impact monitoring of distributed structures, the traditional positioning methods are based on the time difference, and include the four-point arc positioning method and the triangulation positioning method. However, in actual operation, these two methods have errors. In this paper, the Multi-Agent Blackboard Coordination Principle is used to combine the two methods. The fusion steps are: (1) The four-point arc locating agent calculates the initial point and records it to the blackboard module. (2) The triangulation agent gets its initial parameters by accessing the initial point. (3) The triangulation agent constantly accesses the blackboard module to update its initial parameters, and it also logs its calculated point into the blackboard. (4) When the subsequent calculated point and the initial calculated point are within the allowable error, the whole coordination fusion process is finished. This paper presents a Multi-Agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with agents running in each container. Because of the excellent management and debugging tools of JADE, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on Multi-Agent coordination fusion can reduce the errors of the two methods.
Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE
Procedia PDF Downloads 176
342 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) process are derived iteratively by using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed, where BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data on the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood, CML
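A brief simulation sketch of the ingredients named above, binomial thinning and cross-correlated Poisson innovations, for a bivariate INARMA(1,1)-type recursion; the exact specification of the authors' BINARMA(1,1) model and its non-stationary extension may differ, so the form below (including the common-shock construction of the correlation) is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(x, alpha):
    """Binomial thinning operator: alpha ∘ x = Binomial(x, alpha)."""
    return rng.binomial(x, alpha)

def simulate_binarma(n, alpha=(0.4, 0.3), beta=(0.2, 0.25),
                     lam_common=1.0, lam_own=(2.0, 1.5)):
    """Two count series with INARMA(1,1)-type dynamics; the innovations are
    cross-correlated through a shared Poisson component (common shock)."""
    X = np.zeros((n, 2), dtype=int)
    R_prev = np.zeros(2, dtype=int)
    for t in range(1, n):
        common = rng.poisson(lam_common)                 # shared shock -> correlation
        R = np.array([common + rng.poisson(lam_own[0]),
                      common + rng.poisson(lam_own[1])])
        for j in range(2):
            X[t, j] = thin(X[t - 1, j], alpha[j]) + R[j] + thin(R_prev[j], beta[j])
        R_prev = R
    return X

counts = simulate_binarma(5000)
print("sample cross-correlation of the two series:",
      round(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1], 3))
```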
Procedia PDF Downloads 128
341 An Efficient Traceability Mechanism in the Audited Cloud Data Storage
Authors: Ramya P, Lino Abraham Varghese, S. Bose
Abstract:
With cloud storage services, data can be stored in the cloud and shared across multiple users. Due to unexpected hardware/software failures and human errors, data stored in the cloud can easily be lost or corrupted, which affects the integrity of data in the cloud. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. However, public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the person, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can efficiently verify shared data integrity without retrieving the entire file. On demand, however, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in the static group, whereas in the dynamic group, the group private key changes when users are revoked from the group. When users leave the group, the already signed blocks are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.
Keywords: data integrity, dynamic group, group signature, public auditing
Procedia PDF Downloads 391
340 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery
Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori
Abstract:
Unmanned Aircraft Systems (UAS) usually navigate using the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS accuracy can be degraded at any time, or the GNSS signal can even be switched off. In addition, there is the possibility of malicious interference, known as jamming. Therefore, an image navigation system can solve the autonomy problem, because if the GNSS is disabled or degraded, the image navigation system would continue to provide coordinate information to the INS, allowing the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). Thus, if it is possible to automatically identify the homologous points in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, it is possible to verify maximum errors in the order of 0.5 m in the positioning and 0.6° in the attitude of the camera, so navigation through images can reach accuracies equal to or better than those of GNSS receivers without differential correction. Therefore, navigating through images is a good alternative to enable autonomous navigation.
Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS
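A hedged sketch of the space resection step described above using OpenCV: given homologous points whose object-space coordinates would come from the orthophoto/DSM and whose image-space coordinates come from the in-flight photograph, `cv2.solvePnP` recovers the camera attitude and the perspective centre. The intrinsics, the synthetic "true" pose, and the point values are made-up placeholders used only to generate a self-consistent example.

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics (focal length and principal point, in pixels).
K = np.array([[3500.0, 0.0, 2000.0],
              [0.0, 3500.0, 1500.0],
              [0.0, 0.0, 1.0]])

# Object-space points as they would come from the orthophoto + DSM (local frame, metres).
object_pts = np.array([[10.0, 20.0, 2.0], [80.0, 25.0, 5.5], [20.0, 90.0, -0.2],
                       [110.0, 110.0, 8.1], [-40.0, 60.0, 1.3], [50.0, -50.0, 7.0]])

# Synthetic "true" camera pose used only to generate consistent image observations.
rvec_true = np.array([[0.05], [-0.03], [0.10]])
tvec_true = np.array([[-30.0], [-40.0], [400.0]])          # camera ~400 m above the scene
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Space resection: recover attitude and perspective centre from the homologous points.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix (camera attitude)
camera_position = (-R.T @ tvec).ravel()    # perspective centre in object space
print("recovered camera position (m):", np.round(camera_position, 2))
```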
Procedia PDF Downloads 187
339 Blockchain Technology for Secure and Transparent Oil and Gas Supply Chain Management
Authors: Gaurav Kumar Sinha
Abstract:
The oil and gas industry, characterized by its complex and global supply chains, faces significant challenges in ensuring security, transparency, and efficiency. Blockchain technology, with its decentralized and immutable ledger, offers a transformative solution to these issues. This paper explores the application of blockchain technology in the oil and gas supply chain, highlighting its potential to enhance data security, improve transparency, and streamline operations. By leveraging smart contracts, blockchain can automate and secure transactions, reducing the risk of fraud and errors. Additionally, the integration of blockchain with IoT devices enables real-time tracking and monitoring of assets, ensuring data accuracy and integrity throughout the supply chain. Case studies and pilot projects within the industry demonstrate the practical benefits and challenges of implementing blockchain solutions. The findings suggest that blockchain technology can significantly improve trust and collaboration among supply chain participants, ultimately leading to more efficient and resilient operations. This study provides valuable insights for industry stakeholders considering the adoption of blockchain technology to address their supply chain management challenges.
Keywords: blockchain technology, oil and gas supply chain, data security, transparency, smart contracts, IoT integration, real-time tracking, asset monitoring, fraud reduction, supply chain efficiency, data integrity, case studies, industry implementation, trust, collaboration
Procedia PDF Downloads 34
338 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculations.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
Procedia PDF Downloads 232
337 Design and Implementation of PD-NN Controller Optimized Neural Networks for a Quad-Rotor
Authors: Chiraz Ben Jabeur, Hassene Seddik
Abstract:
In this paper, a full approach to the modeling and control of a four-rotor unmanned air vehicle (UAV), known as a quad-rotor aircraft, is presented. In fact, a PD controller and a PD controller optimized by Neural Networks (PD-NN) are developed and applied to control a quad-rotor. The goal of this work is to design a smart self-tuning PD controller based on neural networks, able to supervise the quad-rotor for optimized behavior while tracking the desired trajectory. Many challenges could arise if the quad-rotor is navigating in hostile environments presenting irregular disturbances in the form of wind added to the model on each axis. Thus, the quad-rotor is subject to three-dimensional unknown static/varying wind disturbances. The quad-rotor has to quickly perform tasks while ensuring stability and accuracy and must behave rapidly with regard to decision-making facing disturbances. This technique offers some advantages over conventional control methods such as the PD controller. Simulation results are obtained with the use of the Matlab/Simulink environment and are founded on a comparative study between the PD and PD-NN controllers based on wind disturbances. The latter are applied with several degrees of strength to test the quad-rotor behavior. These simulation results are satisfactory and have demonstrated the effectiveness of the proposed PD-NN approach. In fact, this controller has relatively smaller errors than the PD controller and has a better capability to reject disturbances. In addition, it has proven to be highly robust and efficient, facing turbulences in the form of wind disturbances.
Keywords: hostile environment, PD and PD-NN controllers, quad-rotor control, robustness against disturbance
Procedia PDF Downloads 136
336 A Spatial Approach to Model Mortality Rates
Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang
Abstract:
Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore often the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually higher or lower mortality rates than their neighbors, where the “location” of mortality rates is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method, spatial scan statistics, a local statistical test based on the likelihood ratio test, to evaluate where there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of the age parameters not being constant. Next, we show that adding the cluster effect can solve this non-constancy problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection
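A compact numpy sketch of the baseline Lee-Carter fit that the paper modifies: log m(x,t) = a_x + b_x k_t, with a_x the age-specific mean of log rates and (b_x, k_t) taken from the first singular vectors of the centred log-rate matrix under the usual normalisation sum(b_x) = 1. The toy data below are synthetic, not the Japan/France/USA/Taiwan data used in the study.

```python
import numpy as np

def fit_lee_carter(m):
    """m: matrix of central death rates, shape (ages, years).
    Returns a_x, b_x, k_t such that log m(x,t) ≈ a_x + b_x * k_t,
    with the usual normalisation sum(b_x) = 1."""
    log_m = np.log(m)
    a = log_m.mean(axis=1)                        # a_x: mean log rate at each age
    centred = log_m - a[:, None]
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    scale = U[:, 0].sum()
    b = U[:, 0] / scale                           # age response b_x
    k = s[0] * Vt[0, :] * scale                   # period index k_t
    return a, b, k

# Toy data: synthetic rates following an exact Lee-Carter surface plus noise
rng = np.random.default_rng(7)
ages, years = 20, 40
a_true = np.linspace(-7.0, -1.5, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(10.0, -10.0, years)          # steady mortality improvement
m = np.exp(a_true[:, None] + b_true[:, None] * k_true[None, :]
           + rng.normal(0, 0.02, (ages, years)))

a, b, k = fit_lee_carter(m)
fitted = np.exp(a[:, None] + b[:, None] * k[None, :])
mape = np.mean(np.abs(fitted - m) / m) * 100
print(f"mean absolute percentage error of the fit: {mape:.2f}%")
```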
Procedia PDF Downloads 170
335 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference rangefinder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of light passage, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved Type A uncertainty of the measurement of the distance to reflectors 64 m away (N·D/2, where N is an integer) and spaced 1 m apart from each other does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
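A short numeric check of the pulse-coincidence distance quoted above, D/2 = c/(2nF) ≈ 2.5 m; the pulse repetition frequency used below (about 60 MHz) is inferred from that value and the refractive index of air is an assumed figure, neither is stated in the abstract.

```python
c = 299_792_458.0      # speed of light in vacuum, m/s
n = 1.00027            # approximate refractive index of air (assumed)
F = 60e6               # femtosecond pulse repetition frequency, Hz (inferred, ~60 MHz)
half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.3f} m")   # ≈ 2.497 m, matching the ~2.5 m stated above
```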
Procedia PDF Downloads 57
334 Improving Second Language Speaking Skills via Video Exchange
Authors: Nami Takase
Abstract:
Computer-mediated communication allows people to connect and interact with each other as if they were sharing the same space. The current study examined the effects of using video letters (VLs) on the development of the second language speaking skills of Common European Framework of Reference for Languages (CEFR) A1 and CEFR B2 level learners of English as a foreign language. Two groups were formed to measure the impact of VLs. The experimental and control groups were given the same topic, and both groups worked with a native English-speaking university student from the United States of America. Students in the experimental group exchanged VLs, and students in the control group used video conferencing. Pre- and post-tests were conducted to examine the effects of each practice mode. The transcribed speech-text data showed that the VL group had improved speech accuracy scores, while the video conferencing group had increased sentence complexity scores. The use of VLs may be more effective for beginner-level learners because they are able to notice their own errors and replay videos to better understand the native speaker's speech at their own pace. Both the VL and video conferencing groups provided positive feedback regarding their interactions with native speakers. The results showed how different types of computer-mediated communication impact different areas of language learning and speaking practice, and how each of these types of online communication tools is suited to different teaching objectives.
Keywords: computer-assisted language learning, computer-mediated communication, English as a foreign language, speaking
Procedia PDF Downloads 98
333 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis
Authors: Indira Kothalanka
Abstract:
For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, due to which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft-type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper, hypo and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems do persist even after the palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu-speaking individuals aged between 7 and 17 years with a history of cleft palate. Telugu is a Dravidian language spoken in the Andhra Pradesh and Telangana states of India. It has the third largest number of native speakers in India and is the most spoken Dravidian language. The speech of the informants is analysed using a single-word list, sentences, a passage, and conversation. Spectrographic analysis is carried out using PRAAT, a speech analysis software. The place and manner of articulation of the consonant sounds are studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular and velar stops, and nasal fricatives, which are non-native in Telugu.
Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT
Procedia PDF Downloads 440
332 Efficient Estimation for the Cox Proportional Hazards Cure Model
Authors: Khandoker Akib Mohammad
Abstract:
While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest, and they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood
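A short sketch of the mixture cure model structure discussed above: the population survival combines a logistic incidence part π(z) with a Cox PH latency part for the uncured subjects, S_pop(t | x, z) = 1 − π(z) + π(z) S_u(t | x). The baseline survival and coefficient values below are placeholders, and the estimation machinery (EM algorithm, profile likelihood) is not reproduced.

```python
import numpy as np

def incidence(z, gamma):
    """Logistic model for the probability of being uncured, pi(z)."""
    return 1.0 / (1.0 + np.exp(-(gamma[0] + z @ gamma[1:])))

def latency_survival(t, x, beta, baseline_surv):
    """Cox PH survival for the uncured: S_u(t | x) = S0(t) ** exp(x'beta)."""
    return baseline_surv(t) ** np.exp(x @ beta)

def population_survival(t, x, z, beta, gamma, baseline_surv):
    """Mixture cure model: S_pop = (1 - pi) + pi * S_u."""
    pi = incidence(z, gamma)
    return (1.0 - pi) + pi * latency_survival(t, x, beta, baseline_surv)

# Placeholder baseline and coefficients (illustrative only)
baseline = lambda t: np.exp(-0.1 * t)          # exponential baseline survival
beta = np.array([0.5])                         # latency (Cox) coefficient
gamma = np.array([0.2, 1.0])                   # incidence (logistic) intercept + slope

t = np.linspace(0, 30, 4)
x = z = np.array([1.0])                        # same single covariate for both parts
print(population_survival(t, x, z, beta, gamma, baseline))
```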
Procedia PDF Downloads 142