Search results for: Errors and Mistakes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1071

231 Derivatives Balance Method for Linear and Nonlinear Control Systems

Authors: Musaab Mohammed Ahmed Ali, Vladimir Vodichev

Abstract:

This work deals with a universal control technique, a single controller, for linear and nonlinear stabilization and tracking control systems. These systems may be structured as SISO or MIMO, and the parameters of the controlled plants can vary over a wide range. A novel control system design method is introduced: the construction of stable platform orbits using derivative balance, which solves the problem of preserving the transfer-function stability of a linear system under partial substitution of a rational function. The universal controller is proposed as a polar system with multiple orbits to simplify the design procedure, where each orbit represents a single order of the controller transfer function. The designed controller consists of proportional, integral, and derivative terms and multiple feedback and feedforward loops. A synthesis method for the controller parameters is presented. In general, the controller parameters depend on a new polynomial equation in which all parameters are related to one another and take fixed values, with no retuning required. The simulation results show that the proposed universal controller can stabilize an unlimited number of linear and nonlinear plants and shape the desired, previously specified performance. It is shown that sensor errors and poor sensor performance are completely compensated and cannot affect system performance, and that the effects of disturbances and noise on the control loop are fully rejected. The technical and economic effects of using the proposed controller were investigated and compared with those of adaptive, predictive, and robust controllers. The economic analysis shows the advantage of a single controller with fixed parameters driving an unlimited number of plants over the above-mentioned control techniques.
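
The derivative-balance construction itself is not detailed in the abstract, but the proportional, integral, and derivative terms it builds on are standard. The following is a minimal, hypothetical sketch of a textbook discrete PID loop driving a first-order plant, for readers who want a concrete reference point; the gains and plant are illustrative and are not the proposed universal controller.

```python
class PID:
    """Textbook discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a hypothetical first-order plant x' = -x + u toward a setpoint of 1.0
pid, x, dt = PID(2.0, 1.0, 0.1, 0.01), 0.0, 0.01
for _ in range(1000):
    u = pid.update(1.0, x)
    x += (-x + u) * dt
print(round(x, 3))  # settles near 1.0
```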

Keywords: derivative balance, fixed parameters, stable platform, universal control

Procedia PDF Downloads 136
230 Improved Regression Relations Between Different Magnitude Types and the Moment Magnitude in the Western Balkan Earthquake Catalogue

Authors: Anila Xhahysa, Migena Ceyhan, Neki Kuka, Klajdi Qoshi, Damiano Koxhaj

Abstract:

The seismic event catalogue has been updated in the framework of a bilateral project supported by the Central European Investment Fund and with the extensive support of the Global Earthquake Model Foundation, with the aim of updating Albania's national seismic hazard model. The earthquake catalogue prepared within this project covers the Western Balkan area bounded by 38.0° - 48°N, 12.5° - 24.5°E and includes 41,806 earthquakes that occurred in the region between 510 BC and 2022. Since the moment magnitude characterizes earthquake size accurately and the ground motion prediction equations selected for the seismic hazard assessment employ this scale, it was chosen as the uniform magnitude scale for the catalogue. Proxy values of moment magnitude therefore had to be obtained through new conversion equations from the local and other magnitude types to this unified scale. The Global Centroid Moment Tensor Catalogue was considered the most authoritative source of moment magnitude reports for moderate to large earthquakes; hence it was used as a reference for calibrating other sources. The best fit was observed in comparisons with some regional agencies, whereas differences were observed in all magnitude ranges with the moment magnitudes reported from Italy, Greece, and Turkey. For teleseismic magnitudes, we used the exponential model to derive the regression equations, in order to account for the non-linearity of the relationships. The regressions obtained for the surface-wave magnitude and the short-period body-wave magnitude show considerable differences from the Global Earthquake Model regression curves, especially at low magnitude ranges. Moreover, a conversion relation was obtained between the local magnitude of Albania and the corresponding moment magnitude as reported by the global and regional agencies. As errors were present in both variables, Deming regression was used.
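
For readers unfamiliar with Deming regression, the errors-in-both-variables fit mentioned above, the following minimal sketch shows a common closed-form solution; the magnitude pairs are made up for illustration and are not data from the catalogue.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression (errors in both variables).
    delta is the ratio of the y-error variance to the x-error variance;
    delta = 1 reduces to orthogonal regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm   # slope, intercept

# Hypothetical local-magnitude (ML) / moment-magnitude (Mw) pairs
ml = np.array([3.2, 3.8, 4.1, 4.6, 5.0, 5.5])
mw = np.array([3.4, 3.9, 4.3, 4.7, 5.2, 5.6])
b1, b0 = deming_fit(ml, mw)
print(f"Mw = {b0:.3f} + {b1:.3f} * ML")
```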

Keywords: regression, seismic catalogue, local magnitude, tele-seismic magnitude, moment magnitude

Procedia PDF Downloads 70
229 Development of Electric Generator and Water Purifier Cart

Authors: Luisito L. Lacatan, Gian Carlo J. Bergonia, Felipe C. Buado III, Gerald L. Gono, Ron Mark V. Ortil, Calvin A. Yap

Abstract:

This paper features the development of a mobile self-sustaining electricity generator for a water distillation process with an MCU-based wireless controller and indicator, designed to address the scarcity of clean water. Pure water is precious nowadays, and its value is greatest for those who do not have it. There are many water filtration products in existence today; however, none of them fully satisfies the needs of families needing clean drinking water. Such products require either large sums of money or extensive maintenance, and some do not even come with a guarantee of potable water. The proposed project was designed to alleviate the scarcity of potable water in the country, and part of the purpose was also to identify the problems or loopholes of the project, such as the distance and speed required to produce electricity using a wheel and alternator, the time required for the heating element to heat up, the capacity of the battery to maintain the heat of the heating element, and the time required for the boiler to produce clean and potable water. The project has three parts. The first part comprised the researchers' effort to plan every part of the project, from the conversion of mechanical energy to electrical energy, through the purification of water into potable drinking water, to the controller and indicator of the project using a microcontroller unit (MCU). This included identifying the problems encountered and any possible solutions to prevent and avoid errors. Gathering and reviewing related studies helped the researchers reduce and prevent problems before they could be encountered. It also included the price and quantity of materials used, to control the budget.

Keywords: mobile, self – sustaining, electricity generator, water distillation, wireless battery indicator, wireless water level indicator

Procedia PDF Downloads 310
228 Detection of Resistive Faults in Medium Voltage Overhead Feeders

Authors: Mubarak Suliman, Mohamed Hassan

Abstract:

Detection of downed conductors occurring with high fault resistance (reaching kilo-ohms) has always been a challenge, especially in countries like Saudi Arabia, where earth resistivity is generally very high (exceeding 1000 Ω·m). Newer approaches for the detection of resistive and high-impedance faults are based on the analysis of the fault current waveform. These methods are still under research and development, and they currently lack security and dependability. Another approach is communication-based solutions, which depend on voltage measurements at the ends of overhead line branches and communicate the measured signals to the substation feeder relay or a central control center. However, such a detection method is costly and depends on the availability of a communication medium and infrastructure. The main objective of this research is to utilize the available standard protection schemes to increase the probability of detecting downed conductors occurring with low-magnitude fault currents while avoiding unwanted tripping of healthy feeders. By specifying the operating region of the faulty feeder, using a tripping curve to discriminate between faulty and healthy feeders, and properly selecting a core balance current transformer (CBCT) and voltage transformers with low measurement errors, it is possible to set the pick-up of the sensitive earth fault current to a minimum value of a few amperes (e.g., 3 A or 4 A) for the detection of earth faults with fault resistance above 1 - 2 kΩ in a 13.8 kV overhead network and above 3 - 4 kΩ in a 33 kV overhead network. By implementing the outcomes of this study, the probability of detecting downed conductors is increased through the utilization of existing schemes (i.e., directional sensitive earth fault protection).
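
As a rough sanity check on the quoted pick-up values (a back-of-the-envelope sketch, not part of the study's method), the earth fault current is bounded approximately by the phase-to-ground voltage divided by the fault resistance when source and grounding impedances are neglected:

```python
import math

def earth_fault_current(v_ll_kv, r_fault_ohm):
    """Approximate earth fault current (A) for a fault resistance r_fault,
    ignoring source and grounding impedances."""
    v_phase = v_ll_kv * 1e3 / math.sqrt(3)   # phase-to-ground voltage (V)
    return v_phase / r_fault_ohm

# 13.8 kV network, 2 kOhm fault resistance -> roughly 4 A
print(round(earth_fault_current(13.8, 2000), 1))
# 33 kV network, 4 kOhm fault resistance -> roughly 4.8 A
print(round(earth_fault_current(33.0, 4000), 1))
```

These magnitudes are consistent with the 3 - 4 A sensitive earth fault pick-up settings discussed above.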

Keywords: sensitive earth fault, zero sequence current, grounded system, resistive fault detection, healthy feeder

Procedia PDF Downloads 115
227 Dwindling the Stability of DNA Sequence by Base Substitution at Intersection of COMT and MIR4761 Gene

Authors: Srishty Gulati, Anju Singh, Shrikant Kukreti

Abstract:

The manifestation of structural polymorphism in DNA depends on the sequence and the surrounding environment. Many folded DNA structures have been found in the cellular system, among which DNA hairpins are very common and indispensable, owing to their role in replication initiation sites, recombination, transcription regulation, and protein recognition. We illustrate this in our study, where two base substitutions and a change in temperature destabilize the DNA structure and shift the equilibrium between two structures of a sequence present at the overlapping region of the human COMT and MIR4761 genes. The COMT and MIR4761 genes encode the catechol-O-methyltransferase (COMT) enzyme and microRNAs (miRNAs), respectively. Environmental changes and errors during cell division lead to genetic abnormalities. The COMT gene, involved in dopamine regulation, is implicated in neurological diseases such as Parkinson's disease, schizophrenia, and velocardiofacial syndrome. A 19-mer deoxyoligonucleotide sequence 5'-AGGACAAGGTGTGCATGCC-3' (COMT19) is located at exon 4 on chromosome 22, band q11.2, at the intersection of the COMT and MIR4761 genes. Bioinformatics studies suggest that this sequence is conserved in humans and a few other organisms and is involved in the recognition of transcription factors in the vicinity of the 3'-end. Non-denaturing gel electrophoresis and CD spectroscopy of the COMT sequences indicate the formation of hairpin-type DNA structures. Temperature-dependent CD studies revealed an unusual shift in the slipped DNA-hairpin DNA equilibrium with the change in temperature. UV thermal melting further suggests that the two base substitutions on the complementary strand of COMT19 do not affect the structure but reduce the stability of the duplex. This study gives insight into the possible existence of structurally polymorphic transient states within DNA segments present at the intersection of the COMT and MIR4761 genes.

Keywords: base-substitution, catechol-o-methyltransferase (COMT), hairpin-DNA, structural polymorphism

Procedia PDF Downloads 122
226 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. The numerical results also show that SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
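
The GAN-based radio map construction is beyond a short example, but the classical fingerprinting baseline it improves on is easy to sketch. Below is a minimal, hypothetical weighted k-nearest-neighbour matcher over an RSSI radio map; all values are illustrative.

```python
import numpy as np

def wknn_localize(fingerprint, radio_map, positions, k=3):
    """Weighted k-NN fingerprinting: match a measured RSSI vector against
    reference fingerprints and average their known positions."""
    d = np.linalg.norm(radio_map - fingerprint, axis=1)   # signal-space distance
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                             # closer points weigh more
    return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()

# Hypothetical radio map: 4 reference points, 3 access points (RSSI in dBm)
radio_map = np.array([[-40, -70, -80],
                      [-55, -60, -75],
                      [-70, -50, -65],
                      [-80, -45, -55]], float)
positions = np.array([[0, 0], [5, 0], [5, 5], [10, 5]], float)  # metres
print(wknn_localize(np.array([-52, -63, -74], float), radio_map, positions))
```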

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 60
225 Hypotonia - A Concerning Issue in Neonatal Care

Authors: Eda Jazexhiu-Postoli, Gladiola Hoxha, Ada Simeoni, Sonila Biba

Abstract:

Background: Neonatal hypotonia is a commonly encountered issue in the Neonatal Intensive Care Unit and the newborn nursery. The differential diagnosis is broad, encompassing chromosome abnormalities, primary muscular dystrophies, neuropathies, and inborn errors of metabolism. Aim of study: Our study describes some of the main clinical features of hypotonia in newborns and presents clinical cases of neonatal hypotonia treated in our neonatal unit in the last 3 years. Case reports: Four neonates born in our hospital presented with hypotonia after birth: one preterm newborn of 35-36 weeks' gestational age and three term newborns (38-39 weeks of gestational age). Prenatal data revealed a decrease in fetal movements in both cases. Intrapartum meconium-stained amniotic fluid was found in 75% of our hypotonic newborns. Clinical features included an inability to establish effective respiratory movements and the need for resuscitation in the delivery room, respiratory distress syndrome, feeding difficulties and the need for orogastric tube feeding, dysmorphic features, a hoarse voice, and moderate to severe muscular hypotonia. The genetic workup revealed diagnoses of Autosomal Recessive Congenital Myasthenic Syndrome 1-B, Sotos Syndrome, Spinal Muscular Atrophy Type 1, and Transient Hypotonia of the Newborn. Two of the four hypotonic neonates were transferred to the Pediatric Intensive Care Unit and died at the age of three to five months. Conclusion: Hypotonia is a concerning finding in neonatal care, and it is suggested by decreased intrauterine fetal movements, failure to establish the first breaths, respiratory distress, and feeding difficulties in the neonate. The prognosis is determined by its etiology and the time of diagnosis and intervention.

Keywords: hypotonic neonate, respiratory distress, feeding difficulties, fetal movements

Procedia PDF Downloads 115
224 Effects of Neem (Azadirachta indica A. Juss) Kernel Inclusion in Broiler Diet on Growth Performance, Organ Weight and Gut Morphometry

Authors: Olatundun Bukola Ezekiel, Adejumo Olusoji

Abstract:

A feeding trial was conducted with 100 two-week-old broiler chickens to evaluate the influence of dietary neem kernel inclusion at 0, 2.5, 5, 7.5, and 10% (used to replace an equal quantity of maize) on their performance, organ weight, and gut morphometry. The birds were randomly allotted to five dietary treatments, each with four replicates of five broilers, in a completely randomized design. The diets were formulated to be iso-nitrogenous (23% CP). Weekly feed intake and changes in body weight were recorded, and feed efficiency was determined. At the end of the 28-day feeding trial, four broilers per treatment were selected and sacrificed for carcass evaluation. Results were subjected to analysis of variance using the procedures of the Statistical Analysis Software. Treatment means were presented with group standard errors of means and, where significant, were compared using the Duncan multiple range test of the same software. The results showed that broilers fed the 2.5% neem kernel diet had growth performance statistically comparable to those fed the control diet. Birds on the 5, 7.5, and 10% neem kernel diets showed a significant (P<0.05) increase in the relative weight of the liver. The absolute weight of the spleen also increased significantly (P<0.05) in birds on the 10% neem kernel diet. Diets containing more than 5% neem kernel gave a significant (P<0.05) increase in the relative weight of the kidney. The length of the small intestine increased significantly in birds fed the 7.5 and 10% neem kernel diets. No significant differences (P<0.05) occurred in the lengths of the large intestine and the right and left caeca. It is recommended that neem kernel can be included at up to 2.5% in broiler chicken diets without any deleterious effects on the performance and physiological status of the birds.
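
For illustration of the analysis pipeline described above, the sketch below runs a one-way ANOVA on hypothetical weight gains. Duncan's multiple range test has no standard SciPy implementation, so Tukey's HSD stands in for the post-hoc step; all numbers are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical 28-day weight gains (g) for three of the dietary groups
control = np.array([980, 1010, 995, 1002])
nk_2_5  = np.array([975, 990, 1008, 985])
nk_10   = np.array([890, 905, 880, 915])

# One-way ANOVA across dietary treatments
f, p = stats.f_oneway(control, nk_2_5, nk_10)
print(f"F = {f:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparison (Tukey HSD; the paper used Duncan's test)
res = stats.tukey_hsd(control, nk_2_5, nk_10)
print(res.pvalue)
```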

Keywords: broiler chicken, growth performance, gut morphometry, neem kernel, organ weight

Procedia PDF Downloads 763
223 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (through quantitative and qualitative analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
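
As one concrete illustration of the kind of automation discussed above (a hypothetical sketch, not the system evaluated in the paper), fuzzy string matching can flag probable duplicate master records:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_duplicates(records, threshold=0.85):
    """Pairwise scan for probable duplicate names. A production MDM system
    would block/index records first to avoid the O(n^2) comparison."""
    dupes = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = similarity(records[i], records[j])
            if score >= threshold:
                dupes.append((records[i], records[j], round(score, 2)))
    return dupes

names = ["Acme Corporation", "ACME Corp.", "Globex Inc", "Acme Corporation Ltd"]
print(find_duplicates(names))
```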

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 18
222 Evaluation of the Improve Vacuum Blood Collection Tube for Laboratory Tests

Authors: Yoon Kyung Song, Seung Won Han, Sang Hyun Hwang, Do Hoon Lee

Abstract:

Laboratory tests are a significant part of the diagnosis, prognosis, and treatment of diseases. Blood collection is a simple process but can be a potential source of pre-analytical errors, so the quality of the vacuum blood collection tubes used to collect and store blood specimens is essential for accurate test results. The purpose of this study was to validate the Improve serum separator tube (SST) (Guanzhou Improve Medical Instruments Co., Ltd, China) for routine clinical chemistry laboratory testing. Blood specimens were collected from 100 volunteers in three different serum vacuum tubes (Greiner SST, Becton Dickinson SST, Improve SST). The specimens were evaluated for 16 routine chemistry tests using the TBA-200FR NEO (Toshiba Medical Co., Japan). The results were statistically analyzed by paired t-test and Bland-Altman plot. For the stability test, the initial results for each tube were compared with the results from specimens preserved for 72 hours. Clinical acceptability was evaluated against the biological variation data bank of Ricos. Paired t-test analysis revealed that AST, ALT, K, and Cl gave statistically equivalent results, whereas calcium (CA), phosphorus (PHOS), glucose (GLU), BUN, uric acid (UA), cholesterol (CHOL), total protein (TP), albumin (ALB), total bilirubin (TB), ALP, creatinine (CRE), and sodium (NA) differed (P < 0.05) between the Improve SST and the Greiner SST. Likewise, CA, PHOS, TP, TB, AST, ALT, NA, K, and Cl gave statistically equivalent results, while GLU, BUN, UA, CHOL, ALB, ALP, and CRE differed between the Improve SST and the Becton Dickinson SST. All statistically different results were clinically acceptable according to the Ricos biological variation data bank. The Improve SST tubes showed satisfactory results compared with the Greiner SST and Becton Dickinson SST. We conclude that the tubes are acceptable for routine clinical chemistry laboratory testing.
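
For readers who want to reproduce the comparison logic, the sketch below runs a paired t-test and computes Bland-Altman statistics on hypothetical paired results from two tube types; the values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical paired glucose results (mmol/L) from two tube types
tube_a = np.array([5.1, 5.6, 4.9, 6.2, 5.8, 5.0, 5.4, 6.0])
tube_b = np.array([5.0, 5.7, 4.8, 6.3, 5.9, 5.1, 5.3, 6.1])

# Paired t-test: is the mean difference between tubes zero?
t, p = stats.ttest_rel(tube_a, tube_b)
print(f"t = {t:.2f}, p = {p:.3f}")

# Bland-Altman statistics: bias and 95% limits of agreement
diff = tube_a - tube_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.3f}, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")
```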

Keywords: blood collection, Guanzhou Improve, SST, vacuum tube

Procedia PDF Downloads 244
221 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors, such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical model selection (e.g., leave-one-out cross-validation).
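
The core of the empirical correction is a single factor fitted between apparent and true depths. A minimal sketch, assuming a zero-intercept least-squares fit and made-up calibration points:

```python
import numpy as np

def fit_correction_factor(apparent_depth, true_depth):
    """Empirical refraction correction: fit one multiplicative factor k
    minimizing least squares of true = k * apparent (zero intercept)."""
    a = np.asarray(apparent_depth, float)
    t = np.asarray(true_depth, float)
    return np.sum(a * t) / np.sum(a * a)

# Hypothetical calibration points (m): SfM-MVS underestimates the depth
apparent = np.array([0.30, 0.55, 0.82, 1.10, 1.35])
true     = np.array([0.41, 0.74, 1.09, 1.47, 1.82])
k = fit_correction_factor(apparent, true)
corrected = k * apparent
print(f"correction factor k = {k:.3f}")   # ~1.34 for pure Snell refraction
```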

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 299
220 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models

Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski

Abstract:

The methodology for measuring the sound absorption coefficient in scale models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to keep the atmospheric conditions stable during both measurements; possible differences may be corrected based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require particular factors to be scaled in compliance with specified characteristic numbers. For absorption coefficient measurements these are, for example, the frequency range and the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient, which is achieved in practice by drying the air down to a defined relative humidity. Despite the change of frequency range and relative air humidity, the ISO 9613-1 standard still allows an amendment to be calculated for small differences in the atmospheric conditions in the chamber during measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to check the consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors measured the reverberation time in a chamber built at 1/8 scale over the corresponding frequency range, i.e., 800 Hz - 40 kHz, and at different values of relative air humidity (40% to 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with the theoretical ones. In general, the values correspond with each other, but for high frequencies and low relative air humidity the differences are significant. Those discrepancies may directly influence the measured sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine an amendment minimizing the described inaccuracy.
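
One way to see how an empirical attenuation coefficient can be extracted: under Sabine conditions, ISO 354 gives the equivalent absorption area as A = 55.3·V/(c·T) − 4·V·m, so if the surface absorption is unchanged between two atmospheric states, the change in m follows from the two reverberation times alone (the chamber volume cancels). A minimal sketch with hypothetical numbers:

```python
def delta_attenuation_coefficient(t_before, t_after, c=343.0):
    """Change in the air attenuation coefficient m (1/m) between two
    atmospheric states, from reverberation times (s) measured in the same
    chamber, assuming unchanged surface absorption:
    A = 55.3*V/(c*T) - 4*V*m (ISO 354), so V cancels in the difference."""
    return (55.3 / (4.0 * c)) * (1.0 / t_after - 1.0 / t_before)

# Hypothetical: at 20 kHz the empty-chamber reverberation time rises
# from 1.6 s to 1.9 s after drying the air, implying a reduction in m:
print(f"delta m = {delta_attenuation_coefficient(1.6, 1.9):.4f} 1/m")  # negative
```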

Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling

Procedia PDF Downloads 421
219 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the HoloLens augmented reality headset and analyzes the captured images to classify and detect electronic components. The system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components are collected using the HoloLens 2 and stored in a database. The collected image dataset is then used to train a machine learning model. The raw images are cleaned, processed, and labeled to facilitate the analysis of component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard, and the images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm is used to train the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by the SVM. By adequately labeling the training data, the model predicts, categorizes, and assesses whether students place components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components; the system constantly checks whether students position components on the breadboard appropriately and connect them so that the circuit functions. When students misplace components, the HoloLens predicts the error before the components are fixed in the wrong position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
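
A minimal, hypothetical sketch of the hybrid CNN-SVM idea described above, with random tensors standing in for the HoloLens image dataset: a small CNN produces feature vectors, and an SVM classifies them.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class ConvFeatures(nn.Module):
    """Tiny CNN feature extractor; the SVM classifies its output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)

# Hypothetical data: 64x64 component crops, 4 classes
# (e.g., resistor / capacitor / transistor / misplaced)
images = torch.randn(40, 3, 64, 64)
labels = np.random.randint(0, 4, size=40)

extractor = ConvFeatures().eval()
with torch.no_grad():
    feats = extractor(images).numpy()          # (40, 32) feature vectors

svm = SVC(kernel="rbf").fit(feats, labels)     # SVM on CNN features
print(svm.predict(feats[:5]))
```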

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 134
218 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardized access opening and the attainment of glide paths and patency with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused minimal change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance with regard to treatment time, procedural errors, and volume change.

Keywords: separated instruments retrieval, dynamic navigation system, 3D video microscope, trephine burs, extractor

Procedia PDF Downloads 69
217 Experimental Research and Analyses of Yoruba Native Speakers’ Chinese Phonetic Errors

Authors: Obasa Joshua Ifeoluwa

Abstract:

Phonetics is the foundation and one of the most important parts of language learning. Through an acoustic experiment analyzed with the Praat software, this article visually compares Yoruba students' pronunciation of Chinese consonants, vowels, and tones with that of native Chinese speakers. The article is aimed at Yoruba native speakers learning Chinese phonetics; therefore, Yoruba students were selected. The students surveyed were required to be at an elementary level and to have studied Chinese for less than six months; all are undergraduates majoring in Chinese Studies at the University of Lagos, have already learned Chinese Pinyin, and are familiar with the pinyin used in the questionnaire provided. The Chinese participants selected are those who have passed the level two Mandarin proficiency examination, which serves as an assurance that their pronunciation is standard. This work finds that, for Mandarin consonants, Yoruba students cannot distinguish the voiced/voiceless or the aspirated/unaspirated phonetic features. For instance, when a student pronounces [pʰ], the spectrogram clearly shows that the Voice Onset Time (VOT) of a Chinese speaker is longer than that of a Yoruba native speaker, which means that the Yoruba speaker is producing the unaspirated counterpart [p]. Another difficulty is pronouncing affricates and fricatives such as [tʂ], [tʂʰ], [ʂ], [ʐ], [tɕ], [tɕʰ], and [ɕ], because these sounds are not in the phonetic system of the Yoruba language. As for vowels, some students find it difficult to pronounce the allophonic high vowels [ɿ] and [ʅ], pronouncing them instead as the phoneme [i]; another pronunciation error is producing [y] as [u], and, as shown in the spectrogram, one student pronounced [y] as [iu]. In terms of tone, it is most difficult for students to differentiate the second (rising) and third (falling-rising) tones, because both involve a rising pitch. This work concludes that the major errors made by Yoruba students when pronouncing Chinese sounds are caused by interference from their first language (L1) and sometimes from their lingua franca.
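
VOT itself is usually read off the Praat waveform and spectrogram by hand, but the measurement idea can be sketched: find the release burst as the first strong energy spike, then the voicing onset as the first subsequent frame showing pitch-range periodicity. The sketch below is a crude, frame-level illustration with assumed thresholds, not a substitute for Praat annotation.

```python
import numpy as np

def estimate_vot(x, fs):
    """Crude VOT estimate in seconds: time from the release burst (first
    frame whose energy exceeds 30% of the maximum) to voicing onset (first
    later frame whose autocorrelation shows periodicity at 75-400 Hz).
    Frame-level resolution only (20 ms frames); thresholds are assumed."""
    frame = int(0.02 * fs)                                   # 20 ms frames
    frames = x[:len(x) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).sum(axis=1)
    burst = int(np.argmax(energy > 0.3 * energy.max()))      # burst frame
    lo, hi = int(fs / 400), int(fs / 75)                     # pitch lag range
    for i in range(burst + 1, len(frames)):
        f = frames[i] - frames[i].mean()
        ac = np.correlate(f, f, mode="full")[frame - 1:]
        if ac[0] > 0 and hi < frame and ac[lo:hi].max() / ac[0] > 0.5:
            return (i - burst) * frame / fs                  # voiced frame found
    return None
```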

Keywords: Chinese, Yoruba, error analysis, experimental phonetics, consonant, vowel, tone

Procedia PDF Downloads 111
216 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Errors from modeling and from the statistical fitting of these models will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments; there are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that the distribution can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, presents the first stochastic finite element implementation in composite process modelling.
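
A minimal sketch of the hierarchical idea using PyMC (an assumption of ours; the paper's own implementation couples MCMC with a hyperelastic polynomial model in FEniCS). A linear constitutive stand-in keeps the example short: each test has its own parameter drawn from a population distribution whose hyperparameters are inferred.

```python
import numpy as np
import pymc as pm

# Hypothetical ensemble: 5 consolidation tests, each governed by a
# per-test stiffness parameter theta_j drawn from a population.
rng = np.random.default_rng(1)
n_tests, n_obs = 5, 20
true_theta = rng.normal(10.0, 1.5, size=n_tests)
strain = np.tile(np.linspace(0.01, 0.1, n_obs), (n_tests, 1))
stress = true_theta[:, None] * strain + rng.normal(0, 0.05, (n_tests, n_obs))

with pm.Model() as hierarchical:
    # Population-level (hyper)parameters shared across tests
    mu = pm.Normal("mu", 10.0, 5.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    # Per-test parameters drawn from the population distribution
    theta = pm.Normal("theta", mu=mu, sigma=sigma, shape=n_tests)
    noise = pm.HalfNormal("noise", 1.0)
    # Linear stand-in for the hyperelastic constitutive model
    pm.Normal("obs", mu=theta[:, None] * strain, sigma=noise, observed=stress)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(trace.posterior["mu"].mean().item(), trace.posterior["sigma"].mean().item())
```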

Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 146
215 Functional Vision of Older People in Galician Nursing Homes

Authors: C. Vázquez, L. M. Gigirey, C. P. del Oro, S. Seoane

Abstract:

Early detection of visual problems plays a key role in the aging process. However, although vision problems are common among older people, the percentage of aging people who undergo regular optometric exams is low; in fact, uncorrected refractive errors are one of the main causes of visual impairment in this population group. Purpose: To evaluate the functional vision of older residents in order to show the urgent need for visual screening programs in Galician nursing homes. Methodology: We examined 364 older adults aged 65 years and over. To measure vision in daily living, we tested distance and near presenting visual acuity (binocular visual acuity with habitual correction if worn, directional E-Snellen). Presenting near vision was tested at the usual working distance. We defined visual impairment (distance and near) as a presenting visual acuity of less than 0.3. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. Associations between categorical variables were assessed using chi-square tests. We used Pearson and Spearman correlation tests and analysis of variance to determine differences between groups of interest. Results: 23.1% of participants have visual impairment for distance vision and 16.4% for near vision. The percentage of residents with both distance and near visual impairment reaches 8.2%. As expected, the prevalence of visual impairment increases with age. No differences exist in the level of functional vision between genders. Differences exist between age groups with respect to distance vision, but not for near vision. Conclusion: The prevalence of visual impairment is high among the older people tested in this pilot study. This means a high percentage of older people have limitations in their daily life activities. It is necessary to develop an effective vision screening program for the early detection of vision problems in Galician nursing homes.

Keywords: functional vision, elders, aging, nursing homes

Procedia PDF Downloads 408
214 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio

Authors: Tamal Roy, Anuradha Bhat

Abstract:

Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates, and factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. The zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of the native environment in spatial cognitive abilities. Four populations were collected across India from water bodies with contrasting ecological conditions. The habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and vegetation diversity. Experiments were conducted on the populations under controlled laboratory conditions. A square spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal ends of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre and three sections separated by two windows. A removable coloured window pane (red, yellow, green, or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber, and the position of the food reward and plant within that chamber was fixed. A test fish had to explore the maze by taking turns and locate the food in the rewarded section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. One test fish at a time was released into the arena and allowed 20 minutes to explore and find the food reward. In this way, individual fishes were trained in the maze to locate the food reward on eight consecutive days, with the position of the red door, plant, and reward shuffled every day. Following training, there was an intermission of four days during which the fishes were not subjected to trials. Post-intermission, on the 13th day, the fishes were re-tested, following the same protocol, for their ability to remember the learnt task. The exploratory tendencies and latency of individuals to explore on the first day of training, performance time across trials, and number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations: fishes could be expected to use an algorithm (a sequence of turns) or associative cues to locate the food reward. Individuals from the different populations did not differ significantly in latencies and tendencies to explore, and no relationship was found between exploration and learning across populations. High-habitat-complexity populations had higher rates of learning and stronger memory, while low-habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High-habitat-complexity populations used associative cues more than an algorithm for learning and remembering, while low-habitat-complexity populations used both equally. The study therefore helped in understanding the role of natural ecology in explaining variations in spatial learning abilities across populations.

Keywords: algorithm, associative cue, habitat complexity, population, spatial learning

Procedia PDF Downloads 288
213 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about "user-friendly" and "easy-to-use" birotical tools (computer-related office tools) have been spreading and misleading end-users. This attitude has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement. (1) The lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not; they are not aware of their ignorance, which prevents them from realizing their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments, end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We provide a definition of correctly edited text and, based on this definition, adapt the debugging method known from programming. According to the method, before real text editing takes place, already existing texts are thoroughly debugged and the errors categorized. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources

Procedia PDF Downloads 382
212 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions

Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali

Abstract:

Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows a user with a lower-limb amputation to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow sight or joystick operation errors. Thus, this paper presents the design, analysis, and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamics (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles, taking as inputs the distance from the ultrasonic sensor and the user-specified direction from the joystick operation. The proposed fuzzy driving control system governs the direction and velocity of the wheelchair. The wheelchair model was examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm was implemented in the Gazebo 3D robotics simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the non-accidental behavior of the electric wheelchair.
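
A minimal, hand-rolled sketch of the fuzzy idea for the velocity channel (triangular membership functions and a weighted-average defuzzifier; all thresholds and speeds are hypothetical, not the wheelchair's actual rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Fuzzy speed command from an ultrasonic distance reading.
    Rules: NEAR -> STOP, MEDIUM -> SLOW, FAR -> CRUISE.
    Defuzzified as a weighted average of the rule outputs."""
    near   = tri(distance_m, -0.5, 0.0, 1.0)
    medium = tri(distance_m,  0.5, 1.5, 2.5)
    far    = tri(distance_m,  2.0, 3.5, 5.0)
    stop, slow, cruise = 0.0, 0.4, 1.0     # crisp output speeds (m/s)
    w = near + medium + far
    return (near * stop + medium * slow + far * cruise) / w if w else 0.0

for d in (0.3, 1.2, 3.0):
    print(f"distance {d} m -> speed {fuzzy_speed(d):.2f} m/s")
```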

Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor

Procedia PDF Downloads 129
211 Ethical Considerations of Disagreements Between Clinicians and Artificial Intelligence Recommendations: A Scoping Review

Authors: Adiba Matin, Daniel Cabrera, Javiera Bellolio, Jasmine Stewart, Dana Gerberi (librarian), Nathan Cummins, Fernanda Bellolio

Abstract:

OBJECTIVES: Artificial intelligence (AI) tools are becoming more prevalent in healthcare settings, particularly for diagnostic and therapeutic recommendations, with a surge expected in the coming years. The bedside use of this technology opens the possibility of disagreements between the recommendations of AI algorithms and clinicians' judgment. There is a paucity of literature analyzing the nature and possible outcomes of these potential conflicts, particularly with respect to ethical considerations. The goal of this scoping review is to identify, analyze, and classify current themes and potential strategies addressing ethical conflicts originating from disagreement between AI and human recommendations. METHODS: A protocol was written prior to the initiation of the study. Relevant literature was searched by a medical librarian for the terms artificial intelligence, healthcare, and liability, ethics, or conflict. The search was run in 2021 in Ovid Cochrane Central Register of Controlled Trials, Embase, Medline, IEEE Xplore, Scopus, and Web of Science Core Collection. Articles describing the role of AI in healthcare that mentioned conflict between humans and AI were included in the primary search. Two investigators, working independently and in duplicate, screened titles and abstracts and reviewed the full text of potentially eligible studies. Data were abstracted into tables and reported by theme. We followed the methodological guidelines of the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). RESULTS: Of 6,846 titles and abstracts, 225 full texts were selected and 48 articles were included in this review: 23 as original research and review papers, and 25 as editorials and commentaries with similar themes. There was a lack of consensus in the included articles on who would be held liable for mistakes incurred by following AI recommendations. There appears to be a dichotomy in the perceived ethical consequences depending on whether the negative outcome results from a human-versus-AI conflict or from a deviation from the standard of care. Themes identified included transparency versus opacity of recommendations, data bias, liability for outcomes, regulatory frameworks, and the overall scope of artificial intelligence in healthcare. A relevant issue identified was clinicians' concern over the "black box" nature of these recommendations and their ability to judge the appropriateness of AI guidance. CONCLUSION: AI clinical tools are being rapidly developed and adopted, and the use of this technology will create conflicts between AI algorithms and healthcare workers, with various outcomes. In turn, these conflicts may have legal and ethical implications. There is limited consensus about ethical and legal liability for outcomes arising from such disagreements. This scoping review identified the importance of framing the problem in terms of conflict with the standard of care or not, informed by the themes of transparency/opacity, data bias, legal liability, absent regulatory frameworks, and understanding of the technology. Finally, only limited recommendations to mitigate ethical conflicts between AI and humans have been identified; further work is necessary in this field.

Keywords: ethics, artificial intelligence, emergency medicine, review

Procedia PDF Downloads 94
210 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results prove that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 71
209 Downscaling Grace Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation

Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang

Abstract:

The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth's gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used to estimate terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface water and groundwater storage (GWS) assessments. Yet the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities, largely because of the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information available to local-scale decision-makers while providing data for regions that lack adequate in situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended spectral combination theory to downscale GRACE simultaneously in space and time, from its coarse 3° spatial resolution to 0.25° and from monthly to daily resolution. The method combines the monthly gravity field solutions of GRACE with daily hydrological model products, in the form of both low- and high-frequency signals, to produce high-spatiotemporal-resolution TWSA and GWSA products. The main contribution and originality of this study is to comprehensively and simultaneously consider GRACE and the hydrological variables, together with their uncertainties, in forming the estimator in the spectral domain. It is therefore expected that downscaled products of acceptable accuracy can be reached.
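
The estimator itself weights the two data sources by their error spectra; as a toy stand-in, the 1-D sketch below keeps the low-frequency content of a coarse GRACE-like signal and the high-frequency content of a fine hydrological-model-like signal with a hard spectral cutoff. All signals are synthetic.

```python
import numpy as np

def spectral_combine(coarse, fine, cutoff):
    """Toy 1-D spectral combination: low frequencies from the coarse
    (GRACE-like) signal, high frequencies from the fine (model-like)
    signal. The real method weights both sources by their error spectra;
    a hard cutoff stands in here."""
    C, F = np.fft.rfft(coarse), np.fft.rfft(fine)
    out = F.copy()
    out[:cutoff] = C[:cutoff]          # trust GRACE at long wavelengths
    return np.fft.irfft(out, n=len(coarse))

t = np.linspace(0, 1, 256, endpoint=False)
truth  = np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*40*t)
coarse = np.sin(2*np.pi*3*t) + 0.05*np.random.randn(256)      # smooth, low-res
fine   = 0.8*np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*40*t)   # biased, high-res
combined = spectral_combine(coarse, fine, cutoff=10)
print(np.abs(combined - truth).mean())   # small residual
```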

Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage

Procedia PDF Downloads 83
208 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization

Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed

Abstract:

Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of the interferograms. We develop a deep learning model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
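
The Lee filter component is classical and compact enough to sketch. Below is a minimal implementation of the additive-form Lee speckle filter; the window size and noise estimate are assumptions, and production InSAR processing would estimate the noise level from homogeneous regions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Classic Lee speckle filter: per-pixel blend between the local mean
    and the observed value, weighted by the local signal variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img**2, size)
    var = sq_mean - mean**2
    if noise_var is None:
        noise_var = np.mean(var)                    # crude global noise estimate
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0, 1)
    return mean + gain * (img - mean)

# Hypothetical speckled amplitude image (multiplicative, mean-preserving speckle)
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(1, 2, 128), np.linspace(1, 2, 128))
speckled = clean * rng.gamma(4, 0.25, clean.shape)
print(np.std(speckled - clean), np.std(lee_filter(speckled) - clean))
```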

Keywords: CNN-SAR, Lee Filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction

Procedia PDF Downloads 12
207 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging and localization systems achieving high-accuracy time-of-flight (ToF) estimation have relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), which is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency-hopping-based carrier phase ranging algorithm that achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to remove the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency-hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution, achieving a ranging accuracy in the typical 80 MHz bandwidth of commercial WiFi comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implemented the method on a software radio platform; the experimental results show a median ranging error of 5.4 cm at 5 m, 7 cm at 10 m, and 10.8 cm at 20 m for a total bandwidth of 80 MHz.
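
The carrier-phase idea can be sketched in a few lines: for a single propagation path, the channel phase at frequency f is φ(f) = −2πfτ, so hopping across frequencies and fitting the slope of the unwrapped phase recovers the ToF τ. A toy simulation with hypothetical numbers (single path, no CFO, so the two-way error elimination is not needed here):

```python
import numpy as np

c = 3e8                                    # speed of light (m/s)
freqs = 2.4e9 + np.arange(16) * 5e6        # 16 hops, 5 MHz apart (~80 MHz span)
true_range = 7.0                           # metres (one-way, single path)
tau = true_range / c                       # time of flight (s)

# Measured (wrapped) channel phase per hop, with a little phase noise
rng = np.random.default_rng(0)
h = np.exp(-1j * 2 * np.pi * freqs * tau)
phase = np.angle(h) + 0.01 * rng.standard_normal(freqs.size)

# Unwrap across hops and fit the slope d(phi)/d(f) = -2*pi*tau
slope = np.polyfit(freqs - freqs[0], np.unwrap(phase), 1)[0]
tau_hat = -slope / (2 * np.pi)
print(f"estimated range = {tau_hat * c:.2f} m")   # close to 7 m
```

The 5 MHz hop spacing keeps adjacent-hop phase differences below π, which is what makes the unwrapping unambiguous in this sketch.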

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 122
206 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way to provide continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, however, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. These numerical results demonstrate that, compared with traditional methods, the proposed GAILoc method significantly improves positioning performance and reduces radio map construction costs.
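The t-SNE feature-extraction step can be sketched as follows; the fingerprint matrix here is synthetic stand-in data (the paper uses hybrid WLAN and LTE measurements), and the perplexity value is an illustrative choice.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical fingerprint matrix: one row per reference point, columns =
# WLAN RSSI readings concatenated with LTE readings (synthetic stand-in data).
rng = np.random.default_rng(0)
fingerprints = rng.normal(loc=-80.0, scale=10.0, size=(500, 48))

# t-SNE embeds the noisy high-dimensional fingerprints in a low-dimensional
# space that preserves local neighbourhoods, yielding denoised features on
# which a localization model can then be trained.
features = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(features.shape)  # (500, 2)
```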

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 75
205 A Review on the Impact of Mental Health of the Workman Employees Leads to Unsafe Activities in the Manufacturing Industry

Authors: C. John Thomas, Sabitha Jannet

Abstract:

The review concentrates on mental health and wellbeing at the workplace with the aim of creating a safe work environment. The purpose of the study is to find the existing gaps in occupational health in the manufacturing sector. Mental wellbeing is important because it is an essential component of human life and influences our emotions, attitudes, and feelings. In the workplace, mental wellbeing can encourage a culture of safety and help avoid accidents by fostering an environment where individuals are comfortable expressing themselves and being themselves. More technically, when individuals have psychological safety at work, they feel relaxed raising complaints and admitting errors without fear of humiliation or punishment; they are confident that speaking up will not lead to being humiliated, neglected, or blamed, and when they are uncertain about something, they know they can ask questions. They are inclined to trust and respect their colleagues. The reviews were selected through keywords and health-related topics. The literature describes different characteristics of mental wellbeing and how it impacts the workplace; there is also a possibility that workers' personal lives have an impact. In every occupation, there is widespread acknowledgment that psychosocial hazards are an important health risk for workers, yet in many workplaces the focus remains on physical hazards. Some allege that the under-recognition of workplace psychosocial hazards is primarily due to the perception that they present a more difficult and complex challenge than other health and safety issues; others allege it is the paucity of awareness about psychosocial hazards and their alleviation that explains their relative neglect. Other researchers, following global trends, hold that psychosocial hazards must be minimized within our workplaces and that workplace interventions are required to reduce psychological harm and promote mental health for all workman employees so as to achieve zero harm. Overall, this literature review compares the research methods and findings of the individual studies in order to fill these gaps.

Keywords: mental health wellbeing, occupational health, psychosocial hazards, safety culture, safety management systems, workman employee, workplace safety

Procedia PDF Downloads 114
204 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models

Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan

Abstract:

This paper extends the conditional autoregressive range (CARR) model to a multivariate CARR (MCARR) model and further to a two-stage MCARR-return model to model and forecast the volatilities, correlations and returns of multiple financial assets. The first-stage model fits the scaled realised Parkinson volatility measures of the individual series and their pairwise sums of indices to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise-sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is fed into the stage-two return model to capture the heteroskedasticity of the assets' returns. We investigate different choices of mean function to describe the volatility dynamics. Empirical applications are based on the Standard and Poor's 500, Dow Jones Industrial Average and Dow Jones United States Financial Service Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than CARR models under two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. Estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return model for each asset are provided and tested with the Kupiec test to confirm the accuracy of the VaR forecasts.
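As a pointer to how stage one can be set up, here is a minimal sketch of the realised Parkinson measure and a univariate CARR(1,1) fitted by exponential quasi-maximum likelihood; the MCARR extension, mean-function choices and data are the paper's, while the initialisation and the simulated series below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def parkinson(high, low):
    """Realised Parkinson volatility measure from daily high/low prices."""
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

def carr_neg_loglik(params, r):
    """CARR(1,1): R_t = lam_t * eps_t with unit-exponential errors eps_t."""
    omega, alpha, beta = params
    lam = np.empty_like(r)
    lam[0] = r.mean()                          # initialisation (assumption)
    for t in range(1, len(r)):
        lam[t] = omega + alpha * r[t - 1] + beta * lam[t - 1]
    return np.sum(np.log(lam) + r / lam)       # negative exponential QML objective

# toy fit on simulated range data (illustrative, not the paper's indices)
rng = np.random.default_rng(1)
r = rng.exponential(scale=1.0, size=1000)
fit = minimize(carr_neg_loglik, x0=[0.1, 0.2, 0.6], args=(r,),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])
print(fit.x)  # estimated (omega, alpha, beta)
```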

Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk

Procedia PDF Downloads 99
203 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students

Authors: Lily Ranjbar, Haori Yang

Abstract:

Online education has been around for several decades, but its importance became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through content delivery and oversight processes, it has limitations in developing hands-on laboratory skills, especially in the STEM fields. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, and many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education: it can engage students and improve their learning outcomes. Online education and virtual labs have likewise gained substantial popularity in engineering and science education, so developing virtual labs is vital for institutions seeking to deliver high-class education to their students, including online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the SpectralLabs company, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics (MHP) program. It was essential to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was accomplished using the Realistic, Adaptive, Interactive Learning System (RAILS), a comprehensive simulation-based learning system for training. It comprises a web-based learning management system located on a central server and a 3D simulation package that is downloaded locally to user machines. The graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact, and engage with a real STEM lab in all its dimensions: they feel like they are in a real lab environment and see the same systems they would in a physical lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces give students full functionality for changing the experimental setup and for live data collection with real-time updates for each experiment; students can perform all experimental setups and parameter changes manually. Experimental results can then be tracked and analyzed in an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The AVR-DML developed in this study enabled the NSE school to offer a fully online MHP program, and this flexibility of course modality helped attract more non-traditional students, including international students. It is a valuable educational tool: students can walk around the virtual lab, make mistakes, and learn from them, with unlimited time to repeat and engage in experiments. The lab will also help speed up training in nuclear science and engineering.

Keywords: advanced radiation detection and measurement, virtual laboratory, realistic adaptive interactive learning system (RAILS), online education in STEM fields, student engagement, STEM online education, STEM laboratory, online engineering education

Procedia PDF Downloads 90
202 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Existing quantitative research on the main control factors of lost circulation considers few factors and relies on a single data source. Using an unmanned intervention algorithm to find the main control factors of lost circulation allows all measurable parameters to be adopted. The degree of lost circulation is characterized by the loss rate, which serves as the objective function. Geological, engineering and fluid data are used as layers, and 27 factors such as wellhead coordinates and weight on bit (WOB) are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by multiple regression, and the undetermined coefficients of the equation are solved by the undetermined coefficient method. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force and drilling time were selected as the main control factors by the elimination method, the contribution rate method and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, i.e. errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of all three factors is smaller than that of geological factors. We quantitatively calculate the best combination of funnel viscosity, final shear force and drilling time: the minimum loss rate of lost-circulation wells in the Shunbei area is 10 m3/h. It follows that man-made main control factors can only slow down the leakage, not eliminate it fundamentally, which is consistent with the characteristics of the karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
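A minimal sketch of the regression-and-screening step is shown below, using synthetic stand-in data with three illustrative factors; the paper's actual equation has 27 factors, and the t-test threshold of 40 and the field data are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the drilling dataset: three candidate factors
# (think funnel viscosity, final shear force, drilling time) vs. loss rate.
rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))
loss_rate = (10 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
             + rng.normal(scale=2.0, size=n))

model = sm.OLS(loss_rate, sm.add_constant(X)).fit()
print(model.tvalues)  # per-factor t-statistics used for screening
print(model.fvalue)   # overall F-statistic for model significance
print(model.params)   # undetermined coefficients of the loss-rate equation
```

Factors whose t-statistics fall below the chosen threshold are eliminated, and the retained main control factors are then optimised jointly to find the combination that minimises the predicted loss rate.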

Keywords: drilling and completion, drilling fluid, lost circulation, loss rate, main controlling factors, unmanned intervention algorithm

Procedia PDF Downloads 112