Search results for: explanations for the probable causes of the errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1404

204 A Corpus Study of English Verbs in Chinese EFL Learners’ Academic Writing Abstracts

Authors: Shuaili Ji

Abstract:

The correct use of verbs is an important element of high-quality research articles, so it is important for Chinese EFL learners to master the characteristics of English verbs and to use them precisely. However, some studies have shown that learners differ from native speakers in their use of verbs and have difficulty using English verbs. This corpus-based quantitative study can enhance learners' knowledge of English verbs and improve the quality of research article abstracts, and even of academic writing as a whole. The aim of this study is to identify the differences between learners' and native speakers' use of verbs and to examine the factors that contribute to those differences. To this end, the research question is as follows: What are the differences between the verbs most frequently used by learners and those used by native speakers? The question is answered through a corpus-based, data-driven analysis of the verbs used by learners in their abstract writing in terms of collocation, colligation and semantic prosody. The results show that: (1) EFL learners clearly overused 'be, can, find, make' and underused 'investigate, examine, may'. Among modal verbs, learners clearly overused 'can' while underusing 'may'. (2) Learners clearly overused 'we find + object clause' while underusing 'noun (results, findings, data) + suggest/indicate/reveal + object clause' when reporting research results. (3) Learners tended to transfer the collocation, colligation and semantic prosody of shǐ and zuò to 'make'. (4) Learners clearly overused 'BE + V-ed' and used BE as a main verb. They also clearly overused the base and present forms of BE (be, is, are) while underusing its past forms (was, were). These results reveal learners' lack of accuracy and idiomaticity in verb usage. Owing to conceptual transfer from Chinese, the verbs in learners' abstracts showed clear mother-tongue transfer. In addition, learners had not fully mastered the use of verbs and avoided complex colligations in order to prevent errors. Based on these findings, the present study has implications for English teaching and, in particular, for English academic abstract writing in China. Further research could examine verb use across whole dissertations to determine whether the characteristics of verbs observed in abstracts hold for the dissertation as a whole.
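
As an illustration of the kind of over/underuse comparison reported above, the sketch below computes Dunning's log-likelihood keyness for a single verb across a learner corpus and a native-speaker reference corpus; the counts and corpus sizes are hypothetical and are not taken from the study.

```python
import math

def log_likelihood(freq_learner, size_learner, freq_native, size_native):
    """Dunning's log-likelihood keyness for one word across two corpora."""
    # Expected frequencies under the null hypothesis of equal relative use
    total = size_learner + size_native
    expected_learner = size_learner * (freq_learner + freq_native) / total
    expected_native = size_native * (freq_learner + freq_native) / total
    ll = 0.0
    for observed, expected in ((freq_learner, expected_learner),
                               (freq_native, expected_native)):
        if observed > 0:
            ll += observed * math.log(observed / expected)
    return 2 * ll

# Hypothetical counts: 'make' in a 50k-token learner corpus vs. an 80k-token native corpus
print(log_likelihood(420, 50_000, 310, 80_000))  # values > 3.84 are significant at p < 0.05
```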

Keywords: academic writing abstracts, Chinese EFL learners, corpus-based, data-driven, verbs

Procedia PDF Downloads 335
203 A Genre-Based Approach to the Teaching of Pronunciation

Authors: Marden Silva, Danielle Guerra

Abstract:

Some studies have indicated that pronunciation teaching has not received enough attention from teachers in EFL contexts. In particular, addressing segmental and suprasegmental features through a genre-based approach may offer a way to integrate pronunciation into more meaningful learning practice. The aim of this project was therefore to survey the aspects of English pronunciation that Brazilian students consider most difficult to learn, thus enabling a discussion of strategies that can facilitate the development of oral skills in English classes by integrating the teaching of phonetic-phonological aspects into the genre-based approach. Some authors have proposed the notions of intelligibility, fluency and accuracy as an ideal didactic sequence: according to their proposals, basic learners should be exposed to activities focused on intelligibility, intermediate students to fluency, and more advanced students to accuracy practices. To test this hypothesis, data were collected during three high school English classes at the Federal Center for Technological Education of Minas Gerais (CEFET-MG), Brazil, through questionnaires and didactic activities, which were recorded and transcribed for further analysis. The genre debate was chosen to elicit freer oral expression from the participants, who answered questions and gave their opinions on a previously selected topic. The findings indicated that basic students had more difficulty with aspects of English pronunciation than the others: many of the intelligibility aspects analyzed had to be listened to more than once to be understood. The intermediate students' recorded speech was considerably easier to understand, but they found it harder to pronounce words fluently, often interrupting their speech to think about what they were going to say and how to say it. Lastly, more advanced learners expressed their ideas more fluently, although subtle accuracy errors were still perceptible in their speech, thereby confirming the proposed hypothesis. The results also suggest that using a genre-based approach to promote oral communication in English classes may be a relevant method, given the socio-communicative function inherent in the approach.

Keywords: EFL, genre-based approach, oral skills, pronunciation

Procedia PDF Downloads 130
202 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, and especially articles, have been relatively overlooked by English learners. The article system, shaped by several factors, becomes to a certain extent an obstacle to knowing English better. Three principal factors related to the nature of articles can be identified when accounting for the difficulty of the English article system. The second language acquisition process makes the article system more complex still, because [-ART] learners have to create a new grammatical category, which leads even most non-native speakers at a proficient level to make errors. Studies of the sequence of acquisition of the English articles show that the zero article is acquired first but with high inaccuracy: it is often overused in the early stages of L2 acquisition. Although learners at the intermediate level shift towards underusing the zero article once they realize that it does not cover every case, overproduction of the zero article occurs even among advanced L2 learners. The aim of this study is to investigate the noun-phrase factors that give rise to incorrect use or overuse of the zero article, thus providing suggestions for L2 English acquisition. It also enables teachers to deliver effective instruction that activates students' conscious learning. The research question is answered through a corpus-based, data-driven analysis of noun-phrase elements in terms of semantic context and the countability of noun phrases. Based on the analysis of the International Thurber Thesis corpus, the results show that: (1) Although [-definite, -specific] contexts favored the zero article, both [-definite, +specific] and [+definite, -specific] contexts showed less influence; prototypicality plays a vital role in the frequency order of the zero article. (2) The EFL learners in this study had trouble classifying abstract nouns as countable: overuse of the zero article arises when learners cannot make clear judgements on countability as the context shifts from (+definite) to (-definite). Once a noun is perceived as uncountable, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary, for example by explaining nouns in lexical phrases, and should explore more complex aspects such as discourse-dependent analysis.

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 253
201 Patient Safety Culture in Brazilian Hospitals from the Nursing Team's Perspective

Authors: Carmen Silvia Gabriel, Daniele Bernardi da Costa, Andrea Bernardes, Sabrina Elias Mikael, Daniele da Silva Ramos

Abstract:

The goal of this quantitative study is to investigate patient safety culture from the perspective of professionals on the hospital nursing team. It was conducted in two Brazilian hospitals, and the sample included 282 nurses. Data collection occurred in 2013 through the Hospital Survey on Patient Safety Culture questionnaire. Based on the assessment of its dimensions, it is noteworthy that, in the dimension teamwork within hospital units, 69.4% of professionals agree that when a lot of work needs to be done quickly, they work together as a team; in the dimension supervisor/manager expectations and actions promoting safety, 70.2% agree that their supervisor overlooks patient safety problems. Regarding organizational learning and continuous improvement, 56.5% agree that the effectiveness of changes is evaluated after their implementation. On hospital management support for patient safety, 52.8% report that the actions of hospital management show that patient safety is a top priority. On the overall perception of patient safety, 57.2% disagree that patient safety is never compromised by a higher amount of work to be completed. Regarding feedback and communication about error, 57.7% report that they always or usually receive such information. On communication openness, 42.9% said they never or rarely feel free to question the decisions or actions of their superiors. On the frequency of event reporting, 64.7% said they often or always report events that cause no harm to patients. On teamwork across hospital units, the percentages of agreement and disagreement were similar: for the item 'there is good cooperation among hospital units that need to work together', they were 41.4% and 40.5%, respectively. Regarding staffing adequacy, 77.8% disagree that there are enough employees to do the work, and 52.4% agree that shift changes are problematic for patients. On nonpunitive response to errors, 71.7% indicate that when an event is reported, the focus seems to be on the person. On the overall patient safety grade of the institution, 41.6% classified it as very good. It is concluded that there are positive aspects of the safety culture, as well as weaknesses such as a punitive culture and patient safety compromised by work overload.

Keywords: quality of health care, health services evaluation, safety culture, patient safety, nursing team

Procedia PDF Downloads 299
200 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in endodontics. Various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both types of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time and volumetric changes were measured. Statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was not significant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.
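
The abstract reports a non-significant difference in success rates between the two guidance methods (p > 0.05). A minimal sketch of such a comparison is given below using a Fisher exact test on a 2x2 success/failure table; the counts are hypothetical, and the original analysis was run in IBM SPSS.

```python
from scipy.stats import fisher_exact

# Hypothetical retrieval counts (success, failure) per guidance group, 30 roots each
microscope = (26, 4)
dynamic_nav = (22, 8)

odds_ratio, p_value = fisher_exact([microscope, dynamic_nav])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p > 0.05 -> not significant
```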

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 99
199 A Computerized Tool for Predicting Future Reading Abilities in Pre-Reading Children

Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix

Abstract:

Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding which teaching methods are best to develop. It is estimated that four to six percent of school-age children suffer from specific developmental disorders that impair learning. Findings from people with dyslexia and from typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills that they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems; in comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. Recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was therefore to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test specifically measures the time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and number of errors were recorded. One year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints). Finally, the results of this study assessed the validity of the DiagLECT test for predicting reading outcomes: the better a child's oculomotor skills are, the better his/her reading abilities will be.
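
A minimal sketch of the predictive relationship described above (oculomotor naming time at kindergarten versus reading level one year later) using ordinary least squares; the numbers are invented for illustration and do not come from the study.

```python
def linear_regression(x, y):
    """Ordinary least-squares fit y = a + b*x (pure Python)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: vertical naming time (s) at Time 0 vs. reading score one year later
vt = [28.4, 35.1, 42.0, 31.2, 50.3]
reading = [88, 79, 64, 84, 55]
intercept, slope = linear_regression(vt, reading)
print(f"reading ~ {intercept:.1f} + {slope:.2f} * VT")  # negative slope: slower naming, lower reading
```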

Keywords: vision, attention, oculomotor processes, reading, preschoolers

Procedia PDF Downloads 147
198 Comparison of Hospital Patient Safety Culture between Bulgarian, Croatian and American Hospitals: Preliminary Results

Authors: R. Stoyanova, R. Dimova, M. Tarnovska, T. Boeva, R. Dimov, I. Doykov

Abstract:

Patient safety culture (PSC) is an essential component of quality of healthcare, and improving PSC is considered a priority in many developed countries. A specialized software platform for the registration and evaluation of hospital patient safety culture has been developed with the support of Medical University Plovdiv (Project No. 11/2017). The aim of the study is to assess the status of PSC in Bulgarian hospitals and to compare it to that in US and Croatian hospitals. Methods: The study was conducted from June 01 to July 31, 2018, using the web-based Bulgarian version of the Hospital Survey on Patient Safety Culture questionnaire (B-HSOPSC). Two hundred and forty-eight medical professionals from different hospitals in Bulgaria participated in the study. To quantify the differences in the distribution of positive scores for each of the 42 HSOPSC items between the Bulgarian, Croatian and US samples, the χ²-test was applied. The research hypothesis assumed that there are no significant differences between the Bulgarian, Croatian and US PSCs. Results: The results revealed 14 significant differences in positive scores between the Bulgarian and Croatian PSCs and 15 between the Bulgarian and the US PSC, respectively. Bulgarian medical professionals provided fewer positive responses to 12 items compared with Croatian and US respondents. The Bulgarian respondents were more positive than the Croatians on feedback and communication about medical errors (items C1, C4, C5), as well as on the employment of locum staff (A7) and the frequency of reported mistakes (D1). Bulgarian medical professionals were more positive than their US colleagues on the communication of information at shift handover and across hospital units (F5, F7). The proportions of positive scores on the items 'Staff worries that their mistakes are kept in their personnel file' (RA16), 'Things fall between the cracks when transferring patients from one unit to another' (RF3) and 'Shift handovers are problematic for patients in this hospital' (RF11) were significantly higher among Bulgarian respondents compared with Croatian and US respondents. Conclusions: Significant differences in the distribution of positive scores were found between the Bulgarian and US PSC on the one hand and between the Bulgarian and Croatian PSC on the other. The study reveals that the distribution of positive responses could be explained by cultural, organizational and healthcare-system differences.
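
A sketch of the χ²-test on the distribution of positive scores for a single HSOPSC item between two national samples; the counts are hypothetical (the Bulgarian sample size of 248 is from the abstract, the Croatian figures are assumed for illustration).

```python
from scipy.stats import chi2_contingency

# Hypothetical positive / non-positive response counts for one HSOPSC item
bulgarian = [150, 98]   # n = 248 respondents (from the abstract)
croatian  = [220, 120]  # assumed sample for illustration only

chi2, p, dof, expected = chi2_contingency([bulgarian, croatian])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 -> distributions differ
```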

Keywords: patient safety culture, healthcare, HSOPSC, medical error

Procedia PDF Downloads 136
197 Simulations to Predict Solar Energy Potential Using ERA5 over North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a retrospective analysis of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of Numerical Weather Prediction (NWP) models, to develop an exhaustive record of weather and climate parameters. The performance of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis. The distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa for the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in dataset performance, which reveals significant variation of the errors across seasons; the performance of the dataset also changes with the temporal resolution of the data used for comparison. Monthly mean values show better agreement, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to provide a good representation of global solar radiation to support solar energy applications in all fields, by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
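
The error metrics quoted above can be reproduced with a few lines of code. The sketch below computes RMSE, MBE and MAE for a reanalysis estimate against ground measurements; the values are illustrative, not the study's data.

```python
import math

def error_metrics(estimated, measured):
    """RMSE, MBE and MAE between reanalysis estimates and ground measurements."""
    diffs = [e - m for e, m in zip(estimated, measured)]
    n = len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mbe = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    return rmse, mbe, mae

# Hypothetical normalized daily GSR values (ERA-5 vs. ground station)
era5   = [0.61, 0.72, 0.55, 0.80, 0.68]
ground = [0.58, 0.75, 0.52, 0.78, 0.70]
print(error_metrics(era5, ground))
```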

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 213
196 An Alternative Approach to a Machine Vision Operating System for Solving Industrial Control Issues

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining high coal with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, together with validation in laboratory conditions including calculation of relative errors, was carried out. A method for calculating apron feeder capacity based on a machine vision system was proposed, along with a simplified technology for three-dimensional modeling of the examined measuring area with machine vision. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the problem of controlling the volume of coal produced by a feeder while working off high coal by lava complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations: addition, subtraction, multiplication, and division. This simplifies software development and expands the range of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection task is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
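
A minimal sketch of the capacity calculation described above, using only the basic arithmetic operations the authors mention; the volume, bulk density and measurement interval are assumed values for illustration.

```python
# Hypothetical values: volume of rock mass reconstructed by the machine vision
# system over one measurement interval, and an assumed bulk density of coal.
volume_m3 = 0.42          # m^3, from the 3D model of the measuring area
bulk_density_kg_m3 = 900  # kg/m^3, assumed loose bulk density of coal
interval_s = 5.0          # s, duration of the measurement interval

# Feeder productivity in kg/s: volume x density / time
capacity_kg_s = volume_m3 * bulk_density_kg_m3 / interval_s
print(f"apron feeder capability ~ {capacity_kg_s:.1f} kg/s")
```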

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 115
195 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and rise onto their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and by establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and on both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess bias and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥ 0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and was, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥ 0.949, CV ≤ 5.6%). Conclusion: These results confirm that the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
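
A sketch of a Bland-Altman agreement calculation of the kind used above, comparing hypothetical CRapp outputs against motion-capture values; the numbers are invented, and the original analysis also included ICC and CV statistics.

```python
import statistics

def bland_altman(app_values, lab_values):
    """Bias and 95% limits of agreement between app and lab-based measurements."""
    diffs = [a - b for a, b in zip(app_values, lab_values)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical total-height outputs (cm) for five trials
crapp = [228.1, 215.4, 240.2, 198.7, 222.9]
mocap = [230.0, 214.8, 242.5, 197.9, 224.1]
bias, loa = bland_altman(crapp, mocap)
print(f"bias = {bias:.2f} cm, 95% LoA = {loa[0]:.2f} to {loa[1]:.2f} cm")
```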

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 166
194 Improving the Weekend Handover in General Surgery: A Quality Improvement Project

Authors: Michael Ward, Eliana Kalakouti, Andrew Alabi

Abstract:

Aim: The handover process is recognized as a vulnerable step in the patient care pathway where errors are likely to occur. As such, it is a major preventable cause of patient harm due to the human factors of poor communication and systematic error. The aim of this study was to audit the general surgery department's weekend handover process against the criteria for safe handover set out by the Royal College of Surgeons (RCS). Method: A retrospective audit was conducted of the general surgery department's Friday patient lists and the patient medical notes used for weekend handover in a London-based District General Hospital (DGH). Medical notes were analyzed against the RCS's suggested criteria for handover. A standardized paper weekend handover proforma was then developed in accordance with the guidelines and circulated in the department, and a post-intervention audit was conducted using the same methods (cycle 1). For cycle 2, we introduced an electronic weekend handover tool along with Electronic Patient Records (EPR). After a one-month period, a second post-intervention audit was conducted. Results: Following cycle 1, the paper weekend handover proforma was used in only 23% of patient notes. However, when it was used, 100% of notes contained a plan for the weekend, diagnosis and location, but only 40% documented potential discharge status and 40% ceiling-of-care status. Qualitative feedback was that it was time-consuming to fill out. Better results were achieved following cycle 2, with 100% of patient notes containing the electronic proforma. Results improved further, with every patient having a documented ceiling of care, discharge status and location. Only 55% of patients had a past surgical history documented; however, this was still an increase compared with the paper proforma (45%). When comparing the electronic with the paper proforma, there was an increase in documentation in every domain of the handover outlined by the RCS, with an average relative increase of 1.72 times (p < 0.05). Qualitative feedback was that the autofill function made it easy to use and simple to view. Conclusion: These results demonstrate that the implementation of an electronic autofill handover proforma significantly improved handover compliance with RCS guidelines, thereby improving the transmission of information from weekday to weekend teams.

Keywords: surgery, handover, proforma, electronic handover, weekend, general surgery

Procedia PDF Downloads 159
193 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, µ_Com(E), and the photoelectric effect, µ_Ph(E). In terms of the electron density ρ_e and the effective atomic number Z_eff, µ_Com(E) is proportional to ρ_e·f_KN(E), while µ_Ph(E) is proportional to ρ_e·Z_eff^x / E^y, where f_KN(E) is the Klein-Nishina formula and x and y are the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρ_e and Y = ρ_e·Z_eff^x from these two independent equations, as is attempted in DECT inversion. Since µ_Com(E) and µ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µ_w(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum, superposed on aluminium filters of different thickness l_Al with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state detector or a GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Z_eff materials like propionic acid, Z_eff^x is overestimated by 20%, with X within 1%. For high-Z_eff materials like KOH, Z_eff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
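
A sketch of the two-energy inversion step described above: with assumed (purely illustrative) inversion coefficients, the two spectrum-averaged attenuation values are solved for X = ρ_e and Y = ρ_e·Z_eff^x, and Z_eff is then recovered assuming a photoelectric exponent x of about 3.2. None of the numbers below are from the paper.

```python
import numpy as np

# Assumed inversion coefficients a_i, b_i in  <mu(V_i)> = a_i * X + b_i * Y,
# with X = rho_e and Y = rho_e * Z_eff**x (all values illustrative only).
A = np.array([[0.180, 5.0e-5],   # coefficients at V1 = 100 kVp
              [0.172, 2.0e-5]])  # coefficients at V2 = 140 kVp

mu = np.array([0.210, 0.184])    # hypothetical spectrum-averaged attenuation values

X, Y = np.linalg.solve(A, mu)    # invert the 2x2 linear system
Z_eff = (Y / X) ** (1 / 3.2)     # assuming a photoelectric exponent x ~ 3.2
print(X, Y, Z_eff)               # here Z_eff comes out near 7.4 (water-like)
```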

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 402
192 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied for deriving bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the site Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated using the Levenberg-Marquardt method, and a multiple linear regression model was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of the model residuals was mapped. Spatial autocorrelation was calculated, the performance of the bathymetric models was compared, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE values for the log-linear and non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
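
A sketch of a Lyzenga-type log-linear inversion of the kind calibrated above by multiple linear regression; the band reflectances and SBES depths are invented for illustration.

```python
import numpy as np

# Hypothetical WorldView-2 reflectances (blue, green bands) and SBES depths (m)
blue  = np.array([0.081, 0.065, 0.052, 0.043, 0.037])
green = np.array([0.120, 0.098, 0.080, 0.061, 0.050])
depth = np.array([2.1, 4.3, 6.8, 9.5, 12.2])

# Log-linear (Lyzenga-type) model: depth = a0 + a1*ln(blue) + a2*ln(green)
X = np.column_stack([np.ones_like(blue), np.log(blue), np.log(green)])
coeffs, *_ = np.linalg.lstsq(X, depth, rcond=None)

residuals = depth - X @ coeffs
rmse = np.sqrt(np.mean(residuals ** 2))
print(coeffs, rmse)
```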

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 298
191 Readout Development of an LGAD-Based Hybrid Detector for Microdosimetry (HDM)

Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincenzo Monaco, Boscardin Maurizio, La Tessa Chiara

Abstract:

Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome treatment uncertainties and optimize treatment outcome, the best possible description of radiation quality, linking the physical dose to its biological effect, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called Mean Chord Length (MCL) approximation and is related to the detector geometry. To improve the characterization of radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and four layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are very suitable for particle tracking because they can be thinned down to tens of micrometers and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations. Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information is joined for each event: the energy deposition in the TEPC and the corresponding track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out with three ADCs mounted on an FPGA board. The LGAD activated-strip signals are processed by dedicated chips, and finally the activated strips are stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
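
The lineal energy definition used above can be illustrated with a short calculation: under the MCL approximation the track length is fixed by the site geometry (for a spherical site, MCL = 4V/S = 2d/3), whereas HDM would instead use the track length reconstructed by the LGAD layers. All numbers below are assumed for illustration.

```python
# Lineal energy under the mean-chord-length (MCL) approximation versus the
# track length actually measured by the tracker (illustrative values only).
energy_deposited_keV = 150.0     # energy recorded by the TEPC for one event
site_diameter_um = 2.0           # tissue-equivalent site, 2 micrometres

mcl_um = (2.0 / 3.0) * site_diameter_um    # MCL of a sphere: 4V/S = 2d/3
y_mcl = energy_deposited_keV / mcl_um      # conventional lineal energy, keV/um

measured_track_um = 1.1                    # assumed track length from the LGAD layers
y_track = energy_deposited_keV / measured_track_um
print(f"y (MCL) = {y_mcl:.1f} keV/um, y (tracked) = {y_track:.1f} keV/um")
```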

Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry

Procedia PDF Downloads 176
190 Effect of Fertilization and Combined Inoculation with Azospirillum brasilense and Pseudomonas fluorescens on Rhizosphere Microbial Communities of Avena sativa (Oats) and Secale cereale (Rye) Grown as Cover Crops

Authors: Jhovana Silvia Escobar Ortega, Ines Eugenia Garcia De Salamone

Abstract:

Cover crops are an agri-technological alternative for improving soil properties. Cover crops such as oats and rye could be used to reduce erosion and favor system sustainability when they are grown in the same agricultural cycle as the soybean crop. Soybean is very profitable, but its low contribution of easily decomposable residues, due to its low C/N ratio, leaves the soil exposed to erosion and raises the need to reduce its monoculture. Furthermore, inoculation with plant growth-promoting rhizobacteria contributes to the establishment, development and production of several cereal crops. However, there is little information on their effects on forage crops, which are often used as cover crops to improve soil quality. In order to evaluate the effect of combined inoculation with Azospirillum brasilense and Pseudomonas fluorescens on rhizosphere microbial communities, field experiments were conducted in the west of Buenos Aires province, Argentina, with a split-split-plot randomized complete block factorial design with three replicates. The factors were type of cover crop, inoculation and fertilization. In the main plot, two levels of fertilization, 0 and 7 40-0-5 (NPKS), were established at sowing. Rye (Secale cereale cultivar Quehué) and oats (Avena sativa var. Aurora) were sown in the subplots. In the sub-subplots, two inoculation treatments were applied: without and with application of a combined inoculant containing A. brasilense and P. fluorescens. Because the growth of cover crops usually has to be stopped with the herbicide glyphosate, rhizosphere soil from the 0-20 and 20-40 cm layers was sampled at three times: before glyphosate application (BG), one month after glyphosate application (AG) and at soybean harvest (SH). Community-level physiological profiles (CLPP) and the Shannon index of microbial diversity (H) were obtained by multivariate Principal Component analysis. In addition, the most probable number (MPN) of nitrifiers and cellulolytics was determined using selective liquid media for each functional group. The CLPP of the rhizosphere microbial communities showed significant differences between sampling times. There was no interaction between sampling time and either type of cover crop or inoculation. Rhizosphere microbial communities in samples obtained BG had different CLPP from those obtained at AG and SH. Fertilizer and sampling depth also caused changes in the CLPP. The H diversity index of the rhizosphere microbial communities of rye at the BG sampling time was higher than that associated with oats. The MPN of both microbial functional types was lower in the deeper layer, since these microorganisms are mostly aerobic. The MPN of nitrifiers decreased in the rhizosphere of both cover crops only at AG. At the BG sampling time, the MPN of both microbial types was larger than at AG and SH. This may mean that glyphosate application caused fairly permanent changes in these microbial communities, which can be considered bio-indicators of soil quality. Inoculation and fertilizer inputs could be included to improve the management of these cover crops because they can have a significant positive effect on the sustainability of the agro-ecosystem.
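
A sketch of the Shannon diversity index H computed from community-level physiological profile (CLPP) responses; the well counts are hypothetical.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' from substrate-use or taxon counts."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical CLPP well responses for one rhizosphere sample
print(shannon_index([12, 8, 30, 5, 20, 9]))
```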

Keywords: community level of physiological profiles, microbial diversity, plant growth promoting rhizobacteria, rhizosphere microbial communities, soil quality, system sustainability

Procedia PDF Downloads 408
189 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a problem worldwide, and adolescents' involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver's license. Participants' mean age was 16.1 (SD = .23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering the tester's exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (MathWorks, USA). Participants' cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attentional flexibility, which likely affects driving. Additionally, the lower the results on the cognitive tests, the worse the driving performance was. None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 330
188 A Case Study on an Integrated Analysis of Well Control and a Blowout Accident

Authors: Yasir Memon

Abstract:

The complexity and challenges in the offshore industry are greater than in the past, and the oil and gas industry is expanding every day by meeting these challenges. More challenging wells, longer and deeper, are being drilled in today's environment. Blowout prevention is therefore of great importance in the oil and gas world. In earlier years, when the oil and gas industry was still developing, drilling operations were extremely dangerous: there was no technology to determine reservoir pressure, and drilling was hence a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. The potential for a blowout is a danger to both the environment and human life, and environmental damage, action by state and national regulators, and the capital investment involved all result in losses. There have been many blowouts in the oil and gas industry that caused damage to both people and the environment, and huge capital investments are being made around the world to prevent blowouts and keep damage to the lowest possible level. The objective of this study is to promote safety and good resources to assure safety and environmental integrity in all drilling operations. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods and control techniques, can reduce the probability of a blowout to a minimum. It also discusses the basic procedures, concepts and equipment involved in well control methods and the various steps used under different conditions. Another aim of this work is to highlight the role of management in oil and gas operations. Moreover, this study analyzes the causes of the blowout of the Macondo well, which occurred in the Gulf of Mexico on April 20, 2010, delivers recommendations and an analysis of various aspects of well control methods, and provides a list of the mistakes and compromises that British Petroleum and its partners made during drilling and well completion; the Macondo well disaster happened due to violations of various safety and development rules. This case study concludes that the Macondo well blowout disaster could have been avoided with proper management of personnel and communication between them, and that by following safety rules and laws the environmental damage could have been kept to a minimum.

Keywords: energy, environment, oil and gas industry, Macondo well accident

Procedia PDF Downloads 189
187 Automated Adaptations of Semantic User and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g. formal model-theoretic semantics, rule-based reasoning and machine learning) for capturing different aspects of the behavior, activities and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g. physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context model. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states. The AAL domain currently lacks sufficient approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g. user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches based on given expert knowledge. The semantic domain model consists basically of device, service and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
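
As a rough illustration of estimating the probability that a rule matches the current context from sensor evidence, the sketch below applies a naive Bayes update with expert-defined priors and likelihoods; the rule, sensors and numbers are invented and do not describe the authors' actual model.

```python
def rule_match_probability(prior, likelihoods, evidence):
    """Posterior probability that a rule matches the current context,
    given independent sensor observations (naive Bayes update)."""
    posterior_match, posterior_no_match = prior, 1.0 - prior
    for sensor, observed in evidence.items():
        p_given_match, p_given_no_match = likelihoods[sensor]
        if observed:
            posterior_match *= p_given_match
            posterior_no_match *= p_given_no_match
        else:
            posterior_match *= 1.0 - p_given_match
            posterior_no_match *= 1.0 - p_given_no_match
    return posterior_match / (posterior_match + posterior_no_match)

# Hypothetical rule "user is cooking": expert prior and per-sensor likelihoods
likelihoods = {"stove_on": (0.9, 0.05), "kitchen_motion": (0.8, 0.2)}
evidence = {"stove_on": True, "kitchen_motion": True}
print(rule_match_probability(prior=0.1, likelihoods=likelihoods, evidence=evidence))
```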

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 282
186 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT

Authors: Jae Ni Jang, Young Uk Kim

Abstract:

Background: Chat Generative Pre-training Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the usage of ChatGPT is increasing among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3's responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification of pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing them with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, resulting in an accuracy rate of 48.9%. Furthermore, the responses provided by ChatGPT-3 regarding children and pregnant women were mostly inaccurate, as indicated by a 28% accuracy rate (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future. Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT currently has a clinically unacceptable rate of error and hallucination, particularly for pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
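
The accuracy rates reported above (23/47 and 5/18) can be accompanied by simple confidence intervals; the sketch below uses a normal-approximation interval, which is an illustration rather than part of the authors' analysis.

```python
import math

def accuracy_with_ci(correct, total, z=1.96):
    """Accuracy and a normal-approximation 95% confidence interval."""
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

print(accuracy_with_ci(23, 47))  # adult questions: ~48.9%
print(accuracy_with_ci(5, 18))   # pediatric/pregnancy questions: ~27.8%
```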

Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT

Procedia PDF Downloads 50
185 Comparison of Risk Analysis Methodologies through the Identification of Consequences in Chemical Accidents Associated with the Storage of Dangerous Flammable Goods

Authors: Daniel Alfonso Reséndiz-García, Luis Antonio García-Villanueva

Abstract:

As a result of high industrial activity, which arises from the effort to satisfy society's needs for products and services, several chemical accidents have occurred, causing serious damage in different sectors: human, economic, infrastructure and environmental losses. Historically, the study of these chemical accidents has shown that the causes are mainly human error (inexperienced personnel, negligence, lack of maintenance and deficient risk analysis). Industries aim to increase production and reduce costs. However, it should be kept in mind that the costs involved in risk studies and the implementation of barriers and safety systems are much lower than paying for the damage that could occur in the event of an accident, without forgetting that there are things that cannot be replaced, such as human lives. Therefore, it is of utmost importance to implement risk studies in all industries, as they provide information for prevention and planning. The aim of this study is to compare risk methodologies by identifying the consequences of accidents related to the storage of dangerous flammable goods, for decision making and emergency response. The methodologies considered in this study are qualitative and quantitative risk analysis and consequence analysis, the latter by means of modeling software, which provides the radius of affectation and the possible scope and magnitude of damage. Using risk analysis, possible scenarios for the occurrence of chemical accidents in the storage of flammable substances are identified. Once the possible risk scenarios have been identified, the characteristics of the substances, their storage and the atmospheric conditions are entered into the software. The results provide information that allows the implementation of prevention, detection, control and response elements for emergencies, thus providing the necessary tools to avoid the occurrence of accidents and, if they do occur, to significantly reduce the magnitude of the damage. This study highlights the importance of risk studies that apply the tools best suited to each case. It also demonstrates the importance of knowing the risk exposure of industrial activities for better prevention, planning and emergency response.

Keywords: chemical accidents, emergency response, flammable substances, risk analysis, modeling

Procedia PDF Downloads 93
184 Role of Maternal Astaxanthin Supplementation on Brain-Derived Neurotrophic Factor and Spatial Learning Behavior in Wistar Rat Offspring

Authors: K. M. Damodara Gowda

Abstract:

Background: Maternal health and nutrition are considered the predominant factors influencing functional brain development. If the mother is free of illness and genetic defects, maternal nutrition is one of the most critical factors affecting brain development. Calorie restriction causes significant impairment in spatial learning ability and in the levels of Brain Derived Neurotrophic Factor (BDNF) in rats. However, the mechanism by which prenatal under-nutrition leads to impairment of learning and memory function is still unclear. In the present study, the effect of prenatal astaxanthin supplementation on BDNF levels and on spatial learning and memory performance in the offspring of normal, calorie-restricted and astaxanthin-supplemented rats was investigated. Methodology: The rats were administered 6 mg and 12 mg of astaxanthin per kg body weight for 21 days, following which acquisition and retention of spatial memory were tested in a partially baited eight-arm radial maze. The BDNF level in different regions of the brain (cerebral cortex, hippocampus and cerebellum) was estimated by the ELISA method. Results: Calorie-restricted animals treated with astaxanthin made significantly more correct choices (P < 0.05) and fewer reference memory errors (P < 0.05) on the tenth day of training compared to the offspring of calorie-restricted animals. Calorie-restricted animals treated with astaxanthin also made significantly more correct choices (P < 0.001) than untreated calorie-restricted animals in a retention test 10 days after the training period. The mean BDNF levels in the cerebral cortex, hippocampus and cerebellum of calorie-restricted animals treated with astaxanthin did not differ significantly from those of control animals. Conclusion: The findings of the study indicate that memory and learning were impaired in the offspring of calorie-restricted rats, and that this impairment was effectively modulated by astaxanthin at a dosage of 12 mg/kg body weight. Likewise, the BDNF levels in the cerebral cortex, hippocampus and cerebellum declined in the offspring of calorie-restricted animals, and this decline was also effectively normalized by astaxanthin.

Keywords: calorie restriction, learning, memory, cerebral cortex, hippocampus, cerebellum, BDNF, astaxanthin

Procedia PDF Downloads 233
183 Hardness Map of Human Tarsals, Metatarsals and Phalanges of Toes

Authors: Irfan Anjum Manarvi, Zahid Ali Kaimkhani

Abstract:

Predicting the location of fractures in human bones has been a keen area of research for the past few decades. A variety of tests for hardness, deformation, and strain field measurement have been conducted in the past but are considered insufficient due to various limitations, and researchers have therefore proposed further studies, citing inaccuracies in measurement methods, testing machines, and experimental errors. Advances in, and the availability of, hardware, measuring instrumentation, and testing machines can now remedy these limitations. The human foot is a critical part of the body, exposed to various forces throughout its life. A number of products are developed for its protection and care, which often do not provide sufficient protection and may themselves become a source of stress when the delicacy of the bones in the feet is not considered. Continuous strain or overloading of the feet may occur, resulting in discomfort and even fracture. The mechanical properties of the tarsals, metatarsals, and phalanges are therefore the primary consideration for all such design applications. Hardness is one of the mechanical properties considered most important for establishing the mechanical resistance of a material against applied loads. Past researchers have investigated the mechanical properties of these bones; however, their results were based on a limited number of experiments and on average hardness values, owing to limitations of either samples or testing instruments, and they proposed further studies in this area. The present research was carried out to develop a hardness map of the human foot by measuring microhardness at various locations on these bones. Results are compiled as the distance from a reference point on a bone together with the hardness values for each surface. The number of test results far exceeds that of previous studies, and the measurements are spread over each bone to give a complete hardness map of these bones. These results could also be used to establish other properties, such as stress and strain distribution in the bones, and industrial engineers could use them in the design and development of accessories for foot health care and comfort, as well as in further research in these areas.
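As an illustration of how such a map could be compiled, the sketch below converts indentation readings into hardness values and pairs them with their distance from a reference point. It assumes Vickers-type microindentation (the abstract does not name the hardness scale used), and the load and diagonal readings are hypothetical.

import statistics

# Vickers microhardness from the indenter load (kgf) and the two indent diagonals (mm):
# HV = 1.8544 * F / d^2, with d the mean diagonal.
def vickers_hv(load_kgf, d1_mm, d2_mm):
    d = statistics.mean([d1_mm, d2_mm])
    return 1.8544 * load_kgf / d ** 2

# Hypothetical readings along one bone surface: (distance from reference point in mm, d1, d2).
readings = [(2.0, 0.062, 0.060), (6.0, 0.058, 0.059), (10.0, 0.065, 0.066)]
hardness_map = [(dist, round(vickers_hv(0.1, d1, d2), 1)) for dist, d1, d2 in readings]
print(hardness_map)   # list of (distance, HV) pairs for one bone surface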

Keywords: tarsals, metatarsals, phalanges, hardness testing, biomechanics of human foot

Procedia PDF Downloads 421
182 Interlanguage Acquisition of a Postposition ‘e’ in Korean: Analysis of the Korean Novice Learners’ Output

Authors: Eunjung Lee

Abstract:

This study aims to analyze sentences produced by beginners learning the Korean postposition 'e' and to identify regularities in their interlanguage by investigating the meanings and functions with which 'e' appears and the conditions under which it is used. The study rests on two main assumptions: first, that learner language constitutes a specific type of interlanguage; and second, that regularities appear when students produce 'e' under specific conditions. Learners' output has considerable value and can serve as useful data for understanding interlanguage. Therefore, all sentences containing the postposition 'e' produced by English-speaking learners were retrieved from the Learners' Corpus Sharing Center of the National Institute of Korean Language, the data were limited to learners at Levels 1 and 2, and 789 sentences containing 'e' were selected for the final analysis. First, to understand the environments in which the postposition is used, the 13 meanings and functions of 'e' listed in three Korean grammar reference dictionaries were summarized; then (1) the meaning function of 'e' used in each sentence was classified; (2) the nouns combined with 'e', the keywords of the sentences, and the characteristics of the modifiers, linkers, and predicates appearing around 'e' were analyzed; (3) the regularities in the novice learners' meaning functions were reviewed; and (4) the differences in regularity between Level 1 and Level 2 learners' meaning functions were examined. The results showed that the novice learners (1) mainly used nouns related to 'time (시간), before (전), after (후), next (다음), the next (그다음), then (때), day of the week (요일), and season (계절)' in front of 'e' when using it with the meaning function of time; (2) mainly used the verbs 'go (가다)', 'come (오다)', and 'go around (다니다)' as predicates with 'e' marking direction or destination; and (3) mainly used nouns related to locations or countries in front of 'e' marking place, mainly used the verbs 'be (있다), not be (없다), live (살다), be many (많다)' after 'e', combined 'i (이)' or 'ka (가)' with the subject in the case of 'be (있다)', 'not be (없다)', or 'be many (많다)', and combined 'eun (은)' or 'nun (는)' with the subject in front of 'live (살다)'. In addition, (4) they used 'e' indicating cause or reason in the form of 'because (때문에)', and (5) used 'e' with predicates such as 'treat (대하다), like (들다), and catch (걸리다)'. These results show that the novice learners' usage patterns of 'e' differed considerably across meaning functions, and regularities in their interlanguage could be deduced; however, little difference in regularity was found between Level 1 and Level 2 learners. This study contributes to understanding the interlanguage system and its regularities in learners' acquisition of the postposition 'e', and the findings can be used to reduce learner errors.
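The tallying step behind such an analysis can be sketched in a few lines: for each occurrence of 'e', record its coded meaning function and the noun it attaches to, then count which nouns dominate each function. The annotated examples and labels below are hypothetical placeholders, not items from the 789-sentence dataset.

from collections import Counter, defaultdict

# Hypothetical annotated occurrences of 'e' (meaning function, preceding noun).
annotated = [
    {"function": "time",      "noun": "요일"},   # day of the week
    {"function": "time",      "noun": "계절"},   # season
    {"function": "direction", "noun": "학교"},   # school
    {"function": "place",     "noun": "한국"},   # Korea
    {"function": "place",     "noun": "집"},     # house
]

by_function = defaultdict(Counter)
for item in annotated:
    by_function[item["function"]][item["noun"]] += 1

for function, nouns in by_function.items():
    total = sum(nouns.values())
    print(function, nouns.most_common(3), f"({total} tokens)")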

Keywords: interlanguage, interlanguage analysis, postposition 'e', Korean acquisition

Procedia PDF Downloads 129
181 A Measurement and Motor Control System for Free Throw Shots in Basketball Using a Gyroscope Sensor

Authors: Niloofar Zebarjad

Abstract:

This research aims at finding a tool that provides basketball players with real-time audio feedback on their shooting form in free throw shots. Free throws play a pivotal role in taking the lead in close competitions. The major problem in performing an accurate free throw appears to be improper training: since the arm movement during the free throw shot is complex, the coach or the athlete may miss movement details during practice. Hence, there is a need for a system that measures the critical characteristics of arm movement and controls for improper kinematics. The setup proposed in this study uses a gyroscope sensor to quantify arm kinematics and provides real-time feedback as an audio signal. Spatial shoulder angle data are transmitted to a mobile application in real time and can be saved and processed for statistical analysis. The proposed system is easy to use, inexpensive, portable, and applicable in real time. Objectives: This research aims to modify and control the free throw using audio feedback, to determine whether, and to what extent, the new setup reduces errors in arm formation during throws, and finally to assess the successful-throw rate. Methods: One group of elite basketball athletes and two groups of novice athletes (control and study groups) participated in this study. Each group contained five participants, studied in three separate sessions over a week. Results: Empirical results showed improvements in free throw shooting style in both the shot pocket (SP) and the locked position (LP). The mean shoulder angles were controlled at 25° and 45° for SP and LP, respectively, as recommended by FIBA references. Conclusion: Throughout the experiments, the system helped correct and control the shoulder angles toward the targeted pattern for the shot pocket (SP) and locked position (LP). Given the desired results for arm motion, adding another sensor to measure and control the elbow angle is recommended.
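A minimal sketch of the feedback logic is given below, assuming a gyroscope that streams angular velocity about the shoulder axis at a fixed sample rate; the sensor interface, sample values, tolerance, and audio cue are hypothetical, while the 25° and 45° targets come from the abstract.

# Sketch: integrate angular velocity (deg/s) to a shoulder angle and cue when off target.
TARGETS = {"shot_pocket": 25.0, "locked_position": 45.0}   # deg, per the abstract
TOLERANCE = 3.0                                            # deg, hypothetical

def track_shoulder_angle(omega_stream, dt, phase):
    angle = 0.0
    for omega in omega_stream:
        angle += omega * dt                    # simple rectangular integration
        if abs(angle - TARGETS[phase]) > TOLERANCE:
            yield angle, "beep"                # placeholder for the audio cue
        else:
            yield angle, "ok"

# Hypothetical 50 Hz samples during the raise to the shot pocket.
samples = [120.0] * 8 + [40.0] * 4 + [0.0] * 3             # deg/s
for angle, cue in track_shoulder_angle(samples, dt=0.02, phase="shot_pocket"):
    print(f"{angle:5.1f} deg -> {cue}")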

Keywords: audio-feedback, basketball, free-throw, locked-position, motor-control, shot-pocket

Procedia PDF Downloads 296
180 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation

Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu

Abstract:

The history of satellite navigation dates back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has improved greatly. Nowadays, the navigation accuracy and coverage of these existing systems fully meet the requirements of near-Earth users, but they remain beyond the reach of deep space targets. With the renewed interest in space exploration, a high-precision satellite navigation system for deep space is becoming even more important. The increasing demand for such a system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel architecture consisting of libration point satellites in the Earth-Moon system is also available for constructing a lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and has since been pursued by many other researchers. Moreover, owing to the special characteristics of libration point orbits, an autonomous orbit determination technique called 'Liaison navigation' can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, the orbits of both the user and the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions. The state estimation method is a non-negligible factor affecting orbit determination accuracy, alongside the orbit type, initial state accuracy, and measurement accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of Lagrangian navigation satellites, and the autonomous orbit determination errors are compared. The simulation results illustrate that the UKF can improve the accuracy and z-axis convergence to some extent.
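The core difference between the two filters is how they propagate uncertainty through nonlinear models: the EKF linearizes about the current estimate, while the UKF pushes a small set of sigma points through the nonlinearity. The sketch below shows that unscented transform applied to a toy satellite-to-satellite range measurement; the state values, covariance, and the anchor satellite position are hypothetical and do not reproduce the paper's simulation.

import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Merwe scaled sigma points: 2n+1 points that capture the mean and covariance."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                  # matrix square root of (n+lam)P
    pts = np.vstack([x, x + S.T, x - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(pts, wm, wc, f):
    """Propagate sigma points through a nonlinear function and recover mean/covariance."""
    Y = np.array([f(p) for p in pts])
    mean = wm @ Y
    diff = Y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov

# Toy scalar range measurement to a fixed anchor, echoing the satellite-to-satellite data type.
range_fn = lambda s: np.array([np.linalg.norm(s[:3] - np.array([7000.0, 0.0, 0.0]))])
x = np.array([6800.0, 500.0, 300.0, 0.0, 7.5, 0.0])        # hypothetical state: position (km), velocity (km/s)
P = np.diag([10.0, 10.0, 10.0, 0.01, 0.01, 0.01])
pts, wm, wc = sigma_points(x, P)
z_mean, z_cov = unscented_transform(pts, wm, wc, range_fn)
print(z_mean, z_cov)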

Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation

Procedia PDF Downloads 285
179 Speaking Anxiety: Sources, Coping Mechanisms and Teacher Management

Authors: Mylene T. Caytap-Milan

Abstract:

This study was carried out to determine students' anxieties towards spoken English, the sources of that anxiety, the coping mechanisms used to counter these apprehensions, and the teacher management strategies that reduce anxiety in the classroom. Being qualitative in nature, the study used interviews as the data-gathering tool, recorded with an audio recorder. The participants included thirteen teachers and students of speech classes in a state university in Region I, Philippines. The data elicited were transcribed verbatim, confirmed by the participants, coded, categorized, and themed accordingly. A triangulation method was applied to strengthen the validity of the data. The findings confirmed teachers' and students' awareness of the existence of anxiety in speaking English (ASE). Based on the data gathered from the teachers, the following themes on students' ASE were identified: (1) No Brain and Mouth Coordination, (2) Center of Attention, and (3) Acting Out Loud. The following themes were formulated from the responses of the students themselves: (1) The Common Feeling, (2) The Incompetent Me, and (3) The Limelight. The sources of students' ASE according to the teachers are: (1) It Began at Home, (2) It Continued in School, and (3) It's Not for Me at All; according to the students themselves, they are: (1) It Comes from Within, (2) It Wasn't Nursed Well, and (3) They're Looking for Errors. In terms of coping with ASE, the students identified mechanisms that were themed into: (1) Acceptance, (2) Application, and (3) Apathy. Moreover, to reduce ASE within the classroom, the teachers take on the following roles according to the themes: (1) The Compass, (2) The Counselor, (3) The Referee, (4) The Polyglot, and (5) The English Nazi. Based on the findings, the following conclusions were drawn: (1) ASE can have both positive and negative influences on students' English-speaking skills; (2) ASE can be reduced when teachers provide more English-speaking opportunities and when students take the initiative in personal training; (3) ASE can be reduced when English is introduced and practiced by children at an early age; and (4) ASE is inevitable in the affective domain, and teachers are thus encouraged to apply psychological positivism in the classroom. Related studies may consider the following recommendations: (1) experiment with activities that will reduce ASE, (2) involve a psychologist for more critical and reliable results and recommendations, and (3) conduct the study among high school and primary students.

Keywords: coping mechanisms, sources, speaking anxiety, teacher management

Procedia PDF Downloads 115
178 An Educational Electronic Health Record with a Configurable User Interface

Authors: Floriane Shala, Evangeline Wagner, Yichun Zhao

Abstract:

Background: Proper educational training and support are proven to be major components of EHR (Electronic Health Record) implementation and use. However, the majority of health providers are not sufficiently trained in EHR use, leading to adverse events, errors, and decreased quality of care. In response, students studying Health Information Science, Public Health, Nursing, and Medicine should all gain a thorough understanding of EHR use at different levels and for different purposes. The design of a usable and safe EHR system that accommodates the needs and workflows of different users, user groups, and disciplines is required for EHR learning to be efficient and effective. Objectives: This project builds several artifacts that address both the educational and usability aspects of an educational EHR. The proposed artifacts are models for, and examples of, such an EHR with a configurable UI, to be learned by students who need a background in EHR use during their degrees. Methods: The literature was reviewed, and professional opinions were gathered from domain experts on usability, the use of workflow patterns, UI configurability and design, and the educational aspects of EHR use. Interviews were conducted in a semi-casual virtual setting with open discussion to gain a deeper understanding of the principal aspects of EHR use in educational settings. A specific task and user group were then selected to illustrate how the proposed solution functions, and three artifacts were developed based on the available research, professional opinions, and prior knowledge of the topic. The artifacts capture the user's task and interactions with the EHR for learning. The first, a generic model, provides a general understanding of the EHR system process. The second is a specific example of performing the task of MRI ordering with a configurable UI. The third includes UI mock-ups showcasing the models in a practical and visual way. Significance: Due to the lack of educational EHRs, medical professionals do not receive sufficient EHR training. Implementing an educational EHR with a usable and configurable interface suited to the needs of different user groups and disciplines will help facilitate EHR learning and training and ultimately improve the quality of patient care.
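To make the idea of a configurable UI concrete, the sketch below expresses a per-discipline layout as data rather than code. The panel names, user groups, and field choices are illustrative assumptions and are not taken from the project's artifacts; only the MRI-ordering task is mentioned in the abstract.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PanelConfig:
    name: str
    editable: bool = False

@dataclass
class UIConfig:
    user_group: str
    task: str
    panels: List[PanelConfig] = field(default_factory=list)

def build_config(user_group: str) -> UIConfig:
    """Return a UI layout tailored to the learner's discipline for the MRI-ordering task."""
    base = [PanelConfig("patient summary"), PanelConfig("allergies")]
    if user_group == "medicine":
        base.append(PanelConfig("order entry", editable=True))
    elif user_group == "nursing":
        base.append(PanelConfig("order review"))
    return UIConfig(user_group=user_group, task="MRI ordering", panels=base)

print(build_config("medicine"))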

Keywords: education, EHR, usability, configurable

Procedia PDF Downloads 158
177 Frequency of Consonant Production Errors in Children with Speech Sound Disorder: A Retrospective-Descriptive Study

Authors: Amulya P. Rao, Prathima S., Sreedevi N.

Abstract:

Speech sound disorders (SSD) are a major concern in the younger population of India, with the highest prevalence rate among speech disorders. Children with SSD who are not identified and rehabilitated at the earliest are at risk of academic difficulties. This necessitates early identification using screening tools that assess the frequently misarticulated speech sounds. The literature on frequently misarticulated speech sounds is ample in English and other Western languages, targeting individuals with various communication disorders. Articulation is language specific, and there are limited studies reporting the same for Kannada, a Dravidian language. Hence, the present study aimed to identify the frequently misarticulated consonants in Kannada and to examine the error types. A retrospective, descriptive study was carried out using secondary data analysis of 41 participants with SSD (34 phonetic type and 7 phonemic type) in the age range of 3 to 12 years. All the consonants of Kannada were analyzed by considering three words for each speech sound from the Kannada Diagnostic Photo Articulation Test (KDPAT). A picture naming task was carried out, and the responses were audio recorded. The recorded data were transcribed using IPA 2018 broad transcription. A criterion of 2/3 or 3/3 error productions was set for considering a speech sound to be in error. The number of error productions was calculated for each consonant in each participant, and the percentage of participants meeting the criterion was then documented for each consonant to identify the frequently misarticulated speech sounds. Overall, the results indicated that the velars /k/ (48.78%) and /g/ (43.90%) were the most frequently misarticulated, followed by the voiced retroflex /ɖ/ (36.58%) and the trill /r/ (36.58%). The lateral retroflex /ɭ/ was misarticulated by 31.70% of the children with SSD. The dentals (/t/, /n/), bilabials (/p/, /b/, /m/) and labiodental /v/ were produced correctly by all participants. The frequently misarticulated velars /k/ and /g/ were usually substituted by the dentals /t/ and /d/, respectively, or omitted. Participants with SSD-phonemic type had multiple substitutions for one speech sound, whereas those with SSD-phonetic type had consistent single-sound substitutions. Intra- and inter-judge reliability for 10% of the data using Cronbach's alpha was good (0.8 ≤ α < 0.9). Replicating such studies with a larger sample would validate the present results.
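The scoring step described above can be sketched as follows; the transcription records are hypothetical stand-ins for the KDPAT responses, while the 2/3-or-3/3 criterion is the one stated in the abstract.

# Sketch: apply the 2/3-or-3/3 error criterion per consonant and report the
# percentage of participants in error for each consonant.
records = {
    "child_01": {"k": ["t", "t", "k"], "g": ["g", "g", "g"], "r": ["l", "l", "l"]},
    "child_02": {"k": ["k", "k", "k"], "g": ["d", "d", "g"], "r": ["r", "l", "r"]},
}

def in_error(target, productions):
    """A consonant counts as an error when 2/3 or 3/3 productions deviate from the target."""
    return sum(p != target for p in productions) >= 2

def error_percentages(records):
    consonants = {c for child in records.values() for c in child}
    return {
        c: 100.0 * sum(in_error(c, child[c]) for child in records.values() if c in child)
           / sum(1 for child in records.values() if c in child)
        for c in sorted(consonants)
    }

print(error_percentages(records))   # e.g. {'g': 50.0, 'k': 50.0, 'r': 50.0}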

Keywords: consonant, frequently misarticulated, Kannada, SSD

Procedia PDF Downloads 139
176 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods, which rely on time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating and that, a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamic principles, existing TRL-9 technology, and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank, which can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to pressure sensor drifts, which become critical towards the end of a mission's life, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the mass gauging error in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error for the liquid phase is 0.6% of the initial tank load. This gauging method improves the accuracy of the standard PVT retrievals, which use look-up tables of tabulated data from the National Institute of Standards and Technology, by a factor of 8.
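For context, the sketch below shows the basic real-gas PVT relation, m = pVM/(ZRT), on which gauging methods of this kind build; the tank volume, readings, and compressibility factor are placeholder values, and the corrections that make up the Improved PVT method itself are not reproduced here.

R = 8.314462618        # J/(mol K), universal gas constant
M_XENON = 0.131293     # kg/mol, molar mass of xenon

def pvt_mass(pressure_pa, volume_m3, temperature_k, z_factor):
    """Real-gas PVT estimate: m = p V M / (Z R T).

    z_factor is the compressibility factor, normally interpolated from tabulated
    equation-of-state data (e.g., NIST tables) at the measured (p, T).
    """
    return pressure_pa * volume_m3 * M_XENON / (z_factor * R * temperature_k)

# Hypothetical tank readings: 100 bar, 50-litre tank, 40 °C skin temperature.
# Z = 0.65 is only a placeholder; xenon is strongly non-ideal at these pressures.
m = pvt_mass(100e5, 0.050, 313.15, z_factor=0.65)
print(f"estimated propellant mass: {m:.2f} kg")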

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 346
175 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization, and climate change; hence, if models are to be adopted as decision support tools for sustainable groundwater management, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study evaluates the suitability of satellite-based data as a substitute for ground-based data in computing boundary conditions, by determining whether ground-based and satellite-based meteorological data agree well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), the coefficient of correlation (r), and the coefficient of determination (R²), all meteorological variables except wind speed fit well. It further revealed that maximum and minimum temperature, relative humidity, and rainfall had high values of the index of agreement (d) and the ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
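The goodness-of-fit statistics named above can be computed directly from paired monthly series, as in the sketch below; the station and CFSR values are hypothetical, R² is taken here as the squared Pearson correlation, and d follows Willmott's index of agreement.

import numpy as np

def goodness_of_fit(obs, sim):
    """GOF statistics for a ground-station series (obs) versus a CFSR series (sim)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    o_bar = obs.mean()
    mae = np.mean(np.abs(sim - obs))                       # mean absolute error
    r = np.corrcoef(obs, sim)[0, 1]                        # Pearson correlation
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - o_bar) + np.abs(obs - o_bar)) ** 2)  # Willmott's index of agreement
    rsd = sim.std(ddof=1) / obs.std(ddof=1)                # ratio of standard deviations
    return {"MAE": mae, "r": r, "R2": r ** 2, "d": d, "rSD": rsd}

# Hypothetical monthly rainfall (mm) for one station versus the co-located CFSR grid cell.
station = [12.0, 30.5, 88.1, 140.2, 210.7, 180.3, 95.6, 60.0]
cfsr    = [15.2, 28.0, 95.4, 130.8, 220.1, 170.9, 100.2, 55.3]
print(goodness_of_fit(station, cfsr))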

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 130