Search results for: Errors and Mistakes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1072

202 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Quantitative research on the main control factors of lost circulation has so far considered few factors and relied on single data sources. Using an unmanned intervention algorithm to find the main control factors of lost circulation allows all measurable parameters to be exploited. The degree of lost circulation is characterized by the loss rate, which serves as the objective function. Geological, engineering, and fluid data are used as layers, and 27 factors, such as wellhead coordinates and weight on bit (WOB), are used as dimensions. Data classification is implemented to determine the independent variables of the function. A mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the method of undetermined coefficients is used to solve for the coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. Funnel viscosity, final shear force, and drilling time were selected as the main control factors by the elimination method, the contribution rate method, and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m³/h and -2.374 m³/h, errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of the three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force, and drilling time is calculated quantitatively; the minimum loss rate of lost circulation wells in the Shunbei area is 10 m³/h. It follows that man-made main control factors can only slow down the leakage but cannot fundamentally eliminate it, which is consistent with the characteristics of the karst caves and fractures of the Shunbei fault-solution oil and gas reservoir.
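As a minimal sketch of the screening step described above (fitting the loss rate against candidate factors and ranking them by t-statistic), assuming synthetic placeholder data rather than the Shunbei dataset:

```python
# Minimal sketch of regression-based factor screening: fit loss rate against
# candidate factors, then rank factors by t-statistic. Data, factor meanings,
# and thresholds are synthetic placeholders, not the Shunbei dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Three hypothetical factors, e.g. funnel viscosity, final shear force, drilling time
X = rng.normal(size=(n, 3))
loss_rate = 10 + 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = sm.OLS(loss_rate, sm.add_constant(X)).fit()
print(model.summary())    # coefficients with t-tests and the overall F-test
print(model.tvalues[1:])  # rank |t| to select candidate main control factors
```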

Keywords: drilling and completion, drilling fluid, lost circulation, loss rate, main controlling factors, unmanned intervention algorithm

Procedia PDF Downloads 112
201 Tsunami Wave Height and Flow Velocity Calculations Based on Density Measurements of Boulders: Case Studies from Anegada and Pakarang Cape

Authors: Zakiul Fuady, Michaela Spiske

Abstract:

Inundation events such as storms and tsunamis can leave onshore sedimentary evidence like sand deposits or large boulders. These deposits store indirect information on the related inundation parameters (e.g., flow velocity, flow depth, wave height). One tool to reveal these parameters is inverse models that use the physical characteristics of the deposits to infer the magnitude of inundation. This study used boulders of the 2004 Indian Ocean Tsunami from Thailand (Pakarang Cape) and from a historical tsunami event that inundated the outer British Virgin Islands (Anegada). For the largest boulder found in Pakarang Cape, with a volume of 26.48 m³, the required tsunami wave height is 0.44 m and the required storm wave height is 1.75 m (for a bulk density of 1.74 g/cm³). In Pakarang Cape, the highest tsunami wave height is 0.45 m and the storm wave height is 1.8 m for transporting a 20.07 m³ boulder. On Anegada, the largest boulder, with a diameter of 2.7 m, is a single coral head (species Diploria sp.) with a bulk density of 1.61 g/cm³, and requires a minimum tsunami wave height of 0.31 m and a storm wave height of 1.25 m. The highest required tsunami wave height on Anegada is 2.12 m for a boulder with a bulk density of 2.46 g/cm³ (volume 0.0819 m³), and the highest storm wave height is 5.48 m (volume 0.216 m³) for the same bulk density; the coral type is limestone. Generally, the higher the bulk density, volume, and weight of the boulders, the higher the minimum tsunami and storm wave heights required to initiate transport. Transporting the largest boulder in Pakarang Cape requires a flow velocity of 4.05 m/s by Nott's equation (2003) and 3.57 m/s by Nandasena et al. (2011), whereas on Anegada a flow velocity of 3.41 m/s is required to transport the boulder with a diameter of 2.7 m by both equations. Thus, boulder equations need to be handled with caution because they make many assumptions and simplifications. In addition, the physical boulder parameters, such as density and volume, need to be determined carefully to minimize errors.
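For illustration, a hedged sketch of a minimum-flow-velocity estimate of the kind referred to above. The formula below is one commonly quoted form of Nott's (2003) submerged-boulder criterion; the exact equations and coefficient values (drag Cd, lift Cl) should be checked against Nott (2003) and Nandasena et al. (2011), and all numbers here are illustrative only.

```python
# Hedged sketch of a minimum flow velocity for initiating transport of a
# submerged boulder, in one commonly quoted form of Nott's (2003) criterion.
# Verify the formula and coefficients against the original papers before use.
import math

def min_flow_velocity(a, b, c, rho_s, rho_w=1.025, Cd=1.95, Cl=0.178, g=9.81):
    """a, b, c: boulder axes (m); rho_s, rho_w: densities (same units, e.g. g/cm^3)."""
    num = 2.0 * a * g * (rho_s - rho_w) / rho_w
    den = Cd * (a * c / b**2) + Cl
    return math.sqrt(num / den)

# Illustrative call for a 2.7 m coral head with bulk density 1.61 g/cm^3
print(round(min_flow_velocity(2.7, 2.7, 2.7, 1.61), 2), "m/s")
```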

Keywords: tsunami wave height, storm wave height, flow velocity, boulders, Anegada, Pakarang Cape

Procedia PDF Downloads 238
200 Analysis of Autonomous Orbit Determination for Lagrangian Navigation Constellation with Different Dynamical Models

Authors: Gao Youtao, Zhao Tanran, Jin Bingyu, Xu Bo

Abstract:

Global navigation satellite systems (GNSS) can deliver navigation information for spacecraft on low and medium Earth orbits. However, GNSS cannot effectively navigate spacecraft on high Earth orbits or deep space probes. With deep space exploration becoming a focus of aerospace, the demand for a deep space satellite navigation system is becoming increasingly prominent. Many researchers have discussed the feasibility and performance of a satellite navigation system on periodic orbits around the Earth-Moon libration points, which can be called a Lagrangian point satellite navigation system. Autonomous orbit determination (AOD) is an important capability for a Lagrangian point satellite navigation system. With this ability, the system can reduce its dependency on ground stations; AOD can also greatly reduce total system cost and assure mission continuity. As the elliptic restricted three-body problem describes the Earth-Moon system more accurately than the circular restricted three-body problem, we study the autonomous orbit determination of a Lagrangian navigation constellation using only crosslink range, based on the elliptic restricted three-body problem. An extended Kalman filter is used in the autonomous orbit determination. In order to compare the results based on the elliptic restricted three-body problem with those based on the circular restricted three-body problem, we give the autonomous orbit determination position errors of a navigation constellation consisting of four satellites based on the circular restricted three-body problem. The simulation results show that the Lagrangian navigation constellation can achieve long-term precise autonomous orbit determination using only crosslink range. In addition, the type of libration point orbit influences the autonomous orbit determination accuracy.
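The measurement update at the heart of such a filter can be sketched as follows. This is a minimal illustration of an extended Kalman filter update with a crosslink range observation; the state layout and noise value are placeholders, and the paper's elliptic restricted three-body dynamics (the propagation step) are not reproduced.

```python
# Minimal sketch of an EKF update using a crosslink range measurement between
# two satellites. A real AOD filter would first propagate state and covariance
# through the (elliptic) restricted three-body dynamics; that step is omitted.
import numpy as np

def ekf_range_update(x, P, r_meas, pos_other, R=1e-6):
    """x: state [x, y, z, vx, vy, vz]; P: 6x6 covariance; r_meas: measured range."""
    d = x[:3] - pos_other
    r_pred = np.linalg.norm(d)      # h(x): predicted crosslink range
    H = np.zeros((1, 6))
    H[0, :3] = d / r_pred           # Jacobian of the range w.r.t. position
    S = H @ P @ H.T + R             # innovation covariance (1x1)
    K = P @ H.T / S                 # Kalman gain
    x = x + (K * (r_meas - r_pred)).ravel()
    P = (np.eye(6) - K @ H) @ P
    return x, P
```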

Keywords: extended Kalman filter, autonomous orbit determination, quasi-periodic orbit, navigation constellation

Procedia PDF Downloads 282
199 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, as well as non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct, to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the Eppi-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially compromising study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
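As a minimal sketch of the kind of check statistical monitoring performs, assuming an illustrative tabular dataset and a hypothetical z-score threshold (real central monitoring combines many such tests across variables and sites):

```python
# Hedged sketch of one simple central-monitoring check: flag trial sites whose
# mean of a numeric variable deviates strongly from the pooled mean. The
# column names and threshold are illustrative placeholders.
import numpy as np
import pandas as pd

def flag_sites(df, value_col, site_col="site", z_thresh=3.0):
    pooled_mean = df[value_col].mean()
    pooled_sd = df[value_col].std(ddof=1)
    stats = df.groupby(site_col)[value_col].agg(["mean", "count"])
    # z-score of each site mean under the pooled distribution
    stats["z"] = (stats["mean"] - pooled_mean) / (pooled_sd / np.sqrt(stats["count"]))
    return stats[stats["z"].abs() > z_thresh]
```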

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 77
198 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems

Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare

Abstract:

Recently, Li-Fi (light fidelity) has been launched based on the visible light communication (VLC) technique, reported to be up to 100 times faster than Wi-Fi, and the 5G mobile communication system has been proposed to use VLC-OFDM as the transmission technique. The VLC system, operating on visible light, is attractive for efficient spectrum use and easy intensity modulation through LEDs. The high speed of VLC stems from the LEDs, which can flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of 'green technology' for utilizing LEDs. At present, OFDM is used for high data rates, interference immunity, and high spectral efficiency. In spite of these advantages, OFDM suffers from a large PAPR, inter-carrier interference (ICI), and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system inherits the drawbacks of both OFDM and VLC: non-linearity due to the non-linear characteristics of the LED, and the large PAPR of OFDM, which drives the high-power amplifier into its non-linear region. This paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques have been applied to reduce PAPR: clipping introduces distortion in the carrier; the selective mapping technique wastes bandwidth; and the partial transmit sequence technique is very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely µ-law, A-law, and an advanced A-law companding technique. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. This paper discusses techniques to reduce PAPR, one of the non-linearities of the system; the companding techniques presented provide better results without increasing the complexity of the system.
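A minimal sketch of µ-law companding applied to an OFDM-like signal and its effect on PAPR, illustrating the class of techniques compared in the paper; the parameters are illustrative, and the advanced A-law variant studied in the paper is not reproduced here.

```python
# Minimal sketch: PAPR of a QPSK OFDM symbol before and after mu-law
# companding. Parameters (mu, symbol length) are illustrative placeholders.
import numpy as np

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def mu_law_compand(x, mu=255.0):
    peak = np.max(np.abs(x))
    return peak * np.sign(x) * np.log1p(mu * np.abs(x) / peak) / np.log1p(mu)

rng = np.random.default_rng(1)
symbols = rng.choice([-1, 1], size=1024) + 1j * rng.choice([-1, 1], size=1024)
ofdm = np.fft.ifft(symbols)                     # time-domain OFDM symbol
print("PAPR before:", round(papr_db(ofdm), 2), "dB")
print("PAPR after :", round(papr_db(mu_law_compand(ofdm.real)), 2), "dB")
```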

Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM

Procedia PDF Downloads 286
197 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method

Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López

Abstract:

The Rayuela method is a success story: it is part of a project selected by the European Commission in response to its own call for a better understanding of the human factors, as well as the social and organisational aspects, that can help solve issues in the fight against crime. Rayuela's method specifically focuses on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people's lives, they are the key target of the Rayuela method because they (as victims or as perpetrators) are the most vulnerable link of the chain. Considering their increased time spent online, the limited control of their internet usage, and their low level of awareness of cyber threats and their potential impact, the proliferation of incidents due to human mistakes is understandable: 51% of Europeans feel not well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. On the other hand, law enforcement has noted that more and more young people are committing cybercrimes. This is an international problem with considerable cost implications; it is estimated that crimes in cyberspace will cost the global economy $445B annually. Understanding all these phenomena points to the necessity of a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela aims to bring together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers to develop novel methodologies that allow a better understanding of the factors affecting online behavior related to new forms of cyber criminality, as well as promoting the potential of these young talents for cybersecurity and technologies. Rayuela's main goal is to better understand the drivers and human factors affecting certain relevant forms of cyber criminality, and to empower and educate young people about the benefits, risks, and threats intrinsically linked to the use of the Internet by playing, thus preventing and mitigating cybercriminal behavior. To reach that goal, an interdisciplinary consortium (formed by 17 international partners) carries out research and actions such as profiling and case studies of cybercriminals and victims, risk assessments, studies on the Internet of Things and its vulnerabilities, development of a serious gaming environment, training activities, data analysis and interpretation using artificial intelligence, and testing and piloting. To facilitate the real-world implementation of the Rayuela method as a community policing strategy, it is crucial to count on a police force with a solid background in trust-building and community policing to run the pilot, specifically with young people. In this sense, the Valencia Local Police is a pioneering force in working with young people on conflict resolution, providing police mediation and peer mediation services and advice. It is an official mediation institution, so agreements signed by its police mediators have, once signed by the parties, the value of a judicial decision.

Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people

Procedia PDF Downloads 128
196 The Hindrances Associated with Internet Banking Services in Nigeria: The Lagos State Perspective

Authors: Patience Oluchi Silas, Yemi Adeshina

Abstract:

Financial transactions over the internet have become an important practice among commercial banks in Nigeria with the introduction of internet banking, and this has improved banking efficiency in rendering services to customers. However, customers in Lagos State are gripped by fears of insecurity, technical failure, and inadequate operational facilities, including poor telecommunications and poor power supply. Accordingly, this paper explores the obstacles faced by Lagosians, tourists, small-scale business owners, companies, and customers, and the government's attitude in addressing the challenges associated with the online banking system in Nigeria through relevant legislation. Internet banking has the potential to transform economic activity and achieve developmental goals; if the associated challenges are addressed quickly, it will have the desired impact on the Nigerian economy. In this study, respondents, mostly bank employees and customers, were issued well-designed and structured questionnaires to effectively examine the new developments brought about by the introduction of internet banking and the challenges inhibiting its adoption. Hypotheses were formulated to test assumptions and claims generated from the study, and the results were statistically analyzed to address the issues of error and chance. The statistical analysis shows that insecurity, inadequate operational facilities, and poor power supply in particular are significant factors affecting the adoption of internet banking services in Nigeria. The study recommends that, for internet banking to assume a developmental dimension in Nigeria and for the country to be fully integrated into and respected in the global financial environment, the prevalent level of fraud in Lagos State and among Nigerians must first be addressed, relevant local laws should be put in place in consonance with international laws and conventions, and citizens should be well educated on the intricacies of internet usage and fraud.

Keywords: internet-banking, adoption, challenges, insecurity, legislation, fraud, Lagos state, statistics

Procedia PDF Downloads 342
195 Effect of Smartphone Applications on Patients' Knowledge of Surgery-Related Adverse Events during Hospitalization

Authors: Eunjoo Lee

Abstract:

Background: As the number of surgeries increases, the incidence of adverse events is likely to become more prevalent. Patients who are somewhat knowledgeable about surgery-related adverse events are more likely to engage in safety initiatives to prevent them. Objectives: To evaluate the impact of a smartphone application, developed during the study, on patients' knowledge of surgery-related adverse events during hospitalization. Design: Non-randomized, one-group, pre- and post-intervention study. Participants: Thirty-six hospitalized patients admitted to the orthopedics unit of a general hospital in South Korea. Methods: First, a smartphone application to enhance patients' knowledge of surgery-related adverse events was developed through an iterative process, which included a literature review, expert consultation, and pilot testing. The application was installed on participants' smartphones, and research assistants taught the participants to use it. Twenty-five true/false questions were used to assess patients' knowledge of preoperative precautions (eight items), surgical site infection (five items), Foley catheter management (four items), drainage management (four items), and anesthesia-related complications (four items). Results: Overall, the percentage of correct answers increased significantly, from 57.02% to 73.82%, although answers related to a few specific topics did not increase as much. Although the patients' understanding of drainage management and the Foley catheter did increase substantially after they used the smartphone application, it was still relatively low. Conclusions: The smartphone application developed during this study enhanced the patients' knowledge of surgery-related adverse events during hospitalization. However, nurses must make an additional effort to help patients understand certain topics, including drainage and Foley catheter management. Relevance to clinical practice: Insufficient patient knowledge increases the risk of adverse events during hospitalization. Nurses should take active steps to enhance patients' knowledge of a range of safety issues during hospitalization, in order to decrease the number of surgery-related adverse events.

Keywords: patient education, patient participation, patient safety, smartphone application, surgical errors

Procedia PDF Downloads 245
194 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks

Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain

Abstract:

With the increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors are other issues that require reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome these problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through unique input-output mapping, which requires reversibility, while the bit error issue requires fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible logic-based circuits have been developed; however, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we incorporate fault tolerance into sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double-edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop is proposed for the first time. We hope this work can be extended to building complex reversible sequential circuits.
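As an illustration of the parity-preserving property these designs rely on, here is a minimal sketch using the Fredkin (controlled-swap) gate, a standard parity-preserving reversible gate; the paper's flip-flop constructions themselves are not reproduced.

```python
# Minimal sketch: the Fredkin gate is reversible (a bijection on 3-bit inputs)
# and parity preserving (the number of ones, hence the parity, is unchanged).
from itertools import product

def fredkin(c, a, b):
    # If the control bit c is 1, swap a and b; otherwise pass all bits through.
    return (c, b, a) if c else (c, a, b)

for bits in product([0, 1], repeat=3):
    out = fredkin(*bits)
    assert sum(bits) % 2 == sum(out) % 2   # input parity equals output parity
    print(bits, "->", out)
```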

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic

Procedia PDF Downloads 545
193 Discovering Word-Class Deficits in Persons with Aphasia

Authors: Yashaswini Channabasavegowda, Hema Nagaraj

Abstract:

Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited for the study. The research includes both behavioural and objective measures to assess word-class deficits in PWA. Objective: The main objective of the research is to identify word-class deficits seen in persons with aphasia, using various speech-eliciting tasks. Method: The study was conducted in the participants' L1, Kannada. The Kannada-adapted versions of the Action Naming Test and the Boston Naming Test were administered to the participants, and a picture description task was carried out. The picture-word matching task was carried out using E-Prime software (version 2) to measure accuracy and reaction time in the identification of verbs and nouns; the stimuli were presented through auditory and visual modes. Data were analysed to identify errors in the naming of nouns versus verbs on the Boston Naming Test and the Action Naming Test, as well as the usage of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed a significant difference in sentence structure compared to age-matched NTI. PWA also showed impairment on syntactic measures in the picture description task, with fewer correct grammatical sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns compared to verbs. PWA had poorer accuracy and longer reaction times in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were noticed irrespective of the cause leading to aphasia.

Keywords: nouns, verbs, aphasia, naming, description

Procedia PDF Downloads 102
192 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation

Authors: Zeynep Yazicioglu

Abstract:

Project development is a complex process in which many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. The combination of a complex building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determining the final construction cost. Its adoption is a process spread over years, enabling all stakeholders associated with the project and its construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author's practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined in this paper: 1. the compatibility of supplier companies with BIM, 2. the continuing need for two-dimensional drawings, and 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges of the transition stage to BIM. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which means that the transition to BIM is still ongoing. In parallel, employers, local approval authorities, and material suppliers still need 2D drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, giving rise to design ownership issues. Practical applications and problems encountered are also discussed, providing a number of suggestions for the future.

Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project

Procedia PDF Downloads 102
191 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining and deteriorate precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator for finding optimized cutting parameters, such as depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme, with the position of each joint controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces with and without a deformable robot structure, as well as the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. Considering the flexibility of the links revealed an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
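As an illustration of the inverse kinematics scheme mentioned above, a hedged sketch for a generic 3-DOF planar arm follows; the link lengths are assumed values, and the DyStaMill/EasyDyn internals are not shown.

```python
# Hedged sketch of one inverse kinematics step for a generic 3-DOF planar arm:
# given a desired tool pose (x, y, phi), locate the wrist analytically, then
# solve the first two joints as a 2-link problem. Link lengths are assumptions.
import numpy as np

def ik_planar_3dof(x, y, phi, l1=0.5, l2=0.4, l3=0.1):
    # Wrist center: step back from the tool tip along the tool orientation.
    wx, wy = x - l3 * np.cos(phi), y - l3 * np.sin(phi)
    c2 = (wx**2 + wy**2 - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))  # elbow-down solution
    q1 = np.arctan2(wy, wx) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    q3 = phi - q1 - q2                       # remaining joint orients the tool
    return q1, q2, q3

print([round(np.degrees(q), 1) for q in ik_planar_3dof(0.6, 0.3, 0.0)])
```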

Keywords: control, milling, multibody, robotic, simulation

Procedia PDF Downloads 249
190 Teaching Young Children Social and Emotional Learning through Shared Book Reading: Project GROW

Authors: Stephanie Al Otaiba, Kyle Roberts

Abstract:

Background and Significance: Globally, far too many students read below grade level, so improving literacy outcomes is vital. Research suggests that non-cognitive factors, including social and emotional learning (SEL), are linked to success in literacy outcomes. Converging evidence exists that early interventions are more effective than later remediation; therefore, teachers need strategies to support early literacy while developing students' SEL and their vocabulary, or language, for learning. This presentation describes findings from a US federally funded project that trained teachers to provide an evidence-based read-aloud program for young children, using commercially available books with multicultural characters and themes to help their students "GROW". The five GROW SEL themes include: "I can name my feelings", "I can learn from my mistakes", "I can persist", "I can be kind to myself and others", and "I can work toward and achieve goals". Examples of GROW vocabulary (from over 100 words taught across the 5 units) include: emotions, improve, resilient, cooperate, accomplish, responsible, compassion, adapt, achieve, analyze. Methodology: This study used a mixed methods research design, with qualitative methods to describe data from teacher feedback surveys (regarding satisfaction and feasibility) and observations of fidelity of implementation, and quantitative methods to assess the effect sizes for student vocabulary growth. GROW Intervention and Teacher Training Procedures: Researchers trained classroom teachers to implement GROW. Each thematic unit included four books, vocabulary cards with images of the vocabulary words, and scripted lessons. Teacher training included online and in-person training; researchers incorporated virtual reality videos of instructors with child avatars to model lessons. Classroom teachers provided two to three 20-minute lessons per week, ranging from short-term (8 weeks) to longer-term trials of up to 16 weeks. Setting and Participants: The setting for the study included two large urban charter schools in the South. Data were collected across two years; in the first year, participants included 7 kindergarten teachers and 108 students, and the second year involved an additional set of 5 kindergarten and first-grade teachers and 65 students. Initial Findings: The initial qualitative findings indicate that teachers reported the lessons to be feasible to implement and that students enjoyed the books. Teachers found the vocabulary words to be challenging and important, and they were able to implement lessons with fidelity. Quantitative analyses of growth for each taught word suggest that students' growth on taught words ranged from large (ES = .75) to small (<.20). Researchers will contrast the effects for more and less successful books within the GROW units. Discussion and Conclusion: It is feasible for teachers of young students to effectively teach SEL vocabulary and themes during shared book reading. Teachers and students enjoyed the books, and students demonstrated growth on taught vocabulary. Researchers will discuss implications of the study and the GROW program for researchers in the learning sciences, describe some limitations of research designs that are inherent in school-based research partnerships, and provide suggested directions for future research and practice.

Keywords: early literacy, learning science, language and vocabulary, social and emotional learning, multi-cultural

Procedia PDF Downloads 43
189 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain

Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper

Abstract:

Additive manufacturing processes are becoming increasingly established in industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used additive manufacturing technology for metal parts, has been gaining industrial importance for several years. The LBM process chain, from material storage to machine set-up and component post-processing, requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized; instead, they are performed according to the experience of the machine operator, e.g., levelling the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effects of the various metal powders. Faulty execution of an operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as adherence to, and correct execution of, safety-relevant operations. The corresponding lean method, 5S, is therefore applied in order to develop approaches, in the form of recommended actions, that standardize the work processes. These approaches are then evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts, as well as standardizing the workflow, are likely to increase reproducibility. Organizing the operational steps and the production environment decreases the hazards of material handling and consequently improves work safety.

Keywords: additive manufacturing, lean production, reproducibility, work safety

Procedia PDF Downloads 184
188 A Corpus Study of English Verbs in Chinese EFL Learners’ Academic Writing Abstracts

Authors: Shuaili Ji

Abstract:

The correct use of verbs is an important element of high-quality research articles; thus, for Chinese EFL learners, it is important to master the characteristics of verbs and to use them precisely. However, some studies have shown that there are differences in verb use between learners and native speakers and that learners have difficulty using English verbs. This corpus-based quantitative research can enhance learners' knowledge of English verbs and improve the quality of research article abstracts, and even of academic writing as a whole. The aim of this study is to find the differences between learners' and native speakers' use of verbs and to study the factors that contribute to those differences. To this end, the research question is as follows: what are the differences between the verbs most frequently used by learners and those used by native speakers? The research question is answered through a study that uses a corpus-based, data-driven approach to analyze the verbs used by learners in their abstracts in terms of collocation, colligation, and semantic prosody. The results show that: (1) EFL learners clearly overused 'be, can, find, make' and underused 'investigate, examine, may'; among modal verbs, learners clearly overused 'can' while underusing 'may'. (2) Learners clearly overused 'we find + object clause' while underusing 'nouns (results, findings, data) + suggest/indicate/reveal + object clause' when expressing research results. (3) Learners tended to transfer the collocation, colligation, and semantic prosody of shǐ and zuò to 'make'. (4) Learners clearly overused 'BE + V-ed' and used BE as the main verb; they also clearly overused the base forms of BE such as be, is, are, while underusing its inflections (was, were). These results manifest learners' lack of accuracy and idiomaticity in verb usage. Under the influence of conceptual transfer from Chinese, the verbs in learners' abstracts show clear mother-tongue transfer. In addition, learners have not fully mastered the use of verbs and avoid complex colligations to prevent errors. Based on these findings, the present study has implications for English teaching and for English academic abstract writing in China. Further research could study the use of verbs in whole dissertations to find out whether the characteristics of the verbs in abstracts apply to the dissertation as a whole.
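For illustration, a hedged sketch of the kind of overuse/underuse test that underlies such corpus comparisons, using the log-likelihood (G²) keyness statistic; the corpus sizes and frequencies below are invented placeholders, not figures from the study.

```python
# Hedged sketch of the log-likelihood (G2) keyness statistic comparing one
# verb's frequency in a learner corpus against a native-speaker corpus.
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    e_a = size_a * (freq_a + freq_b) / (size_a + size_b)  # expected count, corpus A
    e_b = size_b * (freq_a + freq_b) / (size_a + size_b)  # expected count, corpus B
    g2 = 0.0
    for obs, exp in ((freq_a, e_a), (freq_b, e_b)):
        if obs > 0:
            g2 += 2 * obs * math.log(obs / exp)
    return g2

# e.g. 'make': 420 hits in a 200k-word learner corpus vs 180 in 200k native words
print(round(log_likelihood(420, 200_000, 180, 200_000), 2))  # > 3.84 => p < .05
```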

Keywords: academic writing abstracts, Chinese EFL learners, corpus-based, data-driven, verbs

Procedia PDF Downloads 335
187 A Genre-Based Approach to the Teaching of Pronunciation

Authors: Marden Silva, Danielle Guerra

Abstract:

Some studies have indicated that pronunciation teaching has not received enough attention from teachers in EFL contexts. In particular, teaching segmental and suprasegmental features through a genre-based approach may be an opportunity to integrate pronunciation into a more meaningful learning practice. The aim of this project was therefore to survey the aspects of English pronunciation that Brazilian students consider most difficult to learn, thus enabling the discussion of strategies that can facilitate the development of oral skills in English classes by integrating the teaching of phonetic-phonological aspects into a genre-based approach. Notions of intelligibility, fluency, and accuracy have been proposed by some authors as an ideal didactic sequence: according to these proposals, basic learners should be exposed to activities focused on the notion of intelligibility, intermediate students to the notion of fluency, and finally more advanced ones to accuracy practices. In order to test this hypothesis, data collection was conducted during three high school English classes at the Federal Center for Technological Education of Minas Gerais (CEFET-MG), in Brazil, through questionnaires and didactic activities, which were recorded and transcribed for further analysis. The debate genre was chosen to facilitate the participants' oral expression in a freer way, having them answer questions and give their opinions about a previously selected topic. The findings indicated that basic students demonstrated more difficulty with aspects of English pronunciation than the others: many of the intelligibility aspects analyzed had to be listened to more than once for a better understanding. For intermediate students, the recorded speech was considerably easier to understand, but they nevertheless found it more difficult to pronounce the words fluently, often interrupting their speech to think about what they were going to say and how they would say it. Lastly, more advanced learners seemed to express their ideas more fluently, but subtle accuracy errors were still perceptible in their speech, thereby confirming the proposed hypothesis. It was also seen that using a genre-based approach to promote oral communication in English classes can be a relevant method, considering the socio-communicative function inherent in the suggested approach.

Keywords: EFL, genre-based approach, oral skills, pronunciation

Procedia PDF Downloads 130
186 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, especially articles, have been relatively overlooked by English learners. The article system, to a certain extent, becomes an obstacle to knowing English better, driven by different elements: three principal factors can be summarized in terms of the nature of the articles when referring to the difficulty of the English article system. What makes the article system more complex, however, are the difficulties of the second language acquisition process, for [-ART] learners have to create a new category, causing even most non-native speakers at the proficiency level to make errors. Studies of the acquisition sequence of English articles show that the zero article is acquired first, but with high inaccuracy: the zero article is often overused in the early stages of L2 acquisition. Although learners at the intermediate level move on to underusing the zero article as they realize that it does not apply in every case, overproduction of the zero article occurs even among advanced L2 learners. The aim of this study is to investigate the noun-phrase factors that give rise to incorrect usage or overuse of the zero article, thus providing suggestions for L2 English acquisition; moreover, it enables teachers to carry out effective instruction that activates students' conscious learning. The research question is answered through a corpus-based, data-driven approach that analyzes the noun-phrase elements in terms of the semantic context and countability of noun phrases. Based on the analysis of the International Thurber Thesis corpus, the results show that: (1) although the [-definite, -specific] context favored the zero article, both [-definite, +specific] and [+definite, -specific] showed less influence; when we reflect on the frequency order of the zero article, prototypicality plays a vital role in it. (2) The EFL learners in this study have trouble classifying abstract nouns as countable: overuse of the zero article arises when learners cannot make clear judgements on countability as contexts shift from [+definite] to [-definite], and once a noun is perceived as uncountable by learners, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary through explanations of nouns in lexical phrases, and should explore more complex aspects such as discourse-dependent analysis.

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 253
185 Patient Safety Culture in Brazilian Hospitals from Nurse's Team Perspective

Authors: Carmen Silvia Gabriel, Dsniele Bernardi da Costa, Andrea Bernardes, Sabrina Elias Mikael, Daniele da Silva Ramos

Abstract:

The goal of this quantitative study is to investigate patient safety culture from the perspective of professionals from hospital nursing teams. It was conducted in two Brazilian hospitals, and the sample included 282 nurses. Data collection occurred in 2013, through the questionnaire Hospital Survey on Patient Safety Culture. Based on the assessment of the dimensions, it is stressed that, in the dimension teamwork within hospital units, 69.4% of professionals agree that when a lot of work needs to be done quickly, they work together as a team; in the dimension supervisor/manager expectations and actions promoting safety, 70.2% agree that their supervisor overlooks patient safety problems. Related to organizational learning and continuous improvement, 56.5% agree that the effectiveness of changes is evaluated after their implementation. On hospital management support for patient safety, 52.8% report that the actions of hospital management show that patient safety is a top priority. On the overall perception of patient safety, 57.2% disagree that patient safety is never compromised due to a higher amount of work to be completed. Regarding feedback and communication about error, 57.7% report that they always or usually receive such information. Relative to communication openness, 42.9% said they never or rarely feel free to question the decisions or actions of their superiors. On frequency of event reporting, 64.7% said they often or always report events that cause no harm to patients. Regarding teamwork across hospital units, the percentages of agreement and disagreement were similar: on the item stating that there is good cooperation among hospital units that need to work together, they were 41.4% and 40.5%, respectively. Related to staffing adequacy, 77.8% disagree that there is a sufficient number of employees to do the job, and 52.4% agree that shift changes are problematic for patients. On non-punitive response to errors, 71.7% indicate that when an event is reported, it seems that the focus is on the person. On the patient safety grade of the institution, 41.6% classified it as very good. It is concluded that there are positive points in the safety culture, as well as some weaknesses, such as a punitive culture and patient safety impaired by work overload.

Keywords: quality of health care, health services evaluation, safety culture, patient safety, nursing team

Procedia PDF Downloads 299
184 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in the endodontic field, and various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a dynamic navigation system (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive cone-beam computed tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 98
183 A Computerized Tool for Predicting Future Reading Abilities in Pre-Readers Children

Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix

Abstract:

Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding the best teaching methods to develop. It is estimated today that four to six percent of school-age children suffer from specific developmental disorders that impair learning. The findings from people with dyslexia and typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills that they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems; in comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. Recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test specifically measures the online time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and errors were recorded; one year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints). Finally, the results of this study assessed the validity of the DiagLECT test for predicting reading outcomes: the better a child's oculomotor skills, the better his or her reading abilities will be.

Keywords: vision, attention, oculomotor processes, reading, preschoolers

Procedia PDF Downloads 147
182 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observations of solar radiation are available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a 'retrospective analysis' of atmospheric parameters, generated by assimilating observational data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface-measured data using statistical analysis, and the distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten years from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of the errors across seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 93% to 99% for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to support solar energy applications in all fields, which can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
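A minimal sketch of the validation statistics quoted above (RMSE, MBE, MAE, and R²), assuming illustrative placeholder arrays for the ERA-5 and station time series:

```python
# Minimal sketch of reanalysis-vs-ground validation statistics. The arrays
# below are illustrative placeholders, not data from the study.
import numpy as np

def validation_stats(estimated, measured):
    err = estimated - measured
    rmse = np.sqrt(np.mean(err**2))   # root mean square error
    mbe = np.mean(err)                # mean bias error
    mae = np.mean(np.abs(err))        # mean absolute error
    r2 = np.corrcoef(estimated, measured)[0, 1] ** 2
    return rmse, mbe, mae, r2

measured = np.array([5.1, 6.3, 7.0, 6.5, 5.8])  # e.g. daily GSR, kWh/m^2
era5 = np.array([5.0, 6.5, 6.8, 6.7, 5.6])
print(validation_stats(era5, measured))
```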

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 211
181 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining high coal with release to a conveyor, and prototyping an autonomous vehicle obstacle detection system. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, together with validation in laboratory conditions and calculation of relative errors, was carried out. A method for calculating apron feeder capacity based on a machine vision system is proposed, together with a simplified technology for three-dimensional modeling of the examined measuring area with machine vision. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the problem of controlling the volume of coal produced by a feeder while working off high coal with lava complexes releasing to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions: addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection task is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on an obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
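As a hedged sketch of the volume calculation idea, assuming the light-marker grid yields one height sample per cell of the measured area (grid values and cell size below are illustrative):

```python
# Hedged sketch: with a height sample at each grid cell, the bulk material
# volume is the sum of cell heights times cell area, using only elementary
# arithmetic as the abstract emphasizes. Values are illustrative placeholders.
import numpy as np

def bulk_volume(heights, cell_area):
    """heights: 2D array of material heights (m) over the grid; cell_area in m^2."""
    return float(np.sum(heights) * cell_area)

heights = np.array([[0.10, 0.12, 0.11],
                    [0.14, 0.15, 0.13],
                    [0.09, 0.11, 0.10]])
# Productivity (kg/s) would follow as volume * bulk density / time interval.
print(bulk_volume(heights, cell_area=0.05**2), "m^3")
```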

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 114
180 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms to enable objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and by establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and on both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CVs ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm that the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectify CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
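For illustration, a minimal sketch of the Bland-Altman agreement measures used in the study (bias and 95% limits of agreement), with invented numbers rather than study data:

```python
# Minimal sketch of Bland-Altman agreement: bias (mean difference) and 95%
# limits of agreement between app and laboratory measurements of the same
# outcome. The values below are illustrative placeholders.
import numpy as np

def bland_altman(app, lab):
    diff = np.asarray(app) - np.asarray(lab)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
    return bias, bias - loa, bias + loa

app = np.array([24, 31, 27, 22, 35], dtype=float)  # e.g. CRT repetitions (CRapp)
lab = np.array([25, 30, 28, 22, 34], dtype=float)  # e.g. motion-capture reference
print("bias=%.2f, LoA=[%.2f, %.2f]" % bland_altman(app, lab))
```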

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 166
179 Improving the Weekend Handover in General Surgery: A Quality Improvement Project

Authors: Michael Ward, Eliana Kalakouti, Andrew Alabi

Abstract:

Aim: The handover process is recognized as a vulnerable step in the patient care pathway where errors are likely to occur. As such, it is a major preventable cause of patient harm, driven by the human factors of poor communication and systematic error. The aim of this study was to audit the general surgery department's weekend handover process against the recommended criteria for safe handover set out by the Royal College of Surgeons (RCS). Method: We conducted a retrospective audit of the general surgery department's Friday patient lists and the patient medical notes used for weekend handover in a London-based district general hospital (DGH). Medical notes were analyzed against the RCS's suggested criteria for handover. A standardized paper weekend handover proforma was then developed in accordance with the guidelines and circulated in the department, and a post-intervention audit was conducted using the same methods, completing cycle 1. For cycle 2, we introduced an electronic weekend handover tool integrated with the electronic patient record (EPR). After a one-month period, a second post-intervention audit was conducted. Results: Following cycle 1, the paper weekend handover proforma was used in only 23% of patient notes. However, when it was used, 100% of notes documented a weekend plan, diagnosis, and location, but only 40% documented potential discharge status and 40% ceiling of care. Qualitative feedback was that the proforma was time-consuming to fill out. Better results were achieved following cycle 2, with 100% of patient notes containing the electronic proforma. Every patient had a documented ceiling of care, discharge status, and location. Only 55% of patients had a documented past surgical history; however, this was still an increase compared with the paper proforma (45%). Comparing the electronic against the paper proforma, documentation increased in every handover domain outlined by the RCS, with an average relative increase of 1.72 times (p<0.05). Qualitative feedback was that the autofill function made the tool easy to use and simple to view. Conclusion: These results demonstrate that implementing an electronic autofill handover proforma significantly improved handover compliance with RCS guidelines, thereby improving the transmission of information from weekday to weekend teams.

Keywords: surgery, handover, proforma, electronic handover, weekend, general surgery

Procedia PDF Downloads 159
178 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient [μ(E)] of any substance, at energy E, is the sum of contributions from Compton scattering [μ_Com(E)] and the photoelectric effect [μ_Ph(E)]. In terms of the electron density (ρ_e) and the effective atomic number (Z_eff), μ_Com(E) is proportional to ρ_e·f_KN(E), while μ_Ph(E) is proportional to ρ_e·Z_eff^x/E^y, where f_KN(E) is the Klein-Nishina formula and x, y are the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve the two independent equations for X = ρ_e and Y = ρ_e·Z_eff^x, as is attempted in DECT inversion. Since μ_Com(E) and μ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <μ(V)> = <μ_w(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) <μ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <μ(V1)> and <μ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum, superposed on aluminium filters of thickness l_Al with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state or GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Z_eff materials like propionic acid, Z_eff^x is overestimated by 20% while X is within 1%. For high-Z_eff materials like KOH, Z_eff^x is underestimated by 22% while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, must be considered an important factor in calculating the coefficients of inversion.
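
Since <μ(V)> is linear in X and Y, the two-voltage inversion reduces to a 2×2 linear system; the minimal Python sketch below makes this concrete, with the coefficients a and b, the exponent x, and the input attenuations all being illustrative assumptions rather than the calibrated values tabulated in the paper.

    # Minimal sketch of the two-voltage DECT inversion described above: given
    # spectrum-averaged attenuations <mu(V)> at V1 and V2, and assuming each
    # is a linear combination a(V)*X + b(V)*Y with X = rho_e and
    # Y = rho_e * Zeff^x, solve the 2x2 system. The coefficients a and b are
    # illustrative placeholders, not the paper's calibrated values.
    import numpy as np

    a = {100: 0.170, 140: 0.160}    # Compton-term coefficients (assumed)
    b = {100: 0.0031, 140: 0.0014}  # photoelectric-term coefficients (assumed)

    def invert(mu_100, mu_140):
        A = np.array([[a[100], b[100]],
                      [a[140], b[140]]])
        m = np.array([mu_100, mu_140])
        X, Y = np.linalg.solve(A, m)  # X = rho_e, Y = rho_e * Zeff^x
        return X, Y

    def zeff(X, Y, x=3.5):
        """Recover Zeff from the inverted pair (x = assumed PE exponent)."""
        return (Y / X) ** (1.0 / x)

    X, Y = invert(mu_100=0.21, mu_140=0.18)
    print(f"rho_e ~ {X:.3f}, Zeff ~ {zeff(X, Y):.2f}")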

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 401
177 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and which are inaccessible to conventional surveys. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the Kankesanturai site using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated with the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry, and the geographical distribution of the model residuals was mapped. Spatial autocorrelation was calculated to compare the performance of the bathymetric models, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model generates more reliable bathymetry estimates by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE for the log-linear and non-linear inversion models was ±1.532 m and ±2.089 m, respectively.
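
As a concrete illustration of the log-linear calibration step, the minimal Python sketch below fits a Lyzenga-style log-linear model to echo-sounder reference depths by multiple linear regression; the band reflectances, deep-water values, and depths are invented placeholders, not the WorldView-2 or SBES data of the study.

    # Minimal sketch of calibrating a Lyzenga-style log-linear inversion,
    # depth = a0 + a1*ln(B1 - B1_deep) + a2*ln(B2 - B2_deep), by least
    # squares against echo-sounder reference depths. All values below are
    # illustrative placeholders.
    import numpy as np

    blue  = np.array([0.082, 0.065, 0.051, 0.043])  # band reflectance (assumed)
    green = np.array([0.074, 0.060, 0.049, 0.044])
    deep_blue, deep_green = 0.040, 0.041            # deep-water signal (assumed)
    sbes_depth = np.array([2.1, 4.0, 6.2, 8.5])     # reference depths, m

    X = np.column_stack([
        np.ones_like(blue),
        np.log(blue - deep_blue),
        np.log(green - deep_green),
    ])
    coeffs, *_ = np.linalg.lstsq(X, sbes_depth, rcond=None)
    predicted = X @ coeffs
    rmse = np.sqrt(np.mean((predicted - sbes_depth) ** 2))
    print(f"coefficients: {coeffs}, RMSE = ±{rmse:.3f} m")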

Keywords: log-linear model, multispectral, residuals, spatial error model

Procedia PDF Downloads 297
176 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a worldwide problem. Adolescents' involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver's license. Participants' mean age was 16.1 (SD=.23). As part of the experiment, all participants completed a driving simulation session while their eye movements were monitored with an eye tracker, which registered the exact gaze position on the screen throughout the session. Eye movement and simulator data were analyzed using Matlab (Mathworks, USA). Participants' cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement and disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attentional flexibility, which likely affects driving. Additionally, the lower the cognitive test results, the worse the driving performance. None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.
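
The scan-allocation measure reported above (the share of gaze time spent on the dashboard region) can be computed directly from raw gaze samples; the minimal Python sketch below shows one way, with the ROI rectangle and sample coordinates as illustrative assumptions rather than the study's screen layout.

    # Minimal sketch of the scan-allocation measure: the share of gaze
    # samples falling inside a screen region of interest (ROI). Coordinates
    # and the dashboard ROI are illustrative placeholders.
    def fraction_in_roi(gaze_samples, roi):
        """gaze_samples: (x, y) screen coords; roi: (x0, y0, x1, y1) box."""
        x0, y0, x1, y1 = roi
        hits = sum(1 for x, y in gaze_samples
                   if x0 <= x <= x1 and y0 <= y <= y1)
        return hits / len(gaze_samples)

    dashboard = (600, 800, 1320, 1080)  # assumed ROI in a 1920x1080 frame
    samples = [(700, 900), (1500, 200), (800, 850), (100, 500)]
    print(f"dashboard dwell share: {fraction_in_roi(samples, dashboard):.0%}")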

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 329
175 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT

Authors: Jae Ni Jang, Young Uk Kim

Abstract:

Background: Chat Generative Pre-training Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the use of ChatGPT increases among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification of pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing them with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Out of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, an accuracy rate of 48.9%. Furthermore, the responses regarding children and pregnant women were mostly inaccurate, with an accuracy rate of 28% (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future. Regular assessments of ChatGPT's ASA classification accuracy are essential given the evolving nature of ChatGPT as an artificial intelligence entity, especially because ChatGPT shows a clinically unacceptable rate of error and hallucination, particularly for pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
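
The accuracy metric used here is a simple proportion of responses matching the reference texts; the minimal Python sketch below shows the scoring logic, with invented question/answer pairs standing in for the study's actual items.

    # Minimal sketch of the accuracy scoring described above: compare model
    # answers to reference ASA classes and report the proportion correct.
    # The question/answer pairs are invented placeholders, not study items.
    reference = {"well-controlled hypertension": "ASA II",
                 "dialysis-dependent renal failure": "ASA IV",
                 "healthy adult": "ASA I"}
    model_answers = {"well-controlled hypertension": "ASA II",
                     "dialysis-dependent renal failure": "ASA III",
                     "healthy adult": "ASA I"}

    correct = sum(model_answers[q] == ref for q, ref in reference.items())
    print(f"accuracy: {correct}/{len(reference)} = {correct / len(reference):.1%}")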

Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT

Procedia PDF Downloads 48
174 Comparison of Risk Analysis Methodologies through the Identification of Consequences in Chemical Accidents Associated with Dangerous Flammable Goods Storage

Authors: Daniel Alfonso Reséndiz-García, Luis Antonio García-Villanueva

Abstract:

As a result of the high level of industrial activity, which arises from the drive to satisfy society's need for products and services, several chemical accidents have occurred, causing serious damage across different sectors: human, economic, infrastructure, and environmental losses. Historically, studies of these chemical accidents have determined that the causes are mainly human errors (inexperienced personnel, negligence, lack of maintenance, and deficient risk analysis). Industries aim to increase production and reduce costs. However, it should be kept in mind that the costs of risk studies and of implementing barriers and safety systems are much lower than paying for the damages an accident could cause, without forgetting that some things cannot be replaced, such as human lives. It is therefore of utmost importance to implement risk studies in all industries, as they provide information for prevention and planning. The aim of this study is to compare risk methodologies by identifying the consequences of accidents related to the storage of flammable dangerous goods, for decision making and emergency response. The methodologies considered in this study are qualitative and quantitative risk analysis and consequence analysis; the latter uses modeling software that provides the affected radius and the possible scope and magnitude of damage. Risk analysis identifies possible scenarios for chemical accidents in the storage of flammable substances. Once the possible risk scenarios have been identified, the characteristics of the substances, their storage, and the atmospheric conditions are entered into the software. The results provide information for implementing prevention, detection, control, and firefighting elements for emergency response, thus supplying the tools needed to avoid accidents and, if they do occur, to significantly reduce the magnitude of the damage. This study highlights the importance of risk studies that apply the tools best suited to each case study. It also demonstrates the importance of knowing the risk exposure of industrial activities for better prevention, planning, and emergency response.

Keywords: chemical accidents, emergency response, flammable substances, risk analysis, modeling

Procedia PDF Downloads 92
173 Role of Maternal Astaxanthin Supplementation on Brain-Derived Neurotrophic Factor and Spatial Learning Behavior in Wistar Rat Offspring

Authors: K. M. Damodara Gowda

Abstract:

Background: Maternal health and nutrition are considered the predominant factors influencing the functional development of the brain. If the mother is free of illness and genetic defects, maternal nutrition is one of the most critical factors affecting brain development. Calorie restriction causes significant impairment in spatial learning ability and in the levels of Brain-Derived Neurotrophic Factor (BDNF) in rats. However, the mechanism by which prenatal under-nutrition impairs brain learning and memory function is still unclear. In the present study, the effect of prenatal astaxanthin supplementation on BDNF levels and on spatial learning and memory performance was investigated in the offspring of normal, calorie-restricted, and astaxanthin-supplemented rats. Methodology: The rats were administered 6 mg and 12 mg of astaxanthin/kg body weight for 21 days, following which acquisition and retention of spatial memory were tested in a partially baited eight-arm radial maze. The BDNF level in different regions of the brain (cerebral cortex, hippocampus, and cerebellum) was estimated by ELISA. Results: Calorie-restricted animals treated with astaxanthin made significantly more correct choices (P < 0.05) and fewer reference memory errors (P < 0.05) on the tenth day of training compared to the offspring of untreated calorie-restricted animals. Calorie-restricted animals treated with astaxanthin also made significantly more correct choices (P < 0.001) than untreated calorie-restricted animals in a retention test 10 days after the training period. The mean BDNF levels in the cerebral cortex, hippocampus, and cerebellum of calorie-restricted animals treated with astaxanthin did not differ significantly from those of control animals. Conclusion: The findings indicate that memory and learning were impaired in the offspring of calorie-restricted rats and that this impairment was effectively modulated by astaxanthin at a dosage of 12 mg/kg body weight. Likewise, the BDNF levels in the cerebral cortex, hippocampus, and cerebellum were reduced in the offspring of calorie-restricted animals and were effectively normalized by astaxanthin.

Keywords: calorie restriction, learning, memory, cerebral cortex, hippocampus, cerebellum, BDNF, astaxanthin

Procedia PDF Downloads 232