Search results for: Errors and Mistakes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1072

352 Determination of the Knowledge Level of Healthcare Professionals Working at Emergency Services in Turkey about Their Approaches to Common Forensic Cases

Authors: E. Tuğba Topçu, Ebru E. Kazan, Erhan Büken

Abstract:

Emergency nurses are generally the first healthcare professionals to observe patients, communicate with patients' families or relatives, handle patients' belongings and collect patients' laboratory samples. They also frequently encounter incidents related to crime, people who engage in violence, or suspicious injuries. Documentation of the condition in which patients arrive at the hospital and preservation of evidence are therefore important to the forensic medical inquiry. The aim of the study was to determine the knowledge level of healthcare professionals working at emergency services regarding their approaches to common forensic cases. The study comprised 404 healthcare professionals (nurses, emergency medicine technicians, health officers) working at the emergency services of 6 state hospitals, 6 training and research hospitals, and 3 university hospitals in Ankara. Data were collected using a questionnaire form developed by the researchers based on the literature. The questionnaire form comprises two sections. The first section includes 17 questions on demographic information about the healthcare professionals and 4 questions on Turkish laws. The second section includes 43 questions to determine the knowledge level of healthcare professionals working in the emergency department about approaches to frequently encountered forensic cases. For the data evaluation, the Mann-Whitney U test, the Kruskal-Wallis H test with Bonferroni correction, and Chi-square tests were used. According to the study, 73.4% of healthcare professionals said that there was no forensic medicine specialist in their institution. Two-thirds (66%) of participants reported a daily average of 7 or more forensic cases presenting to the emergency department, and 52.1% of participants did not evaluate incidents arriving at the emergency department as forensic cases. Most of the participants stated that the preservation of evidence is a duty of healthcare professionals in forensic cases. In conclusion, we determined that the knowledge level of healthcare professionals working in the emergency department about approaches to frequently encountered forensic cases is not at the expected level, largely because most of them had not received any education in forensic nursing. Postgraduate participants, health professionals educated in forensic nursing, staff who had consulted sources on forensic nursing, and staff who evaluated emergency department cases as forensic cases had significantly higher levels of knowledge. Moreover, the forensic case diagnosis score was highest among health officers and university graduates. Healthcare professionals' deficiency in knowledge about forensic cases can disrupt the forensic process through mistakes in collecting and preserving evidence. It is obvious that training on the approach to forensic nursing should be arranged.

Keywords: emergency nurses, forensic case, forensic nursing, level of knowledge

Procedia PDF Downloads 294
351 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and have a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complex and non-linear relationships of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple, symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on numerically simulated data from the SEEP/W software, to approximate the seepage process beneath a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam and obtain an optimum, safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
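
As an illustration only, the sketch below pairs a toy surrogate (standing in for the trained ANN) with a simple real-coded genetic algorithm that searches for the cheapest floor-length/cutoff-depth combination keeping the predicted exit gradient below an allowable value. The surrogate function, cost coefficients, and safety threshold are all invented for the example; the paper's actual surrogate is trained on SEEP/W simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_exit_gradient(x):
    # Hypothetical stand-in for the trained ANN surrogate: maps design
    # variables (floor length, cutoff depth) to a predicted exit gradient.
    floor_len, cutoff_depth = x
    return 0.9 / (0.05 * floor_len + 0.12 * cutoff_depth + 1.0)

def cost(x):
    floor_len, cutoff_depth = x
    return 1.0 * floor_len + 2.5 * cutoff_depth      # assumed unit costs

def fitness(x, i_allow=0.25, penalty=1e3):
    # Penalize designs whose predicted exit gradient exceeds the allowable value
    g = surrogate_exit_gradient(x)
    return cost(x) + penalty * max(0.0, g - i_allow)

bounds = np.array([[10.0, 60.0], [2.0, 15.0]])       # [min, max] per variable
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for gen in range(100):
    f = np.array([fitness(ind) for ind in pop])
    # Binary tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover with a random partner, then Gaussian mutation
    partners = parents[rng.permutation(len(parents))]
    alpha = rng.random((len(parents), 1))
    children = alpha * parents + (1 - alpha) * partners
    children += rng.normal(0.0, 0.5, children.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = min(pop, key=fitness)
print("optimum (floor length, cutoff depth):", best.round(2),
      "cost:", round(cost(best), 1))
```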

Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis

Procedia PDF Downloads 224
350 Research on the Factors Affecting the Administrative Capacity of Enterprises in the Logistic Sector of Bulgaria

Authors: R. Kenova, K. Anguelov, R. Nikolova

Abstract:

The human factor plays a major role in boosting the competitive capacity of enterprises, and this is of particular importance when it comes to logistic companies. On the one hand, they should be strictly compliant with legislation; on the other hand, they should be competitive in terms of pricing and delivery timelines. Moreover, their policies should allow them to be as flexible as possible. All these circumstances pose serious challenges for the qualification, motivation and experience of the human resources working in logistic companies or in the logistic departments of trade and industrial enterprises. Bulgaria's geographic location gives it specific competitive advantages in transporting goods between Europe and Asia, and a number of logistic companies operate in this sphere in the country. In the current paper, the authors aim to establish the condition of the administrative capacity and human resources in the logistic companies and the logistic departments of trade and industrial companies in Bulgaria, in order to propose guidelines for improving their effectiveness. Through independent empirical research conducted in Bulgarian logistic, trade and industrial enterprises, the authors investigate both the degree of impact and the interdependence of various factors that characterize administrative capacity. The study is conducted with a prepared questionnaire, in the format of direct interviews with the respondents. The sample comprises 50 respondents, representing: general managers of industrial or trade enterprises; logistic managers of industrial or trade enterprises; general managers of forwarding companies, with either own or hired transport; experts from the Bulgarian Association of Logistics; logistic lobbyists; and scientists in the relevant area. The data were gathered over 3 months, then organized with a specialized software program and analyzed against preset criteria. Based on the results of this methodological toolbox, it can be claimed that there is a correlation between the individual criteria, and a connection between administrative capacity and the other factors that determine the competitiveness of the studied companies is established. In this paper, the authors present the results of the empirical research concerning the number of staff and the workload in the logistic departments of the enterprises. The experience related to the management of logistic processes and the competence of human resources are also discussed. Moreover, the overload level of logistic specialists is analyzed as one of the main sources of mistakes and lost clients. The paper stands behind the thesis that forming an effective and efficient administrative capacity, based on the number, qualification, experience and motivation of the staff in logistic companies, is indispensable. The paper ends with recommendations on the qualification and experience of the specialists in logistic departments; on providing effective and efficient administrative capacity in the logistic departments; and on the interdependence of the human factor and the other factors that influence enterprise competitiveness.

Keywords: administrative capacity, human resources, logistic competitiveness, staff qualification

Procedia PDF Downloads 152
349 Apollo Clinical Excellence Scorecard (ACE@25): An Initiative to Drive Quality Improvement in Hospitals

Authors: Anupam Sibal

Abstract:

Whatever is measured tends to improve. With a view to objectively measuring and improving clinical quality across the Apollo Group Hospitals, the ACE@25 (Apollo Clinical Excellence@25) initiative was launched in January 2009. ACE@25 is a clinically balanced scorecard incorporating 25 clinical quality parameters involving complication rates, mortality rates, one-year survival rates and average length of stay after major procedures such as liver and renal transplant, CABG, TKR, THR, TURP, PTCA, endoscopy, large bowel resection and MRM, covering all major specialties. Also included are hospital-acquired infection rates, pain satisfaction and medication errors. Benchmarks have been chosen from the world's best hospitals. There are weighted scores for outcomes, color-coded green, orange and red, and the maximum cumulative score is 100. Data are reported monthly by 43 group hospitals online on the Lighthouse platform. Action-taken reports for parameters falling in red are submitted quarterly and reviewed by the board. An audit team audits the data at all locations every six months. Scores are linked to the appraisal of the medical head, and there is an "ACE@25" Champion Award for the highest scorer. Scores for different parameters varied from green to red at the start of the initiative. Most hospitals improved their scores over the following four years on parameters that had been red or orange at the start of the initiative. The overall score for the group increased from 72 in 2010 to 81 in 2015.

Keywords: benchmarks, clinical quality, lighthouse, platform, scores

Procedia PDF Downloads 301
348 Development of Residual Power Series Methods for Efficient Solutions of Stiff Differential Equations

Authors: Gebreegziabher Hailu

Abstract:

This paper presents the development of residual power series methods aimed at efficiently solving stiff differential equations, which pose significant challenges in numerical analysis due to their rapid changes in solution behavior. The RPSM is a numerical approach that generates polynomial-based approximate solutions without the need for linearization, discretization, or perturbation techniques, making it straightforward to implement and less prone to computational errors. We introduce an approach that utilizes power series expansions combined with residual minimization techniques to enhance convergence and stability. By analyzing the theoretical foundations of stiffness, we delve into the formulation of the residual power series method, detailing how it effectively captures the dynamics of stiff systems while maintaining computational efficiency. Numerical experiments demonstrate the method's superiority in terms of accuracy and computational cost when compared to traditional methods like implicit Runge-Kutta or multistep techniques. We also explore adaptive strategies within our framework to automatically adjust parameters based on the stiffness characteristics of the problem at hand. Ultimately, our findings contribute to the broader toolkit for tackling stiff differential equations, offering a robust alternative that promises to streamline computational workflows in various applied mathematics and engineering contexts.
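
To make the RPSM idea concrete, the following sketch (not the authors' implementation) builds the series coefficients for y' = f(t, y), y(0) = y0, by forcing successive derivatives of the residual to vanish at t = 0, exactly as described: no linearization, discretization, or perturbation is involved. The test problem and truncation order are chosen for illustration.

```python
import sympy as sp

t = sp.symbols('t')

def rpsm(f, y0, order=8):
    # Residual power series for y' = f(t, y), y(0) = y0: the (n+1)-th
    # coefficient is fixed by forcing the n-th derivative of the residual
    # to vanish at t = 0.
    coeffs = [sp.Integer(y0)]
    for n in range(order):
        c = sp.symbols('c%d' % (n + 1))
        y_trial = sum(ck * t**k for k, ck in enumerate(coeffs)) + c * t**(n + 1)
        residual = sp.diff(y_trial, t) - f(t, y_trial)
        coeffs.append(sp.solve(sp.diff(residual, t, n).subs(t, 0), c)[0])
    return sum(ck * t**k for k, ck in enumerate(coeffs))

# Stiff linear test problem: y' = -50*(y - cos t), y(0) = 0
y_series = rpsm(lambda t, y: -50 * (y - sp.cos(t)), 0, order=8)
print(y_series)
print(sp.N(y_series.subs(t, 0.02)))   # series value near t = 0
```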

Keywords: residual power series methods, stiff differential equations, numerical approach, Runge-Kutta methods

Procedia PDF Downloads 24
347 Problems in English into Thai Translation Normally Found in Thai University Students

Authors: Anochao Phetcharat

Abstract:

This research aims to study problems of basic translation knowledge, particularly from English into Thai. The researcher used 38 second-year non-English-speaking students of Suratthani Rajabhat University as samples. The samples were required to translate an A4-sized article from English into Thai, assigned as part of BEN0202 Translation for Business, a required subject for the Business English Department, which was also taught by the researcher. After completion of the translation, numerous problems were found, and the researcher grouped them into 4 major types. The problems commonly occurring in English-Thai translation work are the lack of knowledge of parts of speech, word-by-word translation, misspellings, and poor knowledge of English language structure. However, this research is currently in the process of data analysis and shall be completed by the beginning of August. The researcher nevertheless predicts that all the above-mentioned problems will support the researcher's hypotheses, which are: 1) the lack of knowledge of parts of speech causes mistranslation; 2) employing a word-by-word translation technique largely results in mistranslation; 3) misspellings yield mistranslation; and 4) poor knowledge of English language structure also brings about translation errors. The research also predicts that, of all the aforementioned problems, the following are found most often, in this order: poor knowledge of English language structure, word-by-word translation, the lack of knowledge of parts of speech, and misspellings.

Keywords: problem, student, Thai, translation

Procedia PDF Downloads 438
346 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE

Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao

Abstract:

For impact monitoring of distributed structures, the traditional positioning methods are based on time difference of arrival and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it in the blackboard module; (2) the triangulation agent obtains its initial parameters by accessing the initial point; (3) the triangulation agent continually accesses the blackboard module to update its initial parameters and also logs its calculated point onto the blackboard; (4) when the subsequently calculated point and the initial calculated point are within the allowable error, the whole coordination fusion process is finished. This paper presents a multi-agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with agents running in each container. Because of JADE's mature management and debugging tools, it is very convenient to handle complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
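
A minimal Python sketch of the blackboard fusion loop, with both locating agents reduced to hypothetical stand-in functions (the real agents would run in JADE containers), may clarify steps (1) to (4):

```python
import math

class Blackboard:
    # Shared memory through which the two locating agents coordinate
    def __init__(self):
        self.points = []
    def post(self, point):
        self.points.append(point)
    def latest(self):
        return self.points[-1]

def arc_agent_estimate(sensor_times):
    # Hypothetical four-point arc method: returns an initial impact estimate
    return (1.02, 0.97)

def triangulation_agent_refine(initial, sensor_times):
    # Hypothetical triangulation refinement seeded from the blackboard point;
    # here it simply pulls the estimate toward a fixed reference solution
    ref = (1.00, 1.00)
    return tuple(p + 0.5 * (r - p) for p, r in zip(initial, ref))

def fuse(sensor_times, tol=1e-3, max_iter=50):
    bb = Blackboard()
    bb.post(arc_agent_estimate(sensor_times))              # step (1)
    for _ in range(max_iter):                              # steps (2)-(3)
        prev = bb.latest()
        bb.post(triangulation_agent_refine(prev, sensor_times))
        if math.dist(prev, bb.latest()) < tol:             # step (4)
            break
    return bb.latest()

print(fuse(sensor_times=None))
```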

Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE

Procedia PDF Downloads 178
345 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively by using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed, in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
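
As a rough illustration of the data-generating side only, the sketch below simulates one standard BINARMA(1,1) specification, X_t = a ∘ X_{t-1} + R_t + b ∘ R_{t-1} with ∘ the binomial thinning operator, using a common-shock construction for the correlated Poisson innovations; all parameter values are invented and the non-stationary covariate structure of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(alpha, x):
    # Binomial thinning operator: alpha o x ~ Binomial(x, alpha)
    return rng.binomial(x, alpha)

def simulate_binarma(n, a=(0.4, 0.3), b=(0.2, 0.25), lam=(2.0, 1.5), lam0=0.8):
    # X_t = a o X_{t-1} + R_t + b o R_{t-1}; innovations R_t are bivariate
    # Poisson via a common shock, which induces the cross-correlation.
    x = np.zeros((n, 2), dtype=int)
    x_prev = np.zeros(2, dtype=int)
    r_prev = np.zeros(2, dtype=int)
    for t in range(n):
        r = rng.poisson(lam) + rng.poisson(lam0)   # correlated innovations
        for j in range(2):
            x[t, j] = thin(a[j], x_prev[j]) + r[j] + thin(b[j], r_prev[j])
        x_prev, r_prev = x[t], r
    return x

series = simulate_binarma(5000)
print("sample cross-correlation:", round(float(np.corrcoef(series.T)[0, 1]), 3))
```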

Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood (CML)

Procedia PDF Downloads 129
344 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

Cloud storage services allow data to be stored in the cloud and shared across multiple users. However, unexpected hardware/software failures and human errors can easily cause the stored data to be lost or corrupted, affecting the integrity of data in the cloud. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. But public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can verify shared data integrity without retrieving the entire file; on demand, however, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in a static group, whereas in a dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they have already signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 392
343 Accuracy of Autonomous Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS can have its accuracy degraded at any time, or the GNSS signal may be lost entirely. In addition, there is the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem, because if the GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and the image space (column and line of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in positioning and 0.6° in camera attitude are verified, so navigation through imagery can reach accuracy equal to or better than that of GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation.
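
The spatial resection step described here, recovering the perspective center and attitude from homologous object-space/image-space points, can be sketched with OpenCV's PnP solver; the point coordinates and interior orientation below are purely illustrative, and the paper's own adjustment may differ.

```python
import numpy as np
import cv2

# Hypothetical homologous points: object space as local E, N offsets (m) from
# the orthophoto plus DSM altitude; image space as (column, line) in pixels.
object_pts = np.array([[0.0, 0.0, 12.3],
                       [250.0, -40.0, 8.9],
                       [110.0, -210.0, 15.1],
                       [-80.0, -160.0, 10.0],
                       [60.0, 90.0, 11.2],
                       [180.0, 140.0, 9.5]])
image_pts = np.array([[1204.0, 832.0], [2855.0, 901.0], [1930.0, 2140.0],
                      [760.0, 1805.0], [1610.0, 540.0], [2410.0, 300.0]])

# Assumed interior orientation: focal length and principal point in pixels
K = np.array([[3600.0, 0.0, 2000.0],
              [0.0, 3600.0, 1500.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
print("perspective center (E, N, h):", (-R.T @ tvec).ravel().round(2))
print("attitude (Rodrigues vector, rad):", rvec.ravel().round(4))
```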

Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS

Procedia PDF Downloads 191
342 Blockchain Technology for Secure and Transparent Oil and Gas Supply Chain Management

Authors: Gaurav Kumar Sinha

Abstract:

The oil and gas industry, characterized by its complex and global supply chains, faces significant challenges in ensuring security, transparency, and efficiency. Blockchain technology, with its decentralized and immutable ledger, offers a transformative solution to these issues. This paper explores the application of blockchain technology in the oil and gas supply chain, highlighting its potential to enhance data security, improve transparency, and streamline operations. By leveraging smart contracts, blockchain can automate and secure transactions, reducing the risk of fraud and errors. Additionally, the integration of blockchain with IoT devices enables real-time tracking and monitoring of assets, ensuring data accuracy and integrity throughout the supply chain. Case studies and pilot projects within the industry demonstrate the practical benefits and challenges of implementing blockchain solutions. The findings suggest that blockchain technology can significantly improve trust and collaboration among supply chain participants, ultimately leading to more efficient and resilient operations. This study provides valuable insights for industry stakeholders considering the adoption of blockchain technology to address their supply chain management challenges.
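
A toy hash-chained ledger (not a production blockchain, and not tied to any particular platform) illustrates the immutability property that underpins the custody-tracking use case described above:

```python
import hashlib
import json
import time

def sha256(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class Ledger:
    # Append-only chain: each block commits to its predecessor's hash, so
    # tampering with any custody record invalidates every later block.
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "event": "genesis", "ts": 0}]

    def append(self, event: dict):
        self.chain.append({"index": len(self.chain),
                           "prev": sha256(self.chain[-1]),
                           "event": event,
                           "ts": time.time()})

    def verify(self) -> bool:
        return all(self.chain[i + 1]["prev"] == sha256(self.chain[i])
                   for i in range(len(self.chain) - 1))

ledger = Ledger()
ledger.append({"cargo": "crude batch 0417", "custody": "producer -> pipeline"})
ledger.append({"cargo": "crude batch 0417", "custody": "pipeline -> terminal"})
print(ledger.verify())                            # True
ledger.chain[1]["event"]["custody"] = "forged"    # simulated tampering
print(ledger.verify())                            # False: detected
```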

Keywords: blockchain technology, oil and gas supply chain, data security, transparency, smart contracts, IoT integration, real-time tracking, asset monitoring, fraud reduction, supply chain efficiency, data integrity, case studies, industry implementation, trust, collaboration

Procedia PDF Downloads 37
341 Progressing Institutional Quality Assurance and Accreditation of Higher Education Programmes

Authors: Dominique Parrish

Abstract:

Globally, higher education institutions are responsible for the quality assurance and accreditation of their educational programmes (Courses). The primary purpose of these activities is to ensure that the educational standards of the governing higher education authority are met and the quality of the education provided to students is assured. Despite policies and frameworks being established in many countries, to improve the veracity and accountability of quality assurance and accreditation processes, there are reportedly still mistakes, gaps and deficiencies in these processes. An analysis of Australian universities’ quality assurance and accreditation processes noted that significant improvements were needed in managing these processes and ensuring that review recommendations were implemented. It has also been suggested that the following principles are critical for higher education quality assurance and accreditation to be effective and sustainable: academic standards and performance outcomes must be defined, attainable and monitored; those involved in providing the higher education must assume responsibility for the associated quality assurance and accreditation; potential academic risks must be identified and management solutions developed; and the expectations of the public, governments and students should be considered and incorporated into Course enhancements. This phenomenological study, which was conducted in a Faculty of Science, Medicine and Health in an Australian university, sought to systematically and iteratively develop an effective quality assurance and accreditation process that integrated the evidence-based principles of success and promoted meaningful and sustainable change. Qualitative evaluative feedback was gathered, over a period of eleven months (January - November 2014), from faculty staff engaged in the quality assurance and accreditation of forty-eight undergraduate and postgraduate Courses. Reflexive analysis was used to analyse the data and inform ongoing modifications and developments to the assurance and accreditation process as well as the associated supporting resources. The study resulted in the development of a formal quality assurance and accreditation process together with a suite of targeted resources that were identified as critical for success. The research findings also provided some insights into the institutional enablers that were antecedents to successful quality assurance and accreditation processes as well as meaningful change in the educational practices of academics. While longitudinal data will be collected to further assess the value of the assurance and accreditation process on educational quality, early indicators are that there has been a change in the pedagogical perspectives and activities of academic staff and growing momentum to explore opportunities to further enhance and develop Courses. This presentation will explain the formal quality assurance and accreditation process as well as the component parts, which resulted from this study. The targeted resources that were developed will be described, the pertinent factors that contributed to the success of the process will be discussed and early indicators of sustainable academic change as well as suggestions for future research will be outlined.

Keywords: academic standards, quality assurance and accreditation, phenomenological study, process, resources

Procedia PDF Downloads 377
340 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs

Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle

Abstract:

Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
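
A minimal sketch of the observation-versus-forecast comparison, with invented column names and values standing in for the merged DCNet/NAM records; it computes the bias and RMSE per variable and applies the kind of simple mean-bias adjustment the proposed methodology aims at (wind-direction differences are treated linearly here for brevity, though a circular difference would be used in practice):

```python
import numpy as np
import pandas as pd

# Invented merged hourly records: DCNet observations vs NAM forecasts at the
# Herbert C. Hoover Building station (column names and values are assumptions)
df = pd.DataFrame({
    "ws_obs": [3.1, 4.2, 2.8, 5.0, 3.7],            # observed wind speed (m/s)
    "ws_nam": [4.0, 4.9, 3.5, 5.8, 4.1],            # NAM wind speed (m/s)
    "wd_obs": [210.0, 225.0, 198.0, 240.0, 215.0],  # wind direction (deg)
    "wd_nam": [222.0, 230.0, 210.0, 252.0, 224.0],
})

def error_stats(obs, mod):
    err = mod - obs
    return {"bias": err.mean(), "rmse": np.sqrt((err ** 2).mean())}

stats = {v: error_stats(df[v + "_obs"], df[v + "_nam"]) for v in ("ws", "wd")}
print(pd.DataFrame(stats).round(2))

# The kind of simple adjustment the methodology targets: remove the mean bias
# from NAM wind speed before it drives an urban dispersion calculation.
df["ws_nam_adj"] = df["ws_nam"] - stats["ws"]["bias"]
```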

Keywords: meteorological data, Washington D.C., DCNet data, NAM model

Procedia PDF Downloads 234
339 Design and Implementation of PD-NN Controller Optimized Neural Networks for a Quad-Rotor

Authors: Chiraz Ben Jabeur, Hassene Seddik

Abstract:

In this paper, a complete approach to the modeling and control of a four-rotor unmanned air vehicle (UAV), known as a quad-rotor aircraft, is presented. A PD controller and a PD controller optimized by neural networks (PD-NN) are developed and applied to control the quad-rotor. The goal of this work is to design a smart self-tuning PD controller, based on neural networks, able to supervise the quad-rotor for optimized behavior while tracking the desired trajectory. Many challenges can arise if the quad-rotor navigates in hostile environments presenting irregular disturbances in the form of wind, added to the model on each axis. Thus, the quad-rotor is subject to three-dimensional, unknown, static/varying wind disturbances. The quad-rotor has to perform tasks quickly while ensuring stability and accuracy, and must react rapidly in its decision-making when facing disturbances. This technique offers advantages over conventional control methods such as the plain PD controller. Simulation results, obtained in the Matlab/Simulink environment, are based on a comparative study between the PD and PD-NN controllers under wind disturbances, which are applied with several degrees of strength to test the quad-rotor's behavior. These simulation results are satisfactory and demonstrate the effectiveness of the proposed PD-NN approach: this controller has smaller errors than the PD controller and a better capability to reject disturbances, and it has proven to be highly robust and efficient when facing turbulence in the form of wind disturbances.
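
As a schematic of the baseline controller only, the following one-degree-of-freedom sketch runs a discrete PD altitude loop on a unit-mass model with a step wind gust; in the paper's PD-NN scheme, the fixed gains kp and kd would instead be retuned online by the neural network. All numbers are assumptions.

```python
import numpy as np

def simulate_pd(kp=8.0, kd=4.0, dt=0.01, t_end=10.0, wind_gust=2.0):
    # Unit-mass vertical channel, gravity assumed pre-trimmed; a constant
    # wind force switches on at t = 5 s. A fixed-gain PD loop leaves a
    # steady-state offset under the gust, which motivates adaptive gains.
    n = int(t_end / dt)
    z, vz = 0.0, 0.0          # altitude (m) and vertical speed (m/s)
    z_ref = 1.0               # desired altitude (m)
    log = np.zeros(n)
    for i in range(n):
        u = kp * (z_ref - z) - kd * vz          # PD control force
        d = wind_gust if i * dt > 5.0 else 0.0  # step disturbance
        vz += (u + d) * dt                      # unit mass: a = u + d
        z += vz * dt
        log[i] = z
    return log

trace = simulate_pd()
print("altitude at t = 4.9 s (pre-gust): %.3f m" % trace[489])
print("altitude at t = 9.9 s (post-gust): %.3f m" % trace[989])  # ~ 1 + d/kp
```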

Keywords: hostile environment, PD and PD-NN controllers, quad-rotor control, robustness against disturbance

Procedia PDF Downloads 137
338 A Spatial Approach to Model Mortality Rates

Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang

Abstract:

Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore often the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually high or low mortality rates compared with their neighbors, where the "location" of mortality rates is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method, spatial scan statistics, a local statistical test based on the likelihood ratio test, to evaluate whether there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of the age parameters not being constant. Next, we show that adding the cluster effect can solve this non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
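
For reference, the classical Lee–Carter fit that the proposed cluster term extends can be written in a few lines of NumPy (synthetic data, standard SVD estimation with the usual normalization sum(b_x) = 1); cells whose residuals stand out against their age-time neighbors are the candidates the spatial scan statistic would flag.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lee_carter(log_m):
    # log m(x,t) = a_x + b_x * k_t, estimated from the first singular pair
    # of the centered log-rate matrix (ages in rows, years in columns)
    a = log_m.mean(axis=1)
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()          # normalization: sum(b) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()
    return a, b, k

# Synthetic rates: 91 ages x 50 years with a linear decline in the period index
ages, years = np.arange(91), np.arange(50)
log_m = (-8 + 0.08 * ages)[:, None] + np.outer(np.full(91, 1 / 91), 30 - 1.2 * years)
log_m += rng.normal(0, 0.02, log_m.shape)

a, b, k = fit_lee_carter(log_m)
resid = log_m - (a[:, None] + np.outer(b, k))
print("max |residual|:", round(float(np.abs(resid).max()), 3))  # ~ noise level
```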

Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection

Procedia PDF Downloads 171
337 Improving Second Language Speaking Skills via Video Exchange

Authors: Nami Takase

Abstract:

Computer-mediated-communication allows people to connect and interact with each other as if they were sharing the same space. The current study examined the effects of using video letters (VLs) on the development of second language speaking skills of Common European Framework of Reference for Languages (CEFR) A1 and CEFR B2 level learners of English as a foreign language. Two groups were formed to measure the impact of VLs. The experimental and control groups were given the same topic, and both groups worked with a native English-speaking university student from the United States of America. Students in the experimental group exchanged VLs, and students in the control group used video conferencing. Pre- and post-tests were conducted to examine the effects of each practice mode. The transcribed speech-text data showed that the VL group had improved speech accuracy scores, while the video conferencing group had increased sentence complexity scores. The use of VLs may be more effective for beginner-level learners because they are able to notice their own errors and replay videos to better understand the native speaker’s speech at their own pace. Both the VL and video conferencing groups provided positive feedback regarding their interactions with native speakers. The results showed how different types of computer-mediated communication impacts different areas of language learning and speaking practice and how each of these types of online communication tool is suited to different teaching objectives.

Keywords: computer-assisted language learning, computer-mediated communication, English as a foreign language, speaking

Procedia PDF Downloads 100
336 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis

Authors: Indira Kothalanka

Abstract:

For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, so they cannot build up enough air pressure in the mouth for speech; it is therefore common for them to have speech problems. Common cleft-type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper-, hypo- and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems persist even after palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. Spectrographic analysis is used to investigate the compensatory articulatory behaviour of pressure consonants in the speech of 10 Telugu-speaking individuals aged between 7 and 17 years with a history of cleft palate. Telugu is a Dravidian language spoken in the Andhra Pradesh and Telangana states of India; it has the third largest number of native speakers in India and is the most spoken Dravidian language. The speech of the informants is analysed using a single-word list, sentences, a passage, and conversation. Spectrographic analysis is carried out using PRAAT, a speech analysis software package. The place and manner of articulation of consonant sounds are studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvulars, velar stops and nasal fricatives, which are non-native in Telugu.
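
The PRAAT analyses described here can also be scripted; the sketch below uses the parselmouth Python bindings to compute a spectrogram and read formant values near a consonant burst. The file name, window settings, and burst time are placeholders, not values from the study.

```python
# pip install praat-parselmouth
import parselmouth

snd = parselmouth.Sound("stop_consonant_token.wav")   # placeholder recording
spectrogram = snd.to_spectrogram(window_length=0.005, maximum_frequency=8000)
print("spectrogram bins:", spectrogram.values.shape)

# Formant tracks are one of the acoustic cues used to judge place of articulation
formants = snd.to_formant_burg()
t = 0.12   # hypothetical instant near the consonant burst (s)
print("F1, F2 near burst:",
      formants.get_value_at_time(1, t),
      formants.get_value_at_time(2, t))
```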

Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT

Procedia PDF Downloads 443
335 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical performance of our proposed method is also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
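
The bootstrap baseline against which the profile-likelihood standard errors are compared can be sketched generically; the `fit` routine below is a toy stand-in for fitting the logistic-incidence/Cox-latency cure model (which SMCURE performs via EM), not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_se(data, fit, n_boot=500):
    # Nonparametric bootstrap: refit on resampled data and take the standard
    # deviation of the estimates across replicates.
    n = len(data)
    est = np.array([fit(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return est.std(axis=0, ddof=1)

fit = lambda d: np.array([d.mean(), np.log(d.var())])  # toy two-parameter estimator
data = rng.exponential(2.0, size=200)
print("bootstrap SEs:", bootstrap_se(data, fit).round(3))
```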

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 144
334 Merits and Demerits of Participation of Fellow Examinees as Subjects in Observed Structured Practical Examination in Physiology

Authors: Mohammad U. A. Khan, Md. D. Hossain

Abstract:

Background: Departments of Physiology find it difficult to manage 'subjects' for practical procedures. To avoid this difficulty, fellow examinees from another group may be used as subjects. Objective: To find out the merits and demerits of using fellow examinees as subjects in practical procedures. Method: This cross-sectional descriptive study was conducted in the Department of Physiology, Noakhali Medical College, Bangladesh, during May-June 2014. Forty-two first-year undergraduate medical students from a selected public medical college of Bangladesh were enrolled purposively. Consent was obtained from the students and the authority. Eighteen of them were selected as subjects and designated subject-examinees. The other fellow examinees (non-subjects) examined their blood pressure and pulse as part of an 'observed structured practical examination' (OSPE). The opinions of all examinees regarding the merits and demerits of using fellow examinees as subjects in the practical procedure were recorded. Result: Examinees stated that they could perform the practical procedure without nervousness (24/42, 57.14%), accurately and comfortably (14/42, 33.33%), and with subjects available without wasting time (2/42, 4.76%). Nineteen students (45.24%) found no disadvantage, and 2 (4.76%) felt embarrassed when the subject was of the opposite sex. The subject-examinees reported that they could learn from the errors made by their fellow examinees (11/18, 61.1%). 75% of non-subject examinees expressed their willingness to be subjects so that they could learn from their fellows' errors. Conclusion: Using fellow examinees as subjects is beneficial for both the non-subject and subject examinees. Funding sources: Navana, Beximco, Unihealth, Square & Acme Pharma, Bangladesh Ltd.

Keywords: physiology, teaching, practical, OSPE

Procedia PDF Downloads 152
333 Comparative Evaluation of Pharmacologically Guided Approaches (PGA) to Determine Maximum Recommended Starting Dose (MRSD) of Monoclonal Antibodies for First Clinical Trial

Authors: Ibraheem Husain, Abul Kalam Najmi, Karishma Chester

Abstract:

First-in-human (FIH) studies are a critical step in the clinical development of any molecule that has shown therapeutic promise in preclinical evaluations; the translation of preclinical research and safety studies into clinical development is a crucial step in the successful development of monoclonal antibodies for the treatment of human diseases. Therefore, the USFDA approach and nine pharmacologically guided approaches (PGA) (simple allometry, maximum life-span potential, brain weight, rule of exponent (ROE), two-species methods and one-species methods) were compared for determining the maximum recommended starting dose (MRSD) for first-in-human clinical trials, using four drugs: Denosumab, Bevacizumab, Anakinra and Omalizumab. In our study, the predicted pharmacokinetic (PK) parameters and the estimated first-in-human doses of the antibodies were compared with the observed human values. The study indicated that the clearance and volume of distribution of antibodies can be predicted in humans with reasonable accuracy, and a good estimate of the first human dose can be obtained from the predicted human clearance and volume of distribution. A pictorial method-evaluation chart, based on fold errors, was also developed for the simultaneous evaluation of the various methods.
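
For orientation, the simplest of the compared approaches can be written down directly; the sketch below implements simple allometry for clearance and the body-surface-area-based HED/MRSD calculation of the FDA 2005 guidance, with a hypothetical monkey NOAEL as input. It is a schematic of the textbook formulas, not of the paper's full comparison.

```python
def hed_mg_per_kg(animal_dose_mg_per_kg, animal_bw_kg, human_bw_kg=60.0):
    # Body-surface-area scaling per the FDA (2005) guidance:
    # HED = animal dose * (BW_animal / BW_human) ** 0.33
    return animal_dose_mg_per_kg * (animal_bw_kg / human_bw_kg) ** 0.33

def mrsd_mg_per_kg(animal_noael_mg_per_kg, animal_bw_kg, safety_factor=10.0):
    # MRSD = HED at the NOAEL divided by a default ten-fold safety factor
    return hed_mg_per_kg(animal_noael_mg_per_kg, animal_bw_kg) / safety_factor

def human_cl_simple_allometry(cl_animal, animal_bw_kg, human_bw_kg=60.0, b=0.75):
    # Simple allometry CL = a * BW^b with b ~ 0.75; the rule of exponent (ROE)
    # corrects the prediction (e.g., by MLP or brain weight) depending on b.
    return cl_animal * (human_bw_kg / animal_bw_kg) ** b

# Hypothetical example: cynomolgus monkey NOAEL 50 mg/kg at 3 kg body weight
print("MRSD ~ %.1f mg/kg" % mrsd_mg_per_kg(50.0, 3.0))
```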

Keywords: clinical pharmacology (CPH), clinical research (CRE), clinical trials (CTR), maximum recommended starting dose (MRSD), clearance and volume of distribution

Procedia PDF Downloads 374
332 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert low-resolution video/images to high-resolution video/images. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames is possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculation of slopes using the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
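
A much-simplified sketch of the idea, interpolating new rows only and skipping the bilateral post-filter: Prewitt derivatives give the gradient orientation, uniform pixels get plain line averaging, and edge pixels average along the (quantized) edge direction. The threshold and the one-pixel slope quantization are simplifications of the paper's slope tracing, not its actual parameters.

```python
import numpy as np
from scipy import ndimage

def upscale_rows_2x(img, edge_thresh=30.0):
    # Insert one new row between each pair of existing rows: line averaging
    # in uniform regions, directional averaging along the local edge
    # orientation (from Prewitt derivatives) at edge pixels.
    img = img.astype(float)
    gx = ndimage.prewitt(img, axis=1)
    gy = ndimage.prewitt(img, axis=0)
    mag = np.hypot(gx, gy)
    h, w = img.shape
    out = np.zeros((2 * h - 1, w))
    out[::2] = img
    for i in range(h - 1):
        for j in range(w):
            if mag[i, j] < edge_thresh:           # uniform region
                out[2 * i + 1, j] = 0.5 * (img[i, j] + img[i + 1, j])
            else:                                 # interpolate along the edge
                dx = -gy[i, j] / (gx[i, j] + 1e-9)   # edge slope per row step
                shift = int(np.clip(round(dx), -1, 1))
                ja = min(max(j + shift, 0), w - 1)
                jb = min(max(j - shift, 0), w - 1)
                out[2 * i + 1, j] = 0.5 * (img[i, ja] + img[i + 1, jb])
    return out

test = np.tile(np.linspace(0, 255, 16), (12, 1))
print(upscale_rows_2x(test).shape)   # (23, 16)
```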

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 261
331 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language; however, manual test case generation is inefficient and subject to errors. In this paper, we have presented a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We have also presented a detailed methodology for writing our test-ready user stories. Our tool "Test-o-Matic" automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, which is an open-source web-based application. The effectiveness of our approach is evaluated by seeding faults in the open-source case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for the automated testing of web applications.
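
A drastically reduced stand-in for the Test-o-Matic transformation, with one assumed story template, one action verb, and a two-step Selenium executor, shows the shape of the pipeline; the real tool's template and step vocabulary are richer, and the URL and field name here are placeholders.

```python
import re
# pip install selenium; a local chromedriver is assumed for the executor
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumed restricted template for this sketch:
#   As a <role>, I want to <verb> "<value>" into <field> so that <goal>
STORY = 'As a visitor, I want to type "selenium" into q so that I can search'
PATTERN = re.compile(r'I want to (\w+) "([^"]+)" into (\w+)')

def story_to_steps(story, base_url="https://example.org"):
    # Much-simplified stand-in for the user-story-to-test-case transformation
    verb, value, field = PATTERN.search(story).groups()
    return [("open", base_url), (verb, field, value)]

def run(steps):
    driver = webdriver.Chrome()
    try:
        for step in steps:
            if step[0] == "open":
                driver.get(step[1])
            elif step[0] == "type":
                driver.find_element(By.NAME, step[1]).send_keys(step[2])
    finally:
        driver.quit()

steps = story_to_steps(STORY)
print(steps)   # [('open', 'https://example.org'), ('type', 'q', 'selenium')]
# run(steps)   # execute against the real application under test
```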

Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing

Procedia PDF Downloads 387
330 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment

Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen

Abstract:

Indel analysis for RNA sequencing of clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of the analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. This system includes local sequence assembly and realignment, target-point snapshot, and image-based recognition processes. We integrated high-confidence indel datasets from several known databases as a training set to improve the accuracy of the image processing, and added a bioinformatics processing module to annotate and filter indel artifacts. The system then automatically generates a report, including data quality levels and image results. Sanger sequencing verification of the reference indel mutations of cell line NA12878 showed that the process can achieve 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency improved significantly. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. This result may be useful for RNA studies of clinical samples with microsatellite instability in immunotherapy in the future.

Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome

Procedia PDF Downloads 191
329 EasyModel: Web-Based Bioinformatics Software for Protein Modeling Based on Modeller

Authors: Alireza Dantism

Abstract:

Presently, describing the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be facilitated by studying the three-dimensional structure of the protein. In the absence of an experimental protein structure, comparative modeling often provides a useful three-dimensional model of the protein that is based on at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (target) mainly based on its alignment with one or more proteins of known structure (templates). Comparative modeling consists of five main steps: 1. identification of similarity between the target sequence and at least one known template structure; 2. alignment of the target sequence and template(s); 3. building a model based on the alignment with the selected template(s); 4. prediction of model errors; 5. optimization of the built model. There are many computer programs and web servers that automate the comparative modeling process. One of the most important advantages of these servers is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without the need for programming knowledge; some experts, however, prefer to use programming knowledge and do their modeling manually, because in this way they can maximize the accuracy of the modeling. In this study, a web-based tool has been designed to predict the tertiary structure of proteins using the PHP and Python programming languages. This tool is called EasyModel. According to the user's inputs, EasyModel receives the desired unknown sequence (the target), a protein sequence file for a structure of known form (the template) that shares a percentage of similarity with the target sequence, and other settings; it then predicts the tertiary structure of the unknown sequence and presents the results in the form of graphs and constructed protein files.
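
Behind a web form like EasyModel's, a comparative-modelling run typically reduces to a short Modeller script of the following canonical form (file and entity names are placeholders):

```python
# Canonical Modeller comparative-modelling script of the kind a web tool
# generates and runs behind its form; file and entity names are placeholders.
from modeller import *
from modeller.automodel import *

log.verbose()
env = environ()
env.io.atom_files_directory = ['.']           # where template PDB files live

a = automodel(env,
              alnfile='target-template.ali',  # step 2: target/template alignment
              knowns='template_A',            # template(s) of known structure
              sequence='target',              # the unknown target sequence
              assess_methods=(assess.DOPE,))  # step 4: model error assessment
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()                                      # steps 3-5: build and optimize
```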

Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, modeller

Procedia PDF Downloads 97
328 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building

Authors: Aaditya U. Jhamb

Abstract:

Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores 8 factors that help determine the energy efficiency of a building (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution), with Tsanas and Xifara providing the dataset. The dataset comprises 768 different residential building models used to anticipate heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. The paper therefore studied the magnitude of the correlation between these input factors and the two output variables, using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least definitive model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the 8 input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
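
The modelling pipeline is easy to reproduce with scikit-learn on the public Tsanas-Xifara dataset (UCI "ENB2012"); the sketch below assumes the usual spreadsheet layout of eight feature columns followed by the heating- and cooling-load targets.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Tsanas & Xifara energy-efficiency data: 768 simulated buildings, eight
# design features, two targets; the column layout is assumed here.
df = pd.read_excel("ENB2012_data.xlsx")
X = df.iloc[:, :8]        # X1..X8: compactness ... glazing distribution
y = df.iloc[:, 8:10]      # Y1: heating load, Y2: cooling load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
mse = mean_squared_error(y_te, model.predict(X_te), multioutput="raw_values")
print("test MSE (heating, cooling):", mse.round(3))
```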

Keywords: energy efficient buildings, heating load, cooling load, machine learning models

Procedia PDF Downloads 96
327 Challenges in Translating Malay Idiomatic Expressions: A Study

Authors: Nor Ruba’Yah Binti Abd Rahim, Norsyahidah Binti Jaafar

Abstract:

Translating Malay idiomatic expressions into other languages presents unique challenges due to the deep cultural nuances and linguistic intricacies embedded within these expressions. This study examined these challenges through a two-pronged methodology: a comparative analysis using survey questionnaires and a quiz administered to 50 semester-6 students taking the Translation 1 course, and in-depth interviews with their lecturers. The survey aimed to capture students' experiences and difficulties in translating selected Malay idioms into English, highlighting common errors and misunderstandings. Complementing this, interviews with lecturers provided expert insights into the nuances of these expressions and effective translation strategies. The findings revealed that literal translations often fail to convey the intended meanings, underscoring the importance of cultural competence and contextual awareness. The study also identified key factors that contribute to successful translations, such as the translator's familiarity with both source and target cultures and their ability to adapt expressions creatively. This research contributes to the field of translation studies by offering practical recommendations for improving the translation of idiomatic expressions, thereby enhancing cross-cultural communication. The insights gained from this study are valuable for translators, educators, and students, emphasizing the need for a nuanced approach that respects the cultural richness of the source language while ensuring clarity in the target language.

Keywords: idiomatic expressions, cultural competence, translation strategies, cross-cultural communication, students’ difficulties

Procedia PDF Downloads 14
326 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models; if the optimization is conducted manually, there is a potentially dangerous consequence from human errors, and the time spent on input and on data extraction from output is significant. This paper presents an automation process introduced to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code changes the geological stratum and excavation depth under groundwater flow conditions in each 20 m section, and automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment in the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is turned on. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces manual labor to minimize error.
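
The control logic can be sketched independently of the Plaxis API; in the snippet below, run_model() is a toy stand-in for the Plaxis 2D remote-scripting calls (returning factor of safety, wall displacement, and pile bending moment), and all capacities and limits are invented, but the trial-and-error flow mirrors the one described.

```python
# Toy stand-in for the Plaxis 2D remote-scripting calls: returns
# (factor of safety, wall displacement in m, pile bending moment in kNm).
def run_model(chainage, pile_len, pile_size, props_on):
    retained = 8.0 + 4.0 * (1 - abs(chainage - 100) / 100)  # deepest mid-tunnel
    boost = 2.0 if props_on else 1.0
    fos = 1.5 * pile_len / retained * boost
    disp = 0.04 * retained / pile_len / boost
    moment = 60.0 * retained / pile_size
    return fos, disp, moment

PILE_CAPACITY_KNM = 800.0     # assumed pile bending capacity
MAX_PILE_LEN = 25.0           # assumed default maximum embedment (m)

def design_section(chainage, fos_req=1.4, disp_lim=0.025):
    pile_len, pile_size, props_on = 12.0, 0.88, False
    while True:
        fos, disp, moment = run_model(chainage, pile_len, pile_size, props_on)
        if moment > PILE_CAPACITY_KNM:
            pile_size += 0.12          # moment exceeds capacity: larger pile
        elif fos >= fos_req and disp <= disp_lim:
            return pile_len, pile_size, props_on
        elif pile_len < MAX_PILE_LEN:
            pile_len += 1.0            # trial and error on embedment length
        else:
            props_on = True            # maximum length reached: add props

designs = {ch: design_section(ch) for ch in range(0, 201, 20)}
print("mid-tunnel section:", designs[100])   # (pile_len, pile_size, props_on)
```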

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 51
325 Analytical Development of a Failure Limit and Iso-Uplift Curves for Eccentrically Loaded Shallow Foundations

Authors: N. Abbas, S. Lagomarsino, S. Cattari

Abstract:

Examining existing experimental results for shallow rigid foundations subjected to a vertical centric load (N), accompanied or not by a bending moment (M), two main non-linear mechanisms governing the cyclic response of the soil-foundation system can be distinguished: foundation uplift and soil yielding. A soil-foundation failure limit is defined as a domain of resistance in the two-dimensional (2D) load space (N, M) inside which lie all the admissible combinations of loads; the latter correspond to pure elastic, non-linear elastic or plastic behavior of the soil-foundation system, while the points lying on the failure limit correspond to combinations of loads leading to failure of the soil-foundation system. In this study, the proposed resistance domain is constructed analytically based on mechanics. Original elastic limit, uplift initiation limit and iso-uplift limits are constructed inside this domain. These limits predict the mechanisms activated for each combination of loads applied to the foundation. A comparison of the proposed failure limit with experimental tests existing in the literature shows interesting results. The developed uplift initiation limit and iso-uplift curves are also confronted with others already proposed in the literature and widely used due to the absence of alternatives; remarkable differences are noted, showing evident errors in the past proposals and relevant accuracy for those given in the present work.
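
For context, a commonly used textbook pair, the elastic no-tension uplift-onset limit e = M/N = B/6 and a parabolic failure envelope M = (B/2) N (1 - N/N_max) for a rigid strip footing of width B, can be plotted as follows; these are classical expressions shown for comparison, not the envelopes derived in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

B = 2.0          # footing width (m), assumed
N_max = 1000.0   # centric bearing capacity (kN), assumed

N = np.linspace(0.0, N_max, 200)
M_failure = 0.5 * B * N * (1.0 - N / N_max)   # classical parabolic envelope
M_uplift = N * B / 6.0                         # elastic no-tension limit, e = B/6

plt.plot(N, M_failure, label="failure limit")
plt.plot(N, np.minimum(M_uplift, M_failure), "--", label="uplift initiation")
plt.xlabel("N (kN)")
plt.ylabel("M (kN*m)")
plt.legend()
plt.show()
```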

Keywords: foundation uplift, iso-uplift curves, resistance domain, soil yield

Procedia PDF Downloads 383
324 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using SARA Algorithm

Authors: Muhammad Bilal, Zhongfeng Qiu

Abstract:

Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the Simplified Aerosol Retrieval Algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), the relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15·AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations fell within the EE. The results suggest that the SARA is robust and able to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite.
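
The validation statistics reported here are straightforward to compute; below is a small sketch with synthetic retrieval-versus-AERONET pairs (the real study uses collocated GOCI and AERONET data).

```python
import numpy as np

def validate_aod(aod_sat, aod_aeronet):
    # Correlation, RMSE, relative percent mean error, and the fraction of
    # retrievals within the envelope EE = +/-(0.05 + 0.15 * AOD_AERONET)
    r = np.corrcoef(aod_sat, aod_aeronet)[0, 1]
    diff = aod_sat - aod_aeronet
    rmse = np.sqrt(np.mean(diff ** 2))
    rpme = 100 * np.mean(np.abs(diff) / aod_aeronet)
    within_ee = 100 * np.mean(np.abs(diff) <= 0.05 + 0.15 * aod_aeronet)
    return r, rmse, rpme, within_ee

# Synthetic example pairs standing in for collocated GOCI/AERONET data
rng = np.random.default_rng(7)
truth = rng.uniform(0.1, 1.2, 300)
retrieved = truth + rng.normal(0, 0.05, 300)
print("R=%.2f  RMSE=%.3f  RPME=%.0f%%  within EE=%.0f%%"
      % validate_aod(retrieved, truth))
```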

Keywords: AERONET, AOD, SARA, GOCI, Beijing

Procedia PDF Downloads 171
323 Improving Working Memory in School Children through Chess Training

Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy

Abstract:

Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest-posttest with control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter-Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally following a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum and audio-visual learning methods, with hands-on chess training; they recorded their games on score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory and tactical principles. Pre-equivalence of means was established. Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory's capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.

Keywords: chess training, cognitive development, executive functions, school children, working memory

Procedia PDF Downloads 263