Search results for: network diagnostic tool
6116 The Value of Routine Terminal Ileal Biopsies for the Investigation of Diarrhea
Authors: Swati Bhasin, Ali Ahmed, Valence Xavier, Ben Liu
Abstract:
Aims: Diarrhea is a frequent reason for referral from general practitioners to the gastroenterology and surgical teams. To establish a diagnosis, these patients undergo colonoscopy. The current practice at our district general hospital is to perform random left and right colonic biopsies. National guidelines issued by the British Society of Gastroenterology advise that all patients presenting with chronic diarrhea should have an ileoscopy as an indicator of colonoscopy completion. Our primary aim was to determine whether a terminal ileum (TI) biopsy is required to establish a diagnosis of inflammatory bowel disease (IBD). Methods: Data were collected retrospectively from November 2018 to November 2019. The target population was patients who underwent colonoscopy for diarrhea. Demographic data and the endoscopic and histological findings of the TI were assessed and analyzed. Results: 140 patients with a mean age of 57 years (19-84) underwent a colonoscopy (M:F; 1:2.3). 92 patients had random colonic biopsies taken and, based on the histological results of these, 15 patients (16%) were diagnosed with IBD. The TI was successfully intubated in 40 patients, of whom 32 also had colonic biopsies taken; 8 patients did not have a colonic biopsy taken. Macroscopic abnormality in the TI was detected in 5 patients, all of whom were biopsied. Based on the histological results of the TI biopsy, 3 patients (12%) were diagnosed with IBD. All 3 of these patients also had colonic biopsies taken simultaneously, and these showed inflammation. None of the patients had a diagnosis of IBD confirmed on TI intubation alone, whether colonic biopsies were not taken or were negative. Conclusion: TI intubation is a highly skilled, time-consuming procedure with a higher risk of perforation which, as per our study, has little additional diagnostic value in finding IBD in patients with diarrhea if colonic biopsies are taken. We propose that diarrhea is a colonic symptom; therefore, colonic biopsies are positive for inflammation if the diarrhea is secondary to IBD. We conclude that IBD can be diagnosed with colonic biopsies alone. Keywords: biopsy, colon, IBD, terminal ileum
Procedia PDF Downloads 122
6115 Technological Approach in Question Formation for Assessment of Interviewees
Authors: S. Shujan, A. T. Rupasinghe, N. L. Gunawardena
Abstract:
Numerous studies have found a direct correlation between interview success and the nonverbal behavioral patterns an interviewee displays during the interview. In this study, we focus on formulating interview questions in such a way that the interviewee can be assessed through their answers using nonverbal behavioral cues. Of all the nonverbal behavioral factors identified, this study gives priority to ‘facial expression variations’, analyzed with the assistance of a facial expression analytics tool. The research proposes a novel approach to question formation for the assessment of interviewees in the ‘Software Industry’. Keywords: assessments, hirability, interviews, non-verbal behaviour patterns, question formation
Procedia PDF Downloads 318
6114 Predictive Analysis of the Stock Price Market Trends with Deep Learning
Authors: Suraj Mehrotra
Abstract:
The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance, especially the stock market, with artificial intelligence (AI) is a relatively recent development. Predicting how stocks will perform considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task a lot easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the test-set accuracy of four different models - linear regression, neural network, decision tree, and naïve Bayes - on different stocks (Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson), the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which makes sense: it indicates that the decision tree model likely overfitted the training set when used for the testing set. Keywords: machine learning, testing set, artificial intelligence, stock analysis
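As an illustration of the model comparison described above (not the paper's actual pipeline: the synthetic prices, feature choices, and the directional framing of "accuracy" are assumptions), a minimal scikit-learn sketch might look like this:

```python
# Hedged sketch: compare the four model families named in the abstract on a toy
# next-day "up/down" task built from synthetic OHLC data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 1000))              # synthetic closing prices
ohlc = np.column_stack([close + rng.normal(0, 0.5, 1000),    # open
                        close + np.abs(rng.normal(0, 1, 1000)),  # high
                        close - np.abs(rng.normal(0, 1, 1000)),  # low
                        close])                                  # close
X, y_close = ohlc[:-1], close[1:]            # today's OHLC -> tomorrow's close
y_dir = (y_close > close[:-1]).astype(int)   # 1 = price goes up
X_tr, X_te, yc_tr, yc_te, yd_tr, yd_te = train_test_split(
    X, y_close, y_dir, shuffle=False, test_size=0.2)

# Linear regression predicts tomorrow's close; accuracy is scored on the implied
# direction so all four models share one metric.
lr = LinearRegression().fit(X_tr, yc_tr)
lr_dir = (lr.predict(X_te) > X_te[:, 3]).astype(int)
print("linear regression", accuracy_score(yd_te, lr_dir))

for name, clf in [("naive Bayes", GaussianNB()),
                  ("decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("neural network", MLPClassifier((32,), max_iter=2000))]:
    clf.fit(X_tr, yd_tr)
    print(name, accuracy_score(yd_te, clf.predict(X_te)))
```

Note that an unconstrained decision tree fit on the training set reproduces it perfectly, which is the overfitting behaviour the abstract observes.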
Procedia PDF Downloads 95
6113 Prediction of Wind Speed by Artificial Neural Networks for Energy Application
Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui
Abstract:
In this work, the change in wind speed with altitude is calculated and described by an artificial neural network model. Measured data - wind speed and direction, temperature, and humidity at 10 m - are used as inputs, with the wind speed at 50 m above sea level as the target. Predicted wind speeds are compared with values extrapolated to 50 m above sea level. The results show that prediction by the artificial neural network method is very accurate. Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed
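A minimal sketch of this kind of vertical-extrapolation network is shown below; the synthetic data, network size, and the power-law profile used only to generate plausible targets are assumptions, not the study's measured dataset:

```python
# Hedged sketch: a feed-forward network maps 10 m measurements (speed, direction,
# temperature, humidity) to the 50 m wind speed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
speed10 = rng.gamma(2.0, 2.5, n)        # wind speed at 10 m [m/s]
direction = rng.uniform(0, 360, n)      # wind direction [deg]
temp = rng.normal(20, 6, n)             # temperature [deg C]
humidity = rng.uniform(20, 95, n)       # relative humidity [%]
# Power-law profile used here only to generate a plausible 50 m target.
speed50 = speed10 * (50 / 10) ** 0.14 + rng.normal(0, 0.3, n)

X = np.column_stack([speed10, direction, temp, humidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed50, test_size=0.25, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```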
Procedia PDF Downloads 693
6112 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to an analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) are unlikely to prove suitable for classic ML approaches. Furthermore, as scores of data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. OOHC act as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories, relating to patients contacting an OOHC, may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free-text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographical features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of a classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved upon by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for determining high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases, with an inverse document frequency of the lexemes related to these cases, we can determine what features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human interpretable appreciation of how our program classifies cases.Keywords: artificial neural networks, data-mining, machine learning, medical informatics
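For illustration only, one of the architectures compared in the abstract (a convolutional front end feeding an LSTM) can be sketched as follows; the vocabulary size, layer widths, and random "tokenised notes" are placeholders rather than the study's configuration or data:

```python
# Hedged sketch: CNN + LSTM text classifier for flagging likely frequent attenders.
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len = 5000, 200
X = np.random.randint(1, vocab_size, size=(512, max_len))  # tokenised triage notes (toy)
y = np.random.randint(0, 2, size=(512,))                   # 1 = likely frequent attender

model = models.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                           # sequence context; swap for GRU to compare
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```

Swapping the LSTM layer for a GRU, or removing the convolutional block, gives the other configurations the study compares.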
Procedia PDF Downloads 131
6111 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea of using computing services as commodity that can be delivered like other utilities e.g. electric and telephone has been floated, the scientific fraternity has diverted their research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of internet the demand of anytime, anywhere access of the resources that could be provisioned dynamically as a service, gave rise to the next generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computer paradigm, resulting in growing rate of applications in area of IT outsourcing. Besides catering the computational and storage demands, cloud computing has economically benefitted almost all the fields, education, research, entertainment, medical, banking, military operations, weather forecasting, business and finance to name a few. Smart grid is another discipline that direly needs to be benefitted from the cloud computing advantages. Smart grid system is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integration of smart devices. Cloud based smart grid can fulfill the storage requirement of unstructured and uncorrelated data generated by smart sensors as well as computational needs for self-healing, load balancing and demand response features. But, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure the critical information infrastructure like smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in smart grid cloud.Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
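The abstract stops at proposing the model; a hedged sketch of the general direction (a feed-forward ANN classifying traffic records as normal or attack) is given below. The feature set, labelling rule, and synthetic data are assumptions standing in for a smart grid cloud dataset, not the authors' design:

```python
# Hedged sketch: scaled features feeding a small MLP intrusion detector.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 4000
X = np.column_stack([
    rng.exponential(500, n),     # bytes per flow
    rng.integers(1, 200, n),     # packets per flow
    rng.uniform(0, 1, n),        # ratio of failed logins
    rng.integers(0, 65535, n),   # destination port
])
# Toy labelling rule so the example runs end to end.
y = ((X[:, 2] > 0.7) | (X[:, 0] > 2000)).astype(int)   # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ids = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                                  random_state=0))
ids.fit(X_tr, y_tr)
print(classification_report(y_te, ids.predict(X_te)))
```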
Procedia PDF Downloads 318
6110 Assessing Usability of Behavior Coaching Organizer
Authors: Nathaniel A. Hoston
Abstract:
Teacher coaching is necessary for improving student behaviors. While coaching technologies (e.g., bug-in-ear coaching, video coaching) can assist the coaching process, little is known about the usability of those tools. This study assessed the usability and perceived efficacy of the Behavior Coaching Organizer (BCO) using usability testing methods (i.e., concurrent think-aloud, retrospective probing) in a simulated learning environment. Participants found the BCO moderately usable while perceiving the tool as highly effective for addressing concerning student behaviors. Additionally, participants noted a general need for continued coaching support. The results indicate a need for further usability testing in education research. Keywords: behavioral interventions, Behavior Coaching Organizer, coaching technologies, usability methods
Procedia PDF Downloads 124
6109 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention
Authors: Lawrence Williams
Abstract:
As the mechanism that converts domain names to internet protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and the need to detect and prevent them is increasingly important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data behind each technique. A proposal about feasibility is made, and future research on these topics is proposed. Keywords: DNS, tunneling, exfiltration, botnet
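As a concrete example of one detection heuristic commonly covered in such surveys (not necessarily a technique this paper evaluates), unusually long, high-entropy query names can be flagged; the thresholds below are illustrative assumptions:

```python
# Hedged sketch: flag DNS query names whose subdomain portion is unusually long
# or has high Shannon entropy, a pattern typical of tunneled/exfiltrated data.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunnel(qname: str, max_subdomain_len=40, max_entropy=3.8) -> bool:
    labels = qname.rstrip(".").split(".")
    subdomain = ".".join(labels[:-2]) if len(labels) > 2 else ""
    return (len(subdomain) > max_subdomain_len
            or (subdomain and shannon_entropy(subdomain) > max_entropy))

queries = [
    "www.example.com.",
    "a3f9c0d1b7e24458aa91cc07d5ffb1c2e8d4a6b9.data.exfil-domain.com.",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunnel(q) else "ok")
```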
Procedia PDF Downloads 75
6108 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines
Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu
Abstract:
The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of the combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized cooling-air flow rates is critical to extending the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of the combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have shown several disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, physical damage to instrumentation due to the harsh environment inside the combustor, and strong optical interference from combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact, small borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of the 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of combustors in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines. Keywords: borescope, engine, long-wave-infrared, sensor
Procedia PDF Downloads 135
6107 Predicting High-Risk Endometrioid Endometrial Carcinomas Using Protein Markers
Authors: Yuexin Liu, Gordon B. Mills, Russell R. Broaddus, John N. Weinstein
Abstract:
The lethality of endometrioid endometrial cancer (EEC) is primarily attributable to high-stage disease. However, there are no available biomarkers that predict EEC patient staging at the time of diagnosis. We aim to develop a predictive scheme to help in this regard. Using reverse-phase protein array expression profiles for 210 EEC cases from The Cancer Genome Atlas (TCGA), we constructed a Protein Scoring of EEC Staging (PSES) scheme for surgical stage prediction. We validated and evaluated its diagnostic potential in an independent cohort of 184 EEC cases obtained at MD Anderson Cancer Center (MDACC) using receiver operating characteristic (ROC) curve analyses. Kaplan-Meier survival analysis was used to examine the association of the PSES score with patient outcome, and Ingenuity pathway analysis was used to identify relevant signaling pathways. Two-sided statistical tests were used. PSES robustly distinguished high- from low-stage tumors in the TCGA cohort (area under the ROC curve [AUC] = 0.74; 95% confidence interval [CI], 0.68 to 0.82) and in the validation cohort (AUC = 0.67; 95% CI, 0.58 to 0.76). Even among grade 1 or 2 tumors, PSES was significantly higher in high- than in low-stage tumors in both the TCGA (P = 0.005) and MDACC (P = 0.006) cohorts. Patients with a positive PSES score had significantly shorter progression-free survival than those with a negative PSES in the TCGA (hazard ratio [HR], 2.033; 95% CI, 1.031 to 3.809; P = 0.04) and validation (HR, 3.306; 95% CI, 1.836 to 9.436; P = 0.0007) cohorts. The ErbB signaling pathway was the most significantly enriched in the PSES proteins and downregulated in high-stage tumors. PSES may provide clinically useful prediction of high-risk tumors and offer new insights into tumor biology in EEC. Keywords: endometrial carcinoma, protein, protein scoring of EEC staging (PSES), stage
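The evaluation recipe reported above, scoring a continuous marker by the area under the ROC curve for separating high- from low-stage tumors, can be illustrated with the hedged sketch below, where random numbers stand in for PSES values and the cohort sizes are arbitrary:

```python
# Hedged sketch: assess a continuous risk score (stand-in for PSES) by ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
stage_high = np.r_[np.zeros(150, dtype=int), np.ones(60, dtype=int)]  # 1 = high stage
score = np.r_[rng.normal(0.0, 1.0, 150), rng.normal(0.8, 1.0, 60)]    # stand-in PSES values

auc = roc_auc_score(stage_high, score)
print(f"AUC = {auc:.2f}")   # the study reports 0.74 (TCGA) and 0.67 (MDACC validation)
```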
Procedia PDF Downloads 220
6106 Subsidying Local Health Policy Programs as a Public Management Tool in the Polish Health Care System
Authors: T. Holecki, J. Wozniak-Holecka, P. Romaniuk
Abstract:
Due to the highly centralized model of financing health care in Poland, local self-government rarely undertook their own initiatives in the field of public health, particularly health promotion. However, since 2017 the possibility of applying for a subsidy to health policy programs has been allowed, with the additional resources to be retrieved from the National Health Fund, which is the dominant payer in the health system. The amount of subsidy depends on the number of inhabitants in a given unit and ranges about 40% of the total cost of the program. The aim of this paper is to assess the impact of newly implemented solutions in financing health policy on the management of public finances, as well as on the activity provided by local self-government in health promotion. An effort to estimate the amount of expenses that both local governments, and the National Health Fund, spent on local health policy programs while implementing the new solutions. The research method is the analysis of financial data obtained from the National Health Fund and from local government units, as well as reports published by the Agency for Health Technology Assessment and Pricing, which holds substantive control over the health policy programs, and releases permission for their implementation. The study was based on a comparative analysis of expenditures on the implementation of health programs in Poland in years 2010-2018. The presentation of the results includes the inclusion of average annual expenditures of local government units per 1 inhabitant, the total number of positively evaluated applications and the percentage share in total expenditures of local governments (16 voivodships areas). The most essential purpose is to determine whether the assumptions of the subsidy program are working correctly in practice, and what are the real effects of introducing legislative changes into local government levels in the context of public health tasks. The assumption of the study was that the use of a new motivation tool in the field of public management would result in multiplication of resources invested in the provision of health policy programs. Preliminary conclusions show that financial expenditures changed significantly after the introduction of public funding at the level of 40%, obtaining an increase in funding from own funds of local governments at the level of 80 to 90%.Keywords: health care system, health policy programs, local self-governments, public health management
Procedia PDF Downloads 156
6105 SIRT1 Gene Polymorphisms and Its Protein Level in Colorectal Cancer
Authors: Olfat Shaker, Miriam Wadie, Reham Ali, Ayman Yosry
Abstract:
Colorectal cancer (CRC) is a major cause of mortality and morbidity and accounts for over 9% of cancer incidence worldwide. Silent information regulator 2 homolog 1 (SIRT1) gene is located in the nucleus and exert its effects via modulation of histone and non-histone targets. They function in the cell via histone deacetylase (HDAC) and/or adenosine diphosphate ribosyl transferase (ADPRT) enzymatic activity. The aim of this work was to study the relationship between SIRT1 polymorphism and its protein level in colorectal cancer patients in comparison to control cases. This study includes 2 groups: thirty healthy subjects (control group) & one hundred CRC patients. All subjects were subjected to: SIRT-1 serum level was measured by ELISA and gene polymorphisms of rs12778366, rs375891 and rs3740051 were detected by real time PCR. For CRC patients clinical data were collected (size, site of tumor as well as its grading, obesity) CRC patients showed high significant increase in the mean level of serum SIRT-1 compared to control group (P<0.001). Mean serum level of SIRT-1 showed high significant increase in patients with tumor size ≥5 compared to the size < 5 cm (P<0.05). In CRC patients, percentage of T allele of rs12778366 was significantly lower than controls, CC genotype and C allele C of rs 375891 were significantly higher than control group. In CRC patients, the CC genotype of rs12778366, was 75% in rectosigmoid and 25% in cecum & ascending colon. According to tumor size, the percentage of CC genotype was 87.5% in tumor size ≥5 cm. Conclusion: serum level of SIRT-1 and T allele, C allele of rs12778366 and rs 375891 respectively can be used as diagnostic markers for CRC patients.Keywords: CRC, SIRT1, polymorphisms, ELISA
Procedia PDF Downloads 218
6104 Regional Flood Frequency Analysis in Narmada Basin: A Case Study
Authors: Ankit Shah, R. K. Shrivastava
Abstract:
Floods and droughts are two main hydrological phenomena that affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and across the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in demand for water due to population, industrial, and agricultural growth has shown that, although water is a renewable resource, it cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions and need to harness it, which can be done by constructing hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict flood magnitude and its impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximum use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the Index Flood Method, Natural Environment Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of the methods can be considered universal for every situation and location. The Narmada basin is located in Central India. It is drained by many tributaries, most of which are ungauged. Therefore it is very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used to determine regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada Basin are used to determine the regional flood relationships. Homogeneity of the considered sites is determined using the Index Flood Method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area. Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency
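A hedged sketch of the comparison described above is given below; the catchment descriptors, log-transform, and synthetic data are assumptions for illustration and do not represent the Narmada Basin records:

```python
# Hedged sketch: relate annual peak flood at gauged sites to catchment descriptors
# using (i) multiple linear regression on log-transformed variables and (ii) a
# small ANN, then compare hold-out performance.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n_sites = 200                                   # more sites than the study's 20, to keep the demo stable
area = rng.lognormal(6, 1, n_sites)             # catchment area [km^2]
rain = rng.normal(1100, 250, n_sites)           # mean annual rainfall [mm]
slope = rng.lognormal(-3, 0.5, n_sites)         # mean basin slope [-]
q_peak = 0.8 * area**0.75 * (rain / 1000)**1.2 * np.exp(rng.normal(0, 0.3, n_sites))

X = np.column_stack([np.log(area), np.log(rain), np.log(slope)])
y = np.log(q_peak)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("multiple regression R^2:", round(mlr.score(X_te, y_te), 3))
print("ANN R^2:                ", round(ann.score(X_te, y_te), 3))
```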
Procedia PDF Downloads 419
6103 Using Technology to Deliver and Scale Early Childhood Development Services in Resource Constrained Environments: Case Studies from South Africa
Authors: Sonja Giese, Tess N. Peacock
Abstract:
South African based Innovation Edge is experimenting with technology to drive positive behavior change, enable data-driven decision making, and scale quality early years services. This paper uses five case studies to illustrate how technology can be used in resource-constrained environments to first, encourage parenting practices that build early language development (using a stage-based mobile messaging pilot, ChildConnect), secondly, to improve the quality of ECD programs (using a mobile application, CareUp), thirdly, how to affordably scale services for the early detection of visual and hearing impairments (using a mobile tool, HearX), fourthly, how to build a transparent and accountable system for the registration and funding of ECD (using a blockchain enabled platform, Amply), and finally enable rapid data collection and feedback to facilitate quality enhancement of programs at scale (the Early Learning Outcomes Measure). ChildConnect and CareUp were both developed using a design based iterative research approach. The usage and uptake of ChildConnect and CareUp was evaluated with qualitative and quantitative methods. Actual child outcomes were not measured in the initial pilots. Although parents who used and engaged on either platform felt more supported and informed, parent engagement and usage remains a challenge. This is contrast to ECD practitioners whose usage and knowledge with CareUp showed both sustained engagement and knowledge improvement. HearX is an easy-to-use tool to identify hearing loss and visual impairment. The tool was tested with 10000 children in an informal settlement. The feasibility of cost-effectively decentralising screening services was demonstrated. Practical and financial barriers remain with respect to parental consent and for successful referrals. Amply uses mobile and blockchain technology to increase impact and accountability of public services. In the pilot project, Amply is being used to replace an existing paper-based system to register children for a government-funded pre-school subsidy in South Africa. Early Learning Outcomes Measure defines what it means for a child to be developmentally ‘on track’ at aged 50-69 months. ELOM administration is enabled via a tablet which allows for easy and accurate data collection, transfer, analysis, and feedback. ELOM is being used extensively to drive quality enhancement of ECD programs across multiple modalities. The nature of ECD services in South Africa is that they are in large part provided by disconnected private individuals or Non-Governmental Organizations (in contrast to basic education which is publicly provided by the government). It is a disparate sector which means that scaling successful interventions is that much harder. All five interventions show the potential of technology to support and enhance a range of ECD services, but pathways to scale are still being tested.Keywords: assessment, behavior change, communication, data, disabilities, mobile, scale, technology, quality
Procedia PDF Downloads 133
6102 Isolation and Characterization of Cotton Infecting Begomoviruses in Alternate Hosts from Cotton Growing Regions of Pakistan
Authors: M. Irfan Fareed, Muhammad Tahir, Alvina Gul Kazi
Abstract:
Castor bean (Ricinus communis; family Euphorbiaceae) is cultivated for the production of oil and as an ornamental plant throughout tropical regions. Leaf samples from castor bean plants with leaf curl and vein thickening were collected from areas around Okara (Pakistan) in 2011. PCR amplification using diagnostic primers showed the presence of a begomovirus, and subsequently the specific pair (BurNF 5’- CCATGGTTGTGGCAGTTGATTGACAGATAC-3’, BurNR 5’- CCATGGATTCACGCACAGGGGAACCC-3’) was used to amplify and clone the whole genome of the virus. The complete nucleotide sequence was determined to be 2,759 nt (accession No. HE985227). Alignments showed the highest level of nucleotide sequence identity (98.8%) with Cotton leaf curl Burewala virus (CLCuBuV; accession No. JF416947). The virus in castor bean lacks an intact C2 gene, as is typical of CLCuBuV in cotton. An amplification product of ca. 1.4 kb was obtained in PCR with primers for betasatellites, and the complete nucleotide sequence of a clone was determined to be 1,373 nt (HE985228). The sequence showed 96.3% nucleotide sequence identity to the recombinant Cotton leaf curl Multan betasatellite (CLCuMB; JF502389). This is the first report of CLCuBuV and its betasatellite infecting castor bean, showing this plant species to be an alternate host of the virus. Many alternate hosts have already been reported, such as tobacco, tomato, hibiscus, okra, ageratum, Digera arvensis, papaya, and now Ricinus communis. It is therefore suggested that these alternate hosts should not be grown near cotton-growing regions. Keywords: Ricinus communis, begomovirus, betasatellite, agriculture
Procedia PDF Downloads 533
6101 Translation as a Foreign Language Teaching Tool: Results of an Experiment with University Level Students in Spain
Authors: Nune Ayvazyan
Abstract:
Since the proclamation of monolingual foreign-language learning methods (the Berlitz Method in the early 20ᵗʰ century and the like), the dilemma has been to allow or not to allow learners’ mother tongue in the foreign-language learning process. The reason for not allowing learners’ mother tongue is reported to create a situation of immersion where students will only use the target language. It could be argued that this artificial monolingual situation is defective, mainly because there are very few real monolingual situations in the society. This is mainly due to the fact that societies are nowadays increasingly multilingual as plurilingual speakers are the norm rather than an exception. More recently, the use of learners’ mother tongue and translation has been put under the spotlight as valid foreign-language teaching tools. The logic dictates that if learners were permitted to use their mother tongue in the foreign-language learning process, that would not only be natural, but also would give them additional means of participation in class, which could eventually lead to learning. For example, when learners’ metalinguistic skills are poor in the target language, a question they might have could be asked in their mother tongue. Otherwise, that question might be left unasked. Attempts at empirically testing the role of translation as a didactic tool in foreign-language teaching are still very scant. In order to fill this void, this study looks into the interaction patterns between students in two kinds of English-learning classes: one with translation and the other in English only (immersion). The experiment was carried out with 61 students enrolled in a second-year university subject in English grammar in Spain. All the students underwent the two treatments, classes with translation and in English only, in order to see how they interacted under the different conditions. The analysis centered on four categories of interaction: teacher talk, teacher-initiated student interaction, student-initiated student-to-teacher interaction, and student-to-student interaction. Also, pre-experiment and post-experiment questionnaires and individual interviews gathered information about the students’ attitudes to translation. The findings show that translation elicited more student-initiated interaction than did the English-only classes, while the difference in teacher-initiated interactional turns was not statistically significant. Also, student-initiated participation was higher in comprehension-based activities (into L1) as opposed to production-based activities (into L2). As evidenced by the questionnaires, the students’ attitudes to translation were initially positive and mainly did not vary as a result of the experiment.Keywords: foreign language, learning, mother tongue, translation
Procedia PDF Downloads 162
6100 Bacteremia Caused by Nontoxigenic Vibrio cholerae in an Immunocompromised Patient in Istanbul, Turkey
Authors: Fatma Koksal Çakirlar, Si̇nem Ozdemir, Selcan Akyol, Revazi̇ye Gulesen, Murat Gunaydin, Nevri̇ye Gonullu, Belkis Levent, Nuri̇ Kiraz
Abstract:
Vibrio cholerae O1 and O139 are the causative agents of epidemic or pandemic cholera. V. cholerae O1 is generally accepted as a non-invasive enterotoxigenic organism causing gastroenteritis of various severities. Non-O1 V. cholerae can cause small outbreaks of diarrhea due to consumption of contaminated food and water. In particular, patients with achlorhydria are at risk of vibrio infections. There are numerous case reports of bacteremia caused by vibrios in patients with predisposing conditions such as cirrhosis, nephrotic syndrome, diabetes, hematologic malignancy, gastrectomy, and AIDS. We describe in this study the first case of nontoxigenic, non-O1/non-O139 V. cholerae isolated from the blood culture of a 77-year-old female patient with hypertension, diabetes, coronary artery disease, gout, and a history of migrated breast cancer about 9 years earlier. The patient, admitted to our emergency clinic with complaints of shortness of breath, fever, and malaise, was evaluated. She had no diarrhea or abdominal symptoms. There was no growth in her urine culture, but the blood culture (BACTEC 9120 system, Becton Dickinson, USA) was positive for non-O1/non-O139 V. cholerae, which was identified by conventional methods and the Phoenix automated system (BD Diagnostic Systems, Sparks, MD). The isolate does not secrete cholera toxin. The agglutination test was negative with polyvalent O1 antisera and O139 antiserum. Ceftriaxone was administered empirically, and the patient was discharged after improvement in her general condition. In this study, we report bacteremia caused by non-O1/non-O139 V. cholerae, which is rare worldwide and the first such case in Turkey. Keywords: bacteremia, blood culture, immunocompromised patient, Non-O1 vibrio cholerae
Procedia PDF Downloads 219
6099 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² 0.772, p-value < 0.001). In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
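The core statistical comparison reported above, an ordinary least squares fit between qPCR-based and microscope-based abundances summarized by adjusted R² (0.772 in the study), can be sketched as follows with synthetic data; the log-transform and sample sizes are assumptions:

```python
# Hedged sketch: OLS fit of log qPCR abundance against log microscope abundance,
# with R^2 and adjusted R^2 computed by hand.
import numpy as np

rng = np.random.default_rng(3)
micro = rng.lognormal(mean=8, sigma=1.2, size=120)         # cells/mL (microscopy)
qpcr = micro * rng.lognormal(mean=0, sigma=0.4, size=120)  # gene copies/mL (qPCR)

x, y = np.log10(micro), np.log10(qpcr)
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
n, p = len(x), 1
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"slope={slope:.2f}, R^2={r2:.3f}, adjusted R^2={adj_r2:.3f}")
```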
Procedia PDF Downloads 101
6098 Integrating Nursing Informatics to Improve Patient-Centered Care: A Project to Reduce Patient Waiting Time at the Blood Pressure Counter
Authors: Pi-Chi Wu, Tsui-Ping Chu, Hsiu-Hung Wang
Abstract:
Background: The ability to provide immediate medical service in outpatient departments is one of the keys to patient satisfaction. Objectives: This project used electronic equipment to integrate nursing care information to patient care at a blood pressure diagnostic counter. Through process reengineering, the average patient waiting time decreased from 35 minutes to 5 minutes, while service satisfaction increased from a score of 2.7 to 4.6. Methods: Data was collected from a local hospital in Southern Taiwan from a daily average of 2,200 patients in the outpatient department. Previous waiting times were affected by (1) space limitations, (2) the need to help guide patient mobility, (3) the need for nurses to appease irate patients and give instructions, (4), the need for patients to replace lost counter tickets, (5) the need to re-enter information, (6) the replacement of missing patient information. An ad hoc group was established to enhance patient satisfaction and shorten waiting times for patients to see a doctor. A four step strategy consisting of (1) counter relocation, (2) queue reorganization, (3) electronic information integration, (4) process reengineering was implemented. Results: Implementation of the developed strategy decreased patient waiting time from 35 minutes to an average of 5 minutes, and increased patient satisfaction scores from 2.7 to 6.4. Conclusion: Through the integration of information technology and process transformation, waiting times were drastically reduced, patient satisfaction increased, and nurses were allowed more time to engage in more cost-effective services. This strategy was simultaneously enacted in separate hospitals throughout Taiwan.Keywords: process reengineering, electronic information integration, patient satisfaction, patient waiting time
Procedia PDF Downloads 378
6097 Learning Language through Story: Development of Storytelling Website Project for Amazighe Language Learning
Authors: Siham Boulaknadel
Abstract:
Every culture has its share of a rich history of storytelling in oral, visual, and textual form. The Amazigh language, as many languages, has its own which has entertained and informed across centuries and cultures, and its instructional potential continues to serve teachers. According to many researchers, listening to stories draws attention to the sounds of language and helps children develop sensitivity to the way language works. Stories including repetitive phrases, unique words, and enticing description encourage students to join in actively to repeat, chant, sing, or even retell the story. This kind of practice is important to language learners’ oral language development, which is believed to correlate completely with student’s academic success. Today, with the advent of multimedia, digital storytelling for instance can be a practical and powerful learning tool. It has the potential in transforming traditional learning into a world of unlimited imaginary environment. This paper reports on a research project on development of multimedia Storytelling Website using traditional Amazigh oral narratives called “tell me a story”. It is a didactic tool created for the learning of good moral values in an interactive multimedia environment combining on-screen text, graphics and audio in an enticing environment and enabling the positive values of stories to be projected. This Website developed in this study is based on various pedagogical approaches and learning theories deemed suitable for children age 8 to 9 year-old. The design and development of Website was based on a well-researched conceptual framework enabling users to: (1) re-play and share the stories in schools or at home, and (2) access the Website anytime and anywhere. Furthermore, the system stores the students work and activities over the system, allowing parents or teachers to monitor students’ works, and provide online feedback. The Website contains following main feature modules: Storytelling incorporates a variety of media such as audio, text and graphics in presenting the stories. It introduces the children to various kinds of traditional Amazigh oral narratives. The focus of this module is to project the positive values and images of stories using digital storytelling technique. Besides development good moral sense in children using projected positive images and moral values, it also allows children to practice their comprehending and listening skills. Reading module is developed based on multimedia material approach which offers the potential for addressing the challenges of reading instruction. This module is able to stimulate children and develop reading practice indirectly due to the tutoring strategies of scaffolding, self-explanation and hyperlinks offered in this module. Word Enhancement assists the children in understanding the story and appreciating the good moral values more efficiently. The difficult words or vocabularies are attached to present the explanation, which makes the children understand the vocabulary better. In conclusion, we believe that the interactive multimedia storytelling reveals an interesting and exciting tool for learning Amazigh. We plan to address some learning issues, in particularly the uses of activities to test and evaluate the children on their overall understanding of story and words presented in the learning modules.Keywords: Amazigh language, e-learning, storytelling, language teaching
Procedia PDF Downloads 404
6096 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand the success of reduced models - including the 2D shallow water model and reduced 1D models - in predicting water heights, flow velocity, and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside Oregon experiment. Keywords: computational fluid dynamics, extreme events, loading, tsunami
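For reference, the "2D shallow water model" used as the reduced benchmark is commonly written in the conservative depth-averaged form below; this is the standard textbook formulation, not necessarily the exact variant implemented by the authors:

```latex
\begin{aligned}
&\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
&\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right) + \partial_y (huv)
  = -\,g h\,\partial_x z_b - \tau_{bx}/\rho,\\
&\partial_t (hv) + \partial_x (huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2} g h^2\right)
  = -\,g h\,\partial_y z_b - \tau_{by}/\rho,
\end{aligned}
```

where h is the water depth, (u, v) are the depth-averaged velocities, z_b is the bed elevation, (tau_bx, tau_by) the bed shear stresses, and rho the water density. The abstract's point is that models of this depth-averaged type tend to underestimate drag forces after the initial surge.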
Procedia PDF Downloads 115
6095 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study
Authors: Ajit Brindhaban
Abstract:
Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric Computed Tomography (CT) images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from average CT number and its standard deviation measured in regions of interest created in the lungs, bone, and soft tissues regions of the phantom. Paired t-test and the one-way ANOVA were used to compare the CNR from FBP images with IR images, at p = 0.05 level. The lowest strength value of IR that produced the highest CNR was identified. In the second stage, scans of the head was performed with decreased mA(s) values relative to the increase in CNR compared to the standard FBP protocol. CNR values were compared in this stage using Paired t-test at p = 0.05 level. Results: Images reconstructed using IR technique had higher CNR values (p < 0.01.) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength of up to 3, in the head and chest images. Increases beyond this strength were insignificant. In abdomen images, CNR continued to increase up to strength 5. The results also indicated that, IR techniques improve CNR by a up to factor of 1.5. Based on the CNR values at strength 3 of IR images and CNR values of FBP images, a reduction in mA(s) of about 20% was identified. The images of the head acquired at 20% reduced mA(s) and reconstructed using IR at strength 3, had similar CNR as FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that similar CNR can be achieved even when the mA(s) is reduced by about 20% if IR technique with strength of 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as FBP technique.Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging
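The abstract does not state the exact CNR formula; one common definition, contrasting a tissue ROI with a background ROI and normalizing by the background noise, is sketched below on a synthetic image. The ROI positions and image values are assumptions for illustration only:

```python
# Hedged sketch: contrast-to-noise ratio from two regions of interest.
import numpy as np

rng = np.random.default_rng(5)
noise_sigma = 20.0
image = rng.normal(40.0, noise_sigma, size=(256, 256))   # soft-tissue background (HU)
image[100:140, 100:140] += 300.0                          # "bone-like" insert

roi = image[100:140, 100:140]        # region of interest
background = image[10:50, 10:50]     # background region

cnr = abs(roi.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.1f}")
# Iterative reconstruction lowers the noise (std) at similar contrast, so this
# ratio rises, which is the effect the phantom study measures.
```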
Procedia PDF Downloads 148
6094 Internet of Things Applications on Supply Chain Management
Authors: Beatriz Cortés, Andrés Boza, David Pérez, Llanos Cuenca
Abstract:
The Internet of Things (IoT) is being applied in industries for different purposes. Sensing Enterprise (SE) is an attribute of an enterprise or a network that allows it to react to business stimuli originating on the internet. These fields have recently come into focus in enterprises, and there is some evidence of their use and implications in supply chain management, making this an interesting aspect to work on. This paper presents a review and proposals of IoT applications in supply chain management. Keywords: industrial, internet of things, production systems, sensing enterprises, sensor, supply chain management
Procedia PDF Downloads 423
6093 Energy Efficiency Index Applied to Reactive Systems
Abstract:
This paper focuses on the development of an energy efficiency index to be applied to reactive systems, based on the First and Second Laws of Thermodynamics and giving particular consideration to the concept of maximum entropy. Among the requirements of such an energy efficiency index, practical feasibility is essential. To illustrate the performance of the proposed index, it was used as the decisive evaluation factor in the optimization of an industrial reactor. The results allow the conclusion that the energy efficiency index applied to the reactive system is consistent, because it extracts the information expected of an efficient indicator, and that it is useful as an analytical tool besides being feasible from a practical standpoint. Furthermore, it has proved to be much simpler to use than tools based on traditional methodologies. Keywords: energy, efficiency, entropy, reactive
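The abstract does not give the index's formula. Purely as a point of reference, a common second-law (exergy-based) construction for reacting systems, combining an exergy balance with the Gouy-Stodola theorem, is:

```latex
\eta_{II} \;=\; \frac{\dot{E}x_{\text{products}}}{\dot{E}x_{\text{inputs}}}
\;=\; 1 - \frac{\dot{E}x_{\text{destroyed}}}{\dot{E}x_{\text{inputs}}},
\qquad
\dot{E}x_{\text{destroyed}} \;=\; T_0\,\dot{S}_{\text{gen}},
```

where T_0 is the dead-state temperature and S_gen the entropy generation rate. The authors' maximum-entropy-based index may differ in form; this expression is shown only to illustrate the kind of First/Second-Law indicator the abstract describes.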
Procedia PDF Downloads 412
6092 Managing Pseudoangiomatous Stromal Hyperplasia Appropriately and Safely: A Retrospective Case Series Review
Authors: C. M. Williams, R. English, P. King, I. M. Brown
Abstract:
Introduction: Pseudoangiomatous Stromal Hyperplasia (PASH) is a benign fibrous proliferation of breast stroma affecting predominantly premenopausal women with no significant increased risk of breast cancer. Informal recommendations for management have continued to evolve over recent years from surgical excision to observation, although there are no specific national guidelines. This study assesses the safety of a non-surgical approach to PASH management by review of cases at a single centre. Methods: Retrospective case series review (January 2011 – August 2016) was conducted on consecutive PASH cases. Diagnostic classification (clinical, radiological and histological), management outcomes, and breast cancer incidence were recorded. Results: 43 patients were followed up for median of 25 months (3-64) with 75% symptomatic at presentation. 12% of cases (n=5) had a radiological score (BIRADS MMG or US) ≥ 4 of which 3 were confirmed malignant. One further malignancy was detected and proven radiologically occult and contralateral. No patients were diagnosed with a malignancy during follow-up. Treatment evolved from 67% surgical in 2011 to 33% in 2016. Conclusions: The management of PASH has transitioned in line with other published experience. The preliminary findings suggest this appears safe with no evidence of missed malignancies; however, longer follow up is required to confirm long-term safety. Recommendations: PASH with suspicious radiological findings ( ≥ U4/R4) warrants multidisciplinary discussion for excision. In the absence of histological or radiological suspicion of malignancy, PASH can be safely managed without surgery.Keywords: benign breast disease, conservative management, malignancy, pseudoangiomatous stromal hyperplasia, surgical excision
Procedia PDF Downloads 132
6091 Opioid Administration on Patients Hospitalized in the Emergency Department
Authors: Mani Mofidi, Neda Valizadeh, Ali Hashemaghaee, Mona Hashemaghaee, Soudabeh Shafiee Ardestani
Abstract:
Background: Acute pain and its management remain the most common complaints leading to emergency service admission. Diagnostic and therapeutic procedures add to patients’ pain. Relieving pain improves the patient’s experience and the patient-physician relationship. Aim: The aim of this study was to evaluate the outcomes and side effects of opioid administration in emergency patients. Material and Methods: Patients admitted to the ward II emergency service of Imam Khomeini hospital who received one of the opioids morphine, pethidine, methadone, or fentanyl as an analgesic were evaluated. Their vital signs and general condition were examined before and after drug injection. In addition, each patient’s pain was recorded as a numerical rating score (NRS) before and after analgesic administration. Results: 268 patients were studied, 34 of whom were addicted to opioid drugs. Morphine had the highest rate of prescription (86.2%), followed by pethidine (8.5%), methadone (3.3%), and fentanyl (1.68%). While the initial NRS did not differ significantly between addicted and non-addicted patients, the decline in NRS and the score after drug injection were significantly lower in addicted patients. All patients had slightly but statistically significantly lower respiratory rate, heart rate, blood pressure, and O2 saturation after injection. There was no significant difference between the different opioids in terms of outcomes or side effects. Conclusion: Pain management should always be on physicians’ minds during emergency admissions. It should not be assumed that an addicted patient complaining of pain is malingering in order to receive drugs. Drug titration and close monitoring must be part of the curriculum to prevent any hazardous side effects. Keywords: numerical rating score, opioid, pain, emergency department
Procedia PDF Downloads 426
6090 Development of Verification System of Workspace Clashes Between Construction Activities
Authors: Hyeon-Seung Kim, Sang-Mi Park, Min-Seo Kim, Jong-Myeung Shin, Leen-Seok Kang
Abstract:
Recently, the use of Building Information Modeling (BIM) in public construction works has become mandatory in some countries, and it is anticipated that BIM will be applied to actual civil engineering projects. However, BIM systems are still focused on architectural projects and the design phase. Because a civil engineering project is a linear-type project focused on the construction phase, in contrast to an architectural project, its activities are difficult to visualize with 3D simulation. This study suggests a method and a prototype system to resolve workspace conflicts among construction activities using a BIM simulation tool. Keywords: BIM, workspace, confliction, visualization
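The abstract does not detail the clash-detection logic itself; a minimal sketch of one common approach - testing whether the bounding boxes of two activities' workspaces overlap during overlapping time windows - is shown below. The data structure and field names are hypothetical, introduced only for illustration:

```python
# Hedged sketch: flag a workspace clash when two activities overlap both in time
# and in space (axis-aligned bounding boxes).
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Activity:
    name: str
    start: int        # start day
    finish: int       # finish day
    box_min: tuple    # (x, y, z) lower corner of workspace
    box_max: tuple    # (x, y, z) upper corner of workspace

def boxes_overlap(a: Activity, b: Activity) -> bool:
    return all(a.box_min[i] <= b.box_max[i] and b.box_min[i] <= a.box_max[i]
               for i in range(3))

def times_overlap(a: Activity, b: Activity) -> bool:
    return a.start <= b.finish and b.start <= a.finish

activities = [
    Activity("excavation",   1, 10, (0, 0, -3), (20, 5, 0)),
    Activity("pile driving", 8, 15, (10, 0, -3), (25, 5, 0)),
    Activity("paving",      20, 25, (0, 0, 0), (20, 5, 0.3)),
]

for a, b in combinations(activities, 2):
    if times_overlap(a, b) and boxes_overlap(a, b):
        print("workspace clash:", a.name, "/", b.name)
```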
Procedia PDF Downloads 408
6089 Lessons Learnt from a Patient with Pseudohyperkalaemia Secondary to Polycythaemia Rubra Vera in a Neuro-ICU Patient Resulting in Dangerous Interventions: Lessons Learnt on Patient Safety Improvement
Authors: Dinoo Kirthinanda, Sujani Wijeratne
Abstract:
Pseudohyperkalaemia is a common benign in vitro phenomenon caused by the release of potassium ions (K+) from cells during specimen processing. Analysis of haemolysed blood samples for predominantly intracellular electrolytes may lead to re-investigation and potentially harmful interventions. We report a case of a 52-year male with myeloproliferative disease manifested as Polycythaemia Rubra Vera, Hypertension and hypertensive nephropathy with stage 3 chronic kidney disease admitted to Neuro-intensive care unit (NICU) with an intra-cerebral haemorrhage secondary to hypertensive bleed. His initial blood investigations showed hyperkalemia with serum K+ 6.2 mmol/L yet the bedside arterial blood gas analysis yielded K+ of 4.6 mmol/L. The patient was however given hyperkalemia regime twice based on venous electrolyte analysis. The discrepancy between the bedside electrolyte analysis using arterial blood and venous blood prompted further evaluation. The 12 lead Electrocardiogram showed U waves and sinus bradycardia corresponding to the serum K+ of 2.8 mmol/L on arterial blood gas analysis. Immediate K+ replacement ensured the patient did not develop life-threatening cardiac complications. Pseudohyperkalaemia may pose diagnostic challenges in the absence of detectable haemolysis and should be suspected in susceptible patients with normal Electrocardiogram and Glomerular Filtration Rate to avoid potentially life-threatening interventions. When in doubt, rapid analysis of arterial blood gas may be useful for accurate quantification of potassium.Keywords: patient safety, pseudohyperkalaemia, haemolysis, myeloproliferative disorder
Procedia PDF Downloads 152
6088 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks
Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian
Abstract:
Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of the input parameters. The results of this research show that the model is in good agreement with the experimental data. Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation
Procedia PDF Downloads 450
6087 A Typology System to Diagnose and Evaluate Environmental Affordances
Authors: Falntina Ahmad Alata, Natheer Abu Obeid
Abstract:
This paper is a research report of an experimental study on a proposed typology system to diagnose and evaluate the affordances of varying architectural environments. The study focused on architectural environments which have been developed with a shift in their use of adaptive reuse. The novelty in the newly developed environments was tested in terms of human responsiveness and interaction using a variety of selected cases. The study is a follow-up on previous research by the same authors, in which a typology of 16 categories of environmental affordances was developed and introduced. The current study introduced other new categories, which together with the previous ones establish what could be considered a basic language of affordance typology. The experiment was conducted on ten architectural environments while adopting two processes: 1. Diagnostic process, in which the environments were interpreted in terms of their affordances using the previously developed affordance typology, 2. The evaluation process, in which the diagnosed environments were evaluated using measures of emotional experience and architectural evaluation criteria of beauty, economy and function. The experimental study demonstrated that the typology system was capable of diagnosing different environments in terms of their affordances. It also introduced new categories of human interaction: “multiple affordances,” “conflict affordances,” and “mix affordances.” The different possible combinations and mixtures of categories demonstrated to be capable of producing huge numbers of other newly developed categories. This research is an attempt to draw a roadmap for designers to diagnose and evaluate the affordances within different architectural environments. It is hoped to provide future guidance for developing the best possible adaptive reuse according to the best affordance category within their proposed designs.Keywords: affordance theory, affordance categories, architectural environments, architectural evaluation criteria, adaptive reuse environment, emotional experience, shift in use environment
Procedia PDF Downloads 193