Search results for: abstract word
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1268

158 Language Skills in the Emergent Literacy of Spanish-Speaking Children with Autism Spectrum Disorders

Authors: Adriana Salgado, Sandra Castaneda, Ivan Perez

Abstract:

Learning to read and write is a complex process involving several cognitive skills as well as contextual and cultural environments. The basis of this development is linguistic skills, such as the ability to name and understand vocabulary, retell a story, phonological awareness, and letter knowledge, among others. In children with autism spectrum disorder (ASD), one of the main concerns is related to language disorders. Nevertheless, most children with ASD are able to decode written information but have difficulties in reading comprehension. Research on these processes in the Spanish-speaking population is limited. However, the increasing prevalence of this diagnosis (1 in 115 children) in Mexico has implications at different levels. Educational research is an important area of interest in children with ASD, particularly emergent literacy. Reading and writing expand the possibilities of access to academic, cultural, and social information. Taking this into account, the objective of this research was to identify the relationship between language skills, alphabet knowledge, phonological awareness, and early reading and writing in Spanish-speaking children with ASD. The method used for this research was based on tasks that were selected, adapted, and in some cases designed to measure initial reading and writing, as well as language skills (naming, receptive vocabulary, and narrative skills), phonological awareness (similar phonological word pairs, beginning sound awareness, and spelling), and letter knowledge, in a sample of 45 children (38 boys and 7 girls) with a prior diagnosis of ASD. Descriptive analyses, as well as bivariate correlations, cluster analysis, and canonical correspondence, were obtained from the data. Results showed that variability was large; however, it was possible to characterize the sample into low, medium, and high score groups regarding children's performance.
The low score group (46.7% of the sample) had a null or deficient performance in language skills and phonological awareness; some could identify up to five letters of the alphabet and showed no early reading skills, although they could scribble. The middle score group was characterized by highly variable performance across tasks, with better language skills in receptive and naming vocabulary, and some narrative, letter knowledge, and phonological awareness (beginning sound awareness) skills. The high score group (24.4% of the sample) had the best performance in language skills relative to the rest of the sample, as well as in the other measured skills. Finally, scores were canonically correlated between naming, receptive vocabulary, narrative, phonological awareness, letter knowledge, and initial learning of reading and writing skills for the high score group, and between letter knowledge, naming, and receptive vocabulary for the low score group, which is consistent with previous research in typical children and children with ASD. In conclusion, the obtained data are consistent with previous studies. Despite large variability, it was possible to identify performance profiles and relations based on linguistic, phonological awareness, and letter knowledge skills. These skills were predictor variables of the initial development of reading and writing. This has implications for the future development of programs and strategies that may benefit the acquisition of reading and writing in children with ASD.
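The bivariate correlations reported above can be illustrated with a short sketch. This is not the authors' analysis code: the per-child scores below are hypothetical, and only the Pearson correlation step, one of the reported descriptive analyses, is shown.

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson product-moment correlation between two score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: letters identified vs. words read by each child
letters = [0, 2, 5, 10, 15, 20, 26]
words = [0, 0, 1, 4, 6, 10, 14]
r = pearson_r(letters, words)
print(f"r = {r:.3f}")  # strong positive correlation on these toy data
```

A correlation close to 1 on data like this is what motivates treating letter knowledge as a predictor of early reading.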

Keywords: autism, autism spectrum disorders, early literacy, emergent literacy

Procedia PDF Downloads 118
157 New Knowledge Co-Creation in Mobile Learning: A Classroom Action Research with Multiple Case Studies Using Mobile Instant Messaging

Authors: Genevieve Lim, Arthur Shelley, Dongcheol Heo

Abstract:

Mobile technologies can enhance the learning process as they enable social engagement around concepts beyond the classroom and the curriculum. Early results in this ongoing research are showing that when learning interventions are designed specifically to generate new insights, mobile devices support regulated learning and encourage learners to collaborate, socialize, and co-create new knowledge. As students navigate across space and time boundaries, the fundamental social nature of learning transforms into mobile computer supported collaborative learning (mCSCL). The metacognitive interaction in mCSCL via mobile applications reflects the regulation of learning among the students. These metacognitive experiences, whether self-, co-, or shared-regulated, are significant to the learning outcomes. Despite some insightful empirical studies, there has not yet been significant research that investigates the actual practice and processes of new knowledge co-creation. This leads to the question of whether mobile learning merely provides a new channel to leverage learning, or whether mobile interaction creates new types of learning experiences, and how these experiences co-create new knowledge. The purpose of this research is to explore these questions and seek evidence to support one or the other. This paper addresses these questions from the students’ perspective to understand how students interact when constructing knowledge in mCSCL and how students’ self-regulated learning (SRL) strategies support the co-creation of new knowledge in mCSCL. A pilot study has been conducted among international undergraduates to understand students’ perspective of mobile learning and concurrently to develop a definition in an appropriate context. Using classroom action research (CAR) with multiple case studies, this study is being carried out in a private university in Thailand to narrow the research gaps in mCSCL and SRL.
The findings will allow teachers to see the importance of social interaction for meaningful student engagement, to envisage learning outcomes from a knowledge management perspective, and to consider what role mobile devices can play in these. The findings will provide important indicators for academics to rethink what is to be learned and how it should be learned. Ultimately, the study will shed new light on the co-creation of new knowledge in a socially interactive learning environment and challenge teachers to embrace twenty-first-century learning with mobile technologies to deepen and extend learning opportunities.

Keywords: mobile computer supported collaborative learning, mobile instant messaging, mobile learning, new knowledge co-creation, self-regulated learning

Procedia PDF Downloads 212
156 A Versatile Standing Cum Sitting Device for Rehabilitation and Standing Aid for Paraplegic Patients

Authors: Sasibhushan Yengala, Nelson Muthu, Subramani Kanagaraj

Abstract:

This abstract reports on the design of a modular and affordable standing cum sitting device meeting the requirements of paraplegic patients of different physiques. Paraplegic patients need the assistance of an external arrangement for the lower limbs and trunk to help them adopt the correct posture while standing against gravity. This support can come from a tilt table or a standing frame, which the patient can use to stay in a vertical posture. Standing frames are devices designed to support a person in a weight-bearing posture. Commonly, these devices support and lift the end-user when shifting from a sitting position to a standing position. The merits of standing for a paraplegic patient with a spinal injury are numerous. Even when there is limited control of the muscles that ordinarily support the user of the standing frame in a vertical position, the standing stance improves blood pressure, increases bone density, improves resilience and range of motion, and improves the user's feeling of well-being by letting the patient stand. One limitation of standing frames is that these devices are typically single-function and cannot be used for different purposes. Therefore, users are often compelled to purchase more than one of these devices, each being purposefully built for a specific activity. Another frequent concern with standing frames is manoeuvrability; it is crucial to provide a convenient adjustment range for all users. Thus, there is a need for a standing frame with multiple uses that can be economical for a larger population. There is also a need to provide additional adjustment means in a standing frame to lessen the shear and to accommodate a broad range of users. The proposed Versatile Standing cum Sitting Device (VSD) is designed to change from a standing to a comfortable sitting position using a series of mechanisms. First, a locking mechanism is provided to lock the VSD in a standing stance.
Second, a dampening mechanism is provided to ensure that the VSD shifts from a standing to a sitting position gradually when the locking mechanism is disengaged. An adjustment option is offered for the height of the headrest via lock knobs. This device can be used in clinics for rehabilitation purposes irrespective of the patient's anthropometric data due to its modular adjustments. It can facilitate the patient's daily life routine while in therapy, giving the patient the comfort of sitting when tired. The device also makes rehabilitation more accessible to the general population.

Keywords: paraplegic, rehabilitation, spinal cord injury, standing frame

Procedia PDF Downloads 187
155 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning

Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu

Abstract:

Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read in the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves at the same time the efficiency and the efficacy of healthcare. Software as a medical device is stand-alone software intended to be used for one or more specific medical purposes: the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or providing information from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers.
Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.

Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems

Procedia PDF Downloads 71
154 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession. Teaching science is as challenging as learning the subject, if not more so; consider, for instance, the teaching of chemistry. From the introductory concepts of subatomic particles to atoms of elements and their symbols, and further to presenting the chemical equation and so forth, it is a challenge on both sides of the teaching and learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of ‘working memory’, with its visuo-spatial sketchpad, central executive, and phonological loop that transforms short-term memory into long-term memory, actually supports the psychological theory of learning styles, i.e. visual-auditory-kinesthetic. A closer examination of David Kolb’s learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization: thinking about, analyzing, or systematically planning, rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C + O = M (C: instructional condition; O: instructional outcome; M: instructional method).
The desired outcome and conditions are independent variables, whereas the instructional method is dependent and hence can be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage and build confidence and hope among learners, and can go a long way toward replacing, with a human touch, the anxiety and hopelessness that a student experiences while learning science. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers. The responses received have demonstrated the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 328
153 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can hopefully identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), additionally providing verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute force approach. In addition to the CelebA training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
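As a minimal illustration of one of the two evaluation metrics, the sketch below computes PSNR between two tiny grayscale images. The pixel values are invented for illustration, and SSIM, which also weighs luminance, contrast, and structure over local windows, is omitted for brevity.

```python
from math import log10

def psnr(img_a, img_b, max_val=255.0):
    # Peak Signal-to-Noise Ratio between two equal-sized grayscale images,
    # computed from the mean squared error over all pixels
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * log10(max_val ** 2 / mse)

generated = [[100, 110], [120, 130]]      # hypothetical 2x2 generated sample
ground_truth = [[102, 108], [121, 129]]   # hypothetical reference image
print(f"PSNR = {psnr(generated, ground_truth):.1f} dB")
```

Higher PSNR means the generated sample is numerically closer to the ground-truth photograph; in practice it is reported alongside SSIM because PSNR alone ignores perceptual structure.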

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 140
152 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics, also known as Greimassian Semiotics, in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: -The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. -Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and that presents different levels of abstraction. This ‘grammar’ would be a style marker. -Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when it is known that intra-speaker variation depends on so many factors? -The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, it means each one’s choices have discriminatory power. -Size is another issue for various attribution methods.
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed the hypothesis was confirmed and, hence, that the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
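The Jaccard coefficient used above is simple to reproduce. The sketch below compares two hypothetical sets of semiotic tags; the tag names are invented for illustration and are not taken from the study's annotation scheme.

```python
def jaccard(tags_a, tags_b):
    # Jaccard coefficient: |A intersection B| / |A union B|, in [0, 1]
    a, b = set(tags_a), set(tags_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Invented tags standing in for content-plane categories of two text groups
author1_a = {"euphoria", "conjunction", "intensity"}
author1_b = {"euphoria", "conjunction", "extensity"}
print(jaccard(author1_a, author1_b))  # → 0.5 (2 shared tags out of 4 total)
```

Under the study's hypothesis, the coefficient between two groups of the same author should exceed the coefficient between groups of different authors.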

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 223
151 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus.
Our platform is developed using adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
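The acquisition, optimization, and actuation cycle described in points (a) through (e) can be sketched in miniature. Everything below is a hypothetical stand-in: the class name, the simulated sensor read, and the one-line "optimizer" are illustrative placeholders, not the platform's actual IoT or web-service interfaces.

```python
import random

class HVACDigitalTwin:
    """Miniature sketch of the cyber-physical control loop; the sensor,
    optimizer, and history store are invented stand-ins for the platform's
    real data acquisition and model predictive control services."""

    def __init__(self, setpoint_c=22.0):
        self.setpoint_c = setpoint_c
        self.history = []  # experimental data the twin would share (item d)

    def read_sensor(self):
        # Stand-in for IoT data acquisition from an HVAC instrument (item a)
        return self.setpoint_c + random.uniform(-2.0, 2.0)

    def optimize_setpoint(self, measured_c):
        # Stand-in for one model predictive control step (items c and e):
        # nudge the command toward the comfort target of 22 degrees C
        return self.setpoint_c + 0.5 * (22.0 - measured_c)

    def control_step(self):
        measured = self.read_sensor()
        command = self.optimize_setpoint(measured)
        self.history.append((measured, command))
        return command

twin = HVACDigitalTwin()
commands = [twin.control_step() for _ in range(5)]
print(len(twin.history), "control steps logged")
```

A real deployment would replace the simulated read with instrument telemetry and the proportional nudge with a building-specific model predictive controller, but the loop structure is the same.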

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 80
150 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a very modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with the simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment planning. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of the neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatments, their incidence directly impacts the quality of life of cancer survivors, in particular pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of their energy and angular distributions: an accurate characterization is needed in order to improve the TPS and reduce safety margins.
The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the object is 10 x 10 x 20 cm3 (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector is under development, as is the SBAM sensor, and it is expected to be fully constructed by the end of the year. MONDO will carry out data taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 208
149 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
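For readers unfamiliar with the reduced cross-section method, the arithmetic behind the 7 mm rule can be sketched as follows. The charring rate and section sizes below are illustrative only; for real design they must be taken from EN 1995-1-2 for the actual product and exposure (in particular, the coefficient k0 is less than 1.0 during the first 20 minutes of unprotected standard fire exposure).

```python
def effective_char_depth(beta_n, t_min, d0=7.0, k0=1.0):
    # Reduced cross-section method: d_ef = d_char,n + k0 * d0,
    # where the notional char depth is d_char,n = beta_n * t (all in mm)
    return beta_n * t_min + k0 * d0

def residual_width(b0, beta_n, t_min):
    # A beam charring on both vertical faces loses 2 * d_ef of width
    return b0 - 2.0 * effective_char_depth(beta_n, t_min)

# Illustrative: beta_n = 0.7 mm/min, 30 min of exposure, 140 mm wide beam
d_ef = effective_char_depth(0.7, 30.0)          # about 21 + 7 = 28 mm
print(round(residual_width(140.0, 0.7, 30.0)))  # about 84 mm remains
```

The study's point is precisely that the fixed d0 = 7 mm term in this formula is calibrated for standard fire exposure and may need revision under parametric fire curves.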

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 145
148 Rendering Religious References in English: Naguib Mahfouz in the Arabic as a Foreign Language Classroom

Authors: Shereen Yehia El Ezabi

Abstract:

The transition from the advanced to the superior level of Arabic proficiency is widely known to pose considerable challenges for English speaking students of Arabic as a Foreign Language (AFL). Apart from the increasing complexity of the grammar at this juncture, together with the sprawling vocabulary, to name but two of those challenges, there is also the somewhat less studied hurdle along the way to superior level proficiency, namely, the seeming opacity of many aspects of Arab/ic culture to such learners. This presentation tackles one specific dimension of such issues: religious references in literary texts. It illustrates how carefully constructed translation activities may be used to expand and deepen students’ understanding and use of them. This is shown to be vital for making the leap to the desired competency, given that such elements, as reflected in customs, traditions, institutions, worldviews, and formulaic expressions lie at the very core of Arabic culture and, as such, pervade all modes and levels of Arabic discourse. A short story from the collection “Stories from Our Alley”, by preeminent novelist Naguib Mahfouz is selected for use in this context, being particularly replete with such religious references, of which religious expressions will form the focus of the presentation. As a miniature literary work, it provides an organic whole, so to speak, within which to explore with the class the most precise denotation, as well as the subtlest connotation of each expression in an effort to reach the ‘best’ English rendering. The term ‘best’ refers to approximating the meaning in its full complexity from the source text, in this case Arabic, to the target text, English, according to the concept of equivalence in translation theory. The presentation will show how such a process generates the sort of thorough discussion and close text analysis which allows students to gain valuable insight into this central idiom of Arabic. 
A variety of translation methods will be highlighted, gleaned from the presenter’s extensive work with advanced/superior students in the Center for Arabic Study Abroad (CASA) program at the American University in Cairo. These begin with the literal rendering of expressions, with the purpose of reinforcing vocabulary learning and practicing the rules of derivational morphology as they form each word, since the larger context remains that of an AFL class, as opposed to a translation skills program. However, departures from the literal approach are subsequently explored by degrees, moving along the spectrum of functional and pragmatic freer translations in order to transmit the ‘real’ meaning in readable English to the target audience, no matter how culture- or religion-specific the expression, while remaining faithful to the original. Samples from students’ work pre- and post-discussion will be shared, demonstrating how class consensus is formed as to the final English rendering, proposed as the closest match to the Arabic and shown to be the result of the above activities. Finally, a few examples of translation work which students have gone on to publish will be shared to corroborate the effectiveness of this teaching practice.

Keywords: superior level proficiency in Arabic as a foreign language, teaching Arabic as a foreign language, teaching idiomatic expressions, translation in foreign language teaching

Procedia PDF Downloads 174
147 Assessment of the Situation and the Cause of Junk Food Consumption in Iranians: A Qualitative Study

Authors: A. Rezazadeh, B Damari, S. Riazi-Esfahani, M. Hajian

Abstract:

The consumption of junk food in Iran is alarmingly increasing. This study aimed to investigate the factors influencing junk food consumption and the amendable interventions reviewed and approved by stakeholders, in order to present them to health policymakers. Articles and documents related to the study topic were collected using appropriate keywords such as junk food, carbonated beverage, chocolate, candy, sweets, industrial fruit juices, potato chips, French fries, puffed corn, cakes, biscuits, sandwiches, prepared foods, popsicles, ice cream, bars, chewing gum, pastilles and snacks, in the scholar.google.com, pubmed.com, eric.ed.gov, cochrane.org, magiran.com, medlib.ir, irandoc.ac.ir, who.int, iranmedex.com, sid.ir, pubmed.org and sciencedirect.com databases. The main key points were extracted, included in a checklist, and qualitatively analyzed. A summarized abstract was then prepared in the format of a questionnaire to be presented to stakeholders. The study design was qualitative (Delphi). Following this method, a questionnaire based on the reviewed articles and documents was emailed to stakeholders, who were asked to prioritize and choose the main problems and effective interventions. After three rounds, consensus was obtained. Studies revealed high consumption of junk foods in the Iranian population, especially among children and adolescents. The most important contributing factors include availability, low price, media advertisements, preference for the taste of fast foods, the variety and attractiveness of packaging, low awareness and changing lifestyles. The main interventions recommended by stakeholders include developing a protective environment, educational interventions, increasing access to healthy food, controlling media advertisements and having the Industry and Mining Ministry press producers to produce healthy snacks.
According to the findings, the results of this study may be proposed to public health policymakers as an advocacy paper and integrated into the interventional programs of the Health and Education ministries and the media. Implementation of supportive meetings with the producers of alternative healthy products is also suggested.

Keywords: junk foods, situation, qualitative study, Iran

Procedia PDF Downloads 230
146 Evidence-Based Practices in Education: A General Review of the Literature on Elementary Classroom Setting

Authors: Carolina S. Correia, Thalita V. Thomé, Andersen Boniolo, Dhayana I. Veiga

Abstract:

Evidence-based practices (EBP) in education are a set of principles and practices used to inform educational policy; EBP involves integrating professional expertise in education with the best empirical evidence when making decisions about how to deliver instruction. The purpose of this presentation is to describe and characterize studies about EBP in education in the elementary classroom setting. The data presented here are part of an ongoing systematic review. Articles were searched and selected from four academic databases: ProQuest, Scielo, Science Direct and Capes. The search terms were evidence-based practices or program effectiveness, and education or teaching or teaching practices or teaching methods. Articles were included according to the following criteria: the studies were explicitly described as evidence-based or discussed the most effective practices in education, and they discussed teaching practices in the classroom context at the elementary school level. Document excerpts were extracted and recorded in Excel, organized by reference, descriptors, abstract, purpose, setting, participants, type of teaching practice, study design and main results. A total of 1,185 articles were initially retrieved: 569 from the ProQuest Research Library, 216 from CAPES, 251 from ScienceDirect and 149 from the Scielo Library. There were 178 potentially relevant references, from which duplicates were removed, leaving a final set of 140 articles for analysis. Of the 140 articles, 47 are theoretical studies and 93 are empirical articles. The following research design methods were identified: longitudinal intervention study, cluster-randomized trial, meta-analysis and pretest-posttest studies. Of the 140 articles, 103 studies concerned regular school teaching and 37 concerned special education teaching practices.
Several teaching methods were used across the studies: active learning, content acquisition podcast (CAP), precision teaching (PT), mediated reading practice, speech therapist programs and peer-assisted learning strategies (PALS). The countries of origin of the studies were the United States of America, United Kingdom, Panama, Sweden, Scotland, South Korea, Argentina, Chile, New Zealand and Brunei. The present study is an ongoing project, so some representative findings will be discussed, providing further insight into the best teaching practices in the elementary classroom setting.

Keywords: best practices, children, evidence-based education, elementary school, teaching methods

Procedia PDF Downloads 316
145 Internet of Things in Higher Education: Implications for Students with Disabilities

Authors: Scott Hollier, Ruchi Permvattana

Abstract:

The purpose of this abstract is to share the findings of a recently completed disability-related Internet of Things (IoT) project undertaken at Curtin University in Australia. The project focused on identifying how IoT could support people with disabilities with their educational outcomes. To achieve this, the research consisted of an analysis of current literature and interviews conducted with students with vision, hearing, mobility and print disabilities. While the research acknowledged that the ability to collect data with IoT is now a fairly common occurrence, its benefits and applicability still need to be grounded back into real-world applications. Furthermore, it is important to consider whether there are sections of our society that may benefit from these developments and whether those benefits are being fully realised in a rush by large companies to achieve IoT dominance for their particular product or digital ecosystem. In this context, it is important to consider a group which, to our knowledge, has had little specific mainstream focus in the IoT area: people with disabilities. For people with disabilities, the ability for every device to interact with us and with each other has the potential to yield significant benefits. In terms of engagement, the arrival of smart appliances is already offering benefits such as the ability for a person in a wheelchair to give verbal commands to an IoT-enabled washing machine if the buttons are out of reach, or for a blind person to receive a notification on a smartphone when dinner has finished cooking in an IoT-enabled microwave. With clear benefits of IoT identified for people with disabilities, it is important to also identify the implications for education.
With higher education being a critical pathway for many people with disabilities in finding employment, the question as to whether such technologies can support the educational outcomes of people with disabilities was what ultimately led to this research project. This research will discuss several significant findings that have emerged from the research in relation to how consumer-based IoT can be used in the classroom to support the learning needs of students with disabilities, how industrial-based IoT sensors and actuators can be used to monitor and improve the real-time learning outcomes for the delivery of lectures and student engagement, and a proposed method for students to gain more control over their learning environment. The findings shared in this presentation are likely to have significant implications for the use of IoT in the classroom through the implementation of affordable and accessible IoT solutions and will provide guidance as to how policies can be developed as the implications of both benefits and risks continue to be considered by educators.

Keywords: disability, higher education, internet of things, students

Procedia PDF Downloads 95
144 Understanding the Influence of Social Media on Individual’s Quality of Life Perceptions

Authors: Biljana Marković

Abstract:

Social networks are an integral part of our everyday lives, having become an indispensable medium for communication in personal and business environments. New forms and ways of communication change the general mindset and significantly affect the quality of life of individuals. Quality of life is perceived as an abstract term, but people are often unaware that they directly affect the quality of their own lives by making minor but significant everyday choices and decisions. Quality of life can be defined broadly, but in the widest sense, it involves a subjective sense of satisfaction with one's life. Scientific knowledge about the impact of social networks on individuals' self-assessment of quality of life is only beginning to be researched. Available research indicates potential benefits as well as a number of disadvantages. In this context, the study conducted by the authors of this paper focuses on analyzing the impact of social networks on individuals' self-assessment of quality of life and the correlation between time spent on social networks and the content that individuals choose to share to present themselves. It also aims to explain how much, and in what ways, individuals critically judge the lives of others online. The research aspires to show the positive as well as negative aspects that social networks, primarily Facebook and Instagram, have on how individuals create an image of themselves and compare themselves with others. This paper is based on quantitative research conducted on a representative sample. An analysis of the results of the survey, conducted online, examined the hypothesis that content shared by individuals on social networks influences the image they create about themselves.
A comparative analysis of the results obtained against the results of similar research led to a conclusion about the synergistic influence of social networks on respondents' perceived quality of life. The originality of this work lies in its approach of examining attitudes about an individual's life satisfaction, the way individuals create an image of themselves through social networks, the extent to which they compare themselves with others, and which social media applications they use. At the cognitive level, scientific contributions were made through the development of information concepts on quality of life; at the methodological level, through the development of an original methodology for qualitative alignment of respondents' attitudes using statistical analysis; and at the practical level, through the application of these concepts in assessing how self-image and the image of others are created through social networks.

Keywords: quality of life, social media, self image, influence of social media

Procedia PDF Downloads 107
143 Demographic Determinants of Spatial Patterns of Urban Crime

Authors: Natalia Sypion-Dutkowska

Abstract:

The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups in different social situations (property, housing, work) and representing different generations with different behavior patterns. These are the communities from which offenders and victims of crimes come. The analysis of the literature and national police reports gives rise to hypotheses about the ability of a given age group to generate crime, both as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect socio-demographic determinants of spatial patterns of urban crime. A multi-feature classification of districts was also carried out, in which specific crime rates are the diagnostic features. In this way, areas with a similar structure of socio-demographic determinants of spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland. It has about 400,000 inhabitants and an area of about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic and cultural capital of the region. It also has a seaport and an airport. Moreover, according to ESPON 2007, Szczecin is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts, the auxiliary administrative units of the municipal government. The population of each of them in 2015-17 was divided into 8 age groups: babies (0-2 yrs.), children (3-11 yrs.), teens (12-17 yrs.), younger adults (18-30 yrs.), middle-age adults (31-45 yrs.), older adults (46-65 yrs.), early older (66-80 yrs.) and late older (from 81 yrs.).
The crimes reported in 2015-17 in each of the districts were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary into an apartment, break-in into a commercial facility, car break-in, break-in into other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each of the districts. The analysis was carried out on an intra-city scale; this is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942.
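The specific crime rate described above, offences attributed to an age group per resident of that group rather than per total resident, can be sketched in a few lines of Python. The age-group labels follow the paper's scheme, but the incident and population counts below are invented for illustration and are not the Szczecin data.

```python
# Specific crime rate: incidents linked to an age group per 1,000 members
# of that group, rather than per 1,000 total residents of the district.
# All figures below are hypothetical, not the actual Szczecin data.

def specific_crime_rate(incidents, group_population, per=1000):
    """Crime rate for one age group, per `per` residents of that group."""
    return incidents / group_population * per

# One hypothetical district: incident and population counts by age group.
district = {
    "teens (12-17)":          {"incidents": 24, "population": 1800},
    "younger adults (18-30)": {"incidents": 95, "population": 5200},
    "older adults (46-65)":   {"incidents": 31, "population": 7400},
}

rates = {group: specific_crime_rate(d["incidents"], d["population"])
         for group, d in district.items()}

for group, rate in rates.items():
    print(f"{group}: {rate:.1f} per 1,000")
```

With 10 crime groups and 8 age groups, repeating this per district yields the 80 specific rates per district used as diagnostic features in the classification.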

Keywords: age groups, determinants of crime, spatial crime pattern, urban crime

Procedia PDF Downloads 156
142 Investigation of Preschool Children's Mathematics Concept Acquisition in Terms of Different Variables

Authors: Hilal Karakuş, Berrin Akman

Abstract:

The preschool years are considered critical because they shape individuals' future lives. Knowledge, skills, and concepts are acquired during this period, and the basis of academic skills rests on it. As all developmental areas progress fastest in this period, the foundation of mathematics education should be laid then as well. Mathematics is seen as a difficult and abstract subject by most people. Therefore, the enjoyable side of mathematics should be presented in a concrete way in this period so that children do not form any bias against mathematics. This study was conducted to examine children's mathematics concept acquisition in terms of different variables. A screening model was used in this quantitative study. The study group consists of 300 children in total, selected randomly in groups of five from each class of public and private preschools in Çankaya, a district of Ankara, in the 2014-2015 academic year, attending nursery classes and preschool institutions affiliated with the Ministry of National Education. The study group was determined by the stage sampling method: the schools were chosen by convenience sampling and the children by simple random sampling. Research data were collected with the Bracken Basic Concept Scale-Revised Form and a Child's Personal Information Form created by the researcher to gather information about the children and their families. The Bracken Basic Concept Scale-Revised Form consists of 11 sub-dimensions (color, letter, number, size, shape, comparison, direction-location, quantity, individual and social awareness, building-material) and 307 items. The subtests related to mathematics were used in this research.
The Child Individual Information Form contains items covering the following demographic information: children's age, gender, attendance at preschool educational institutions, duration of school attendance, and mothers' and fathers' education levels. The study found that children's mathematics skills differ by age, attendance at a preschool educational institution, duration of attendance, and their mothers' and fathers' education levels, but do not differ by gender or by the type of school attended.

Keywords: preschool education, preschool period children, mathematics education, mathematics concept acquisitions

Procedia PDF Downloads 329
141 Active Learning through a Game Format: Implementation of a Nutrition Board Game in Diabetes Training for Healthcare Professionals

Authors: Li Jiuen Ong, Magdalin Cheong, Sri Rahayu, Lek Alexander, Pei Ting Tan

Abstract:

Background: Previous programme evaluations from the diabetes training programme conducted in Changi General Hospital revealed that healthcare professionals (HCPs) are keen to receive advanced diabetes training and education, specifically in medical nutrition therapy. HCPs also expressed a preference for interactive activities over didactic teaching methods to enhance their learning. Since the War on Diabetes was initiated by MOH in 2016, HCPs have been challenged to be actively involved in continuous education so as to be better equipped to reduce the growing burden of diabetes. Hence, streamlining training to incorporate an element of fun is of utmost importance. Aim: The nutrition programme incorporates game play using an interactive board game that aims to provide a more conducive and less stressful environment for learning. The board game could be adapted for training community HCPs, health ambassadors or caregivers to cope with the increasing demand of diabetes care in the hospital and community setting. Methodology: The stages of a game's conception (Jaffe, 2001) were adopted in the development of the interactive board game ‘Sweet Score™’. Nutrition concepts and topics in diabetes self-management are embedded into game elements of varying levels of difficulty (‘Easy,’ ‘Medium,’ ‘Hard’), including activities such as a) drawing/sculpting (Pictionary-like), b) facts/knowledge (MCQs, true or false, word definitions) and c) performing (charades). To study the effects of game play on knowledge acquisition and perceived experiences, participants were randomised into two groups, i.e., a lecture group (control) and a game group (intervention), to test the difference. Results: Participants in both groups (control group, n = 14; intervention group, n = 13) attempted a pre- and post-workshop quiz to assess the effectiveness of knowledge acquisition. The scores were analysed using a paired T-test.
There was an improvement in quiz scores after attending the game play (mean difference: 4.3, SD: 2.0, P<0.001) and the lecture (mean difference: 3.4, SD: 2.1, P<0.001). However, there was no significant difference in the improvement of quiz scores between game play and lecture (mean difference: 0.9, 95%CI: -0.8 to 2.5, P=0.280). This suggests that game play may be as effective as a lecture in terms of knowledge transfer. All 13 HCPs who participated in the game rated 4 out of 5 on the Likert scale for a favourable learning experience and the relevance of the learning to their job, whereas only 8 of the 14 HCPs in the lecture group gave a high rating in both aspects. Conclusion: There is no known board game currently designed for diabetes training for HCPs. Evaluative data from future training can provide insights and direction to improve the game format and cover other aspects of diabetes management such as self-care, exercise, medications and insulin management. Further testing of the board game to ensure learning objectives are met is important and can assist in the development of a well-designed digital game as an alternative training approach during the COVID-19 pandemic. Learning through game play increases opportunities for HCPs to bond, interact and learn in a relaxed social setting and potentially brings more joy to the workplace.
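The paired pre/post comparison reported above can be reproduced in outline with a hand-rolled paired t statistic; the quiz scores below are invented for illustration and are not the study's data.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and mean difference for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)       # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))   # compare against t distribution, df = n - 1
    return mean_d, sd_d, t

# Invented quiz scores for 6 hypothetical participants.
pre = [10, 12, 9, 11, 13, 10]
post = [14, 16, 13, 16, 17, 15]

mean_d, sd_d, t = paired_t(pre, post)
print(f"mean difference = {mean_d:.1f}, SD = {sd_d:.2f}, t = {t:.2f}")
```

In practice a statistics package (e.g. SciPy's `ttest_rel`) would also return the p-value and confidence interval reported in the abstract.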

Keywords: active learning, game, diabetes, nutrition

Procedia PDF Downloads 159
140 Investigating the Effect of Metaphor Awareness-Raising Approach on the Right-Hemisphere Involvement in Developing Japanese Learners’ Knowledge of Different Degrees of Politeness

Authors: Masahiro Takimoto

Abstract:

The present study explored how the metaphor awareness-raising approach affects the involvement of the right hemisphere in developing EFL learners’ knowledge of the different degrees of politeness embedded within different request expressions. The study was motivated by theoretical considerations regarding conceptual projection and the metaphorical idea that politeness is distance; it applied these considerations to develop Japanese learners’ knowledge of the different politeness degrees and to explore the connection between metaphorical concept projection and right-hemisphere dominance. Japanese EFL learners do not know certain language strategies (e.g., English requests can be mitigated with biclausal downgraders, including the if-clause with past-tense modal verbs) and have difficulty adjusting the politeness degrees attached to request expressions according to situations. The present study used a pre/post-test design to reaffirm the efficacy of the cognitive technique and its connection to right-hemisphere involvement via the mouth asymmetry technique. Mouth asymmetry measurement has been utilized because speech articulation, normally controlled mainly by one side of the brain, causes the muscles on the opposite side of the mouth to move more during speech production. The present research did not administer a delayed post-test because it emphasized determining whether metaphor awareness-raising approaches for developing EFL learners’ pragmatic proficiency entail right-hemisphere activation. Each test contained an acceptability judgment test (AJT), along with a speaking test in the post-test. The results show that the metaphor awareness-raising group performed significantly better than the control group on the acceptability judgment and speaking post-tests.
These data revealed that the metaphor awareness-raising approach could promote L2 learning because it aided input enhancement and concept projection; through these aspects, the participants were able to comprehend an abstract concept: the degree of politeness in terms of the spatial concept of distance. Accordingly, the proximal-distal metaphor enabled the study participants to connect the newly spatio-visualized concept of distance to the different politeness degrees attached to different request expressions; furthermore, they could recall them with the left side of the mouth being wider than the right. This supported certain findings from previous studies that indicated the possible involvement of the brain's right hemisphere in metaphor processing.

Keywords: metaphor awareness-raising, right hemisphere, L2 politeness, mouth asymmetry

Procedia PDF Downloads 129
139 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study

Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu

Abstract:

Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to providing clinical services during the COVID-19 pandemic. As a result, teleassessment of dysphagia has increased in India. Aim: This study aimed to identify the challenges faced by Indian SLPs while providing teleassessment to individuals with dysphagia during the COVID-19 outbreak from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature and lasted from 2020 to 2021. It enrolled participants who met the inclusion and exclusion criteria. Based on the sample size calculation, it was decided to recruit roughly 246 people. The research was done in stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals content-validated the questionnaire for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via e-mail and social media platforms such as WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing the tele-evaluation of dysphagia were highlighted from the oromotor examination onwards.
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties conducting the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessments inefficient and time-consuming. A larger percentage of SLPs stated that they would not recommend tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the pandemic. When it came to the amount of food used in assessment, the majority proposed a small amount. Apart from positioning the patient for assessment and receiving less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to manage online. For patients on NPO, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration by coughing, many found it difficult to discern a gurgling vocal quality after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that teleassessment of dysphagia has yet to gain a foothold in India.

Keywords: dysphagia, teleassessment, challenges, Indian SLP

Procedia PDF Downloads 111
138 Motherhood Factors Influencing the Business Growth of Women-Owned Sewing Businesses in Lagos, Nigeria: A Mixed Method Study

Authors: Oyedele Ogundana, Amon Simba, Kostas Galanakis, Lynn Oxborrow

Abstract:

The debate about factors influencing the business growth of women-owned businesses has been a topical issue in business management. Scholars have identified access to money, markets, and management as the canvassing factors influencing the business growth of women-owned businesses. However, the influence of motherhood (the household/family context) on business growth remains inconclusive in the literature, even though women are more family-oriented than their male counterparts. Therefore, this study considers the influence of the motherhood factor (household/family context) on the business growth of women-owned sewing businesses (WOSBs) in Lagos, Nigeria. The sewing business sector is chosen because the fashion industry (which includes sewing businesses) currently accounts for the second largest number of jobs in Sub-Saharan Africa, following agriculture; sewing businesses thus provide rich ground for contributing to existing scholarly work. Research questions: (1) In what way does the motherhood factor influence the business growth of WOSBs in Lagos? (2) To what extent does the motherhood factor influence the business growth of WOSBs in Lagos? For the method design, a pragmatic approach, a mixed-methods technique and an abductive form of reasoning are adopted. This design is chosen because it fits the research questions better than other research perspectives: a positivist approach alone could not sufficiently answer research question 1, nor could an interpretive approach sufficiently answer research question 2. The research is therefore divided into two phases, and the results of one phase inform the development of the subsequent phase (only phase 1 has been completed at the moment). The first phase uses a qualitative data and analytical method to answer research question 1.
The second phase of the research uses a quantitative data and analytical method to answer research question 2. For the qualitative phase, 5 WOSBs were purposefully selected and interviewed. This sampling technique was selected because the intention at this phase was exploratory rather than statistical inference. The 5 sampled women comprised 2 unmarried women, 1 married woman with no child, and 2 married women with children. A 40-60 minute interview was conducted with each participant. The interviews were audio-recorded and transcribed, and the data were then analysed using thematic analysis in order to unearth patterns and relationships. Findings from the first phase reveal that motherhood (the household/family context) directly influences, positively or negatively, the performance of WOSBs in Lagos. Apart from this direct influence, motherhood also moderates, positively or negatively, the other factors influencing WOSBs in Lagos, such as access to money, management/human resources and markets/opportunities. Strengthening this conclusion, a word frequency query shows that ‘family,’ ‘husband’ and ‘children’ are among the 10 most frequently used words across all the interview transcripts. This first phase contributes to existing studies by showing the various forms through which motherhood influences WOSBs. The second phase (for which data are yet to be collected) will reveal the extent to which motherhood influences the business growth of WOSBs in Lagos.
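A word frequency query of the kind mentioned above can be sketched with Python's `collections.Counter`; the transcript snippets and the stopword list below are fabricated for illustration, not the study's actual interview data.

```python
import re
from collections import Counter

# A minimal, illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = frozenset({"the", "a", "my", "i", "to", "and"})

def word_frequencies(transcripts):
    """Count word occurrences across interview transcripts, case-insensitively,
    ignoring stopwords."""
    words = []
    for text in transcripts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words)

# Fabricated snippets standing in for the five interview transcripts.
transcripts = [
    "My husband helps with the children while I sew for the family.",
    "The family comes first, so the children set my working hours.",
]

freq = word_frequencies(transcripts)
print(freq.most_common(3))
```

`most_common(10)` over the full transcripts would yield the top-10 list in which 'family,' 'husband' and 'children' appeared.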

Keywords: women-owned sewing businesses, business growth, motherhood, Lagos

Procedia PDF Downloads 144
137 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve the facility's market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State and about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach of competitor identification and regression to evaluate and predict market share, respectively. The model-agnostic technique SHAP is leveraged to quantify the relative importance of features impacting the market share. Typical techniques in the literature quantify the degree of competitiveness among facilities by calculating an empirical competitive factor to interpret the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are thereby identified.
Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict market share. To identify key drivers of market share at an overall level, permutation feature importance of the attributes was calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, was incorporated. This helped identify and rank the attributes at each facility that impact its market share. The approach amalgamates two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias and thereby drive strategic business decisions.
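Permutation feature importance, used above to rank overall drivers, can be sketched in plain Python; this is an illustrative toy in which the model, features, and data are invented for the example and do not represent the study's actual facility data:

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean increase in MSE after shuffling one feature column at a time;
    a larger increase means the feature matters more to the predictions."""
    rng = random.Random(seed)
    mse = lambda preds: sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    baseline = mse([model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature/target link for column j
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(mse([model(row) for row in X_perm]) - baseline)
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "market share" model that depends only on feature 0.
model = lambda row: 2.0 * row[0]
X = [[float(i), float(i % 3)] for i in range(20)]
y = [model(row) for row in X]
imp = permutation_importance(model, X, y)
print(imp)
```

Shuffling the ignored second feature leaves the error unchanged, so its importance is zero, while the first feature's importance is large; production pipelines would typically use a library implementation such as scikit-learn's rather than this sketch.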

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 71
136 Save Balance of Power: Can We?

Authors: Swati Arun

Abstract:

The present paper argues that Balance of Power (BOP) theory needs to be conjugated with certain contingencies, such as geography. It is evident that sea powers ('insular' powers, for better clarity) are not balanced (if at all) in the same way as land powers. It is apparent that the artificial insularity the US has achieved reduces the chances of its being balanced (a constant) and helps it maintain preponderance (a variable). But how precise is this approach in assessing the dynamics between China's rise and the reaction of other powers and the US? The 'evolved' theory can be validated by putting China and the US into the equation. Systemic relations between nations were explained through Balance of Power theory long before systems theory was propounded. BOP is the crux of the functionality of 'power relation' dynamics, which has played its role in the most astounding ways, leading to situations of war and peace. Whimsical but true: BOP has remained a complicated and indefinable concept from Hans Morgenthau to Kenneth Waltz. A challenge of BOP, however, remains: "it has too many meanings." In recent times, it has become evident that the myriad expectations generated by BOP have not met the practicality of current world politics. It is for this reason that BOP has been replaced by Preponderance Theory (PT) to explain the prevailing power situation. PT does provide empirical reasoning for its success but fails in the abstract logical reasoning required to make a theory universal. Unipolarity characterizes the current system as one where the balance of power has become redundant. It seems to reach beyond the contours of BOP, where a superpower does what it must to remain one. The centrality of this argument pivots on an exception: every time BOP fails to operate, preponderance of power emerges. PT does not sit well with the primary logic of a theory because it works on an exception.
An account of how a pattern and system evolve in which BOP fails and preponderance emerges is absent. The puzzle here is whether BOP has really become redundant or merely needs polishing. The international power structure changed from multipolar to bipolar to unipolar. BOP was looked to for the inevitable logic behind such changes and to answer the dilemma we see today: why is the US unchecked and unbalanced? But why was Britain unchecked in the 19th century, and why was China unbalanced in the 13th century? It is the insularity of a state that makes BOP reproduce an "imbalance of power," going a level up from the off-shore balancer. This luxury of a state to maintain imbalance in the region of competition or threat is the causal relation between BOP and geography. America has applied imbalancing, meaning disequilibrium in its favor, to maintain the regional balance so that, over time, the weaker does not grow stronger and pose a competition. It could do so due to the significant disparity between the US and the rest.

Keywords: balance of power, china, preponderance of power, US

Procedia PDF Downloads 258
135 Prevalence of Rituximab Efficacy Over Immunosuppressants in Therapy of Systemic Sclerosis

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives: Rituximab (RTX) has shown a positive effect in the treatment of systemic sclerosis (SSc), but there are still not enough data comparing the effectiveness of RTX with immunosuppressants (IS). The aim of our study was to compare changes in lung function and skin score in SSc between two groups of patients (pts): one on RTX therapy (prescribed after the ineffectiveness of previous IS therapy) and one on IS therapy only. Methods: This study included 103 pts who received RTX in addition to previous therapy (group 1) and 65 pts who received therapy with IS and prednisolone (group 2). The mean follow-up period was 12.6±10.7 months. In group 1, the mean age was 47±12.9 years; 88 pts (84%) were female, and 55 pts (53%) had the diffuse cutaneous subset of the disease. The mean disease duration was 6.2±5.5 years. 82% of pts had interstitial lung disease (ILD), and 92% were positive for ANA, 67% of whom were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 11.3±4.5 mg/day; 47% of them received IS at inclusion. The cumulative mean dose of RTX was 1.7±0.6 g. In group 2, the mean age was 50.8±13.8 years; 53 pts (82%) were female, and 44 pts (68%) had the diffuse cutaneous subset of the disease. The mean disease duration was 8.8±7.7 years. 81% of pts had ILD, and 88% were positive for ANA, 58% of whom were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 8.69±4.28 mg/day; 57% of them received IS. Cyclophosphamide (CP) was received by 45% of pts, with a cumulative mean dose of 10.2±15.1 g. D-penicillamine was received by 30% of pts; other pts were on mycophenolate mofetil or methotrexate therapy in single cases. The pts of the compared groups did not differ in the main demographic and clinical parameters. The results are presented as delta (Δ), the difference between the baseline parameter and the follow-up point. Results:
In group 1, there was an improvement in all outcome parameters: forced vital capacity (% predicted) increased, ΔFVC=4% (p=0.0004); diffusing capacity for carbon monoxide (% predicted) remained stable, ΔDLCO=0.1%; the Rodnan skin score improved, ΔmRss=3.4 (p=0.001); and the activity index (EScSG-AI) decreased, ΔActivity index=1.7 (p=0.001). In group 2, the changes were insignificant: ΔFVC=-2.3%, ΔmRss=0.87, ΔActivity index=0.3; however, there was a significant decrease in DLCO: ΔDLCO=-5.1% (p=0.001). Conclusion: The results of our study confirm the positive effect of RTX in the complex therapy of pts with SSc (decrease in skin induration, increase in FVC, stabilization of DLCO). Meanwhile, pts on IS and prednisolone therapy showed worsening of lung function and insignificant changes in other clinical parameters. RTX could be considered a more effective option in the complex treatment of SSc in comparison with IS therapy.

Keywords: immunosuppressants, interstitial lung disease, systemic sclerosis, rituximab

Procedia PDF Downloads 63
134 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world's population will develop a mental disorder over their lifetime. Having a mental disorder is a huge burden in health loss and cost for the individual, but also for society, because of treatment costs, production loss, and caregivers' costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, NHS York Database, and PsycINFO using key terms for cost and mental disorders. Searches were restricted to the period from 1980 until May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcomes in monetary units. 13,640 publications were screened by title/abstract, and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method, and cost unit was extracted. The vast majority of the included studies were from Western countries, and only a few were from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders; the group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components, and outcomes of the included studies.
An online tool is under development that will enable the reader to explore the published information on costs by type of mental disorder, subgroup, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic cost of mental disorders worldwide. The paper will provide an important and comprehensive overview of the economic burden of mental disorders, and the output from this review will inform policymaking.

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 109
133 A Corpus-Based Analysis of "MeToo" Discourse in South Korea: Coverage Representation in Korean Newspapers

Authors: Sun-Hee Lee, Amanda Kraley

Abstract:

The “MeToo” movement is a social movement against sexual abuse and harassment. Though the hashtag went viral in 2017 following different cultural flashpoints in different countries, the initial response was quiet in South Korea. This radically changed in January 2018, when a high-ranking senior prosecutor, Seo Ji-hyun, gave a televised interview discussing being sexually assaulted by a colleague. Acknowledging public anger, particularly among women, over the long-existing problems of sexual harassment and abuse, the South Korean media have focused on several high-profile cases. Analyzing the media representation of these cases is a window into the evolving South Korean discourse around “MeToo.” This study presents a linguistic analysis of “MeToo” discourse in South Korea by utilizing a corpus-based approach. The term corpus (pl. corpora) refers to electronic language data, that is, any collection of recorded instances of spoken or written language. To conduct this language analysis, a “MeToo” corpus was collected by extracting newspaper articles containing the keyword “MeToo” from BIGKinds, a big data analysis service, and Nexis Uni, an online academic database search engine. The corpus analysis explores how Korean media represent accusers and the accused, victims and perpetrators. The extracted data include 5,885 articles from four broadsheet newspapers (Chosun, JoongAng, Hangyore, and Kyunghyang) and 88 articles from two Korea-based English newspapers (Korea Times and Korea Herald) between January 2017 and November 2020. The analysis covers basic keyword frequency and network analysis and adds refined examinations of select corpus samples through naming strategies, semantic relations, and pragmatic properties.
Along with the exponential increase in the number of articles containing the keyword “MeToo,” from 104 articles in 2017 to 3,546 articles in 2018, the network and keyword analysis highlights ‘US,’ ‘Harvey Weinstein,’ and ‘Hollywood’ as keywords for 2017, with articles in 2018 highlighting ‘Seo Ji-hyun,’ ‘politics,’ ‘President Moon,’ ‘An Ui-jeong,’ ‘Lee Yoon-taek’ (the names of perpetrators), and ‘(Korean) society.’ This outcome demonstrates the shift of media focus from international affairs to domestic cases. Another crucial finding is that the word ‘defamation’ is widely distributed in the “MeToo” corpus. This relates to the South Korean legal system, in which a person who defames another by publicly alleging information detrimental to their reputation, whether factual or fabricated, is punishable by law (Article 307 of the Criminal Act of Korea). If the defamation occurs on the internet, it is subject to aggravated punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection. These laws, in particular, have been used against accusers who have publicly come forward in the wake of “MeToo” in South Korea, adding an extra dimension of risk. This corpus analysis of “MeToo” newspaper articles contributes to the analysis of the media representation of the “MeToo” movement and sheds light on the shifting landscape of gender relations in the public sphere in South Korea.
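The per-year keyword frequencies reported above suggest a simple counting step over dated articles; the sketch below is an illustrative toy with invented snippets, not the BIGKinds or Nexis Uni query interface:

```python
from collections import Counter, defaultdict

def keyword_counts_by_year(articles, keywords):
    """Count case-insensitive occurrences of each keyword per publication year."""
    counts = defaultdict(Counter)
    for year, text in articles:
        lowered = text.lower()
        for kw in keywords:
            counts[year][kw] += lowered.count(kw.lower())
    return counts

# Invented snippets standing in for real newspaper articles.
articles = [
    (2017, "Harvey Weinstein and Hollywood dominated MeToo coverage."),
    (2018, "Seo Ji-hyun's interview reshaped MeToo discourse in Korean society."),
]
counts = keyword_counts_by_year(articles, ["MeToo", "Hollywood"])
print(dict(counts[2017]), dict(counts[2018]))
```

Summing a keyword's counts across a year's articles yields the trend lines (e.g., the jump from 2017 to 2018) that the study reports.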

Keywords: corpus linguistics, MeToo, newspapers, South Korea

Procedia PDF Downloads 196
132 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current crisis has swept the world, with particular impact on the most developed countries, those that account for most of the gross world product and enjoy a high standard of living. Even those who are not experts can describe the consequences of the crisis that are plainly visible, but how far this crisis will go is impossible to predict. Even the greatest experts offer only conjecture, and their views diverge widely, yet they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems. Like river water, the market would flow to find the best course and arrive at the best solution needed. Hence, fewer state barriers to the market, less state intervention, and a market left to regulate itself. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations such as the World Bank and economically powerful states laid the free market economy and the elimination of state intervention at the foundation of their development and cooperation principles. The less the state intervened, the more freedom of action there was; this was the leading international market principle. We now live in an era of financial tragedy. Financial markets, and banking economies in particular, are in a dire state: US stock markets fell about 40%, in other words, one of the darkest moments since 1920.
It is rivaled only by the "collapse" of the Wall Street stock market in 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was entering World War II. In 2000, even though it seemed as if the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and the 1970s, yet the world pulled through. The recent financial crisis, by contrast, has all the signs of being much sharper and of carrying more consequences. The decline in stock prices is more a byproduct of what is really happening: financial markets began a dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena can be matched very well with the excesses of the 1920s, a period during which people spent money by the fistful as if there were no tomorrow. The word recession is no longer far from anyone's lips, and it no longer arrives as something sudden and abrupt. The more the financial markets melt down, the greater the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which most loans were based, especially land, kept falling in value; land prices in Japan have continued to fall for about 15 years. (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be. What we know is that many banks will need more time before they can resume extending credit; yet extending credit is banks' primary function, and this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 365
131 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, with a total of 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity that reflects processing demand, is employed. The frequency motif, a sequential unit not bound to grammar, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We reason about the results within the framework that cognitive demand is exerted on both the maintenance and coordination components of Working Memory.
On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows the interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
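Two of the lexical indices named above, type-token ratio and lexical density, are simple to compute; the sketch below uses a tiny invented function-word list and made-up sample sentences purely for illustration, not the study's actual corpus or word inventories:

```python
import re

# A tiny illustrative function-word list; real studies use fuller inventories.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "is", "it"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text):
    """Distinct words divided by total words; lower means more repetitive output."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens)

def lexical_density(text):
    """Share of content (non-function) words among all words."""
    tokens = tokenize(text)
    return sum(1 for t in tokens if t not in FUNCTION_WORDS) / len(tokens)

# Two invented output samples: one repetitive, one varied.
sample_a = "the speaker said that the market is growing and the market is strong"
sample_b = "the speaker described steady market growth and robust demand"
print(type_token_ratio(sample_a), type_token_ratio(sample_b))
```

Computed over transcribed interpreting output, lower values of these indices on one mode's transcripts would be read as lexical simplification in the sense the study describes.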

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 149
130 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerability classes such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
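As a rough illustration of the AST-based feature extraction described above, the sketch below uses Python's built-in ast module (rather than the Java/C++ parsers the study targets) and collects root-to-leaf node-type paths; Code2Vec proper uses leaf-to-leaf path contexts and learned embeddings, so this is only a simplified stand-in:

```python
import ast

def leaf_paths(tree):
    """Collect root-to-leaf sequences of AST node-type names,
    a simplified stand-in for Code2Vec-style path contexts."""
    paths = []
    def walk(node, prefix):
        label = type(node).__name__
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix + [label])
        for child in children:
            walk(child, prefix + [label])
    walk(tree, [])
    return paths

# An invented snippet with a potential OS-command-injection pattern.
source = "def run(cmd):\n    import os\n    os.system(cmd)\n"
paths = leaf_paths(ast.parse(source))
for p in paths:
    print(" -> ".join(p))
```

Even these coarse structural paths expose patterns (e.g., a Call through an Attribute reaching a Name) that a downstream classifier could weight when flagging risky code.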

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 85
129 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
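The Word Error Rate used above is the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the number of reference words; a minimal sketch follows, with an invented Spanish toy example rather than the study's data:

```python
def wer(reference, hypothesis):
    """Word Error Rate: (S + D + I) / N via edit distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance on words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # cost of deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # cost of inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match/substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution (estas -> esta) and one deletion (hoy) over 4 reference words.
print(wer("hola como estas hoy", "hola como esta"))
```

A "6% relative WER improvement," in these terms, means the new system's WER is 6% lower than the baseline's WER, not 6 percentage points lower.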

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 322