Search results for: mobile learning tools

370 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability

Authors: Anastasios Ntousas

Abstract:

The purpose of this paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis and their impact on reading ability. Three questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate the creation of stable, synchronized four word representations; and could morphological knowledge of words be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis proposes that words comprise four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four word representations work together to make word reading an effective task. A lack of knowledge in one of the representations might disrupt reading comprehension. The degree to which the four word features connect with one another produces high or low lexical quality word representations. When the four representational attributes connect effectively, readers have high lexical quality of words; when the attributes are only weakly connected, readers have low lexical quality of words. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of word-specific orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic, and meaning representations of unknown words. The phonology of known words might effectively activate the remaining representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy may appear with readers’ morphological knowledge. Morphemes might facilitate readers’ ability to pronounce and spell new, unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings. Knowledge of phonology and morphology might support and increase reading comprehension. The study involved a careful selection and discussion of theoretical material and a comparison of the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers’ morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.

Keywords: morphology, orthography, reading ability, reading comprehension

Procedia PDF Downloads 117
369 Identification of a Panel of Epigenetic Biomarkers for Early Detection of Hepatocellular Carcinoma in Blood of Individuals with Liver Cirrhosis

Authors: Katarzyna Lubecka, Kirsty Flower, Megan Beetch, Lucinda Kurzava, Hannah Buvala, Samer Gawrieh, Suthat Liangpunsakul, Tracy Gonzalez, George McCabe, Naga Chalasani, James M. Flanagan, Barbara Stefanska

Abstract:

Hepatocellular carcinoma (HCC), the most prevalent type of primary liver cancer, is the second leading cause of cancer death worldwide. Late onset of clinical symptoms in HCC results in late diagnosis and poor disease outcome. Approximately 85% of individuals with HCC have underlying liver cirrhosis. However, not all cirrhotic patients develop cancer. Reliable early detection biomarkers that can distinguish cirrhotic patients who will develop cancer from those who will not are urgently needed and could increase the cure rate from 5% to 80%. We used the Illumina 450K microarray to test whether blood DNA, an easily accessible source of DNA, bears site-specific changes in DNA methylation in response to HCC before diagnosis with conventional tools (pre-diagnostic). The top 11 differentially methylated sites were selected for validation by pyrosequencing. The diagnostic potential of the 11 pyrosequenced probes was tested in blood samples from a prospective cohort of cirrhotic patients. We identified 971 differentially methylated CpG sites in pre-diagnostic HCC cases as compared with healthy controls (P < 0.05, paired Wilcoxon test, ICC ≥ 0.5). Nearly 76% of differentially methylated CpG sites showed lower levels of methylation in cases vs. controls (P = 2.973E-11, Wilcoxon test). Classification of the CpG sites according to their location relative to CpG islands and the transcription start site revealed that these hypomethylated loci are located in regulatory regions important for gene transcription, such as CpG island shores, promoters, and 5’UTRs, at a higher frequency than hypermethylated sites. Among 735 CpG sites hypomethylated in cases vs. controls, 482 sites were assigned to gene coding regions, whereas 236 hypermethylated sites corresponded to 160 genes. Bioinformatics analysis using the GO, KEGG, and DAVID knowledgebases indicates that differentially methylated CpG sites are located in genes associated with functions that are essential for gene transcription, cell adhesion, cell migration, and regulation of signal transduction pathways. Taking into account the magnitude of the difference, statistical significance, location, and consistency across the majority of matched case-control pairs, we selected 11 CpG loci corresponding to 10 genes for further validation by pyrosequencing. We established that methylation of CpG sites within 5 out of those 10 genes distinguishes cirrhotic patients who subsequently developed HCC from those who stayed cancer free (cirrhotic controls), demonstrating potential as biomarkers of early detection in populations at risk. The best predictive value was detected for CpGs located within BARD1 (AUC = 0.70, asymptotic significance < 0.01). Using an additive logistic regression model, we further showed that 9 CpG loci within those 5 genes, which were covered in the pyrosequenced probes, constitute a panel with high diagnostic accuracy (AUC = 0.887; 95% CI: 0.80-0.98). The panel was able to distinguish pre-diagnostic cases from cirrhotic controls free of cancer with 88% sensitivity at 70% specificity. Using blood as a minimally invasive material and pyrosequencing as a straightforward quantitative method, the established biomarker panel has high potential to be developed into a routine clinical test after validation in larger cohorts. This study was supported by the Showalter Trust, the American Cancer Society (IRG#14-190-56), and the Purdue Center for Cancer Research (P30 CA023168), granted to BS.
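
As an editorial illustration of the workflow described in this abstract (paired Wilcoxon screening of CpG sites followed by an additive logistic regression panel scored by AUC), the sketch below uses synthetic methylation values; the sample sizes, probe counts, and thresholds are placeholders, not the study's data.

```python
# Illustrative sketch (not the study's code): screen CpG sites with a paired
# Wilcoxon test, then score a multi-CpG panel with logistic regression and AUC.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs, n_cpgs = 60, 200                            # hypothetical matched pairs and CpG probes
cases = rng.beta(2, 5, size=(n_pairs, n_cpgs))       # methylation beta values, pre-diagnostic cases
controls = rng.beta(2, 5, size=(n_pairs, n_cpgs))    # matched healthy controls
cases[:, :10] = np.clip(cases[:, :10] - 0.15, 0, 1)  # simulate hypomethylation at a few loci

# Paired Wilcoxon test per CpG site (case vs. matched control)
p_values = np.array([wilcoxon(cases[:, j], controls[:, j]).pvalue for j in range(n_cpgs)])
panel = np.where(p_values < 0.05)[0][:9]             # keep a small panel, e.g. 9 loci

# Additive logistic regression over the panel; AUC summarizes diagnostic accuracy
X = np.vstack([cases[:, panel], controls[:, panel]])
y = np.r_[np.ones(n_pairs), np.zeros(n_pairs)]
model = LogisticRegression(max_iter=1000).fit(X, y)
print("panel AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```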

Keywords: biomarker, DNA methylation, early detection, hepatocellular carcinoma

Procedia PDF Downloads 291
368 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia

Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis

Abstract:

The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analyses of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use for a given setting. For example, human papillomavirus (HPV) DNA, HPV mRNA, and cytology tests can be used for the diagnosis of cervical intraepithelial neoplasia grade 2+ (CIN2+). But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set for the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles that describe a DTA-NMA method for three or more index tests. Since the joint classification of the results of one index test against the results of another index test, among those with the target condition and among those without it, is rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion. Relevant unpublished material was also included. Ten relevant studies were finally included and their methodology evaluated. The DTA-NMA methods that have been presented in the literature are described together with their advantages and disadvantages. In addition, using 37 studies for cervical cancer obtained from a published Cochrane review as a case study, an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+ is presented. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests” (MIS 5047640).
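
For readers unfamiliar with the 2x2-table inputs that the included DTA-NMA methods rely on, the short sketch below shows how per-test sensitivity and specificity fall out of such tables; the counts are hypothetical, and a full DTA-NMA would additionally pool these tables across studies with a hierarchical model.

```python
# Illustrative sketch (not from the review): per-test sensitivity and specificity
# from 2x2 tables of each index test against the reference standard, the only
# data structure the included DTA-NMA methods require.
tables = {                      # hypothetical counts: (TP, FP, FN, TN)
    "HPV DNA":  (90, 30, 10, 70),
    "HPV mRNA": (85, 20, 15, 80),
    "cytology": (70, 10, 30, 90),
}

for test, (tp, fp, fn, tn) in tables.items():
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{test:9s}  Se={sensitivity:.2f}  Sp={specificity:.2f}")
```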

Keywords: colposcopy, diagnostic test, HPV, network meta-analysis

Procedia PDF Downloads 128
367 Pathway Linking Early Use of Electronic Device and Psychosocial Wellbeing in Early Childhood

Authors: Rosa S. Wong, Keith T.S. Tung, Winnie W. Y. Tso, King-Wa Fu, Nirmala Rao, Patrick Ip

Abstract:

Electronic devices have become an essential part of our lives. Various reports have highlighted the alarming usage of electronic devices at early ages and its long-term developmental consequences. More sedentary screen time has been associated with increased adiposity, worse cognitive and motor development, and poorer psychosocial health. Apart from the problems caused by children’s own screen time, parents today often pay less attention to their children because of hand-held devices. Anecdotal reports suggest that distracted parenting has a negative impact on the parent-child relationship. This study examined whether distracted parenting detrimentally affected parent-child activities, which may, in turn, impair children’s psychosocial health. In 2018/19, we recruited a cohort of preschoolers from 32 local kindergartens in Tin Shui Wai and Sham Shui Po for a 5-year programme aiming to build stronger foundations for children from disadvantaged backgrounds through an integrated support model involving the medical, education, and social service sectors. A comprehensive set of questionnaires was used to survey parents on the frequency with which they were distracted while parenting and the frequency of learning and recreational activities with their children. Furthermore, they were asked to report children’s screen time and psychosocial problems. Mediation analyses were performed to test the direct and indirect effects of electronic device-distracted parenting on children’s psychosocial problems. This study recruited 873 children (448 females and 425 males, average age: 3.42±0.35). Longer screen time was associated with more psychosocial difficulties (Adjusted B=0.37, 95% CI: 0.12 to 0.62, p=0.004). Children’s screen time positively correlated with electronic device-distracted parenting (r=0.369, p < .01). We also found that electronic device-distracted parenting was associated with more hyperactivity/inattention problems (Adjusted B=0.66, p < 0.01), less prosocial behavior (Adjusted B=-0.74, p < 0.01), and more emotional symptoms (Adjusted B=0.61, p < 0.001) in children. Further analyses showed that electronic device-distracted parenting exerted its influence both directly and indirectly through parent-child interactions, but to a different extent depending on the outcome under investigation (38.8% for hyperactivity/inattention, 31.3% for prosocial behavior, and 15.6% for emotional symptoms). We found that parents’ use of devices and children’s own screen time both have negative effects on children’s psychosocial health. It is important for parents to set “device-free times” each day so as to ensure enough relaxed downtime for connecting with children and responding to their needs.
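
The mediation logic described here (direct and indirect effects of device-distracted parenting, with parent-child activities as the mediator) can be sketched with a simple product-of-coefficients approach; the data below are simulated and the effect sizes are arbitrary, so this illustrates the analysis structure rather than reproducing the study.

```python
# Illustrative sketch (synthetic data): product-of-coefficients mediation, where
# device-distracted parenting (X) affects child psychosocial difficulties (Y)
# partly through reduced parent-child activities (M).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 873                                             # sample size reported in the abstract
x = rng.normal(size=n)                              # distracted parenting (standardized)
m = -0.4 * x + rng.normal(size=n)                   # parent-child activities
y = 0.5 * x - 0.3 * m + rng.normal(size=n)          # psychosocial difficulties
df = pd.DataFrame({"x": x, "m": m, "y": y})

a = smf.ols("m ~ x", df).fit().params["x"]          # X -> M path
fit_y = smf.ols("y ~ x + m", df).fit()
b, c_prime = fit_y.params["m"], fit_y.params["x"]   # M -> Y and direct X -> Y paths

indirect = a * b
total = c_prime + indirect
print(f"indirect effect = {indirect:.3f}, proportion mediated = {indirect / total:.1%}")
```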

Keywords: early childhood, electronic device, psychosocial wellbeing, parenting

Procedia PDF Downloads 154
366 Vocal Advocacy: A Case Study at the First Black College Regarding Students Experiencing an Empowerment Workshop

Authors: Denise F. Brown, Melina McConatha

Abstract:

African Americans' use of vocal expression, particularly for self-expression, has historically been an avenue for advocating for social justice and human rights. Vocal expression can take many forms, such as singing, poetry, storytelling, and acting. Many well-known artists, politicians, leaders, and teachers used their voices to promote the causes and concerns of the African American community as well as to express their own experiences of being 'black' in America. The purpose of this project was to evaluate the perceptions of African American students regarding the use of their voices for self-awareness, interview skills, and social change after attending a three-part workshop on vocal advocacy. This research utilized the framework of black feminism to understand empowerment in advocacy and self-expression. Students participated in learning about the power of their voices and what purpose, presence, and passion they discovered through the Immersive Voice workshop. Three areas were covered in the workshop: the first was the power of the voice, the second was the application of vocal passion, and the third was applying vocal power to express personal interests, the interests of advocating for others, and confidence in speaking to others to further careers, i.e., using vocal power for job interviewing skills. The students were instructed to prepare for the workshops by completing a pre-workshop open-ended survey. A total of 15 students participated. After the workshop ended, the students were instructed to complete a post-workshop survey. The surveys were assessed by evaluating themes and codes from students' written feedback. In the pre-workshop survey, students provided feedback regarding the power of voice prior to participating in the workshops. From the students' responses, the theme 'advocating for self and others' emerged in relation to students' feedback on what it means to advocate. Three codes led to this theme: having knowledge about advocating for self and others, gaining knowledge to advocate for self and others, and using that knowledge to advocate for self and others. After the students completed the workshops, a post-workshop survey was given to them. Students' feedback was assessed, and the same theme, 'advocating for self and others,' emerged. The codes related to the theme, however, were different and included using vocal power (a term students learned during the workshop) to represent self, represent others, and obtain a job or career. In conclusion, the results of the survey showed that students still perceived advocating as speaking up for themselves and other people. After the workshop, students continued to associate advocacy with helping themselves and helping others but were able to be more specific about how the sound of their voice could help in advocating and how they could use their voice to represent themselves in getting a job or starting a career.

Keywords: advocacy, command, self-expression, voice

Procedia PDF Downloads 104
365 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum likelihood based approach that includes a secondary targeting step that optimizes the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable E on an outcome Y is defined in terms of a comparison between two potential outcomes, E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, and to provide a large-scale application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, we transform the non-binary variable of interest into a binary variable taking the value 1 to indicate the presence of that category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to define a pair of potential outcomes and to oppose a category (or group of categories) to another category (or group of categories). Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: we transform the initial variable to obtain a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector (variable) E_e takes the value 0 when its category is not present and the value 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. In order to illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some additional causal questions (and causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector (variable) E_e takes the value 0 when the first category (reference category) is present and the value 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in the outcome between the first level (fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party.
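
A minimal sketch of the two codings proposed above, assuming a hypothetical five-level education variable: complete disjunctive (one-vs-rest) dummies for comparing each category against all remaining categories, and reference-category dummies for comparing each level against the first. Each resulting binary column would then play the role of the treatment in a standard binary TMLE routine.

```python
# Illustrative sketch of the two dummy codings described above (not the authors' code).
import pandas as pd

levels = ["none", "primary", "secondary", "bachelor", "graduate"]      # hypothetical categories
education = pd.Series(pd.Categorical(levels * 4, categories=levels), name="E")

# Complete disjunctive coding: E_e = 1 when category e is present, 0 otherwise
one_vs_rest = pd.get_dummies(education, prefix="E")
print(one_vs_rest.head())

# Reference-category coding: the first level ("none") is the fixed reference
ref_coding = pd.get_dummies(education, prefix="E", drop_first=True)
print(ref_coding.head())

# Each dummy column is then passed as the binary "treatment" to a standard binary
# TMLE routine, giving pairwise contrasts such as E[Y_{E=e}] - E[Y_{E!=e}].
```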

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 89
364 Analysis Of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis

Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi

Abstract:

Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a number of deteriorating motor phenotypes during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods to study motor functions in rodents are often limited to evaluating gross motor functions only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtler motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic and is applied both in diagnosis and in determining the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three different chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks in the torso, tail, and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction with conventional motor assays was evident. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than what is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets, from the HD models, were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing a strong genotype difference. The kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective, and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters is created for each model, and these parameters provide a better understanding of disease progression and enhanced sensitivity of this assay for therapeutic testing compared with classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, alterations in gait were evident several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as in relation to joints and various body parts, longitudinally, providing a sophisticated and translatable method for dissecting motor components in rodent disease models and evaluating therapeutic interventions.
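
A brief sketch of the dimensionality-reduction step mentioned above, assuming a synthetic animals-by-parameters matrix: standardizing ~80 gait parameters and projecting them onto a few principal components on which genotype differences can then be examined.

```python
# Illustrative sketch (synthetic data): reduce a pool of ~80 correlated gait
# parameters to a few uncorrelated components with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_mice, n_params = 40, 80                       # hypothetical animals x kinematic parameters
gait = rng.normal(size=(n_mice, n_params))
gait[:20, :10] += 1.0                           # simulate a genotype effect on some parameters

scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(gait))
print("component scores shape:", scores.shape)  # genotype differences examined on these scores
```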

Keywords: gait analysis, kinematics, motor impairment, inherent feature

Procedia PDF Downloads 347
363 A Study of Emotional Intelligence and Adjustment of Senior Secondary School Students in District Karnal, Haryana, India

Authors: Rooma Rani

Abstract:

Education is really important for the improvement of the physical and mental well-being of school students. It is used to express inner potential, acquire knowledge, develop skills, and shape habits, attitudes, values, and beliefs, while providing people with the strength and resilience to adapt to changing situations and allowing them to develop the capacities that enable an individual to manage the surrounding environment. Education has a significant effect on the behavior of individuals, which helps us in the new situations of everyday life. Educating the child means directing the child’s capacities, attitudes, interests, urges, and needs into the most desirable channels. We are part of the 21st century, and nowadays emotional intelligence is considered more important than intelligence in a person's success. Success depends on several intelligences and on the control of emotions as well. Emotional intelligence, like general intelligence, is the product of one’s heredity and its interaction with environmental forces. Certain methods have evolved in modern research; keeping in view the nature and purpose of the study, the descriptive survey method was preferred. This method is one of the important methods in educational research because it describes the current state of the phenomenon under study. The term descriptive survey is generally used for the type of research that examines conditions or practices of the present time. In the present study, a systematic random sampling method was used to select a representative sample. Fifty students were selected from 2 schools; of these, 25 were boys and 25 were girls. In the study, a) a significant difference was found in the level of adjustment between male and female students; b) a non-significant difference was found in the level of emotional intelligence between male and female students; c) a non-significant relationship was found between adjustment and emotional intelligence among male students; d) a significant relationship was found between adjustment and emotional intelligence among female students. The results of the study indicated that students who obtain high scores on emotional intelligence tests also show a high level of adjustment. Measures should be adopted to improve and sustain the emotional intelligence level of students throughout their studies. Adolescent students are prone to many problems, physical, social, and psychological, and need a congenial home atmosphere so that they grow into full-fledged citizens of our country. Understanding these issues helps in the development of personality, which leads to a better learning situation and better thinking capacities and, in turn, enhances adjustment and achievement along with a better perception of self.

Keywords: adjustment, education, emotional intelligence, students

Procedia PDF Downloads 122
362 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually-oriented and students require the ability to interpret images as often as words. Through this transition, comics has found a place in the field of literacy education research as the shift focuses from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphics novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 289
361 Surface Functionalization Strategies for the Design of Thermoplastic Microfluidic Devices for New Analytical Diagnostics

Authors: Camille Perréard, Yoann Ladner, Fanny D'Orlyé, Stéphanie Descroix, Vélan Taniga, Anne Varenne, Cédric Guyon, Michael Tatoulian, Frédéric Kanoufi, Cyrine Slim, Sophie Griveau, Fethi Bedioui

Abstract:

The development of micro total analysis systems is of major interest for contaminant and biomarker analysis. As a lab-on-chip integrates all steps of an analysis procedure in a single device, analysis can be performed in an automated format with reduced time and cost, while maintaining performances comparable to those of conventional chromatographic systems. Moreover, these miniaturized systems are either compatible with field work or glovebox manipulations. This work is aimed at developing an analytical microsystem for trace and ultra trace quantitation in complex matrices. The strategy consists in the integration of a sample pretreatment step within the lab-on-chip by a confinement zone where selective ligands are immobilized for target extraction and preconcentration. Aptamers were chosen as selective ligands, because of their high affinity for all types of targets (from small ions to viruses and cells) and their ease of synthesis and functionalization. This integrated target extraction and concentration step will be followed in the microdevice by an electrokinetic separation step and an on-line detection. Polymers consisting of cyclic olefin copolymer (COC) or fluoropolymer (Dyneon THV) were selected as they are easy to mold, transparent in UV-visible and have high resistance towards solvents and extreme pH conditions. However, because of their low chemical reactivity, surface treatments are necessary. For the design of this miniaturized diagnostics, we aimed at modifying the microfluidic system at two scales : (1) on the entire surface of the microsystem to control the surface hydrophobicity (so as to avoid any sample wall adsorption) and the fluid flows during electrokinetic separation, or (2) locally so as to immobilize selective ligands (aptamers) on restricted areas for target extraction and preconcentration. We developed different novel strategies for the surface functionalization of COC and Dyneon, based on plasma, chemical and /or electrochemical approaches. In a first approach, a plasma-induced immobilization of brominated derivatives was performed on the entire surface. Further substitution of the bromine by an azide functional group led to covalent immobilization of ligands through “click” chemistry reaction between azides and terminal alkynes. COC and Dyneon materials were characterized at each step of the surface functionalization procedure by various complementary techniques to evaluate the quality and homogeneity of the functionalization (contact angle, XPS, ATR). With the objective of local (micrometric scale) aptamer immobilization, we developed an original electrochemical strategy on engraved Dyneon THV microchannel. Through local electrochemical carbonization followed by adsorption of azide-bearing diazonium moieties and covalent linkage of alkyne-bearing aptamers through click chemistry reaction, typical dimensions of immobilization zones reached the 50 µm range. Other functionalization strategies, such as sol-gel encapsulation of aptamers, are currently investigated and may also be suitable for the development of the analytical microdevice. The development of these functionalization strategies is the first crucial step in the design of the entire microdevice. These strategies allow the grafting of a large number of molecules for the development of new analytical tools in various domains like environment or healthcare.

Keywords: alkyne-azide click chemistry (CuAAC), electrochemical modification, microsystem, plasma bromination, surface functionalization, thermoplastic polymers

Procedia PDF Downloads 433
360 Smart Interior Design: A Revolution in Modern Living

Authors: Fatemeh Modirzare

Abstract:

Smart interior design represents a transformative approach to creating living spaces that integrate technology seamlessly into our daily lives, enhancing comfort, convenience, and sustainability. This paper explores the concept of smart interior design, its principles, benefits, challenges, and future prospects. It also highlights various examples and applications of smart interior design to illustrate its potential in shaping the way we live and interact with our surroundings. In an increasingly digitized world, the boundaries between technology and interior design are blurring. Smart interior design, also known as intelligent or connected interior design, involves the incorporation of advanced technologies and automation systems into residential and commercial spaces. This innovative approach aims to make living environments more efficient, comfortable, and adaptable while promoting sustainability and user well-being. Smart interior design seamlessly integrates technology into the aesthetics and functionality of a space, ensuring that devices and systems do not disrupt the overall design. Sustainable materials, energy-efficient systems, and eco-friendly practices are central to smart interior design, reducing environmental impact. Spaces are designed to be adaptable, allowing for reconfiguration to suit changing needs and preferences. Smart homes and spaces offer greater comfort through features like automated climate control, adjustable lighting, and customizable ambiance. Smart interior design can significantly reduce energy consumption through optimized heating, cooling, and lighting systems. Smart interior design integrates security systems, fire detection, and emergency response mechanisms for enhanced safety. Sustainable materials, energy-efficient appliances, and waste reduction practices contribute to a greener living environment. Implementing smart interior design can be expensive, particularly when retrofitting existing spaces with smart technologies. The increased connectivity raises concerns about data privacy and cybersecurity, requiring robust measures to protect user information. Rapid advancements in technology may lead to obsolescence, necessitating updates and replacements. Users must be familiar with smart systems to fully benefit from them, requiring education and ongoing support. Residential spaces incorporate features like voice-activated assistants, automated lighting, and energy management systems. Intelligent office design enhances productivity and employee well-being through smart lighting, climate control, and meeting room booking systems. Hospitals and healthcare facilities use smart interior design for patient monitoring, wayfinding, and energy conservation. Smart retail design includes interactive displays, personalized shopping experiences, and inventory management systems. The future of smart interior design holds exciting possibilities, including AI-powered design tools that create personalized spaces based on user preferences. Smart interior design will increasingly prioritize factors that improve physical and mental health, such as air quality monitoring and mood-enhancing lighting. Smart interior design is revolutionizing the way we interact with our living and working spaces. By embracing technology, sustainability, and user-centric design principles, smart interior design offers numerous benefits, from increased comfort and convenience to energy efficiency and sustainability. 
Despite challenges, the future holds tremendous potential for further innovation in this field, promising a more connected, efficient, and harmonious way of living and working.

Keywords: smart interior design, home automation, sustainable living spaces, technological integration, user-centric design

Procedia PDF Downloads 59
359 Just a Heads Up: Approach to Head Shape Abnormalities

Authors: Noreen Pulte

Abstract:

Prior to the 'Back to Sleep' Campaign in 1992, 1 of every 300 infants seen by Advanced Practice Providers had plagiocephaly. Insufficient attention is given to plagiocephaly and brachycephaly diagnoses in practice and pediatric education. In this talk, Nurse Practitioners and Pediatric Providers will be able to: (1) identify red flags associated with head shape abnormalities, (2) learn techniques they can teach parents to prevent head shape abnormalities, and (3) differentiate between plagiocephaly, brachycephaly, and craniosynostosis. The presenter is a Primary Care Pediatric Nurse Practitioner at Ann & Robert H. Lurie Children's Hospital of Chicago and the primary provider for its head shape abnormality clinics. She will help participants translate key information obtained from birth history, review of systems, and developmental history to understand risk factors for head shape abnormalities and progression of deformities. Synostotic and non-synostotic head shapes will be explained to help participants differentiate plagiocephaly and brachycephaly from synostotic head shapes. This knowledge is critical for the prompt referral of infants with craniosynostosis for surgical evaluation and correction. Rapid referral for craniosynostosis can possibly direct the patient to a minimally invasive surgical procedure versus a craniectomy. As for plagiocephaly and brachycephaly, this timely referral can also aid in a physical therapy referral if necessitated, which treats torticollis and aids in improving head shape. A well-timed referral to a head shape clinic can possibly eliminate the need for a helmet and/or minimize the time in a helmet. Practitioners will learn the importance of obtaining head measurements using calipers. The presenter will explain head calculations and how the calculations are interpreted to determine the severity of the head shape abnormalities. Severity defines the treatment plan. Participants will learn when to refer patients to a head shape abnormality clinic and techniques they should teach parents to perform while waiting for the referral appointment. The purpose, mechanics, and logistics of helmet therapy, including optimal time to initiate helmet therapy, recommended helmet wear-time, and tips for helmet therapy compliance, will be described. Case scenarios will be incorporated into the presenter's presentation to support learning. The salient points of the case studies will be explained and discussed. Practitioners will be able to immediately translate the knowledge and skills gained in this presentation into their clinical practice.

Keywords: plagiocephaly, brachycephaly, craniosynostosis, red flags

Procedia PDF Downloads 87
358 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, large gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world that helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is to introduce our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format as a function of time, including half-time. Local Binary Patterns (LBP) are then employed to extract features from the images. Our analyses reveal highly interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then the match will end in their favor in a stable match. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors’ and punters’ behavior and its statistical data, to issue the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
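
As a rough illustration of the feature pipeline described above, the sketch below computes uniform LBP histograms from image-format match data and feeds them to a gradient boosting classifier; the images and labels are random placeholders, not Analyst Masters data.

```python
# Illustrative sketch (synthetic data): LBP histograms extracted from time-vs-odds
# "images" feed a gradient boosting classifier that flags stable matches.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_histogram(img, p=8, r=1.0):
    """Histogram of uniform LBP codes for one match image."""
    codes = local_binary_pattern(img, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

rng = np.random.default_rng(3)
images = rng.random((200, 64, 64))               # hypothetical odds/stats heatmaps per match
labels = rng.integers(0, 2, 200)                 # 1 = match judged "stable"

X = np.array([lbp_histogram(img) for img in images])
clf = GradientBoostingClassifier().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```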

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 228
357 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf life and outlife limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are metal due to high CFRP fabrication costs for the small-size class. The fact that CFRP manufacturing processes that produce the highest performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally higher in cost than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution—Industry 4.0— one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to just prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive—high complexity, low energy usage and waste, 100% traceability, faster to market—and post-consolidation—tight tolerances, high R&R, established materials, and supply chains—technologies. The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of a design software offering the possibility to determine optimal fibre layup, then exports files back to check predicted performance—plus two pieces of equipment: a 3d-printer—which lays up (near)-net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes—and a post-consolidation module—which consolidates then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices—currently including PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used—are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very-low voids and excellent surface finish on A and B sides can be produced. Tight tolerances (min. section thickness=1.5mm, min. section height=0.6mm, min. fibre radius=1.5mm) with high R&R can be cost-competitively held in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 86
356 Understanding Systemic Barriers (and Opportunities) to Increasing Uptake of Subcutaneous Medroxy Progesterone Acetate Self-Injection in Health Facilities in Nigeria

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu

Abstract:

Background: The DISC project collaborated with partners to implement demand creation and service delivery interventions, including the MoT (Moment of Truth) innovation, in over 500 health facilities across 15 states. This has increased the voluntary conversion rate to self-injection among women who opt for injectable contraception. While some facilities recorded an increasing trend in key performance indicators, few others persistently performed sub-optimally due to provider and system-related barriers. Methodology: Twenty-two facilities performing sub-optimally were selected purposively from three Nigerian states. Low productivity was appraised using low reporting rates and poor SI conversion rates as indicators. Interviews were conducted with health providers across these health facilities using a rapid diagnosis tool. The project also conducted a data quality assessment that evaluated the veracity of data elements reported across the three major sources of family planning data in the facility. Findings: The inability and sometimes refusal of providers to support clients to self-inject effectively was associated with the misunderstanding of its value to their work experience. It was also observed that providers still held a strong influence over clients’ method choices. Furthermore, providers held biases and misconceptions about DMPA-SC that restricted the access of obese clients and new acceptors to services – a clear departure from the recommendations of the national guidelines. Additionally, quality of care standards was compromised because job aids were not used to inform service delivery. Facilities performing sub-optimally often under-reported DMPA-SC utilization data, and there were multiple uncoordinated responsibilities for recording and reporting. Additionally, data validation meetings were not regularly convened, and these meetings were ineffective in authenticating data received from health facilities. Other reasons for sub-optimal performance included poor documentation and tracking of stock inventory resulting in commodity stockouts, low client flow because of poor positioning of health facilities, and ineffective messaging. Some facilities lacked adequate human and material resources to provide services effectively and received very few supportive supervision visits. Supportive supervision visits and Data Quality Audits have been useful to address the aforementioned performance barriers. The project has deployed digital DMPA-SC self-injection checklists that have been aligned with nationally approved templates. During visits, each provider and community mobilizer is accorded special attention by the supervisor until he/she can perform procedures in line with best practice (protocol). Conclusion: This narrative provides a summary of a range of factors that identify health facilities performing sub-optimally in their provision of DMPA-SC services. Findings from this assessment will be useful during project design to inform effective strategies. As the project enters its final stages of implementation, it is transitioning high-impact activities to state institutions in the quest to sustain the quality of service beyond the tenure of the project. The project has flagged activities, as well as created protocols and tools aimed at placing state-level stakeholders at the forefront of improving productivity in health facilities.

Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, barriers, opportunities, performance

Procedia PDF Downloads 71
355 Evaluation of the Quality of Education Offered to Students with Special Needs in Public Schools in the City of Bauru, Brazil

Authors: V. L. M. F. Capellini, A. P. P. M. Maturana, N. C. M. Brondino, M. B. C. L. B. M. Peixoto, A. J. Broughton

Abstract:

A paradigm shift is a process. The process of implementing inclusive education, a system constructed to support all learners, requires planning, identification, experimentation, and evaluation. In this vein, the purpose of the present study was to evaluate the capacity of one Brazilian state school system to provide special education students with a quality inclusive education. This study originated at the behest of concerned families of students with special needs who filed complaints with the Municipality of Bauru, São Paulo. These families claimed that 1) children with learning differences and educational needs had not been identified for services, and 2) those who had been identified had not received sufficient specialized educational assistance (SEA) in schools across the City of Bauru. Hence, the Office of Civil Rights for the state of São Paulo (Ministério Público de São Paulo) summoned the local higher education institution, UNESP, to design a research study to investigate these allegations. In this exploratory study, descriptive data were gathered from all elementary and middle schools, including 58 state schools and 17 city schools, for a total of 75 schools overall. Data collection consisted of each school's annual strategic action plan, surveys, and interviews with all school stakeholders to determine their perceptions of the inclusive education available to students with Special Education Needs (SEN). The data were collected as one of four stages in a larger study, which also included field observations of a focal student's experience and a continuing education course for all teachers and administrators in both state and city schools. For the purposes of this study, the researchers were interested in understanding the perceptions of school staff, parents, and students across all schools. Therefore, documents and surveys from 75 schools were analyzed for adherence to federal legislation guaranteeing students with SEN the right to special education assistance within the regular school setting. Results show that while some schools recognized the legal rights of SEN students to receive special education, plans to actually deliver services were absent. In conclusion, the results of this study revealed that both school staff and families have insufficient planning and accessibility resources, and the schools have inadequate infrastructure for full-time support of SEN students, i.e., structures and systems to support the identification of SEN and the delivery of services within the schools of Bauru, SP. Having identified the areas of need, the city is now prepared to take the next steps in the process of preparing all schools to be inclusive.

Keywords: inclusion, school, special education, special needs

Procedia PDF Downloads 152
354 Evaluating Social Sustainability in Historical City Center in Turkey: Case Study of Bursa

Authors: Şeyda Akçalı

Abstract:

This study explores the concept of social sustainability and its characteristics in terms of the neighborhood (mahalle), which is a social phenomenon in Turkish urban life. As social sustainability indicators move away from traditional themes toward multi-dimensional measures, solutions for urban strategies may be found by learning lessons from historical precedents. The study considers how the inherent values of traditional urban forms contribute to the evolution of the city as well as to its social functions, and it aims to measure non-tangible issues in order to evaluate social sustainability in historic urban environments and how they could contribute to current urban planning strategies. The concept of the neighborhood (mahalle) refers to a way of living that represents the organization of Turkish social and communal life rather than defining an administrative unit of the city. The distinctive physical and social features of the neighborhood illustrate the link between social sustainability and the historic urban environment. Instead of taking a nostalgic view of the past, the study identifies both the failures and the successes of traditional urban environments, extracts lessons from them, and adapts them to the modern context. First, the study determines the aspects of social sustainability that are issued as key themes in the literature. Then, it develops a model describing the social features of the mahalle that are consistent with the social sustainability agenda. The model is used to analyze the performance of a traditional housing area in the historical city center of Bursa, Turkey, asking whether it meets residents’ social needs and contributes to the collective functioning of the community. Through a questionnaire survey administered in the historic neighborhoods, residents were evaluated according to the social sustainability criteria of the neighborhood. The results derived from the factor analysis indicate that the social aspects of the neighborhood are social infrastructure, identity, attachment, neighborliness, safety, and wellbeing. Qualitative evaluation shows the relationship between key aspects of social sustainability and demographic and socio-economic factors. The outcomes support the view that the inherent values of the neighborhood retain their importance for the sustainability of the community, although some local arrangements are needed for a few factors, with great attention paid to not compromising the others. The concept of the neighborhood should be considered a potential tool to support social sustainability in the national political agenda and in urban policies. The performance of the underlying factors in the historic urban environment provides a basis for both examining and improving traditional urban areas and for how they may contribute to the overall city.
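
The factor-analytic step reported here can be sketched as follows, assuming synthetic Likert-type questionnaire responses; the item count and six-factor solution are placeholders chosen only to mirror the six reported aspects.

```python
# Illustrative sketch (synthetic data): exploratory factor analysis of neighborhood
# survey items, looking for factors such as social infrastructure, identity,
# attachment, neighborliness, safety, and wellbeing.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_residents, n_items = 300, 24                   # hypothetical survey size and item count
responses = rng.integers(1, 6, size=(n_residents, n_items)).astype(float)

fa = FactorAnalysis(n_components=6, rotation="varimax")
fa.fit(StandardScaler().fit_transform(responses))
print("loadings shape:", fa.components_.shape)   # 6 factors x 24 items
```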

Keywords: historical city center, mahalle, neighborhood, social sustainability, traditional urban environment, Turkey

Procedia PDF Downloads 278
353 Teaching Ethnic Relations in Social Work Education: A Study of Teachers' Strategies and Experiences in Sweden

Authors: Helene Jacobson Pettersson, Linda Lill

Abstract:

Demographic changes and globalization provide new opportunities for social work and social work education in Sweden. There has been an ambition to include these aspects in Swedish social work education. However, the Swedish welfare state standard has remained a starting point, as cherished as it is invisible, in discussions about people’s ways of life and social problems. The aim of this study is to explore the content given to ethnic relations in social work within social work education in Sweden. Our standpoint is that the subject can be understood at both individual and structural levels, that it changes over time, varies in different steering documents, and differs between the perspectives of teachers and students. Our question is what content teachers give to ethnic relations in social work in their strategies and teaching material. The study brings together research at the interface between education science, social work, and research on international migration and ethnic relations. The narratives presented are drawn from longer interviews with a total of 17 university teachers who teach in social work programmes at four different universities in Sweden. The universities have, in different ways, curricula that involve the theme of ethnic relations in social work, and the interviewed teachers teach and grade social work students on specific courses related to ethnic relations at undergraduate and graduate levels. Overall, these 17 teachers assess a large number of students during a semester. The questions concerned how the teachers handle ethnic relations in social work education. The particular focus during the interviews was the teachers' understanding of the documented learning objectives and the content of the literature, and the implications these have for their teaching. What emerges is the teachers' own stories about their educational work, how they relate to the content of teaching, and the teaching strategies they use to promote the theme of ethnic relations in social work education. Our analysis of this kind of pedagogy is that the teaching ends up at an individual level, with a particular focus on the professional encounter with individuals, and that there is a shortage of critical analysis of the construction of social problems. The conclusion is that individual circumstances precede theoretical perspectives on social problems related to migration, transnational relations, and globalization. This result has problematic implications from the perspective of sustainability in terms of ethnic diversity and integration in society, since these aspects are highly relevant to social workers’ professional action in social support and empowerment-related activities, and to supporting the social status, human rights, and equality of immigrants.

Keywords: ethnic relations in Swedish social work education, teaching content, teaching strategies, educating for change, human rights and equality

Procedia PDF Downloads 238
352 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery, and exploiting the inner thermal core structure of tropical cyclones still poses challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information along the pressure direction is summarized through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage in place of traditional single-value regression. The selected tropical cyclones span 2012 to 2021 in the North Atlantic (NA) region; the training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes using such data.
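
As a sketch of the coarse-grained first stage, the snippet below shows how a maximal intensity projection over the pressure dimension might be computed from gridded brightness temperatures; the array shape, grid size, and values are assumptions for illustration, not the authors' ATMS pipeline.

    import numpy as np

    def max_intensity_projection(brightness_temp: np.ndarray, axis: int = 0) -> np.ndarray:
        """Collapse the vertical (pressure) dimension by keeping, for each grid
        column, the maximum brightness temperature across all pressure levels."""
        return brightness_temp.max(axis=axis)

    # Hypothetical cube of brightness temperatures: 30 pressure levels on a 128 x 128 grid.
    cube = np.random.default_rng(1).normal(loc=250.0, scale=15.0, size=(30, 128, 128))
    mip_image = max_intensity_projection(cube)   # shape (128, 128), input to stage one

The resulting two-dimensional image would then feed the first-stage network, whose classification objective could use a focal loss to handle the long-tail category distribution described in the abstract.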

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 45
351 Endotracheal Intubation Self-Confidence: Report of a Realistic Simulation Training

Authors: Cleto J. Sauer Jr., Rita C. Sauer, Chaider G. Andrade, Doris F. Rabelo

Abstract:

Introduction: Endotracheal Intubation (ETI) is a procedure for the clinical management of patients with severe presentations of COVID-19. Realistic simulation (RS) is an active learning methodology used to improve clinical skills. To improve the ETI skills of public health network physicians from the Recôncavo da Bahia region in Brazil during the COVID-19 outbreak, RS training was planned and carried out. The training scenario included the Nasco Lifeform realistic simulator, and three actions were simulated: the ETI procedure, sedative drug management, and bougie guide utilization. The training intervention occurred between May and June 2020 as an interinstitutional cooperation between the Health Department of Bahia State and the Federal University of Recôncavo da Bahia. Objective: The main objective is to report the effects of RS-based training on participants' self-confidence in performing the ETI procedure. Methods: This is a descriptive study, with secondary data extracted from questionnaires applied throughout the RS training. Priority workplace, time since last intubation, and knowledge of the bougie were reported on a pre-participation questionnaire. Additionally, participants completed pre- and post-training qualitative self-assessments (10-point Likert scale) of their self-confidence in performing each of the simulated actions. Distribution analysis for qualitative data was performed with the Wilcoxon Signed-Rank Test, and the analysis of self-confidence increases in frequency contingency tables with Fisher's Exact Test. Results: 36 physicians participated in the training, 25 (69%) from the primary care setting; 25 (69%) had performed ETI more than a year earlier, and only 4 (11%) had previous knowledge of bougie guide utilization. There was an increase in the self-confidence medians for all three simulated actions. Medians (range) for self-confidence before and after training for each simulated action were as follows: ETI [5 (1-9) vs. 8 (6-10), p < 0.0001]; sedative drug management [5 (1-9) vs. 8 (4-10), p < 0.0001]; bougie guide utilization [2.5 (1-7) vs. 8 (4-10), p < 0.0001]. Among those who had performed ETI more than a year earlier (n = 25), an increase in self-confidence greater than 3 points was reported by 23 vs. 2 physicians (p = 0.0002) for ETI, and by 21 vs. 4 (p = 0.03) for sedative drug management. Conclusions: RS training contributed to an increase in self-confidence in performing ETI. Among participants who had performed ETI more than a year earlier, there was a significant association between RS training and an increase of more than 3 points in self-confidence, both for ETI and for sedative drug management. Training with RS methodology is suitable for enhancing ETI confidence during the COVID-19 outbreak.
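
A minimal sketch of how the reported tests could be run with SciPy is shown below: a Wilcoxon signed-rank test on paired pre/post Likert scores and Fisher's exact test on a 2x2 contingency table. All numbers are placeholders, not the study's data.

    from scipy.stats import wilcoxon, fisher_exact

    # Hypothetical paired self-confidence scores (10-point Likert) before/after training.
    pre  = [5, 4, 6, 3, 5, 2, 7, 4, 5, 6, 3, 4]
    post = [8, 7, 9, 6, 8, 7, 9, 8, 8, 9, 7, 8]
    stat, p_value = wilcoxon(pre, post)
    print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_value:.4f}")

    # Hypothetical 2x2 table: rows = increase of more than 3 points (yes/no),
    # columns = performed ETI more than a year earlier (yes/no).
    table = [[23, 5],
             [2, 6]]
    odds_ratio, p_fisher = fisher_exact(table)
    print(f"Fisher's exact test: OR={odds_ratio:.2f}, p={p_fisher:.4f}")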

Keywords: confidence, COVID-19, endotracheal intubation, realistic simulation

Procedia PDF Downloads 127
350 First Year Experience of International Students in Malaysian Universities

Authors: Nur Hidayah Iwani Mohd Kamal

Abstract:

Higher education institutions in Malaysia are challenged with a more socially and culturally diverse student population than ever before, especially with the increasing number of international students studying in Malaysia in recent years. The first year of university is a critical time in students' lives. Students are not only developing intellectually; they are also establishing and maintaining personal relationships, developing an identity, making decisions about career and lifestyle, maintaining personal health and wellness, and developing an integrated philosophy of life. Higher education institutions work as diverse communities of learners to provide a supportive environment for these first-year students, assisting them in their transition from high school to university. Although many universities are taking steps to improve the first year experience for their new local and international students, efforts must be made to ensure that these initiatives are organized and coordinated in order to be successful. The objectives of the study are to examine international students' perceptions and interpretations of their first year experiences in shaping and determining their attitudes toward study and the quality of their entire undergraduate academic career, and to identify an appropriate mechanism for addressing international students' adjustment to the new environment in order to facilitate cross-cultural communication and create a coherent and meaningful first year experience. A key construct in this study is that if universities wish to recruit and retain international students, it is their ethical responsibility to determine how they can best meet these students' needs at the academic and social levels, create a supportive 'learning community' as a foundation of their educational experience, and thus facilitate cross-cultural communication and create a coherent and meaningful first year experience. The study is framed around the factors that influence a successful and satisfying transition to university life for first-year international students. It employs mixed-method data collection involving semi-structured interviews, a questionnaire, classroom observation, and document analysis. The study provides valuable insight into the struggles that many international students face as they adjust not only to a new educational system but also to psychosocial and cultural challenges. It discusses some of the factors that affect international students during their first year at university in their quest to be academically successful. It concludes with recommendations on how Malaysian universities can provide these students with a good first year experience, based on some of the best practices of universities around the world.

Keywords: first year experience, Malaysian universities, international students, education

Procedia PDF Downloads 276
349 Intercultural Competence among Jewish and Arab Students Studying Together in an Academic Institution in Israel

Authors: Orly Redlich

Abstract:

Since the establishment of the State of Israel, and as a result of the various events that led to it, Jewish and Arab citizens of the state have been in constant conflict, which finds its expression at most levels of life. Therefore, the attitude of members of one group toward members of the other is mostly tense, loaded, and saturated with mutual suspicion. Within this reality, in many higher education institutions in Israel, Jews and Arabs meet each other intensively and over several years. For some students, this is their first opportunity for a meaningful cross-cultural encounter. These intercultural encounters, which allow positive interactions between members of different cultural groups, may contribute to the formation of "intercultural competence", meaning long-term change in knowledge, attitudes, and behavior towards 'the other culture'. The current study examined the concept of the 'other' among Jewish and Arab students studying together and their "intercultural competence". The study also examined whether there is a difference in the perception of the 'other' between students in different academic programs and between students taking academic courses on multiculturalism. This quantitative study was conducted among 274 Arab and Jewish students studying together, for bachelor's or master's degrees, in various academic programs at the Israel Academic College of Ramat-Gan. The background data of the participants are varied in terms of religion, origin, religiosity, employment status, living area, and marital status. The main hypothesis is that academic, social, and intercultural encounters between Jewish and Arab students who attend college together will be a significant factor in building "intercultural competence". Additionally, "intercultural competence" has been linked to demographic characteristics of the students, as well as to the nature of intercultural encounters between Jews and Arabs in a higher education institution. The dependent variables were measured by a self-report questionnaire, using the components of "intercultural competence" among students, which are: 1. cognitive knowledge of the 'others', 2. feelings towards the 'others', 3. change in attitudes towards the 'others', and 4. change in behavior towards the 'others'. The findings indicate higher "intercultural competence" among Arab students than among Jewish students; a higher level of "intercultural competence" was also found among Educational Counseling students than among the other respondents. The importance of this research lies in finding the means to develop "intercultural competence" among Jewish and Arab students, which may reduce prejudice and stereotypes towards the other culture and may even prevent occurrences of alienation and violence in cross-cultural encounters in Israel.

Keywords: cross-cultural learning, intercultural competence, Jewish and Arab students, multiculturalism

Procedia PDF Downloads 106
348 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to their class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the support, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and the memory requirement is only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates the identification of significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
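
A minimal sketch of the reuse idea, assuming scikit-learn, appears below: a new batch of points is seeded at the 2-D positions of its nearest neighbors in a previously embedded support set before running t-SNE. This only mimics the initialization aspect; the paper's method adds an explicit cost term matching the support embedding, which is not reproduced here.

    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.neighbors import NearestNeighbors

    def embed_with_support(x_new, x_support, y_support, perplexity=30, random_state=0):
        """Embed x_new in 2-D, seeding each point at the position of its nearest
        neighbor in the already-embedded support set so cluster locations persist."""
        nn = NearestNeighbors(n_neighbors=1).fit(x_support)
        _, idx = nn.kneighbors(x_new)
        jitter = np.random.default_rng(random_state).normal(scale=1e-4, size=(len(x_new), 2))
        init = y_support[idx[:, 0]] + jitter          # tiny jitter avoids identical positions
        tsne = TSNE(n_components=2, perplexity=perplexity, init=init, random_state=random_state)
        return tsne.fit_transform(x_new)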

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 135
347 Students' Experience Enhancement through Simulation: A Process Flow in the Logistics and Transportation Field

Authors: Nizamuddin Zainuddin, Adam Mohd Saifudin, Ahmad Yusni Bahaudin, Mohd Hanizan Zalazilah, Roslan Jamaluddin

Abstract:

Students’ enhanced experience through simulation is a crucial factor that brings reality to the classroom. The enhanced experience is about developing, enriching, and applying a generic process flow in the field of logistics and transportation. As educational technology has improved, the effective use of simulations has greatly increased, to the point where simulations should be considered a valuable, mainstream pedagogical tool. Additionally, in this era of ongoing (some say never-ending) assessment, simulations offer a rich resource for objective measurement and comparison. Simulation is not just another in the long line of passing fads (or short-term opportunities) in educational technology; rather, it is a real key to helping our students understand the world. It is a way for students to acquire experience of how things and systems in the world behave and react, without actually touching them. In short, it is about interactive pretending. Simulation is about representing the real world, which includes grasping complex issues and solving intricate problems. Therefore, before simulating the real process of inbound and outbound logistics and transportation, it is crucial that a generic process flow be developed. The paper focuses on the validation of the process flow by looking at the inputs gained from the sample. The sampling of the study focuses on multinational and local manufacturing companies, third-party logistics companies (3PL), and a government agency, selected in Peninsular Malaysia. A simulation flow chart was proposed in the study to serve as the generic flow in logistics and transportation. A mainly qualitative approach was used to gather data. The study found that the systems used in the outbound and inbound processes are System Application Products (SAP) and Material Requirement Planning (MRP). Furthermore, some companies used Enterprise Resource Planning (ERP) and Electronic Data Interchange (EDI) as part of the Suppliers Own Inventories (SOI) network, as a result of globalized business between countries. Computerized documentation and transactions were mandatory requirements of the Royal Customs and Excise Department. The generic process flow will be the basis for developing a simulation program to be used in the classroom with the objective of further enhancing students' learning experience. Thus, it will contribute to the body of knowledge on enriching students' employability and also serve as one way to train new workers in the logistics and transportation field.

Keywords: enhancement, simulation, process flow, logistics, transportation

Procedia PDF Downloads 323
346 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
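
For orientation, the sketch below implements the generic polynomial-code construction for straggler-tolerant distributed matrix multiplication, where any m*n worker results suffice to recover W = XY; it is meant only as background for coded computation and does not include the privacy (PIR) layer or the PSGPD/SGPD codes proposed in the paper.

    import numpy as np

    def encode(X, Y, m, n, eval_points):
        """Split X into m row blocks and Y into n column blocks, and return one
        encoded (X_tilde, Y_tilde) pair per worker evaluation point."""
        Xb = np.split(X, m, axis=0)
        Yb = np.split(Y, n, axis=1)
        return [(sum(Xb[i] * x**i for i in range(m)),
                 sum(Yb[j] * x**(j * m) for j in range(n))) for x in eval_points]

    def decode(results, points, m, n, block_shape):
        """Recover W from any m*n worker products by interpolating, entrywise, the
        degree-(m*n-1) polynomial whose coefficients are the blocks X_i Y_j."""
        k = m * n
        V = np.vander(np.array(points[:k]), N=k, increasing=True)   # V[w, p] = x_w**p
        stacked = np.stack([r.ravel() for r in results[:k]])        # one row per worker
        coeffs = np.linalg.solve(V, stacked)                        # one block per power
        blocks = [[coeffs[i + j * m].reshape(block_shape) for j in range(n)]
                  for i in range(m)]
        return np.block(blocks)

    m, n = 2, 2
    X = np.random.default_rng(2).normal(size=(8, 6))
    Y = np.random.default_rng(3).normal(size=(6, 10))
    points = [1.0, 2.0, 3.0, 4.0, 5.0]              # 5 workers; recovery threshold is 4
    jobs = encode(X, Y, m, n, points)
    worker_products = [Xt @ Yt for Xt, Yt in jobs]  # each worker multiplies its coded shares
    W_hat = decode(worker_products, points, m, n, (X.shape[0] // m, Y.shape[1] // n))
    assert np.allclose(W_hat, X @ Y)                # recovered while ignoring one straggler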

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 109
345 Stabilizing Additively Manufactured Superalloys at High Temperatures

Authors: Keivan Davami, Michael Munther, Lloyd Hackel

Abstract:

The control of properties and material behavior by thermal-mechanical processing is based on mechanical deformation and annealing according to a precise schedule that produces a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures; however, the mechanism(s) controlling this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns duration) post-processing technique used to extend performance levels and improve the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhance the material's resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax, and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climb and recombination, which proceed rapidly at high temperatures. Furthermore, precipitates coarsen and grains grow, so virtually all of the available microstructural barriers become ineffective. Our results indicate that by using 'cyclic' treatments with sequential LP and annealing steps, the compressive stresses survive, and the microstructure remains stable after exposure to temperatures exceeding 0.5Tm for a long period of time. When the laser peening process is combined with annealing, the dislocations formed as a result of LP and the precipitates formed during annealing interact in a complex way that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physical, materials science, and mechanical engineering concepts and could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications; however, a mechanistic understanding of the often complex interactions between dislocations, solute atoms, and precipitates during plastic deformation has largely remained scattered in the literature. In this research, the actual mechanisms involved in the novel cyclic LP/annealing process are elucidated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help validate a novel laser processing technique for high-temperature applications and will greatly expand the applications of laser peening technology, originally devised only for temperatures lower than half of the melting temperature.

Keywords: laser shock peening, mechanical properties, indentation, high temperature stability

Procedia PDF Downloads 138
344 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being viewed, form the basis of the practical demonstration of the importance of aesthetic perception within theater performance. The design of a theater performance includes several structures: some are conceived from the beginning as art forms (e.g., the text), while others are represented by simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which the audience interacts. It is necessary to explain some notions about the physiological basis of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center for external and internal stimuli, permanently processes the information received; even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the 'correctness' of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment, and implicitly aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal, and context control. In this context, higher human cognitive processes, such as memory, interpretation, learning, and the attribution of meanings, help trigger the mechanism of anticipation and, no less importantly, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of each category of signs used in a theater performance.

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 76
343 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results; the challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. PFRSVM effectively resolves the effects of outliers, imbalance, and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
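
A minimal single-node sketch of the fuzzy-weighting idea, assuming scikit-learn, is given below: sample weights are derived from each point's distance to its class center relative to the class radius, and an SVM with the hyperbolic tangent (sigmoid) kernel is trained with those weights. The parallel Hadoop/MapReduce layer and the rough-set component of PFRSVM are not reproduced here, and all data and parameter values are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    def fuzzy_memberships(X, y, delta=1e-6):
        """Classic fuzzy-SVM membership: points far from their class center
        (likely noise or outliers) receive smaller training weights."""
        weights = np.empty(len(y))
        for cls in np.unique(y):
            mask = y == cls
            center = X[mask].mean(axis=0)
            dist = np.linalg.norm(X[mask] - center, axis=1)
            weights[mask] = 1.0 - dist / (dist.max() + delta)
        return weights

    # Two hypothetical Gaussian classes in five dimensions.
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(2.0, 1.0, (100, 5))])
    y = np.array([0] * 100 + [1] * 100)

    clf = SVC(kernel="sigmoid", C=10.0, gamma="scale")          # hyperbolic tangent kernel
    clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
    print("support vectors per class:", clf.n_support_, "training accuracy:", clf.score(X, y))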

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 482
342 Indicators of Sustainable Intensification: Views from British Stakeholders

Authors: N. Mahon, I. Crute, M. Di Bonito, E. Simmons, M. M. Islam

Abstract:

Growing interest in the concept of the sustainable intensification (SI) of agriculture has been shown by national governments, transnational agribusinesses, intergovernmental organizations, and research institutes, amongst others. This interest may be because SI is seen as a 'third way' for agricultural development, between the seemingly disparate paradigms of 'intensive' agriculture and more 'sustainable' forms of agriculture. However, there is a lack of consensus as to what SI means in practice and how it should be measured using indicators of change. This has led to growing confusion, disagreement, and skepticism regarding the concept, especially amongst civil society organizations, both in the UK and in other countries, and has prompted the need for bottom-up, participatory approaches to identify indicators of SI. Our aim is to identify the views of British stakeholders regarding the areas of agreement and disagreement as to what SI is and how it should be measured in the UK using indicators of change. Data for this investigation came from 32 semi-structured interviews, conducted between 2015 and 2016, with stakeholders from throughout the UK food system. In total, 110 indicators of SI were identified, covering a wide variety of subjects including biophysical, social, and political considerations. A number of indicators appeared to be widely applicable and were similar to those suggested in the global literature. These include indicators related to the management of the natural resources on which agriculture relies, e.g., 'Soil organic matter', 'Number of pollinators per hectare', and 'Depth of water table', as well as those related to agricultural externalities, e.g., 'Greenhouse gas emissions' and 'Concentrations of agro-chemicals in waterways'. However, many of the indicators were much more specific to the UK context, including 'Areas of high nature value farmland', 'Length of hedgerows per hectare', and 'Age of farmers'. Furthermore, tensions could be seen when participants considered the relative importance of agricultural mechanization versus levels of agricultural employment, the pros and cons of intensive, housed livestock systems, and the value of wild biodiversity versus the desire to increase agricultural yields. These areas of disagreement suggest the need to carefully consider the trade-offs inherent in the concept. Our findings indicate that in order to begin to resolve the confusion surrounding SI, it needs to be considered in a context-specific manner, rather than as a single uniform concept. Furthermore, both the environmental and the social parameters within which agriculture operates need to be considered in order to operationalize SI in a meaningful way. We suggest that participatory approaches are key to this process, facilitating dialogue and collaborative learning between all the stakeholders and allowing them to reach a shared vision for the future of agricultural development.

Keywords: agriculture, indicators, participatory approach, sustainable intensification

Procedia PDF Downloads 215
341 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students do not reach this milestone in South Africa, and it is important to find out why so many students still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa, in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality that has been one of the three poorest municipalities in South Africa and has the lowest Grade 12 pass rate in Mpumalanga province. Moreover, over half of the students' families are headed by single parents, 43% are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (over 19 years old), and close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills. Overall, the continuously low performance of students in this school mirrors a vicious circle of multiple negative conditions present within and outside of the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions for the benefit of other students.

Keywords: inner context, outer context, over-aged students, vicious cycle

Procedia PDF Downloads 195