Search results for: simulations study
1103 Effect of Timing and Contributing Factors for Early Language Intervention in Toddlers with Repaired Cleft Lip and Palate
Authors: Pushpavathi M., Kavya V., Akshatha V.
Abstract:
Introduction: Cleft lip and palate (CLP) is a congenital condition which hinders effectual communication due to associated speech and language difficulties. Expressive language delay (ELD) is a feature seen in this population which is influenced by factors such as type and severity of CLP, age at surgical and linguistic intervention and also the type and intensity of speech and language therapy (SLT). Since CLP is the most common congenital abnormality seen in Indian children, early intervention is a necessity which plays a critical role in enhancing their speech and language skills. The interaction between the timing of intervention and factors which contribute to effective intervention by caregivers is an area which needs to be explored. Objectives: The present study attempts to determine the effect of timing of intervention on the contributing maternal factors for effective linguistic intervention in toddlers with repaired CLP with respect to the awareness, home training patterns, speech and non-speech behaviors of the mothers. Participants: Thirty six toddlers in the age range of 1 to 4 years diagnosed as ELD secondary to repaired CLP, along with their mothers served as participants. Group I (Early Intervention Group, EIG) included 19 mother-child pairs who came to seek SLT soon after corrective surgery and group II (Delayed Intervention Group, DIG) included 16 mother-child pairs who received SLT after the age of 3 years. Further, the groups were divided into group A, and group B. Group ‘A’ received SLT for 60 sessions by Speech Language Pathologist (SLP), while Group B received SLT for 30 sessions by SLP and 30 sessions only by mother without supervision of SLP. Method: The mothers were enrolled for the Early Language Intervention Program and following this, their awareness about CLP was assessed through the Parental awareness questionnaire. The quality of home training was assessed through Mohite’s Inventory. Subsequently, the speech and non-speech behaviors of the mothers were assessed using a Mother’s behavioral checklist. Detailed counseling and orientation was done to the mothers, and SLT was initiated for toddlers. After 60 sessions of intensive SLT, the questionnaire and checklists were re-administered to find out the changes in scores between the pre- and posttest measurements. Results: The scores obtained under different domains in the awareness questionnaire, Mohite’s inventory and Mothers behavior checklist were tabulated and subjected to statistical analysis. Since the data did not follow normal distribution (i.e. p > 0.05), Mann-Whitney U test was conducted which revealed that there was no significant difference between groups I and II as well as groups A and B. Further, Wilcoxon Signed Rank test revealed that mothers had better awareness regarding issues related to CLP and improved home-training abilities post-orientation (p ≤ 0.05). A statistically significant difference was also noted for speech and non-speech behaviors of the mothers (p ≤ 0.05). Conclusions: Extensive orientation and counseling helped mothers of both EI and DI groups to improve their knowledge about CLP. Intensive SLT using focused stimulation and a parent-implemented approach enabled them to carry out the intervention in an effectual manner.Keywords: awareness, cleft lip and palate, early language intervention program, home training, orientation, timing of intervention
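A minimal sketch of the non-parametric tests named in the abstract above (Mann-Whitney U between independent groups, Wilcoxon signed-rank for pre/post scores), using hypothetical questionnaire totals rather than the study's data; the group sizes and score ranges are assumptions.

```python
# Illustrative sketch only: hypothetical scores, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical awareness-questionnaire totals for the two intervention groups
eig_post = rng.integers(20, 40, size=19)   # Early Intervention Group (n = 19)
dig_post = rng.integers(18, 38, size=16)   # Delayed Intervention Group (n = 16)

# Between-group comparison (independent samples, non-parametric)
u_stat, u_p = stats.mannwhitneyu(eig_post, dig_post, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")

# Hypothetical paired pre/post scores for the same mothers
pre = rng.integers(10, 25, size=35)
post = pre + rng.integers(0, 10, size=35)  # scores expected to rise after orientation

# Within-group pre/post comparison (paired, non-parametric)
w_stat, w_p = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank W = {w_stat:.1f}, p = {w_p:.3f}")
```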
Procedia PDF Downloads 122
1102 The Language of COVID-19: Psychological Effects of the Label 'Essential Worker' on Spanish-Speaking Adults
Authors: Natalia Alvarado, Myldred Hernandez-Gonzalez, Mary Laird, Madeline Phillips, Elizabeth Miller, Luis Mendez, Teresa Satterfield Linares
Abstract:
Objectives: Focusing on the reported levels of depressive symptoms from Hispanic individuals in the U.S. during the ongoing COVID-19 pandemic, we analyze the psychological effects of being labeled an ‘essential worker/trabajador(a) esencial.’ We situate this attribute within the complex context of how an individual’s mental health is linked to work status and his/her community’s attitude toward such a status. Method: 336 Spanish-speaking adults (Mage = 34.90; SD = 11.00; 46% female) living in the U.S. participated in a mixed-method study. Participants completed a self-report Spanish-language survey consisting of COVID-19 prompts (e.g., Soy un trabajador esencial durante la pandemia. I am an ‘essential worker’ during the pandemic), civic engagement scale (CES) attitudes (e.g., Me siento responsable de mi comunidad. I feel responsible for my community) and behaviors (e.g., Ayudo a los miembros de mi comunidad. I help members of my community), and the Center for Epidemiological Studies Depression Scale (e.g., Me sentía deprimido/a. I felt depressed). The survey was conducted several months into the pandemic and before the vaccine distribution. Results: Regression analyses show that being labeled an essential worker was correlated to CES attitudes (b= .28, p < .001) and higher CES behaviors (b= .32, p < .001). Essential worker status also reported higher levels of depressive symptoms (b= .17, p < .05). In addition, we found that CES attitudes and CES behaviors were related to higher levels of depressive symptoms (b= .11, p <.05, b = .22, p < .001, respectively). These findings suggest that those who are on the frontlines during the COVID-19 pandemic suffer higher levels of depressive symptoms, despite their affirming community attitudes and behaviors. Discussion: Hispanics/Latinxs make up 53% of the high-proximity employees who must work in person and in close contact with others; this is the highest rate of any racial or ethnic category. Moreover, 31% of Hispanics are classified as essential workers. Our outcomes show that those labeled as trabajadores esenciales convey attitudes of remaining strong and resilient for COVID-19 victims. They also express community attitudes and behaviors reflecting a sense of responsibility to continue working to help others during these unprecedented times. However, we also find that the pressure of maintaining basic needs for others exacerbates mental health challenges and stressors, as many essential workers are anxious and stressed about their physical and economic security. As a result, community attitudes do not protect from depressive symptoms as Hispanic essential workers are failing to balance everyone’s needs, including their own (e.g., physical exhaustion and psychological distress). We conclude with a discussion on alternatives to the phrase ‘essential worker’ and of incremental steps that can be taken to address pandemic-related mental health issues targeting US Hispanic workers.Keywords: COVID-19, essential worker, mental health, race and ethnicity
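A minimal sketch of the regression analyses summarized above, using simulated survey data; the column names and effect sizes are illustrative stand-ins, loosely seeded with the reported coefficients, and are not the study's dataset.

```python
# Illustrative sketch only: simulated survey data with hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 336

df = pd.DataFrame({
    "essential_worker": rng.integers(0, 2, size=n),   # 1 = labeled essential worker
})
# Simulate outcomes loosely in the direction the abstract reports
df["ces_attitudes"] = 0.28 * df["essential_worker"] + rng.normal(0, 1, n)
df["ces_behaviors"] = 0.32 * df["essential_worker"] + rng.normal(0, 1, n)
df["depressive_symptoms"] = (0.17 * df["essential_worker"]
                             + 0.11 * df["ces_attitudes"]
                             + 0.22 * df["ces_behaviors"]
                             + rng.normal(0, 1, n))

# Regressions analogous to those summarized in the abstract
for outcome in ["ces_attitudes", "ces_behaviors", "depressive_symptoms"]:
    model = smf.ols(f"{outcome} ~ essential_worker", data=df).fit()
    b = model.params["essential_worker"]
    p = model.pvalues["essential_worker"]
    print(f"{outcome}: b = {b:.2f}, p = {p:.3f}")
```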
Procedia PDF Downloads 129
1101 Pre-Industrial Local Architecture According to Natural Properties
Authors: Selin Küçük
Abstract:
Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since different settlements were relatively industrialized or non-industrialized at different times, the term ‘pre-industrial’ does not refer to a definite period. Natural properties, the existing conditions and materials of the local environment, are climate, geomorphology and local materials. Subsequent properties, which are all anthropological factors, are the culture of societies, the requirements of people and the construction techniques people use. After industrialization, technology took the place of technique, cultural effects were manipulated, requirements changed and local/natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; technique, by contrast, is time- and experience-dependent and rests on a considerable cultural background. This research is about construction techniques shaped by the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by searching its background, which is hard to reach. Architectural techniques change through time, both positively and negatively. Archaeological layers of a region sometimes give more accurate information about the transformation of its architecture; however, the natural properties of a region are the most helpful elements for perceiving its construction techniques. Many international sources from different cultures deal with local architecture, but they mention natural properties only separately; unfortunately, there is no literature that treats this subject systematically. This research aims to build a clear perspective on local architecture by categorizing archetypes according to natural properties. Its ultimate goal is a clear classification of local architecture, independent of subsequent (anthropological) properties, covering the whole world like a handbook. Since local architecture is the most sustainable architecture with respect to its economic, ecological and sociological properties, there should be extensive information about its construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system, and while this criticism goes on, identical buildings without identity keep multiplying. In the post-industrial period, technology has widely taken the place of technique, cultural effects are manipulated, requirements have changed and natural local properties have almost disappeared from architecture. This study does not urge architects to use local techniques; rather, it traces the progress of pre-industrial architectural evolution, which is healthier, cheaper and more natural. Migration from rural areas to developing and developed cities should be prohibited so that culture and construction techniques can be preserved. Since big cities have psychological, sensational and sociological impacts on people, rural settlers can be convinced not to migrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would also narrow the economic and sociological gulf between cities and the countryside.
The expected result is that, where there is no deformation (the adaptation of other traditional building forms brought by immigration) or assimilation within a climatic region, very similar solutions should appear in the same climatic regions of the world, even where there is no relationship (trade, communication, etc.) among them.
Keywords: climate zones, geomorphology, local architecture, local materials
Procedia PDF Downloads 429
1100 The Role of People in Continuing Airworthiness: A Case Study Based on the Royal Thai Air Force
Authors: B. Ratchaneepun, N.S. Bardell
Abstract:
It is recognized that people are the main drivers in almost all the processes that affect airworthiness assurance. This is especially true in the area of aircraft maintenance, which is an essential part of continuing airworthiness. This work investigates what impact English language proficiency, the intersection of the military and Thai cultures, and the lack of initial and continuing human factors training have on the work performance of maintenance personnel in the Royal Thai Air Force (RTAF). A quantitative research method based on a cross-sectional survey was used to gather data about these three key aspects of “people” in a military airworthiness environment. 30 questions were developed addressing the crucial topics of English language proficiency, impact of culture, and human factors training. The officers and the non-commissioned officers (NCOs) who work for the Aeronautical Engineering Divisions in the RTAF comprised the survey participants. The survey data were analysed to support various hypotheses by using a t-test method. English competency in the RTAF is very important since all of the service manuals for Thai military aircraft are written in English. Without such competency, it is difficult for maintenance staff to perform tasks and correctly interpret the relevant maintenance manual instructions; any misunderstandings could lead to potential accidents. The survey results showed that the officers appreciated the importance of this more than the NCOs, who are the people actually doing the hands-on maintenance work. Military culture focuses on the success of a given mission, and leverages the power distance between the lower and higher ranks. In Thai society, a power distance also exists between younger and older citizens. In the RTAF, such a combination tends to inhibit a just reporting culture and hence hinders safety. The survey results confirmed this, showing that the older people and higher ranks involved with RTAF aircraft maintenance believe that the workplace has a positive safety culture and climate, whereas the younger people and lower ranks think the opposite. The final area of consideration concerned human factors training and non-technical skills training. The survey revealed that those participants who had previously attended such courses appreciated its value and were aware of its benefits in daily life. However, currently there is no regulation in the RTAF to mandate recurrent training to maintain such knowledge and skills. The findings from this work suggest that the people involved in assuring the continuing airworthiness of the RTAF would benefit from: (i) more rigorous requirements and standards in the recruitment, initial training and continuation training regarding English competence; (ii) the development of a strong safety culture that exploits the uniqueness of both the military culture and the Thai culture; and (iii) providing more initial and recurrent training in human factors and non-technical skills.Keywords: aircraft maintenance, continuing airworthiness, military culture, people, Royal Thai Air Force
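A minimal sketch of the t-test comparison used in the survey analysis above (for example, officers versus NCOs on a single survey item), using hypothetical Likert-scale ratings; the group sizes, values and the specific item are assumptions, not the RTAF data.

```python
# Illustrative sketch only: hypothetical Likert-scale responses, not RTAF survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical 5-point ratings of "English proficiency is critical to my work"
officers = rng.integers(3, 6, size=40)   # officers assumed to rate importance higher
ncos = rng.integers(2, 5, size=60)       # NCOs assumed to rate it lower

# Independent-samples t-test of the hypothesis that the two ranks differ
t_stat, p_val = stats.ttest_ind(officers, ncos, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
if p_val <= 0.05:
    print("Reject H0: mean ratings differ between officers and NCOs.")
else:
    print("Fail to reject H0: no detectable difference between ranks.")
```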
Procedia PDF Downloads 130
1099 Calculation of A Sustainable Quota Harvesting of Long-tailed Macaque (Macaca fascicularis Raffles) in Their Natural Habitats
Authors: Yanto Santosa, Dede Aulia Rahman, Cory Wulan, Abdul Haris Mustari
Abstract:
The global demand for long-tailed macaques for medical experimentation has continued to increase. Fulfillment of Indonesian export demands has been mostly from natural habitats, based on a harvesting quota. This quota has been determined according to the total catch for a given year, and not based on consideration of any demographic parameters or physical environmental factors with regard to the animal; hence threatening the sustainability of the various populations. It is therefore necessary to formulate a method for calculating a sustainable harvesting quota, based on population parameters in natural habitats. Considering the possibility of variations in habitat characteristics and population parameters, a time series observation of demographic and physical/biotic parameters, in various habitats, was performed on 13 groups of long-tailed macaques, distributed throughout the West Java, Lampung and Yogyakarta areas of Indonesia. These provinces were selected for comparison of the influence of human/tourism activities. Data on population parameters that was collected included data on life expectancy according to age class, numbers of individuals by sex and age class, and ‘ratio of infants to reproductive females’. The estimation of population growth was based on a population dynamic growth model: the Leslie matrix. The harvesting quota was calculated as being the difference between the actual population size and the MVP (minimum viable population) for each sex and age class. Observation indicated that there were variations within group size (24 – 106 individuals), gender (sex) ratio (1:1 to 1:1.3), life expectancy value (0.30 to 0.93), and ‘ratio of infants to reproductive females’ (0.23 to 1.56). Results of subsequent calculations showed that sustainable harvesting quotas for each studied group of long-tailed macaques, ranged from 29 to 110 individuals. An estimation model of the MVP for each age class was formulated as Log Y = 0.315 + 0.884 Log Ni (number of individual on ith age class). This study also found that life expectancy for the juvenile age class was affected by the humidity under tree stands, and dietary plants’ density at sapling, pole and tree stages (equation: Y= 2.296 – 1.535 RH + 0.002 Kpcg – 0.002 Ktg – 0.001 Kphn, R2 = 89.6% with a significance value of 0.001). By contrast, for the sub-adult-adult age class, life expectancy was significantly affected by slope (equation: Y=0.377 = 0.012 Kml, R2 = 50.4%, with significance level of 0.007). The infant to reproductive female ratio was affected by humidity under tree stands, and dietary plant density at sapling and pole stages (equation: Y = -1.432 + 2.172 RH – 0.004 Kpcg + 0.003 Ktg, R2 = 82.0% with significance level of 0.001). This research confirmed the importance of population parameters in determining the minimum viable population, and that MVP varied according to habitat characteristics (especially food availability). It would be difficult therefore, to formulate a general mathematical equation model for determining a harvesting quota for the species as a whole.Keywords: harvesting, long-tailed macaque, population, quota
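The quota logic described above can be sketched as a Leslie-matrix projection followed by subtracting a minimum viable population (MVP) per sex and age class. The fecundity, survival and MVP values below are hypothetical placeholders, not the study's fitted parameters (the study instead estimated MVP from the log-log relation Log Y = 0.315 + 0.884 Log Ni).

```python
# Illustrative sketch only: hypothetical fecundity, survival and MVP values.
import numpy as np

# Leslie matrix for three age classes: infant, juvenile, sub-adult/adult.
# Top row = fecundity per age class; sub-diagonal = survival into the next class.
L = np.array([
    [0.0, 0.2, 0.9],
    [0.6, 0.0, 0.0],
    [0.0, 0.8, 0.0],
])

n_now = np.array([30.0, 25.0, 45.0])     # current abundance per age class
n_next = L @ n_now                       # projected abundance after one time step

# Hypothetical minimum viable population per age class (the study fits this
# to abundance via a log-log regression; fixed values are used here instead).
mvp = np.array([20.0, 15.0, 18.0])

# Sustainable quota = projected abundance minus MVP, floored at zero.
quota = np.maximum(n_next - mvp, 0.0)

print("Projected abundance:", np.round(n_next, 1))
print("Harvest quota:      ", np.round(quota, 1))
```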
Procedia PDF Downloads 424
1098 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines
Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez
Abstract:
Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations would give evidence on how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area having forests and rivers. Despite being situated nearby inhabited villages; the area is observed to have a possible rich butterfly population. To assess the Rhopalocera species richness and diversity, transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each with a 5-meter diameter were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M to 3:00 P.M. under warm and bright weather, with no more than moderate winds and when it was not raining. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the location of the selected sample sites and the coordinates of where each sample was collected. Morphological analysis was done for the first phase of the study to identify the voucher specimen to the lowest taxonomic level possible using books about butterfly identification guides and species lists as references. For the second phase, DNA barcoding will be used to further identify the voucher specimen into the species taxonomic level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. Nymphalidae family of genus Ypthima and the Pieridae family of genera Eurema and Leptosia were the most dominant species observed. Twenty (20) of the thirty-one (31) voucher specimen were already identified to their species taxonomic level using DNA Barcoding. Shannon-Weiner index showed that the highest diversity level was observed in the third transect (H’ = 2.947), followed by the second transect (H’ = 2.6317) and the lowest being in the first transect (H’ = 1.767). This indicates that butterflies are likely to inhabit dipterocarp and riparian vegetation types than agroecosystem, which influences their species composition and diversity. Moreover, the appearance of a river in the riparian vegetation supported its diversity value since butterflies have the tendency to fly into areas near rivers. Species identification of other voucher specimen will be done in order to compute the overall species richness in PC Hills. Further butterfly sampling sessions of PC Hills is recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing the Rhopalocera diversity in other locations should be considered along with studying factors that affect butterfly species composition other than vegetation types.Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera
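The diversity comparison reported above rests on the Shannon-Wiener index, H' = -Σ p_i ln p_i, computed per transect. The sketch below uses hypothetical genus counts for the three vegetation types, not the PC Hills data.

```python
# Illustrative sketch only: hypothetical butterfly counts per genus.
import numpy as np

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical counts per genus for the three vegetation types (transects)
transects = {
    "agroecosystem":      [60, 35, 20, 5, 2],
    "dipterocarp forest": [30, 25, 22, 18, 15, 12, 10, 8],
    "riparian":           [28, 26, 24, 20, 18, 16, 14, 12, 10],
}

for name, counts in transects.items():
    print(f"{name}: H' = {shannon_wiener(counts):.3f}")
```

With these made-up counts the more even, species-rich transects score higher, mirroring the pattern the abstract reports for the riparian and dipterocarp sites.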
Procedia PDF Downloads 154
1097 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access), as well as recognition, data fusion and communication protocols, have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
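A rough sketch of what a non-deterministic MATE-style evaluation could look like: enumerate candidate sensor-system designs, sample uncertain attribute performance, and score each design with a weighted multi-attribute utility against a notional cost. The design variables, attribute models, weights and costs below are all assumptions for illustration, not the authors' model.

```python
# Illustrative sketch only: hypothetical design variables, attributes, weights and costs.
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Design variables for a notional sensor system
sensor_counts = [2, 4, 8]
fusion_rules = ["voting", "bayesian"]
cpu_tiers = ["low", "high"]

weights = {"accuracy": 0.5, "latency": 0.3, "coverage": 0.2}  # stakeholder weights (sum to 1)

def sample_attributes(n_sensors, fusion, cpu, n_draws=500):
    """Monte Carlo draws of single-attribute utilities in [0, 1] under uncertainty."""
    acc = np.clip(rng.normal(0.6 + 0.03 * n_sensors + (0.05 if fusion == "bayesian" else 0.0),
                             0.05, n_draws), 0, 1)
    lat = np.clip(rng.normal(0.8 - 0.02 * n_sensors + (0.1 if cpu == "high" else 0.0),
                             0.05, n_draws), 0, 1)
    cov = np.clip(rng.normal(0.4 + 0.06 * n_sensors, 0.05, n_draws), 0, 1)
    return {"accuracy": acc, "latency": lat, "coverage": cov}

print("design                              mean utility   cost")
for n_sensors, fusion, cpu in itertools.product(sensor_counts, fusion_rules, cpu_tiers):
    attrs = sample_attributes(n_sensors, fusion, cpu)
    utility = sum(weights[k] * attrs[k] for k in weights)   # linear-additive multi-attribute utility
    cost = 10 * n_sensors + (50 if cpu == "high" else 20)   # notional cost model
    print(f"{n_sensors} sensors, {fusion:8s}, {cpu:4s} CPU    {utility.mean():.3f}          {cost}")
```

Plotting mean utility against cost for every enumerated design and keeping the non-dominated points would give the Pareto front that tradespace exploration typically examines.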
Procedia PDF Downloads 183
1096 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review
Authors: Hanan Algarni
Abstract:
Background: Impairment of walking and balance skills has negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a new emerging approach that offers patients with feedback, open and random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence around the effects of the VR treadmill training on gait speed and balance primarily, functional independence and community participation secondarily in stroke patients. Methods: Systematic review was conducted; search strategy included electronic data bases: MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: Participant: adult >18 years, stroke, ambulatory, without severe visual or cognitive impartments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other interventions. Outcomes: gait speed, balance, function, community participation. Characteristics of included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's ROB tool. Narrative synthesis of findings was undertaken and summary of findings in each outcome was reported using GRADEpro. Results: Four studies were included involving 84 stroke participants with chronic hemiparesis. Interventions intensity ranged (6-12 sessions, 20 minutes-1 hour/session). Three studies investigated the effects on gait speed and balance. 2 studies investigated functional outcomes and one study assessed community participation. ROB assessment showed 50% unclear risk of selection bias and 25% of unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post training and follow up. Outcome measures, training intensity and durations also varied across the studies, grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was done on few numbers of studies in each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow up in two studies (2 weeks-1 month) and other studies did not perform follow up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long term effects of VR treadmill effects on function independence and community participation after stroke, in order to draw conclusions and produce stronger robust evidence.Keywords: virtual reality, treadmill, stroke, gait rehabilitation
Procedia PDF Downloads 274
1095 Performing Arts and Performance Art: Interspaces and Flexible Transitions
Authors: Helmi Vent
Abstract:
This four-year artistic research project set out to explore the adaptable transitions within the realms between the two genres. This paper singles out one research question from the entire project for its focus, namely how and under what circumstances such transitions between a reinterpretation and a new creation can take place during the performative process. The film documentation accompanying the project was produced at the Mozarteum University in Salzburg, Austria, as well as on diverse everyday stages at various locations. The model institution that hosted the project is the LIA – Lab Inter Arts, under the direction of Helmi Vent. LIA combines artistic research with performative applications. The project participants are students from various artistic fields of study. The film documentation forms a central platform for the entire project. It serves as an audiovisual record of performative origins and development processes, as the basis for analysis and evaluation, including the participants' self-evaluation of the recorded material, and as illustrative and discussion material in relation to the topic of this paper. Regarding the ‘interspaces’ and variable ‘transitions’: the performing arts in Western cultures generally orient themselves toward existing original compositions – most often in the interconnected fields of music, dance and theater – with the goal of reinterpreting and rehearsing a pre-existing score, choreographed work, libretto or script and presenting that piece to an audience. The essential tool in this reinterpretation process is generally the artistic ‘language’ performers learn over the course of their main studies. Thus, speaking is combined with singing, playing an instrument is combined with dancing, or with pictorial or sculpturally formed works, in addition to many other variations. If the performing arts rid themselves of their designations from time to time and initially follow the emerging, diffusely gliding transitions into the unknown, the artistic language the performer has learned becomes a creative resource. The illustrative film excerpts depicting the realms between performing arts and performance art offer insights into the ways the project participants embrace unknown and explorative processes, allowing new performative designs or concepts to emerge between the participants’ acquired cultural and artistic skills and their own creations – according to their own ideas and issues, sometimes with their direct involvement, fragmentary, provisional, left as a rough draft or fully composed. All in all, it is an evolutionary process whose key parameters cannot be distilled down to their essence. Rather, they stem from a subtle inner perception, from deep-seated emotions, imaginations and non-discursive decisions, which ultimately result in an artistic statement rising to the visible and audible surface. Within these realms between performing arts and performance art and their extremely flexible transitions, exceptional opportunities can be found to grasp and realise art itself as a research process.
Keywords: art as research method, Lab Inter Arts (LIA), performing arts, performance art
Procedia PDF Downloads 272
1094 Multilocal Youth and the Berlin Digital Industry: Productive Leisure as a Key Factor in European Migration
Authors: Stefano Pelaggi
Abstract:
The research is focused on youth labor and mobility in Berlin. Mobility has become a common denominator in our daily lives but it does not primarily move according to monetary incentives. Labor, knowledge and leisure overlap on this point as cities are trying to attract people who could participate in production of the innovations while the new migrants are experiencing the lifestyle of the host cities. The research will present the project of empirical study focused on Italian workers in the digital industry in Berlin, trying to underline the connection between pleasure, leisure with the choice of life abroad. Berlin has become the epicenter of the European Internet start-up scene, but people suitable to work for digital industries are not moving in Berlin to make a career, most of them are attracted to the city for different reasons. This point makes a clear exception to traditional migration flows, which are always originated from a specific search of employment opportunities or strong ties, usually families, in a place that could guarantee success in finding a job. Even the skilled migration has always been originated from a specific need, finding the right path for a successful professional life. In a society where the lack of free time in our calendar seems to be something to be ashamed, the actors of youth mobility incorporate some categories of experiential tourism within their own life path. Professional aspirations, lifestyle choices of the protagonists of youth mobility are geared towards meeting the desires and aspirations that define leisure. While most of creative work places, in particular digital industries, uses the category of fun as a primary element of corporate policy, virtually extending the time to work for the whole day; more and more people around the world are deciding their path in life, career choices on the basis of indicators linked to the realization of the self, which may include factors like a warm climate, cultural environment. All indicators that are usually eradicated from the hegemonic approach to labor. The interpretative framework commonly used seems to be mostly focused on a dualism between Florida's theories and those who highlight the absence of conflict in his studies. While the flexibility of the new creative industries is minimizing leisure, incorporating elements of leisure itself in work activities, more people choose their own path of life by placing great importance to basic needs, through a gaze on pleasure that is only partially driven by consumption. The multi localism is the co-existence of different identities and cultures that do not conflict because they reject the bind on territory. Local loses its strength of opposition to global, with an attenuation of the whole concept of citizenship, territory and even integration. A similar perspective could be useful to search a new approach to all the studies dedicated to the gentrification process, while studying the new migrations flow.Keywords: brain drain, digital industry, leisure and gentrification, multi localism
Procedia PDF Downloads 243
1093 An Exploration of the Emergency Staff’s Perceptions and Experiences of Teamwork and the Skills Required in the Emergency Department in Saudi Arabia
Authors: Sami Alanazi
Abstract:
Teamwork practices have been recognized as a significant strategy to improve patient safety, quality of care, and staff and patient satisfaction in healthcare settings, particularly within the emergency department (ED). EDs depend heavily on teams of interdisciplinary healthcare staff to carry out their operational goals and core business of providing care to the seriously ill and injured. The ED is also recognized as a high-risk area in relation to service demand and the potential for human error. Few studies have considered the perceptions and experiences of ED staff (physicians, nurses, allied health professionals, and administration staff) about the practice of teamwork, especially in Saudi Arabia (SA), and no studies have been conducted to explore the practices of teamwork in the EDs. Aim: To explore the practices of teamwork from the perspectives and experiences of staff (physicians, nurses, allied health professionals, and administration staff) when interacting with each other in the admission areas in the ED of a public hospital in the Northern Border region of SA. Method: A qualitative case study design was utilized, drawing on two methods of data collection. The first comprised semi-structured interviews (n=22) with physicians (6), nurses (10), allied health professionals (3), and administrative members (3) working in the ED of a hospital in the Northern Border region of SA. The second method was non-participant direct observation. All data were analyzed using thematic analysis. Findings: The main themes that emerged from the analysis were as follows: the meaning of teamwork, reasons for teamwork, ED environmental factors, organizational factors, the value of communication, leadership, teamwork skills in the ED, team members' behaviors, multicultural teamwork, and patient and family behaviors. Discussion: Working in the ED environment played a major role in affecting work performance as well as team dynamics. Communication, time management, fast-paced performance, multitasking, motivation, leadership, and stress management were highlighted by the participants as fundamental skills that have a major impact on team members and patients in the ED. The behaviors of team members, such as disputes, conflict, cooperation, uncooperative members, neglect, and the members' emotions, were found to impact team dynamics as well as ED health services. The behaviors of patients and those accompanying them also had a direct impact on the team and the quality of the services. In addition, cultural differences separated team members and created undesirable gaps such as gender segregation, national origin discrimination, and similarities and differences in interests. Conclusion: Effective teamwork, in the context of the emergency department, was recognized as an essential element in achieving quality of care as well as improving staff satisfaction.
Keywords: teamwork, barrier, facilitator, emergency department
Procedia PDF Downloads 140
1092 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning
Authors: Rajkumar Ghosh
Abstract:
Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This sheds light on the impact of out-of-sequence thrust dynamics in earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery
Procedia PDF Downloads 88
1091 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning
Authors: John Zanetich
Abstract:
Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs) such as learning communities, community-based projects, research, internships, study abroad and culminating senior experiences share several traits in common: they demand considerable time and effort, move learning outside the classroom, require meaningful interactions between faculty and students, encourage collaboration with diverse others, and provide frequent and substantive feedback. As a result of the experiential learning they involve, participation in these practices can be life changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and the knowledge embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and when a learner must align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is considered generalized and tacit instead of explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source. The development of knowledge-related capabilities such as the Active Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, and improve decision making and overall performance. ADTK allows students in HIPs to use their existing knowledge in a way that lets them evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein change theory, the learner will reach for tacit knowledge as a stabilizing mechanism when challenged by new information that puts them slightly off balance. As in word-association drills, the important concept is the first thought. The reactionary outpouring to an experience is the programmed or tacit memory and knowledge of the learner's core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper will explore the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It will focus on knowledge conversion in curriculum development and propose the use of one-time educational experiences, multi-session experiences and sequential program experiences focusing on tacit knowledge in educational programs.
Keywords: tacit knowledge, knowledge management, college programs, experiential learning
Procedia PDF Downloads 262
1090 Identification of Three Strategies to Enhance University Students’ Professional Identity, Using Hierarchical Regression Analysis
Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez
Abstract:
Students’ transitions from high school to the university have been challenged by the lack of continuity between both contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases the dropout rates and reduces students’ academic success. This discontinuity emanates because ‘transitions concern a restructuring of what the person does and who the person perceives him or herself to be’. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as who am I? or who are we? This is integrated by personal identity, and as many social identities as groups, the individual feels he/she is a part. A case in point to construct a social identity is the identification with a profession. For this reason, a way to lighten the generated tension during transitions is applying strategies orientated to enhance students’ professional identity in their point of entry to the higher education institution. That would create a sense of continuity between high school and higher education contexts, increasing their Professional Identity Strength. To develop the strategies oriented to enhance students Professional Identity, it is important to analyze what influences it. There exist several influencing factors that influence Professional Identity (e.g., professional status, the recommendation of family and peers, the academic environment, or the chosen bachelor degree). There is a gap in the literature analyzing the impact of these factors on more than one bachelor degree. In this regards, our study takes an additional step with the aim of evaluating the influence of several factors on Professional Identity using a cohort of university students from multiple degrees between the ages of 17-19 years. To do so, we used hierarchical regression analyses to assess the impact of the following factors: External Motivation Conditionals (EMC), Educational Experience Conditionals (EEC) and Personal Motivational Conditional (PMP). After conducting the analyses, we found that the assessed factors influenced students’ professional identity differently according to their bachelor degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics and engineering students were just influenced by PMC. Basing on that influences, we proposed three different strategies aimed to enhance students’ professional identity, in the short and long term. These strategies are: to enhance students’ professional identity before the incorporation to university through campuses and icebreaker activities; to apply recruitment strategies aimed to provide realistic information of the bachelor degree; and to incorporate different activities, such as in-vitro, in situ and self-directed activities aimed to enhance longitudinally students’ professional identity from the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor degrees since there is still no evidence. And, second, using as a benchmark the obtained results, we contribute from a practical perspective, by proposing several alternative strategies to increase students’ professional identity strength aiming to lighten their transition from high school to higher education.Keywords: professional identity, higher education, educational strategies , students
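A minimal sketch of the hierarchical (blockwise) regression approach described above, entering one conditional block per step and examining the change in R². The data are simulated; the predictor names (EMC, EEC, PMC) follow the abstract's abbreviations, while the outcome label (PIS, for professional identity strength), the coefficients and the sample size are hypothetical.

```python
# Illustrative sketch only: simulated data; block names follow the abstract (EMC, EEC, PMC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300

df = pd.DataFrame({
    "EMC": rng.normal(size=n),   # external motivation conditionals
    "EEC": rng.normal(size=n),   # educational experience conditionals
    "PMC": rng.normal(size=n),   # personal motivational conditionals
})
# Simulated professional identity strength, loosely loading on PMC and EMC
df["PIS"] = 0.5 * df["PMC"] + 0.3 * df["EMC"] + rng.normal(size=n)

# Hierarchical entry: each step adds one block of predictors
steps = ["PIS ~ EMC",
         "PIS ~ EMC + EEC",
         "PIS ~ EMC + EEC + PMC"]

prev_r2 = 0.0
for formula in steps:
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula:26s} R2 = {fit.rsquared:.3f}  delta R2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared
```

Comparing the change in R² at each step is what lets the analysis say which block of conditionals adds explanatory power for a given degree programme.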
Procedia PDF Downloads 144
1089 White Individuals' Perception On Whiteness
Authors: Sebastian Del Corral Winder, Kiriana Sanchez, Mixalis Poulakis, Samantha Gray
Abstract:
This paper seeks to explore White privilege and Whiteness. Being White in the U.S. is often perceived as the norm and it brings significant social, economic, educational, and health privileges that often are hidden in social interactions. One quality of Whiteness has been its invisibility given its intrinsic impact on the system, which becomes only visible when paying close attention to White identity and culture and during cross-cultural interactions. The cross-cultural interaction provides an emphasis on differences between the participants and people of color are often viewed as “the other.” These interactions may promote an increased opportunity for discrimination and negative stereotypes against a person of color. Given the recent increase of violence against culturally diverse groups, there has been an increased sense of otherness and division in the country. Furthermore, the accent prestige theory has found that individuals who speak English with a foreign accent are perceived as less educated, competent, friendly, and trustworthy by White individuals in the United States. Using the consensual qualitative research (CQR) methodology, this study explored the cross-cultural dyad from the White individual’s perspective focusing on the psychotherapeutic relationship. The participants were presented with an audio recording of a conversation between a psychotherapist with a Hispanic accent and a patient with an American English accent. Then, the participants completed an interview regarding their perceptions of race, culture, and cross-cultural interactions. The preliminary results suggested that the Hispanic accent alone was enough for the participants to assign stereotypical ethnic and cultural characteristics to the individual with the Hispanic accent. Given the quality of the responses, the authors completed a secondary analysis to explore Whiteness and White privilege in more depth. Participants were found to be on a continuum in their understanding and acknowledgment of systemic racism; while some participants listed examples of inequality, other participants noted: “all people are treated equally.” Most participants noted their feelings of discomfort in discussing topics of cultural diversity and systemic racism by fearing to “say the ‘wrong thing.” Most participants placed the responsibility of discussing cultural differences with the person of color, which has been observed to create further alienation and otherness for culturally diverse individuals. The results indicate the importance of examining racial and cultural biases from White individuals to promote an anti-racist stance. The results emphasize the need for greater systemic changes in education, policies, and individual awareness regarding cultural identity. The results suggest the importance for White individuals to take ownership of their own cultural biases in order to promote equity and engage in cultural humility in a multicultural world. Future research should continue exploring the role of White ethnic identity and education as they appear to moderate White individuals’ attitudes and beliefs regarding other races and cultures.Keywords: culture, qualitative research, whiteness, white privilege
Procedia PDF Downloads 158
1088 The Influence of Microsilica on the Cluster Cracks' Geometry of Cement Paste
Authors: Maciej Szeląg
Abstract:
The changing environmental impacts under which cement composites operate cause a number of phenomena in the structure of the material which result in volume deformation of the composite. These strains can cause cracking of the composite. Cracks merge by propagation or intersection to form a characteristic structure of cracks known as cluster cracks. This characteristic mesh of cracks is crucial to almost all building materials that operate under service load conditions. Particularly dangerous for a cement matrix is a sudden load of elevated temperature, i.e., thermal shock: in a relatively short period of time, a large temperature gradient between the outer surface and the material's interior can result in crack formation on the surface and in the volume of the material. In this paper, image analysis tools were used to analyze the geometry of the cluster cracks of the cement pastes. Four series of specimens made of two different Portland cements were tested. In addition, two series included microsilica as a substitute for 10% of the cement. Within each series, specimens were prepared at three w/b (water/binder) ratios: 0.4, 0.5 and 0.6. The cluster cracks were created by suddenly loading the samples with an elevated temperature of 250°C. Images of the cracked surfaces were obtained via scanning at 2400 DPI. Digital processing and measurements were performed using ImageJ v. 1.46r software. To describe the structure of the cluster cracks, three stereological parameters were proposed: the average cluster area A̅, the average length of the cluster perimeter L̅, and the average opening width of a crack between clusters I̅. The aim of the study was to identify and evaluate the relationships between the measured stereological parameters and the compressive strength and bulk density of the modified cement pastes. The tests of the mechanical and physical features were carried out in accordance with EN standards. The curves describing the relationships were developed using the least squares method, and the quality of the curve fit to the empirical data was evaluated using three diagnostic statistics: the coefficient of determination R2, the standard error of estimation Se, and the coefficient of random variation W. The use of image analysis allowed a quantitative description of the cluster cracks' geometry. Based on the obtained results, a strong correlation was found between A̅ and L̅, reflecting the fractal nature of the cluster crack formation process. It was noted that the compressive strength and the bulk density of the cement pastes decrease with an increase in the values of the stereological parameters. It was also found that the main factors impacting the cluster cracks' geometry are the cement particle size and the overall binder content in the volume of the material. The microsilica reduced the A̅, L̅ and I̅ values compared to the values obtained for the classical cement paste samples, which is attributed to the pozzolanic properties of the microsilica.
Keywords: cement paste, cluster cracks, elevated temperature, image analysis, microsilica, stereological parameters
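A sketch of the least-squares curve fitting and diagnostics described above, assuming a hypothetical exponential relationship between the average cluster area and compressive strength. The model form, the data, and the definition of W as the standard error relative to the mean are assumptions for illustration, not the paper's actual values.

```python
# Illustrative sketch only: hypothetical measurements, not the cement-paste data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Hypothetical average cluster area (mm^2) and compressive strength (MPa);
# strength is assumed to fall as the cluster area grows.
area = np.linspace(5, 60, 30)
strength = 80 * np.exp(-0.02 * area) + rng.normal(0, 2, area.size)

def model(x, a, b):
    return a * np.exp(-b * x)          # candidate least-squares model

params, _ = curve_fit(model, area, strength, p0=(80, 0.02))
pred = model(area, *params)
resid = strength - pred

# Diagnostics analogous to those used in the abstract
r2 = 1 - np.sum(resid**2) / np.sum((strength - strength.mean())**2)   # coefficient of determination
se = np.sqrt(np.sum(resid**2) / (area.size - len(params)))            # standard error of estimation
w = 100 * se / strength.mean()                                        # coefficient of variation of residuals (%)

print(f"fit: strength = {params[0]:.1f} * exp(-{params[1]:.3f} * area)")
print(f"R2 = {r2:.3f}, Se = {se:.2f} MPa, W = {w:.1f} %")
```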
Procedia PDF Downloads 246
1087 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from the primary containment can result in pool fire, jet fire and even explosion when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
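A toy end-to-end sketch of the workflow described above: a simple simulator stands in for the CFD/transient pipeline model, generating pressure-flow-temperature (PVT) snapshots for leak and no-leak cases, and a supervised classifier is trained on them. All physical values, feature choices and the choice of classifier are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a toy simulator stands in for the CFD/transient pipeline model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)

def simulate_pvt(leak, n=1000):
    """Toy pressure/flow/temperature snapshots at two stations; a leak lowers
    downstream pressure and creates an inlet/outlet flow imbalance."""
    p_in = rng.normal(60.0, 1.0, n)                       # bar
    p_out = p_in - rng.normal(5.0, 0.5, n) - (3.0 if leak else 0.0)
    q_in = rng.normal(300.0, 5.0, n)                      # m3/h
    q_out = q_in - (rng.normal(20.0, 3.0, n) if leak else rng.normal(0.0, 3.0, n))
    temp = rng.normal(40.0, 2.0, n)                       # degC
    X = np.column_stack([p_in, p_out, q_in, q_out, temp])
    y = np.full(n, int(leak))
    return X, y

X0, y0 = simulate_pvt(leak=False)
X1, y1 = simulate_pvt(leak=True)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Swapping the toy simulator for trusted simulation output is the step that would let such a model be trained without instrumenting the whole pipeline with closely spaced sensors.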
Procedia PDF Downloads 191
1086 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes
Authors: Alicia Ettlin
Abstract:
With the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people started to internalise a neoliberal way of thinking whereby the human body became an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, been increasingly criticised for viewing the social actor as 'disembodied', a detached actor whose powerful mind governs the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, creating an embodied understanding that the body, just like other areas of people's lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body and its connections with the social environment that reaches beyond debates around mind-body binary thinking. Hence, following this argument, body management should not be thought of as either solely guided by embodied discourses or as merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 showed that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for taking rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected a neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work or the achievement of specific body management goals (e.g. eating chocolate for reaching the daily step count goal). A few participants, however, showed resistance against these neoliberal values and, in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their body in normative ways. Even those who indicated their resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of health as well as appearance ideals. The discussion of the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than two separate or opposing, concepts. Keywords: dualism, embodiment, mind, neoliberalism
Procedia PDF Downloads 163
1085 A Critical Analysis of the Current Concept of Healthy Eating and Its Impact on Food Traditions
Authors: Carolina Gheller Miguens
Abstract:
Feeding is, and should be, pleasurable for living beings so that they desire to nourish themselves while preserving the continuity of the species. Social rites usually revolve around the table and are closely linked to the cultural traditions of each region and social group. Since the beginning, food has been closely linked with the products each region provides and with the respective seasons of production. With globalization and the facilities of modern life, we are able to find an ever increasing variety of products at any time of the year on supermarket shelves. These lifestyle changes end up directly influencing food traditions. With the era of uncontrolled obesity, caused by the dazzle of the large and varied supply of low-priced, ultra-processed industrial products, now in the past, we are living in a time when people put aside the pleasure of eating in order to eat exclusively the food the media dictates as healthy. Recently, the medicalization of food in our society has become so present in daily life that, almost without realizing it, we make food choices conditioned by studies of the properties of these foods. The fact that people are more attentive to their health is interesting. However, when this care becomes an obsessive disorder, which imposes itself on the pleasure of eating and extinguishes traditional customs, it becomes dangerous for our recognition as citizens belonging to a culture and society. This new way of living generates a rupture with the social environment of origin, possibly exposing old traditions to oblivion after two or three generations. Based on these facts, the presented study analyzes the social transformations in our society that triggered the current medicalization of food. In order to clarify what a healthy diet actually is, this research proposes a critical analysis of the subject, aiming to understand nutritional rationality and how it acts in the medicalization of food. A wide bibliographic review of the subject was carried out, followed by exploratory research in online (especially social) media, a relevant source in this context due to the perceived influence of such media on contemporary eating habits. Finally, these data were cross-referenced, critically analyzing the current situation of the concept of healthy eating and the medicalization of food. Throughout this research, it was noticed that people are increasingly seeking information about the nutritional properties of food, but instead of seeking the benefits of the products they traditionally eat in their social environment, they incorporate external elements that often bring benefits similar to the foods already consumed. This is because access to information is directed by the media and exalts the exotic, since this arouses more interest among the population in general. Efforts must be made to clarify that traditional products are also healthy foods, rich in history, memory and tradition, and cannot be replaced by a standardized diet little concerned with the construction of taste and pleasure, treating food as if it were a medicinal product. Keywords: food traditions, food transformations, healthy eating, medicalization of food
Procedia PDF Downloads 328
1084 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term for a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). This data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases) or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure. Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
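A hierarchical text classification architecture of the kind proposed above, one encoder per utterance followed by a second encoder over the sequence of utterance vectors, could be sketched as follows. The layer types, dimensions and toy inputs are assumptions for illustration, not the authors' exact model.

```python
# Minimal sketch of a hierarchical toxicity classifier: each utterance in
# the thread is encoded separately, a second recurrent layer runs over the
# sequence of utterance vectors, and the final state (covering the target
# tweet plus its context) feeds the toxicity classifier.
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, utt_dim=128, ctx_dim=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utt_encoder = nn.GRU(emb_dim, utt_dim, batch_first=True)   # word level
        self.ctx_encoder = nn.GRU(utt_dim, ctx_dim, batch_first=True)   # utterance level
        self.classifier = nn.Linear(ctx_dim, n_classes)

    def forward(self, thread):
        # thread: (batch, n_utterances, n_tokens) of token ids,
        # ordered so that the target tweet is the last utterance.
        b, n_utt, n_tok = thread.shape
        words = self.embed(thread.view(b * n_utt, n_tok))
        _, utt_h = self.utt_encoder(words)               # (1, b*n_utt, utt_dim)
        utt_vecs = utt_h.squeeze(0).view(b, n_utt, -1)   # one vector per utterance
        _, ctx_h = self.ctx_encoder(utt_vecs)            # context-aware thread state
        return self.classifier(ctx_h.squeeze(0))         # logits: toxic / non-toxic

# Toy usage: a batch of 4 threads, each with 3 utterances of 20 tokens.
model = HierarchicalToxicityClassifier(vocab_size=5000)
logits = model(torch.randint(1, 5000, (4, 3, 20)))
print(logits.shape)   # torch.Size([4, 2])
```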
Procedia PDF Downloads 170
1083 Functional Switching of Serratia marcescens Transcriptional Regulator from Activator to Inhibitor of Quorum Sensing by Exogenous Addition
Authors: Norihiro Kato, Yuriko Takayama
Abstract:
Some gram-negative bacteria enable the simultaneous activation of gene expression via an N-acylhomoserine lactone (AHL)-dependent cell-to-cell communication system. Such a regulatory system for bacterial group behavior is termed quorum sensing (QS), because a diffusible AHL signal can accumulate around the cell as the cell density increases and trigger activation of the sequential QS process. By blocking QS, the expression of diverse genes related to infection, antibiotic production and biofilm formation is inhibited. Conditioning of QS by regulation of the DNA-receptor-AHL interaction is a potential target for enhancing host defenses against pathogenicity. We focused on an engineered application of the transcriptional regulator SpnR produced in the opportunistic human pathogen Serratia marcescens. SpnR can interact with AHL signals at an N-terminal domain and with the promoter region of a QS target gene at a C-terminal domain. As the initial step of QS activation, SpnR forms a complex with AHL to enhance the expression of the pig cluster; SpnR normally acts as an activator of the expression of the QS-dependent gene. In this research, we attempt to artificially control QS by changing the role of SpnR. The QS-dependent prodigiosin production is expected to be inhibited by externally added SpnR in the culture broth of the AS-1 strain, because the AHL concentration is kept below the threshold by AHL-SpnR complex formation. Maltose-binding protein (MBP)-tagged SpnR (MBP-SpnR) was overexpressed in Escherichia coli and purified using affinity chromatography with an amylose resin column. The specific interaction between AHL and MBP-SpnR was demonstrated with a quartz crystal microbalance (QCM) sensor. AHL with an amino end-group was coupled to a COOH-terminated self-assembled monolayer prepared on the gold electrode of a 27-MHz quartz crystal sensor using water-soluble carbodiimide. After the injection of MBP-SpnR into a cup-type sensor cell filled with buffer solution, the time course of the resonant frequency change (ΔFs) was determined. A decrease in ΔFs clearly showed the uptake of MBP-SpnR onto the AHL-immobilized electrode. Furthermore, no binding affinity was observed after heat-inactivation of MBP-SpnR at 80°C. These results suggest that MBP-SpnR possesses a specific affinity for AHL. MBP-SpnR was added to the culture medium as an AHL trap to study its inhibitory effects on intracellularly accumulated prodigiosin. With approximately 2 µM MBP-SpnR, the amount of prodigiosin induced was half that of the control without any additives. In conclusion, the function of SpnR could be switched by adding it to the cell culture. Exogenously added MBP-SpnR possesses a high affinity for AHL derived from cells and acts as an inhibitor of AHL-mediated QS. Keywords: intracellular signaling, microbial biotechnology, quorum sensing, transcriptional regulator
Procedia PDF Downloads 267
1082 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. In spite of all the different ways of handling used tires, the most common is to deposit them in a landfill, creating a stock of tires. These stocks can pose a fire hazard and provide an environment for rodents, mosquitoes and other pests, causing health hazards and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique which can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which the poly-, di- and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere. This is because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percentage and by its crosslink density determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that the devulcanization happened successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR). Keywords: devulcanization, recycling, rubber, waste
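For readers unfamiliar with how crosslink density is obtained from swelling in toluene, the sketch below shows one common route, the Flory-Rehner equation, together with a devulcanization percentage computed from the before/after densities. The interaction parameter, solvent molar volume and swelling values are assumed for illustration and are not the values measured in this work.

```python
# Minimal sketch, assuming a Flory-Rehner treatment of equilibrium swelling
# in toluene; all numerical inputs are illustrative placeholders.
import numpy as np

V_S = 106.3    # molar volume of toluene, cm^3/mol (literature value)
CHI = 0.39     # assumed rubber-toluene interaction parameter

def crosslink_density(v_r, chi=CHI, v_s=V_S):
    """Flory-Rehner estimate of crosslink density (mol/cm^3) from the
    rubber volume fraction v_r in the swollen gel."""
    return -(np.log(1.0 - v_r) + v_r + chi * v_r**2) / (v_s * (v_r**(1.0/3.0) - v_r / 2.0))

# Hypothetical rubber volume fractions before and after devulcanization.
nu_initial = crosslink_density(0.30)
nu_final = crosslink_density(0.18)

devulcanization_percent = 100.0 * (1.0 - nu_final / nu_initial)
print(f"nu_i = {nu_initial:.2e} mol/cm^3, nu_f = {nu_final:.2e} mol/cm^3")
print(f"devulcanization = {devulcanization_percent:.1f} %")
```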
Procedia PDF Downloads 385
1081 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design
Authors: Aikaterini Tavoulari, Mike Richardson
Abstract:
Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest area of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: The study uses a community-based participatory research design. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight and hearing, as well as movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and their potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility in various disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and, more broadly, the community with sensory impairments. Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking
Procedia PDF Downloads 43
1080 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending
Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel
Abstract:
Blending is a common process in the production of pharmaceutical dosage forms, in which high shear is used to obtain a homogeneous dosage. The shear required can lead to uncontrolled attrition of excipients and can affect APIs. This has an impact on the performance of the formulation, as it can alter the structure of the mixture. Therefore, it is important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. The attrition behavior of the excipients was evaluated using a high shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies ranging from crystalline (sieved) and granulated to spray-dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP) and mannitol. The rotational speed of the blender was set at 1370 rpm to give the highest shear, with a Froude (Fr) number of 9. Blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon which occurs in the first minutes of the high shear blending process. Increasing the blending time beyond 2 min showed no further change in particle size distribution. Material chemistry was identified as a key driver of the differences in attrition behavior between excipients. This is mainly related to the proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Secondly, morphology was also identified as a driver of the degree of attrition. Granular products consisting of irregular surfaces showed the highest reduction in particle size. This is due to the weak solid bonds created between the primary particles during the granulation process. Granular DCP and mannitol show a reduction of 80-90% in x10 (µm), compared to a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from the granular lactose, all the remaining lactose morphologies (spray-dried round, sieved tomahawk, milled) show little change in particle size. Similar observations were made for spray-dried fibrous MCC. All these morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Therefore, products containing brittle materials such as mannitol and DCP are more prone to fragmentation when exposed to shear. Granular products with irregular surfaces show increased attrition, while spherical, crystalline or fibrous morphologies show a reduced impact during high shear blending. These changes in size will affect the functionality attributes of the formulation, such as flow, API homogeneity, tableting, formation of dust, etc. Hence it is important for formulators to fully understand the excipients in order to make the right choices. Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear
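The reported x10 reductions can be quantified from laser diffraction output roughly as in the sketch below, where the x10 percentile is interpolated from a cumulative volume distribution before and after blending. The size grid and cumulative values are invented for illustration only.

```python
# Minimal sketch: reading the x10 percentile off a cumulative volume
# distribution by interpolation, before and after high shear blending.
import numpy as np

# Size classes (um) and cumulative volume undersize (%) -- hypothetical.
sizes = np.array([1, 2, 5, 10, 20, 50, 100, 200, 400])
cum_before = np.array([0.5, 1.5, 4.0, 8.0, 15.0, 40.0, 70.0, 95.0, 100.0])
cum_after = np.array([3.0, 8.0, 20.0, 35.0, 55.0, 80.0, 93.0, 99.0, 100.0])

def percentile_size(cum, sizes, q):
    """Interpolate the particle size at which the cumulative curve reaches q %."""
    return np.interp(q, cum, sizes)

x10_before = percentile_size(cum_before, sizes, 10.0)
x10_after = percentile_size(cum_after, sizes, 10.0)
reduction = 100.0 * (x10_before - x10_after) / x10_before

print(f"x10 before blending: {x10_before:.1f} um")
print(f"x10 after blending:  {x10_after:.1f} um")
print(f"reduction in x10:    {reduction:.0f} %")
```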
Procedia PDF Downloads 111
1079 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the overall world economy, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth of sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model. Keywords: data mining, SME growth, success factors, web mining
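The classifier comparison described above could be prototyped along the lines of the following sketch, in which an SVM, a Random Forest and a small neural network are compared by cross-validated accuracy on a placeholder feature matrix. The synthetic features stand in for the web-mined and insurer-provided indicators and are not the study's data.

```python
# Minimal sketch: comparing SVM, Random Forest and a small ANN by
# cross-validated accuracy on synthetic growth-labeled data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Placeholder dataset: 1000 SMEs, 20 features (e.g. website freshness,
# number of products listed, sector dummies), label 1 = "grew".
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32),
                                                         max_iter=500, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```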
Procedia PDF Downloads 267
1078 The Joy of Painless Maternity: The Reproductive Policy of the Bolsheviks in the 1930s
Authors: Almira Sharafeeva
Abstract:
In the Soviet Union of the 1930s, motherhood was seen as a natural need of women. The masculine Bolshevik state did not see the emancipated woman as free from her maternal burden. In order to support the idea of "joyful motherhood," a medical discourse on the anesthesia of childbirth emerged. In March 1935, at the IX Congress of obstetricians and gynecologists, the People's Commissar of Public Health of the RSFSR, G.N. Kaminsky, raised the issue of anesthesia of childbirth. From that year onwards, medical, literary and artistic publications began to publish articles and studies devoted to the issue with remarkable frequency, and the goal of anesthetizing all childbirths in the USSR was proclaimed. These publications were often filled with anti-German and anti-capitalist propaganda, through which the advantages of socialism over capitalism and Nazism were demonstrated. At congresses, in journals and at institute meetings, doctors' discussions of obstetric anesthesia were accompanied by discussions of shortening the duration of the childbirth process, the prevention of disease, the admission of nurses to the procedure, and the proper behavior of women during the childbirth process. With the help of articles from medical periodicals of the 1930s, brochures, and documents from the funds of the Institute of Obstetrics and Gynecology of the Academy of Medical Sciences of the USSR (TsGANTD SPb) and the Department of Obstetrics and Gynecology of the NKZ USSR (GARF), this paper shows how the advantages of the Soviet system and the socialist way of life were constructed through the problem of childbirth pain relief; it also shows how childbirth pain relief in the USSR was related to the foreign policy situation and how projects of labor pain relief were related to the anti-abortion policy of the state. This study also attempts to answer the question of why anesthesia of childbirth in the USSR did not become widespread and how, through this medical procedure, the Soviet authorities tried to take control of a female function (childbirth) that was not available to men. Considering this subject from the perspective of gender studies and the social history of medicine, it is productive to use the term 'biopolitics'. Michel Foucault and Antonio Negri wrote that biopolitics takes under its wing the control and management of hygiene, nutrition, fertility, sexuality and contraception. The central issue of biopolitics is population reproduction. It includes strategies for intervening in collective existence in the name of life and health, and ways of subjectivation by which individuals are made to work on themselves. The Soviet state, through intervention in the reproductive lives of its citizens, sought to realize its goals of population growth, which was necessary to demonstrate the benefits of living in the Soviet Union and to train a pool of builders of socialism. The woman's body was seen as the object over which the socialist experiment of reproductive policy was conducted. Keywords: labor anesthesia, biopolitics of Stalinism, childbirth pain relief, reproductive policy
Procedia PDF Downloads 70
1077 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is a natural language processing (NLP) problem concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, Named-Entity Recognition (NER), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: an SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT is, in our case, a monolingual model based on transformer neural networks. It uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that the use of deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than the SVM, but it still performs worse than the monolingual BERT model. Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
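The traditional baseline mentioned above, an SVM over a bag of word n-grams, can be sketched as follows. The tiny Turkish toy corpus and labels are invented for illustration and do not come from the 76,000-comment corpus used in the study.

```python
# Minimal sketch: SVM over a bag of word n-grams for three-class polarity,
# with no stemming or morphological parsing, mirroring the baseline setup.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "urun cok guzel, tavsiye ederim",      # positive
    "kargo gec geldi ama urun iyi",        # neutral-ish
    "berbat bir deneyim, bir daha asla",   # negative
    "fiyatina gore gayet basarili",        # positive
    "musteri hizmetleri cok kotu",         # negative
    "ortalama bir urun, idare eder",       # neutral
]
labels = ["positive", "neutral", "negative", "positive", "negative", "neutral"]

# Word unigrams + bigrams as features, linear SVM as the classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["urun gercekten cok basarili"]))
```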
Procedia PDF Downloads 146
1076 Modeling the International Economic Relations Development: The Prospects for Regional and Global Economic Integration
Authors: M. G. Shilina
Abstract:
The phenomenon of interstate economic interaction is complex. 'Economic integration', as one of its types, can be explored through the prism of international law, theories of the world economy, politics and international relations. The most objective study of the phenomenon requires a comprehensive, multifactorial approach. In the new geopolitical realities, the problems of coexistence and possible interconnection of various mechanisms of interstate economic interaction are actively discussed. Currently, the states of the Eurasian continent support the move towards economic integration. At the same time, the existing fragmentation of international economic law in Eurasia is seen as an important problem. The Eurasian space is characterized by various types of interstate relations: international agreements (multilateral and bilateral) and a large number of cooperation formats (from discussion platforms to organizations aimed at deep integration). For their harmonization, it is necessary to have a clear vision of the options for the phased regulation of international economic relations. Given the rapid development of international economic relations, modeling (including prognostic modeling) can be optimally used as the main scientific method for presenting the phenomenon. On the basis of this method, it is possible to form a vision of the current situation and the best options for further action. In order to determine the most objective version of integration development, a combination of several approaches was used. The normative legal approach, the descriptive method of legal modeling, was taken as the basis for the analysis. The set of legal methods was supplemented by prognostic methods from international relations science. The key elements of the model are the international economic organizations and associations of states existing in the Eurasian space (the Eurasian Economic Union (EAEU), the European Union (EU), the Shanghai Cooperation Organization (SCO), the Chinese project 'One Belt, One Road' (OBOR), the Commonwealth of Independent States (CIS), BRICS, etc.). A general term, interstate interaction mechanisms (IIMs), is proposed for the elements of the model. The aim of building a model of current and future Eurasian economic integration is to show optimal options for the joint economic development of the states and IIMs. The long-term goal of this development is a new economic and political space, the so-called 'Great Eurasian Community'. The process of achieving this long-term goal consists of successive steps. Modeling the integration architecture and dividing the interaction into stages led us to the following conclusion: the SCO is able to transform Eurasia into a single economic space. Gradual implementation of the complex phased model, in which the SCO+ plays a key role, will allow effective economic integration to be built for all its participants and an economically strong community to be created. The model can have practical value for politicians, lawyers, economists and other participants involved in the economic integration process. A clear, systematic structure can serve as a basis for further governmental action. Keywords: economic integration, The Eurasian Economic Union, The European Union, The Shanghai Cooperation Organization, The Silk Road Economic Belt
Procedia PDF Downloads 150
1075 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil
Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado
Abstract:
Despite the significant advances made by the construction industry in recent years, the entrenched absence of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and the integration of information throughout the building lifecycle. This scenario also reflects the industry's reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and postgraduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs are being required to review the traditional design pedagogical processes in order to promote a comprehensive notion of integration and simultaneity between the phases of the project. In this context, the coherent inclusion of computational design in all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at the Schools of Architecture and Civil Engineering, and the courses offered at well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge among students at the postgraduate level. The qualitative research methodology was developed based on the analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation and pedagogical practices. The results revealed a broad heterogeneity among the students regarding their professional experience, the hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological project development approach relevant to the whole building life cycle. In its conclusion, the present research offers an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools. Keywords: building information modeling (BIM), BIM education, BIM process, design teaching
Procedia PDF Downloads 154
1074 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction
Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini
Abstract:
Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health condition of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulties in measuring it. The recovery of the fECG from signals acquired non-invasively by electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals which maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which achieved the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG, estimated by principal component analysis (PCA), from the abdominal signals and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotations from dataset A provided by the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and the ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. In particular, the comparison on the NIdb was performed by defining an index of quality for the fetal RR series. On the annotated database ADdb, the QIO method provided the performance indexes Sens = 0.9988, PPA = 0.9991, F1 = 0.9989, outperforming the ICA-based one, which provided Sens = 0.9966, PPA = 0.9972, F1 = 0.9969. On the NIdb, the index of quality was higher for the QIO-based method than for the ICA-based one in 35 records out of the 55 cases. The QIO-based method gave very high performance on both databases. The results of this study foresee the application of the algorithm, in a fully unsupervised way, for implementation in wearable devices for self-monitoring of fetal health. Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable
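The detection scores reported above (Sens, PPA, F1) can be computed by matching detected fetal QRS positions against reference annotations within a tolerance window, roughly as in the sketch below. The matching tolerance, toy beat times and noise levels are illustrative assumptions.

```python
# Minimal sketch: Sens, PPA and F1 from a greedy one-to-one matching of
# detected fetal QRS times against reference annotations.
import numpy as np

def qrs_scores(reference_s, detected_s, tol=0.05):
    """Match detections to annotations within +/- tol seconds and score them."""
    reference = sorted(reference_s)
    detected = sorted(detected_s)
    used = [False] * len(detected)
    tp = 0
    for r in reference:
        for i, d in enumerate(detected):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True
                tp += 1
                break
    fn = len(reference) - tp
    fp = len(detected) - tp
    sens = tp / (tp + fn) if reference else 0.0
    ppa = tp / (tp + fp) if detected else 0.0
    f1 = 2 * sens * ppa / (sens + ppa) if (sens + ppa) else 0.0
    return sens, ppa, f1

# Toy example: fetal heart rate ~140 bpm, one missed beat and one false detection.
ref = np.arange(0.0, 10.0, 60.0 / 140.0)
det = np.delete(ref, 5) + np.random.default_rng(1).normal(0.0, 0.01, len(ref) - 1)
det = np.append(det, 4.95)   # spurious detection far from any true beat
print("Sens=%.3f  PPA=%.3f  F1=%.3f" % qrs_scores(ref, det))
```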
Procedia PDF Downloads 280