Search results for: assessment of basic language and learning skills
472 The Macrophage Migration Inhibitory Factor and Stem Cell Factor Levels in Serum of Adolescent and Young Adults with Mood Disorders: A Two Year Follow-Up Study
Authors: Aleksandra Rajewska-Rager, Maria Skibinska, Monika Dmitrzak-Weglarz, Natalia Lepczynska, Pawel Kapelski, Joanna Pawlak, Joanna Hauser
Abstract:
Introduction: Inflammation and cytokines have emerged as a promising target in mood disorders research; however, studies of inflammatory alterations among adolescents and young adults with mood disorders are still very limited. Macrophage Migration Inhibitory Factor (MIF) and Stem Cell Factor (SCF) are pleiotropic cytokines which may play an important role in the pathophysiology of mood disorders. The aim of this study was to investigate serum levels of these factors in adolescents and young adults with mood disorders compared to healthy controls. Subjects: We enrolled 79 patients aged 12-24 years in a 2-year follow-up study with a primary diagnosis of mood disorders: bipolar disorder (BP) and unipolar disorder with BP spectrum. The study group included 23 males (mean age 19.08, SD 3.3) and 56 females (18.39, SD 3.28). The control group consisted of 35 persons: 7 males (20.43, SD 4.23) and 28 females (21.25, SD 2.11). Clinical diagnoses according to DSM-IV-TR criteria were assessed using the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version (K-SADS-PL) in adolescents and the Structured Clinical Interview for the Diagnostic and Statistical Manual (SCID) in young adults, respectively. Clinical assessment included evaluation of clinical factors and symptom severity (rated using the Hamilton Depression Rating Scale and the Young Mania Rating Scale). Clinical and biological evaluations were made at control visits at baseline (week 0), euthymia (at month 3 or 6), and after 12 and 24 months. Methods: Serum protein concentrations were determined by the enzyme-linked immunosorbent assay (ELISA) method, using Human MIF and SCF DuoSet ELISA kits. Non-parametric tests were used in the analyses: the Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman’s ANOVA, the Wilcoxon signed rank test, and Spearman correlation. Statistical significance was defined as p < 0.05. Results: Comparing MIF and SCF levels between the acute episode of depression/hypomania/mania at baseline and euthymia (at month 3 or 6), we did not find any statistically significant differences. At baseline, patients older than 18 years had lower MIF levels than patients younger than 18 years, and the MIF level at baseline correlated positively with age (p=0.004). Positive correlations of the SCF level at months 3 and 6 with depression or mania occurrence at month 24 (p=0.03 and p=0.04, respectively) were detected. Strong correlations between MIF and SCF levels at baseline (p=0.0005) and at month 3 (p=0.03) were observed. Discussion: Our results did not show any differences in MIF and SCF levels between the acute episode of depression/hypomania/mania and euthymia in young patients. Further studies on larger groups are recommended. The study was funded by National Science Centre, Poland, grant no. 2011/03/D/NZ5/06146.
Keywords: cytokines, MIF, mood disorders, SCF
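To illustrate the non-parametric comparisons named in the Methods above (e.g., the Mann-Whitney U test and Spearman correlation), here is a minimal Python sketch. The serum values and group sizes are hypothetical placeholders, not the authors' data or analysis code.

```python
# Minimal sketch of the non-parametric comparisons described above (Mann-Whitney U,
# Spearman correlation), using hypothetical serum values -- not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
mif_under18 = rng.lognormal(mean=2.0, sigma=0.4, size=30)   # hypothetical MIF, patients < 18 y
mif_over18 = rng.lognormal(mean=1.8, sigma=0.4, size=49)    # hypothetical MIF, patients >= 18 y
age = rng.uniform(12, 24, size=79)                          # hypothetical ages
mif_all = rng.lognormal(mean=2.0, sigma=0.4, size=79)       # hypothetical baseline MIF

u_stat, u_p = mannwhitneyu(mif_under18, mif_over18, alternative="two-sided")
rho, rho_p = spearmanr(age, mif_all)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"Spearman rho(age, MIF) = {rho:.2f}, p = {rho_p:.3f}")
```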
471 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa
Authors: Yohana Fessehazion
Abstract:
Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations that exceeded acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years several environmental assessments reported on the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory storage facility for over two decades. Recently, the theft of containers of the toxic substance from the Thor Chemicals warehouse and the subsequent fire that ravaged the facility further put the factory in the spotlight, escalating the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and the Inanda Dam, used as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions are favourable for microbial activity to methylate mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected at each sampling site, from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference (LSD) post hoc test to determine whether mercury contamination varies with distance from the source of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate a higher concentration of mercury than terrestrial plants and crops.
Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth
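A hedged sketch of the planned statistics (one-way ANOVA across sampling sites, LSD-style pairwise comparisons, and PCA) is shown below; all Hg values, site means, and the feature matrix are invented for illustration only.

```python
# Illustrative sketch of the planned analyses: one-way ANOVA across sites, LSD-style
# pairwise t-tests after a significant ANOVA, and PCA. Hypothetical Hg values in ug/g.
import numpy as np
from scipy.stats import f_oneway, ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
sites = {
    "Thor_source": rng.normal(12.0, 2.0, 10),
    "Mngceweni": rng.normal(6.0, 1.5, 10),
    "Umgeni": rng.normal(3.0, 1.0, 10),
    "Inanda_dam": rng.normal(1.5, 0.5, 10),
}

f_stat, p_val = f_oneway(*sites.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Fisher's LSD is, in essence, unadjusted pairwise t-tests run only after a significant ANOVA.
names = list(sites)
if p_val < 0.05:
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            t, p = ttest_ind(sites[names[i]], sites[names[j]])
            print(f"{names[i]} vs {names[j]}: t = {t:.2f}, p = {p:.4f}")

# PCA on a samples-by-measurements table (e.g., Hg in water, sediment, roots, stems).
X = rng.normal(size=(40, 4))
scores = PCA(n_components=2).fit_transform(X)
print(scores.shape)
```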
470 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab
Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco
Abstract:
Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools owing to their inherent high selectivity towards their bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change in charge transfer resistance (Rct) resulting from the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor’s ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We observed a remarkable change in the Rct only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus. In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria such as Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus, from 10⁸ to 10⁰ CFU/mL, at a ratio of 1:10 in nasal swab matrices collected from healthy donors, and three different sensors were applied to measure the various concentrations of bacteria. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared to time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on the assessment of the biosensor’s analytical performance in different biological samples and the discovery of new phage bioreceptors.
Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus
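As a rough illustration of how an impedimetric calibration and detection limit of this kind can be computed, the sketch below fits the Rct change against log10 concentration and estimates the concentration whose predicted signal sits 3 standard deviations above the blank. All numbers are invented placeholders rather than the published calibration data.

```python
# Hedged sketch of an impedimetric calibration: fit delta-Rct vs log10(CFU/mL) over the
# linear range, then read off the concentration giving a signal 3 SD above the blank.
import numpy as np

log_conc = np.array([1.0, 2.0, 3.0, 4.0])            # log10(CFU/mL) over the linear range
delta_rct = np.array([120.0, 235.0, 355.0, 480.0])   # hypothetical Rct change (ohms)
blank_mean, blank_sd = 20.0, 15.0                     # hypothetical blank response statistics

slope, intercept = np.polyfit(log_conc, delta_rct, deg=1)
log_lod = (blank_mean + 3 * blank_sd - intercept) / slope
print(f"slope = {slope:.1f} ohm/decade, estimated LOD ~ {10 ** log_lod:.1f} CFU/mL")
```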
469 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications
Authors: Xianwei Zheng, Yuan Yan Tang
Abstract:
The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning, and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle the so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using the translation and modulation operators of graph signals, following calculations similar to those of the classical windowed Fourier transform. Specifically, the translation and modulation operators of graph signals are defined using the Laplacian eigenvectors. In classical signal processing, the translation operator can be defined using the Fourier atoms; the graph signal translation is defined analogously using the Laplacian eigenvectors, and the modulation of a graph signal can likewise be established using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator is defined, in analogy with classical modulation, by multiplying a graph signal with the entries of each Fourier atom. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed. The relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Similarly, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed, and necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames, and dual frames are presented. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g., the Minnesota road graph, and on random graphs. Experimental results show that the novel framework captures new features of graph signals.
Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis
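Reading the abstract's definition literally (graph modulation as the inverse graph Fourier transform of the translation of a signal's graph Fourier transform, with translation defined via the Laplacian eigenvectors), a minimal NumPy sketch of the construction might look as follows; it is an interpretation for illustration, not the authors' implementation.

```python
# Sketch of graph Fourier transform, generalized translation, and the proposed modulation.
import numpy as np

def graph_fourier_basis(A):
    """Eigendecomposition of the combinatorial Laplacian of an undirected graph."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, U = np.linalg.eigh(L)      # columns of U are the Laplacian eigenvectors
    return eigvals, U

def gft(f, U):
    return U.T @ f                       # graph Fourier transform

def igft(fhat, U):
    return U @ fhat                      # inverse graph Fourier transform

def translate(f, i, U):
    """Generalized translation to vertex i: sqrt(N) * sum_l fhat(l) u_l(i) u_l(n)."""
    N = U.shape[0]
    return np.sqrt(N) * (U * U[i, :]) @ gft(f, U)

def modulate(f, k, U):
    """Proposed modulation: inverse GFT of the translation (to index k) of the GFT of f."""
    return igft(translate(gft(f, U), k, U), U)

# Example on a small random graph.
rng = np.random.default_rng(7)
A = np.triu(rng.integers(0, 2, size=(8, 8)), 1)
A = (A + A.T).astype(float)              # symmetric 0/1 adjacency, no self-loops
_, U = graph_fourier_basis(A)
f = rng.normal(size=8)
print(modulate(f, k=2, U=U))
```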
468 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability
Authors: Anastasios Ntousas
Abstract:
The purpose of this paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis and their impact on reading ability. The following questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate and create stable, synchronized representations of the four word features; and would morphological knowledge of words be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis speculates that words include four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four word representations work together to make word reading an effective task, and a lack of knowledge in one of the representations might disrupt reading comprehension. The degree to which the four word features connect together distinguishes high from low lexical quality word representations. When the four representational attributes connect together effectively, readers have high lexical quality words; when the connections are weak, readers have low lexical quality words. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of word-specific orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic, and meaning representations of unknown words. The phonology of known words might effectively activate the remaining representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy may appear with readers’ morphological knowledge. Morphemes might facilitate readers’ ability to pronounce and spell new, unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings, so knowledge of phonology and morphology might support and increase reading comprehension. The paper is based on a careful selection, discussion, and comparison of the theoretical material underlying the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers’ morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.
Keywords: morphology, orthography, reading ability, reading comprehension
467 Hybrid Renewable Energy Systems for Electricity and Hydrogen Production in an Urban Environment
Authors: Same Noel Ngando, Yakub Abdulfatai Olatunji
Abstract:
Renewable energy micro-grids, such as those powered by solar or wind energy, are often intermittent in nature. This means that the amount of energy generated by these systems can vary depending on weather conditions or other factors, which can make it difficult to ensure a steady supply of power. To address this issue, energy storage systems have been developed to increase the reliability of renewable energy micro-grids. Battery systems have been the dominant energy storage technology for renewable energy micro-grids. Batteries can store large amounts of energy in a relatively small and compact package, making them easy to install and maintain in a micro-grid setting. Additionally, batteries can be quickly charged and discharged, allowing them to respond quickly to changes in energy demand. However, the process involved in recycling batteries is quite costly and difficult. An alternative energy storage system that is gaining popularity is hydrogen storage. Hydrogen is a versatile energy carrier that can be produced from renewable energy sources such as solar or wind. It can be stored in large quantities at low cost, making it suitable for long-distance mass storage. Unlike batteries, hydrogen does not degrade over time, so it can be stored for extended periods without the need for frequent maintenance or replacement, allowing it to be used as a backup power source when the micro-grid is not generating enough energy to meet demand. When hydrogen is needed, it can be converted back into electricity through a fuel cell. Energy consumption data is got from a particular residential area in Daegu, South Korea, and the data is processed and analyzed. From the analysis, the total energy demand is calculated, and different hybrid energy system configurations are designed using HOMER Pro (Hybrid Optimization for Multiple Energy Resources) and MATLAB software. A techno-economic and environmental comparison and life cycle assessment (LCA) of the different configurations using battery and hydrogen as storage systems are carried out. The various scenarios included PV-hydrogen-grid system, PV-hydrogen-grid-wind, PV-hydrogen-grid-biomass, PV-hydrogen-wind, PV-hydrogen-biomass, biomass-hydrogen, wind-hydrogen, PV-battery-grid-wind, PV- battery -grid-biomass, PV- battery -wind, PV- battery -biomass, and biomass- battery. From the analysis, the least cost system for the location was the PV-hydrogen-grid system, with a net present cost of about USD 9,529,161. Even though all scenarios were environmentally friendly, taking into account the recycling cost and pollution involved in battery systems, all systems with hydrogen as a storage system produced better results. In conclusion, hydrogen is becoming a very prominent energy storage solution for renewable energy micro-grids. It is easier to store compared with electric power, so it is suitable for long-distance mass storage. Hydrogen storage systems have several advantages over battery systems, including flexibility, long-term stability, and low environmental impact. The cost of hydrogen storage is still relatively high, but it is expected to decrease as more hydrogen production, and storage infrastructure is built. With the growing focus on renewable energy and the need to reduce greenhouse gas emissions, hydrogen is expected to play an increasingly important role in the energy storage landscape.Keywords: renewable energy systems, microgrid, hydrogen production, energy storage systems
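For readers unfamiliar with how configurations are ranked in tools like HOMER, the sketch below computes a simple net present cost (NPC) for a few of the listed scenarios; the capital costs, operating costs, lifetime, and discount rate are made-up placeholders, not the study's inputs.

```python
# Hedged sketch of a net present cost comparison between hybrid configurations.
def net_present_cost(capital, annual_cost, annual_revenue=0.0, years=25, rate=0.06):
    """NPC = capital + discounted sum of yearly net costs over the project lifetime."""
    return capital + sum((annual_cost - annual_revenue) / (1 + rate) ** t
                         for t in range(1, years + 1))

configs = {
    "PV-hydrogen-grid": net_present_cost(capital=6.0e6, annual_cost=0.35e6),
    "PV-battery-grid-wind": net_present_cost(capital=7.5e6, annual_cost=0.40e6),
    "biomass-hydrogen": net_present_cost(capital=8.0e6, annual_cost=0.45e6),
}
for name, npc in sorted(configs.items(), key=lambda kv: kv[1]):
    print(f"{name}: NPC = USD {npc:,.0f}")
```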
466 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer
Authors: Sebastian Hiergeist
Abstract:
It is a common view that Unmanned Aerial Vehicles (UAV) tend to migrate into the civil airspace. This trend is challenging UAV manufacturer in plenty ways, as there come up a lot of new requirements and functional aspects. On the higher application levels, this might be collision detection and avoidance and similar features, whereas all these functions only act as input for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes up to ensure a continuous safe flight and landing. As these systems are flight critical, they have to be built up redundantly to be able to provide a Fail-Operational behavior. Recent architectural approaches of FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may not be available anymore as they are more and more replaced by higher sophisticated System on Chip (SoC). As the avionic industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from foreign markets is almost inevitable. Products stemming from the industrial market developed according to IEC 61508, or automotive SoCs, according to ISO 26262, can be seen as candidates as they have been developed for similar environments. Current available SoC from the industrial or automotive sector provides quite a broad selection of interfaces like, i.e., Ethernet, SPI or FlexRay, that might come into account for the implementation of a redundancy network. In this context, possible network architectures shall be investigated which could be established by using the interfaces stated above. Of importance here is the avoidance of any single point of failures, as well as a proper segregation in distinct fault containment regions. The performed analysis is supported by the use of guidelines, published by the aviation authorities (FAA and EASA), on the reliability of data networks. The main focus clearly lies on the reachable level of safety, but also other aspects like performance and determinism play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, also the risk of design errors, which might lead to common mode faults, increases. Thus in the context of this work also the aspect of dissimilarity will be considered to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces available in products from the most common silicon manufacturer. The resulting work shall support the design of future UAV FCCs by giving a guideline on building up a redundancy network between SoCs, solely using on board interfaces. Therefore the author will provide a detailed usability analysis on available interfaces provided by recent SoC solutions, suggestions on possible redundancy architectures based on these interfaces and an assessment of the most relevant characteristics of the suggested network architectures, like e.g. safety or performance.Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)
465 An Architecture of Ingenuity and Empowerment
Authors: Timothy Gray
Abstract:
This paper will present work and discuss lessons learned during a semester-long travel study based in Southeast Asia, which was run in the Spring Semester of 2019 and again in the summer of 2023. The first travel group consisted of fifteen students, and the second group consisted of twelve students ranging from second-year to graduate level, student participants majoring in either architecture or planning. Students worked in interdisciplinary teams, each team beginning their travel study, living together in a separate small town for over a month in (relatively) remote conditions in rural Thailand. Students became intimately familiar with these towns, forged strong personal relationships, and built reservoirs of knowledge one conversation at a time. Rather than impose external ideas and solutions, students were asked to learn from and be open to lessons from the people and the place. The following design statement was used as a point of departure for their investigations: It is our shared premise that architecture exists in small villages and towns of Southeast Asia in the ingenuity of the people, that architecture exists in a shared language of making, modifying, and reusing. It is a modest but vibrant architecture, an architecture that is alive and evolving, an architecture that is small in scale, accessible, and one that emerges from the people. It is an architecture that can exist in a modified bicycle, a woven bamboo bridge, or a self-built community. Students were challenged to engage in existing conditions as design professionals, both empowering and lending coherence to the energies that already existed in the place. As one of the student teams noted in their design narrative: “During our field study, we had the unique opportunity to tour a number of informal settlements and meet and talk to residents through interpreters. We found that many of the residents work in nearby factories for dollars a day. Others find employment in self-generated informal economies such as hand carving and textiles. Despite extreme poverty, we found these places to be vibrant and full of life as people navigate these challenging conditions to live lives with purpose and dignity.” Students worked together with local community members and colleagues to develop a series of varied proposals that emerged from their interrogations of place and partnered with community members and professional colleagues in the development of these proposals. Project partners included faculty and student colleagues Yangon University, the mayor's Office, Planning Department Officials and religious leaders in Sawankhalok, Thailand, and community leaders in Natonchan, Thailand, to name a few. This paper will present a series of student community-based design projects that emerged from these conditions. The paper will also discuss this model of travel study as a way of building an architecture which uses social and cultural issues as a catalyst for design. The paper will discuss lessons relative to sustainable development that the Western students learned through their travels in Southeast Asia.Keywords: travel study, CAPasia, architecture of empowerment, modular housing
464 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia
Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis
Abstract:
The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analysis of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use for a given setting. For example, the human papilloma virus (HPV) DNA, mRNA, and cytology can be used for the cervical intraepithelial neoplasia grade 2+ (CIN2+) diagnosis. But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set for the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles that describe a DTA-NMA method for three or more index tests. Since the joint classification of the results from one index against the results of another index test amongst those with the target condition and amongst those without the target condition are rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion. Relevant unpublished material was also included. Ten relevant studies were finally included to evaluate their methodology. DTA-NMA methods that have been presented in the literature together with their advantages and disadvantages are described. In addition, using 37 studies for cervical cancer obtained from a published Cochrane review as a case study, an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+ is presented. As a conclusion, different approaches for the comparative DTA meta-analysis of multiple tests may conclude to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund- ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests ” (MIS 5047640).Keywords: colposcopy, diagnostic test, HPV, network meta-analysis
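As a simplified illustration of the 2x2-table inputs such DTA syntheses start from, the sketch below pools per-study sensitivity and specificity for a single index test on the logit scale; real DTA-NMA methods use hierarchical or bivariate models, and the counts here are invented.

```python
# Hedged sketch: per-study sensitivity/specificity from 2x2 tables with simple
# inverse-variance pooling on the logit scale (not a full DTA-NMA model).
import numpy as np

# (true positives, false negatives, true negatives, false positives) per study, one index test
studies = [(90, 10, 150, 30), (45, 15, 80, 20), (120, 20, 200, 40)]

def pooled_logit(successes_failures):
    logits, weights = [], []
    for s, f in successes_failures:
        p = (s + 0.5) / (s + f + 1.0)              # continuity-corrected proportion
        var = 1.0 / (s + 0.5) + 1.0 / (f + 0.5)    # variance of the logit
        logits.append(np.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled = np.average(logits, weights=weights)
    return 1.0 / (1.0 + np.exp(-pooled))

sensitivity = pooled_logit([(tp, fn) for tp, fn, tn, fp in studies])
specificity = pooled_logit([(tn, fp) for tp, fn, tn, fp in studies])
print(f"pooled sensitivity = {sensitivity:.2f}, pooled specificity = {specificity:.2f}")
```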
463 Pathway Linking Early Use of Electronic Device and Psychosocial Wellbeing in Early Childhood
Authors: Rosa S. Wong, Keith T.S. Tung, Winnie W. Y. Tso, King-Wa Fu, Nirmala Rao, Patrick Ip
Abstract:
Electronic devices have become an essential part of our lives. Various reports have highlighted the alarming use of electronic devices at early ages and its long-term developmental consequences. More sedentary screen time has been associated with increased adiposity, worse cognitive and motor development, and poorer psychosocial health. Apart from the problems caused by children’s own screen time, parents today often pay less attention to their children because of hand-held devices. Some anecdotes suggest that such distracted parenting has a negative impact on the parent-child relationship. This study examined whether electronic device-distracted parenting detrimentally affects parent-child activities, which may, in turn, impair children’s psychosocial health. In 2018/19, we recruited a cohort of preschoolers from 32 local kindergartens in Tin Shui Wai and Sham Shui Po for a 5-year programme aiming to build stronger foundations for children from disadvantaged backgrounds through an integrated support model involving the medical, education, and social service sectors. A comprehensive set of questionnaires was used to survey parents on how often they were distracted while parenting and how often they engaged in learning and recreational activities with their children. They were also asked to report their children’s amount of screen time and psychosocial problems. Mediation analyses were performed to test the direct and indirect effects of electronic device-distracted parenting on children’s psychosocial problems. The study recruited 873 children (448 females and 425 males, average age 3.42±0.35). Longer screen time was associated with more psychosocial difficulties (adjusted B=0.37, 95% CI: 0.12 to 0.62, p=0.004). Children’s screen time correlated positively with electronic device-distracted parenting (r=0.369, p < 0.01). We also found that electronic device-distracted parenting was associated with more hyperactive/inattentive problems (adjusted B=0.66, p < 0.01), less prosocial behavior (adjusted B=-0.74, p < 0.01), and more emotional symptoms (adjusted B=0.61, p < 0.001) in children. Further analyses showed that electronic device-distracted parenting exerted its influence both directly and indirectly through parent-child interactions, but to different extents depending on the outcome under investigation (38.8% for hyperactivity/inattention, 31.3% for prosocial behavior, and 15.6% for emotional symptoms). We found that parents’ use of devices and children’s own screen time both have negative effects on children’s psychosocial health. It is important for parents to set “device-free times” each day so as to ensure enough relaxed downtime for connecting with children and responding to their needs.
Keywords: early childhood, electronic device, psychosocial wellbeing, parenting
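A minimal sketch of a mediation analysis of the kind described (indirect effect estimated as the product of path coefficients, with a bootstrap confidence interval) is given below; the data are synthetic, and the study's actual models and covariates are not reproduced.

```python
# Hedged sketch of a simple mediation analysis: distracted parenting (X) -> parent-child
# activities (M) -> child psychosocial problems (Y), indirect effect = a * b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 873
distraction = rng.normal(size=n)                                        # predictor X
activities = -0.4 * distraction + rng.normal(size=n)                    # mediator M
problems = -0.5 * activities + 0.3 * distraction + rng.normal(size=n)   # outcome Y

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                          # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]    # M -> Y given X
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                       # bootstrap resample
    boot.append(indirect_effect(distraction[idx], activities[idx], problems[idx]))

point = indirect_effect(distraction, activities, problems)
print(f"indirect effect = {point:.3f}, "
      f"95% bootstrap CI = ({np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f})")
```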
462 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application
Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz
Abstract:
Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step that optimizes the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of comparisons between two potential outcomes, as E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable and to provide a large application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, we transform the non-binary variable of interest into a binary variable, taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to have a pair of potential outcomes and to oppose a category (or a group of categories) to another category (or a group of categories). Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: we transform the initial variable to obtain a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector (variable) E_e takes the value 0 when its category is not present and the value 1 when its category is present. This allows a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector (variable) E_e takes the value 0 when the first category (the reference category) is present and the value 1 when its category is present. This allows a pairwise TMLE comparing the difference in the outcome between the first level (fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party.
Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation
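The complete disjunctive (dummy) coding described above can be illustrated with a short sketch; it shows only the coding and the pairwise contrast setup on synthetic data, not the TMLE targeting step itself.

```python
# Illustrative sketch of complete disjunctive coding of a five-level education variable
# into binary indicators E_e, ready for pairwise "one category vs. the rest" contrasts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
levels = ["none", "primary", "secondary", "bachelor", "postgraduate"]
df = pd.DataFrame({"education": rng.choice(levels, size=1000)})

# One indicator per category: E_e = 1 if the category is present for an individual, 0 otherwise.
dummies = pd.get_dummies(df["education"], prefix="E").astype(int)
df = pd.concat([df, dummies], axis=1)

# "One category vs. all remaining categories" contrast for, e.g., the 'bachelor' level.
treated = df["E_bachelor"] == 1       # individuals in the category of interest
control = df["E_bachelor"] == 0       # individuals in any other category
print(dummies.sum())
print(f"'bachelor' vs. rest: {treated.sum()} vs. {control.sum()} individuals")
```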
461 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft
Authors: Saurabh Sharma
Abstract:
Introduction: The case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, wherein an extremely complex geology was encountered while excavating the tunnelling shaft for launching Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated towards the presence of an extremely hard rocky mass from a depth of 3-4 m itself, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3m onwards and presence of Grade-III and better rock from 5-6m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5m to prevent secant piles from getting exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing were proposed, which is a normal practice in such strata. However, with the increase in depth of excavation, the rock quality kept on decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m converting to conglomerate formation at the depth of 15m. This worsening of geology from high grade rock to slushy conglomerate formation can never be predicted and came as a surprise to even the best geotechnical engineers. Since the excavation had already been cut down vertically to manage the shaft size, the execution was continued with enhanced cautions to stabilise the side slopes. But, when the shaft work was about to finish, a collapse was encountered on one side of the excavation shaft. This collapse was unexpected and surprising since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rockbolts had already been readjusted to accommodate rock fractures. The above scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme shall have to be designed in such a way to ensure safe completion of works. Accordingly, following revisions to excavation scheme were made: The excavation would be carried while maintaining a slope based on type of soil/rock. The rock bolt type was changed from SN rockbolts to Self Drilling type anchor. The grid size of the bolts changed on real time assessment. the excavation carried out by implementing a ‘Bench Release Approach’. Aggressive Real Time Instrumentation Scheme. Discussion: The above case Study again asserts vitality of correct interpretation of the geological strata and the need of real time revisions of the construction schemes based on the actual site data. The excavation is successfully being done with the above revised scheme, and further details of the Revised Slope Stabilisation Scheme, Instrumentation Schemes, Monitoring results, along with the actual site photographs, shall form the part of the final Paper.Keywords: unconfined compressive strength (ucs), rock mass rating (rmr), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete
460 Genetically Informed Precision Drug Repurposing for Rheumatoid Arthritis
Authors: Sahar El Shair, Laura Greco, William Reay, Murray Cairns
Abstract:
Background: Rheumatoid arthritis (RA) is a chronic, systemic, inflammatory, autoimmune disease that involves damage to joints and erosion of the associated bone and cartilage, resulting in reduced physical function and disability. RA is a multifactorial disorder influenced by heterogeneous genetic and environmental factors. While different medications have proven successful in reducing the inflammation associated with RA, they often come with significant side effects and limited efficacy. To address this, the novel pharmagenic enrichment score (PES) algorithm was tested in self-reported RA patients from the UK Biobank (UKBB), a cohort of predominantly European ancestry, to identify individuals with a high genetic risk in clinically actionable biological pathways and thereby uncover novel opportunities for precision interventions and drug repurposing in RA. Methods and materials: Genetic association data for rheumatoid arthritis were derived from publicly available genome-wide association study (GWAS) summary statistics (N=97,173). The PES framework exploits competitive gene set enrichment to identify pathways that are associated with RA and to explore novel treatment opportunities. These data were then integrated with the WebGestalt, Drug Gene Interaction database (DGIdb), and DrugBank databases to identify compounds with existing use or potential for repurposed use. The PES for each of these candidates was then profiled in individuals with RA in the UKBB (Ncases = 3,719, Ncontrols = 333,160). Results: A total of 209 pathways with known drug targets were identified after multiple-testing correction. Several pathways, including interferon gamma signaling and the TID pathway (which relates to a chaperone that modulates interferon signaling), were significantly associated with self-reported RA in the UKBB when adjusting for age, sex, assessment centre month and location, RA polygenic risk, and 10 principal components. These pathways have a major role in RA pathogenesis, including autoimmune attacks against certain citrullinated proteins, synovial inflammation, and bone loss. Encouragingly, many also relate to the mechanism of action of existing RA medications. The analyses also revealed a statistically significant association of self-reported RA with RA polygenic scores and with individual PES, highlighting the potential utility of the PES algorithm in uncovering additional genetic insights that could aid in the identification of individuals at risk of RA and provide opportunities for more targeted interventions. Conclusions: In this study, pharmacologically annotated genetic risk was explored through the PES framework to overcome inter-individual heterogeneity and enable precision drug repurposing in RA. The results showed a statistically significant association of self-reported RA with RA polygenic scores and individual PES for 3,719 RA patients. Interestingly, several enriched PES pathways are targeted by already approved RA drugs. In addition, the analysis revealed genetically supported drug repurposing opportunities, with a relatively safe profile, for the future treatment of RA.
Keywords: rheumatoid arthritis, precision medicine, drug repurposing, system biology, bioinformatics
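A hedged sketch of the kind of covariate-adjusted association test described above (logistic regression of case status on a pathway-specific PES, adjusting for polygenic risk and principal components) is shown below with entirely synthetic data; it is not the UK Biobank analysis.

```python
# Hedged sketch: logistic regression of simulated case/control status on a pathway PES,
# adjusting for a genome-wide polygenic score and 10 principal components.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
pes = rng.normal(size=n)                   # pathway-specific pharmagenic enrichment score
prs = rng.normal(size=n)                   # genome-wide polygenic risk score
pcs = rng.normal(size=(n, 10))             # 10 genetic principal components
logit_p = -3 + 0.25 * pes + 0.4 * prs
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # simulated case/control status

X = sm.add_constant(np.column_stack([pes, prs, pcs]))
fit = sm.Logit(y, X).fit(disp=0)
print(f"PES coefficient = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.3g}")
```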
459 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
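To illustrate one of the gait metrics reported above, the sketch below estimates gait cycle time from heel-strike timestamps and compares groups with a Welch t-test; the timestamps are synthetic, and the study's wearable-plus-machine-learning pipeline is not reproduced.

```python
# Hedged sketch: gait cycle times from heel-strike timestamps, ASD vs. control comparison.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)

def gait_cycle_times(heel_strikes):
    """Gait cycle time = interval between successive heel strikes of the same foot (seconds)."""
    return np.diff(np.sort(heel_strikes))

asd_cycles = gait_cycle_times(np.cumsum(rng.normal(1.25, 0.12, 50)))      # ~25% longer cycles
control_cycles = gait_cycle_times(np.cumsum(rng.normal(1.00, 0.06, 50)))

t, p = ttest_ind(asd_cycles, control_cycles, equal_var=False)
print(f"mean cycle time: ASD {asd_cycles.mean():.2f} s vs control {control_cycles.mean():.2f} s "
      f"(Welch t = {t:.2f}, p = {p:.3g})")
```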
458 High-Resolution Facial Electromyography in Freely Behaving Humans
Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein
Abstract:
Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context, and they vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic: they are associated with happiness and greeting on one hand and with anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array specially designed for surface facial EMG. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to the participants’ left and right cheeks. Participants were instructed to imitate three facial expressions (closing the eyes, wrinkling the nose, and smiling voluntarily) and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social', and 'masked' smiles, three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive, and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing, objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback, and brain-machine interface applications.
Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes
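A minimal sketch of blind source separation on multi-channel surface EMG with independent component analysis is given below as a stand-in for the customized ICA described above; the signals, electrode count, and sampling rate are illustrative assumptions.

```python
# Hedged sketch: FastICA unmixing of simulated multi-channel surface EMG into muscle sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
fs, seconds, n_electrodes, n_sources = 1000, 10, 16, 3
t = np.arange(fs * seconds) / fs

# Simulated muscle sources: noise modulated by slowly varying activation envelopes.
envelopes = np.stack([np.sin(2 * np.pi * f * t) ** 2 for f in (0.2, 0.5, 0.8)])
sources = envelopes * rng.normal(size=(n_sources, t.size))

mixing = rng.normal(size=(n_electrodes, n_sources))    # unknown electrode mixing
emg = mixing @ sources + 0.05 * rng.normal(size=(n_electrodes, t.size))

ica = FastICA(n_components=n_sources, random_state=0)
unmixed = ica.fit_transform(emg.T)                     # samples x estimated components
print(unmixed.shape)                                   # (10000, 3) estimated source activations
```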
457 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System
Authors: Tamar Trop
Abstract:
Bike sharing systems can be important elements of smart cities as they have the potential for impact on multiple levels. These systems can add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses and tourism. Bike-sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents, including over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike sharing systems have become prevalent only in the last decade, the existing literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, by using the Envision™ rating system as a methodological framework and the Israeli 'Tel -O-Fun' – bike sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of all types and sizes of infrastructure projects. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. This evaluation method has 60 sustainability criteria divided into five categories: Quality of life, leadership, resource allocation, natural world, and climate and risk. 'Tel -O-Fun' project was launched in Tel Aviv-Yafo on 2011 and today provides about 1,800 bikes for rent, at 180 rental stations across the city. The system is based on a complex computer terminal that is located in the docking stations. The highest-rated sustainable features that the project scored include: (a) Improving quality of life by: offering a low cost and efficient form of public transit, improving community mobility and access, enabling the flexibility of travel within a multimodal transportation system, saving commuters time and money, enhancing public health and reducing air and noise pollution; (b) improving resource allocation by: offering inexpensive and flexible last-mile connectivity, reducing space, materials and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure, and (c) reducing of greenhouse gas emissions from transportation. Overall, 'Tel -O-Fun' project was highly scored as an environmentally sustainable and socially equitable infrastructure. The use of this practical framework for evaluation also yielded various interesting insights on the shortcoming of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike sharing systems to develop a sustainable, efficient and reliable transportation infrastructure within smart cities.Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure
456 The Influence of a Radio Intervention on Farmers’ Practices in Climate Change Mitigation and Adaptation in Kilifi, Kenya
Authors: Fiona Mwaniki
Abstract:
Climate change is considered a serious threat to sustainable development globally and one of the greatest ecological, economic, and social challenges of our time. The global demand for food is projected to increase by 60% by 2050, and smallholder farmers, who are vulnerable to the adverse effects of climate change, are expected to contribute to this projected demand. Effective climate change education and communication are therefore required for smallholder and subsistence farmers in order to build communities that are more climate change aware, prepared, and resilient. In Kenya, radio is the most important and dominant mass communication tool for agricultural extension. This study investigated the potential role of radio in influencing farmers’ understanding and use of climate change information. The broad aims of this study were three-fold: firstly, to identify Kenyan farmers’ perceptions of and responses to the impacts of climate change; secondly, to develop radio programs that communicate climate change information to Kenyan farmers; and thirdly, to evaluate the impact of information disseminated through radio on farmers’ understanding and responses to climate change mitigation and adaptation. The study was conducted within the farming community of Kilifi County, located along the Kenyan coast. Education and communication about climate change were undertaken using radio to make the information understandable to different social and cultural groups. A mixed-methods pre- and post-intervention design was used, providing the opportunity to triangulate results from quantitative and qualitative data. Quantitative and qualitative data were collected simultaneously: quantitative data were collected through semi-structured surveys with 421 farmers, and qualitative data were derived from 11 focus group interviews, six interviews with key informants, and nine climate change experts. The climate change knowledge gaps identified in the initial quantitative and qualitative data were used in developing the radio programs. Final quantitative and qualitative data collection and analysis enabled an assessment of the impact of the climate change messages aired through radio on the farming community in Kilifi County. Results of this study indicate that 32% of the farmers listened to the radio programs and 26% implemented technologies aired on the programs that would help them adapt to climate change. The most adopted technologies were planting drought-tolerant crops, including indigenous crop varieties, planting trees, water harvesting, and the use of manure. The proportion of farmers who indicated they knew “a fair amount” about climate change increased significantly (Z = -5.1977, p < 0.001) from 33% (pre-intervention) to 64% (post-intervention). However, 68% of the farmers felt they needed “a lot more” information on agricultural interventions (43%), access to financial resources (21%), and the effects of climate change (15%). The challenges farmers faced when adopting the interventions included lack of access to financial resources (18%), the high cost of adaptation measures (17%), and poor access to water (10%). This study concludes that radio effectively complements other agricultural extension methods and has the potential to engage farmers on climate change issues and motivate them to take action.
Keywords: climate change, climate change intervention, farmers, radio
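The reported pre/post change in climate change knowledge (33% to 64%) can be illustrated with a two-proportion z-test; the counts below are reconstructed approximately from the reported sample size of 421, so the statistic will not exactly match the published Z value.

```python
# Hedged sketch of a two-proportion z-test for the pre/post "a fair amount" responses.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n = 421
counts = np.array([round(0.33 * n), round(0.64 * n)])   # farmers answering "a fair amount"
nobs = np.array([n, n])                                  # pre- and post-intervention samples

z, p = proportions_ztest(counts, nobs)
print(f"z = {z:.2f}, p = {p:.2e}")
```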
455 Single Cell and Spatial Transcriptomics: A Beginners Viewpoint from the Conceptual Pipeline
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Messenger ribonucleic acid (mRNA) molecules carry the protein-coding information of the genome. These protein-encoding mRNA molecules, which collectively constitute the transcriptome, when analyzed by RNA sequencing (RNAseq), unveil the nature of gene expression. The gene expression profiles obtained provide clues to cellular traits and their dynamics, which can be studied in relation to function and responses. RNAseq is a practical tool in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single-cell and spatial transcriptomics both present avenues for examining the genomic characteristics of single cells and pooled cells from investigated biological tissue samples in disease conditions such as cancer, autoimmune diseases, and hematopoietic diseases, among others. Single-cell transcriptomics enables a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which supports high-throughput gene expression studies. However, this technique generates gene expression data for many cells without capturing the cells’ positional coordinates within the tissue. As the science develops, the use of complementary, pre-established tissue reference maps built with molecular and bioinformatics techniques has sprung forth and is now used to resolve this setback, producing both levels of data in a single scRNAseq analysis. This is an emerging conceptual approach for integrative and progressively more dependable transcriptomic analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and expose the nature of cell-to-cell interactions. These are vital genomic signatures and characterizations for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. Spatial transcriptomics, on the other hand, is tissue-level based and is utilized to study biological specimens with heterogeneous features. It reveals the gross identity of the investigated mammalian tissues, which can then be used to study cell differentiation, track cell lineage trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two approaches to transcriptomic study, at their different scales and resolutions, both have made the study of gene expression from mRNA molecules interesting, progressive, and developmental, helping to tackle health challenges head-on.
Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression
Procedia PDF Downloads 122
454 Designing Disaster Resilience Research in Partnership with an Indigenous Community
Authors: Suzanne Phibbs, Christine Kenney, Robyn Richardson
Abstract:
The Sendai Framework for Disaster Risk Reduction called for the inclusion of indigenous people in the design and implementation of all hazard policies, plans, and standards. Ensuring that indigenous knowledge practices were included alongside scientific knowledge about disaster risk was also a key priority. Indigenous communities have specific knowledge about climate and natural hazard risk that has been developed over an extended period of time. However, research within indigenous communities can be fraught with issues such as power imbalances between the researcher and researched, the privileging of researcher agendas over community aspirations, as well as appropriation and/or inappropriate use of indigenous knowledge. This paper documents the process of working alongside a Māori community to develop a successful community-led research project. Research Design: This case study documents the development of a qualitative community-led participatory project. The community research project utilizes a kaupapa Māori research methodology which draws upon Māori research principles and concepts in order to generate knowledge about Māori resilience. The research addresses a significant gap in the disaster research literature relating to indigenous knowledge about collective hazard mitigation practices as well as resilience in rurally isolated indigenous communities. The research was designed in partnership with the Ngāti Raukawa Northern Marae Collective as well as Ngā Wairiki Ngāti Apa (a group of Māori sub-tribes who are located in the same region) and will be conducted by Māori researchers utilizing Māori values and cultural practices. The research project aims and objectives, for example, are based on themes that were identified as important to the Māori community research partners. The research methodology and methods were also negotiated with and approved by the community. Kaumātua (Māori elders) provided cultural and ethical guidance over the proposed research process and will continue to provide oversight over the conduct of the research. Purposive participant recruitment will be facilitated with support from local Māori community research partners, utilizing collective marae networks and snowballing methods. It is envisaged that Māori participants’ knowledge, experiences and views will be explored using face-to-face communication research methods such as workshops, focus groups and/or semi-structured interviews. Interviews or focus groups may be held in English and/or Te Reo (Māori language) to enhance knowledge capture. Analysis, knowledge dissemination, and co-authorship of publications will be negotiated with the Māori community research partners. Māori knowledge shared during the research will constitute participants’ intellectual property. New knowledge, theory, frameworks, and practices developed by the research will be co-owned by Māori, the researchers, and the host academic institution. Conclusion: An emphasis on indigenous knowledge systems within the Sendai Framework for Disaster Risk Reduction risks the appropriation and misuse of indigenous experiences of disaster risk identification, mitigation, and response. 
The research protocol underpinning this project provides an exemplar of collaborative partnership in the development and implementation of an indigenous project that has relevance to policymakers, academic researchers, other regions with indigenous communities and/or local disaster risk reduction knowledge practices. Keywords: community resilience, indigenous disaster risk reduction, Maori, research methods
Procedia PDF Downloads 124
453 Investigating the Impact of Migration Background on Pregnancy Outcomes During the End of Period of COVID-19 Pandemic: A Mixed-Method Study
Authors: Charlotte Bach, Albrecht Jahn, Mahnaz Motamedi, Maryam Karimi-Ghahfarokhi
Abstract:
Background: Maternal and infant deaths are most prevalent in the first month after birth, emphasizing the critical need for quality healthcare services during this period. Immigrant women, who are more susceptible to adverse pregnancy outcomes, often face neglect in accessing proper healthcare. The lack of adequate postpartum care significantly contributes to mortality rates. Therefore, utilizing maternal health care services and implementing postpartum care are crucial in reducing maternal and child mortality. Aims: This study aims to evaluate the assessment of pre- and postnatal care among women with and without a migration background. In addition, the study explores the impact of COVID-19 procedures on women's experiences during pregnancy, birth, and the postpartum period. Methods: This research employs a cross-sectional mixed-methods design. Data collection was carried out through structured questionnaires administered to participants, alongside patient databases, including maternity and child medical records. Because the investigators aimed to gain comprehensive insights, qualitative sampling focused on individuals with substantial experiences related to COVID-19, regarded as information-rich cases. Results: Our study highlighted the influence of educational level, marital status, and consensual partnership on the likelihood of Cesarean delivery. Regarding breastfeeding practices, migrant women exhibited higher rates of breastfeeding initiation and continuation. Contraception utilization revealed interesting patterns, with non-migrants displaying higher odds of contraceptive use. The qualitative component of our research adds depth to the exploration of women's experiences during the COVID-19 pandemic, revealing nuanced challenges related to anxiety, hospital restrictions, breastfeeding support, and postnatal ward routines. Conclusion: The dissimilarity among studies in cesarean rates between migrants and non-migrants underscores the importance of targeted interventions that consider the diverse needs of distinct population groups. It also acknowledges potential cultural, contextual, and healthcare system influences on the association between mode of delivery and infant feeding practices. Studies acknowledge the influence of contextual variables on contraceptive preferences among migrants and non-migrants, emphasizing the need for tailored healthcare policies. The findings contribute to existing research, highlighting the need for a nuanced understanding of the impact of birth preparation courses on maternal and infant outcomes. Furthermore, they emphasize the universality of certain maternity care experiences, regardless of pandemic contexts, reinforcing the importance of patient-centred approaches in healthcare delivery. Keywords: migration background, pregnancy outcome, COVID-19, postpartum
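The "higher odds" comparisons mentioned above are typically expressed as odds ratios. The sketch below shows how such an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 table; the counts are invented placeholders, not data from this study.

```python
# Hedged sketch: odds ratio with a Wald 95% CI from a 2x2 table, the kind of
# comparison behind "non-migrants displayed higher odds of contraceptive use".
# The counts below are invented placeholders, NOT data from the study.
import numpy as np
from scipy.stats import norm

#                 contraception   no contraception
# non-migrant          a                 b
# migrant              c                 d
a, b, c, d = 60, 40, 45, 55

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of the log odds ratio
z = norm.ppf(0.975)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```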
Procedia PDF Downloads 55
452 Analysis of the Evolution of Techniques and Review in Cleft Surgery
Authors: Tomaz Oliveira, Rui Medeiros, André Lacerda
Abstract:
Introduction: Cleft lip and/or palate are the most frequent forms of congenital craniofacial anomalies, affecting mainly the middle third of the face and manifesting as functional and aesthetic changes. Bilateral cleft lip represents a reconstructive surgical challenge, not only for the labial component but also for the associated nasal deformation. Recently, the paradigm of the approach to this pathology has changed, placing the focus on muscle reconstruction and anatomical repositioning of the nasal cartilages in order to obtain the best aesthetic and functional results. The aim of this study is to carry out a systematic review of the surgical approach to bilateral cleft lip, retrospectively analyzing the case series of the Plastic Surgery Service at Hospital Santa Maria (Lisbon, Portugal) regarding this pathology, including a global assessment of the characteristics of the operated patients and a study of the different surgical approaches and their complications over the last 20 years. Methods: The present work is a retrospective, descriptive study of patients who underwent at least one reconstructive surgery for cleft lip and/or palate in the CPRE service of the HSM between January 1, 1997 and December 31, 2017. Data relating to 361 individuals were analyzed; after applying the exclusion criteria, these constituted a sample of 212 participants. The variables analyzed were the year of the first surgery, gender, age, type of orofacial cleft, surgical approach, and its complications. Results: There was a higher overall prevalence in males, with cleft lip and cleft lip-and-palate occurring in greater proportion in males, while cleft palate alone was more common in females. The most frequently recorded malformation was cleft lip and palate, which was complete in most cases. Regarding laterality, alterations with a unilateral labial component were the most commonly observed, with the left lip being described as the most affected. It was found that the vast majority of patients underwent primary intervention up to 12 months of age. The surgical techniques used in the approach to this pathology showed an important chronological variation over the years. Discussion: Cleft lip and/or palate is a medical condition associated with high aesthetic and functional morbidity, which requires early treatment in order to optimize the long-term outcome. The existence of a nasolabial component and its respective surgical correction plays a central role in the treatment of this pathology. The high rates of post-surgical complications and unconvincing aesthetic results have motivated an evolution of the surgical technique, increasingly evident in recent years, which today allows satisfactory aesthetic results to be achieved, even in bilateral cleft lips of high deformational complexity. The introduction of techniques that favor nasolabial reconstruction based on anatomical principles has been producing increasingly convincing results. The analyzed sample shows that most of the results obtained in this study are, in general, compatible with the results published in the literature. Conclusion: This work showed that small variations in the surgical technique can bring significant improvements in the functional and aesthetic results in the treatment of bilateral cleft lip. Keywords: cleft lip, cleft palate, congenital abnormalities, craniofacial malformations
Procedia PDF Downloads 111
451 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use into landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products for transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature. The increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimization of pricing policy for remanufactured products that maximizes total profit and minimizes product recovery costs was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case-study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of model performance was carried out using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances. Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
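To make the first (strategic) modeling level concrete, the sketch below formulates the transport sub-problem, shipping end-of-life units from collection centers to remanufacturing facilities at minimum cost subject to capacity, as a single-objective linear program. Costs, supplies, and capacities are illustrative placeholders, and the paper's physical-programming models are multi-objective and considerably richer than this.

```python
# Hedged sketch: the transport sub-problem of a green supply chain, i.e. how many
# end-of-life units to ship from collection centers to remanufacturing facilities
# at minimum cost. Costs, supplies and capacities are illustrative placeholders;
# the paper's physical-programming models are multi-objective and richer.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],      # $/unit, collection center 1 -> facilities A, B, C
                 [5.0, 3.0, 7.0]])     # collection center 2 -> facilities A, B, C
supply = np.array([400, 600])          # units available at each collection center
capacity = np.array([500, 400, 300])   # remanufacturing capacity at each facility

n_src, n_dst = cost.shape
c = cost.ravel()                       # decision variables x[i, j], flattened row-wise

# Each collection center must ship out everything it collects (equality rows),
# and each facility must stay within capacity (inequality rows).
A_eq = np.zeros((n_src, n_src * n_dst))
for i in range(n_src):
    A_eq[i, i * n_dst:(i + 1) * n_dst] = 1
A_ub = np.zeros((n_dst, n_src * n_dst))
for j in range(n_dst):
    A_ub[j, j::n_dst] = 1

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
              bounds=(0, None), method="highs")
print(res.x.reshape(n_src, n_dst))     # optimal flows
print(res.fun)                         # total transport cost
```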
Procedia PDF Downloads 160
450 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review
Authors: Zihan Geng, L. Quentin Dixon
Abstract:
It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have also reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children’s executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the child group. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles with a total of 3367 participants were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data was collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in western countries, mainly in the U.S., Canada, and the UK. A variety of languages such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese were involved. In relation to cognitive outcomes, the studies examined children’s overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention. The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the coordination of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition measured in conflict tasks, but not better response inhibition measured by delay tasks. Further, bilingual children showed better inhibitory control for bivalent displays, which resembles the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children’s attention system associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend highly on the cognitive assessments used. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently. Keywords: attention, bilingual advantage, children, executive function
Procedia PDF Downloads 185
449 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit
Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey
Abstract:
Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or female athlete triad. Presentation is often non-specific, and the condition is often misdiagnosed following the initial examination. There is limited research addressing the return-to-activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards were retrospectively reviewed, (2) FNSF cohort demographics were examined, and (3) regression models were used to predict return-to-activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student’s t-test. (3) Lastly, linear regression and random forest regression models were used on this patient cohort to predict return-to-activity time. Consequently, an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met standard (100%) in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return-to-activity time [R2 = 79.172%; average error 0.226]. This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R2 = 97.805%; average error 0.024]. Analysis of the importance of each feature again identified a set of 4 variables, 3 of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), with the fourth being age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors. The importance of this evaluation is demonstrated by the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were predicted using regression models. Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D
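The random-forest modeling step described above can be sketched as follows; the feature names mirror the abstract, but the data are synthetic placeholders rather than the OxSport cohort.

```python
# Hedged sketch: random-forest regression of return-to-activity time with a
# feature-importance readout, mirroring the modelling described in the abstract.
# The data below are synthetic placeholders, not the OxSport cohort (n = 14).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 14
X = pd.DataFrame({
    "vitamin_d": rng.normal(60, 20, n),            # nmol/L
    "total_hip_dexa_t": rng.normal(-0.5, 1.0, n),
    "femoral_neck_dexa_t": rng.normal(-0.8, 1.0, n),
    "eating_disorder_history": rng.integers(0, 2, n),
    "age": rng.integers(18, 40, n),
})
y = rng.normal(20, 6, n)                           # return-to-activity time (weeks)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Rank features by importance, analogous to the abstract's feature analysis
for name, importance in sorted(zip(X.columns, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:25s} {importance:.3f}")
```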
Procedia PDF Downloads 183
448 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of soccer matches around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce various sources of information available for soccer analysis for teams around the world that helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is to introduce our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format against time, including the halftime period. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if 'Team A' scores a goal and maintains the result for at least 8 minutes in a stable match, then the match will end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors’ and punters’ behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can get over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis. Keywords: soccer, analytics, machine learning, database
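A minimal sketch of the feature-extraction step described above, computing a Local Binary Pattern histogram from an image-format representation of match statistics and ranking those features with gradient-boosted trees, is given below; the "match images" and labels are random placeholders, not Analyst Masters data.

```python
# Hedged sketch: extracting a Local Binary Pattern (LBP) histogram from an
# image-like representation of in-play match statistics, then ranking features
# with gradient-boosted trees. The "match images" are random placeholders, not
# data from the Analyst Masters platform.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

P, R = 8, 1                                      # LBP neighbours and radius
n_bins = P + 2                                   # bins for the "uniform" method

def lbp_histogram(match_image):
    codes = local_binary_pattern(match_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(42)
match_images = (rng.random((200, 64, 96)) * 255).astype(np.uint8)  # fake "stats vs time" images
stable = rng.integers(0, 2, 200)                                   # placeholder stability labels

X = np.array([lbp_histogram(img) for img in match_images])
gbt = GradientBoostingClassifier().fit(X, stable)
print(gbt.feature_importances_)                  # which LBP bins matter most
```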
Procedia PDF Downloads 238
447 “laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a new medium for digital business, according to a recent report by Gartner. The last 10 years represent a period of rapid advance in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in computing infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the principal legal framework for the regulation of AI. Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 94
446 Associated Factors the Safety of the Patient in Hemodialysis Clinics of a Brazilian Municipality: Cross-Sectional Study
Authors: Magda Milleyde de Sousa Lima, Letícia Lima Aguiar, Marina Guerra Martins, Erika Veríssimo Dias Sousa, Lizandra Sampaio de Oliveira, Lívia Moreira Barros, Joselany Áfio Caetano
Abstract:
Patients with chronic kidney disease are vulnerable to episodes that compromise their safety, mainly because the treatment process exposes them to high rates of interventions during hemodialysis sessions. Some factors associated with health care contribute to the risk of death and complications. However, only a small number of scientific studies evaluate the level of safety of hemodialysis clinics and the sociodemographic characteristics of patients and professionals associated with this safety. Therefore, the present study aims to examine the level of patient safety in hemodialysis clinics in a Brazilian state capital and to identify the sociodemographic and clinical factors of patients and nursing staff associated with the level of safety. This is an observational, descriptive, quantitative study conducted in three hemodialysis clinics located in the city of Fortaleza-CE, Brazil, from September to November 2019. The sample was defined after a sample-size calculation for finite populations, comprising 200 chronic renal patients, 30 nursing technicians and seven nurses. Convenience sampling was used, based on the inclusion criteria: being present at the hemodialysis session on the day the researcher performed the data collection and being 18 years of age or older. Participants with communication difficulties that prevented them from listening to and/or answering the sociodemographic and clinical questionnaire were excluded. Two instruments were applied: a sociodemographic and clinical characterization form and the Chronic Renal Patient Safety Assessment Scale on Hemodialysis (EASPRCH). The data were analyzed using the Kruskal-Wallis test for categorical variables and the Spearman correlation coefficient for non-categorical variables, in the Statistical Package SPSS version 20.0. The present study respected the ethical and legal principles determined by resolution 466/2012 of the National Health Council, under approval of the Research Ethics Committee (opinion number 3,255,635). The results showed that one hemodialysis clinic presented unsafe care practices, with a score of 32 points on the EASPRCH (p=0.001). A statistical association was identified between the level of safety and the following patient variables: level of education (p=0.018), family income (p=0.049), type of employment (p=0.012), venous access site (p=0.009), use of medication during the session (p=0.008) and time on hemodialysis (p=0.002). When evaluating the profile of nurses, a statistical association was found between the level of safety and the variables: marital status (p=0.000), race (p=0.017), schooling (p=0.000), income (p=0.013), age (p=0.000), clinic workload (p=0.000), time working with hemodialysis (p=0.000), time working in the clinic (p=0.007) and clinic staffing (p=0.000). In turn, the sociodemographic factors of nursing technicians associated with the level of patient safety were race (p=0.001) and weekly workload (p=0.010). Therefore, it is concluded that there is non-conformity in the level of patient safety in one of the clinics studied and that sociodemographic and clinical factors of patients and health professionals are associated with the level of safety of the health unit. Keywords: hemodialysis, nursing, patient safety, quality improvement
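The two non-parametric tests named in the methods can be sketched as follows on placeholder data; the scores and group sizes are invented, not EASPRCH results from the Fortaleza clinics.

```python
# Hedged sketch of the two non-parametric tests named in the abstract:
# Kruskal-Wallis across the three clinics and Spearman correlation between
# safety scores and a continuous variable. All numbers are placeholders,
# not EASPRCH data from the study.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(1)
clinic_a = rng.normal(38, 3, 70)     # EASPRCH scores, clinic A
clinic_b = rng.normal(37, 3, 65)     # clinic B
clinic_c = rng.normal(32, 3, 65)     # clinic C (the lower-scoring unit)

h_stat, p_kw = kruskal(clinic_a, clinic_b, clinic_c)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")

scores = np.concatenate([clinic_a, clinic_b, clinic_c])
workload_hours = rng.normal(36, 8, scores.size)   # e.g. weekly workload
rho, p_rho = spearmanr(scores, workload_hours)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```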
Procedia PDF Downloads 196
445 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter
Authors: Jiri Cecrdle
Abstract:
This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically obtained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) in Prague, Czechia. The demonstrator represents the wing and engine of a twin-turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and a thrusting propeller. It allows changes to the main structural parameters influencing the whirl flutter stability characteristics. Propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain, as well as pre-processing into power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining the propeller revolutions (constant or at a controlled ramp rate) and monitoring instantaneous revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3m-diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and finally, angle of attack of the propeller blade 75% section (). The excitation is provided either by airflow turbulence or by aerodynamic excitation via aileron flapping using a harmonic frequency sweep. The experimental results are planned to be used for validation of analytical methods and software tools within the frame of development of a new complex multi-blade twin-rotor propulsion system for a new-generation regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator. Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator
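The pre-processing step mentioned above, turning a measured accelerometer time history into a power spectral density, can be sketched as follows; the sampling rate and the 12 Hz "mode" are illustrative values, not W-WING demonstrator parameters.

```python
# Hedged sketch of the pre-processing step described in the abstract: turning an
# accelerometer time history into a power spectral density. The 12 Hz response
# frequency and sampling rate are illustrative, not W-WING demonstrator values.
import numpy as np
from scipy.signal import welch

fs = 1024.0                                    # sampling rate, Hz
t = np.arange(0, 30.0, 1.0 / fs)               # 30 s record
rng = np.random.default_rng(7)
accel = (np.sin(2 * np.pi * 12.0 * t) * np.exp(-0.02 * t)   # decaying 12 Hz response
         + 0.5 * rng.standard_normal(t.size))               # broadband turbulence excitation

freqs, psd = welch(accel, fs=fs, nperseg=4096)               # Welch power spectral density
dominant = freqs[np.argmax(psd)]
print(f"dominant response frequency ~ {dominant:.1f} Hz")
```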
Procedia PDF Downloads 96
444 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging shelf-life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and reduction of food wastage. Smart packaging solutions in the form of naturally derived antimicrobially active packaging may be a solution to these and other issues. A gelatin film-forming solution with added naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat conventional hydrophobic plastic packaging material with a hydrophilic antimicrobially active beef gelatin coating and to conduct shelf-life trials on beef sub-primal cuts. The minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and the commercially available Auranta FV (AFV) (a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, Staphylococcus aureus, and aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at twice the MIC, and this solution was coated onto a conventional plastic LDPE/PA film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during the 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29. Only for the AFV-coated beef sample was it 3% higher (37.3%) than the control (34.4%) on day 36. It was found that the antimicrobial films did not protect the beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35. No significant difference in TVC was observed between the SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set limit of 7 log CFU/g on day 27 of testing, whereas AFV films did not reach this limit until day 35 and SO films until day 42. The antimicrobial AFV- and SO-coated films significantly prolonged the shelf-life of beef steaks by 33% or 55% (by 7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobially active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf-life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials for active packaging applications for muscle-based food products. Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
Procedia PDF Downloads 303
443 Evaluation of the Quality of Education Offered to Students with Special Needs in Public Schools in the City of Bauru, Brazil
Authors: V. L. M. F. Capellini, A. P. P. M. Maturana, N. C. M. Brondino, M. B. C. L. B. M. Peixoto, A. J. Broughton
Abstract:
A paradigm shift is a process. The process of implementing inclusive education, a system constructed to support all learners, requires planning, identification, experimentation, and evaluation. In this vein, the purpose of the present study was to evaluate the capacity of the public school systems of one Brazilian municipality to provide special education students with a quality inclusive education. This study originated at the behest of concerned families of students with special needs who filed complaints with the Municipality of Bauru, São Paulo. These families claimed that 1) children with learning differences and educational needs had not been identified for services, and 2) those who had been identified had not received sufficient specialized educational assistance (SEA) in schools across the City of Bauru. Hence, the Office of Civil Rights for the state of São Paulo (Ministério Público de São Paulo) summoned the local higher education institution, UNESP, to design a research study to investigate these allegations. In this exploratory study, descriptive data were gathered from all elementary and middle schools, including 58 state schools and 17 city schools, for a total of 75 schools overall. Data collection consisted of each school's annual strategic action plan and surveys and interviews with all school stakeholders to determine their perceptions of the inclusive education available to students with Special Education Needs (SEN). The data were collected as one of four stages in a larger study, which also included field observations of a focal student's experience and a continuing education course for all teachers and administrators in both state and city schools. For the purposes of this study, the researchers were interested in understanding the perceptions of school staff, parents, and students across all schools. Therefore, documents and surveys from the 75 schools were analyzed for adherence to federal legislation guaranteeing students with SEN the right to special education assistance within the regular school setting. Results show that while some schools recognized the legal rights of SEN students to receive special education, the plans to actually deliver services were absent. In conclusion, the results of this study revealed that both school staff and families have insufficient planning and accessibility resources, and that the schools have inadequate infrastructure for full-time support of SEN students, i.e., structures and systems to support the identification of SEN and the delivery of services within the schools of Bauru, SP. Having identified the areas of need, the city is now prepared to take the next steps in the process of preparing all schools to be inclusive. Keywords: inclusion, school, special education, special needs
Procedia PDF Downloads 159