Search results for: random common fixed point theorem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13136

506 Sensory Weighting and Reweighting for Standing Postural Control among Children and Adolescents with Autistic Spectrum Disorder Compared with Typically Developing Children and Adolescents

Authors: Eglal Y. Ali, Smita Rao, Anat Lubetzky, Wen Ling

Abstract:

Background: Postural abnormalities, rigidity, clumsiness, and frequent falls are common among children with autism spectrum disorder (ASD). The central nervous system’s ability to process all reliable sensory inputs (weighting) and to disregard potentially perturbing sensory inputs (reweighting) is critical for successfully maintaining standing postural control. This study examined how sensory inputs (visual and somatosensory) are weighted and reweighted to maintain standing postural control in children with ASD compared with typically developing (TD) children. Subjects: Forty children and adolescents (20 TD and 20 with ASD) participated in this study. The groups were matched for age, weight, and height. Participants had normal somatosensory (no somatosensory hypersensitivity), visual, and vestibular perception. Participants with ASD were categorized as severity level 1 according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and the Social Responsiveness Scale. Methods: Using one force platform, the center of pressure (COP) was measured during quiet standing for 30 seconds, 3 times: first standing on a stable surface with eyes open (Condition 1), followed by the remaining three conditions in randomized order: Condition 2, standing on a stable surface with eyes closed (visual input perturbed); Condition 3, standing on a compliant foam surface with eyes open (somatosensory input perturbed); and Condition 4, standing on a compliant foam surface with eyes closed (both visual and somatosensory inputs perturbed). Standing postural control was quantified by three outcome measures: COP sway area, and COP anterior-posterior (AP) and mediolateral (ML) path length (PL). A repeated-measures mixed-model analysis of variance was conducted to determine whether there was a significant difference between the two groups in the means of the three outcome measures across the four conditions.
Results: According to all three outcome measures, both groups showed a gradual increase in postural sway from Condition 1 to Condition 4; however, TD participants showed a larger postural sway than those with ASD. There was a significant main effect of condition on all three outcome measures (p < 0.05). Only the COP AP PL showed a significant main effect of group (p < 0.05) and a significant group-by-condition interaction (p < 0.05). In COP AP PL, TD participants showed a significant difference between Condition 2 and the baseline (p < 0.05), whereas the ASD group did not, suggesting that the ASD group did not weight visual input as much as the TD group. A significant difference between conditions for the ASD group was seen only when participants stood on foam, regardless of the visual condition, suggesting that the ASD group relied more on somatosensory inputs to maintain standing postural control. Furthermore, the ASD group exhibited significantly smaller postural sway than TD participants while standing on a stable surface, whereas the postural sway of the ASD group was close to that of the TD group on foam. Conclusion: These results suggest that participants with high-functioning ASD (level 1, no somatosensory hypersensitivity in the ankles and feet) over-rely on somatosensory inputs and use a stiffening strategy for standing postural control. This deviation in the reweighting mechanism might explain the postural abnormalities mentioned above among children with ASD.
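The three COP outcome measures named in the methods (sway area, AP path length, ML path length) can be computed from a raw COP trace in a few lines. The sketch below is an illustrative reconstruction, not the authors' analysis code; the sampling rate, the synthetic trial, and the use of a convex hull for sway area are assumptions (sway area is also often computed as a 95% confidence ellipse).

```python
import numpy as np
from scipy.spatial import ConvexHull

def cop_measures(cop_ap, cop_ml):
    """Compute the three standing-balance outcomes from one COP trial.

    cop_ap, cop_ml: anterior-posterior and mediolateral COP coordinates
    (e.g. in mm) sampled at a fixed rate over one 30-second trial.
    """
    ap_pl = np.sum(np.abs(np.diff(cop_ap)))   # AP path length
    ml_pl = np.sum(np.abs(np.diff(cop_ml)))   # ML path length
    pts = np.column_stack([cop_ml, cop_ap])
    sway_area = ConvexHull(pts).volume        # for 2-D input, .volume is the hull area
    return sway_area, ap_pl, ml_pl

# Synthetic 30-s quiet-stance trial at 100 Hz (random-walk-like sway)
rng = np.random.default_rng(0)
n = 3000
ap = np.cumsum(rng.normal(0.0, 0.05, n))
ml = np.cumsum(rng.normal(0.0, 0.05, n))
area, ap_pl, ml_pl = cop_measures(ap, ml)
```

Larger values of all three measures on foam or with eyes closed would then reflect the increased sway the abstract reports.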

Keywords: autism spectrum disorders, postural sway, sensory weighting and reweighting, standing postural control

505 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study of how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the phase velocity of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface that focuses the radiation on a camera composed of an array of high-speed photosensors, which are highly sensitive to radio-spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST has already been commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal is that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations.
In this paper, we aim to answer the question of how the directive should be applied to our installation in order to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in both optics and electromagnetism were needed to make these kinds of decisions and to adapt tests, originally designed for equipment of limited dimensions, to large scientific plants. An analysis was also performed of the elements and configurations most likely to be affected by external interference and of those most likely to cause the largest disturbances. Obtaining the CE mark requires knowing which harmonized standards apply and how the specific requirements are elaborated. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

504 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards, as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map, and they provide a clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data; self-organizing maps are accordingly often applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to dealing with partially observed data consists of deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address the two problems of computing a self-organizing map and imputing missing values simultaneously, as these tasks are not independent. In this work, we propose an extension of self-organizing maps to partially observed data, referred to as missSOM.
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
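The alternating scheme described above (learn the map, then re-impute from it) can be sketched in a few dozen lines. The following is a minimal, illustrative numpy reconstruction, not the released R/CRAN implementation; the function name, grid size, neighborhood schedule, and initialization are assumptions, and the optimized criterion is only approximated by a standard batch Kohonen update.

```python
import numpy as np

def miss_som(X, grid=(4, 4), iters=30, seed=0):
    """Sketch of jointly fitting a SOM and imputing missing entries.

    X is an (n, d) array with np.nan marking missing values. The loop
    alternates (i) best-matching-unit (BMU) assignment plus a batch
    codebook update on the currently imputed data with (ii)
    re-imputation of each missing entry from its row's BMU vector.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mask = np.isnan(X)
    Xi = np.where(mask, np.nanmean(X, axis=0), X)        # mean-initialized imputations
    k = grid[0] * grid[1]
    W = Xi[rng.choice(n, size=k, replace=False)].copy()  # codebook initialized from data
    gy, gx = np.divmod(np.arange(k), grid[1])            # unit coordinates on the map
    G = np.column_stack([gy, gx]).astype(float)
    for it in range(iters):
        sigma = max(0.5, 2.0 * (1.0 - it / iters))       # shrinking neighborhood radius
        d2 = ((Xi[:, None, :] - W[None, :, :]) ** 2).sum(-1)
        bmu = d2.argmin(axis=1)                          # (i) BMU of every row
        # Gaussian neighborhood weights between each row's BMU and every unit
        h = np.exp(-((G[bmu][:, None, :] - G[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
        den = h.sum(axis=0)[:, None]
        W = np.where(den > 1e-8, (h.T @ Xi) / np.maximum(den, 1e-8), W)  # batch update
        Xi = np.where(mask, W[bmu], X)                   # (ii) re-impute from BMU
    return Xi, W, bmu

# Usage on toy data: two well-separated clusters with two entries missing
rng2 = np.random.default_rng(42)
X = np.vstack([rng2.normal(0.0, 0.1, (20, 3)), rng2.normal(10.0, 0.1, (20, 3))])
X[0, 0], X[25, 1] = np.nan, np.nan
Xi, W, bmu = miss_som(X)   # Xi has the two missing cells filled in
```

Because each missing entry is refilled from the codebook of the unit its row maps to, the imputation and the map adapt to each other, which is the coupling the abstract argues for.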

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

503 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis

Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński

Abstract:

The application of carbon materials across the electrochemical industry grows each year due to the many interesting properties they possess: among others, a well-developed specific surface area, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity, and chemical resistance. All these properties allow for their effective use, for example, in supercapacitors, whose carbon electrodes, constituting the capacitor plates, can provide capacitances on the order of 100 F. Carbons (including expanded graphite, carbon black, graphite carbon fibers, and activated carbon) are commonly used in electrochemical methods of removing oil derivatives, e.g., phenols and their derivatives, from water after tanker disasters, through their electrochemical anodic oxidation. Phenol can occupy practically the entire surface of the carbon material and leave the water clean of hydrophobic impurities. Regeneration of such electrodes is also uncomplicated: it is carried out by electrochemical methods that unblock the pores and reduce resistances, reactivating the electrodes for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but because of its limited capacity (372 mAh g-1), new solutions are sought that meet capacity, efficiency, and economic criteria. Increasingly, biodegradable materials, green materials, biomass, and waste (including agricultural waste) are used in order to reuse them, reduce greenhouse effects and, above all, meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, and wheat, rice, and corn waste, e.g., from agricultural, paper, and pharmaceutical production. Such products are subjected to appropriate treatments (chemical, thermal, or electrochemical) depending on the desired application.
Starch is a biodegradable polysaccharide consisting of two polymeric units, amylose and amylopectin, which form the linear (ordered) and branched (amorphous) regions of the polymer, respectively. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and these materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and by a limited understanding of material synthesis. In the context of catalytic applications, properties such as electrical conductivity and surface bonding configuration should be characterized; such data, together with surface-area and texture information, can form the basis for the design of carbon catalyst supports.

Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell

502 Exploring the Impact of Mobility-Related Treatments (Drug and Non-Pharmacological) on Independence and Wellbeing in Parkinson’s Disease - A Qualitative Synthesis

Authors: Cameron Wilson, Megan Hanrahan, Katie Brittain, Riona McArdle, Alison Keogh, Lynn Rochester

Abstract:

Background: The loss of mobility and functional independence is a significant marker in the progression of neurodegenerative diseases such as Parkinson’s Disease (PD). Pharmacological, surgical, and therapeutic treatments are available that can help in the management and amelioration of PD symptoms; however, these only delay the more severe symptoms. Accordingly, ensuring people with PD can maintain independence and a healthy wellbeing is essential to establishing an effective treatment option for those affected. Existing literature reviews have examined experiences of engaging with PD treatment options and the impact of PD on independence and wellbeing. However, the literature fails to explore the influence of treatment options on independence and wellbeing and therefore misses what people value in their treatment. This review is the first to synthesise the impact of mobility-related treatments on independence and wellbeing in people with PD and their carers; it offers recommendations for clinical practice and provides a conceptual framework (in development) for future research and practice. Objectives: To explore the impact of mobility-related treatment (both pharmacological and non-pharmacological) on the independence and wellbeing of people with PD and their carers, and to propose a conceptual framework for patients, carers and clinicians which captures the qualities people with PD value as part of their treatment. Methods: We performed a critical interpretive synthesis of qualitative evidence, searching six databases for reports that explored the impact of mobility-related treatments (both drug and non-pharmacological) on independence and wellbeing in Parkinson’s Disease. The types of treatments included medication (Levodopa and Amantadine), dance classes, deep-brain stimulation, aquatic therapies, physical rehabilitation, balance training and foetal transplantation.
Data were extracted, and quality was assessed using an adapted version of the NICE Quality Appraisal Tool (Appendix H) before being synthesised according to the critical interpretive synthesis framework and the meta-ethnography process. Results: From 2301 records, 28 were eligible. Experiences and the impact of the treatment pathway on independence and wellbeing were similar across all types of treatments and are described by five inter-related themes: (i) desire to maintain independence, (ii) treatment as a social experience during and after, (iii) medication to strengthen emotional health, (iv) recognising physical capacity, and (v) emphasising the personal journey of Parkinson’s treatments. Conclusion: There is a complex and inter-related experience and effect of PD treatments common across all types of treatment. The proposed conceptual framework (in development) provides patients, carers, and clinicians with recommendations to personalise the delivery of PD treatment, thereby potentially improving adherence and effectiveness. This work is vital to disseminate as PD treatment transitions from subjective, clinically captured assessments to a more personalised process supplemented by wearable technology.

Keywords: parkinson's disease, medication, treatment, dance, review, healthcare, delivery, levodopa, social, emotional, psychological, personalised healthcare

501 Delineation of Different Geological Interfaces Beneath the Bengal Basin: Spectrum Analysis and 2D Density Modeling of Gravity Data

Authors: Md. Afroz Ansari

Abstract:

The Bengal basin is a spectacular example of a peripheral foreland basin, formed by the convergence of the Indian plate with the Eurasian and Burmese plates. The basin is bounded on three sides (north, west, and east) by different fault-controlled tectonic features, and opens to the south, where its rivers drain into the Bay of Bengal. The Bengal basin, in the eastern part of the Indian subcontinent, constitutes the largest fluvio-deltaic to shallow-marine sedimentary basin in the world today. This continental basin, coupled with the offshore Bengal Fan under the Bay of Bengal, forms the biggest sediment dispersal system. The continental basin continuously receives sediments from the two major rivers, the Ganga and the Brahmaputra (known as the Jamuna in Bengal), from the Meghna (formed at the confluence of the Ganga and Brahmaputra), and from a large number of rain-fed small tributaries originating from the eastern Indian Shield. The drained sediments are ultimately delivered into the Bengal Fan. The significance of the present study is to delineate the variations in sediment thickness, the different crustal structures, and the mantle lithosphere throughout the onshore-offshore Bengal basin. In the present study, the different crustal/geological units and the shallower mantle lithosphere were delineated by analyzing Bouguer Gravity Anomaly (BGA) data along two long traverses: South-North (running from the Bengal Fan, cutting across the offshore-onshore transition of the Bengal basin, and intersecting the Main Frontal Thrust of the India-Himalaya collision zone in the Sikkim-Bhutan Himalaya) and West-East (running from the Peninsular Indian Shield across the Bengal basin to the Chittagong-Tripura Fold Belt). The BGA map was derived from the analysis of TOPEX data after applying the Bouguer correction and all terrain corrections.
The anomaly map was compared with the available ground gravity data in the western Bengal basin and the Indian subcontinent to check the consistency of the data used. Initially, the anisotropy associated with the thicknesses of the different crustal units, the crustal interfaces, and the Moho boundary was estimated through spectral analysis of the gravity data with varying window sizes over the study area. The 2D density sections along the traverses were finalized after a number of iterations with acceptable root mean square (RMS) errors. The estimated thicknesses of the different crustal units and the dips of the Moho boundary along both profiles are consistent with earlier results. The results were further corroborated by examining the earthquake database and focal mechanism solutions for a better understanding of the geodynamics. The earthquake data were taken from the catalogue of the US Geological Survey, and the focal mechanism solutions were compiled from the Harvard Centroid Moment Tensor Catalogue. Concentrations of seismic events at different depth levels are not uncommon; the occurrence of earthquakes may be due to stress accumulation resulting from resistance on three sides.
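The spectral depth estimation mentioned above rests on a standard property of gravity data: for sources concentrated at a mean depth h, the log power spectrum of the anomaly decays linearly with wavenumber at a rate of roughly 2h, so a slope fit yields the interface depth. The one-dimensional toy example below is a hedged illustration of that principle with synthetic data, not the authors' processing chain; the function name, wavenumber band, and synthetic source model are assumptions.

```python
import numpy as np

def spectral_depth(anomaly, dx, k_lo, k_hi):
    """Estimate mean source depth from the decay of the log power spectrum.

    Spector-and-Grant-type estimate: for sources concentrated at mean
    depth h, ln P(k) ~ const - 2*h*k, so h = -slope / 2, fitted over a
    chosen wavenumber band [k_lo, k_hi] (radians per length unit).
    """
    g = np.asarray(anomaly, dtype=float)
    G = np.fft.rfft(g - g.mean())                 # remove the regional mean
    k = 2.0 * np.pi * np.fft.rfftfreq(g.size, dx) # angular wavenumbers
    P = np.abs(G) ** 2                            # power spectrum
    band = (k >= k_lo) & (k <= k_hi)
    slope, _ = np.polyfit(k[band], np.log(P[band]), 1)
    return -slope / 2.0

# Synthetic check: white-noise sources attenuated by exp(-h*k), the
# upward-continuation factor for observation at height h above them.
rng = np.random.default_rng(1)
N, dx, h_true = 4096, 1.0, 10.0
k = 2.0 * np.pi * np.fft.rfftfreq(N, dx)
src = np.fft.rfft(rng.normal(size=N))
anomaly = np.fft.irfft(src * np.exp(-h_true * k), n=N)
h_est = spectral_depth(anomaly, dx, k_lo=0.05, k_hi=0.3)  # close to h_true
```

In practice the fit is repeated over different windows and wavenumber bands, as in the study, so that shallow and deep interfaces appear as distinct linear segments of the spectrum.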

Keywords: anisotropy, interfaces, seismicity, spectrum analysis

500 Nursing Experience in Caring for a Patient with Terminal Gastric Cancer and Abdominal Aortic Aneurysm

Authors: Pei-Shan Liang

Abstract:

Objective: This article explores the nursing experience of caring for a patient with terminal gastric cancer complicated by an abdominal aortic aneurysm. The patient experienced physical discomfort due to the disease and was initially unable to accept the situation, leading to anxiety, before eventually accepting the need for surgery. Methods: The nursing period was from June 6 to June 10, 2024. Through observation, direct care, conversations, and physical assessments, and using Gordon's eleven functional health patterns for a one-on-one holistic assessment, interdisciplinary team meetings were held with the critical care team and family. Three nursing health issues were identified: pain related to the disease and invasive procedures, anxiety related to uncertainty about disease recovery, and decreased cardiac tissue perfusion related to hemodynamic instability. Results: Open communication techniques and empathetic care were employed to establish a trusting nurse-patient relationship, and patient-centered nursing interventions were developed. Pain was assessed using a 10-point pain scale, and pain medications were adjusted by a pharmacist. Initially, Fentanyl 500 mcg was administered via an infusion pump run at 1 ml/hr, later changed to Ultracet 37.5 mg/325 mg, 1 tablet every 6 hours orally, reducing the pain score to 3. Lavender aromatherapy and listening to crystal music were used as distractions to alleviate pain, allowing the patient to sleep uninterrupted for at least 7 hours. The patient was encouraged to express feelings and fears through LINE messages or drawings, and a psychologist was invited to provide support. Family members were present at least twice a day for over an hour each time, reducing psychological distress and uncertainty about the prognosis. According to the Beck Anxiety Inventory, the anxiety score dropped from 17 (moderate anxiety) to 6 (no anxiety).
Focused nursing care was implemented with close monitoring of vital signs, maintaining systolic blood pressure between 112 and 118 mmHg to ensure adequate myocardial perfusion. The patient was encouraged to get out of bed for postoperative rehabilitation and to strengthen cardiopulmonary function. A chest X-ray showed no abnormalities, and breathing was smooth with Triflow use, sustaining 2 balls for at least 5 seconds four times a day, with SpO2 >96%. Conclusion: The care process highlighted the importance of addressing psychological care, in addition to maintaining life, when the patient’s condition changes. The presence of family often provided the greatest source of comfort for the patient, helping to reduce anxiety and pain. Nurses must play multiple roles, including advocate, coordinator, educator, and consultant, using various communication techniques and fostering hope by listening to and accepting the patient’s emotional responses. It is hoped that this report will provide a reference for clinical nursing staff and contribute to improving the quality of care.

Keywords: intensive care, gastric cancer, aortic aneurysm, quality of care

499 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20ᵗʰ Century Europe

Authors: Stanley Russell

Abstract:

Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically referenced and recreated historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, beginning with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with “Purism,” was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Wassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of “non-objective” painting, while also drawing parallels between painting and architecture, in his book Point and Line to Plane. The Russian Constructivists made abstract compositions with simple geometric forms, and, like the De Stijl group of the Netherlands, they also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century.
Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schröder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikhov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that artists used to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the extremely pivotal period in art and architecture history, from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.

Keywords: modern art, architecture, design methodologies, modern architecture

498 The Development of Home-Based Long Term Care Model among Thai Elderly Dependent

Authors: N. Uaphongsathorn, C. Worawong, S. Thaewpia

Abstract:

Background and significance: As the population ages in Thai society, elderly dependents are at great risk of various functional, psychological, and socio-economic problems, as well as reduced access to health care. They may require long term care at home to maximize their functional abilities and activities of daily living and to improve their quality of life in old age. Therefore, there is a need to develop home-based long term care to meet the long term care needs of dependent elders. Methods: The purpose of the research was to develop a long term care model for elderly dependents in Chaiyaphum province, in the Northeast region of Thailand. Action research, comprising planning, action, observation, and reflection phases, was used. The research was carried out for 12 months in all sub-districts of 6 districts of Chaiyaphum province. Participants (N = 1,010) in the model-development process comprised 3 groups: a) 110 health care professionals, b) 600 health volunteers and family caregivers, and c) 300 elderly dependents with chronic medical illnesses or disabilities. Descriptive statistics and content analysis were used to analyze the data. Findings: Results showed that the most common health problems among elderly dependents with physical disabilities limiting independent function were cardiovascular disease, dementia, and traffic injuries. The home-based long term care model for elderly dependents in Chaiyaphum province was composed of six key steps.
They are: a) initiating policies supporting formal and informal caregivers for elderly dependents in all sub-districts, b) building a network and multidisciplinary team, c) developing a 3-day care-manager training program and a 3-day care-provider training program, d) training case managers and care providers for elderly dependents through team and action learning, e) assessing, planning, and providing care based on the individual needs of the elderly dependents, and f) sharing experiences of good practice and innovation in long term care at home in urban and rural district areas. Among all care managers and care providers, the satisfaction level with the training programs was high, with a mean score of 3.98 out of 5. The elderly dependents and family caregivers reported that long term care at home could contribute to improving daily living activities, family relationships, health status, and quality of life. Family caregivers and volunteers reported a sense of personal satisfaction from providing meaningful care and support for dependent elders. Conclusion: In conclusion, home-based long term care is important to Thai elderly dependents. Care managers and care providers bear a large role and responsibility in providing appropriate care to meet elders’ needs in both urban and rural areas of Thai society. Further research could be rigorously conducted with larger populations in similar socio-economic and cultural contexts.

Keywords: elderly people, care manager, care provider, long term care

497 Gold Nanoprobe Assay for the Identification of Foodborne Pathogens Such as Staphylococcus aureus, Listeria monocytogenes and Salmonella Enteritidis

Authors: D. P. Houhoula, J. Papaparaskevas, S. Konteles, A. Dargenta, A. Farka, C. Spyrou, M. Ziaka, S. Koussisis, E. Charvalos

Abstract:

Objectives: Nanotechnology is providing revolutionary opportunities for the rapid and simple diagnosis of many infectious diseases. Staphylococcus aureus, Listeria monocytogenes and Salmonella Enteritidis are important human pathogens. Diagnostic assays based on bacterial culture and identification are time-consuming and laborious, so there is an urgent need to develop rapid, sensitive, and inexpensive diagnostic tests. In this study, a gold nanoprobe strategy was developed that relies on the colorimetric differentiation of specific DNA sequences, based on differential aggregation profiles in the presence or absence of specific target hybridization. Method: Gold nanoparticles (AuNPs) were purchased from Nanopartz. They were conjugated with thiolated oligonucleotides specific for the femA gene, for the identification of members of Staphylococcus aureus; the mecA gene, for the differentiation of methicillin-sensitive Staphylococcus aureus from methicillin-resistant Staphylococcus aureus (MRSA); the hly gene, encoding the pore-forming cytolysin listeriolysin, for the identification of Listeria monocytogenes; and the invA sequence, for the identification of Salmonella Enteritidis. DNA isolation from Staphylococcus aureus, Listeria monocytogenes and Salmonella Enteritidis cultures was performed using the commercial kit NucleoSpin Tissue (Macherey-Nagel). Specifically, 20 μl of DNA was diluted in 10 mM PBS (pH 5). After denaturation for 10 min, 20 μl of AuNPs was added, followed by an annealing step at 58 °C. The presence of a complementary target prevents aggregation upon the addition of acid and the solution remains pink, whereas in its absence it turns purple. The color could be detected visually and was confirmed with an absorption spectrum. Results: DNA (0.123 μg/μl) of S. aureus, L. monocytogenes and Salmonella Enteritidis was serially diluted from 1:10 to 1:100. Blanks containing PBS buffer instead of DNA were used.
The application of the proposed method to isolated bacteria produced positive results with all the species of St. aureus, L. monocytogenes and Salmonella enteritis using the femA, mecA, hly and invA genes, respectively. The minimum detection limit of the assay was 0.2 ng/μL of DNA: below 0.2 ng/μL of bacterial DNA, the solution turned purple after the addition of HCl. None of the blank samples was positive, giving a specificity of 100%. The method produced exactly the same results every time the evaluation was repeated (n = 4; 100% repeatability) using the femA, hly and invA genes; using the mecA gene for the differentiation of Staphylococcus aureus from MRSA, the repeatability was 50%. Conclusion: The proposed method could be used as a highly specific and sensitive screening tool for the detection and differentiation of Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis. The use of AuNPs for the colorimetric detection of DNA targets represents an inexpensive and easy-to-perform alternative to common molecular assays. The technology described here may develop into a platform that could accommodate the detection of many bacterial species.

Keywords: gold nanoparticles, pathogens, nanotechnology, bacteria

Procedia PDF Downloads 336
496 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens

Authors: Miranda E. Karban

Abstract:

A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. 
A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
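The two-block partial least squares analysis mentioned above can be sketched as a singular value decomposition of the cross-covariance matrix between two blocks of centered shape variables. A minimal illustration under assumed inputs (the coordinate matrices below are hypothetical, not the study's landmark data):

```python
import numpy as np

def two_block_pls(block1, block2):
    """Two-block PLS: paired axes maximizing covariance between two
    sets of shape variables. block1: (n, p), block2: (n, q)."""
    X = block1 - block1.mean(axis=0)
    Y = block2 - block2.mean(axis=0)
    # Cross-covariance matrix between the two blocks
    C = X.T @ Y / (X.shape[0] - 1)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    # Singular vectors define the PLS axes; scores are the projections
    scores1, scores2 = X @ U, Y @ Vt.T
    # Proportion of total squared covariance carried by each axis pair
    explained = s**2 / (s**2).sum()
    return scores1, scores2, explained
```

Covariation between, say, occipital midsagittal shape and vault shape would then show up as a dominant first axis pair with a high share of the squared covariance.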

Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun

Procedia PDF Downloads 199
495 Using Differentiated Instruction Applying Cognitive Approaches and Strategies for Teaching Diverse Learners

Authors: Jolanta Jonak, Sylvia Tolczyk

Abstract:

Educational systems are tasked with preparing students for future success in academic or work environments. Schools strive to achieve this goal, but it is often challenging, as conventional teaching approaches are frequently ineffective in increasingly diverse educational systems. In today's ever more global society, educational systems are becoming increasingly diverse in terms of cultural and linguistic differences, learning preferences and styles, and ability and disability. Through increased understanding of disabilities and improved identification processes, students with some form of disability tend to be identified earlier than in the past, meaning that more students with identified disabilities are being supported in our classrooms. A large majority of students with disabilities are also educated in general education environments. Due to cognitive makeup and life experiences, students have varying learning styles and preferences that impact how they receive and express what they are learning. Many students come from bi- or multilingual households, with varying proficiencies in the English language, further impacting their learning. All these factors need to be seriously considered when developing learning opportunities for students. Educators try to adjust their teaching practices as they discover that conventional methods are often ineffective in reaching each student's potential. Many teachers do not have the necessary educational background or training to know how to teach students whose learning needs are more unique and may vary from the norm. This is further complicated by the fact that many classrooms lack consistent access to interventionists/coaches who are adequately trained in evidence-based approaches to meet the needs of all students, regardless of what their academic needs may be.
One evidence-based way of providing successful education for all students is to incorporate cognitive approaches and strategies that tap into the affective, recognition, and strategic networks in the student's brain. This can be done through Differentiated Instruction (DI), an increasingly recognized model established on the basic principles of Universal Design for Learning. This form of support ensures that, regardless of students' learning preferences and cognitive learning profiles, they have opportunities to learn through approaches suited to their needs. The approach improves the educational outcomes of students with special needs and benefits other students as well, as it accommodates the range of learning styles and unique learning needs evident in the typical classroom setting. DI is also recognized as an evidence-based best practice in education and is highly effective when implemented within the tiered system of the Response to Intervention (RTI) model. Recognition of DI is becoming more common; however, there is still limited understanding of how to effectively implement and use strategies that can create unique learning environments for each student within the same setting. By employing knowledge of a variety of instructional strategies, general and special education teachers can facilitate optimal learning for all students, with and without a disability. A desired byproduct of DI is that it can eliminate inaccurate perceptions about students' learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability.

Keywords: differentiated instruction, universal design for learning, special education, diversity

Procedia PDF Downloads 217
494 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC constituents: according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its climatic significance, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlled burning conditions of the fuel, through the lack of the detailed supplementary proximate and ultimate chemical analyses needed to interpret the measured optical data, to the analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on burning coal in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, a recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling.
In the former the accuracy, and in the latter the sensitivity, limits the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up for residential coal aerosol generation and its characteristics are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
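The absorption and scattering Ångström exponents (AAE, SAE) deduced above are the negative slopes of the respective coefficients versus wavelength in log-log space. A minimal sketch of that fit; the wavelength set and coefficient values below are illustrative assumptions, not the instrument's actual data:

```python
import numpy as np

def angstrom_exponent(wavelengths_nm, coefficients):
    """Fit coeff ~ lambda**(-alpha) in log-log space and return alpha.
    Works for absorption (AAE) and scattering (SAE) coefficients alike."""
    logw = np.log(np.asarray(wavelengths_nm, dtype=float))
    logc = np.log(np.asarray(coefficients, dtype=float))
    slope, _ = np.polyfit(logw, logc, 1)   # linear fit in log-log space
    return -slope

# Hypothetical four-wavelength absorption coefficients (1/Mm), for illustration
wl = [266, 355, 532, 1064]
babs = [40.0, 26.0, 15.0, 6.5]
aae = angstrom_exponent(wl, babs)
```

For pure black carbon the fitted exponent is expected to be near 1, while organic-rich coal aerosol typically yields larger values.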

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 356
493 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved over the past years as an important means of data authentication and ownership protection. Image and video watermarking is well established in the field of multimedia processing; more recently, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes, due to the huge number of vertices involved, the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proved useful for hiding data; doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they confirmed robustness from this aspect as well. 3D watermarking is still a new field, but a promising one.
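The general idea of norm-based spatial embedding can be sketched as follows. This is an illustrative simplification, one bit per norm bin, shifting each bin's normalized norms with a fixed, known object center, and not the authors' actual algorithm, which modifies the variances of the norms and optimizes the displacements:

```python
import numpy as np

def _bin_masks(norms, edges):
    """Assign each norm to a bin; the last bin includes its right edge."""
    n_bins = len(edges) - 1
    for i in range(n_bins):
        hi_ok = norms <= edges[i + 1] if i == n_bins - 1 else norms < edges[i + 1]
        yield (norms >= edges[i]) & hi_ok

def embed_watermark(vertices, bits, strength=0.2, center=None):
    """Push each bin's normalized vertex norms toward the upper (bit 1)
    or lower (bit 0) half; a stand-in for statistical norm modification."""
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0) if center is None else np.asarray(center, dtype=float)
    rel = v - c
    norms = np.linalg.norm(rel, axis=1)
    edges = np.linspace(norms.min(), norms.max(), len(bits) + 1)
    out = rel.copy()
    for i, (bit, mask) in enumerate(zip(bits, _bin_masks(norms, edges))):
        if not mask.any():
            continue
        lo, hi = edges[i], edges[i + 1]
        u = (norms[mask] - lo) / (hi - lo)      # bin-normalized norms in [0, 1]
        # Affine compression toward the upper or lower part of the bin,
        # nudging the bin's mean normalized norm above or below 0.5
        u_new = strength + (1.0 - strength) * u if bit else (1.0 - strength) * u
        scale = (lo + u_new * (hi - lo)) / norms[mask]
        out[mask] = rel[mask] * scale[:, None]
    return out + c

def extract_watermark(vertices, n_bits, center=None):
    """Blind extraction: read each bin's bit from whether its mean
    normalized norm lies above 0.5."""
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0) if center is None else np.asarray(center, dtype=float)
    norms = np.linalg.norm(v - c, axis=1)
    edges = np.linspace(norms.min(), norms.max(), n_bits + 1)
    bits = []
    for i, mask in enumerate(_bin_masks(norms, edges)):
        u = (norms[mask] - edges[i]) / (edges[i + 1] - edges[i])
        bits.append(1 if mask.any() and u.mean() > 0.5 else 0)
    return bits
```

Since the bin edges are derived from the extreme norms, a practical scheme would key them to invariants of the mesh; the sketch simply keeps the extreme vertices unmoved.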

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 157
492 Teachers’ Language Insecurity in English as a Second Language Instruction: Developing Effective In-Service Training

Authors: Mamiko Orii

Abstract:

This study reports on primary school second language teachers' sources of language insecurity. Furthermore, it aims to develop an in-service training course to reduce anxiety and build sufficient English communication skills. Language/linguistic insecurity refers to a lack of confidence experienced by language speakers. In particular, second-language/non-native learners often experience insecurity, which influences their learning efficacy. While language learner insecurity has been well documented, research on the insecurity of language teaching professionals is limited. Teachers' language insecurity or anxiety about target language use may adversely affect language instruction; for example, they may avoid classroom activities requiring intensive language use. Therefore, understanding teachers' language insecurity and providing continuing education to help teachers improve their proficiency is vital to improving teaching quality. This study investigated Japanese primary school teachers' language insecurity. In Japan, teachers are responsible for teaching most subjects, including English, which recently became compulsory. Most teachers have never been professionally trained in second language instruction during college teacher certificate preparation, leading to low confidence in English teaching. The primary source of language insecurity is a lack of confidence in English communication skills. Teachers' actual use of English in classrooms remains unclear, and their classroom speech remains a neglected area requiring investigation. A more refined programme for second language teachers could be constructed if areas of need can be identified.
Two questionnaires were administered to primary school teachers in Tokyo: (1) Questionnaire A: 396 teachers answered questions (using a 5-point scale) concerning classroom teaching anxiety, general English use, and needs for in-service training (Summer 2021); (2) Questionnaire B: 20 teachers answered detailed questions concerning their English use (Autumn 2022). Questionnaire A's responses showed that over 80% of teachers have significant language insecurity and anxiety, mainly when speaking English in class or teaching independently. Most teachers relied on a team-teaching partner (e.g., ALT) and avoided speaking English. Over 70% of the teachers said they would like to participate in training courses in classroom English. Questionnaire B's results showed that teachers could use simple classroom English, such as greetings and basic instructions (e.g., stand up, repeat after me), and initiate conversation (e.g., asking questions). In contrast, teachers reported that conversations were mainly carried out in a simple question-and-answer style, and that they had difficulty continuing conversations. Responding to learners' 'on-the-spot' utterances was particularly difficult. Instruction in turn-taking patterns suited to the classroom communication context is needed. Most teachers had received grammar-based instruction throughout their own English education and were predominantly exposed to display questions and form-focused corrective feedback. Therefore, strategies such as encouraging teachers to ask genuine questions (i.e., referential questions) and to respond to students with content feedback are crucial. When learners' utterances are incorrect or unsatisfactory, teachers should rephrase or extend (recast) them instead of offering explicit corrections. These strategies support a continuous conversational flow. These results offer benefits beyond Japan's English as a Second Language context.
They will be valuable in any context where primary school teachers are underprepared but must provide English-language instruction.

Keywords: english as a second/non-native language, in-service training, primary school, teachers’ language insecurity

Procedia PDF Downloads 65
491 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate over time because of ageing materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites and prevent the continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from the static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed over the last two decades. These methods, based on analytical approaches, are limited in their ability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA), and neural networks (NN), and have applied these methods promisingly to the field of structural identification.
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and, as opposed to classical gradient-based optimization techniques, make a global solution search possible. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect fiber property variation in laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. To solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
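The combined FE-GA inverse scheme can be illustrated with a toy surrogate in place of the ABAQUS model. The spring-mass chain, population size, and genetic operators below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def first_mode_freq(k):
    """Toy stand-in for the FE eigensolver: first natural frequency of a
    grounded spring-mass chain (unit masses) with element stiffnesses k."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]                  # spring i links mass i to mass i-1 (or ground)
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return float(np.sqrt(np.linalg.eigvalsh(K)[0]))

def detect_degradation(f_measured, k_intact, pop=60, gens=80, seed=0):
    """GA search for per-element stiffness-retention factors that
    reproduce the measured first-mode frequency."""
    rng = np.random.default_rng(seed)
    n = len(k_intact)
    P = rng.uniform(0.3, 1.0, size=(pop, n))       # candidate retention factors

    def fitness(x):
        return -abs(first_mode_freq(k_intact * x) - f_measured)

    for _ in range(gens):
        scores = np.array([fitness(x) for x in P])
        elite = P[np.argsort(scores)[::-1][: pop // 2]]       # keep the better half
        pairs = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        w = rng.uniform(size=(pop - len(elite), n))           # blend crossover weights
        children = w * pairs[:, 0] + (1.0 - w) * pairs[:, 1]
        children += rng.normal(scale=0.02, size=children.shape)   # mutation
        P = np.vstack([elite, np.clip(children, 0.05, 1.0)])
    return max(P, key=fitness)
```

Matching a single first-mode frequency is underdetermined, which is why the study brings in mode shapes and a parameterized (bivariate Gaussian) damage distribution to constrain the search.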

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 267
490 Investigation of Hydrate Formation of Associated Petroleum Gas from Promoter Solutions for the Purpose of Utilization and Reduction of Its Burning

Authors: M. E. Semenov, U. Zh. Mirzakimov, A. S. Stoporev, R. S. Pavelev, M. A. Varfolomeev

Abstract:

Gas hydrates are host-guest compounds whose guest molecules can be low-molecular-weight components of associated petroleum gas (C1-C4 hydrocarbons), carbon dioxide, hydrogen sulfide, or nitrogen. Gas hydrates have a number of unique properties that make them interesting from a technological point of view, for example, for storing hydrocarbon gases in solid form under moderate thermobaric conditions. Currently, the possibility of storing and transporting hydrocarbon gases in the form of solid hydrate is being actively explored throughout the world. The hydrate form of gas has a number of advantages, including a significant gas content in the hydrate and the relative safety and environmental friendliness of the process. Recently, new developments have been proposed that seek to reduce the number of steps required to obtain the finished hydrate, for example, by using a pressing device/screw inside the reactor. However, the energy consumption of the hydrate formation process remains a challenge. Thus, the goal of the current work is to study the patterns and mechanisms of hydrate formation using small additions of hydrate formation promoters under static conditions. The study of these aspects will help solve the problem of accelerated production of gas hydrates with minimal energy consumption. New compounds have been developed at Kazan Federal University that can accelerate the formation of methane hydrate at promoter concentrations in water not exceeding 0.1% by weight. These promoters were synthesized from available natural compounds and showed high efficiency in accelerating the growth of methane hydrate. To test the influence of the promoters on hydrate formation, standard experiments were carried out under dynamic conditions with stirring.
During such experiments, the induction period (the time at which hydrate formation begins), the supercooling (the temperature at which formation begins), the rate of hydrate formation, and the degree of conversion of water to hydrate are assessed. This approach helps to determine the most effective compound in comparative experiments with different promoters and to select their optimal concentrations. These experimental studies made it possible to examine the features of associated petroleum gas hydrate formation from promoter solutions under static conditions. Phase transformations were studied using high-pressure micro-differential scanning calorimetry under various experimental conditions, and visual studies of the growth mode of methane hydrate depending on the type of promoter were also carried out. The work extends the methodology for studying the effect of promoters on associated petroleum gas hydrate formation in order to identify new ways to accelerate the formation of gas hydrates without the use of mixing. This work presents the results of a study of associated petroleum gas hydrate formation using high-pressure differential scanning micro-calorimetry, visual investigation, gas chromatography, autoclave studies, and stability data. It was found that the synthesized compounds increase the conversion of water into hydrate under static conditions to up to 96%, owing to a change in the growth mechanism of the associated petroleum gas hydrate. This work was carried out in the framework of the Priority-2030 program.

Keywords: gas hydrate, gas storage, promotor, associated petroleum gas

Procedia PDF Downloads 61
489 The Effectiveness of Multi-Media Experiential Training Programme on Advance Care Planning in Enhancing Acute Care Nurses’ Knowledge and Confidence in Advance Care Planning Discussion: An Interim Report

Authors: Carmen W. H. Chan, Helen Y. L. Chan, Kai Chow Choi, Ka Ming Chow, Cecilia W. M. Kwan, Nancy H. Y. Ng, Jackie Robinson

Abstract:

Introduction: In Hong Kong, a significant number of deaths occur in acute care wards, which requires nurses in these settings to provide end-of-life care and lead ACP implementation. However, nurses in these settings have very low-level involvement in ACP discussions because of limited training in ACP conversations. Objective: This study aims to assess the impact of a multi-media experiential ACP (MEACP) training programme, guided by the experiential learning model and the theory of planned behaviour, on nurses' knowledge and confidence in assisting patients with ACP. Methodology: The study uses a cluster randomized controlled trial with a 12-week follow-up. Eligible nurses working in acute care hospital wards are randomly assigned at the ward level, in a 1:1 ratio, to either the control group (no ACP education) or the intervention group (4-week MEACP training programme). The programme, based on the Theory of Planned Behaviour and Kolb's Experiential Learning Model, comprises training through a webpage and a mobile application, as well as a face-to-face workshop with enhanced lectures and role play. Questionnaires were distributed to assess nurses' knowledge (a 10-item true/false questionnaire) and level of confidence (five-point Likert scale) in ACP at baseline (T0), four weeks after the baseline assessment (T1), and 12 weeks after T1 (T2). In this interim report, data analysis was mainly descriptive in nature. Result: The interim report focuses on the preliminary results of 165 nurses at T0 (control: 74; intervention: 91) over a 5-month period, including 69 nurses from the control group who completed the 4-week follow-up and 65 nurses from the intervention group who completed the 4-week MEACP training programme at T1. The preliminary attrition rates are 6.8% and 28.6% for the control and intervention groups, respectively, as some nurses did not complete the whole set of online modules.
At baseline, the two groups were generally homogeneous in terms of years of nursing practice, weekly working hours, job title, and level of education, as well as ACP knowledge and confidence levels. The proportion of nurses who answered all ten knowledge questions correctly increased from 13.8% (T0) to 66.2% (T1) in the intervention group and from 13% (T0) to 20.3% (T1) in the control group. The nurses in the intervention group answered an average of 7.57 and 9.43 questions correctly at T0 and T1, respectively. They showed a greater improvement in the knowledge assessment at T1 relative to T0 than their counterparts in the control group (mean difference of change scores, Δ=1.22). They also exhibited a greater gain in level of confidence at T1 compared to their colleagues in the control group (Δ=0.91). T2 data are not yet available. Conclusion: The prevalence of nurses engaging in ACP and their level of knowledge about ACP in Hong Kong are low. The MEACP training programme can enrich nurses by providing them with more knowledge about ACP and increasing their confidence in conducting ACP.
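The reported Δ values are differences of change scores between the two groups; from the figures above, the control group's implied knowledge-score change can be recovered by simple arithmetic:

```python
# Group mean knowledge scores reported in the abstract
intervention_t0, intervention_t1 = 7.57, 9.43
delta = 1.22                             # reported difference of change scores

intervention_change = intervention_t1 - intervention_t0    # 1.86
# Implied control-group change score (not reported directly)
control_change = round(intervention_change - delta, 2)     # 0.64
```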

Keywords: advance directive, advance care planning, confidence, knowledge, multi-media experiential, randomised control trial

Procedia PDF Downloads 74
488 Fabrication of All-Cellulose Composites from End-of-Life Textiles

Authors: Behnaz Baghaei, Mikael Skrifvars

Abstract:

Sustainability is today a trend seen everywhere, and the textile industry is no exception. However, there is a rather significant downside to how the textile industry currently operates, namely the huge amount of end-of-life textiles it generates. Approximately 73% of the 53 million tonnes of fibres used annually for textile production is landfilled or incinerated, while only 12% is recycled into secondary products. Mechanical recycling of end-of-life textile fabrics into yarns and fabrics was once very common, but due to the low cost of virgin man-made fibres, the diversity of current textile material compositions, the variations in fibre quality, and the high recycling costs, this route is no longer feasible. Another way to decrease the ever-growing pile of textile waste is to repurpose the textiles. If a feasible methodology can be found to reuse end-of-life textiles as secondary-market products, including a manufacturing process that requires rather low investment costs, this can be highly beneficial in counteracting the increasing textile waste volumes. In structural composites, glass fibre textiles are used as reinforcements, but there is now a growing interest in biocomposites in which the reinforcement and/or the resin come from a biomass resource. All-cellulose composites (ACCs) are mono-component, or single-polymer, composites made entirely from cellulose, ideally leading to a homogeneous biocomposite. Since the matrix and the reinforcement are both made from cellulose, and are therefore chemically identical, they are fully compatible with each other, which allows efficient stress transfer and adhesion at their interface. Apart from improving the mechanical performance of the final products, this also facilitates recycling of the composites. This paper reports the recycling of end-of-life cellulose-containing textiles by fabrication of all-cellulose composites (ACCs).
Composite laminates were prepared using an ionic liquid (IL) in a hot process involving partial dissolution of the cellulose fibres. Discarded denim fabrics were used as the reinforcement, while dissolved cellulose from two different cellulose resources was used as the matrix phase: virgin cotton staple fibres and cotton recovered from polyester/cotton (polycotton) waste fabrics. The process comprises dissolving cellulose to a 6 wt.% solution in the ionic liquid 1-butyl-3-methylimidazolium acetate ([BMIM][Ac]); this solution acted as a precursor for the matrix component. The denim fabrics were embedded in the cellulose/IL solution, after which laminates were formed, which also involved removal of the IL by washing. The effect of reusing the recovered IL was also investigated. The mechanical properties of the obtained ACCs were determined with regard to tensile, impact and flexural properties. Mechanical testing revealed no clear differences between the values measured for mechanical strength and modulus of the ACCs manufactured from denim/cotton with fresh IL, denim/recovered cotton with fresh IL and denim/cotton with recycled IL. This could be due to the low weight fraction of the cellulose matrix in the final ACC laminates; presumably the denim, as the cellulose reinforcement, strongly influences and dominates the mechanical properties. The fabricated ACC laminates were further characterized by scanning electron microscopy.
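The 6 wt.% precursor concentration implies a simple mass balance between cellulose and ionic liquid. A minimal sketch, where the batch size is an illustrative assumption rather than a figure from the paper:

```python
def solution_masses(total_mass_g, cellulose_wt_frac=0.06):
    """Split a solution batch into cellulose and ionic-liquid masses
    for a given cellulose weight fraction."""
    m_cellulose = total_mass_g * cellulose_wt_frac
    m_ionic_liquid = total_mass_g - m_cellulose
    return m_cellulose, m_ionic_liquid

# e.g. a hypothetical 500 g batch of 6 wt.% cellulose in [BMIM][Ac]
m_cel, m_il = solution_masses(500.0)  # 30.0 g cellulose, 470.0 g IL
```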

Keywords: all-cellulose composites, denim fabrics, ionic liquid, mechanical properties

Procedia PDF Downloads 112
487 Knowledge Based Software Model for the Management and Treatment of Malaria Patients: A Case of Kalisizo General Hospital

Authors: Mbonigaba Swale

Abstract:

Malaria is an infection or disease caused by parasites (Plasmodium falciparum, which causes severe malaria, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae), transmitted to humans by the bites of infected female Anopheles mosquitoes. In Africa, and particularly in Uganda, these vectors comprise two main types, Anopheles funestus and Anopheles gambiae (e.g. Anopheles arabiensis); they feed on humans inside the house, mainly at dusk, midnight and dawn, and rest indoors, which makes them effective transmitters (vectors) of the disease. People in both urban and rural areas have consistently become prone to repetitive attacks of malaria, causing many deaths and significantly increasing the poverty levels of the rural poor. Malaria is a national problem; it causes many maternal pre-natal and antenatal disorders, anemia in pregnant mothers, low birth weights in the newly born, and convulsions and epilepsy among infants. Cumulatively, it kills about one million children every year in sub-Saharan Africa. It has been estimated to account for 25-35% of all outpatient visits, 20-45% of acute hospital admissions and 15-35% of hospital deaths. Uganda is the leading victim country, in which the Rakai and Masaka districts are the most affected. So, it is not clear whether these abhorrent situations and episodes of recurrence and failure to cure the disease are a result of poor diagnosis, prescription and dosing, treatment habits and compliance of the patients to the drugs, or the ethical domain of the stakeholders in relation to the mainstream methodology of malaria management. 
The research is aimed at offering an alternative approach to manage and deal with the problem by using a knowledge-based software model of Artificial Intelligence (AI) that is capable of performing common-sense and cognitive reasoning so as to take decisions as the human brain would, providing instantaneous expert solutions and avoiding speculative simulation of the problem during differential diagnosis. This system will assist physicians in many kinds of medical diagnosis, in prescribing treatments and doses, and in monitoring patient responses. Based on the body weight and age group of the patient, it will be able to provide instantaneous and timely information, alternative options and approaches to influence decision making during case analysis. The computerized system approach, a new model in Uganda termed “Software Aided Treatment” (SAT), will try to change the moral and ethical approach and influence conduct so as to improve the skills, experience and values (social and ethical) in the administration and management of the disease and drugs (combination therapy and generics) by both the patient and the health worker.

Keywords: knowledge based software, management, treatment, diagnosis

Procedia PDF Downloads 52
486 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India

Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah

Abstract:

In the absence of adequate well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, a ‘Very-Low-Birth-Weight (vLBW) Tracking’ intervention has been conducted by CARE India since 2015, targeting public-facility-delivered newborns weighing ≤2000 g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (Feb-Apr 2015-born neonates) and three post-intervention (Sep-Oct 2015, Sep-Oct 2016 and Sep-Oct 2017-born children) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS 9.4. Identification of neonates with birth weight ≤2000 g improved from 2% at baseline to 3.3-4% during post-intervention. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between baseline and post-intervention rounds were also noted in mothers being informed about a ‘weak’ child, both at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of ‘Kangaroo Mother Care (KMC)’, an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline, both in the facility (R1 = 15% to R4 = 31%) and at home (R1 = 10% to R4 = 29%). 
Detection and birth-weight recording of extremely low-birth-weight newborns (<1500 g) showed an increasing trend across rounds. Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500 g, 1500-1799 g and ≥1800 g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase as well as during the maintenance phase, birth weight >1500 g, being born to a low-parity mother, receiving a visit from an FLW in the first week, and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.
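The birth-weight adjustment described above is consistent with direct standardization, in which stratum-specific mortality rates are weighted by a common reference distribution before rounds are compared. A hedged sketch with entirely hypothetical rates and weights (the actual stratum figures are not given in the abstract):

```python
def standardized_rate(stratum_rates, ref_weights):
    """Directly standardized rate: weight each stratum-specific rate
    by a fixed reference distribution so rounds are comparable."""
    assert abs(sum(ref_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(stratum_rates, ref_weights))

# hypothetical mortality per stratum (<1500 g, 1500-1799 g, >=1800 g)
rates_r1 = [0.45, 0.25, 0.12]
rates_r4 = [0.30, 0.14, 0.06]
ref = [0.2, 0.3, 0.5]  # assumed pooled birth-weight distribution

adj_r1 = standardized_rate(rates_r1, ref)  # ≈ 0.225
adj_r4 = standardized_rate(rates_r4, ref)  # ≈ 0.132
```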

Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response

Procedia PDF Downloads 156
485 Special Educational Needs Coordinators in England: Changemakers in Mainstream School Settings

Authors: Saneeya Qureshi

Abstract:

This paper reports doctoral research into the impact of Special Educational Needs Coordinators (SENCOs) on teachers in England, UK. Since 1994, it has been compulsory for all mainstream schools in the UK to have a SENCO, who co-ordinates assessment and provision for supporting pupils with Special Educational Needs (SEN), helping teachers to develop and implement optimal SEN planning and resources. SENCOs’ roles have evolved as various policies continually redefined SEN provision, impacting their positioning within the school hierarchical structure. SENCOs in England are increasingly recognised as key members of school senior management teams. In this paper, it will be argued that despite issues around the transformative ‘professionalisation’ of their role, and subsequent conflict around boundaries and power relations, SENCOs enhance teachers’ abilities to deliver optimal SEN provision. There is a significant international dimension to the issue: a similar role in respect of SEN management already exists in countries such as Ireland, Finland and Singapore, whilst in other countries, such as Italy and India, the introduction of a role similar to that of a SENCO is currently under discussion. The research question addressed is: do SENCOs enhance teachers’ abilities to be effective teachers of children with Special Educational Needs? The theoretical framework of the project is interpretivism, as it is acknowledged that contexts and realities are social constructions. The study applied a mixed-method approach consisting of two phases. The first phase involved a purposive survey (n=42) of 223 primary school SENCOs, which enabled a deeper insight into SENCOs’ perceptions of their roles in relation to teachers. The second phase consisted of semi-structured interviews (n=36) of SENCOs, teachers and head teachers, in addition to scrutiny of schools’ SEN-related documentation. 
‘Trustworthiness’ was accomplished through data and methodological triangulation, in addition to a rigorous process of coding and thematic analysis. The research was informed by an ethical code as per national guidelines. Research findings point to the evolutionary aspect of the SENCO role having engendered a culture of expectations amongst practitioners, as SENCOs transition from being ‘fixers’ to being ‘enablers’ of teachers. Outcomes indicate that SENCOs can empower teaching staff through the dissemination of specialist knowledge. However, resources must be clearly identified for such dissemination to take place. It is imperative that SENCOs and teachers alike address the absolution of responsibility that arises when ownership of, and accountability for, the planning and implementation of SEN provision are not clarified, so as to ensure the promotion of a positive school ethos around inclusive practices. Optimal outcomes through effective SEN interventions and teaching practices are positively correlated with the inclusion of teachers in the planning and execution of SEN provision. An international audience can consider how the key findings are manifest in a global context, with reference to their own educational settings. Research outcomes can aid the development of the specific competencies needed to shape optimal inclusive educational settings in accordance with official global priorities pertaining to inclusion.

Keywords: inclusion, school professionals, school leadership, special educational needs (SEN), special educational needs coordinators (SENCOs)

Procedia PDF Downloads 189
484 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, these treatments are reported not to be comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states is carried out using FE analyses. First, an FE model of the human knee joint in the normal (‘intact’) state was constructed using magnetic resonance (MR) tomography images and the image-construction code Materialise Mimics. Next, two types of meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied among the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes pathological change, for diagnosis. 
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and meniscus, was constructed based on MR images of the human knee joint; the model was meshed with tetrahedral FE elements using the image-processing code Materialise Mimics. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other, but higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
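For reference, the generalized Kelvin (Prony-series) form named in conclusion 2 is commonly written for the relaxation modulus as (generic notation, not necessarily the authors'):

```latex
G(t) = G_\infty + \sum_{i=1}^{n} G_i \, e^{-t/\tau_i},
\qquad \tau_i = \frac{\eta_i}{G_i},
```

where $G_\infty$ is the long-term modulus and each $(G_i, \tau_i)$ pair corresponds to one spring-dashpot branch, whose parameters are fitted to experimental stress-strain curves.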

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 240
483 Development of Solar Poly House Tunnel Dryer (STD) for Medicinal Plants

Authors: N. C. Shahi, Anupama Singh, E. Kate

Abstract:

Drying is practiced to enhance storage life, to minimize losses during storage, and to reduce transportation costs of agricultural products. Drying processes range from open sun drying to industrial drying. In most developing countries, the use of fossil fuels for drying agricultural products has not been practically feasible due to costs unaffordable to the majority of farmers. On the other hand, traditional open sun drying, practiced on a large scale in the rural areas of developing countries, suffers from high product losses due to inadequate drying, fungal growth, encroachment of insects, birds and rodents, etc. To overcome these problems, a low-cost, intermediate-technology dryer needs to be developed for farmers. In mechanical dryers, heated air is the main driving force for the removal of moisture. The air is heated either electrically or by burning wood, coal, natural gas, etc. using heaters. But all these common sources have finite supplies: the lifetime is estimated to range from 15 years for natural gas to nearly 250 years for coal, so mankind must turn towards their safe and reliable utilization, and burning them may have undesirable side effects. Mechanical drying involves a higher cost of drying, while open sun drying deteriorates the quality. The solar tunnel dryer is one of the promising options for drying various agricultural and agro-industrial products on a large scale. The advantage of the solar tunnel dryer is its relatively low cost of construction and operation. Although many solar dryers have been developed, there is still scope for modifying them. Therefore, an attempt was made to develop a solar tunnel dryer and test its performance using a highly perishable commodity, i.e. leafy vegetables (spinach). The effects of air velocity, loading density and shade net on performance parameters, namely collector efficiency, drying efficiency, overall efficiency of the dryer and specific heat energy consumption, were also studied. 
Thus, the need for an intermediate-level technology was realized, and an effort was made to develop a small-scale solar tunnel dryer. The dryer consisted of a base frame, a semi-cylindrical drying chamber, a solar collector and absorber, an air distribution system with chimney, an auxiliary heating system, and wheels for mobility as its main functional components. Drying of fenugreek was carried out to analyze the performance of the dryer. The dryer temperature was maintained using the auxiliary heating system. The ambient temperature was in the range of 12-33°C. The relative humidity inside and outside the dryer was in the range of 21-75% and 35-79%, respectively. The solar radiation was in the range of 350-780 W/m² during the experimental period. Studies revealed that the total drying time was in the range of 230 to 420 min. The drying time in the solar tunnel dryer was reduced considerably, by 67%, compared to sun drying. The collector efficiency, drying efficiency, overall efficiency and specific heat consumption were found to be in the ranges of 38.71-50.06%, 15.53-24.72%, 4.25-13.34% and 1897.54-3241.36 kJ/kg, respectively.
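The efficiency figures above follow from standard definitions of these performance parameters. A hedged sketch: the formulas are textbook definitions, and the input values are illustrative assumptions chosen to land near the reported ranges, not measurements from the study:

```python
def collector_efficiency(m_dot_air, cp_air, t_out, t_in, irradiance, area):
    """Useful heat gained by the air stream / incident solar energy.
    m_dot_air in kg/s, cp_air in J/(kg K), irradiance in W/m^2, area in m^2."""
    return m_dot_air * cp_air * (t_out - t_in) / (irradiance * area)

def drying_efficiency(water_evaporated_kg, latent_heat_kj_per_kg, heat_supplied_kj):
    """Fraction of supplied heat actually used to evaporate moisture."""
    return water_evaporated_kg * latent_heat_kj_per_kg / heat_supplied_kj

def specific_heat_consumption(heat_supplied_kj, water_evaporated_kg):
    """Heat supplied per kg of water removed (kJ/kg)."""
    return heat_supplied_kj / water_evaporated_kg

# illustrative values only
eta_c = collector_efficiency(0.03, 1005.0, 42.0, 25.0, 600.0, 1.5)  # ≈ 0.57
eta_d = drying_efficiency(0.5, 2260.0, 7500.0)                      # ≈ 0.15
shc = specific_heat_consumption(6000.0, 2.5)                        # 2400 kJ/kg
```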

Keywords: overall efficiency, solar tunnel dryer, specific heat consumption, sun drying

Procedia PDF Downloads 309
482 A Five-Year Experience of Intensity Modulated Radiotherapy in Nasopharyngeal Carcinomas in Tunisia

Authors: Omar Nouri, Wafa Mnejja, Fatma Dhouib, Syrine Zouari, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Leila Farhat, Nejla Fourati, Jamel Daoud

Abstract:

Purpose and Objective: The intensity-modulated radiotherapy (IMRT) technique, associated with induction chemotherapy (IC) and/or concomitant chemotherapy (CC), is currently the recommended treatment modality for nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the therapeutic results and the patterns of relapse with this treatment protocol. Material and methods: A retrospective monocentric study of 145 patients with NPC treated between June 2016 and July 2021. All patients received IMRT with a simultaneous integrated boost (SIB) in 33 daily fractions, at doses of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume and 54 Gy for the low-risk volume. The high-risk volume dose was 66.5 Gy in children. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare factors that may influence survival. Results: The median age was 48 years (11-80), with a sex ratio of 2.9. One hundred and twenty tumors (82.7%) were classified as stage III-IV according to the 2017 UICC TNM classification. Ten patients (6.9%) were metastatic at diagnosis. One hundred and thirty-five patients (93.1%) received IC, 104 of which (77%) were TPF-based (taxanes, cisplatin and 5-fluorouracil). One hundred and thirty-eight patients (95.2%) received CC, mostly cisplatin (134 cases, 97%). After a median follow-up of 50 months [22-82], 46 patients (31.7%) had a relapse: 12 (8.2%) experienced local and/or regional relapse after a median of 18 months [6-43], 29 (20%) experienced distant relapse after a median of 9 months [2-24], and 5 patients (3.4%) had both. Thirty-five patients (24.1%) died, including 5 (3.4%) from a cause other than their cancer. Three-year overall survival (OS), cancer-specific survival, disease-free survival, metastasis-free survival and loco-regional relapse-free survival were 78.1%, 81.3%, 67.8%, 74.5% and 88.1%, respectively. Anatomo-clinical factors predicting OS were age > 50 years (88.7 vs. 
70.5%; p=0.004), diabetes history (81.2 vs. 66.7%; p=0.027), UICC N classification (100 vs. 95 vs. 77.5 vs. 68.8% for N0, N1, N2 and N3, respectively; p=0.008), a lymph node biopsy having been performed (84.2 vs. 57%; p=0.05), and UICC TNM stage (93.8 vs. 73.6% for stages I-II vs. III-IV; p=0.044). Therapeutic factors predicting OS were the number of CC courses (fewer than 4 courses: 65.8 vs. 86%; p=0.03; fewer than 5 courses: 71.5 vs. 89%; p=0.041), weight loss > 10% during treatment (84.1 vs. 60.9%; p=0.021) and a total cumulative cisplatin dose (IC plus CC) < 380 mg/m² (64.4 vs. 87.6%; p=0.003). Radiotherapy delay and total duration did not significantly affect OS. No grade 3-4 late side effects were noted in the 127 evaluable patients (87.6%). The most common toxicity was dry mouth, which was grade 2 in 47 cases (37%) and grade 1 in 55 cases (43.3%). Conclusion: IMRT for nasopharyngeal carcinoma has provided a high loco-regional control rate over the last five years. However, distant relapses remain frequent and condition the prognosis. We identified several anatomo-clinical and therapeutic prognostic factors. Therefore, high-risk patients require a more aggressive therapeutic approach, such as radiotherapy dose escalation or the addition of adjuvant chemotherapy.
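The Kaplan-Meier product-limit estimate behind the survival figures can be sketched in a few lines. The follow-up data below are hypothetical, not the study cohort:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times  : follow-up time per patient
    events : 1 = event observed (death/relapse), 0 = censored
    Returns a list of (t, S(t)) pairs at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        if d > 0:
            s *= 1.0 - d / n_at_risk              # product-limit update
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

times = [6, 9, 18, 24, 43]   # months to event or censoring (hypothetical)
events = [1, 1, 1, 0, 1]     # the 24-month observation is censored
curve = kaplan_meier(times, events)
# S drops to 0.8 at t=6, 0.6 at t=9, ≈0.4 at t=18, 0.0 at t=43
```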

Keywords: therapeutic results, prognostic factors, intensity-modulated radiotherapy, nasopharyngeal carcinoma

Procedia PDF Downloads 60
481 Previously Undescribed Cardiac Abnormalities in Two Unrelated Autistic Males with Causative Variants in CHD8

Authors: Mariia A. Parfenenko, Ilya S. Dantsev, Sergei V. Bochenkov, Natalia V. Vinogradova, Olga S. Groznova, Victoria Yu. Voinova

Abstract:

Introduction: Autism is the most common neurodevelopmental disorder. It is characterized by difficulties in social interaction and adherence to stereotypic behavioral patterns, and frequently co-occurs with epilepsy, intellectual disability, connective tissue disorders, and other conditions. CHD8 codes for chromodomain-helicase-DNA-binding protein 8, a chromatin remodeler that regulates cellular proliferation and neurodevelopment in embryogenesis; CHD8 is one of the genes most frequently involved in autism. Patients and methods: Two unrelated male patients, P3 and P12, aged 3 and 12 years, underwent whole genome sequencing, which determined that they carried different likely pathogenic variants, both previously undescribed in the literature. Sanger sequencing later determined that P12 inherited the variant from his affected mother. Results: P3 and P12 presented with autism, developmental delay, ataxia, sleep disorders, overgrowth, and macrocephaly, as well as other clinical features typically present in patients with causative variants in CHD8. The mother of P12 also has autistic traits, as well as ataxia, hypotonia, sleep disorders, and other symptoms. However, P3 and P12 also have different cardiac abnormalities. P3 had signs of a repolarization disorder: a flattened T wave in leads III and aVF and a negative T wave in leads V1-V2. He also had structural valve anomalies with associated regurgitation, local contractility impairment of the left ventricle, and diastolic dysfunction of the right ventricle. Meanwhile, P12 had Wolff-Parkinson-White syndrome and underwent radiofrequency ablation at the age of 2 years. At the time of observation, P12 had mild sinus arrhythmia and an incomplete right bundle branch block, as well as arterial hypertension. Discussion: Cardiac abnormalities were not previously reported in patients with causative variants in CHD8. 
The underlying mechanism for the formation of these abnormalities is currently unknown. Two hypotheses are a disordered interaction with CHD7, another chromodomain remodeler known to be directly involved in the cardiophenotype of CHARGE syndrome (a rare condition characterized by coloboma, heart defects and growth abnormalities), or disrupted functioning of CHD8 as an A-kinase anchoring protein, a class of proteins known to modulate cardiac function. Conclusion: We observed two unrelated autistic males with likely pathogenic variants in CHD8 who presented with typical symptoms of CHD8-related neurodevelopmental disorder, as well as cardiac abnormalities. Cardiac abnormalities have, until now, been considered uncharacteristic of patients with causative variants in CHD8. Further accumulation of data, including experimental evidence of the involvement of CHD8 in heart formation, will elucidate the mechanism underlying the cardiophenotype of these patients. Acknowledgements: Molecular genetic testing of the patients was made possible by the Charity Fund for medical and social genetic aid projects «Life Genome».

Keywords: autism spectrum disorders, chromodomain-helicase-DNA-binding protein 8, neurodevelopmental disorder, cardiophenotype

Procedia PDF Downloads 84
480 An Architecture of Ingenuity and Empowerment

Authors: Timothy Gray

Abstract:

This paper will present work and discuss lessons learned during a semester-long travel study based in Southeast Asia, run first in the spring semester of 2019 and again in the summer of 2023. The first travel group consisted of fifteen students and the second of twelve, ranging from second-year to graduate level and majoring in either architecture or planning. Students worked in interdisciplinary teams, each team beginning the travel study by living together for over a month, under (relatively) remote conditions, in a separate small town in rural Thailand. Students became intimately familiar with these towns, forged strong personal relationships, and built reservoirs of knowledge one conversation at a time. Rather than impose external ideas and solutions, students were asked to learn from and be open to lessons from the people and the place. The following design statement was used as a point of departure for their investigations: It is our shared premise that architecture exists in the small villages and towns of Southeast Asia in the ingenuity of the people; that architecture exists in a shared language of making, modifying, and reusing. It is a modest but vibrant architecture, an architecture that is alive and evolving, an architecture that is small in scale, accessible, and one that emerges from the people. It is an architecture that can exist in a modified bicycle, a woven bamboo bridge, or a self-built community. Students were challenged to engage existing conditions as design professionals, both empowering and lending coherence to the energies that already existed in the place. As one of the student teams noted in their design narrative: “During our field study, we had the unique opportunity to tour a number of informal settlements and meet and talk to residents through interpreters. We found that many of the residents work in nearby factories for dollars a day. 
Others find employment in self-generated informal economies such as hand carving and textiles. Despite extreme poverty, we found these places to be vibrant and full of life as people navigate these challenging conditions to live lives with purpose and dignity.” Students worked together with local community members and colleagues to develop a series of varied proposals that emerged from their interrogations of place, partnering with community members and professional colleagues in the development of these proposals. Project partners included faculty and student colleagues at Yangon University, the mayor's office, planning department officials and religious leaders in Sawankhalok, Thailand, and community leaders in Natonchan, Thailand, to name a few. This paper will present a series of student community-based design projects that emerged from these conditions. The paper will also discuss this model of travel study as a way of building an architecture that uses social and cultural issues as a catalyst for design, and the lessons relative to sustainable development that the Western students learned through their travels in Southeast Asia.

Keywords: travel study, CAPasia, architecture of empowerment, modular housing

Procedia PDF Downloads 45
479 Improving School Design through Diverse Stakeholder Participation in the Programming Phase

Authors: Doris C. C. K. Kowaltowski, Marcella S. Deliberador

Abstract:

The architectural design process, in general, is becoming more complex as new technical, social, environmental, and economic requirements are imposed. For school buildings, this scenario also holds. The quality of a school building depends on known design criteria and professional knowledge, as well as feedback from building performance assessments. To attain high-performance school buildings, the design process should include a multidisciplinary team, working through an integrated process, to ensure that the various specialists contribute to design solutions at an early stage. The participation of stakeholders is of special importance at the programming phase, when the search for the most appropriate design solutions is underway. A multidisciplinary team should comprise specialists in education, design professionals, and consultants in fields such as environmental comfort and psychology, sustainability, and safety and security, as well as administrators, public officials and neighbourhood representatives. Users, or potential users (teachers, parents, students, school officials, and staff), should be involved. User expectations must be guided, however, toward a proper understanding of how design responds to needs, to avoid disappointment. In this context, appropriate tools should be introduced to organize such diverse participants and ensure a rich and focused response to needs and a productive outcome of programming sessions. In this paper, the different stakeholders in a school design process are discussed in relation to their specific contributions, and a tool in the form of a card game is described to structure the design debates and ensure a comprehensive decision-making process. The game is based on design patterns for school architecture as found in the literature and is adapted to a specific reality: state-run public schools in São Paulo, Brazil. 
In this State, school buildings are managed by a foundation called Fundação para o Desenvolvimento da Educação (FDE). FDE supervises new designs and is responsible for the maintenance of ~5000 schools. The design process in this context was characterised, with a recommendation to improve the programming phase. Card games can create a common environment to which all participants can relate and, therefore, can contribute to briefing debates on an equal footing. The cards of the game described here represent essential school design themes as found in the literature. The tool was tested with stakeholder groups and with architecture students. In both situations, the game proved to be an efficient tool to stimulate school design discussions and to aid the elaboration of a rich, focused and thoughtful architectural program for a given demand. The game organizes the debates, and all participants were shown to contribute spontaneously, each in their own field of expertise, to the decision-making process. Although the game was based on a specific local school design process, it shows potential for other contexts because its content is based on known facts, needs and concepts of school design, which are global. A structured briefing phase with diverse stakeholder participation can enrich the design process and consequently improve the quality of school buildings.

Keywords: architectural program, design process, school building design, stakeholder

Procedia PDF Downloads 402
478 The Pigeon Circovirus Evolution and Epidemiology under Conditions of One Loft Race Rearing System: The Preliminary Results

Authors: Tomasz Stenzel, Daria Dziewulska, Ewa Łukaszuk, Joy Custer, Simona Kraberger, Arvind Varsani

Abstract:

Viral diseases, especially those that impair the immune system, are among the most important problems in avian pathology. However, little data is available on this subject for species other than commercial poultry. Recently, increasing attention has been paid to racing pigeons, which have been bred for many years for their ability to return to their place of origin. Currently, these birds are used for races over distances from 100 to 1000 km, and winning pigeons are highly valuable. The rearing system of racing pigeons contradicts the principles of biosecurity, as birds originating from various breeding facilities are commonly transported and reared together in “One Loft Race” (OLR) facilities. This favors the spread of multiple infections and provides conditions for the development of novel variants of various pathogens through recombination. One of the most significant viruses occurring in this avian species is the pigeon circovirus (PiCV), which is detected in ca. 70% of pigeons. Circoviruses are characterized by vast genetic diversity, which is due, among other things, to recombination: the exchange of fragments of genetic material among various strains of the virus during the infection of one organism. The rate and intensity of the emergence of novel PiCV recombinants have not been determined so far. For this reason, an experiment was performed to investigate the frequency with which novel PiCV recombinants develop in racing pigeons kept in OLR-type conditions. Fifteen racing pigeons originating from five different breeding facilities, subclinically infected with various PiCV strains, were housed in one room for eight weeks to mimic the conditions of OLR rearing.
Blood and swab samples were collected from the birds every seven days to recover complete PiCV genomes, which were amplified through Rolling Circle Amplification (RCA), cloned, sequenced, and subjected to bioinformatic analyses aimed at determining the genetic diversity and the dynamics of recombination among the viruses. In addition, the virus shedding rate, the level of viremia, the expression of IFN-γ and interferon-related genes, and anti-PiCV antibodies were determined to enable a complete analysis of the course of infection in the flock. Initial results yielded 336 full PiCV genomes, exhibiting nucleotide similarity ranging from 86.6 to 100%; eight of these were recombinants originating from viruses of different lofts of origin. The first recombinant appeared after seven days of the experiment, but most of the recombinants appeared after 14 and 21 days of joint housing. The level of viremia and virus shedding was highest in the second week of the experiment and gradually decreased toward its end, which partially corresponded with Mx1 gene expression and antibody dynamics. The results show that the OLR pigeon-rearing system could play a significant role in spreading infectious agents such as circoviruses and in driving PiCV evolution through recombination. It is therefore worth considering whether a popular gambling game such as pigeon racing is sensible from both an animal welfare and an epidemiological point of view.
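The nucleotide similarity figures reported above (86.6 to 100%) correspond to pairwise percent identity between aligned genome sequences. As a minimal sketch of that metric, assuming two already-aligned sequences of equal length (the toy fragments below are hypothetical, not PiCV data):

```python
# Hedged illustration of pairwise nucleotide percent identity between
# two aligned sequences of equal length (the metric behind similarity
# ranges such as 86.6-100%). The sequences below are toy examples.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Return the percentage of matching positions in two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two toy 10-nt aligned fragments differing at a single position:
print(percent_identity("ATGCCGTTAG", "ATGCCGTTAC"))  # 90.0
```

Real analyses of full ~2 kb circovirus genomes would first require a multiple sequence alignment; this sketch only shows the identity calculation itself.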

Keywords: pigeon circovirus, recombination, evolution, one loft race

Procedia PDF Downloads 68
477 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars

Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic

Abstract:

Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions, or guidelines. Disposal of sewage sludge matters not only for satisfying the regulations but also for choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts ranks as the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, generating only negligible amounts of waste that require landfilling. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new byproduct in the form of ash. Sewage sludge ash occupies three to five times less volume than stabilized and dehydrated sludge, but it still requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. Because the main oxides present in sewage sludge ash (SiO2, Al2O3, and CaO) are similar to those in cement, the ash can be considered a latent hydraulic and pozzolanic material. The physical and chemical characteristics of ashes, generated from sludge collected at different wastewater treatment plants and incinerated in laboratory conditions at different temperatures, are investigated, since this is a prerequisite for subsequent recycling and eventual use in other industries.
The research was carried out by replacing up to 20% of the cement by mass in cement mortar mixes with the different ashes obtained and examining the characteristics of the resulting mixes in the fresh and hardened states. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water demand when ash was used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. However, it is important to point out that the compressive strengths always remained above 85% of those of the reference mix, while the flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing leaching concentrations of heavy metals. The results demonstrate that sewage sludge ash can satisfy the technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry – from infrastructure to innovative construction products'.
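The replacement levels described above are by mass of cement. As a minimal sketch of that arithmetic, assuming a hypothetical reference batch of 450 g of cement (the batch size is an assumption for illustration, not a figure from the study):

```python
# Hedged sketch: cement and ash masses at a given replacement level by
# mass. The 450 g reference cement mass below is an assumed example
# batch, not a quantity reported in the abstract.

def replacement_masses(cement_ref_g: float, replacement: float):
    """Return (cement_g, ash_g) when a fraction `replacement` (0-1)
    of the reference cement mass is substituted with sewage sludge ash."""
    if not 0.0 <= replacement <= 1.0:
        raise ValueError("replacement must be a fraction between 0 and 1")
    ash_g = cement_ref_g * replacement
    return cement_ref_g - ash_g, ash_g

# Highest replacement level tested in the study (20% by mass):
cement_g, ash_g = replacement_masses(450.0, 0.20)
print(cement_g, ash_g)  # 360.0 90.0
```

The same helper covers any intermediate replacement level up to the 20% maximum examined.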

Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal

Procedia PDF Downloads 243