Search results for: input mixed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4806

756 Design, Numerical Simulation, Fabrication and Physical Experimentation of the Tesla’s Cohesion Type Bladeless Turbine

Authors: M. Sivaramakrishnaiah, D. S. Nasan, P. V. Subhanjeneyulu, J. A. Sandeep Kumar, N. Sreenivasulu, B. V. Amarnath Reddy, B. Veeralingam

Abstract:

Design, numerical simulation, fabrication, and physical experimentation of the Tesla's Bladeless centripetal turbine for generating electrical power are presented in this research paper. Pressurized air combined with water via a nozzle system is made to pass tangentially through a set of parallel smooth disc surfaces, which impart rotational motion to the discs fastened to a common shaft for power generation. The power generated depends upon the speed of the fluid leaving the nozzle inlet. Physically, due to the laminar boundary layer phenomenon at the smooth disc surface, the high-speed fluid layers away from the plate, moving against the low-speed fluid layers nearer to the plate, develop a tangential drag from the viscous shear forces. This compels the nearer layers to be dragged along with the faster layers, causing the disc to spin. SolidWorks design software and fluid mechanics and machine element design theories were used to compute the mechanical design specifications of turbine parts such as the 48 mm diameter discs, common shaft, central exhaust, plenum chamber, swappable nozzle inlets, etc. ANSYS CFX 2018 was used for the numerical simulation of the physical phenomena encountered in the turbine's working. When the various numerical simulation and physical experimental results were compared, there was good agreement between them, both quantitatively and qualitatively. The sources of input and the size of the blades may affect the power generated and the turbine efficiency, respectively. The results may change if there is a change in the fluid flowing between the discs. Studies of inlet fluid pressure versus turbine efficiency and of the number of discs versus turbine power, based on both sets of results, were carried out to develop the relationships between the inlet and outlet parameters of the turbine. The present research work obtained turbine efficiency in the range of 7-10%, and for this range the electrical power output generated was 50-60 W.
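
As a rough companion to the reported 7-10% efficiency and 50-60 W output, the sketch below estimates turbine efficiency as electrical output over hydraulic input power. It is a minimal sketch under stated assumptions: the nozzle pressure drop and flow rate are illustrative values, not measurements from the paper.

```python
# Hedged sketch: estimate Tesla-turbine efficiency from assumed inlet conditions.
# All numeric inputs below are illustrative assumptions, not values from the study.

def hydraulic_power(delta_p_pa: float, flow_m3_s: float) -> float:
    """Power carried by the fluid entering the nozzle: P = delta_p * Q."""
    return delta_p_pa * flow_m3_s

def turbine_efficiency(p_electrical_w: float, delta_p_pa: float, flow_m3_s: float) -> float:
    """Ratio of electrical output to hydraulic input power."""
    return p_electrical_w / hydraulic_power(delta_p_pa, flow_m3_s)

if __name__ == "__main__":
    delta_p = 2.0e5      # assumed nozzle pressure drop [Pa]
    q = 3.0e-3           # assumed volumetric flow rate [m^3/s]
    p_elec = 55.0        # electrical output within the reported 50-60 W band
    eta = turbine_efficiency(p_elec, delta_p, q)
    print(f"Estimated efficiency: {eta:.1%}")  # ~9% with these assumed inputs
```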

Keywords: tesla turbine, cohesion type bladeless turbine, boundary layer theory, tangential fluid flow, viscous and adhesive forces, plenum chamber, pico hydro systems

Procedia PDF Downloads 75
755 Mental Health Impacts of COVID-19 on Diverse Youth and Families in Canada

Authors: Lucksini Raveendran

Abstract:

Introduction: This mixed-methods study focuses on the experiences of ethnocultural youth and families in Canada, identifying key barriers and opportunities to inform service programming and policies that can better meet their mental health needs during the COVID-19 pandemic and beyond. Methods: The Mental Health Commission of Canada's Headstrong initiative administered the youth survey (April-June 2020) and the family survey (June-August 2020), with total sample sizes of 137 and 481 respondents, respectively. Thematic analysis was conducted to identify key challenges faced, coping strategies used, and help-seeking behaviours. A similar approach was also applied to the family survey data, but instead a representative sample was collated to analyze geographically variable and ethnically diverse subgroups. Results and analysis: Multiple challenges have impacted families, including increased feelings of loneliness and distress from border travel restrictions, especially among those navigating pregnancy alone or managing children with developmental needs, a group that is often understudied. Also, marginalized groups were disproportionately affected by inequitable access to communication technologies, further deepening the digital divide. Some reported living in congregated homes with regular conflicts, leading to increased anxiety and exposure to violence. For many families, urbanicity and ethnicity played a key role in how they reported coping with feelings of uncertainty while managing work commitments, navigating community resources, fulfilling care responsibilities, and homeschooling children of all ages. Despite these challenges, there was evidence of post-traumatic growth and the building of community resiliency. Conclusions and implications for policy, practice, or additional research: There is a need to foster opportunities to promote and sustain mental health, wellness, and resilience for families through social connections. Also, intersectionality must be embedded in the collection, analysis, and application of data to improve equitable access to evidence-based and recovery-oriented mental health supports among diverse families in Canada. Lastly, future research should address the long-term impacts of COVID-19 border travel restrictions on family wellness.

Keywords: mental health, youth mental health, family wellness, health equity

Procedia PDF Downloads 84
754 Novel Low-cost Bubble CPAP as an Alternative Non-invasive Oxygen Therapy for Newborn Infants with Respiratory Distress Syndrome in a Tertiary Level Neonatal Intensive Care Unit in the Philippines: A Single Blind Randomized Controlled Trial

Authors: Navid P Roodaki, Rochelle Abila, Daisy Evangeline Garcia

Abstract:

Background and Objective: Respiratory distress syndrome (RDS) among premature infants is a major cause of neonatal death. The use of continuous positive airway pressure (CPAP) has become a standard of care for preterm newborns with RDS; hence, cost-effective innovations are needed. This study compared a novel low-cost bubble CPAP (bCPAP) device with ventilator-driven CPAP in the treatment of RDS. Methods: This was a single-blind, randomized controlled trial conducted from May 2022 to October 2022 in a Level III Neonatal Intensive Care Unit in the Philippines. Preterm newborns (<36 weeks) with RDS were randomized to receive the Vayu bCPAP device or ventilator-driven CPAP. Arterial blood gases, oxygen saturation, administration of surfactant, and CPAP failure rates were measured. Results: Seventy preterm newborns were included. No differences were observed between ventilator-driven CPAP and Vayu bCPAP in PaO2 (97.51 mmHg vs 97.37 mmHg), oxygen saturation (97.08% vs 95.60%), or the amount of surfactant administered between groups. There were no observed differences in CPAP failure rates between Vayu bCPAP (mean 3.23 days) and ventilator-driven CPAP (mean 2.98 days). However, a significant difference was noted in the CO2 level (40.32 mmHg vs 50.70 mmHg), which was higher among those hooked to ventilator-driven CPAP (p = 0.004). Conclusion: This study has shown that the novel low-cost bubble CPAP (Vayu bCPAP) can be used as an efficacious alternative non-invasive oxygen therapy for preterm neonates with RDS; although CO2 levels were higher among those hooked to ventilator-driven CPAP, the other outcome parameters measured showed that both devices are comparable. Recommendation: A multi-center or national study is recommended to account for geographic region, which may alter the outcomes of patients connected to different ventilatory support. A cost comparison between devices is also suggested. A mixed-methods study assessing the experiences of health care professionals in assembling and utilizing the device is a second consideration.
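
Below is a minimal, hedged sketch of the kind of two-group comparison behind the reported CO2 difference (40.32 vs 50.70 mmHg, p = 0.004). The abstract does not state which test was used; a two-sample t-test is shown as one plausible choice, and the arrays are simulated placeholders, not the trial's patient-level data.

```python
# Hedged sketch: comparing CO2 levels between the two CPAP groups.
# The simulated samples below are illustrative, not the trial's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
co2_bubble_cpap = rng.normal(loc=40.3, scale=8.0, size=35)       # assumed spread
co2_ventilator_cpap = rng.normal(loc=50.7, scale=8.0, size=35)   # assumed spread

t_stat, p_value = stats.ttest_ind(co2_bubble_cpap, co2_ventilator_cpap)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```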

Keywords: bubble CPAP, ventilator-derived CPAP, infant, premature, respiratory distress syndrome

Procedia PDF Downloads 67
753 Dynamic Changes of Shifting Cultivation: Past, Present and Future Perspective of an Agroforestry System from Sri Lanka

Authors: Thavananthan Sivananthawerl

Abstract:

Shifting cultivation (chena, slash-and-burn) is a method of raising primarily annual food crops in which an area of land is cleared of its vegetation, cultivated for a period, and then abandoned (fallowed) so that its fertility can be naturally restored. Although this is the oldest farming system (more than 5000 years old), it is still practiced by indigenous communities in several countries and regions, such as Sri Lanka, India, Indonesia, Malaysia, Myanmar, West and Central Africa, and the Amazon rainforest area. In Sri Lanka, shifting cultivation is mainly practiced during the North-East monsoon (the Maha season, from September to December) with no irrigation. The traditional system allowed farmers a short cultivation period and a long fallow period, facilitated mainly by the availability of land and a small population. In addition, in the old system, cultivation practices were mostly tied to religious and spiritual practices (astrology, dynamic farming, etc.). At present, the majority of shifting cultivators (SCs) cultivate on government lands, and most of them have adopted new technology (seeds, agrochemicals, machinery). Due to local demand, almost 70% of SCs grow maize as a mono-crop, and the rest grow mixed crops such as groundnut, cowpea, millet, and vegetables. To ensure continuous cultivation and reduce moisture stress, they have established dug wells and use pumps to lift water from nearby sources. As a result, the fallow period has been reduced drastically to 1-2 years. For the system to prosper in the future, farmers should be educated so that they understand the harmful effects of shifting cultivation, and new policies and a framework are required for converting the land-use pattern towards higher economic returns (new crop varieties, maintaining soil fertility, reducing soil erosion) while protecting the natural forests. The practice of agroforestry should be encouraged, in which farmers care for both crops and tall trees simultaneously. To facilitate continuous cultivation, the system needs water harvesting, water-conserving technologies, and scientific water management for the limited rainy season. Even though several options are available, the solutions vary from region to region. Therefore, only the government and cultivators together can find solutions to the problems of specific areas.

Keywords: shifting cultivation, agroforestry, fallow, economic returns, government, Sri Lanka

Procedia PDF Downloads 83
752 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNNs), have been used to model language and dialogue for conversational agents to perform tasks such as giving technical support and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information such as the genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis: ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’; the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11 and m=3.12, respectively. There is room for improvement in these models as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audiences’ attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
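
The abstract gives no code, so the sketch below illustrates the general shape of a genre-conditioned encoder-decoder (seq2seq) model in PyTorch. Vocabulary sizes, layer dimensions, and the way the genre is injected are illustrative assumptions, not the authors' architecture.

```python
# Minimal encoder-decoder (seq2seq) sketch for description -> title generation.
# Dimensions, vocab sizes, and genre handling are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqTitler(nn.Module):
    def __init__(self, vocab_size=10000, n_genres=20, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.genre_embed = nn.Embedding(n_genres, hid_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, description_ids, genre_id, title_ids):
        # Encode the movie description into a single hidden state.
        _, h = self.encoder(self.embed(description_ids))
        # Condition the decoder on both the description and the genre.
        h = h + self.genre_embed(genre_id).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(title_ids), h)
        return self.out(dec_out)  # logits over the title vocabulary

# Toy forward pass with random token ids (batch of 2).
model = Seq2SeqTitler()
desc = torch.randint(0, 10000, (2, 40))   # tokenized descriptions
genre = torch.randint(0, 20, (2,))        # genre ids
title = torch.randint(0, 10000, (2, 6))   # shifted target title tokens
logits = model(desc, genre, title)
print(logits.shape)  # torch.Size([2, 6, 10000])
```

In practice the model would be trained with teacher forcing and a cross-entropy loss over the title tokens; the toy forward pass above only demonstrates the data flow.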

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 315
751 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in a dielectric fluid. In this paper, a study is performed on the influence of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor, multi-response situations. A design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, namely pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on material removal and surface roughness has been carried out using a central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models by using the Student's t-test, and the F-test is used to check the goodness of fit. CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are then treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process.
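
To make the response-surface step concrete, the sketch below fits a second-order polynomial model with an interaction term by least squares, the core calculation behind a central composite design analysis. The two coded factors and the synthetic responses are illustrative assumptions, not the study's EDM measurements.

```python
# Hedged sketch: fitting a second-order response-surface model with interaction
# terms, as used in central composite designs. Data are synthetic placeholders.
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x1, x2, x1^2, x2^2, x1*x2 (two factors kept for brevity)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))          # coded levels, e.g. current and pulse-on time
true_beta = np.array([5.0, 1.2, -0.8, -0.5, 0.3, 0.6])
y = quadratic_design_matrix(X) @ true_beta + rng.normal(0, 0.1, 20)  # e.g. MRR

beta_hat, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print("Fitted coefficients:", np.round(beta_hat, 3))
```

In a full analysis each coefficient would then be screened with a t-test and the model checked with an F-test before the fitted surface is optimized, as described above.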

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 331
750 Properties of Sustainable Artificial Lightweight Aggregate

Authors: Wasan Ismail Khalil, Hisham Khalid Ahmed, Zainab Ali

Abstract:

Structural lightweight aggregate concrete (SLWAC) has been developed in recent years because it reduces the dead load, cost, thermal conductivity, and coefficient of thermal expansion of the structure, so SLWAC has the advantage of being a relatively green building material. Lightweight aggregate (LWA) either occurs as a natural material, such as pumice and scoria, or is produced as an artificial material from different raw materials such as expanded shale, clay, and slate. The use of SLWAC in Iraq is limited due to the lack of natural LWA. The existence of Iraqi clay deposits with different types and characteristics led to the idea of producing artificial expanded clay aggregate. The main aim of this work is to present the properties of an artificial LWA produced in the laboratory. Available local bentonite clay, which occurs in the western region of Iraq, was used as the raw material to produce the LWA. Sodium silicate, a liquid industrial waste material from a glass plant, was mixed with the bentonite clay in a mix proportion of 1:1 by weight. The manufacturing method of the lightweight aggregate includes preparation and mixing of the clay and sodium silicate, burning of the mixture in a furnace at a temperature between 750 and 800 °C for two hours, and finally a gradual cooling process. The produced LWA was then crushed into small pieces, screened on a standard sieve series, and prepared with a grading which conforms to the specifications for LWA. The maximum aggregate size used in this investigation is 10 mm. The chemical composition and the physical properties of the produced LWA were investigated. The results indicate that the specific gravity of the produced LWA is 1.5, with a density of 543 kg/m3 and water absorption of 20.7%, which is in conformity with the international standards for LWA. Many trial mixes were carried out in order to produce LWAC containing the artificial LWA produced in this research. The selected mix proportion is 1:1.5:2 (cement:sand:aggregate) by weight with a water-to-cement ratio of 0.45. The experimental results show that the LWAC has an oven-dry density of 1720 kg/m3, water absorption of 8.5%, thermal conductivity of 0.723 W/m.K, and compressive strength of 23 N/mm2. The SLWAC produced in this research can be used in the construction of thermally insulated buildings and masonry units. It can be concluded that the SLWAC produced in this study contributes to sustainable development by using industrial waste materials, conserving energy, and enhancing the thermal and structural efficiency of concrete.

Keywords: expanded clay, lightweight aggregate, structural lightweight aggregate concrete, sustainable

Procedia PDF Downloads 317
749 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barrreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another possibility is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected 2 sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for a few measures derived from the control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson's r = .78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays. The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays had better quality than Timed essays. Also, Untimed essays were the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in the cognitive load between Timed and Untimed essays did not affect lexical acquisition.
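
As a minimal sketch of the kind of mixed-model analysis described (a random intercept per participant, condition as a fixed effect), the example below uses statsmodels. The data frame, column names, and the simulated scores are illustrative assumptions, not the study's dataset or its exact model specification.

```python
# Hedged sketch: a linear mixed model comparing vocabulary post-test scores
# across writing conditions with a random intercept per participant.
# Column names and simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
conditions = ["sentences", "timed_essay", "untimed_essay"]
rows = []
for pid in range(90):
    cond = conditions[pid % 3]
    subject_effect = rng.normal(0, 0.5)      # participant-level random intercept
    for _ in range(10):                      # ten keywords per participant
        rows.append({
            "participant": pid,
            "condition": cond,
            "vks_score": 3.0 + subject_effect + rng.normal(0, 1.0),
        })
data = pd.DataFrame(rows)

model = smf.mixedlm("vks_score ~ condition", data, groups=data["participant"])
result = model.fit()
print(result.summary())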

Keywords: learning academic words, writing essays, cognitive load, English as an L2

Procedia PDF Downloads 59
748 Early Intervention for Preschool Children of Parents with Mental Illness: The Evaluation of a Resource for Service Providers

Authors: Stella Laletas, Andrea Reupert, Melinda Goodyear, Bradley Morgan

Abstract:

Background: Many people with a mental illness have young children. Research has shown that early childhood is a particularly vulnerable time for children whose parents have a mental illness. Moreover, repeated research has demonstrated the effectiveness of a multiagency approach to family-focused practice for improving parental functioning and preventing adverse outcomes in children whose parents have a mental illness, particularly in the early years of a child's life. However, there is a paucity of professional development resources for professionals who work with families where a parent has a mental illness and has young children. Significance of the study: This study will make a contribution to addressing knowledge gaps around resource development and workforce needs for early childhood and mental health professionals working with young children where a parent has a mental illness. Objective: This presentation describes a newly developed resource, 'Pathways of Care', specifically designed for early childhood educators and mental health workers, alongside pilot evaluation data regarding its effectiveness. 'Pathways of Care' aims to promote collaborative practice and present early identification and referral processes for workers in this sector. The resource was developed by the Children of Parents with a Mental Illness (COPMI) National Initiative, which is funded by the Australian Government. Method: Using a mixed-methods design, the effectiveness of the training resource was evaluated. Fifteen workers completed the Family Focus Mental Health Practice Questionnaire before and after using the resource, to measure confidence and practice change; semi-structured interviews were also conducted with eight of these same workers to further explore the utility of the resource. Findings: The findings indicated the resource was effective in increasing knowledge and confidence, particularly for new and/or inexperienced staff. Examples of how the resource was used in practice by various professions emerged from the interview data. Conclusions: Collaborative practice, early identification and intervention in early childhood can potentially play a key role in altering the life trajectory of children who are at risk. This information has important implications for workforce development and staff training in both the early childhood and mental health sectors. Implications for policy and future research are discussed.

Keywords: parents with mental illnesses, early intervention, evaluation, preschool children

Procedia PDF Downloads 433
747 Evaluate Existing Mental Health Intervention Programs Tailored for International Students in China

Authors: Nargiza Nuralieva

Abstract:

This meta-analysis investigates the effectiveness of mental health interventions tailored for international students in China, with a specific focus on Uzbek students and Silk Road scholarship recipients. The comprehensive literature review synthesizes existing studies, papers, and reports, evaluating the outcomes, limitations, and cultural considerations of these programs. Data selection targets mental health programs for international students, honing in on a subset analysis related to Uzbek students and Silk Road scholarship recipients. The analysis encompasses diverse outcome measures, such as reported stress levels, utilization rates of mental health services, academic performance, and more. Results reveal a consistent and statistically significant reduction in reported stress levels, emphasizing the positive impact of these interventions. Utilization rates of mental health services witness a significant increase, highlighting the accessibility and effectiveness of support. Retention rates show marked improvement, though academic performance yields mixed findings, prompting nuanced exploration. Psychological well-being, quality of life, and overall well-being exhibit substantial enhancements, aligning with the overarching goal of holistic student development. Positive outcomes are observed in increased help-seeking behavior, positive correlations with social support, and significant reductions in anxiety levels. Cultural adaptation and satisfaction with interventions both indicate positive outcomes, underscoring the effectiveness of culturally sensitive mental health support. The findings emphasize the importance of tailored mental health interventions for international students, providing novel insights into the specific needs of Uzbek students and Silk Road scholarship recipients. This research contributes to a nuanced understanding of the multifaceted impact of mental health programs on diverse student populations, offering valuable implications for the design and refinement of future interventions. As educational institutions continue to globalize, addressing the mental health needs of international students remains pivotal for fostering inclusive and supportive learning environments.
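
The abstract does not state which pooling model was used, so the sketch below shows one common choice for this kind of synthesis: a DerSimonian-Laird random-effects pooling of standardized effect sizes. The effect sizes and variances are synthetic illustrations, not values from the reviewed studies.

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of per-study
# effect sizes, the usual machinery behind an intervention meta-analysis.
# The effects and variances below are synthetic examples.
import numpy as np

effects = np.array([0.45, 0.30, 0.60, 0.25, 0.50])    # per-study standardized mean differences
variances = np.array([0.02, 0.03, 0.05, 0.04, 0.03])  # their sampling variances

w_fixed = 1.0 / variances
fixed_mean = np.sum(w_fixed * effects) / np.sum(w_fixed)
q = np.sum(w_fixed * (effects - fixed_mean) ** 2)       # Cochran's Q
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                           # between-study variance

w_random = 1.0 / (variances + tau2)
pooled = np.sum(w_random * effects) / np.sum(w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"Pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f}), tau^2 = {tau2:.3f}")
```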

Keywords: international students, mental health interventions, cross-cultural support, silk road scholarship, meta-analysis

Procedia PDF Downloads 40
746 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany's total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, beyond the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for addressing these doubts, and they are becoming more and more important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for the support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered or only studied superficially. However, the effect of parameter uncertainties cannot be neglected: based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow for a first rough view of the results but do not take effects such as error propagation into account. Thereby, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effects of different uncertainty analyses on the interpretation in terms of resilience and robustness are shown. Hereby, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only by this can misleading interpretations be avoided and the results be used for resilient and robust decisions.
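
The sketch below illustrates the Monte Carlo step in miniature: sampling uncertain input parameters and propagating them through an impact model to obtain a distribution of the lifecycle result. The "impact model" and the parameter distributions are illustrative assumptions, not the exterior-wall model used in the study.

```python
# Hedged sketch: Monte Carlo propagation of input-parameter uncertainty onto a
# lifecycle indicator. Model and distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Assumed uncertain inputs (e.g. insulation thickness, material GWP factor, lifetime).
thickness_m = rng.normal(0.16, 0.02, n)                            # m
gwp_per_m3 = rng.lognormal(mean=np.log(80), sigma=0.3, size=n)     # kg CO2-eq per m3
lifetime_yr = rng.uniform(30, 60, n)                               # years

wall_area_m2 = 100.0
impact_per_year = wall_area_m2 * thickness_m * gwp_per_m3 / lifetime_yr

p5, p50, p95 = np.percentile(impact_per_year, [5, 50, 95])
print(f"Annualized GWP: median {p50:.1f}, "
      f"5th-95th percentile {p5:.1f}-{p95:.1f} kg CO2-eq/yr")
```

Reporting percentiles of the output distribution, rather than a single best-case/worst-case pair, is what allows the robustness of a decision to be judged.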

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 276
745 Teaching Self-Advocacy Skills to Students With Learning Disabilities: The S.A.M.E. Program of Instruction

Authors: Dr. Rebecca Kimelman

Abstract:

Teaching students to self-advocate has become a central topic in special education literature and practice. However, many special education programs do not address this important skill area. To this end, I created and implemented the Self Advocacy Made Easy (S.A.M.E.) program of instruction, intended to enhance the self-advocacy skills of young adults with mild to moderate disabilities. The effectiveness of S.A.M.E., the degree to which self-advocacy skills were acquired and demonstrated by the students, the level of parental support, the impact of culture on the process, and teachers' beliefs and attitudes about the role of self-advocacy skills for their students were measured using action research that employed a mixed methodology. Conducted at an overseas American International School, this action research study sought answers to these questions by providing an in-depth portrayal of the S.A.M.E. program, as well as the attitudes and perceptions of the stakeholders involved in the study (thirteen students, their parents, teachers and counsellors). The findings of this study were very positive. The S.A.M.E. program was found to be a valid and valuable instructional tool for teaching self-advocacy skills to students with learning disabilities and ADHD. The study showed that participation in the S.A.M.E. program led to an increased understanding of the important elements of self-advocacy, an increase in students' skills and abilities to self-advocate, and a positive increase in students' feelings about themselves. Inclusion in the Student-Led IEP meetings, an authentic student assessment within the S.A.M.E. program, also yielded encouraging results, including a higher level of ownership of one's profile and learning needs, a higher level of student engagement and participation in the IEP meeting, and a growing student awareness of the relevance of the document and the IEP process to their lives. Without exception, every parent believed that participating in the Student-Led IEP led to a growth in confidence in their children, including that it taught them how to 'own' their disability, and to an improvement in their communication skills. Teachers and counsellors who participated in the study felt the program was worthwhile and that it led to an increase in the students' ability to acknowledge their learning profile and to identify and request the accommodations (such as extended time or use of a calculator) they need to overcome or work around their disability. The implications for further research are many, and include an examination of the degree to which participation in S.A.M.E. fosters student achievement, the long-term effects of participation in the program, and the degree to which student participation in the Student-Led IEP meeting increases parents' level of understanding and involvement.

Keywords: self-advocacy, learning disabilities, ADHD, student-led IEP process

Procedia PDF Downloads 45
744 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. The models are evaluated using the external dataset for validation, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
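
The sketch below shows the general shape of the transfer-learning classifier described: a pre-trained DenseNet201 backbone with a small dense head for the three classes. The input size, head width, and training settings are illustrative assumptions, and the autoencoder stage mentioned in the abstract is omitted here for brevity.

```python
# Hedged sketch: DenseNet201 transfer-learning classifier for three classes
# (COVID-19 / pneumonia / normal). Head layers and sizes are assumptions.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # transfer learning: freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / normal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
```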

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 53
743 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms intend to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it, and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions from Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea that algorithmic design is associated only with efficiency or fitness, while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first, synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 130
742 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (better roughness with cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
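
As a minimal sketch of the Taguchi-style analysis step, the example below fits a factorial model of surface roughness against the three cutting parameters and runs an ANOVA on it. The factor levels and responses are synthetic placeholders, not the measured grinding data.

```python
# Hedged sketch: ANOVA of cutting parameters (feed, speed, depth of cut) against
# surface roughness Ra. The synthetic data stand in for the measured values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
levels = [-1, 0, 1]                 # coded low / mid / high levels
rows = []
for feed in levels:
    for speed in levels:
        for depth in levels:
            roughness = 1.2 + 0.25 * feed - 0.15 * speed + 0.10 * depth + rng.normal(0, 0.05)
            rows.append({"feed": feed, "speed": speed, "depth": depth, "Ra": roughness})
data = pd.DataFrame(rows)

model = smf.ols("Ra ~ C(feed) + C(speed) + C(depth)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # which factors dominate the roughness
```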

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 144
741 Feasibility Study and Experiment of On-Site Nuclear Material Identification in Fukushima Daiichi Fuel Debris by Compact Neutron Source

Authors: Yudhitya Kusumawati, Yuki Mitsuya, Tomooki Shiba, Mitsuru Uesaka

Abstract:

After the Fukushima Daiichi nuclear power reactor incident, there is a large amount of unaccounted-for nuclear fuel debris in the reactor core area, which is subject to safeguards and criticality safety concerns. Before the actual precise analysis is performed, preliminary on-site screening and mapping of nuclear debris activity need to be performed to provide reliable data for nuclear debris mass-extraction planning. Through a collaboration project with the Japan Atomic Energy Agency, an on-site nuclear debris screening system using dual-energy X-ray inspection and neutron energy resonance analysis has been established. By using a compact and mobile pulsed neutron source constructed from a 3.95 MeV X-band electron linac, coupled with tungsten as an electron-to-photon converter and beryllium as a photon-to-neutron converter, short-distance neutron time-of-flight measurement can be performed. Experimental results show that this system can measure the neutron energy spectrum up to the 100 eV range with only a 2.5-meter time-of-flight path, owing to the X-band accelerator's short pulse. With this, on-site neutron time-of-flight measurement can be used to identify the nuclear debris isotope contents through Neutron Resonance Transmission Analysis (NRTA). Some preliminary NRTA experiments have been done with a tungsten sample as a dummy nuclear debris material, whose isotope Tungsten-186 has an energy absorption value close to that of Uranium-238 (15 eV). The results obtained show that this system can detect energy absorption in the resonance neutron region within 1-100 eV. It can also detect multiple elements in a material at once, as demonstrated by an experiment using a combined sample of indium, tantalum, and silver, which makes it feasible to identify debris containing mixed materials. This compact neutron time-of-flight measurement system is a useful complement to the dual-energy X-ray computed tomography (CT) method, which can identify atomic number quantitatively but with 1-mm spatial resolution and a high error bar. The combination of these two measurement methods will be able to perform on-site nuclear debris screening at the Fukushima Daiichi reactor core area, providing the data for nuclear debris activity mapping.
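
The sketch below converts neutron time of flight over the stated 2.5 m path into kinetic energy with the standard non-relativistic relation, which is adequate in the 1-100 eV resonance range discussed above. The example flight times are illustrative, not measured values from the experiment.

```python
# Hedged sketch: neutron time-of-flight to kinetic energy over a 2.5 m path.
# E = (1/2) m (L / t)^2; example flight times are illustrative.
import numpy as np

NEUTRON_MASS_KG = 1.674927498e-27
EV_PER_JOULE = 1.0 / 1.602176634e-19
FLIGHT_PATH_M = 2.5

def tof_to_energy_ev(time_s):
    """Kinetic energy in eV for a neutron covering the flight path in time_s."""
    velocity = FLIGHT_PATH_M / np.asarray(time_s)
    return 0.5 * NEUTRON_MASS_KG * velocity**2 * EV_PER_JOULE

for t_us in (25.0, 60.0, 180.0):          # example flight times in microseconds
    print(f"t = {t_us:6.1f} us -> E = {tof_to_energy_ev(t_us * 1e-6):7.2f} eV")
```

With these example times the energies span roughly 1 eV to 50 eV, illustrating why a short pulse and a 2.5 m path are sufficient to resolve the epithermal resonance region.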

Keywords: neutron source, neutron resonance, nuclear debris, time of flight

Procedia PDF Downloads 226
740 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. Further, this numerical simulation technique is validated against accessible practical values in comparison with the predicted local thermal resistances. The roughness of this mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a proportional effect on the minor and major frictional losses, mostly at very small Reynolds numbers of circa 60-80. At these low Reynolds numbers, the viscosity and the minute frictional losses decrease as the temperature of the viscous liquids is increased. It is inferred that the three equations and models identified, which supported the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield the utmost reliability for engineering and technology calculations for turbulence-impacting jets in the near future. In the search for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be central to this problem, though other physical factors related to these equations must be checked to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; however, this results in dropping certain assumptions in the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
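
For the low-Reynolds-number regime mentioned above (circa 60-80), the sketch below computes the Reynolds number, the laminar Darcy friction factor, and the resulting pressure drop from standard relations. The fluid properties and channel geometry are illustrative assumptions, not the cases simulated in the study.

```python
# Hedged sketch: laminar mini-channel friction from standard relations
# (Re = rho*v*D/mu, f = 64/Re, Darcy-Weisbach pressure drop).
# Fluid properties and geometry are illustrative assumptions.
def reynolds_number(rho, velocity, diameter, mu):
    return rho * velocity * diameter / mu

def darcy_friction_factor_laminar(re):
    return 64.0 / re            # fully developed laminar flow in a circular channel

def pressure_drop(f, length, diameter, rho, velocity):
    """Darcy-Weisbach: dp = f * (L/D) * rho * v^2 / 2."""
    return f * (length / diameter) * rho * velocity**2 / 2.0

# Example: deionized water in a 1 mm channel at a low Reynolds number.
rho, mu = 998.0, 1.0e-3                 # kg/m^3, Pa*s (water near room temperature)
d, length, v = 1.0e-3, 0.05, 0.07       # m, m, m/s

re = reynolds_number(rho, v, d, mu)
f = darcy_friction_factor_laminar(re)
print(f"Re = {re:.1f}, f = {f:.3f}, dp = {pressure_drop(f, length, d, rho, v):.1f} Pa")
```

With these assumed values the Reynolds number lands near 70, inside the 60-80 band discussed, and the f = 64/Re relation makes explicit why higher temperature (lower viscosity) reduces the frictional losses.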

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 162
739 Statistical Design of Central Point to Evaluate the Combination of pH and Cinnamon Essential Oil on the Antioxidant Activity Using the ABTS Technique

Authors: H. Minor-Pérez, A. M. Mota-Silva, S. Ortiz-Barrios

Abstract:

Substances of vegetable origin with antioxidant capacity have a high potential for application in the preservation of some foods; for example, they can prevent or reduce the oxidation of lipids. However, a food is a complex system whose wide variety of components can reduce or eliminate this antioxidant capacity. The antioxidant activity can be determined with the ABTS technique. The radical ABTS+ is generated from 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS). This radical is a bluish-green compound, stable and with an absorption spectrum in the UV-visible range. The addition of antioxidants causes discoloration, a value that can be reported as a percentage of inhibition of the cation radical ABTS+. The objective of this study was to evaluate the effect of the combination of pH and cinnamon essential oil (EOC) on the inhibition of the radical ABTS+, using a statistical central point design (Design Expert) to obtain mathematical models that describe this phenomenon. Seventeen treatments were evaluated, combining pH 5, 6, and 7 (citrate-phosphate buffer) with cinnamon essential oil concentrations (C) of 0 µg/mL, 100 µg/mL, and 200 µg/mL. The samples were analyzed using the ABTS technique. The reagent was dissolved in 80% methanol to standardize the absorbance to 0.7 ± 0.1 at 754 nm. The samples were then mixed with the standardized ABTS reagent, and the absorbance of each treatment was read at 754 nm after 1 min and 7 min. A standard curve with vitamin C was used, and the values were reported as inhibition (%) of the radical ABTS+. The statistical analysis shows that the experimental results fit a quadratic model at both 1 min and 7 min. This model describes the influence of the factors investigated independently, pH and cinnamon essential oil (µg/mL), the effect of the interaction pH*C, as well as the squared terms pH^2 and C^2. The model obtained at 1 min was Y = 10.33684 - 3.98118*pH + 1.17031*C + 0.62745*pH^2 - 3.26675*10^-3*C^2 - 0.013112*pH*C, where Y is the response variable; the coefficient of determination was 0.9949. The equation obtained at 7 min was Y = -10.89710 + 1.52341*pH + 1.32892*C + 0.47953*pH^2 - 3.56605*10^- *C^2 - 0.034687*pH*C; the coefficient of determination was 0.9970. This means that less than 1% of the total variation is not explained by the developed models. At 100 µg/mL of EOC, inhibition percentages of 80%, 84%, and 97% were obtained for pH values of 5, 6, and 7, respectively, while at 200 µg/mL the inhibition (%) was very similar across treatments; at these pH values an inhibition close to 97% was obtained. In conclusion, the pH does not have a significant effect on the antioxidant capacity, while the concentration of EOC was decisive for the antioxidant capacity. The authors acknowledge the funding provided by CONACYT for project 131998.
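
For illustration, the sketch below evaluates the reported 1-min quadratic model as a function of pH and EOC concentration C. The coefficients are transcribed from the abstract; treat the outputs as evaluations of that fitted surface, not as re-derived or measured results, and note that values outside the tested ranges (pH 5-7, C 0-200 µg/mL) would be extrapolation.

```python
# Hedged sketch: evaluating the reported 1-min response-surface model for
# ABTS+ inhibition versus pH and cinnamon essential oil concentration C (µg/mL).
# Coefficients are transcribed from the abstract above.
def inhibition_1min(ph: float, c: float) -> float:
    return (10.33684
            - 3.98118 * ph
            + 1.17031 * c
            + 0.62745 * ph**2
            - 3.26675e-3 * c**2
            - 0.013112 * ph * c)

for ph in (5, 6, 7):
    print(f"pH {ph}, C = 100 µg/mL -> model-predicted inhibition ~ {inhibition_1min(ph, 100):.1f} %")
```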

Keywords: antioxidant activity, ABTS technique, essential oil of cinnamon, mathematical models

Procedia PDF Downloads 393
738 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India

Authors: Harshan Tee Pee

Abstract:

Climate elements include surface temperature, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. A change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts of the world while also leading to increased rainfall in others. Drought is a slowly advancing disaster and a creeping phenomenon, accumulating slowly over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. Drought conditions occur at least every three years in India, which is among the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge as a result of the disruption of many economic activities. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (the drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that the cultivated area as well as the number of cultivating households reduced after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops depended on the negative income from these crops in the previous agricultural season. All the landholding categories of households except landlords had negative income in the drought year, and the income disparities between the households were also higher in that year. In the drought year, the cost of cultivation was higher for all the landholding categories due to the increased cost of irrigation and inputs. In the drought year, agricultural products (50 per cent of the total products) were used for household consumption rather than being sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive to the people due to the risk involved, and people moved to lower-risk livelihoods for their sustenance.

Keywords: climate change, drought, agriculture economics, disaster impact

Procedia PDF Downloads 103
737 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers' hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or 'levels', one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot's key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
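
Peridot's own syntax is not given in the abstract, so the sketch below only illustrates the general idea of a user-written optimization as a program transformation, using a tiny Python term rewriter as a stand-in. The expression types and rewrite rules are illustrative assumptions, not Peridot code or its metaprogramming API, and the sketch uses plain recursion rather than the unification and non-determinism the abstract attributes to Peridot's meta level.

```python
# Hedged sketch: an optimization expressed as a user-defined rewrite over an
# expression tree (constant folding plus simple algebraic identities).
# This is a stand-in illustration, not Peridot syntax.
from dataclasses import dataclass
from typing import Union

Expr = Union["Add", "Mul", int, str]

@dataclass(frozen=True)
class Add:
    left: Expr
    right: Expr

@dataclass(frozen=True)
class Mul:
    left: Expr
    right: Expr

def simplify(e: Expr) -> Expr:
    """Bottom-up rewriting: fold constants and remove identity operations."""
    if isinstance(e, Add):
        l, r = simplify(e.left), simplify(e.right)
        if isinstance(l, int) and isinstance(r, int):
            return l + r                      # constant folding
        if l == 0:
            return r                          # 0 + x -> x
        if r == 0:
            return l                          # x + 0 -> x
        return Add(l, r)
    if isinstance(e, Mul):
        l, r = simplify(e.left), simplify(e.right)
        if isinstance(l, int) and isinstance(r, int):
            return l * r                      # constant folding
        if l == 1:
            return r                          # 1 * x -> x
        if r == 1:
            return l                          # x * 1 -> x
        if l == 0 or r == 0:
            return 0                          # 0 * x -> 0
        return Mul(l, r)
    return e                                  # variables and literals are left alone

print(simplify(Add(Mul(1, "x"), Add(2, 3))))  # Add(left='x', right=5)
```

In the design described above, such rules would live in a library's metaprogramming level rather than in the compiler, which is precisely what lets library authors ship domain-specific optimizations alongside their abstractions.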

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 73
736 Music Education is Languishing in Rural South African Schools as Revealed Through Education Students

Authors: E. N. Jansen van Vuuren

Abstract:

When visiting Foundation Phase (FP) students during their teaching practice at schools in rural Mpumalanga, the lack of music education is evident through the absence of musical sounds, with the exception of a limited repertoire of songs that are sung by all classes everywhere you go. The absence of music teaching resources such as posters and music instruments adds to the perception that generalist teachers in the FP are not teaching music. Pre-service students also acknowledge that they have never seen a music class being taught during their teaching practice visits at schools. This lack of music mentoring impacts the quality of teachers who are about to enter the workforce and ultimately results in the perpetuation of no music education in many rural schools. The situation in more affluent schools presents a contrasting picture, with music education being given a high priority and generalist teachers often being supported by music specialists paid for by the parents. When student teachers start their music course, they have limited knowledge to use as a foundation for their studies. The aim of the study was to ascertain the music knowledge that students gained throughout their school careers so that the curriculum could be adapted to suit their needs. By knowing exactly what pre-service teachers know about music, the limited tuition time at tertiary level can be used in the most suitable manner and concentrated on filling the knowledge gaps. Many scholars write about the decline of music education in South African schools and mention reasons, but the exact music knowledge void amongst students does not feature in these studies. Knowing the parameters of students' music knowledge will empower lecturers to restructure their curricula to meet the needs of pre-service students. The research question asks, "What is the extent of the music void amongst rural pre-service teachers in a B.Ed. FP course at an African university?" This action research was done using a pragmatic paradigm and a mixed methodology. First-year students in the cohort studying for a B.Ed. in FP were requested to complete an online baseline assessment to determine the status quo. This assessment was compiled using the CAPS music content for Grades R to 9. The data were sorted using the elements of music as a framework. Findings indicate that students do not have a suitable foundation in music education despite supposedly having had music tuition from Grade R to Grade 9. Knowing the content required to fill the lack of knowledge provides academics with valuable information to amend their curricula and to ensure that future teachers will be able to provide rural learners with the same foundations in music as those received by learners in more affluent schools. It is only then that the rich music culture of the African continent will thrive.

Keywords: generalist educators, music education, music curriculum, pre-service teachers

Procedia PDF Downloads 56
735 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning

Authors: Leigh Ann Wilson, Melanie Borrego

Abstract:

The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently in every section and by every instructor. Each year, the 12-member General Education Team (GET), as part of its work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course, to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as: contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year, for each applicable signature assignment. The commitment for calibration teams consists of attending two virtual meetings, each lasting up to three hours. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.

Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments

Procedia PDF Downloads 270
734 Prevalence of CYP2D6 and Its Implications for Personalized Medicine in Saudi Arabs

Authors: Hamsa T. Tayeb, Mohammad A. Arafah, Dana M. Bakheet, Duaa M. Khalaf, Agnieszka Tarnoska, Nduna Dzimiri

Abstract:

Background: CYP2D6 is a member of the cytochrome P450 mixed-function oxidase system. The enzyme is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, especially in breast cancer and psychiatric therapy. Different phenotypes have been described, displaying alleles that lead to a complete loss of enzyme activity or reduced function (poor metabolizers – PM) or to hyperfunctionality (ultrarapid metabolizers – UM), and therefore to drug intoxication or loss of drug effect. The prevalence of these variants may vary among different ethnic groups. Furthermore, the xTAG system has been developed to categorize all patients into different groups based on their CYP2D6 substrate metabolism. Aim of the study: To determine the prevalence of the different CYP2D6 variants in our population and to evaluate their clinical relevance in personalized medicine. Methodology: We used the Luminex xMAP genotyping system to genotype 305 Saudi individuals visiting the Blood Bank of our institution and determine which polymorphisms of the CYP2D6 gene are prevalent in our region. Results: xTAG genotyping showed that 36.72% (112 out of 305 individuals) carried CYP2D6_*2. Of these 112 individuals, 19 (6.23% of the total sample) had multiple copies of the *2 allele, resulting in a UM phenotype. About 33.44% carried CYP2D6_*41, which leads to decreased activity of the CYP2D6 enzyme. 19.67% had the wild-type alleles and thus normal enzyme function. Furthermore, 15.74% carried CYP2D6_*4, which is the most common nonfunctional form of the CYP2D6 enzyme worldwide. 6.56% carried CYP2D6_*17, resulting in decreased enzyme activity. Approximately 5.73% carried CYP2D6_*10, consequently decreasing the enzyme activity and resulting in a PM phenotype. 2.30% carried CYP2D6_*29, leading to decreased metabolic activity of the enzyme, and 2.30% carried CYP2D6_*35, resulting in a UM phenotype. 1.64% had a whole-gene deletion (CYP2D6_*5), resulting in the loss of CYP2D6 enzyme production, and 0.66% carried the CYP2D6_*6 variant. One individual carried CYP2D6_*3(B), which produces an inactive form of the enzyme, resulting in a PM phenotype. Finally, one individual carried CYP2D6_*9, which decreases the enzyme activity. Conclusions: Our study demonstrates that different CYP2D6 variants are highly prevalent in ethnic Saudi Arabs. This finding sets a basis for informed genotyping for these variants in personalized medicine. The study also suggests that xTAG is an appropriate procedure for genotyping the CYP2D6 variants in personalized medicine.
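As a quick arithmetic check of the carrier frequencies quoted above (using only the counts stated in the abstract), a minimal Python sketch is:

```python
# Minimal arithmetic check of the two carrier frequencies for which the abstract
# gives explicit counts (the other percentages are reported without counts).
total = 305                  # genotyped Saudi individuals

carriers_star2 = 112         # CYP2D6*2 carriers
multi_copy_star2 = 19        # carriers with multiple *2 copies (UM phenotype)

print(f"CYP2D6*2 carriers:  {100 * carriers_star2 / total:.2f}%")    # 36.72%
print(f"multiple *2 copies: {100 * multi_copy_star2 / total:.2f}%")  # 6.23%
```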

Keywords: CYP2D6, hormonal breast cancer, pharmacogenetics, polymorphism, psychiatric treatment, Saudi population

Procedia PDF Downloads 565
733 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of the expansion joints according to various bridge design codes is rather inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely, Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets, which have peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g with an increment of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
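The abstract does not state the fragility-curve formulation; under the common assumption of lognormal component fragilities combined as a series system, the system fragility over the stated PGA grid could be sketched as follows (the medians and dispersions below are placeholders, not values from the study):

```python
# Illustrative sketch: lognormal component fragility curves combined under a
# series-system assumption, evaluated on the PGA grid used in the study
# (0.1 g to 1.0 g, step 0.05 g). Medians and dispersions are placeholders.
import numpy as np
from scipy.stats import norm

pga = np.arange(0.10, 1.00 + 1e-9, 0.05)       # peak ground accelerations in g

def fragility(pga, median, beta):
    """P(demand/capacity ratio >= 1 | PGA) for a lognormal fragility curve."""
    return norm.cdf(np.log(pga / median) / beta)

# Hypothetical components: pier curvature ductility and bearing displacement
p_pier = fragility(pga, median=0.45, beta=0.5)
p_bearing = fragility(pga, median=0.60, beta=0.6)

# The system reaches the damage state if either component does (series assumption)
p_system = 1.0 - (1.0 - p_pier) * (1.0 - p_bearing)

for a, p in zip(pga, p_system):
    print(f"PGA = {a:.2f} g  ->  P(damage state) = {p:.3f}")
```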

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 423
732 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context

Authors: Eduardas Spiriajevas

Abstract:

Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage has been underway. It now affects the efficiency of implementing actions related to regional development policy and the geographic scope of the value added created in the regions. The demographic structures of human resources are reflected in the regions and their economic geographic environment. The reshaping of the economy and state reforms restructuring economic branches such as agriculture and industry have altered the economic significance of the services sector. These processes influence the competitiveness of the labor market and its demographic characteristics. The consequences are most visible in the structure of human migration, which has accelerated the demographic ageing of human resources in the regions, especially peripheral ones. These contemporary phenomena induce the demographic shrinkage of society and shape its economic geographic characteristics in regional development actions and in regional policy. Internal and external migration has deepened numerous regional economic disparities, influenced the territorial density and concentration of the country's population, and created spatial unevenness in a country as small and geographically compact as Lithuania. The territorial reshaping of the distribution of the population creates new regions and economic environments that do not correspond to the main principles of regional policy and its capacity to create well-being and promote attractiveness for economic development. These are new challenges for national regional policy, and they should be researched systematically, taking into consideration the analytical approaches of regional economics in the context of economic geographic research methods. A comparative territorial analysis based on the administrative division of Lithuania, combined with a retrospective approach and the method of location quotients, yields results of an economic geographic character, with cartographic representations produced using the spatial analysis tools provided by Geographic Information Systems technologies. Together, these research methods provide new, spatially evidence-based results, which must be taken into consideration when reshaping national regional policy in an economic geographic context. Due to demographic shrinkage and the increasing differentiation of economic development within the regions, an economic geographic dimension is indispensable. In order to sustain territorially balanced economic development, there is a need to strengthen the role of regional centers (towns) and to empower them with new economic functions for the revitalization of peripheral regions, and to increase their economic competitiveness and social capacities on a national scale.
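As a brief illustration of the location quotient method referred to above, a minimal Python sketch with invented employment figures (not Lithuanian data) is:

```python
# Illustrative location quotient (LQ) calculation; the employment figures are
# invented for demonstration and are not data from the study.
# LQ > 1 means the sector is relatively concentrated in the region.

national = {"agriculture": 80_000, "industry": 220_000, "services": 700_000}
region = {"agriculture": 9_000, "industry": 14_000, "services": 27_000}

national_total = sum(national.values())
region_total = sum(region.values())

for sector in national:
    lq = (region[sector] / region_total) / (national[sector] / national_total)
    print(f"{sector:12s} LQ = {lq:.2f}")
```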

Keywords: demographic shrinkage, economic geography, Lithuania, regions

Procedia PDF Downloads 147
731 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The classifier effectively resolves the effects of outliers, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
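A minimal sketch of two of the ingredients named above, the hyperbolic tangent kernel and a class-center/radius-based fuzzy membership, might look as follows in Python; the membership is computed in input space for simplicity, whereas the paper represents it with the kernel, and the parameter values are placeholders rather than those of PFRSVM:

```python
# Illustrative only: a hyperbolic tangent kernel and a fuzzy membership based
# on each sample's distance from its class center relative to the class radius,
# so that outliers contribute less to training. Parameters are placeholders.
import numpy as np

def tanh_kernel(x, y, a=0.01, b=-1.0):
    """Hyperbolic tangent (sigmoid) kernel K(x, y) = tanh(a * <x, y> + b)."""
    return np.tanh(a * np.dot(x, y) + b)

def fuzzy_memberships(X, y, delta=1e-3):
    """Membership in (0, 1] that decreases with distance from the class center."""
    m = np.empty(len(X))
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        center = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - dist / (dist.max() + delta)
    return m

X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [3.0, 3.0], [2.9, 3.2], [10.0, 10.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(fuzzy_memberships(X, y))   # samples far from their class center get low weights
print(tanh_kernel(X[0], X[3]))
```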

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 480
730 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful for assessing the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed by the emergency clinicians to have non-specific complaints or diagnoses were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into color-coded zones as a visual cue for the serious implications of certain abnormalities. An infant is regarded as SPOC positive when fulfilling the criteria of one red zone or two yellow zones, and the attending clinician would then be prompted to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001). The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%); however, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has been applied as a useful clinical tool in clinical practice to help identify and manage young sick infants in the ED effectively.
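The predictive values quoted above follow directly from the reported sensitivity, specificity, and prevalence; a short Python check reproducing the abstract's figures (up to rounding of the inputs) is:

```python
# Recomputing the positive and negative predictive values from the sensitivity,
# specificity and prevalence reported in the abstract (inputs as quoted).
sens, spec, prev = 0.565, 0.841, 0.174

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV = {ppv:.1%}")   # approximately 42.8%
print(f"NPV = {npv:.1%}")   # approximately 90.2%
```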

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 236
729 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within about 200 ms, adults have integrated the accessibility and the syntactic constraints, while reducing cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they may fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7- to 10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses; the latter follow a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is replaced by a boy, such as ‘Peter’; the pronoun in the second sentence then unambiguously refers to one of the characters due to its syntactic constraints. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm displayed two characters, of which one was the referent (the target) and the other the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) measure. The data will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways. Firstly, while-clauses have not been studied much, and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, and this study will therefore contribute to both the adult and the child pronoun resolution literature. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages in which it has been investigated.
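As an illustration of how the online (fixation) measure from a visual world paradigm is typically summarised before modelling, the following Python sketch bins eye-tracking samples and computes per-bin proportions of looks to the target versus the competitor; the column names, bin size, and data are assumptions for demonstration, not details of this study:

```python
# Illustrative preprocessing of visual-world eye-tracking samples: proportion of
# looks to the target vs. the competitor in 50 ms time bins. Column names, bin
# size, and the toy data are assumptions, not details of this study.
import pandas as pd

samples = pd.DataFrame({
    "time_ms": [0, 10, 20, 30, 40, 50, 60, 70, 80, 90],
    "aoi": ["target", "competitor", "target", "target", "competitor",
            "target", "target", "target", "competitor", "target"],
})

bin_ms = 50
samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms

proportions = (
    samples.groupby("bin")["aoi"]
    .value_counts(normalize=True)
    .unstack(fill_value=0.0)
)
print(proportions)   # per-bin proportion of fixations on each area of interest
```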

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 89
728 Empirical Orthogonal Functions Analysis of Hydrophysical Characteristics in the Shira Lake in Southern Siberia

Authors: Olga S. Volodko, Lidiya A. Kompaniets, Ludmila V. Gavrilova

Abstract:

The method of empirical orthogonal functions is a method for analysing data with a complex spatial-temporal structure. This method allows us to decompose the data into a finite number of modes determined by empirically finding the eigenfunctions of the data correlation matrix. The modes have different scales and can be associated with various physical processes. The empirical orthogonal function method has been widely used for the analysis of hydrophysical characteristics, for example, the analysis of sea surface temperatures in the Western North Atlantic, ocean surface currents off North Carolina, and tropical wave disturbances. In this study, the method has been applied to the analysis of temperature and velocity measurements in the saline Lake Shira (Southern Siberia, Russia). Shira is a shallow lake with a maximum depth of 25 m. Lake Shira can be considered a closed water body because it has one small river providing inflow but no outflow. The main factor that causes the motion of the fluid is variable wind flows. In summer the lake is strongly stratified by temperature and salinity. Long-term measurements of temperatures and currents were conducted at several points during the summers of 2014-2015. The temperature was measured with an accuracy of 0.1 ºC. The data were analyzed using the real-valued version of the empirical orthogonal function method. The first empirical eigenmode accounts for 70-80% of the energy and can be interpreted as a temperature distribution with a thermocline. A thermocline is a thermal layer where the temperature decreases rapidly from the mixed upper layer of the lake to the much colder deep water. The higher-order modes can be interpreted as oscillations induced by internal waves. The current measurements were recorded using 600 kHz and 1200 kHz Acoustic Doppler Current Profilers. These data were analyzed using the complex-valued version of the empirical orthogonal function method. The first empirical eigenmode accounts for about 40% of the energy and corresponds to the Ekman spiral occurring in the case of a stationary homogeneous fluid. The other modes describe effects associated with the stratification of the fluid. The second and subsequent empirical eigenmodes were associated with dynamical modes. These modes were obtained for a simplified model of an inhomogeneous three-layer fluid at a water site with a flat bottom.
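A minimal sketch of the real-valued EOF decomposition described above, computed from the eigenvectors of the data correlation matrix (synthetic data, not the Lake Shira measurements), is:

```python
# Illustrative real-valued EOF analysis: the eigenvectors of the spatial
# correlation matrix are the EOF modes; projecting the anomalies onto them
# gives each mode's time series. Synthetic data, not the Lake Shira records.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_points = 500, 8                      # time steps x measurement points
data = rng.standard_normal((n_time, n_points))
signal = 2.0 * np.sin(np.linspace(0.0, 20.0, n_time))
data[:, :4] += signal[:, None]                 # a shared signal creates one dominant mode

anomalies = data - data.mean(axis=0)
corr = np.corrcoef(anomalies, rowvar=False)    # spatial correlation matrix

eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]              # sort modes by explained variance
eigvals, eofs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
pcs = anomalies @ eofs                         # temporal amplitudes of each mode

print("variance share of first three modes:", np.round(explained[:3], 3))
print("leading mode amplitude, first 5 steps:", np.round(pcs[:5, 0], 2))
```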

Keywords: Ekman spiral, empirical orthogonal functions, data analysis, stratified fluid, thermocline

Procedia PDF Downloads 124
727 Exploring the Nexus of Gastronomic Tourism and Its Impact on Destination Image

Authors: Usha Dinakaran, Richa Ganguly

Abstract:

Gastronomic tourism has evolved into a prominent niche within the travel industry, with tourists increasingly seeking unique culinary experiences as a primary motivation for their journeys. This research explores the intricate relationship between gastronomic tourism and its profound influence on the overall image of travel destinations. It delves into the multifaceted aspects of culinary experiences, tourists' perceptions, and the preservation of cultural identity, all of which play pivotal roles in shaping a destination's image. The primary aim of this study is to comprehensively examine the interplay between gastronomy and tourism, specifically focusing on its impact on destination image. The research seeks to achieve the following objectives: (1) investigate how tourists perceive and engage with gastronomic tourism experiences; (2) understand the significance of food in shaping the tourism image; (3) explore the connection between gastronomy and the destination's cultural identity; and (4) quantify the relationship between tourists' engagement in co-creation activities related to gastronomic tourism and their overall satisfaction with the quality of their culinary experiences. To achieve these objectives, a mixed-method research approach will be employed, including surveys, interviews, and content analysis. Data will be collected from tourists visiting diverse destinations known for their culinary offerings. This research anticipates uncovering valuable insights into the nexus between gastronomic tourism and destination image. It is expected to shed light on how tourists' perceptions of culinary experiences impact their overall perception of a destination. Additionally, the study aims to identify factors influencing tourist satisfaction and how cultural identity is preserved and promoted through gastronomic tourism. The findings of this research hold practical implications for destination marketers and stakeholders. Understanding the symbiotic relationship between gastronomy and tourism can guide the development of more targeted marketing strategies. Furthermore, promoting co-creation activities can enhance tourists' culinary experiences and contribute to the positive image of destinations. This study contributes to the growing body of knowledge regarding gastronomic tourism by consolidating insights from various studies and offering a comprehensive perspective on its impact on destination image. It offers a platform for future research in this domain and underscores the importance of culinary experiences in contemporary travel. In conclusion, this research endeavors to illuminate the dynamic interplay between gastronomic tourism and destination image, providing valuable insights for both academia and industry stakeholders in the field of tourism and hospitality.

Keywords: gastronomy, tourism, destination image, culinary

Procedia PDF Downloads 66