Search results for: optimized criteria
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4215

225 Self-Evaluation of the Foundation English Language Programme at the Center for Preparatory Studies Offered at the Sultan Qaboos University, Oman: Process and Findings

Authors: Meenalochana Inguva

Abstract:

The context: The Center for Preparatory Studies (CPS) is one of the strongest and most vibrant academic teaching units of Sultan Qaboos University (SQU). The Foundation Programme English Language (FPEL) is part of a larger foundation programme implemented at SQU in fall 2010. The programme has been designed to prepare students accepted into the university to achieve the required educational goals (the learning outcomes) for the English language component, drafted according to the Oman Academic Standards published by the Omani Authority for Academic Accreditation (OAAA). The curriculum: At the CPS, the English language curriculum is based on the learning outcomes drafted for each level. These learning outcomes guide students in meeting what is expected of them by the end of each level. The six levels are progressive in nature and are seen as a continuum. The study: Periodic evaluation of language programmes is necessary to improve their quality and to meet their set goals. An evaluation may be carried out internally or externally, depending on the purpose and context. A self-study programme was initiated at the beginning of the spring 2015 semester with a team of 11 members who worked within their assigned course areas (level- and programme-specific). Only areas specific to the FPEL were included in the study. The study was divided into smaller tasks, and members focused on their assigned courses. The self-study primarily focused on analyzing the programme LOs, curriculum planning, the materials used, and their relevance against the GFP exit standards. The review team also examined the assessment methods and procedures followed to reflect student learning, paying attention to standard criteria for assessment and transparency in procedures.
Special attention was paid to the staging of LOs across levels to determine students' language and study-skills readiness to cope with higher-level courses. Findings: The findings showed that most of the LOs are met through the materials used for teaching. Students score low on objective tests and high on subjective tests. Motivated students take advantage of academic support activities, while others do not use them to their advantage. Reading should be allotted more hours. In listening, the format of the listening materials in CT 2 does not match the test format. Some of the course materials need revision, e.g., APA citation and referencing. No specific time is allotted for teaching grammar. Conclusion: The findings resulted in actions to bridge the gaps. They will also help the center be better prepared for the external review of its FPEL curriculum and will provide a useful base for preparing the self-study portfolio for GFP standards assessment and future audits.

Keywords: curriculum planning, learning outcomes, reflections, self-evaluation

Procedia PDF Downloads 226
224 Association of Zinc with New Generation Cardiovascular Risk Markers in Childhood Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Zinc is a vital element required for growth and development, a fact that makes it particularly important for children. It maintains normal cellular structure and functions. This essential element appears to have protective effects against coronary artery disease and cardiomyopathy. Higher serum zinc levels are associated with a lower risk of cardiovascular diseases (CVDs), and there is a significant association between low serum zinc levels and heart failure. Zinc may therefore be a potential biomarker of cardiovascular health. High-sensitivity cardiac troponin T (hs-cTnT) and cardiac myosin binding protein C (cMyBP-C) are new generation markers used for prediagnosis, diagnosis, and prognosis of CVDs. The aim of this study is to determine zinc and new generation cardiac marker profiles in children with normal body mass index (N-BMI), obese (OB) children, morbidly obese (MO) children, and children with metabolic syndrome (MetS) findings, and to investigate the associations among them. Four study groups were constituted. The study protocol was approved by the institutional Ethics Committee of Tekirdag Namik Kemal University, and parents of the participants filled in informed consent forms to participate in the study. Group 1 comprised 44 children with N-BMI; Groups 2 and 3 comprised 43 OB and 45 MO children, respectively; and 45 MO children with MetS findings were included in Group 4. World Health Organization age- and sex-adjusted BMI percentile tables were used to constitute the groups; the percentile ranges were 15-85, 95-99, and above 99 for N-BMI, OB, and MO, respectively. Criteria for MetS findings were determined. Routine biochemical analyses, including zinc, were performed. Hs-cTnT and cMyBP-C concentrations were measured by kits based on the enzyme-linked immunosorbent assay principle. Appropriate statistical tests in SPSS were used for the evaluation of the study data, and p<0.05 was accepted as statistically significant.
The four groups were matched for age and gender. Decreased zinc concentrations were measured in Groups 2, 3, and 4 compared to Group 1. The groups did not differ from one another in terms of hs-cTnT. There were statistically significant differences between the cMyBP-C levels of the MetS group and those of the N-BMI and OB groups, with an increasing trend going from the N-BMI group to the MetS group. There were statistically significant negative correlations between zinc and both hs-cTnT and cMyBP-C concentrations in the MetS group. In conclusion, the inverse correlations detected between zinc and the new generation cardiac markers (hs-cTnT and cMyBP-C) point out that decreased levels of this physiologically essential trace element accompany increased levels of hs-cTnT and cMyBP-C in children with MetS. This finding emphasizes that both zinc and these new generation cardiac markers may be evaluated as biomarkers of cardiovascular health during severe childhood obesity accompanied by MetS findings, and may also serve as messengers of future risk in the adulthood of children with MetS.
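The negative correlations reported above are presumably standard product-moment (Pearson) coefficients as computed in SPSS; a minimal sketch of the calculation, using invented zinc and hs-cTnT values purely for illustration (not the study's data), is:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: serum zinc falling while the cardiac marker rises
zinc = [95.0, 88.0, 80.0, 72.0, 65.0]      # e.g. ug/dL (made up)
hs_ctnt = [3.1, 3.6, 4.4, 5.0, 5.8]        # e.g. ng/L (made up)
print(round(pearson_r(zinc, hs_ctnt), 3))  # strongly negative, close to -1
```

A coefficient near -1 with p<0.05 is what the abstract describes as a statistically significant inverse correlation.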

Keywords: cardiac myosin binding protein-C, cardiovascular diseases, children, high sensitive cardiac troponin T, obesity

Procedia PDF Downloads 110
223 Clinical Response of Nuberol Forte® (Paracetamol 650 mg + Orphenadrine 50 mg) for Pain Management with Musculoskeletal Conditions in Routine Pakistani Practice (NFORTE-EFFECT)

Authors: Shahid Noor, Kazim Najjad, Muhammad Nasir, Irshad Bhutto, Abdul Samad Memon, Khurram Anwar, Tehseen Riaz, Mian Muhammad Hanif, Nauman A. Mallik, Saeed Ahmed, Israr Ahmed, Ali Yasir

Abstract:

Background: Musculoskeletal pain is the most common complaint presented to the health practitioner, and it is well known that untreated or under-treated pain can have a significant negative impact on an individual's quality of life (QoL). Objectives: This study was conducted across 10 sites in six (6) major cities of Pakistan to evaluate the tolerability, safety, and clinical response of Nuberol Forte® (Paracetamol 650 mg + Orphenadrine 50 mg) for musculoskeletal pain in routine Pakistani practice, and its impact on improving patients' QoL. Design & Methods: The NFORTE-EFFECT observational, prospective, multicenter study was conducted in compliance with Good Clinical Practice (GCP) guidelines and local regulatory requirements. The study sponsor was The Searle Company Limited, Pakistan. To maintain GCP compliance, the sponsor assigned a CRO for site and data management. Ethical approval was obtained from an independent ethics committee (IEC), which reviewed the progress of the study. Written informed consent was obtained from the study participants, and their confidentiality was maintained throughout the study. A total of 399 patients with known prescreened musculoskeletal conditions and pain who attended the study sites were recruited as per the inclusion/exclusion criteria (clinicaltrials.gov ID# NCT04765787). The recruited patients were then prescribed the Paracetamol (650 mg) and Orphenadrine (50 mg) combination (Nuberol Forte®) for 7 to 14 days at the investigator's discretion, based on pain intensity. After the initial screening (visit 1), a follow-up visit was conducted after 1-2 weeks of treatment (visit 2). Study Endpoints: The primary objective was to assess the pain-management response to Nuberol Forte treatment and the overall safety of the drug. The Visual Analogue Scale (VAS) was used to measure pain severity.
In addition to pain, the patients' health-related quality of life (HRQoL) was assessed using the Muscle, Joint Measure (MJM) scale, and safety was monitored from the patients' first dose. These assessments were done at each study visit. Results: Of the 399 enrolled patients, 49.4% were males and 50.6% were females, with a mean age of 47.24 ± 14.20 years. Most patients presented with knee osteoarthritis (OA), 148 (38%), followed by backache, 70 (18.2%). A significant reduction in the mean pain score was observed after treatment with the combination of Paracetamol and Orphenadrine (p<0.05). Furthermore, an overall improvement in the patients' QoL was also observed. During the study, only ten patients reported mild adverse events (AEs). Conclusion: The combination of Paracetamol and Orphenadrine (Nuberol Forte®) provided effective pain management in patients with musculoskeletal conditions and also improved their QoL.

Keywords: musculoskeletal pain, orphenadrine/paracetamol combination, pain management, quality of life, Pakistani population

Procedia PDF Downloads 169
222 A Generation Outside: Afghan Refugees in Greece 2003-2016

Authors: Kristina Colovic, Mari Janikian, Nikolaos Takis, Fotini-Sonia Apergi

Abstract:

A considerable number of Afghan asylum seekers in Greece are still waiting for answers about their future and about status for personal, social, and societal advancement. Most have been trapped in a stalemate of continuously postponed or only temporarily progressed stages of the EU/Greek asylum process. Limited quantitative research exists investigating the psychological effects of long-term displacement among Afghan refugees in Greece. The purpose of this study is to investigate factors that are associated with and predict psychological distress symptoms among this population. Data from a sample of native Afghan nationals (N > 70) living in Greece for approximately the last ten years will be collected from May to July 2016. Criteria for participation include being 18 years of age or older and emigration from Afghanistan to Greece from 2003 onwards (i.e., long-term refugees or part of the 'old system of asylum'). Snowball sampling will be used to recruit participants, as this is considered the most effective option when studying refugee populations. Participants will complete self-report questionnaires consisting of the Afghan Symptom Checklist (ASCL), a culturally validated measure of psychological distress; the World Health Organization Quality of Life scale (WHOQOL-BREF); an adapted version of the Comprehensive Trauma Inventory-104 (CTI-104); and a modified Psychological Acculturation Scale. All instruments will be translated into Greek through forward- and back-translations by bilingual speakers of English and Greek, following WHO guidelines. A pilot study with 5 Afghan participants will take place to check for discrepancies in understanding and to further adapt the instruments as needed. Demographic data, including age, gender, year of arrival in Greece, and current asylum status, will be explored.
Three types of analyses (descriptive statistics, bivariate correlations, and multivariate linear regression) will be used in this study. Descriptive findings for respondent demographics, psychological distress symptoms, traumatic life events, and quality of life will be reported. Zero-order correlations will assess the interrelationships among the demographic, traumatic life event, psychological distress, and quality of life variables. Lastly, a multivariate linear regression model will be estimated. The findings from the study will contribute to understanding the effects of acculturation, distress, and trauma on daily functioning for Afghans in Greece. The main implications of the current study will be to advocate for capacity building and to empower communities through effective program evaluation and design of mental health services for all refugee populations in Greece.
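The planned multivariate linear regression can be estimated by ordinary least squares; a minimal numpy sketch, with invented predictor and outcome arrays standing in for the survey variables (the data below are constructed, not from the study), is:

```python
import numpy as np

# Hypothetical data constructed so that distress = 0.5 + 0.3*trauma + 0.1*years
trauma = np.array([2.0, 5.0, 1.0, 7.0, 4.0, 6.0])   # trauma event counts (made up)
years = np.array([3.0, 9.0, 2.0, 12.0, 6.0, 10.0])  # years displaced (made up)
distress = 0.5 + 0.3 * trauma + 0.1 * years         # outcome, exact by construction

# Design matrix with an intercept column, then ordinary least squares
X = np.column_stack([np.ones_like(trauma), trauma, years])
beta, residuals, rank, _ = np.linalg.lstsq(X, distress, rcond=None)
print(beta)  # recovers [0.5, 0.3, 0.1] since the data are exactly linear
```

With real survey data the fit would not be exact, and the zero-order correlations mentioned above would guide which predictors enter the model.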

Keywords: Afghan refugees, evaluation, Greece, mental health, quality of life

Procedia PDF Downloads 288
221 Developing Motorized Spectroscopy System for Tissue Scanning

Authors: Tuba Denkceken, Ayse Nur Sarı, Volkan Ihsan Tore, Mahmut Denkceken

Abstract:

The aim of the presented study was to develop a new motorized spectroscopy system. The system is composed of a probe part and a motor part. The probe part consists of bioimpedance and fiber optic components: two platinum wires (each 25 micrometers in diameter) and two fiber cables (each 50 micrometers in diameter), respectively. The probe was examined on a tissue phantom (polystyrene microspheres of different diameters). In the bioimpedance part of the probe, current was transferred to the phantom and conductivity information was obtained. Two adjacent fiber cables were used in the fiber optic part of the system: light was transferred to the phantom by the fiber connected to the light source, and the backscattered light was collected with the adjacent fiber for analysis. It is known that the nucleus expands and the nucleus-to-cytoplasm ratio increases during cancer progression in the cell, and this is one of the most important criteria pathologists use when evaluating tissue. The sensitivity of the probe to particle (nucleus) size in the phantom was therefore tested during the study. Spectroscopic data obtained from the system on the phantom were evaluated by multivariate statistical analysis, yielding information about the particle size in the phantom. The bioimpedance and fiber optic experiments on polystyrene microspheres showed that the impedance value and the oscillation amplitude increased as the particle size enlarged, in agreement with previous studies. To motorize the system, three driver electronic circuits were designed first. In this part, supply capacitors were placed symmetrically near the supply inputs to balance the oscillation, and female connectors were attached to the control pins. Optic and mechanic switches were made. The drivers were designed so that they could command highly calibrated motors.
It was considered important to keep the drivers' dimensions as small as possible (4.4x4.4x1.4 cm). Three miniature step motors were then connected to each other along with the three drivers. Since spectroscopic techniques are quantitative methods, they yield more objective results than traditional ones. In the next part of this study, we plan to obtain spectroscopic data carrying both optic and impedance information from cell cultures of normal, low-metastatic, and high-metastatic breast cancer cells. If high sensitivity in differentiating the cells is achieved, it may become possible to scan large tissue surface areas in a short time with small steps. By means of the motorized feature of the system, no region of the tissue will be missed, and in this manner cancerous parts of the tissue can be diagnosed meticulously. This work is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) through 3001 project (115E662).

Keywords: motorized spectroscopy, phantom, scanning system, tissue scanning

Procedia PDF Downloads 191
220 Lactic Acid Solution and Aromatic Vinegar Nebulization to Improve Hunted Wild Boar Carcass Hygiene at Game-Handling Establishment: Preliminary Results

Authors: Rossana Roila, Raffaella Branciari, Lorenzo Cardinali, David Ranucci

Abstract:

The wild boar (Sus scrofa) population has increased strongly across Europe in recent decades, also causing severe fauna management issues. In central Italy, wild boar is the main hunted wild game species, with approximately 40,000 animals killed per year in the Umbria region alone. Game meat is characterized by high nutritional value as well as a peculiar taste and aroma, largely appreciated by consumers. This type of meat and products thereof can meet the current consumer demand for higher-quality foodstuffs, not only from a nutritional and sensory point of view but also in relation to environmental sustainability, the non-use of chemicals, and animal welfare. The game meat production chain nevertheless has some hygienic gaps: the harvest process is usually conducted in a wild environment where animals can more easily be contaminated during hunting and subsequent practices. The definition and implementation of a certified and controlled supply chain could ensure quality, traceability, and safety for the final consumer and therefore promote game meat products. According to European legislation, for some animal species, such as bovines, the use of weak acid solutions for carcass decontamination is envisaged in order to maintain optimal hygienic characteristics. A preliminary study was carried out to evaluate the applicability of similar strategies to control the hygienic level of wild boar carcasses. The carcasses, harvested according to the selective method and processed in the game-handling establishment, were treated by nebulization with two different solutions: a 2% food-grade lactic acid solution and aromatic vinegar. Swab samples were taken before treatment and at different times after treatment of the carcass surfaces, and were subsequently tested for Total Aerobic Mesophilic Load, Total Aerobic Psychrophilic Load, Enterobacteriaceae, Staphylococcus spp., and lactic acid bacteria.
The results for the targeted microbial populations showed a positive effect of the lactic acid solution on all the populations investigated, while the aromatic vinegar showed a weaker effect on bacterial growth. This study could lay the foundations for optimizing the use of a lactic acid solution to treat wild boar carcasses, aiming to guarantee a good hygienic level and the safety of the meat.

Keywords: game meat, food safety, process hygiene criteria, microbial population, microbial growth, food control

Procedia PDF Downloads 158
219 The Distribution and Environmental Behavior of Heavy Metals in Jajarm Bauxite Mine, Northeast Iran

Authors: Hossein Hassani, Ali Rezaei

Abstract:

Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Environmental protection against pollutants such as the heavy metals released by industries, mines, and modern technologies is a concern for both researchers and industry. In order to assess soil contamination, the distribution and environmental behavior of heavy metals were investigated. The Jajarm bauxite mine is one of the most important deposits discovered in Iran, with about 22 million tons of reserves; its main mineral is diaspore. To estimate the heavy metal content of the Jajarm bauxite mine area and to evaluate the pollution level, 50 samples were collected and analyzed for the heavy metals As, Cd, Cu, Hg, Ni, and Pb by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). In this study, evaluation criteria including the contamination factor (CF), average concentration (AV), enrichment factor (EF), and geoaccumulation index (GI) were determined to assess the risk of pollution from these heavy metals in the Jajarm bauxite mine. In the studied samples, the average recorded concentrations of arsenic, cadmium, copper, mercury, nickel, and lead were 18, 0.11, 12, 0.07, 58, and 51 mg/kg, respectively. Comparison of the average heavy metal concentrations and toxic potential in the samples with the world averages for uncontaminated soils showed that most elements are close to those values, while the averages of Pb and As are higher than the world averages.
The contamination factor for the studied elements was calculated on the basis of the soil background concentrations and categorized, with respect to the world average for uncontaminated soils, according to the Hakanson classification. The calculated modified degree of contamination for the average soil sample of the study area falls in the moderate range (1.55-2.0), both on the basis of the background values and of the world average values for uncontaminated soils. The contamination factor results show that, at some stations, the averages of lead and arsenic exceed the background values and that unnatural metal concentrations occur in the study area, attributable to the mining and mineral extraction process. The geoaccumulation index of the soil samples indicates that copper, nickel, cadmium, arsenic, lead, and mercury fall in the uncontaminated class. In general, the results indicate that, with respect to heavy metal pollution, the Jajarm bauxite mine is an uncontaminated area and that extracting the mineral from the mine does not create environmental hazards in the region.
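The indices named above follow standard definitions: the contamination factor CF = C/B, the enrichment factor EF = (C/C_ref)_sample / (B/B_ref)_background, and Muller's geoaccumulation index Igeo = log2(C / (1.5 B)). A minimal sketch of the calculations (the background value used in the example is a placeholder, not the study's):

```python
import math

def contamination_factor(conc, background):
    """CF = measured concentration / background concentration."""
    return conc / background

def enrichment_factor(conc, ref_conc, bg, bg_ref):
    """EF = (C_x / C_ref) in the sample over (B_x / B_ref) in the background,
    where the reference element (e.g. Al or Fe) normalizes for grain size."""
    return (conc / ref_conc) / (bg / bg_ref)

def geoaccumulation_index(conc, background):
    """Muller's Igeo; the factor 1.5 allows for natural lithogenic variation.
    Igeo <= 0 is conventionally classed as uncontaminated."""
    return math.log2(conc / (1.5 * background))

# Placeholder example: Pb at 51 mg/kg against an assumed background of 17 mg/kg
print(contamination_factor(51.0, 17.0))               # 3.0
print(round(geoaccumulation_index(51.0, 17.0), 3))    # log2(2.0) = 1.0
```

In the Hakanson scheme, CF values and the modified degree of contamination derived from them are then binned into classes such as the moderate range cited above.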

Keywords: enrichment factor, geoaccumulation index, heavy metals, Jajarm bauxite mine, pollution

Procedia PDF Downloads 290
218 What Is At Stake When Developing and Using a Rubric to Judge Chemistry Honours Dissertations for Entry into a PhD?

Authors: Moira Cordiner

Abstract:

After an Australian university approved a policy to improve the quality of its assessment practices, an academic developer (AD) with expertise in criterion-referenced assessment commenced in 2008. The four-year appointment was to support 40 'champions' in their Schools. This presentation is based on the experiences of a group of Chemistry academics who worked with the AD to develop and implement an honours dissertation rubric. Honours is a research year following a three-year undergraduate degree; if the standard of the student's work, mainly the dissertation, is high enough, then the student can commence a PhD. What became clear during the process was that much more was at stake than just the successful development and trial of the rubric, including academics' reputations, university rankings, and research outputs. Working with the champion Head of School (HOS) and the honours coordinator, the AD helped them adapt an honours rubric that she had helped create and trial successfully for another Science discipline. A year of many meetings and complex power plays between the two academics finally resulted in a version that was critiqued by the Chemistry teaching and learning committee. Accompanying the rubric was an explanation of the grading rules plus a list of supervisor expectations, to explain to students how the rubric was used for grading. Further refinements were made until all staff were satisfied. The rubric was trialled successfully in 2011, after which small changes were made; it was then adapted and implemented for Medicine honours, with the AD's help, in 2012. Despite coming to consensus about the statements of quality in the rubric, a few academics found it challenging to match these to the dissertations and allocate a grade. They had had no time to undertake training to do this, or to make overt the implicit criteria and standards that some admitted they were using ('I know what a first class is').
Other factors affected grading as well: in a small School where all supervisors knew each other and the students, friendships and collegiality were at stake if low grades were given; no external examiners were appointed, all were internal, with the potential for bias; supervisors' reputations were at stake if their students did not receive a good grade; the School's reputation was at risk if insufficient honours students qualified for PhD entry; and research output was jeopardised without enough honours students to work on supervisors' projects. A further complication during the study was a restructure of the university and retrenchments, with pressure to increase research output as world rankings assumed greater importance to senior management. In conclusion, much more was at stake than developing a usable rubric. The HOS had to be seen to champion the 'new' assessment practice while balancing institutional demands for increased research output and ensuring that as many honours dissertations as possible met high standards, so that eventually the percentage of PhD completions and the research output rose. It is therefore in the institution's best interest for this cycle to be maintained, as it affects rankings and reputations. In this context, are rubrics redundant?

Keywords: explicit and implicit standards, judging quality, university rankings, research reputations

Procedia PDF Downloads 336
217 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact is any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because of the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that allows better interpretation of the anatomical and pathological characteristics. This is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is applied, determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently solved by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution for image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
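The PSNR metric used for the quantitative evaluation is defined as 10·log10(MAX²/MSE), where MSE is the mean squared error against the reference image; a minimal sketch on flat pixel lists is:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float('inf')  # identical images: unbounded PSNR
    return 10.0 * math.log10(max_val ** 2 / mse)

clean = [10.0, 200.0, 30.0, 90.0]   # toy reference pixels
noisy = [12.0, 198.0, 33.0, 88.0]   # toy degraded pixels
print(round(psnr(clean, noisy), 2))  # higher dB means closer to the reference
```

A restoration algorithm improves the image when the PSNR of its output against the ground truth exceeds that of the corrupted input.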

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 190
216 Health and Performance Fitness Assessment of Adolescents in Middle Income Schools in Lagos State

Authors: Onabajo Paul

Abstract:

The testing and assessment of the physical fitness of school-aged adolescents in Nigeria have been going on for several decades. Originally, these tests focused strictly on identifying health and physical fitness status and comparing the results of adolescents with those of others. There is now considerable interest in health- and performance-related fitness of adolescents in which the results attained are compared with criteria representing positive health, rather than simply with the scores of others. Although physical education is studied in secondary schools and physical activities are encouraged, regular assessment of students' fitness levels and health status appears to be scarce or absent in these schools. The purpose of the study was to assess the health and performance fitness of adolescents in middle-income schools in Lagos State. A total of 150 students were selected using the simple random sampling technique. Participants were measured on hand grip strength, sit-ups, the PACER 20-meter shuttle run, standing long jump, weight, and height. The data collected were analyzed with descriptive statistics of means, standard deviations, and ranges, and compared with fitness norms. The majority, 111 (74.0%), of the adolescents achieved the healthy fitness zone, 33 (22.0%) were very lean, and 6 (4.0%) needed improvement according to the normative standard of the Body Mass Index test. For muscular strength, the majority, 78 (52.0%), were weak, 66 (44.0%) were normal, and 6 (4.0%) were strong according to the normative standard of the hand-grip strength test. For aerobic capacity fitness, the majority, 93 (62.0%), needed improvement and were at health risk, 36 (24.0%) achieved the healthy fitness zone, and 21 (14.0%) needed improvement according to the normative standard of the PACER test.
The largest group, 48 (32.0%), of the participants had good hip flexibility, 38 (25.3%) had fair status, 27 (18.0%) needed improvement, 24 (16.0%) had very good hip flexibility status, and 13 (8.7%) had excellent status. The largest group, 61 (40.7%), had average muscular endurance status, 30 (20.0%) had poor status, 29 (19.3%) had good status, 28 (18.7%) had fair muscular endurance status, and 2 (1.3%) had excellent status according to the normative standard of the sit-up test. For the standing long jump, 52 (34.7%) had low jump ability fitness, 47 (31.3%) had marginal fitness, 31 (20.7%) had good fitness, and 20 (13.3%) had high performance fitness according to the normative standard of the test. Based on the findings, it was concluded that the majority of the adolescents had a healthy Body Mass Index status and performed well in both the hip flexibility and muscular endurance tests, whereas the majority performed poorly in the aerobic capacity, muscular strength, and jump ability tests. It was recommended that, to enhance wellness, adolescents be involved in physical activities and recreation lasting 30 minutes three times a week, and that schools engage in regular fitness programs for students at both senior and junior levels, so as to develop good cardiorespiratory and muscular fitness and improve the students' overall health.
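The BMI underlying the first classification above is simply weight in kilograms divided by the square of height in meters; the healthy-fitness-zone decision then comes from WHO age- and sex-specific percentile tables, which are not reproduced here. A minimal sketch of the raw index:

```python
def bmi(weight_kg, height_m):
    """Body mass index: kg / m^2. Classifying an adolescent additionally
    requires WHO age- and sex-specific percentile tables (not included here)."""
    return weight_kg / height_m ** 2

# Example: a 60 kg adolescent who is 1.65 m tall
print(round(bmi(60.0, 1.65), 1))  # 22.0
```

The study's groups (very lean, healthy fitness zone, needs improvement) correspond to percentile bands of this value, not to fixed BMI cutoffs.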

Keywords: adolescents, health-related fitness, performance-related fitness, physical fitness

Procedia PDF Downloads 353
215 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review

Authors: Hanan Algarni

Abstract:

Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that offers patients feedback and open, random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, and Web of Science, as well as unpublished literature. Inclusion criteria: Participants: adults >18 years, stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of the included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's ROB tool. A narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour each. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. ROB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up.
Outcome measures, training intensity, and durations also varied across the studies. The grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation; however, it is important to note that grading was based on a small number of studies per outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training, compared to other interventions, on gait speed, dynamic balance skills, function, and participation directly after training. However, the effects were not sustained at follow-up (2 weeks-1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.

Keywords: virtual reality, treadmill, stroke, gait rehabilitation

Procedia PDF Downloads 274
214 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study

Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar

Abstract:

Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients to manage weight, treat the GORD, and also reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, the main criteria being class 3 obesity, or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period will be identified using the Trust's health records database. Pre-operative patient records will be used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information will be compared against NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of the initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had data available for BMI or to allow BMI to be estimated.
Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 kg/m² and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery, and were therefore also eligible for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, with an average age of 60.4 years. Conclusions: Based on this Trust's experience, around 4% of obese patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefitted more from bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and their GORD. Use of written patient notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis of outcomes following fundoplication in all patients, assessing the incidence of recurrent disease, will be undertaken to strengthen these conclusions.
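The eligibility screen applied in the study can be expressed as a small rule. This is a simplified sketch of the NICE criteria as described in the abstract (BMI ≥ 40 kg/m² qualifies automatically; BMI 35-40 kg/m² qualifies with a significant co-morbidity), not the full guideline.

```python
def bariatric_referral_eligible(bmi, significant_comorbidity=False):
    """Simplified sketch of the NICE referral criteria cited in the study:
    BMI >= 40 kg/m2 qualifies automatically; BMI 35-40 kg/m2 qualifies
    only alongside a significant co-morbidity expected to improve with
    weight loss (e.g. hypertension, osteoarthritis)."""
    if bmi >= 40:
        return True
    return bmi >= 35 and significant_comorbidity

# The study's five eligible patients fall into these two branches:
print(bariatric_referral_eligible(41.2))            # BMI > 40 alone
print(bariatric_referral_eligible(36.5, True))      # 35-40 plus co-morbidity
print(bariatric_referral_eligible(36.5, False))     # 35-40 alone: not eligible
```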

Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines

Procedia PDF Downloads 60
213 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and allow optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign from malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1: default (original) gray levels; N2: dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% percentile range. Up to 270 multiscale texture feature parameters per ROI, per normalization scheme, were computed using the well-known statistical methods implemented in the software. Since, from a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error combined with average correlation coefficients (POE+ACC).
We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as discriminative descriptors and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
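The classification-and-confusion-matrix step can be illustrated with a minimal leave-one-out 1-NN sketch on synthetic "texture features". This is a generic stand-in, assuming standardized feature vectors; the study's actual pipeline used MaZda features plus PCA/LDA/NDA reduction and ROC analysis, which are omitted here.

```python
import math
import random

def one_nn_loo(features, labels):
    """Leave-one-out 1-NN: each sample is labelled by its nearest other sample."""
    preds = []
    for i, x in enumerate(features):
        nearest = min((j for j in range(len(features)) if j != i),
                      key=lambda j: math.dist(x, features[j]))
        preds.append(labels[nearest])
    return preds

def metrics(labels, preds):
    """Sensitivity, specificity, and accuracy from the 2x2 confusion counts."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(labels)}

# Toy data mimicking the study's class sizes: 26 benign (0), 44 malignant (1),
# 10 standardized features each, with the classes well separated.
random.seed(0)
benign = [[random.gauss(0, 1) for _ in range(10)] for _ in range(26)]
malignant = [[random.gauss(4, 1) for _ in range(10)] for _ in range(44)]
X, y = benign + malignant, [0] * 26 + [1] * 44
print(metrics(y, one_nn_loo(X, y)))
```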

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 279
212 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators for measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement; non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is that they are calculated from accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) enterprise value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system, and poor production quality. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic.
The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the AMADEUS database (Analyse Major Databases from European Sources). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research: an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and through detailed sub-analyses by additional criteria, including company-specific characteristics such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard of customers' requirements.
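The two performance ratios used in the comparison are straightforward to compute. The figures below are illustrative placeholders, not drawn from the AMADEUS data.

```python
def performance_ratios(net_income, equity, total_assets):
    """Return on equity and return on assets, the two accounting-based
    measures the managers' survey responses were compared against."""
    return {"roe": net_income / equity, "roa": net_income / total_assets}

# Illustrative figures (thousand CZK) -- not taken from the AMADEUS sample.
print(performance_ratios(net_income=1200.0, equity=8000.0, total_assets=20000.0))
# -> {'roe': 0.15, 'roa': 0.06}
```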

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 222
211 The Relationship between Wasting and Stunting in Young Children: A Systematic Review

Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara

Abstract:

For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since the 2014 report to better understand the implications for improving child nutrition, health, and survival. Following PRISMA guidelines, this review was conducted using search terms describing the relationship between wasting and stunting. Studies from low- and middle-income countries relating to children under five that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. Forty-five studies met the inclusion criteria: 39 peer-reviewed studies, one manual chapter, three pre-print publications, and two published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment.
Wasting, stunting, and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children being the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to have an impact on a child's subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests there is a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, which demonstrates the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing, and research must become more unified.

Keywords: concurrent wasting and stunting, review, risk factors, undernutrition

Procedia PDF Downloads 127
210 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications

Authors: Annika J. Meyer, Tom Piechotta

Abstract:

Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensuring redundant communication when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight, and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology, a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of its low-power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the work explores the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg) specifically designed for search and rescue and area monitoring missions. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights, as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios: short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This use case also greatly reduces one of the major drawbacks of LoRa communication systems, namely that a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm.
Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. This staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, the close partnership with the Fire Department assures the real-world applicability of the communication system. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. The paper also seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular- and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. Its success could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
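The trade-off between short-range and long-range spreading factors mentioned above can be quantified with the standard LoRa time-on-air approximation from the Semtech SX127x datasheet. The parameter defaults here (125 kHz bandwidth, coding rate 4/5, 8-symbol preamble, CRC on) are illustrative assumptions, not the paper's actual configuration.

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, low_dr_opt=False):
    """Approximate LoRa time-on-air in ms (Semtech SX127x datasheet formula).
    sf: spreading factor 7-12; cr: coding-rate index (1 -> 4/5 ... 4 -> 4/8);
    CRC is assumed on."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                    # symbol time, ms
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    bits = 8 * payload_bytes - 4 * sf + 28 + 16 - 20 * ih
    n_payload = 8 + max(math.ceil(bits / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25) * t_sym + n_payload * t_sym

# A 20-byte command frame: short-range vs long-range spreading factor.
print(lora_airtime_ms(20, sf=7))                     # fast, short-range link
print(lora_airtime_ms(20, sf=12, low_dr_opt=True))   # slow, long-range link
```

The roughly 20x longer airtime at SF12 is the cost of the added link budget, which is why a swarm would reserve high spreading factors for long-range UAV-to-ground commands.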

Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations

Procedia PDF Downloads 43
209 System-Driven Design Process for Integrated Multifunctional Movable Concepts

Authors: Oliver Bertram, Leonel Akoto Chama

Abstract:

In today's civil transport aircraft, the design of flight control systems is based on experience gained from previous aircraft configurations, with a clear distinction between primary and secondary flight control functions for controlling the aircraft attitude and trajectory. Significant system improvements are now seen particularly in multifunctional moveable concepts, where the flight control functions are no longer considered separately but as integral, allowing new functions to be implemented to improve overall aircraft performance. However, the classical design process for flight controls is sequential and insufficiently interdisciplinary. In particular, the systems discipline is involved only rudimentarily in the early phase. In many cases, the task of systems design is limited to meeting the requirements of the upstream disciplines, which may lead to integration problems later. For this reason, an incremental development approach is required to reduce the risk of a complete redesign. Although the potential of, and path towards, multifunctional moveable concepts has been shown, the complete re-engineering of aircraft concepts with less classical moveable concepts carries considerable design risk due to the lack of design methods, representing an obstacle to major leaps in technology. This gap in the state of the art widens further if, in the future, unconventional aircraft configurations are to be considered, for which no reference data or architectures are available. This means that the experience-based approach used for conventional configurations is of limited use and not applicable to the next generation of aircraft. In particular, there is a need for methods and tools for rapid trade-offs between new multifunctional flight control system architectures.
To close this gap in the state of the art, an integrated, system-driven design process for multifunctional flight control systems of non-classical aircraft configurations is presented. The overall goal of the design process is to quickly find optimal solutions for single or combined target criteria within the very large solution space for the flight control system. In contrast to the state of the art, all disciplines are involved in a holistic design within an integrated, rather than sequential, process. To emphasize the systems discipline, this paper focuses on the methodology for designing moveable actuation systems within this integrated design process for multifunctional moveables. The methodology includes different approaches for creating system architectures and component design methods, as well as the process outputs necessary to evaluate the systems. An application example based on a reference configuration is used to demonstrate the process and validate the results. For this, new unconventional hydraulic and electrical flight control system architectures are calculated, which result from the higher requirements of the multifunctional moveable concept. In addition to typical key performance indicators such as mass and power requirements, the results are examined and discussed with regard to feasibility and wing integration aspects of the system components. This is intended to show how systems design can influence and drive the wing and overall aircraft design.

Keywords: actuation systems, flight control surfaces, multi-functional movables, wing design process

Procedia PDF Downloads 144
208 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge.
This does not imply that such parameters are easy to apply in knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria for determining AI's capability to achieve its ultimate objectives. Ultimately, we argue that our findings do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is impossible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 112
207 The Four Pillars of Islamic Design: A Methodology for an Objective Approach to the Design and Appraisal of Islamic Urban Planning and Architecture Based on Traditional Islamic Religious Knowledge

Authors: Azzah Aldeghather, Sara Alkhodair

Abstract:

In the modern urban planning and architecture landscape, with Western ideologies and styles becoming the mainstay of experience and definition globally, the Islamic world requires a methodology that defines its expression and transcends cultural, societal, and national styles. This paper proposes such a methodology: an objective system to define, evaluate, and apply traditional Islamic knowledge to Islamic urban planning and architecture, providing the Islamic world with a system through which to manifest its approach to design. The methodology is expressed as Four Pillars, based on the traditional meanings of Arabic words, roughly translated as Pillar One: The Principles (Al Mabade'), Pillar Two: The Foundations (Al Asas), Pillar Three: The Purpose (Al Ghaya), and Pillar Four: Presence (Al Hadara). Pillar One: The Principles expresses the unification (Tawheed) pillar of Islam, "There is no God but God", and comprises seven principles: 1. Human values (Qiyam Al Insan); 2. Universal language as sacred geometry; 3. Fortitude© and Benefitability©; 4. Balance and integration: conjoining the opposites; 5. Man, time, and place; 6. Body, mind, spirit, and essence; 7. Unity of design expression to achieve unity, harmony, and security in design. Pillar Two: The Foundations rests on two foundations: "Muhammad is the Prophet of God", and his relationship to the renaming of Medina as a prototypical city or place, which defines a central space for gathering, conjoined with an analysis of the Medina Charter as a base for humanistic design. Pillar Three: The Purpose (Al Ghaya) comprises four criteria: the naming of the design as a title, the intention of the design as an end goal, the reasoning behind the design, and the priorities of expression. Pillar Four: Presence (Al Hadara) is usually translated as 'civilization'; in Arabic, the root of Hadara means to be present.
Presence has five primary definitions utilized to express the act of design: Wisdom (Hikma) as a philosophical concept; Identity (Hawiya) of the form; Dialogue (Hiwar), the requirements of the project vis-a-vis what the designer wishes to convey; Expression (Al Ta'abeer) the designer wishes to apply; and the Resources (Mawarid) available. The proposal provides examples, where applicable, of past and present designs that exemplify the manifestation of the Pillars. The proposed methodology endeavors to return Islamic urban planning and architectural design to its a priori position as a leading design expression adaptable to any place, time, and cultural expression, while providing a base for analysis that transcends the concept of style and external form as a definition and expresses the singularity of the esoteric "spiritual" aspects in a rational, principled, and logical manner clearly addressed in Islam's essence.

Keywords: Islamic architecture, Islamic design, Islamic urban planning, principles of Islamic design

Procedia PDF Downloads 105
206 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding and k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract the relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, giving an initial sample of 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods.
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 Score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 Score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
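As an illustration only (not the authors' code), the encodings compared above can be sketched in a few lines; the integer base mapping used for the Fourier variant is an assumption, since several numeric mappings (e.g. Voss binary indicators) are in common use:

```python
import numpy as np

BASES = "ACGT"
BASE_INDEX = {b: i for i, b in enumerate(BASES)}

def one_hot(seq):
    """Encode a DNA string as a (len(seq), 4) binary matrix, one column per base."""
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        mat[i, BASE_INDEX[base]] = 1.0
    return mat

def kmer_counts(seq, k=2):
    """Count overlapping k-mers into a fixed-length 4**k vector."""
    vec = np.zeros(4 ** k)
    for i in range(len(seq) - k + 1):
        code = 0
        for base in seq[i:i + k]:
            code = code * 4 + BASE_INDEX[base]  # base-4 index of the k-mer
        vec[code] += 1
    return vec

def fourier_encoding(seq, n_coeff=8):
    """Map bases to integers and keep the magnitudes of the first DFT
    coefficients; the integer mapping here is just one of several in use."""
    numeric = np.array([BASE_INDEX[base] for base in seq], dtype=float)
    return np.abs(np.fft.fft(numeric))[:n_coeff]
```

Each function turns a symbolic sequence into a fixed- or variable-length numeric representation that can then be fed to the classifiers mentioned in the abstract.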

Keywords: DNA encoding, machine learning, Fourier transform, wavelet transform

Procedia PDF Downloads 23
205 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study

Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio

Abstract:

Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. In the case of evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of the intervention, in order to identify and rectify possible faults in the design and implementation of the intervention. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme structure included six weekly group sessions (two hours each) encompassing strengths such as Values and self-care, Optimism, Empathy, Gratitude, Forgiveness, and Meaning of life and work. The feasibility criteria evaluated were: Demand, Acceptability, Satisfaction with the programme and with the moderator, Comprehension/Generalization of contents, Evaluation of the moderator (Social Skills and Integrity/Fidelity), Adherence, and programme implementation. Overall, 11 retirees (F=11), age range 54-75, from the metropolitan region of Porto Alegre-RS-Brazil took part in the study. The instruments used were: Qualitative Admission Questionnaire; Moderator Field Diary; the Programme Evaluation Form to assess participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and Comprehension/Generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form to assess the moderator's Social Skills (a five-item 4-point Likert scale), Integrity/Fidelity (a 10-item 4-point Likert scale), and Adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics as well as Intraclass Correlation coefficients were used for quantitative data and inter-rater reliability analysis.
The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall 88.3% attendance rate), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported Comprehension/Generalization of contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's Social Skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), Integrity/Fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' Adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, descriptive and Intraclass Correlation results were considered adequate. Structural changes were introduced in the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The obtained results were satisfactory, allowing changes to be made for further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.

Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees

Procedia PDF Downloads 195
204 Pioneering Conservation of Aquatic Ecosystems under Australian Law

Authors: Gina M. Newton

Abstract:

Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., broadly, ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: (1) the contribution of invasive species to conservation status; (2) how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and (3) identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place.
There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least in the immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.

Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species

Procedia PDF Downloads 132
203 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications

Authors: Marina Shargorodsky

Abstract:

Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid IMT, is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as placental vascular circulation and inflammatory lesions and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin, according to the criteria of the Society for Pediatric Pathology.
Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7±0.1 vs. 0.6±0.1, p=0.028). Multiple linear regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly higher in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation and pregnancy outcomes, deserves further investigation.

Keywords: obesity, pregnancy, complications, weight gain

Procedia PDF Downloads 53
202 An Improved Approach for Hybrid Rocket Injection System Design

Authors: M. Invigorito, G. Elia, M. Panelli

Abstract:

Hybrid propulsion combines beneficial properties of both solid and liquid rockets, such as multiple restarts and throttleability, as well as simplicity and reduced costs. A nitrous oxide (N2O)/paraffin-based hybrid rocket engine demonstrator is currently under development at the Italian Aerospace Research Center (CIRA) within the national research program HYPROB, funded by the Italian Ministry of Research. Nitrous oxide belongs to the class of self-pressurizing propellants that exhibit a high vapor pressure at standard ambient temperature. This peculiar feature makes these fluids very attractive for space rocket applications because it avoids the use of complex pressurization systems, leading to great benefits in terms of weight savings and reliability. To avoid feed-system-coupled instabilities, the phase change is required to occur through the injectors. In this regard, the oxidizer is stored in liquid condition while target chamber pressures are designed to lie below the vapor pressure. The consequent cavitation and flash vaporization constitute a remarkably complex phenomenology that poses great modelling challenges. Thus, it is clear that the design of the injection system is fundamental for the full exploitation of hybrid rocket engine throttleability. The Analytical Hierarchy Process has been used to select the injection architecture as the best compromise among different design criteria such as functionality, technology innovation and cost. The impossibility of using simplified engineering relations for the dimensioning of the injectors led to the need to apply a numerical approach based on OpenFOAM®. The numerical tool has been validated with selected experimental data from the literature. Quantitative as well as qualitative comparisons are performed in terms of mass flow rate and pressure drop across the injector for several operating conditions. The results show satisfactory agreement with the experimental data.
Modeling assumptions, together with their impact on numerical predictions, are discussed in the paper. Once the reliability of the numerical tool had been assessed, the injection plate was designed and sized to guarantee the required amount of oxidizer in the combustion chamber and therefore to assure high combustion efficiency. For this purpose, the plate has been designed with multiple injectors whose number and diameter have been selected in order to reach the requested mass flow rate for the two operating conditions of maximum and minimum thrust. The overall design has been finally verified through three-dimensional computations in cavitating non-reacting conditions, confirming that the proposed design solution is able to guarantee the requested values of mass flow rates.
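For context only: the "simplified engineering relation" that the abstract notes is inadequate for flashing N2O is the classical single-phase incompressible orifice equation. The sketch below shows that first-cut sizing logic (discharge coefficient, hole diameter, density and pressure drop are all illustrative assumptions, not CIRA values), which is why the plate is ultimately sized by hole count and diameter even when the per-hole flow comes from CFD instead:

```python
import math

def orifice_mdot(cd, diameter_m, rho_kg_m3, dp_pa):
    """Single-phase incompressible orifice flow: mdot = Cd * A * sqrt(2 * rho * dP).
    For N2O stored near saturation this misses two-phase/flashing effects,
    which is what motivates the CFD approach in the abstract."""
    area = math.pi * diameter_m ** 2 / 4.0
    return cd * area * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

def injectors_needed(target_mdot, cd, diameter_m, rho_kg_m3, dp_pa):
    """Number of identical holes required to pass the target mass flow rate."""
    per_hole = orifice_mdot(cd, diameter_m, rho_kg_m3, dp_pa)
    return math.ceil(target_mdot / per_hole)
```

In a real design loop, the `orifice_mdot` estimate would be replaced by the CFD-computed per-injector flow, with the same count-and-diameter trade made for the maximum- and minimum-thrust conditions.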

Keywords: hybrid rocket, injection system design, OpenFOAM®, cavitation

Procedia PDF Downloads 216
201 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality rather than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
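A minimal sketch of how ROC-based cut-offs such as INR ≥ 1.19 can be derived from retrospective data, here using Youden's J statistic to pick the threshold (the exact optimality criterion used by the authors is not stated in the abstract, so Youden's J is an assumption; the toy data are illustrative):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Pick the threshold maximizing sensitivity + specificity - 1 (Youden's J).
    `labels` are 1 for coagulopathic, 0 otherwise; higher `values`
    (e.g. INR or PT) are assumed to indicate coagulopathy."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    best_j, best_t = -1.0, None
    for t in np.unique(values):
        pred = values >= t                       # classify at this cut-off
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

Applied separately to INR, PT and aPTT against an outcome label, this kind of sweep yields assay-specific cut-offs of the form reported in the abstract.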

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
200 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors

Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov

Abstract:

It is well known that when a quantitative evaluation of the quality control of some economic processes (in particular, resource saving) is carried out with the help of innovation factors, three groups of problems arise: high uncertainty of the quality-management indicators, their considerable ambiguity, and the high cost of conducting large-scale research. These problems stem from the contradictory objectives of enhancing quality control in accordance with innovation factors while preserving the economic stability of the enterprise. Such problems are felt most acutely in countries lagging behind the developed economies of the world according to criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods reconcile the above-mentioned objectives and reduce the conflictness of the problems most effectively: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle 'from the project of an innovative product (technology) to its commercialization and the updating of customer-value parameters'; 2) the application of the so-called integrative-cyclical approach, consistent with the complexity and type of the concept, which allows a quantitative assessment of the stages of achieving consistency of these objectives (from a baseline of imbalance, through their compromise, to the achievement of positive synergies).
For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the analysis results in a fuzzy-set model (to adjust the feature space); and methods of non-parametric statistics (to decide on the completion or repetition of the cycle, depending on the focus and closeness of the connection between the indicator ranks of goal disbalance). The repetition is performed after partial substitution of technical and technological ("hard") factors by management ("soft") factors in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe this research is promising, as it provides consistent management decisions for reducing the severity of the above-mentioned contradictions and increasing the validity of the choice of resource-development strategies in terms of quality-management parameters and enterprise sustainability. Our existing experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise the level of resource-saving characteristics up to the values existing in developed economies of the post-industrial type.
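The non-parametric decision step above could, for example, compare the rankings of goal-disbalance indicators before and after a cycle via a rank correlation; the following sketch is purely illustrative (the Spearman statistic, the threshold, and the decision rule are assumptions, not the authors' stated method):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation, computed as the Pearson correlation of ranks;
    the simple double-argsort ranking assumes no ties."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def repeat_cycle(disbalance_before, disbalance_after, threshold=0.5):
    """Repeat the improvement cycle while the disbalance rankings remain strongly
    correlated, i.e. substituting 'hard' factors by 'soft' ones has not yet
    reshuffled which goals are most out of balance."""
    return spearman_rho(disbalance_before, disbalance_after) > threshold
```

A high rank correlation means the same goals remain most imbalanced, suggesting another pass of factor substitution; a low or negative correlation would signal that the cycle has materially changed the problem ranking.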

Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development

Procedia PDF Downloads 245
199 Parenting Interventions for Refugee Families: A Systematic Scoping Review

Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai

Abstract:

Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g. trauma, mental health concerns, separation, relocation, poverty, etc.) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. There are very few studies in the literature that examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O’Malley’s descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting to be a major concern, citing that they experienced alienating or unwelcoming services, language barriers, and a lack of familiarity with school and early years services. Services that focused on building parents' resilience, provided parent education or services in the family's native language, and offered families safe spaces to promote parent-child interactions were the most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules, while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks.
Interventions were designed in a prescriptive manner, without direct participation by family members and not directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices or the caregiving environment, or child development outcomes, primarily focusing on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in their native language. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures.

Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families

Procedia PDF Downloads 161
198 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is a continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection.
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, rupture distance (Rrup), and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
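A minimal numpy sketch of the LASSO described above, implemented by cyclic coordinate descent on synthetic data (the synthetic predictors, true coefficients and penalty level are illustrative assumptions, not the paper's data; production work would use an established solver such as scikit-learn's):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """LASSO via cyclic coordinate descent:
    minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding feature j
            rho = X[:, j] @ r / n
            # soft-thresholding: small correlations are set exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Synthetic stand-ins for seismological predictors (purely illustrative):
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                 # e.g. scaled magnitude, Rrup, Vs30
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
coef = lasso_cd(X, y, lam=0.3)
ranking = np.argsort(-np.abs(coef))               # most important predictor first
```

The key behaviour for this paper is visible in `coef`: the irrelevant predictor is shrunk exactly to zero while the others survive, which is what makes `ranking` a usable importance ordering.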

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 330
197 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to ensure that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: the geometrical damage, which indicates the change in the initial structure’s geometry due to the residual deformation as a result of plastic behaviour, and the mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI was also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damages have been taken into account after the earthquake analysis step. For comparison, an identical model is also created, which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case when SSI is included in the scenario. The results are validated using the literature.

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 121
196 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interests. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various aspects, and different methods have been used for its estimation, with several empirical equations proposed by experts. Though the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have inherent limits, these equations can still be used in estimating sediment load. In the present study, two black box models, namely an SRC (sediment rating curve) and an ANN (artificial neural network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi River, and it carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include the discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. A sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the application of these data in the ANN, they are normalised first and then fed in various combinations to yield the sediment load.
RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black box models, they do not exactly represent the factors that cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed forward back propagation, cascade forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727 respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that a neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable in the field of sediment load estimation, and hence this model cannot be declared the best model in this study. Cascade forward back propagation produces results much closer to the neural network fitting model, and hence it is the best model based on the present study.
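The sediment rating curve step and the evaluation criteria above can be sketched as a log-log power-law fit, C = aQᵇ, scored with RMSE and R² (the power-law form is the classical SRC; the synthetic data below are illustrative, not the Jondhra records):

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit the sediment rating curve C = a * Q**b by ordinary least squares
    in log-log space; returns (a, b)."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

The same `rmse` and `r_squared` functions apply unchanged to the ANN variants, which is what makes the head-to-head comparison in the study possible.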

Keywords: artificial neural network, root mean square error, sediment, sediment rating curve

Procedia PDF Downloads 325