Search results for: criteria of similarity
205 The Distribution and Environmental Behavior of Heavy Metals in Jajarm Bauxite Mine, Northeast Iran
Authors: Hossein Hassani, Ali Rezaei
Abstract:
Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Protecting the environment against pollutants such as the heavy metals released by industries, mines and modern technologies is a concern for both researchers and industry. In order to assess soil contamination, the distribution and environmental behavior of heavy metals have been investigated. The Jajarm bauxite mine is the most important bauxite deposit discovered in Iran, with a reserve of about 22 million tons; its main ore mineral is diaspore. To estimate the heavy metal content of the Jajarm bauxite mine area and to evaluate the pollution level, 50 samples were collected and analyzed for the heavy metals As, Cd, Cu, Hg, Ni and Pb by inductively coupled plasma-mass spectrometry (ICP-MS). In this study, evaluation criteria including the contamination factor (CF), average concentration (AV), enrichment factor (EF) and geoaccumulation index (GI) were determined to assess the risk of pollution from heavy metals (As, Cd, Cu, Hg, Ni and Pb) in the Jajarm bauxite mine. In the studied samples, the average recorded concentrations of arsenic, cadmium, copper, mercury, nickel and lead are 18, 0.11, 12, 0.07, 58 and 51 mg/kg, respectively. Comparison of the average heavy metal concentrations and toxic potential in the samples against the world average for uncontaminated soils shows that the averages for Pb and As exceed the world average values. The contamination factor for the studied elements was calculated on the basis of the soil background concentrations and categorized, relative to the world average for uncontaminated soils, according to the Hakanson classification. The calculated modified degree of contamination indicates moderate contamination (1.55-2.0) for the average soil sample of the study area, based both on background values and on the world average for uncontaminated soils. The contamination factor results show that, at some sampling stations, average lead and arsenic concentrations exceed background values and unnaturally elevated metal concentrations occur within the study area; this is attributed to mining and mineral extraction. The geoaccumulation index of the soil samples indicates that the soils are uncontaminated with respect to copper, nickel, cadmium, arsenic, lead and mercury. In general, the results indicate that the Jajarm bauxite mine area is uncontaminated by heavy metal pollution and that extracting ore from the mine does not create environmental hazards in the region.
Keywords: enrichment factor, geoaccumulation index, heavy metals, Jajarm bauxite mine, pollution
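For readers who want to reproduce the evaluation criteria, the snippet below is a minimal Python sketch of the standard index formulas: contamination factor CF = C_sample/C_background, geoaccumulation index Igeo = log2(C/(1.5·B)), and enrichment factor EF double-normalized to a conservative reference element. The sample values are the averages reported above; the background values are illustrative placeholders, not the study's data.

```python
import math

# Illustrative background values (mg/kg) -- placeholders, not the study's data
background = {"As": 6.0, "Pb": 17.0, "Ni": 29.0, "Cu": 14.0, "Cd": 0.3, "Hg": 0.05}
# Average concentrations reported in the abstract (mg/kg)
sample = {"As": 18.0, "Pb": 51.0, "Ni": 58.0, "Cu": 12.0, "Cd": 0.11, "Hg": 0.07}

def contamination_factor(c_sample, c_background):
    # CF = measured concentration / background concentration
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    # Igeo = log2(C / (1.5 * B)); the 1.5 factor buffers natural variation
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    # EF double-normalizes to a reference element (e.g., Al or Fe)
    return (c_sample / ref_sample) / (c_background / ref_background)

for metal in sample:
    cf = contamination_factor(sample[metal], background[metal])
    igeo = geoaccumulation_index(sample[metal], background[metal])
    print(f"{metal}: CF={cf:.2f}, Igeo={igeo:.2f}")
```

With Hakanson-style class boundaries, a CF between 1 and 3 would read as moderate contamination, and an Igeo below 0 as uncontaminated.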
Procedia PDF Downloads 291
204 What Is At Stake When Developing and Using a Rubric to Judge Chemistry Honours Dissertations for Entry into a PhD?
Authors: Moira Cordiner
Abstract:
As a result of an Australian university approving a policy to improve the quality of assessment practices, the author commenced work in 2008 as an academic developer (AD) with expertise in criterion-referenced assessment. The four-year appointment was to support 40 'champions' in their Schools. This presentation is based on the experiences of a group of Chemistry academics who worked with the AD to develop and implement an honours dissertation rubric. Honours is a research year following a three-year undergraduate degree. If the standard of the student's work is high enough (mainly the dissertation), the student can commence a PhD. What became clear during the process was that much more was at stake than just the successful development and trial of the rubric, including academics' reputations, university rankings and research outputs. Working with the champion Head of School (HOS) and the honours coordinator, the AD helped them adapt an honours rubric that she had helped create and trial successfully for another Science discipline. A year of many meetings and complex power plays between the two academics finally resulted in a version that was critiqued by the Chemistry teaching and learning committee. Accompanying the rubric was an explanation of grading rules plus a list of supervisor expectations to explain to students how the rubric was used for grading. Further refinements were made until all staff were satisfied. It was trialled successfully in 2011, after which small changes were made. It was adapted and implemented for Medicine honours with her help in 2012. Despite coming to consensus about statements of quality in the rubric, a few academics found it challenging to match these to the dissertations and allocate a grade. They had had no time to undertake training to do this, or to make overt the implicit criteria and standards which some admitted they were using - 'I know what a first class is'. Other factors affecting grading included: the small School, where all supervisors knew each other and the students, meant that friendships and collegiality were at stake if low grades were given; no external examiners were appointed - all were internal, with the potential for bias; supervisors' reputations were at stake if their students did not receive a good grade; the School's reputation was also at risk if insufficient honours students qualified for PhD entry; and research output was jeopardised without enough honours students to work on supervisors' projects. A further complication during the study was a restructure of the university and retrenchments, with pressure to increase research output as world rankings assumed greater importance to senior management. In conclusion, much more was at stake than developing a usable rubric. The HOS had to be seen to champion the 'new' assessment practice while balancing institutional demands for increased research output and ensuring as many honours dissertations as possible met high standards, so that eventually the percentage of PhD completions and research output rose. It is therefore in the institution's best interest for this cycle to be maintained, as it affects rankings and reputations. In this context, are rubrics redundant?
Keywords: explicit and implicit standards, judging quality, university rankings, research reputations
Procedia PDF Downloads 337
203 Health and Performance Fitness Assessment of Adolescents in Middle Income Schools in Lagos State
Authors: Onabajo Paul
Abstract:
The testing and assessment of physical fitness of school-aged adolescents in Nigeria has been going on for several decades. Originally, these tests focused strictly on identifying health and physical fitness status and comparing adolescents' results with those of others. There is now considerable interest in the health- and performance-related fitness of adolescents, in which results are compared with criteria representing positive health rather than simply with other students' scores. Although physical education is taught in secondary schools and physical activity is encouraged, regular assessment of students' fitness level and health status appears to be scarce or absent in these schools. The purpose of the study was to assess the health and performance fitness of adolescents in middle-income schools in Lagos State. A total of 150 students were selected using the simple random sampling technique. Participants were measured on hand grip strength, sit-ups, the PACER 20-meter shuttle run, standing long jump, weight and height. The data collected were analyzed with descriptive statistics of means, standard deviations, and range, and compared with fitness norms. The majority, 111 (74.0%), of the adolescents achieved the healthy fitness zone, 33 (22.0%) were very lean, and 6 (4.0%) needed improvement according to the normative standard of the Body Mass Index test. For muscular strength, the majority, 78 (52.0%), were weak, 66 (44.0%) were normal, and 6 (4.0%) were strong according to the normative standard of the hand-grip strength test. For aerobic capacity, the majority, 93 (62.0%), needed improvement and were at health risk, 36 (24.0%) achieved the healthy fitness zone, and 21 (14.0%) needed improvement according to the normative standard of the PACER test. 48 (32.0%) of the participants had good hip flexibility, 38 (25.3%) had fair status, 27 (18.0%) needed improvement, 24 (16.0%) had very good hip flexibility status, and 13 (8.7%) had excellent status. 61 (40.7%) had average muscular endurance status, 30 (20.0%) had poor status, 29 (19.3%) had good status, 28 (18.7%) had fair muscular endurance status, and 2 (1.3%) had excellent status according to the normative standard of the sit-up test. 52 (34.7%) had low jump ability, 47 (31.3%) had marginal fitness, 31 (20.7%) had good fitness, and 20 (13.3%) had high performance fitness according to the normative standard of the standing long jump test. Based on the findings, it was concluded that the majority of the adolescents had good Body Mass Index status and performed well in both the hip flexibility and muscular endurance tests, whereas the majority performed poorly in the aerobic capacity, muscular strength and jump ability tests. It was recommended that, to enhance wellness, adolescents should be involved in physical activity and recreation lasting 30 minutes three times a week, and that schools should run regular fitness programmes for students in both senior and junior classes so as to develop good cardio-respiratory and muscular fitness and improve the overall health of the students.
Keywords: adolescents, health-related fitness, performance-related fitness, physical fitness
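As an illustration of how criterion-referenced fitness scoring of this kind works, the sketch below computes BMI and assigns a zone. The cut-offs here are hypothetical placeholders; real normative standards (e.g., FITNESSGRAM-style zones) are age- and sex-specific.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # Body Mass Index = weight (kg) / height (m) squared
    return weight_kg / height_m ** 2

def classify_bmi(value: float) -> str:
    # Illustrative cut-offs only; actual standards vary by age and sex
    if value < 17.0:
        return "very lean"
    if value < 25.0:
        return "healthy fitness zone"
    return "needs improvement"

# Hypothetical participants: (weight in kg, height in m)
participants = [(55.0, 1.62), (48.0, 1.70), (80.0, 1.65)]
for weight, height in participants:
    b = bmi(weight, height)
    print(f"BMI {b:.1f} -> {classify_bmi(b)}")
```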
Procedia PDF Downloads 353
202 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review
Authors: Hanan Algarni
Abstract:
Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that provides patients with feedback and with open and random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria - participants: adults >18 years, stroke, ambulatory, without severe visual or cognitive impairments; intervention: VR treadmill training alone or with physiotherapy; comparator: any other interventions; outcomes: gait speed, balance, function, community participation. Characteristics of included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's ROB tool. Narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour per session. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. ROB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up. Outcome measures, training intensity and duration also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was done on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up (2 weeks-1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce more robust evidence.
Keywords: virtual reality, treadmill, stroke, gait rehabilitation
Procedia PDF Downloads 274
201 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study
Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar
Abstract:
Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. While intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, the main criteria being class 3 obesity, or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust's health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared against NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had available data for BMI or to allow BMI to be estimated. Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, likely to be improved by weight management surgery, and were therefore also eligible for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, and the average age was 60.4 years. Conclusions: Based on this Trust's experience, around 4% of obese patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefited more from bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients' written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis of outcomes following fundoplication in all patients to assess the incidence of recurrent disease, will be undertaken to strengthen these conclusions.
Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines
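The eligibility screen applied in this audit reduces to a simple rule. The sketch below encodes the criteria as stated in the abstract (BMI > 40 kg/m² automatically eligible; BMI 35-40 kg/m² eligible with a significant co-morbidity); the patient data are hypothetical.

```python
def bariatric_referral_eligible(bmi: float, has_significant_comorbidity: bool) -> bool:
    # BMI > 40 kg/m^2: automatically eligible.
    # BMI > 35 kg/m^2: eligible if a co-morbidity expected to improve
    # with weight loss (e.g., hypertension) is present.
    if bmi > 40:
        return True
    if bmi > 35 and has_significant_comorbidity:
        return True
    return False

# Hypothetical patients: (BMI, significant co-morbidity present)
cohort = [(41.2, False), (36.5, True), (33.0, True)]
eligible = [p for p in cohort if bariatric_referral_eligible(*p)]
print(f"{len(eligible)}/{len(cohort)} eligible for referral")
```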
Procedia PDF Downloads 60
200 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population. More than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and allow optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1, default or original gray levels; N2, dynamic intensity limited to µ±3σ; and N3, intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI and per normalization scheme were computed from well-known statistical methods implemented in the software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficients (POE+ACC). These features were analyzed under two standardization states, standard (S) and non-standard (NS), with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors, both on discrimination power and on classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method which can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
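A minimal sketch of the classification stage described above - µ±3σ intensity normalization, a discriminant projection, a 1-NN classifier, and ROC analysis - using scikit-learn and synthetic feature vectors in place of the MaZda texture features (the study's actual pipeline, NDA step and data are not reproduced here; LDA stands in for the discriminant analysis).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
# Synthetic stand-in for 270 texture features over 140 ROIs (2 per patient)
X = rng.normal(size=(140, 270))
y = rng.integers(0, 2, size=140)          # 0 = benign, 1 = malignant

# N2-style normalization: clip feature values to mu +/- 3 sigma
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = np.clip(X, mu - 3 * sigma, mu + 3 * sigma)

X_tr, X_te, y_tr, y_te = train_test_split(Xn, y, test_size=0.3, random_state=0)

# Discriminant projection followed by a 1-nearest-neighbour classifier
lda = LinearDiscriminantAnalysis(n_components=1).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X_tr), y_tr)

scores = knn.predict_proba(lda.transform(X_te))[:, 1]
preds = knn.predict(lda.transform(X_te))
print("Az (AUC):", roc_auc_score(y_te, scores))
print(confusion_matrix(y_te, preds))
```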
Procedia PDF Downloads 279
199 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
Successful achievement of strategic goals requires an effective performance management system, i.e., determining appropriate indicators that measure the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they invested, it is vital to identify the key performance indicators that contribute to value creation. These indicators are known as value drivers. Based on the undertaken literature search, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is that they are calculated from accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) enterprise value. Among the known factors reducing enterprise value are lack of capital, lack of a strategic management system and poor quality of production. To gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the AMADEUS database (Analyse Major Databases from European Sources). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research - analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria, including company-specific characteristics such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard for customers' requirements.
Keywords: business value, financial ratios, performance measurement, value drivers
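As a sketch of the quantitative side of the method - computing the performance ratios and testing group differences non-parametrically - the snippet below applies a Mann-Whitney U test (one plausible choice; the abstract does not name the specific tests used) to hypothetical ROE samples.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def roe(net_income: float, equity: float) -> float:
    # Return on equity = net income / shareholders' equity
    return net_income / equity

def roa(net_income: float, assets: float) -> float:
    # Return on assets = net income / total assets
    return net_income / assets

rng = np.random.default_rng(1)
# Hypothetical ROE samples for two groups of enterprises, split by whether
# managers named a given factor (e.g., employee incompetence) as a threat
roe_factor_named = rng.normal(0.08, 0.04, size=30)
roe_factor_not_named = rng.normal(0.11, 0.04, size=30)

stat, p = mannwhitneyu(roe_factor_named, roe_factor_not_named,
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```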
Procedia PDF Downloads 222
198 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since the 2014 report to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies of children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. 45 studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment. Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children being the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to have an impact on a child's subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. The evidence points to an accumulation of nutritional deficits, and therefore of risk, over the life course of a child, demonstrating the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
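Operationally, concurrent wasting and stunting is commonly defined on anthropometric z-scores (weight-for-height z < -2 and height-for-age z < -2); the sketch below applies that conventional definition to hypothetical records. The cut-offs reflect standard WHO practice rather than thresholds stated in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Child:
    whz: float  # weight-for-height z-score (wasted if < -2)
    haz: float  # height-for-age z-score (stunted if < -2)

def is_wasted(c: Child) -> bool:
    return c.whz < -2.0

def is_stunted(c: Child) -> bool:
    return c.haz < -2.0

def is_concurrent(c: Child) -> bool:
    # Concurrent wasting and stunting: both deficits present at once
    return is_wasted(c) and is_stunted(c)

# Hypothetical cohort
cohort = [Child(-2.4, -2.6), Child(-1.1, -2.3), Child(-2.2, -0.8)]
concurrent = sum(is_concurrent(c) for c in cohort)
print(f"{concurrent}/{len(cohort)} concurrently wasted and stunted")
```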
Procedia PDF Downloads 127
197 System-Driven Design Process for Integrated Multifunctional Movable Concepts
Authors: Oliver Bertram, Leonel Akoto Chama
Abstract:
In today's civil transport aircraft, the design of flight control systems is based on the experience gained from previous aircraft configurations, with a clear distinction between primary and secondary flight control functions for controlling the aircraft attitude and trajectory. Significant system improvements are now seen particularly in multifunctional moveable concepts, where the flight control functions are no longer considered separate but integral. This allows new functions to be implemented in order to improve overall aircraft performance. However, the classical design process for flight controls is sequential and insufficiently interdisciplinary. In particular, the systems discipline is involved only rudimentarily in the early phase. In many cases, the task of systems design is limited to meeting the requirements of the upstream disciplines, which may lead to integration problems later. For this reason, an incremental development approach is required to reduce the risk of a complete redesign. Although the potential of, and the path to, multifunctional moveable concepts have been shown, the complete re-engineering of aircraft concepts with less classical moveable concepts carries considerable design risk due to the lack of design methods. This represents an obstacle to major leaps in technology. The gap in the state of the art widens further if unconventional aircraft configurations, for which no reference data or architectures are available, are to be considered in the future. This means that the experience-based approach used for conventional configurations is of limited use and not applicable to the next generation of aircraft. In particular, there is a need for methods and tools for rapid trade-offs between new multifunctional flight control system architectures. To close this gap in the state of the art, an integrated, system-driven design process for multifunctional flight control systems of non-classical aircraft configurations is presented. The overall goal of the design process is to quickly find optimal solutions for single or combined target criteria within the very large solution space for the flight control system. In contrast to the state of the art, all disciplines are involved in an integrated rather than a sequential process for a holistic design. To emphasize the systems discipline, this paper focuses on the methodology for designing moveable actuation systems within this integrated design process for multifunctional moveables. The methodology includes different approaches for creating system architectures, component design methods, and the process outputs necessary to evaluate the systems. An application example on a reference configuration is used to demonstrate the process and validate the results. For this, new unconventional hydraulic and electrical flight control system architectures are calculated, which result from the higher requirements of the multifunctional moveable concept. In addition to typical key performance indicators such as mass and power requirements, the feasibility and wing-integration aspects of the system components are examined and discussed. This is intended to show how systems design can influence and drive the wing and overall aircraft design.
Keywords: actuation systems, flight control surfaces, multi-functional movables, wing design process
Procedia PDF Downloads 144
196 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn are reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature causes what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-value logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed account of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that they are easy to apply in the structures of knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria for determining AI's capability to achieve its ultimate objectives.
Ultimately, we argue that our findings suggest that, although scientific progress may not have reached its peak, and human scientific evolution may not yet have reached a point where it is possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 113
195 The Four Pillars of Islamic Design: A Methodology for an Objective Approach to the Design and Appraisal of Islamic Urban Planning and Architecture Based on Traditional Islamic Religious Knowledge
Authors: Azzah Aldeghather, Sara Alkhodair
Abstract:
In the modern urban planning and architecture landscape, with western ideologies and styles becoming the mainstay of experience and definitions globally, the Islamic world requires a methodology that defines its expression and transcends cultural, societal, and national styles. This paper proposes a methodology as an objective system to define, evaluate and apply traditional Islamic knowledge to Islamic urban planning and architecture, providing the Islamic world with a system through which to manifest its approach to design. The methodology is expressed as Four Pillars, based on traditional meanings of Arabic words roughly translated as Pillar One: The Principles (Al Mabade'), Pillar Two: The Foundations (Al Asas), Pillar Three: The Purpose (Al Ghaya), and Pillar Four: Presence (Al Hadara). Pillar One (The Principles) expresses the unification (Tawheed) pillar of Islam, 'There is no God but God', and comprises seven principles: 1. Human values (Qiyam Al Insan); 2. Universal language as sacred geometry; 3. Fortitude© and Benefitability©; 4. Balance and integration: conjoining the opposites; 5. Man, time, and place; 6. Body, mind, spirit, and essence; 7. Unity of design expression to achieve unity, harmony, and security in design. Pillar Two (The Foundations) rests on two foundations: 'Muhammad is the Prophet of God' and his relationship to the renaming of Medina City as a prototypical city or place, which defines a central space for gathering, conjoined with an analysis of the Medina Charter as a base for humanistic design. Pillar Three (The Purpose, Al Ghaya) comprises four criteria: the naming of the design as a title, the intention of the design as an end goal, the reasoning behind the design, and the priorities of expression. Pillar Four (Presence, Al Hadara) is usually translated as 'civilization'; in Arabic, the root of Hadara is 'to be present'. It has five primary definitions utilized to express the act of design: Wisdom (Hikma) as a philosophical concept; Identity (Hawiya) of the form; Dialogue (Hiwar), the requirements of the project vis-a-vis what the designer wishes to convey; Expression (Al Ta'abeer) the designer wishes to apply; and the Resources (Mawarid) available. The paper provides examples, where applicable, of past and present designs that exemplify the manifestation of the Pillars. The proposed methodology endeavors to return Islamic urban planning and architectural design to its a priori position as a leading design expression adaptable to any place, time, and cultural setting. It provides a base for analysis that transcends the concept of style and external form as a definition, and expresses the singularity of the esoteric 'spiritual' aspects in a rational, principled, and logical manner clearly addressed in Islam's essence.
Keywords: Islamic architecture, Islamic design, Islamic urban planning, principles of Islamic design
Procedia PDF Downloads 105
194 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences, in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a parsing step to selectively extract the relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of the models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform giving the best results across the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform
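A minimal sketch of two of the encodings compared - one-hot and a Fourier-transform representation (here, FFT magnitudes of a simple integer base mapping, one plausible variant of such an encoding) - feeding a random-forest classifier. Sequences and labels are synthetic placeholders, not the 16S rRNA data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> np.ndarray:
    # 4 channels per position, flattened into one feature vector
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES[b]] = 1.0
    return m.ravel()

def fourier(seq: str) -> np.ndarray:
    # Map bases to integers, take FFT magnitudes as a fixed-length spectrum
    signal = np.array([BASES[b] for b in seq], dtype=float)
    return np.abs(np.fft.rfft(signal))

rng = np.random.default_rng(2)
# Synthetic equal-length sequences standing in for normalized 16S fragments
seqs = ["".join(rng.choice(list("ACGT"), 120)) for _ in range(80)]
labels = rng.integers(0, 2, size=80)  # stand-in for genus labels

for name, enc in [("one-hot", one_hot), ("fourier", fourier)]:
    X = np.array([enc(s) for s in seqs])
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
    print(f"{name}: mean CV accuracy = {acc.mean():.2f}")
```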
Procedia PDF Downloads 23
193 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study
Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio
Abstract:
Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. In evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in its design and implementation. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme structure included six weekly group sessions (two hours each) encompassing strengths such as values and self-care, optimism, empathy, gratitude, forgiveness, and meaning of life and work. The feasibility criteria evaluated were: demand; acceptability; satisfaction with the programme and with the moderator; comprehension/generalization of contents; evaluation of the moderator (social skills and integrity/fidelity); adherence; and programme implementation. Overall, 11 retirees (all women), age range 54-75, from the metropolitan region of Porto Alegre-RS-Brazil, took part in the study. The instruments used were: a qualitative admission questionnaire; a moderator field diary; the Programme Evaluation Form, assessing participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and comprehension/generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form, assessing the moderator's social skills (a five-item 4-point Likert scale), integrity/fidelity (a 10-item 4-point Likert scale), and adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics and intraclass correlation coefficients were used for quantitative data and inter-rater reliability analysis. The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall attendance rate of 88.3%), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported comprehension/generalization of the contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's social skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), integrity/fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, descriptive and intraclass correlation results were considered adequate. Structural changes were made to the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The results obtained were satisfactory, allowing changes to be made ahead of further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.
Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees
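The inter-rater agreement statistics reported above can be reproduced with a standard two-way intraclass correlation; the sketch below implements Shrout-Fleiss ICC(2,1) on a hypothetical ratings matrix (the study's actual ratings are not reproduced here).

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    # ratings: n targets (sessions/items) x k raters (observers)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-target
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss (1979): two-way random effects, single measures
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical: 9 items rated on a 4-point scale by 2 independent observers
ratings = np.array([[4, 4], [3, 4], [4, 4], [4, 3], [3, 3],
                    [4, 4], [2, 3], [4, 4], [3, 3]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```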
Procedia PDF Downloads 195
192 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premiere, national law under which species and 'ecological communities' (i.e., like ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there’s been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some including multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: -the contribution of invasive species to conservation status, -how to demonstrate and attribute decline in 'ecological integrity' to conservation status, and, -identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State levels remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches to the protected status. They also trigger the need for environment impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least in the immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to addresses a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 132
191 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been related to altered future weight-gain trajectories and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as to the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals, as well as of recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, assessed by intima-media thickness, as well as on placental vascular circulation, inflammatory lesions, and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology classified placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin, according to the criteria of the Society for Pediatric Pathology. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7±0.1 vs 0.6±0.1, p=0.028). Multiple regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly more frequent in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation and pregnancy outcomes, deserves further investigation.
Keywords: obesity, pregnancy, complications, weight gain
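The reported odds ratio comes from a multivariable model of this general shape; the sketch below fits a logistic regression with statsmodels on synthetic data, using the abstract's stated covariates (pre-gestational BMI, HDL cholesterol, fasting glucose) as a template. All numbers are placeholders, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 59
# Synthetic covariates standing in for the study's predictors
bmi = rng.normal(27, 4, n)       # pre-gestational BMI
hdl = rng.normal(60, 12, n)      # HDL cholesterol (mg/dL)
glucose = rng.normal(85, 10, n)  # fasting glucose (mg/dL)

# Synthetic binary outcome: elevated IMT (1) vs not (0)
logit_true = 0.15 * (bmi - 27) - 0.02 * (hdl - 60)
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(int)

X = sm.add_constant(np.column_stack([bmi, hdl, glucose]))
model = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(model.params)  # exponentiated coefficients = ORs
print("Odds ratios (const, BMI, HDL, glucose):", np.round(odds_ratios, 3))
```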
Procedia PDF Downloads 54
190 An Improved Approach for Hybrid Rocket Injection System Design
Authors: M. Invigorito, G. Elia, M. Panelli
Abstract:
Hybrid propulsion combines beneficial properties of both solid and liquid rockets, such as multiple restarts and throttleability, as well as simplicity and reduced costs. A nitrous oxide (N2O)/paraffin-based hybrid rocket engine demonstrator is currently under development at the Italian Aerospace Research Center (CIRA) within the national research program HYPROB, funded by the Italian Ministry of Research. Nitrous oxide belongs to the class of self-pressurizing propellants that exhibit a high vapor pressure at standard ambient temperature. This peculiar feature makes these fluids very attractive for space rocket applications because it avoids the use of complex pressurization systems, leading to great benefits in terms of weight savings and reliability. To avoid feed-system-coupled instabilities, the phase change is required to occur through the injectors. In this regard, the oxidizer is stored in liquid condition while target chamber pressures are designed to lie below the vapor pressure. The consequent cavitation and flash vaporization constitute a remarkably complex phenomenology that raises great modelling challenges. It is therefore clear that the design of the injection system is fundamental for the full exploitation of hybrid rocket engine throttleability. The Analytical Hierarchy Process was used to select the injection architecture as the best compromise among different design criteria, such as functionality, technology innovation and cost. The impossibility of using simplified engineering relations for the dimensioning of the injectors led to the need for a numerical approach based on OpenFOAM®. The numerical tool was validated with selected experimental data from the literature. Quantitative as well as qualitative comparisons were performed in terms of mass flow rate and pressure drop across the injector for several operating conditions, and the results show satisfactory agreement with the experimental data. Modeling assumptions, together with their impact on the numerical predictions, are discussed in the paper. Once the reliability of the numerical tool had been assessed, the injection plate was designed and sized to guarantee the required amount of oxidizer in the combustion chamber and therefore to assure high combustion efficiency. To this purpose, the plate was designed with multiple injectors whose number and diameter were selected in order to reach the requested mass flow rate for the two operating conditions of maximum and minimum thrust. The overall design was finally verified through three-dimensional computations under cavitating, non-reacting conditions, and it was verified that the proposed design solution is able to guarantee the requested mass flow rates.
Keywords: hybrid rocket, injection system design, OpenFOAM®, cavitation
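The architecture-selection step relies on the Analytical Hierarchy Process; the sketch below shows the standard AHP weighting calculation - the principal eigenvector of a pairwise comparison matrix plus a consistency check - for the three criteria named above. The comparison values are illustrative, not those used in the study.

```python
import numpy as np

# Illustrative pairwise comparison matrix (Saaty 1-9 scale); criteria order:
# functionality, technology innovation, cost -- values are placeholders
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```

A consistency ratio (CR) below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.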
Procedia PDF Downloads 216
189 Assessing the Efficiency of a Pre-Hospital Scoring System against a Conventional Coagulation Test-Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5 and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results. Keywords: trauma, coagulopathy, prediction, model
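The cut-off derivation described above (receiver operating characteristic analysis on conventional assays) can be sketched as follows. The toy data and the use of Youden's J statistic to pick the threshold are assumptions for illustration, since the abstract does not state which optimality criterion was applied.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical arrays: an INR value and a binary coagulopathy label per patient.
inr = np.array([1.0, 1.1, 1.25, 1.4, 1.05, 1.3, 1.6, 1.15])
coagulopathic = np.array([0, 0, 1, 1, 0, 1, 1, 0])

fpr, tpr, thresholds = roc_curve(coagulopathic, inr)
j = tpr - fpr                       # Youden's J at each candidate threshold
cutoff = thresholds[np.argmax(j)]   # threshold maximizing sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(coagulopathic, inr):.2f}, INR cut-off ≈ {cutoff:.2f}")
```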
Procedia PDF Downloads 176188 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors
Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov
Abstract:
It is well known that when we quantitatively evaluate the quality control of some economic processes (in particular, resource saving) with the help of innovation factors, there are three groups of problems: high uncertainty of the quality-management indicators, their considerable ambiguity, and the high cost of providing large-scale research. These problems arise from the contradictory objectives of enhancing quality control in accordance with innovation factors while preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world according to criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods reconcile the above-mentioned objectives and reduce the conflict between the problems most effectively: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle "from the project of an innovative product (technology) to its commercialization and the update of customer-value parameters"; 2) the application of the so-called integrative-cyclical approach, consistent with the complexity and type of the concept, to studies that allow a quantitative assessment of the stages of achieving consistency of these objectives (from a baseline of imbalance, through compromise, to the achievement of positive synergies). For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the results of the analysis in a fuzzy-set model (to adjust the feature space); and a method of non-parametric statistics (for the decision on completion or repetition of the cycle, depending on the focus and the closeness of the connection of the indicator ranks of goal disbalance). The repetition is performed after partial substitution of technical and technological ("hard") factors by management ("soft") factors, in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe this research is promising for providing consistent management decisions that reduce the severity of the above-mentioned contradictions and increase the validity of the choice of resource-development strategies in terms of quality-management parameters and enterprise sustainability. Our existing experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise the level of resource-saving characteristics up to the values existing in developed economies of the post-industrial type. Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development
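The non-parametric decision step that closes each cycle could look roughly like the sketch below, which uses the Spearman rank correlation between the indicator ranks of the two goals. The thresholds and the stop/repeat rule are illustrative assumptions, not the authors' exact criteria.

```python
from scipy.stats import spearmanr

# Hypothetical indicator ranks for the two conflicting goals across periods:
quality_rank   = [1, 2, 4, 3, 5, 6]   # rank of the quality-control indicator
stability_rank = [2, 1, 3, 4, 6, 5]   # rank of the economic-stability indicator

rho, p_value = spearmanr(quality_rank, stability_rank)

# Decision rule (an assumption, not the authors' exact thresholds): a strong,
# significant positive rank correlation suggests the goals are reconciled and
# the cycle can stop; otherwise repeat after substituting "soft" management
# factors for "hard" technical ones.
if rho > 0.8 and p_value < 0.05:
    print(f"rho = {rho:.2f}: goals consistent, stop the cycle")
else:
    print(f"rho = {rho:.2f}: disbalance remains, repeat the cycle")
```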
Procedia PDF Downloads 245187 Parenting Interventions for Refugee Families: A Systematic Scoping Review
Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai
Abstract:
Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g. trauma, mental health concerns, separation, relocation, poverty, etc.) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. Very few studies in the literature examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O'Malley's descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing alienation/unwelcoming services, language barriers, and lack of familiarity with school and early-years services. Services that focused on building parents' resilience and parent education, provided services in the family's native language, and offered families safe spaces to promote parent-child interaction were most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks. Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in the family's native language. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures. Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families
Procedia PDF Downloads 161186 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are Magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed. Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
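A minimal sketch of the variable-ranking step, assuming scikit-learn's LassoCV and synthetic stand-in data (the actual study uses the NGA records and the 5% damped maximum direction spectral acceleration):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the NGA records: columns are magnitude,
# rupture distance Rrup [km] and Vs30 [m/s]; y is a log spectral acceleration.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(4, 8, 600),
                     rng.uniform(1, 200, 600),
                     rng.uniform(150, 1500, 600)])
y = 1.2 * X[:, 0] - 0.01 * X[:, 1] - 0.001 * X[:, 2] + rng.normal(0, 0.3, 600)

# Standardize so the L1 penalty treats all predictors on the same scale.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)

# Predictors whose coefficients survive the shrinkage are the most important;
# coefficients driven exactly to zero are effectively deselected.
for name, coef in zip(["Magnitude", "Rrup", "Vs30"], lasso.coef_):
    print(f"{name:9s} coefficient: {coef:+.3f}")
```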
Procedia PDF Downloads 330185 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures
Authors: A. T. Al-Isawi, P. E. F. Collins
Abstract:
The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with those of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and of the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure's geometry due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage have been taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case when SSI is included in the scenario. The results are validated using the literature. Keywords: Abaqus Software, Finite Element Analysis, post-earthquake fire, seismic analysis, soil-structure interaction
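To illustrate why residual seismic damage lowers fire resistance, the sketch below combines the Eurocode 3 (EN 1993-1-2) yield-strength reduction factors for steel at elevated temperature with a hypothetical scalar damage factor. The paper itself quantifies damage through a full elasto-plastic ABAQUS model with SSI, so this is only a conceptual stand-in.

```python
import numpy as np

# Eurocode 3 (EN 1993-1-2) reduction factors k_y for steel yield strength
# at elevated temperature (k_y = 1.0 up to 400 °C).
temps = np.array([20, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200])
k_y   = np.array([1.0, 1.0, 0.78, 0.47, 0.23, 0.11, 0.06, 0.04, 0.02, 0.0])

def residual_yield(fy, theta, seismic_damage=0.0):
    """Yield strength [MPa] at temperature theta [°C] for a member that
    enters the fire with a fractional loss of capacity from the preceding
    earthquake. The scalar damage factor is an illustrative assumption,
    not the paper's mechanical-damage measure."""
    k = np.interp(theta, temps, k_y)   # linear interpolation between code values
    return fy * k * (1.0 - seismic_damage)

fy = 355.0                              # S355 steel, nominal yield strength
print(residual_yield(fy, 600))          # undamaged member at 600 °C
print(residual_yield(fy, 600, 0.15))    # same member with 15% seismic damage
```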
Procedia PDF Downloads 122184 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load
Authors: Ahmad Saadiq, Neeraj Sahu
Abstract:
Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interests. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various aspects, and different methods have been used for its estimation. Accordingly, several empirical equations have been proposed by experts. Though the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have limits, these equations can be used to estimate sediment load. In the present study, two black-box models, namely an SRC (Sediment Rating Curve) and an ANN (Artificial Neural Network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi river, and it carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include the discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for application in the ANN, the data are normalised first and then fed in various combinations to yield the sediment load. RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not carry an exact representation of the factors which cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727 respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable in the estimation of sediment load, and hence this model cannot be crowned the best model among the others based on this study. Cascade-forward back propagation produces results much closer to the neural network fitting model, and hence it is the best model based on the present study. Keywords: artificial neural network, root mean squared error, sediment, sediment rating curve
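A sediment rating curve of the classic power-law form Qs = a·Q^b can be fitted by least squares in log-log space and scored with the same RMSE/R² criteria. The sketch below uses invented numbers in place of the Jondhra records.

```python
import numpy as np

# Hypothetical paired observations standing in for the Jondhra station data.
discharge = np.array([120.0, 340.0, 560.0, 910.0, 1500.0, 2300.0])   # Q [m^3/s]
sed_load  = np.array([0.8, 3.1, 6.0, 12.5, 27.0, 55.0])              # Qs [kt/day]

# Classic sediment rating curve Qs = a * Q^b, fitted as a line in log-log space.
b, log_a = np.polyfit(np.log(discharge), np.log(sed_load), 1)
predicted = np.exp(log_a) * discharge ** b

rmse = np.sqrt(np.mean((sed_load - predicted) ** 2))
ss_res = np.sum((sed_load - predicted) ** 2)
ss_tot = np.sum((sed_load - sed_load.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"Qs = {np.exp(log_a):.4f} * Q^{b:.2f}, RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```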
Procedia PDF Downloads 325183 Developing Computational Thinking in Early Childhood Education
Authors: Kalliopi Kanaki, Michael Kalogiannakis
Abstract:
Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students' exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children's computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of second grade. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the purpose proposed, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research conducted that deserve attention are noted. Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses
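A plain-Python analogue of the class/object/attribute ideas children meet in PhysGramming is sketched below; PhysGramming's own hybrid visual/text syntax differs, so the matching-game class here is purely illustrative.

```python
# A plain-Python analogue of the kind of object children assemble in
# PhysGramming: a class, objects created from it, and their attributes.
class Card:
    """One card in a matching game about states of matter."""
    def __init__(self, label, state):
        self.label = label    # attribute: what the card shows
        self.state = state    # attribute: solid, liquid or gas

    def matches(self, other):
        return self.state == other.state

ice   = Card("ice cube", "solid")
rock  = Card("rock", "solid")
steam = Card("steam", "gas")
print(ice.matches(rock))    # True  -> the pair scores a point
print(ice.matches(steam))   # False
```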
Procedia PDF Downloads 120182 Technology in Commercial Law Enforcement: Tanzania, Canada, and Singapore Comparatively
Authors: Katarina Revocati Mteule
Abstract:
The background of this research arises from global demands for fair business opportunities. As one response to these demands, nations embarked on reforms of commercial laws. In the 1990s, Tanzania resorted to economic transformation through liberalization to attract more investment, which included reform of commercial law enforcement. This research scrutinizes the effectiveness of the reforms in Tanzania in comparison with Canada and Singapore, and the role of technology. The methodology used is doctrinal legal research mixed with international comparative legal research. It involves comparative analysis of library, online, and internet resources as well as case law and statutory law. Tanzania, Canada and Singapore are sampled as comparators based on their distinct levels of economic development. The criteria of analysis include the nature of the reforms, the type of technology, the technological infrastructure and the technical competence of human resources in each country. As the world progresses towards reforms in commercial laws, improvements in law, policy, and regulatory frameworks are paramount. Specifically, commercial laws are essential in contract enforcement and dispute resolution, and how they cope with modern technologies is a concern. Harnessing the best technology is necessary to cope with the modernity of world businesses. In line with this, Tanzania is improving its business environment, including law enforcement mechanisms that are supportive of investments. Reforms such as specialized commercial law enforcement, coupled with alternative dispute resolution such as arbitration, mediation, and reconciliation, are emphasized. Court technology is one of the reform tools given high priority. This research evaluates the progress and effectiveness of the reforms in commercial law towards a business-friendly environment in Tanzania in comparison with Canada and Singapore. The experience of Tanzania is compared with that of Canada and Singapore to see what each country should improve to enhance quick and fair enforcement of commercial law. The research proposes necessary global standards of procedure and national laws to offer a business-friendly environment and the use of appropriate technology. Solutions are proposed for tackling the challenges of delays in enforcing commercial law, such as case management, funding, legal and procedural hindrances, laxity among staff, and abuse of court process among litigants, all in line with modern technology. The research finds that the proper use of technology has managed to reduce case backlogs and the time taken to resolve a commercial dispute, to increase court integrity by minimizing the human contact in commercial law enforcement that may lead to solicitation of favors, and to save parties' time through online services. Among the three countries, each one faces a distinct challenge owing to levels of poverty and remoteness from online services. How solutions are found in one country is a lesson to another. To conclude, this paper suggests solutions for improving commercial law enforcement mechanisms in line with modern technology. The call for technological transformation is essential for the enforcement of commercial laws. Keywords: commercial law, enforcement, technology
Procedia PDF Downloads 58181 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module
Authors: Helen Potkin
Abstract:
This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students' investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and the feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us in reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed. Drawing on students' reflections on their experience of co-creation, alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The findings of the project could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom, and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible. Keywords: co-creation, collaboration, learning, participation, risk
Procedia PDF Downloads 120180 Prevalence and Risk Factors of Musculoskeletal Disorders among School Teachers in Mangalore: A Cross Sectional Study
Authors: Junaid Hamid Bhat
Abstract:
Background: Musculoskeletal disorders are one of the main causes of occupational illness. Mechanisms and factors such as repetitive work, physical effort and posture that raise the risk of musculoskeletal disorders would now appear to have been properly identified. Teachers' exposure to work-related musculoskeletal disorders appears to be insufficiently described in the literature. Little research has investigated the prevalence and risk factors of musculoskeletal disorders in the teaching profession. Very few studies are available in this regard, and there are no studies evident in India. Purpose: To determine the prevalence of musculoskeletal disorders and to identify and measure the association of the risk factors responsible for developing musculoskeletal disorders among school teachers. Methodology: An observational cross-sectional study was carried out. 500 school teachers from primary, middle, high and secondary schools were selected based on eligibility criteria. Signed consent was obtained, and a self-administered, validated questionnaire was used. Descriptive statistics were used to compute the mean and standard deviation, frequency and percentage to estimate the prevalence of musculoskeletal disorders among school teachers. Data analysis was done using SPSS version 16.0. Results: The results indicated a high pain prevalence (99.6%) among school teachers during the past 12 months. Neck pain (66.1%), low back pain (61.8%) and knee pain (32.0%) were the most prevalent musculoskeletal complaints of the subjects. The prevalence of shoulder pain was also found to be high among school teachers (25.9%). 52.0% of subjects reported pain as disabling in nature, causing sleep disturbance (44.8%), and pain was found to be associated with work (87.5%). A significant association was found between musculoskeletal disorders and sick leave/absenteeism. Conclusion: Work-related musculoskeletal disorders, particularly neck pain, low back pain, and knee pain, are highly prevalent, and risk factors are responsible for their development in school teachers. There is little awareness of musculoskeletal disorders among school teachers, due to workload and prolonged/static postures. Further research should concentrate on specific risk factors like repetitive movements, psychological stress, and ergonomic factors, should be carried out all over the country, and should follow school teachers carefully over a period of time. Also, an ergonomic investigation is needed to decrease work-related musculoskeletal disorder problems. Implication: Recall bias and self-reporting can be considered limitations. Also, cause and effect inferences cannot be ascertained. Based on these results, it is important to disseminate general recommendations for the prevention of work-related musculoskeletal disorders with regard to the suitability of furniture, equipment and work tools, environmental conditions, work organization and rest time for school teachers. School teachers in the early stages of their careers should try to adopt ergonomically favorable positions whilst performing their work, for a safe and healthy life later. Employers should be educated on practical aspects of prevention to reduce musculoskeletal disorders, since changes in the workplace, work organization and physical/recreational activities are required. Keywords: work-related musculoskeletal disorders, school teachers, risk factors, funding, medical and health sciences
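The descriptive and associative analysis described above (run in SPSS 16.0 in the study) can be sketched in code as follows; the tiny questionnaire extract and the chi-square test on the pain/sick-leave table are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical questionnaire extract; the study analysed n = 500 in SPSS 16.0.
df = pd.DataFrame({
    "neck_pain":  [1, 1, 0, 1, 0, 1, 1, 0],   # 1 = pain reported in past 12 months
    "sick_leave": [1, 0, 0, 1, 0, 1, 0, 0],   # 1 = took sick leave / was absent
})

prevalence = df["neck_pain"].mean() * 100                 # % reporting neck pain
table = pd.crosstab(df["neck_pain"], df["sick_leave"])    # 2x2 contingency table
chi2, p, dof, _ = chi2_contingency(table)                 # test of association
print(f"neck pain prevalence {prevalence:.1f}%, association p = {p:.3f}")
```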
Procedia PDF Downloads 277179 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard min-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data. Keywords: logistics center, product-workstation assignment, maximum performance, load balancing, fast algorithm
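The flavor of the second-echelon min-max objective can be conveyed by a greedy longest-processing-time heuristic that always assigns the next product to the least-loaded workstation. This is a simplified stand-in that omits the flow racks, capacities and budget constraints of the actual model.

```python
import heapq

def balance_products(loads, n_stations):
    """Greedy longest-processing-time heuristic: assign each product to the
    currently least-loaded workstation so the maximum workload stays small.
    A simplified stand-in for the paper's min-max assignment problem."""
    stations = [(0.0, s, []) for s in range(n_stations)]  # (load, id, products)
    heapq.heapify(stations)
    for product, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        total, sid, items = heapq.heappop(stations)       # least-loaded station
        items.append(product)
        heapq.heappush(stations, (total + load, sid, items))
    return sorted(stations, key=lambda t: t[1])

# Hypothetical picking workload per product:
demand = {"A": 40, "B": 35, "C": 30, "D": 20, "E": 15, "F": 10}
for load, sid, items in balance_products(demand, 3):
    print(f"station {sid}: load {load}, products {items}")
```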
Procedia PDF Downloads 228178 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated generally and descriptively. Research outsourced to specialists is expensive and time-consuming, and the focus is mostly on examining a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the facilities built are then not used as expected. Location methods are also widely taken up as a research topic in the scientific literature. The mathematical models built often do not treat the problem comprehensively, e.g., assuming that the city is linear, developed along one important communications corridor. This paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g., an urban planner or official, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce P&R. Analyses of existing facilities are also shown and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model that was built and described in more detail in the authors' earlier paper. The results of the analyses are compared with documents on P&R facility locations commissioned by the city and with the opinions of existing facilities' users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to show alternative locations of P&R facilities. The studies performed show that the method has been confirmed. The method can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are no worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies the professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform. Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location
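A toy Mamdani-style inference for scoring one candidate site is sketched below; the two inputs, the triangular membership functions and the single rule are illustrative assumptions, since the authors' model (described in their earlier paper) uses a richer set of criteria.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def pr_site_score(dist_to_center_km, access_time_min):
    """Toy two-input fuzzy score for a P&R candidate site (0-100)."""
    near_ring   = trimf(dist_to_center_km, 3, 6, 10)   # sits in a good P&R belt
    fast_access = trimf(access_time_min, 0, 5, 15)     # quick transfer to transit
    # Rule: IF site is on the ring AND access is fast THEN location is good.
    good = min(near_ring, fast_access)                 # min acts as fuzzy AND
    # Scale the rule activation to a crisp 0-100 score (a stand-in for
    # a full defuzzification step).
    return 100.0 * good

print(pr_site_score(5.5, 4.0))    # promising site
print(pr_site_score(1.0, 25.0))   # too central, slow transfer -> score 0
```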
Procedia PDF Downloads 325177 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The current subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found, or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. Here, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance. Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
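The core difficulty with the quadratic fairness penalty is that label costs are no longer additive along a path, so the usual domination rule is unsound. The sketch below shows a label structure and one conservative domination condition that remains valid (discard a label only when its resources match exactly); the paper's own condition is sharper, and the target fly-hours value here is an assumption.

```python
from dataclasses import dataclass, field

EXPECTED_FLY_HOURS = 80.0   # fairness target over the rostering period (assumed)

@dataclass
class Label:
    node: int           # last pairing reached in the crew member's graph
    fly_hours: float    # accumulated fly-hours along the partial roster
    overnights: int     # accumulated overnight duties
    base_cost: float    # additive part of the cost (unassigned penalties, etc.)
    path: tuple = field(default_factory=tuple)

    def final_cost(self):
        # The quadratic fairness penalty makes the objective non-additive:
        # it can only be evaluated once the roster is complete.
        return self.base_cost + (self.fly_hours - EXPECTED_FLY_HOURS) ** 2

def dominates(a: Label, b: Label) -> bool:
    """Conservative domination for the non-additive case: a may discard b
    only when a is no worse in additive cost AND consumes exactly the same
    resources, so every completion of a costs no more than the same
    completion of b. The paper defines a sharper condition; this safe
    version is only an illustration."""
    return (a.node == b.node
            and a.base_cost <= b.base_cost
            and a.fly_hours == b.fly_hours
            and a.overnights == b.overnights)
```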
Procedia PDF Downloads 146176 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose
Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini
Abstract:
Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, the coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step involving the rejection of the formed species by means of a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions, with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in the electrolyte concentration screens the electrostatic interaction between ions and the membrane fixed charge, which decreases their rejection. It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient in slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture. Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration
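The figure of merit used throughout such comparisons is the observed rejection R = 1 − Cp/Cf; a minimal sketch follows, with invented concentrations rather than the paper's measurements.

```python
def observed_rejection(c_feed, c_permeate):
    """Observed rejection R = 1 - Cp/Cf, the figure of merit used to
    compare filtration runs with and without added polymer."""
    return 1.0 - c_permeate / c_feed

# Hypothetical Ni(II) concentrations [ppm], not the paper's measured data:
runs = {
    "no polymer, low salt":   (10.0, 2.0),
    "no polymer, 0.2 M NaCl": (10.0, 9.5),
    "chitosan added":         (10.0, 0.6),
    "CMC added":              (10.0, 0.9),
}
for label, (cf, cp) in runs.items():
    print(f"{label:24s} R = {100 * observed_rejection(cf, cp):.0f}%")
```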
Procedia PDF Downloads 163