Search results for: optimal cluster scheme at fixed-fund
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5107

907 Fuzzy-Genetic Algorithm Multi-Objective Optimization Methodology for Cylindrical Stiffened Tanks Conceptual Design

Authors: H. Naseh, M. Mirshams, M. Mirdamadian, H. R. Fazeley

Abstract:

This paper presents an extension of a fuzzy-genetic algorithm multi-objective optimization methodology that can effectively be used to find the overall satisfaction of the objective functions (by selecting the design variables) in the early stages of the design process. The coupling of objective functions through shared design variables in an engineering design process creates difficulties in design optimization problems. In many cases, a decision on a design variable conflicts across more than one discipline of the system design. In space launch system conceptual design, decisions on some design variables (e.g., the oxidizer-to-fuel mass flow rate, O/F) made in the early stages of the design process affect both the objective of the liquid propellant engine (specific impulse) and that of the tanks (structural weight). The primary application of this methodology is therefore the design of a liquid propellant engine with maximum specific impulse and a cylindrical stiffened tank with minimum weight. To this end, the design problem is formulated as a fuzzy rule set based on the designer's expert knowledge, with a holistic approach. The independent design variables in this model are the oxidizer-to-fuel mass flow rate, the thickness of the stringers, the thickness of the rings, and the shell thickness. To handle these problems, a fuzzy-genetic algorithm multi-objective optimization methodology is developed based on the Pareto optimal set. Finally, the methodology is applied to one stage of a space launch system to illustrate its accuracy and efficiency.
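The core of such a methodology is Pareto dominance over the two conflicting objectives (maximize specific impulse, minimize tank weight). A minimal sketch of the dominance filter, using invented stand-in formulas for the engine and tank models (the coefficients and variable ranges are illustrative, not the authors' models):

```python
import random

# Toy stand-ins for the two discipline models (hypothetical formulas,
# not the paper's actual engine/tank models).
def specific_impulse(of_ratio):
    # Peaks near a hypothetical optimal mixture ratio of 2.3
    return 350 - 40 * (of_ratio - 2.3) ** 2

def tank_weight(t_shell, t_stringer, t_ring):
    # Weight grows linearly with each thickness (illustrative only)
    return 500 + 900 * t_shell + 400 * t_stringer + 300 * t_ring

def dominates(a, b):
    """a dominates b when it is no worse in both objectives and strictly
    better in at least one. Objectives: (Isp to maximize, weight to minimize)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    # Keep only points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points)]

random.seed(0)
# Random candidate designs: (O/F, shell, stringer, ring thickness)
designs = [(random.uniform(1.5, 3.0), random.uniform(0.002, 0.01),
            random.uniform(0.002, 0.01), random.uniform(0.002, 0.01))
           for _ in range(200)]
evaluated = [(specific_impulse(of), tank_weight(ts, tstr, tr))
             for of, ts, tstr, tr in designs]
front = pareto_front(evaluated)
print(len(front), "non-dominated designs")
```

In the paper's methodology a genetic algorithm evolves the candidate set and fuzzy rules encode designer preference over the front; the filter above is only the dominance step.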

Keywords: cylindrical stiffened tanks, multi-objective, genetic algorithm, fuzzy approach

Procedia PDF Downloads 651
906 Prevalence and Risk Factors of Cardiovascular Diseases among Bangladeshi Adults: Findings from a Cross Sectional Study

Authors: Fouzia Khanam, Belal Hossain, Kaosar Afsana, Mahfuzar Rahman

Abstract:

Aim: Although cardiovascular disease (CVD) has already been recognized as a major cause of death in developed countries, its prevalence is rising in developing countries as well, creating a challenge for the health sector. Bangladesh has experienced an epidemiological transition from communicable to non-communicable diseases over the last few decades, so the rising prevalence of CVD and its risk factors pose a major problem for the country. We aimed to examine the prevalence of CVD and the socioeconomic and lifestyle factors related to it using a population-based survey. Methods: The data for this study were collected as part of a large-scale cross-sectional study conducted to explore the overall health status of children, mothers, and senior citizens of Bangladesh. A multistage cluster random sampling procedure was applied, with unions as clusters and households as the primary sampling unit, to select a total of 11,428 households for the base survey. The present analysis encompassed 12,338 respondents aged ≥ 35 years, selected from both rural areas and urban slums of the country. Socioeconomic, demographic, and lifestyle information was obtained from each individual through a face-to-face interview recorded on the ODK platform, and height, weight, blood pressure, and glycosuria were measured using standardized methods. Chi-square tests and univariate and multivariate modified Poisson regression models were run in STATA software (version 13.0). Results: Overall, the prevalence of CVD was 4.51%; 1.78% of respondents had had a stroke and 3.17% suffered from heart disease. Men had a higher prevalence of stroke (2.20%) than women (1.37%). Notably, thirty percent of respondents had high blood pressure, 5% had diabetes, and more than half of the population was pre-hypertensive. Additionally, 20% were overweight, 77% smoked or consumed smokeless tobacco, and 28% of respondents were physically inactive. Eighty-two percent of respondents took extra salt with their food, and 29% were sleep-deprived. Furthermore, the prevalence of CVD risk factors varied by gender. Women had a higher prevalence of overweight, obesity, and diabetes; they were also less physically active than men and took more extra salt. Smoking was lower among women than men, and women slept less than their counterparts. After adjusting for confounders in the modified Poisson regression model, age, gender, occupation, wealth quintile, BMI, extra salt intake, daily sleep, tiredness, diabetes, and hypertension remained risk factors for CVD. Conclusion: The prevalence of CVD is substantial in Bangladesh, and there is evidence of a rising trend in its risk factors, such as hypertension and diabetes, especially in the older population, women, and high-income groups. Therefore, in this ongoing epidemiological transition, immediate public health intervention is warranted to address the overwhelming CVD risk.
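The modified Poisson regression used here estimates prevalence ratios rather than odds ratios. With a single binary covariate, the fitted exp(β) reduces to a simple ratio of prevalences, which can be sketched on simulated data (all prevalences below are invented for illustration, not the study's figures):

```python
import random

random.seed(42)

# Hypothetical simulated sample: exposure = hypertension, outcome = CVD.
# Assumed true prevalences: 3% in unexposed, 9% in exposed (made up).
n = 20000
data = []
for _ in range(n):
    exposed = random.random() < 0.30
    p = 0.09 if exposed else 0.03
    data.append((exposed, random.random() < p))

# With one binary covariate, the modified Poisson model's exp(beta)
# equals this prevalence ratio; robust (sandwich) standard errors are
# what make the Poisson model valid for binary outcomes.
p1 = sum(y for x, y in data if x) / sum(1 for x, y in data if x)
p0 = sum(y for x, y in data if not x) / sum(1 for x, y in data if not x)
pr = p1 / p0
print(f"prevalence ratio = {pr:.2f}")
```

In the actual analysis the model additionally adjusts for age, gender, wealth quintile, and the other covariates listed above.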

Keywords: cardiovascular diseases, diabetes, hypertension, stroke

Procedia PDF Downloads 378
905 An Experimental Study on the Temperature Reduction of Exhaust Gas during Snorkeling of a Submarine

Authors: Seok-Tae Yoon, Jae-Yeong Choi, Gyu-Mok Jeon, Yong-Jin Cho, Jong-Chun Park

Abstract:

Conventional submarines obtain propulsive force from an electric propulsion system consisting of a diesel generator, battery, motor, and propeller. While submerged, the submarine uses the electric power stored in the battery; once a certain amount of power has been consumed, the submarine rises near the sea surface and recharges the battery using the diesel generator. Sailing while charging in this way is called snorkeling, and the high-temperature exhaust gas from the diesel generator forms a heat distribution on the sea surface. This heat distribution can be detected by weapon systems equipped with thermal detectors and is a main cause of reduced submarine survivability. In this paper, an experimental study was carried out to establish the optimal operating conditions of a submarine for reducing the infrared signature radiated from the sea surface. For this purpose, a hot-gas generating system and a round acrylic water tank with an adjustable water level were built. The control variables of the experiment were the mass flow rate, the temperature difference between the water and the hot gas in the tank, and the depth difference between the air outlet and the water surface. The instrumentation comprised T-type thermocouples to measure the temperature of the air released at the water surface and a thermography system to measure the thermal energy distribution on the surface. From the results, we analyzed the correlation between the final temperature of the gas released from a submarine's exhaust pipe exit and the snorkeling depth, and we present reasonable operating conditions for reducing the infrared signature of a submarine.

Keywords: experiment study, flow rate, infrared signature, snorkeling, thermography

Procedia PDF Downloads 348
904 Optimizing Bridge Deck Construction: A Deep Neural Network Approach for Limiting Exterior Girder Rotation

Authors: Li Hui, Riyadh Hindi

Abstract:

In the United States, bridge construction often employs overhang brackets to support the deck overhang, the weight of fresh concrete, and loads from construction equipment. This approach, however, can impose significant torsional moments on the exterior girders, potentially causing excessive girder rotation. Such rotations can result in various safety and maintenance issues, including thinning of the deck, reduced concrete cover, and cracking during service. Traditionally, these issues are addressed by installing temporary lateral bracing systems and conducting comprehensive torsional analyses through detailed finite element modeling of the deck overhang construction. This process is intricate and time-intensive, and the spacing between temporary lateral bracing systems usually relies on field engineers' expertise. In this study, a deep neural network model is introduced to limit exterior girder rotation during bridge deck construction by predicting the optimal spacing between temporary bracing systems. To train this model, over 10,000 finite element models were generated in SAP2000, incorporating varying parameters such as girder dimensions, span length, and the types and spacing of lateral bracing systems. The findings demonstrate that the deep neural network provides an effective and efficient alternative for limiting exterior girder rotation during bridge deck construction. By reducing dependence on extensive finite element analyses, this approach stands out as a significant advancement in improving safety and maintenance effectiveness in the construction of bridge decks.
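The surrogate-model idea can be sketched as a small feed-forward regressor trained on synthetic stand-ins for the SAP2000 results. Everything below is invented for illustration: the three features, the target formula, and the network size are not the paper's actual inputs or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set standing in for the finite element runs:
# features = [girder depth (m), span length (m), bracing stiffness index],
# target = allowable bracing spacing (m) from a made-up linear formula.
X = rng.uniform([1.0, 20.0, 0.5], [3.0, 60.0, 2.0], size=(512, 3))
y = (0.8 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 2]).reshape(-1, 1)

# Normalize inputs for stable training.
Xn = (X - X.mean(0)) / X.std(0)

# One hidden layer, trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
init_loss = (((np.tanh(Xn @ W1 + b1) @ W2 + b2) - y) ** 2).mean()
lr = 0.01
for _ in range(3000):
    h = np.tanh(Xn @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = (err ** 2).mean()
    # Backpropagation of the mean-squared-error gradient
    g_pred = 2 * err / len(y)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = Xn.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
print(f"MSE: {init_loss:.3f} -> {loss:.3f}")
```

Once trained, such a model answers a spacing query in microseconds, which is the speed advantage over re-running a finite element analysis per design.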

Keywords: bridge deck construction, exterior girder rotation, deep learning, finite element analysis

Procedia PDF Downloads 58
903 The Complex Relationship Between IQ and Attention Deficit Hyperactivity Disorder Symptoms: Insights From Behaviors, Cognition, and Brain in 5,138 Children With Attention Deficit Hyperactivity Disorder

Authors: Ningning Liu, Gaoding Jia, Yinshan Wang, Haimei Li, Xinian Zuo, Yufeng Wang, Lu Liu, Qiujin Qian

Abstract:

Background: There has been speculation that a high IQ may not necessarily protect against attention deficit hyperactivity disorder (ADHD) and that there may be a U-shaped correlation between IQ and ADHD symptoms. However, this speculation has not yet been validated in an ADHD population. Method: We conducted a study of 5,138 children professionally diagnosed with ADHD and spanning a wide range of IQ levels. General linear models were used to determine the optimal model relating IQ to the core ADHD symptoms, with sex and age as covariates. The ADHD symptom measures were the total score (TO), inattention (IA), and hyperactivity/impulsivity (HI). The Wechsler Intelligence Scale was used to assess IQ [Full-Scale IQ (FSIQ), Verbal IQ (VIQ), and Performance IQ (PIQ)]. Furthermore, we examined the correlations between IQ and executive function [Behavior Rating Inventory of Executive Function (BRIEF)] and between IQ and brain surface area, to determine whether the associations between IQ and ADHD symptoms are reflected in executive functions and brain structure. Results: Consistent with previous research, FSIQ and VIQ both showed a linear negative correlation with the TO and IA scores. However, PIQ showed an inverted U-shaped relationship with the TO and HI scores, with 103 as the peak point. These findings were partially reflected in the relationships between IQ and executive functions and between IQ and brain surface area. Conclusion: In sum, the relationship between IQ and ADHD symptoms is not straightforward. Our study confirms a long-standing academic hypothesis in finding that PIQ exhibits an inverted U-shaped relationship with ADHD symptoms. The study enhances our understanding of the symptoms and behaviors of ADHD across IQ levels and provides some evidence for targeted clinical intervention.
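An inverted-U relationship of this kind is typically detected by adding a quadratic IQ term to the model; the peak then falls at the vertex -b/(2a) of the fitted parabola. A sketch on simulated data (the curvature, noise level, and sample below are invented; only the peak location of 103 comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: PIQ vs. a hyperactivity score with an inverted-U
# shape peaking near PIQ = 103, plus Gaussian noise.
piq = rng.uniform(70, 140, 500)
hi_score = -0.01 * (piq - 103) ** 2 + 20 + rng.normal(0, 0.5, 500)

# Fit symptoms ~ a*PIQ^2 + b*PIQ + c; the vertex sits at -b / (2a).
a, b, c = np.polyfit(piq, hi_score, 2)
peak = -b / (2 * a)
print(f"estimated peak PIQ = {peak:.1f}")
```

In the study itself the quadratic term is fitted within a general linear model with sex and age as covariates; the vertex calculation is the same.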

Keywords: ADHD, IQ, executive function, brain imaging

Procedia PDF Downloads 59
902 Adaptive Design of Large Prefabricated Concrete Panels Collective Housing

Authors: Daniel M. Muntean, Viorel Ungureanu

Abstract:

More than half of the urban population in Romania today lives in residential buildings made of large prefabricated reinforced concrete panels. Since their initial design dates from the 1960s, these housing units are now technically and functionally outdated, consuming large amounts of energy for heating, cooling, ventilation, and lighting while failing to meet the needs of a contemporary lifestyle. Due to their widespread use, a system that improves their energy efficiency would have a real impact, not only on the energy consumption of the residential sector but also on the quality of life these buildings offer. Furthermore, with the transition of today's power grid to a "smart grid", buildings could become an active element of future electricity networks by contributing to micro-generation and energy storage. One of the most pressing issues today is to find locally adapted strategies that satisfy the EU 20-20-20 policy criteria and offer sustainable, innovative solutions for the cost-optimal energy performance of buildings, adapted to the existing local market. This paper presents a possible adaptive design scenario for the sustainable retrofitting of these housing units. The apartments are transformed to meet current living requirements, and additional extensions are placed on top of the building, replacing the unused roof space and acting not only as housing units but also as active solar energy collection systems. An adaptive building envelope ensures overall air-tightness, and an elevator system is introduced to facilitate access to the upper levels.

Keywords: adaptive building, energy efficiency, retrofitting, residential buildings, smart grid

Procedia PDF Downloads 293
901 The Nexus of Federalism and Economic Development: A Politico-Economic Analysis of Balochistan, Pakistan

Authors: Rameesha Javaid

Abstract:

Balochistan, Pakistan's largest province by land area, is named after and dominated by the Baloch, who make up 55% of its population and who, like the Kurds of the Middle East, have had a difficult anti-center history; the region reluctantly acceded to Pakistan in 1947. Attaining the status of a province only two decades after accession, it has lagged behind the other three federating units in social development and economic growth. Under geostrategic and security considerations, the province has seen the least financial autonomy and administrative decentralization in both autocratic and democratic dispensations. Significant corrections have recently been made in the policy framework by changing the formula for the intra-provincial National Finance Award, curtailing the number of subjects under federal control, and reactivating the Council of Common Interests. Yet policymaking remains overwhelmingly bureaucratic under weak parliamentary oversight. The provincial coalition governments are unwieldy and directionless, and the government machinery has far less than the optimal capability, character, integrity, will, and opportunity to perform. Decentralization further loses its semblance in the absence of local governments for long intervals and under the hold of hereditary tribal chiefs. Increased allocations have failed to make an impact in an environment with the highest per capita cost of service delivery, owing to long distances and scattered settlements. Decentralization, the basic ingredient of federalism, has remained mortgaged to geostrategic factors, internal security perceptions, autocratic and individualistic styles of government, bureaucratic policymaking structures, bad governance, non-existent local governments, and feudalistic tribal lords. This suboptimal federalism explains the present underdevelopment in Balochistan and will mark the milestones of its future.

Keywords: Balochistan, economic development, federalism, political economy

Procedia PDF Downloads 305
900 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of points on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically takes far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN represents the material or molecule as a graph, which is known to improve accuracy, as in SchNet and MEGNet. The graph incorporates information on the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work: 1,000 simulations of quinone molecules (up to 24 atoms) at five different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.
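The Δ-ML strategy described above can be sketched with a linear correction model in place of the paper's GCN. The descriptors, energies, and correction below are all synthetic stand-ins; the point is only the workflow: fit the low-to-high *difference* on a small high-fidelity subset, then predict high fidelity everywhere as low fidelity plus correction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: low-fidelity energies (think DFT) are cheap and
# plentiful; high-fidelity values (think coupled cluster) exist for only
# a small subset of molecules.
n_low, n_high = 1000, 50
feats = rng.uniform(-1, 1, (n_low, 4))            # toy molecular descriptors
e_low = feats @ np.array([1.0, -0.5, 0.3, 0.2])    # cheap surrogate energy
# Assumed ground truth: high fidelity = low fidelity + smooth correction
e_high_true = e_low + 0.4 * feats[:, 0] - 0.1 * feats[:, 1] + 0.05

# Delta-ML: fit only the correction, on the small high-fidelity subset.
idx = rng.choice(n_low, n_high, replace=False)
A = np.column_stack([feats[idx], np.ones(n_high)])
delta = e_high_true[idx] - e_low[idx]
coef, *_ = np.linalg.lstsq(A, delta, rcond=None)

# Predict high fidelity everywhere as low fidelity + learned correction.
pred = e_low + np.column_stack([feats, np.ones(n_low)]) @ coef
rmse = float(np.sqrt(((pred - e_high_true) ** 2).mean()))
print(f"RMSE vs. true high fidelity = {rmse:.4f}")
```

The correction here is exactly linear, so 50 high-fidelity points recover it almost perfectly; real corrections are nonlinear, which is why the paper uses a graph network rather than least squares.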

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 33
899 A Discussion on Urban Planning Methods after Globalization within the Context of Anticipatory Systems

Authors: Ceylan Sozer, Ece Ceylan Baba

Abstract:

The reforms and changes that began with industrialization in cities and continued with globalization in the 1980s created many changes in urban environments. City centers deserted during industrialization became crowded again with globalization and turned into hubs of technology, commerce, and social activity. While the immediate and intense alterations were planned around rigorous visions in developed countries, several urban areas where these processes were underestimated and no precautions were taken faced irreversible situations. When the effects of globalization on cities are examined, it is seen that some cities have anticipatory system plans for future problems. Cities such as New York, London, and Tokyo have planned to resolve probable future problems through systematic schemes intended to decrease the possible side effects of globalization. The decisions in urban planning and their applications are the main determinants of sustainability and livability in such mega-cities. This article examines the effects of globalization on urban planning through three mega-cities and their planning applications. When the urban plans of the three mega-cities are investigated, it is seen that the plans are generated in the light of past experiences and predictions of a certain future. In urban planning, the past and present experiences of a city should be examined, and future projections should then be made systematically together with current world dynamics. In this study, methods used in urban planning are discussed, and the 'anticipatory system' model is explained and related to global urban planning. 'Anticipation' is the phenomenon of creating foresights and predictions about the future by combining past, present, and future within an action plan.
The main distinctive feature that separates anticipatory systems from other systems is this combination of past, present, and future concluding in an act. Urban plans that bring together various parameters and interactions can be described as 'live' and have systematic integrity. Urban planning within an anticipatory system can be alive and can foresee certain 'side effects' during the design process. After globalization, cities have become more complex and should be designed within an anticipatory system model; such cities can be more livable and can sustain better urban conditions today and in the future. In this study, the urban planning of Istanbul is analyzed in comparison with the city plans of New York, Tokyo, and London in terms of anticipatory system models, and the lack of such a system in Istanbul, along with its side effects, is discussed. When past and present actions in urban planning are approached through an anticipatory system, they can give more accurate and sustainable results in the future.

Keywords: globalization, urban planning, anticipatory system, New York, London, Tokyo, Istanbul

Procedia PDF Downloads 141
898 The Effectiveness of Multi-Media Experiential Training Programme on Advance Care Planning in Enhancing Acute Care Nurses’ Knowledge and Confidence in Advance Care Planning Discussion: An Interim Report

Authors: Carmen W. H. Chan, Helen Y. L. Chan, Kai Chow Choi, Ka Ming Chow, Cecilia W. M. Kwan, Nancy H. Y. Ng, Jackie Robinson

Abstract:

Introduction: In Hong Kong, a significant number of deaths occur in acute care wards, which requires nurses in these settings to provide end-of-life care and to lead the implementation of advance care planning (ACP). However, nurses in these settings in fact have very low levels of involvement in ACP discussions because of limited training in ACP conversations. Objective: This study aims to assess the impact of a multi-media experiential ACP (MEACP) training programme, guided by the experiential learning model and the theory of planned behaviour, on nurses' knowledge of and confidence in assisting patients with ACP. Methodology: The study uses a cluster randomized controlled trial with a 12-week follow-up. Eligible nurses working in acute care hospital wards are randomly assigned at the ward level, in a 1:1 ratio, to either the control group (no ACP education) or the intervention group (the 4-week MEACP training programme). The programme comprises training through a webpage and mobile application, together with a face-to-face workshop featuring enhanced lectures and role play, based on the theory of planned behaviour and Kolb's experiential learning model. Questionnaires assessed nurses' knowledge (a 10-item true/false questionnaire) and level of confidence (a five-point Likert scale) in ACP at baseline (T0), four weeks after the baseline assessment (T1), and 12 weeks after T1 (T2). In this interim report, data analysis was mainly descriptive. Results: The interim report covers the preliminary results of 165 nurses at T0 (control: 74; intervention: 91) over a 5-month period, with 69 nurses in the control group completing the 4-week follow-up and 65 nurses in the intervention group completing the 4-week MEACP training programme at T1. The preliminary attrition rates are 6.8% and 28.6% for the control and intervention groups, respectively, as some nurses did not complete the full set of online modules.
At baseline, the two groups were generally homogeneous in years of nursing practice, weekly working hours, job title, and level of education, as well as in ACP knowledge and confidence levels. The proportion of nurses who answered all ten knowledge questions correctly increased from 13.8% (T0) to 66.2% (T1) in the intervention group and from 13% (T0) to 20.3% (T1) in the control group. Nurses in the intervention group answered an average of 7.57 and 9.43 questions correctly at T0 and T1, respectively. They showed a greater improvement in the knowledge assessment from T0 to T1 than their counterparts in the control group (mean difference of change scores, Δ = 1.22) and also exhibited a greater gain in confidence at T1 (Δ = 0.91). T2 data are not yet available. Conclusion: The prevalence of nurses engaging in ACP and their level of knowledge about ACP in Hong Kong are low. The MEACP training programme can equip nurses with more knowledge about ACP and increase their confidence in conducting it.
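The Δ statistic reported above is a difference of change scores: the intervention group's T0-to-T1 gain minus the control group's gain. A quick check of the arithmetic (the intervention means come from the abstract; the control gain of 0.64 is the value implied by Δ = 1.22, not directly reported):

```python
# Difference of change scores for the 10-item knowledge test.
intervention_t0, intervention_t1 = 7.57, 9.43  # mean correct answers (abstract)
control_gain = 0.64                             # implied control-group gain (hypothetical)
delta = (intervention_t1 - intervention_t0) - control_gain
print(f"delta = {delta:.2f}")
```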

Keywords: advance directive, advance care planning, confidence, knowledge, multi-media experiential, randomised control trial

Procedia PDF Downloads 74
897 Performance Evaluation and Kinetics of Artocarpus heterophyllus Seed for the Purification of Paint Industrial Wastewater by Coagulation-Flocculation Process

Authors: Ifeoma Maryjane Iloamaeke, Kelvin Obazie, Mmesoma Offornze, Chiamaka Marysilvia Ifeaghalu, Cecilia Aduaka, Ugomma Chibuzo Onyeije, Claudine Ifunanaya Ogu, Ngozi Anastesia Okonkwo

Abstract:

This work investigated the effects of pH, settling time, and coagulant dosage on the removal of color, turbidity, and heavy metals from paint industrial wastewater using the seed of Artocarpus heterophyllus (AH) in a coagulation-flocculation process. The paint effluent was characterized physicochemically, while the AH coagulant was characterized instrumentally by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD). A jar test experiment was used for the coagulation-flocculation process. The results showed that the paint effluent was polluted with color, turbidity (36,000 NTU), mercury (1.392 mg/L), lead (0.252 mg/L), arsenic (1.236 mg/L), TSS (63.40 mg/L), and COD (121.70 mg/L). The maximum color removal efficiency was 94.33% at a dosage of 0.2 g/L and pH 2 at a constant time of 50 min, and 74.67% at constant pH 2 with a coagulant dosage of 0.2 g/L and 50 min. The highest turbidity removal efficiency was 99.94% at 0.2 g/L and 50 min at constant pH 2, and 96.66% at pH 2 and 0.2 g/L at a constant time of 50 min. A mercury removal efficiency of 99.29% was achieved at the optimal condition of 0.8 g/L coagulant dosage and pH 8 at a constant time of 50 min, and 99.57% at a coagulant dosage of 0.8 g/L and a time of 50 min at constant pH 8. The highest lead removal efficiency was 99.76% at a coagulant dosage of 10 g/L and a time of 40 min at constant pH 10, and 96.53% at pH 10 and a coagulant dosage of 10 g/L at a constant time of 40 min. For arsenic, the removal efficiency was 75.24% at a 0.8 g/L coagulant dosage, a time of 40 min, and a constant pH of 8. XRD imaging showed that the Artocarpus heterophyllus coagulant was crystalline before treatment and amorphous after. The SEM and FTIR results for the AH coagulant and sludge suggest changes in surface morphology and functional groups between before and after treatment. The reaction kinetics were best modeled as second order.
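A second-order kinetic model linearizes as 1/C_t = 1/C_0 + k*t, so the rate constant k is the slope of 1/C against time. A sketch of that fit on made-up concentration data (the concentrations and k below are illustrative, not the paper's measurements):

```python
# Second-order coagulation kinetics: 1/C_t = 1/C_0 + k*t, so plotting
# 1/C against t gives a straight line whose slope is the rate constant k.
# Hypothetical data generated from an assumed k for illustration.
times = [0, 10, 20, 30, 40, 50]          # min
c0, k_true = 120.0, 0.002                # mg/L and L/(mg*min), invented
conc = [c0 / (1 + k_true * c0 * t) for t in times]

# Least-squares slope of 1/C versus t recovers k.
inv_c = [1.0 / c for c in conc]
n = len(times)
t_bar = sum(times) / n
y_bar = sum(inv_c) / n
k_est = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times, inv_c))
         / sum((t - t_bar) ** 2 for t in times))
print(f"estimated k = {k_est:.4f}")
```

In practice the model choice (first vs. second order) is made by comparing the linearity (R²) of the corresponding plots, which is how the study concluded the kinetics were second order.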

Keywords: Artocarpus heterophyllus, coagulation-flocculation, coagulant dosages, settling time, paint effluent

Procedia PDF Downloads 89
896 Auditory and Visual Perceptual Category Learning in Adults with ADHD: Implications for Learning Systems and Domain-General Factors

Authors: Yafit Gabay

Abstract:

Attention deficit hyperactivity disorder (ADHD) has been associated with suboptimal functioning in both the striatum and the prefrontal cortex. Such abnormalities may impede the acquisition of perceptual categories, which are important for fundamental abilities such as object recognition and speech perception. Indeed, prior research supports this possibility, demonstrating that children with ADHD reach visual category learning performance similar to that of their neurotypical peers but use suboptimal learning strategies. However, much less is known about category learning processes in the auditory domain, or among adults with ADHD, in whom prefrontal functions are more mature than in children. Here, we investigated auditory and visual perceptual category learning in adults with ADHD and neurotypical individuals. Specifically, we examined the learning of rule-based categories – presumed to be optimally learned by frontal-cortex-mediated hypothesis testing – and information-integration categories – hypothesized to be optimally learned by a striatally mediated reinforcement learning system. Consistent with the striatal and prefrontal cortical impairments observed in ADHD, our results show that, across sensory modalities, both rule-based and information-integration category learning are impaired in adults with ADHD. Computational modeling analyses revealed that individuals with ADHD were slower than neurotypicals to shift to optimal strategies, regardless of category type or modality. Taken together, these results suggest that both explicit, frontally mediated and implicit, striatally mediated category learning are impaired in ADHD, with impairments extending across sensory modalities and likely arising from domain-general mechanisms.

Keywords: ADHD, category learning, modality, computational modeling

Procedia PDF Downloads 40
895 Challenges of Knowledge Translation for Pediatric Rehabilitation Technology

Authors: Patrice L. Weiss, Barbara Mazer, Tal Krasovsky, Naomi Gefen

Abstract:

Knowledge translation (KT) involves the process of applying the most promising research findings to practical settings, ensuring that new technological discoveries enhance healthcare accessibility, effectiveness, and accountability. This perspective paper aims to discuss and provide examples of how the KT process can be implemented during a time of rapid advancement in rehabilitation technologies, which have the potential to greatly influence pediatric healthcare. The analysis is grounded in a comprehensive systematic review of literature, where key studies from the past 34 years were carefully interpreted by four expert researchers in scientific and clinical fields. This review revealed both theoretical and practical insights into the factors that either facilitate or impede the successful implementation of new rehabilitation technologies. By utilizing the Knowledge-to-Action cycle, which encompasses the knowledge creation funnel and the action cycle, we demonstrated its application in integrating advanced technologies into clinical practice and guiding healthcare policy adjustments. We highlighted three successful technology applications: powered mobility, head support systems, and telerehabilitation. Moreover, we investigated emerging technologies, such as brain-computer interfaces and robotic assistive devices, which face challenges related to cost, durability, and usability. Recommendations include prioritizing early and ongoing design collaborations, transitioning from research to practical implementation, and determining the optimal timing for clinical adoption of new technologies. In conclusion, this paper informs, justifies, and strengthens the knowledge translation process, ensuring it remains relevant, rigorous, and significantly contributes to pediatric rehabilitation and other clinical fields.

Keywords: knowledge translation, rehabilitation technology, pediatrics, barriers, facilitators, stakeholders

Procedia PDF Downloads 14
894 Promoting Academic and Social-Emotional Growth of Students with Learning Differences Through Differentiated Instruction

Authors: Jolanta Jonak

Abstract:

Traditional classrooms are challenging for many students, but especially for students who learn differently due to cognitive makeup, learning preferences, or disability. These students often require different teaching approaches and learning opportunities in order to benefit from instruction. Teachers frequently default to a single teaching approach, usually the one that matches their own learning style. For instance, teachers who are auditory learners are likely to provide mostly auditory learning opportunities; a student who is a visual learner may not fully benefit from that teaching style. Research and feedback from students and their parents indicate that large numbers of students are not provided with the type of education and supports they need to be successful in an academic environment. This slows their rate of learning and ultimately leads to skill deficits. Providing varied learning approaches promotes high academic and social-emotional growth in all students and prevents inaccurate special education referrals. Varied learning opportunities can be delivered to all students through Differentiated Instruction (DI), which allows each student to learn in the most optimal way regardless of learning preferences and cognitive learning profiles, and which leads to a high level of student engagement and learning. In addition, experiencing success in the classroom contributes to increased social-emotional wellbeing. By being cognizant of how teaching approaches affect student learning, school staff can avoid inaccurate perceptions of students' learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability. This presentation will illustrate learning differences arising from various factors, how to recognize them, and how to address them through Differentiated Instruction.

Keywords: special education, disability, differences, differentiated instruction, social emotional wellbeing

Procedia PDF Downloads 43
893 Development and Characterisation of Nonwoven Fabrics for Apparel Applications

Authors: Muhammad Cheema, Tahir Shah, Subhash Anand

Abstract:

The cost of producing apparel fabrics for garment manufacturing is high because of the conventional manufacturing processes involved, and new methods for making fabrics by unconventional means are constantly being developed. With advances in technology and the availability of innovative fibres, durable nonwoven fabrics that can compete with woven fabrics in terms of aesthetic and tensile properties are being developed using the hydroentanglement process. In the work reported here, hydroentangled nonwoven fabrics were developed through a hybrid nonwoven manufacturing process using fibrillated Tencel® and bi-component (sheath/core) polyethylene/polyester (PE/PET) fibres, in which initial nonwoven fabrics were prepared by needle-punching and then hydroentangled at optimal pressures of 50 to 250 bar. The prepared fabrics were characterized according to British Standards (BS 3356:1990, BS 9237:1995, BS 13934-1:1999), and the results were compared with those for a standard plain-weave cotton fabric, a polyester woven fabric, and a commercially available nonwoven fabric (Evolon®). The developed hydroentangled fabrics showed better drape, with a flexural rigidity of 252 mg.cm in the machine direction against 1340 mg.cm for the corresponding commercial hydroentangled fabric. Their tensile strength was approximately 200% higher than that of the commercial hydroentangled fabric. The developed fabrics likewise showed higher air permeability: 448 mm/sec at 100 Pa pressure, compared with 69 mm/sec for the Evolon fabric.
Thus, for apparel fabrics, combining existing methods of nonwoven production provides additional benefits in terms of cost and time and also helps to reduce the carbon footprint of apparel fabric manufacture.

Keywords: hydroentanglement, nonwoven apparel, durable nonwoven, wearable nonwoven

Procedia PDF Downloads 261
892 Comparative Studies on the Needs and Development of Autotronic Maintenance Training Modules for the Training of Automobile Independent Workshop Service Technicians in the North-Western Region, Nigeria

Authors: Muhammad Shuaibu Birniwa

Abstract:

Automobile independent workshop service technicians (popularly called roadside mechanics) are the technical personnel who repair most automobile vehicles in Nigeria. The majority of these mechanics acquired their skills through apprenticeship training. Modern vehicles imported into the country pose great challenges to present-day automobile technicians, particularly in carrying out maintenance and repair of the latest (autotronic) vehicles, because the technicians lack autotronic skills competency. To address these problems, a study was carried out in the North-Western region of Nigeria to produce suitable maintenance training modules that can be used to train the technicians, so that they can upgrade or acquire the competencies needed to successfully maintain and repair the autotronic vehicles running every day on the nation's roads. A cluster sampling technique was used to obtain a sample from the population, which comprised all autotronic-inclined lecturers, instructors, and independent workshop service technicians in the North-Western region of Nigeria. The seven states in the study area (Jigawa, Kaduna, Kano, Katsina, Kebbi, Sokoto, and Zamfara) served as clusters, and five states (Jigawa, Kano, Katsina, Kebbi, and Zamfara) were randomly selected as the sample. The entire population of the five selected clusters, 183 respondents comprising 44 lecturers, 49 instructors, and 90 autotronic independent workshop service technicians, was used in the study because of its manageable size.
183 copies of the Autotronic Maintenance Training Module Questionnaire (AMTMQ), with 174 and 149 question items respectively, were administered and collected by the researcher with the help of assistants. They were administered to 44 polytechnic lecturers in departments of mechanical engineering, 49 instructors in skills acquisition centres and polytechnics, and 90 master craftsmen of autotronic-inclined independent workshops. Data collected for answering research questions 1, 3, 4, and 5 were analysed using SPSS version 22; the grand mean and standard deviation were used to answer the research questions. Analysis of variance (ANOVA) was used to test null hypotheses one to three, and the t-test was used to analyse hypotheses four and five, all at the 0.05 level of significance. The research revealed that all the objectives, contents/tasks, facilities, delivery systems, and evaluation techniques contained in the questionnaire were required for the development of the autotronic maintenance training modules for independent workshop service technicians in the North-Western zone of Nigeria. The skills upgrade training conducted by the federal government in collaboration with SURE-P, NAC, and SMEDEN was not successful because the educational status of the target population was not considered in drafting the training modules, and the mode of training did not take cognizance of the trainees' theoretical background, especially in basic science, which rendered the programme ineffective and insufficient for the tasks on the ground.
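The statistical workflow described above (grand mean with standard deviation for the research questions, one-way ANOVA across the respondent groups) can be sketched in a few lines. The Likert-scale responses below are hypothetical placeholders for illustration only, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses from the three respondent groups
# (lecturers, instructors, master craftsmen); values are illustrative only.
groups = {
    "lecturers":   [4.2, 3.8, 4.5, 4.0, 3.9],
    "instructors": [4.1, 4.4, 3.7, 4.3, 4.0],
    "technicians": [3.6, 4.0, 4.2, 3.9, 4.1],
}

# Grand mean and per-group spread, as used to answer the research questions.
all_scores = [x for g in groups.values() for x in g]
grand_mean = mean(all_scores)
for name, g in groups.items():
    print(f"{name}: mean={mean(g):.2f}, sd={stdev(g):.2f}")

# One-way ANOVA across the groups: F = between-group MS / within-group MS,
# as used to test the null hypotheses of no group differences.
k = len(groups)                      # number of groups
n = len(all_scores)                  # total observations
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups.values())
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"grand mean = {grand_mean:.2f}, F({k - 1},{n - k}) = {f_stat:.3f}")
```

The computed F statistic would then be compared against the critical F value at the 0.05 significance level to accept or reject each null hypothesis.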

Keywords: autotronics, roadside, mechanics, technicians, independent

Procedia PDF Downloads 68
891 Sensitive Electrochemical Sensor for Simultaneous Detection of Endocrine Disruptors, Bisphenol A and 4-Nitrophenol Using La₂Cu₂O₅ Modified Glassy Carbon Electrode

Authors: S. B. Mayil Vealan, C. Sekar

Abstract:

Bisphenol A (BIS A) and 4-nitrophenol (4-N) are among the most prevalent environmental endocrine-disrupting chemicals; they mimic hormones and directly affect the development and growth of animal and human reproductive systems. Moreover, intensive exposure to these compounds is linked to prostate and breast cancer, infertility, obesity, and diabetes. Hence, accurate and reliable determination techniques are crucial for preventing human exposure to these harmful chemicals. Lanthanum copper oxide (La₂Cu₂O₅) nanoparticles were synthesized and investigated using scanning electron microscopy, high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, and electrochemical impedance spectroscopy. Cyclic voltammetry and square-wave voltammetry were employed to evaluate the electrochemical behavior of the as-synthesized samples toward the detection of bisphenol A and 4-nitrophenol. Under optimal conditions, the oxidation current increased linearly with the concentration of BIS A and 4-N in the range of 0.01 to 600 μM, with detection limits of 2.44 nM and 3.8 nM, respectively. These are the lowest limits of detection and the widest linear ranges reported in the literature for this determination. The method was applied to the simultaneous determination of BIS A and 4-N in real samples (food packaging materials and river water), with excellent recovery values ranging from 95% to 99%. Good stability, sensitivity, selectivity, reproducibility, fast response, and ease of preparation make the sensor well suited to the simultaneous determination of bisphenol A and 4-nitrophenol. To the best of our knowledge, this is the first report in which La₂Cu₂O₅ nanoparticles were used as efficient electron mediators for the fabrication of an endocrine-disruptor (BIS A and 4-N) chemical sensor.

Keywords: endocrine disruptors, electrochemical sensor, food contact materials, lanthanum cuprates, nanomaterials

Procedia PDF Downloads 82
890 The Gastroprotective Potential of Clematis flammula Leaf Extracts

Authors: Dina Atmani-Kilani, Farah Yous, Djebbar Atmani

Abstract:

The etiology of peptic ulcer is closely related to stress, excessive consumption of nonsteroidal anti-inflammatory drugs, or ethanol. Clematis flammula (Ranunculaceae) is a medicinal plant widely used by rural populations to treat inflammatory disorders. This study was designed to assess the gastroprotective potential of C. flammula extracts. Gastric ulcers were induced by stress, indomethacin, HCl/ethanol, and absolute ethanol in NMRI mice. The antioxidant potency of the ethanolic extract of Clematis flammula (EECF) was evaluated via catalase (CAT), superoxide dismutase (SOD), and glutathione peroxidase (GPx) activities; glutathione (GSH) and malondialdehyde (MDA) levels were also quantified. The anti-inflammatory potential was evaluated through the effect of EECF on myeloperoxidase (MPO) activity and vascular permeability. Complementary tests quantifying mucus levels, gastric motility, and inhibition of H+/K+ ATPase activity, together with a histopathological study, were undertaken to explore the mechanism of action of EECF. EECF exhibited a significant (p < 0.001) gastroprotective effect, optimal at 100 mg/kg, elevating SOD, CAT, and GSH levels, thereby minimizing MDA production and lowering MPO activity and vascular permeability. EECF also increased mucus production, decreased gastric motility, and completely suppressed H+/K+ ATPase activity. The histopathological study confirmed the effectiveness of the extract in preventing peptic ulcers. These results demonstrate the gastroprotective effect of EECF via antioxidant, anti-inflammatory, cytoprotective, and anti-secretory mechanisms, which may justify its use as an alternative in peptic ulcer treatment.

Keywords: clematis flammula, superoxide dismutase, myeloperoxidase, ATPase, pump

Procedia PDF Downloads 196
889 Small Town, Big Urban Issues: The Case of Kiryat Ono, Israel

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers, and above all the public with seemingly unsolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. This is reflected in the quality of urban form and life, which has seen no significant progress in the last two to three decades despite the growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 100,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some remedies we have developed to bridge the gap, and modestly suggest a practice that may be generic for similar cases. Basic methodologies: The OBJECT, the town of Kiryat Ono, is examined through a series of four action processes: de-composition, re-composition, centering, and finally controlled structural disintegration. Each stage is based on facts, on analysis of previous multidisciplinary interventions on various layers, and on the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are appropriate for the open-ended network, setting the rules by which contemporary urban society can cluster. The study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level through recent master plans and urban regeneration policies that mostly encourage individual economic initiatives.
Unfortunately, because of the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes in Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thereby redefining the sense of PLACE. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development while providing for the public good and preserving the characteristics of the PLACE, consisting of a toolbox of design guidelines able to reorganize space along the time axis in a coherent way. In conclusion: The system of rules we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy framework for the accelerated urbanization of our chaotic present.

Keywords: housing, architecture, urban qualities, urban regeneration, conservation, intensification

Procedia PDF Downloads 359
888 Efficient Chiller Plant Control Using Modern Reinforcement Learning

Authors: Jingwei Du

Abstract:

The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. Most current control methods fall into two categories: empirical and model-based. To be effective, the former relies heavily on engineering expertise, and the latter requires extensive historical data. Reinforcement learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a "policy". This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and to enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available, allowing the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based algorithms such as PPO. This research took inspiration from "alignment", a process that uses human feedback to fine-tune a pretrained model when it produces unsafe output. The methodology can be summarized in three steps. First, an initial policy model is generated from minimal prior knowledge. Next, the PPO agent is deployed so that feedback from both the critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison; the results show that the proposed method is both safe and effective and needs little to no historical data to start up.
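The core of the PPO update mentioned above is the clipped surrogate objective, which bounds how far the new policy can move from the data-collecting policy in a single step. A minimal sketch follows; the batch of log-probabilities and advantages is randomly generated for illustration and does not come from any chiller-plant model.

```python
import math
import random

# Clipped surrogate objective at the heart of PPO, evaluated on a batch of
# hypothetical chiller-plant transitions. All quantities (old/new action
# log-probabilities, critic advantages) are random placeholders.
random.seed(0)
BATCH = 32
EPS = 0.2                                   # PPO clipping parameter

logp_old = [math.log(random.uniform(0.1, 0.9)) for _ in range(BATCH)]
logp_new = [lp + random.gauss(0.0, 0.1) for lp in logp_old]   # after one update
advantages = [random.gauss(0.0, 1.0) for _ in range(BATCH)]   # critic estimates

def clip(x, lo, hi):
    return max(lo, min(hi, x))

terms = []
for lp_o, lp_n, adv in zip(logp_old, logp_new, advantages):
    ratio = math.exp(lp_n - lp_o)           # importance-sampling ratio
    # Pessimistic bound: clipping removes the incentive to move the policy
    # more than EPS away from the policy that collected the data.
    terms.append(min(ratio * adv, clip(ratio, 1.0 - EPS, 1.0 + EPS) * adv))

surrogate = sum(terms) / BATCH              # objective to be maximized
print(f"clipped surrogate objective: {surrogate:.4f}")
```

In a full implementation this objective would be maximized by gradient ascent on the policy parameters, with the advantages supplied by the critic and, in the approach above, further feedback from human experts.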

Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning

Procedia PDF Downloads 18
887 A Systematic Review on Measuring the Physical Activity Level and Pattern in Persons with Chronic Fatigue Syndrome

Authors: Kuni Vergauwen, Ivan P. J. Huijnen, Astrid Depuydt, Jasmine Van Regenmortel, Mira Meeus

Abstract:

A lower activity level and imbalanced activity pattern are frequently observed in persons with chronic fatigue syndrome (CFS) / myalgic encephalomyelitis (ME) due to debilitating fatigue and post-exertional malaise (PEM). Identification of measurement instruments to evaluate the activity level and pattern is therefore important. The objective is to identify measurement instruments suited to evaluate the activity level and/or pattern in patients with CFS/ME and review their psychometric properties. A systematic literature search was performed in the electronic databases PubMed and Web of Science until 12 October 2016. Articles including relevant measurement instruments were identified and included for further analysis. The psychometric properties of relevant measurement instruments were extracted from the included articles and rated based on the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. The review was performed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A total of 49 articles and 15 unique measurement instruments were found, but only three instruments were evaluated in patients with CFS/ME: the Chronic Fatigue Syndrome-Activity Questionnaire (CFS-AQ), Activity Pattern Interview (API) and International Physical Activity Questionnaire-Short Form (IPAQ-SF), three self-report instruments measuring the physical activity level. The IPAQ-SF, CFS-AQ and API are all equally capable of evaluating the physical activity level, but none of the three measurement instruments are optimal to use. No studies about the psychometric properties of activity monitors in patients with CFS/ME were found, although they are often used as the gold standard to measure the physical activity pattern. More research is needed to evaluate the psychometric properties of existing instruments, including the use of activity monitors.

Keywords: chronic fatigue syndrome, data collection, physical activity, psychometrics

Procedia PDF Downloads 223
886 Water Re-Use Optimization in a Sugar Platform Biorefinery Using Municipal Solid Waste

Authors: Leo Paul Vaurs, Sonia Heaven, Charles Banks

Abstract:

Municipal solid waste (MSW) is a virtually unlimited source of lignocellulosic material in the form of a waste paper/cardboard mixture, which can be converted into fermentable sugars via cellulolytic enzyme hydrolysis in a biorefinery. The extraction of the lignocellulosic fraction and its preparation, however, are energy- and water-demanding processes. The wastewater generated is a rich organic liquor with a high chemical oxygen demand that can be partially cleaned, while generating biogas, in an upflow anaerobic sludge blanket bioreactor and then re-used in the process. In this work, an experiment was designed to determine the critical contaminant concentrations in water affecting either anaerobic digestion or enzymatic hydrolysis by simulating multiple water re-circulations. It was found that re-using the same water more than 16.5 times could decrease the hydrolysis yield by up to 65% and led to complete disaggregation of the granules. Owing to the complexity of the water stream, the contaminants responsible for the performance decrease could not be identified, but sodium, potassium, and lipid accumulation were suspected for the anaerobic digestion (AD) process, and heavy-metal build-up for enzymatic hydrolysis. The experimental data were incorporated into a model based on water pinch technology, which was used to optimize water re-utilization in the modelled system, reducing the fresh water requirement and wastewater generation while ensuring all processes performed at an optimal level. Multiple scenarios were modelled in which sub-process requirements were evaluated in terms of importance, operational costs, and impact on CAPEX. The best compromise between water usage, AD performance, and enzymatic hydrolysis yield was determined for each assumed contaminant degradation by the anaerobic granules. Results from the model will be used to build the first MSW-based biorefinery in the USA.

Keywords: anaerobic digestion, enzymatic hydrolysis, municipal solid waste, water optimization

Procedia PDF Downloads 315
885 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production

Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara

Abstract:

Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. They are useful for finding optimal solutions to water resources problems, given the complexities involved in the analysis. River basin management is an essential area that covers upstream management, river inflow and outflow, and the downstream aspects of a reservoir. Water, as a scarce resource, is needed by humans and the environment for survival, and its management, including proper distribution among competing users in a river basin, involves many complexities, constraints, and conflicting objectives. Evolutionary algorithms, which are population-based search algorithms, solve such complex problems with relative ease and are easy to use, fast, and robust. This paper discusses many of their applications and explains the methodologies involved in modeling and simulating water management problems in river basins. The review found that different evolutionary algorithms suit different problems; accordingly, appropriate algorithms are suggested for different methodologies and applications based on the results of the studies reviewed. It is concluded that evolutionary algorithms, with their wide applications in water resources management, are viable and accessible for most applications. The results suggest that evolutionary algorithms, applied in the right areas, can deliver superior solutions for river basin management, especially in reservoir operations, irrigation planning and management, streamflow forecasting, and real-time applications. Future directions for this work are suggested.
This study will assist decision makers and stakeholders in choosing the best evolutionary algorithm for varied optimization issues in water resources management.
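As a minimal illustration of how a population-based evolutionary algorithm can be applied to reservoir operation, the sketch below evolves a 12-month release schedule for a toy single-reservoir model with a crude hydropower proxy and storage-bound penalties. All inflows, bounds, and parameters are assumed for illustration and are not taken from any reviewed study.

```python
import random

random.seed(1)
# Toy reservoir: choose 12 monthly releases to maximize a hydropower proxy
# (release * head) while keeping storage within bounds. Illustrative numbers.
INFLOW = [80, 90, 120, 150, 130, 100, 70, 60, 55, 65, 75, 85]  # hm^3/month
S0, S_MIN, S_MAX, R_MAX = 500.0, 200.0, 900.0, 140.0

def fitness(releases):
    storage, power, penalty = S0, 0.0, 0.0
    for inflow, r in zip(INFLOW, releases):
        storage += inflow - r                # mass balance
        head = storage / S_MAX               # crude head proxy in [0, 1]
        power += r * head
        if storage < S_MIN or storage > S_MAX:
            penalty += 1000.0                # infeasibility penalty
    return power - penalty

def evolve(pop_size=40, gens=60, mut=0.3):
    pop = [[random.uniform(0, R_MAX) for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                         # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(INFLOW))           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                        # Gaussian mutation
                i = random.randrange(len(child))
                child[i] = min(R_MAX, max(0.0, child[i] + random.gauss(0, 10)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best):.1f}")
```

A multi-objective variant (e.g. NSGA-II, as often used in the reviewed literature) would replace the single penalized fitness with Pareto ranking over competing objectives such as power, supply reliability, and flood control.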

Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management

Procedia PDF Downloads 485
884 Choosing the Right Projects With Multi-Criteria Decision Making to Ensure the Sustainability of Projects

Authors: Saniye Çeşmecioğlu

Abstract:

The importance of project sustainability and success has grown as external environmental factors have proliferated and decreased project resilience. The primary way to forestall project failure is to ensure long-term viability through strategic project selection, by creating a judicious project-selection framework within the organization. During the selection process, decision-makers require precise decision contexts (models) that conform to the company's business objectives and sustainability expectations. Establishing a rational model for project selection enables organizations to create a distinctive and objective framework for the selection process. For the optimal implementation of this decision-making model, it is also crucial to establish a Project Management Office (PMO) team and a Project Steering Committee within the organizational structure to oversee the framework. These teams can update the project selection criteria and weights in response to changing conditions, ensure alignment with the company's business goals, and facilitate the selection of potentially viable projects. This paper presents a multi-criteria decision model for selecting project sustainability and success criteria that ensure timely project completion and retention. The model was developed using MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique) and was based on broadcasting companies' expectations. The results of this study provide a model that supports the objective selection of appropriate projects by utilizing project selection and sustainability criteria together with their respective weights. The study also offers suggestions that may prove helpful in future endeavors.
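The final aggregation step of such a multi-criteria model can be illustrated with a simple weighted additive score. Note that MACBETH itself derives the criterion weights and value scores from qualitative pairwise attractiveness judgments (via linear programming); the criteria, weights, and scores below are purely hypothetical stand-ins for that elicitation.

```python
# Weighted additive aggregation step of a multi-criteria project-selection
# model. All criterion names, weights, and 0-100 value scores are assumed
# for illustration; in MACBETH they would come from pairwise judgments.
WEIGHTS = {"strategic_fit": 0.35, "sustainability": 0.25,
           "expected_return": 0.25, "risk_resilience": 0.15}

PROJECTS = {
    "Project A": {"strategic_fit": 80, "sustainability": 60,
                  "expected_return": 70, "risk_resilience": 50},
    "Project B": {"strategic_fit": 55, "sustainability": 90,
                  "expected_return": 60, "risk_resilience": 70},
    "Project C": {"strategic_fit": 70, "sustainability": 40,
                  "expected_return": 90, "risk_resilience": 40},
}

def overall_score(scores):
    # Additive value function: sum of weight * partial value score.
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

ranking = sorted(PROJECTS, key=lambda p: overall_score(PROJECTS[p]), reverse=True)
for p in ranking:
    print(f"{p}: {overall_score(PROJECTS[p]):.2f}")
```

In practice the PMO and steering committee described above would revisit both the weights and the value scores as conditions change, and the ranking would be recomputed from the updated model.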

Keywords: project portfolio management, project selection, multi-criteria decision making, project sustainability and success criteria, MACBETH

Procedia PDF Downloads 59
883 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study

Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates large amounts of by-products, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain large amounts of protein and calcium. Sardina pilchardus scales resulting from processing operations have the potential to be used as a raw material for collagen production. Given this strong expectation of the regional fish industry, the upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-soluble collagen from the scales of the sardine, Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM); the second focuses on demineralization with HCl solution or EDTA; and the last establishes the optimum conditions for isolating collagen from the fish scales by solvent extraction. The advancement from laboratory scale to pilot scale is a critical stage in technological development. In this study, the optimal deproteinization conditions validated at laboratory scale were employed in the pilot-scale procedure. The deproteinization of fish scales was then demonstrated at pilot scale (2 kg of scales, 20 L of NaOH), resulting in a protein content of 0.2 mg/ml and a hydroxyproline content of 2.11 mg/l. These results indicate that the pilot scale showed performance similar to that of the laboratory scale.

Keywords: deproteinization, pilot scale, scale, Sardina pilchardus

Procedia PDF Downloads 440
882 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground-penetrating radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface targets such as landmines and unexploded ordnance, and for through-wall imaging in security applications. In a monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesirably low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications, so the hyperbolic signature in the space-time GPR image usually needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and reflectivity of an underground object; the main challenge of GPR imaging is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, was first used for image reconstruction. The standard BP algorithm has limited robustness against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. An improved BP algorithm based on cross-correlation between the received signals is therefore proposed to reduce noise and suppress artifacts.
To further improve the quality of the proposed BP imaging results, a weight factor was designed for each point in the imaging region. Compared with the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed superior artifact suppression and images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
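The standard delay-and-sum back-projection that the improved algorithm builds upon can be sketched as follows: each image pixel accumulates, over all antenna positions, the A-scan sample at that pixel's two-way travel time, which collapses the hyperbola onto the target location. The simulated monostatic geometry, wave speed, and single point scatterer below are illustrative assumptions, not the paper's setup, and the paper's cross-correlation weighting is not included.

```python
import math

# Delay-and-sum back-projection sketch for monostatic GPR. A single buried
# point scatterer is simulated, then each image pixel sums the A-scan
# samples at the pixel's two-way travel time. Parameters are illustrative.
V = 0.1            # wave speed in the soil, m/ns (~c / sqrt(eps_r = 9))
DT = 0.2           # time sampling interval, ns
N_T = 200          # samples per A-scan
XS = [i * 0.05 for i in range(41)]       # antenna positions along the track, m
TARGET = (1.0, 0.5)                      # (x, depth) of the point target, m

def two_way_time(ant_x, px, pz):
    return 2.0 * math.hypot(px - ant_x, pz) / V

# Simulated data: one narrow pulse per trace at the target's travel time.
data = []
for ax in XS:
    trace = [0.0] * N_T
    k = round(two_way_time(ax, *TARGET) / DT)
    if 0 <= k < N_T:
        trace[k] = 1.0
    data.append(trace)

# Standard BP: for each pixel, sum the matching sample across all traces.
NX, NZ, DX, DZ = 41, 25, 0.05, 0.05
image = [[0.0] * NX for _ in range(NZ)]
for iz in range(NZ):
    for ix in range(NX):
        px, pz = ix * DX, (iz + 1) * DZ
        for ax, trace in zip(XS, data):
            k = round(two_way_time(ax, px, pz) / DT)
            if 0 <= k < N_T:
                image[iz][ix] += trace[k]

# The brightest pixel should coincide with the buried target.
peak = max((image[iz][ix], ix * DX, (iz + 1) * DZ)
           for iz in range(NZ) for ix in range(NX))
print(f"peak at x={peak[1]:.2f} m, z={peak[2]:.2f} m")
```

The improved algorithm of the paper would additionally weight each pixel's sum by a cross-correlation-based factor so that contributions that disagree across traces (noise, artifacts) are suppressed rather than accumulated.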

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 448
881 Hybrid Bimodal Magnetic Force Microscopy

Authors: Fernández-Brito David, Lopez-Medina Javier Alonso, Murillo-Bracamontes Eduardo Antonio, Palomino-Ovando Martha Alicia, Gervacio-Arciniega José Juan

Abstract:

Magnetic Force Microscopy (MFM) is an Atomic Force Microscopy (AFM) technique that characterizes, at a nanometric scale, the magnetic properties of ferromagnetic materials. Conventional MFM scans in two different AFM modes. The first is tapping mode, in which the cantilever has short-range force interactions with the sample in order to obtain the topography. Then the lift mode starts, raising the cantilever to maintain a fixed tip-sample distance so that it interacts only with the long-range magnetic forces of the sample. In recent years, there have been attempts to improve the MFM technique. Bimodal MFM was first developed theoretically and later proven experimentally. In bimodal MFM, the internal AFM piezoelectric element excites the cantilever in two resonance modes simultaneously; the first mode tracks the topography, while the second is more sensitive to the magnetic forces between the tip and the sample. However, it has been shown that the cantilever vibrations induced by the internal AFM piezoelectric ceramic are not optimal, which degrades bimodal MFM characterizations. Subsequently, Secondary Resonance Magnetic Force Microscopy (SR-MFM) was developed. In this technique, a coil located below the sample generates an external alternating magnetic field that excites the cantilever at a second frequency for the bimodal MFM mode. Nonetheless, for ferromagnetic materials with a low coercive field, the external field used in the SR-MFM technique can modify the magnetic domains of the sample. In this work, a Hybrid Bimodal MFM (HB-MFM) technique is proposed. HB-MFM uses bimodal MFM, but the first resonance of the cantilever is excited by the magnetic field of the ferromagnetic sample itself, which is vibrated by a piezoelectric element placed under it.
The advantages of this new technique are demonstrated through preliminary HB-MFM results obtained on a hard disk sample. Additionally, traditional two-pass MFM and HB-MFM measurements were compared.
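Any bimodal scheme ultimately demodulates the cantilever deflection at two eigenmode frequencies at once. The sketch below shows a lock-in style two-frequency demodulation; the frequencies, sampling rate, and amplitudes are illustrative assumptions, not values from this work.

```python
import numpy as np

def lockin_amplitude(signal, f, fs):
    """Estimate the amplitude of the component of `signal` at frequency f
    by demodulating with quadrature references (lock-in style)."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f * t))
    q = np.mean(signal * np.sin(2 * np.pi * f * t))
    return 2 * np.hypot(i, q)

# Illustrative parameters (assumed): two cantilever eigenmodes
fs = 2.0e6             # sampling rate, Hz
f1, f2 = 70e3, 430e3   # first (topography) and second (magnetic) modes, Hz
t = np.arange(0, 0.01, 1 / fs)
# deflection = large topography channel at f1 + small magnetic channel at f2
deflection = 5.0 * np.cos(2 * np.pi * f1 * t) + 0.3 * np.cos(2 * np.pi * f2 * t)
a1 = lockin_amplitude(deflection, f1, fs)  # topography amplitude
a2 = lockin_amplitude(deflection, f2, fs)  # magnetic amplitude
```

The two amplitude channels are recovered independently because the quadrature references are orthogonal over an integer number of periods.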

Keywords: magnetic force microscopy, atomic force microscopy, magnetism, bimodal MFM

Procedia PDF Downloads 69
880 Integration of Thermal Energy Storage and Electric Heating with Combined Heat and Power Plants

Authors: Erich Ryan, Benjamin McDaniel, Dragoljub Kosanovic

Abstract:

Combined heat and power (CHP) plants are an efficient technology for meeting the heating and electric needs of large campus energy systems, but they have come under greater scrutiny as the world pushes for emissions reductions and lower consumption of fossil fuels. The electrification of heating and cooling systems offers a great deal of potential for carbon savings, but these systems can be costly endeavors due to increased electric consumption and peak demand. Thermal energy storage (TES) has been shown to be an effective means of improving the viability of electrified systems by shifting heating and cooling load to off-peak hours and reducing peak demand charges. In this study, we analyze the integration of an electrified heating and cooling system with thermal energy storage into a campus CHP plant, to investigate the potential of leveraging existing infrastructure and technologies toward the climate goals of the 21st century. A TRNSYS model was built to simulate a ground source heat pump (GSHP) system with TES using measured campus heating and cooling loads. The GSHP-with-TES system is modeled to industry-standard parameters and sized to provide an optimal balance of capital and operating costs. Using known CHP production information, costs and emissions were investigated for a unique large-energy-user rate structure that operates a CHP plant. The results highlight the cost and emissions benefits of a targeted integration of heat pump technology within the framework of existing CHP systems, along with the performance impacts and the value of TES capability within the combined system.
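The peak-shifting role of TES can be illustrated with a deliberately simple greedy dispatch, far simpler than a TRNSYS simulation; the hour ranges, storage capacity, and charge/discharge rate below are invented for illustration only.

```python
def shift_peak(load, peak_hours, capacity, max_rate):
    """Greedy sketch: discharge thermal storage during peak hours and
    recharge during off-peak hours. `load` is the hourly electric load (kW)
    of the heating/cooling system; returns the net load after dispatch.
    Assumes 1-hour steps so kW and kWh interchange directly."""
    net = list(load)
    soc = capacity  # storage starts full (state of charge, kWh)
    for h in sorted(peak_hours):
        d = min(net[h], max_rate, soc)  # limited by load, rate, and charge
        net[h] -= d
        soc -= d
    for h in range(len(net)):           # recharge when off-peak
        if h not in peak_hours and soc < capacity:
            c = min(max_rate, capacity - soc)
            net[h] += c
            soc += c
    return net
```

Even this crude dispatch lowers the billed peak demand: the highest net load moves from the on-peak heating hours to the (cheaper) recharge hours.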

Keywords: thermal energy storage, combined heat and power, heat pumps, electrification

Procedia PDF Downloads 86
879 Performance Evaluation of Using Genetic Programming Based Surrogate Models for Approximating the Simulation of Complex Geochemical Transport Processes

Authors: Hamed K. Esfahani, Bithin Datta

Abstract:

Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area. Development of optimal remediation strategies in aquifers may require repeated solutions of such complex numerical simulation models. To overcome this computational limitation and make large numbers of repeated simulations feasible, trained Genetic Programming (GP) based surrogate models are developed to approximately simulate such complex transport processes. The transport of acid mine drainage, a hazardous pollutant, is first simulated using the numerical simulation model HYDROGEOCHEM 5.0 for a contaminated aquifer at a historic mine site. The simulation model's solution results for an illustrative contaminated aquifer site are then approximated by training and testing a GP-based surrogate model. Performance evaluation of the ensemble GP models as surrogates for reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating the uncertainties of the historic mine site.
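As a rough illustration of the surrogate idea, a toy genetic programming loop (binary tournament selection, subtree mutation, and elitism; crossover omitted for brevity) can be fit to samples of an expensive model. Everything below, including the operator set, population size, and the quadratic stand-in for the simulator output, is an assumption for illustration, not the authors' setup.

```python
import random
import operator

OPS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMINALS = ['x', 1.0, 2.0]

def rand_tree(depth=3):
    """Grow a random expression tree from operators and terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op, arity = random.choice(OPS)
    return (op, [rand_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, kids = tree
    return op(*(evaluate(k, x) for k in kids))

def mse(tree, xs, ys):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mutate(tree):
    """Subtree mutation: descend randomly, replace one subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)
    op, kids = tree
    kids = list(kids)
    kids[random.randrange(len(kids))] = mutate(kids[random.randrange(len(kids))])
    return (op, kids)

def gp_fit(xs, ys, pop_size=60, gens=30, seed=0):
    """Evolve a cheap surrogate expression for the (expensive) model output ys."""
    random.seed(seed)
    pop = [rand_tree() for _ in range(pop_size)]
    best = min(pop, key=lambda tr: mse(tr, xs, ys))
    history = []
    for _ in range(gens):
        history.append(mse(best, xs, ys))
        new = [best]  # elitism: the best tree always survives
        while len(new) < pop_size:
            a, b = random.sample(pop, 2)  # binary tournament
            parent = a if mse(a, xs, ys) < mse(b, xs, ys) else b
            new.append(mutate(parent) if random.random() < 0.5 else parent)
        pop = new
        best = min(pop, key=lambda tr: mse(tr, xs, ys))
    history.append(mse(best, xs, ys))
    return best, history
```

Once trained, evaluating the surrogate costs microseconds per query, which is what makes the repeated evaluations inside a remediation-design optimizer affordable.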

Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites

Procedia PDF Downloads 274
878 Experimental Evaluation of Electrocoagulation for Hardness Removal of Bore Well Water

Authors: Pooja Kumbhare

Abstract:

Water is an important resource for the survival of life. The inadequate availability of surface water makes people depend on ground water to fulfill their needs. However, ground water is generally too hard to satisfy the requirements of domestic as well as industrial applications. Hardness removal involves various techniques such as the lime-soda process, ion exchange, reverse osmosis, nano-filtration, distillation, and evaporation. These techniques have individual drawbacks such as high annual operating cost, sediment formation on membranes, and sludge disposal problems. Electrocoagulation (EC) is being explored as a modern and cost-effective technology to cope with the growing demand for high water quality at the consumer end. In general, earlier studies on electrocoagulation for hardness removal deployed batch processes. Because batch processes are unsuited to treating large volumes of water, it is essential to develop a continuous-flow EC process. In the present study, an attempt is therefore made to investigate a continuous-flow EC process for reducing the excessive hardness of bore-well water. The experimental study was conducted using 12 aluminum electrodes (25 cm × 10 cm, 1 cm thick) in an EC reactor with a volume of 8 L. A bore-well water sample, collected from a local bore well (at Vishrambag, Sangli, Maharashtra) with an average initial hardness of 680 mg/l (range: 650–700 mg/l), was used for the study. Continuous-flow electrocoagulation experiments were carried out by varying the operating parameters, specifically reaction time (10–60 min), voltage (5–20 V), and current (1–5 A). The experimental study found that hardness removal to the desired extent could be achieved even in a continuous-flow EC reactor, so its use appears promising.
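The aluminum coagulant dose that the electrodes can supply at a given current follows from Faraday's law of electrolysis, m = φ·I·t·M/(zF). A small sketch using the study's operating ranges (the current efficiency φ is assumed ideal here):

```python
# Faraday's law: theoretical aluminum mass released by the anode.
def aluminum_dose_g(current_a, time_s, efficiency=1.0):
    M = 26.98    # molar mass of aluminum, g/mol
    z = 3        # electrons per Al -> Al3+ oxidation
    F = 96485.0  # Faraday constant, C/mol
    return efficiency * current_a * time_s * M / (z * F)

# At the study's upper operating point (5 A for 60 min):
dose = aluminum_dose_g(5.0, 60 * 60)
```

At 5 A for 60 min this gives roughly 1.68 g of aluminum, an upper bound before accounting for the actual current efficiency of the cell.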

Keywords: hardness, continuous flow EC process, aluminum electrode, optimal operating parameters

Procedia PDF Downloads 177