Search results for: full energy peak efficiency
784 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame
Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin
Abstract:
The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, where their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator) and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, new-type controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster. But they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new-type controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, which improves both control ability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid vs. its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. By analysis, the interface power flow can be controlled by generators, and the specific line power flow between the two-layer regions can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by generators, the bigger the control margin of the TCSC; on the other hand, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to further balance the total consumption of the generators and the control margin of the TCSC, so that the minimum control cost can be acquired. The coordination method for two-layer ultra short-term correction vs. AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the decrease of control cost and will provide a reference for following studies in this field.
Keywords: FACTS, multi-space-time frame, optimal control, TCSC
Procedia PDF Downloads 267
783 Aspects Concerning the Use of Recycled Concrete Aggregates
Authors: Ion Robu, Claudiu Mazilu, Radu Deju
Abstract:
Natural aggregates (gravel and crushed stone) are essential non-renewable resources which are used for infrastructure works and civil engineering. In European Union member states from Southeast Europe, it is estimated that the construction industry will grow by 4.2%, further complicating aggregate supply management. In addition, a significant additional problem that can be associated with the aggregates industry is the wasting of potential resources through the dumping of inert waste, especially waste from construction and demolition activities. In 2012, in Romania, less than 10% of construction and demolition waste (including concrete) was recovered, while the European Union requires that by 2020 this proportion should be at least 70% (Directive 2008/98/EC on waste, transposed into Romanian legislation by Law 211/2011). Depending on the efficiency of waste processing and the quality of the recycled concrete aggregate (RCA) obtained, poor-quality aggregate can be used as foundation material for roads, and high-quality aggregate can be used in new concrete for construction. To obtain good quality concrete using recycled aggregate, it is necessary to meet the minimum requirements defined by the rules for the manufacture of concrete with natural aggregate. The properties of the recycled aggregate (density, granulometry, granule shape, water absorption, weight loss in the Los Angeles test, attached mortar content, etc.) are the basis for concrete quality; establishing appropriate proportions between components and the concrete production methods are also extremely important for its quality. This paper presents a study on the use of recycled aggregates, from a concrete of specified class, to obtain new cement concrete with different percentages of recycled aggregates. To produce the recycled aggregates, several batches of concrete of classes C16/20, C25/30 and C35/45 were made, the composition calculation being made according to NE 012/2007 and CP 012/2007. Tests for producing recycled aggregate were carried out using concrete samples of the three established classes after 28 days of storage under the above conditions. Cubes with a 150 mm side were crushed in a first stage with a Liebherr-type jaw crusher set at 50 mm nominally. The resulting material was separated by sieving into granulometric sorts, and the 10-50 sort was used for preliminary crushing tests in the second stage with a Retsch BB 200 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher, respectively. The influence of the type of crusher used to obtain the recycled aggregates on granulometry and granule shape was highlighted, as well as the influence of the attached mortar on density, water absorption, behavior in the Los Angeles test, etc. The proportion of attached mortar was determined and correlated with the concrete class of origin of the recycled aggregates and their granulometric sort. The aim of characterizing the recycled aggregates is their valorization in new concrete used in construction. In this regard, a series of concretes were made in which the recycled aggregate content was varied from 0 to 100%. The new concretes were characterized from the point of view of the change in density and compressive strength with the proportion of recycled aggregates. It has been shown that an increase in recycled aggregate content does not necessarily mean a reduction in compressive strength, the quality of the aggregate having a decisive role.
Keywords: recycled concrete aggregate, characteristics, recycled aggregate concrete, properties
Procedia PDF Downloads 217
782 Exploring the Potential of Modular Housing Designs for the Emergency Housing Need in Türkiye after the February Earthquake in 2023
Authors: Hailemikael Negussie, Sebla Arın Ensarioğlu
Abstract:
In February 2023, Southeastern Türkiye and Northwestern Syria were hit by two consecutive high-magnitude earthquakes, leaving thousands dead and thousands more homeless. The housing crisis in the affected areas has resulted in the need for a fast and qualified solution. There are a number of solutions, one of which is the use of modular designs to rebuild the cities that have been affected. Modular designs are prefabricated building components that can be quickly and efficiently assembled on-site, making them ideal for building structures faster and with higher quality. These structures are flexible, adaptable, and can be customized to meet the specific needs of the inhabitants, in addition to being more energy-efficient and sustainable. The prefabricated nature also ensures that the quality of the products can be easily controlled. The reason for the collapse of most of the buildings during the earthquakes was found to be the lack of quality during the construction stage. Using modular designs allows greater control over the quality of the construction materials being used. The use of modular designs for a project of this scale presents some challenges, including the high upfront cost to design and manufacture components. However, if implemented correctly, modular designs can offer an effective and efficient solution to the urgent housing needs. The aim of this paper is to explore the potential of modular housing for mid- and long-term earthquake-resistant housing needs in the affected disaster zones after the earthquakes of February 2023. Within the scope of this paper, the adaptability of modular, prefabricated housing designs for the post-disaster environment and the advantages and disadvantages of this system will be examined. Elements such as the current conditions of the region where the destruction happened, climatic data, and topographic factors will be examined. Additionally, the paper will examine examples of similar local and international modular post-earthquake housing projects. The region is projected to enter a rapid reconstruction phase in the following periods. Therefore, this paper will present a proposal for a system that can be used to produce safe and healthy urbanization policies without causing new grievances while meeting the housing needs of the people in the affected regions.
Keywords: post-disaster housing, earthquake-resistant design, modular design, housing, Türkiye
Procedia PDF Downloads 89
781 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review
Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni
Abstract:
Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), controlling changes, and mapping irrigated areas. Calculating thresholds of soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracies and automate irrigation scheduling. This paper is a review structured by surveying about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. The other objective is to assess RS indices formed by choosing specific reflectance bands and identifying the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) to control changes. The innovation of this paper can be defined as categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. Then, the main idea of this research is to analyze the error reasons and/or values in employing different approaches in the three proposed parts reported by recent studies. Additionally, as an overall conclusion, the review attempts to decompose the different approaches into optimizing indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing
Procedia PDF Downloads 71
780 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. The paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster for each crew member is picked such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The current subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating the Line-of-Work for crew members. Here, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member which minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has a good performance.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
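To make the non-additive pricing step more concrete, the following minimal sketch (not the authors' production code) finds a roster path for one crew member in a toy acyclic duty graph, adding a quadratic penalty on the deviation of total fly-hours from a target only when a label reaches the sink. The graph structure, dual values, and the deliberately conservative dominance rule are all assumptions for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Label:
    reduced_cost: float                      # additive part (arc costs minus duals)
    node: str = field(compare=False)
    fly_hours: float = field(compare=False)
    path: tuple = field(compare=False)

def price_roster(graph, duals, source, sink, target_fly_hours, penalty_weight):
    """graph: dict node -> list of (next_node, arc_cost, arc_fly_hours)."""
    labels = {source: [Label(0.0, source, 0.0, (source,))]}
    queue = [labels[source][0]]
    best = None
    while queue:
        lab = heapq.heappop(queue)
        if lab.node == sink:
            # The non-additive (quadratic fairness) term is added only at the sink.
            total = lab.reduced_cost + penalty_weight * (lab.fly_hours - target_fly_hours) ** 2
            if best is None or total < best[0]:
                best = (total, lab.path)
            continue
        for nxt, cost, hours in graph.get(lab.node, []):
            new = Label(lab.reduced_cost + cost - duals.get(nxt, 0.0),
                        nxt, lab.fly_hours + hours, lab.path + (nxt,))
            # Simplified dominance: discard `new` only if an existing label at the same
            # node is no worse in cost AND has identical fly-hours, so the quadratic
            # end-penalty cannot reverse the comparison.
            kept = labels.setdefault(nxt, [])
            if any(o.reduced_cost <= new.reduced_cost and o.fly_hours == new.fly_hours
                   for o in kept):
                continue
            kept.append(new)
            heapq.heappush(queue, new)
    return best
```

A real implementation would carry additional resources (overnight duties, rest rules) and a sharper dominance condition; the sketch only shows where the quadratic term breaks simple additive dominance.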
Procedia PDF Downloads 147
779 Synthesis of Deformed Nuclei 260Rf, 261Rf and 262Rf in the Decay of 266Rf* Formed via Different Fusion Reactions: Entrance Channel Effects
Authors: Niyti, Aman Deep, Rajesh Kharab, Sahila Chopra, Raj. K. Gupta
Abstract:
Relatively long-lived transactinide elements (i.e., elements with atomic number Z≥104) up to Z = 108 have been produced in nuclear reactions between low-Z projectiles (C to Al) and actinide targets. Cross sections have been observed to decrease steeply with increasing Z. Recently, production cross sections of several picobarns have been reported for comparatively neutron-rich nuclides of elements 112 through 118 produced via hot fusion reactions with 48Ca and actinide targets. Some of those heavy nuclides are reported to have lifetimes on the order of seconds or longer. The relatively high cross sections in these hot fusion reactions are not fully understood, and this has renewed interest in systematic studies of heavy-ion reactions with actinide targets. The main aim of this work is to understand the dynamics of the hot fusion reactions 18O+248Cm and 22Ne+244Pu (carried out at RIKEN and TASCA, respectively) using the collective clusterization technique, carried out by undertaking the decay of the compound nucleus 266Rf∗ into 4n, 5n and 6n neutron evaporation channels. Here we extend our earlier study of the excitation functions (EFs) of 266Rf∗, formed in the fusion reaction 18O+248Cm, based on the Dynamical Cluster-decay Model (DCM) using the pocket formula for the nuclear proximity potential, to the use of other nuclear interaction potentials derived from the Skyrme energy density formalism (SEDF) based on the semiclassical extended Thomas-Fermi (ETF) approach, and we also study entrance channel effects by considering the synthesis of 266Rf* in the 22Ne+244Pu reaction. The Skyrme forces used are the old force SIII and the new forces GSkI and KDE0(v1). Here, the EFs for the production of the 260Rf, 261Rf and 262Rf isotopes via the 6n, 5n and 4n decay channels from the 266Rf∗ compound nucleus are studied at Elab = 88.2 to 125 MeV, including quadrupole deformations β2i and 'hot-optimum' orientations θi. The calculations are made within the DCM, where the neck-length ∆R is the only parameter, representing the relative separation distance between the two fragments and/or clusters Ai, which assimilates the neck formation effects.
Keywords: entrance channel effects, fusion reactions, skyrme force, superheavy nucleus
Procedia PDF Downloads 254
778 Water Reclamation and Reuse in Asia's Largest Sewage Treatment Plant
Authors: Naveen Porika, Snigdho Majumdar, Niraj Sethi
Abstract:
Water, food and energy security are emerging as increasingly important and vital issues for India and the world. The Hyderabad urban agglomeration (HUA), the capital city of Andhra Pradesh State in India, is the sixth largest city in the country, with a population of about 8.2 million. The Musi River, a tributary of the Krishna River, flows from west to east right through the heart of Hyderabad; about 80% of the water used by people is released back as sewage, which flows into the Musi every day with detrimental effects on the environment and the people downstream of the city. The average daily sewage generated in Hyderabad city is 950 MLD; however, treatment capacity exists only for 541 Million Liters per Day (MLD), and only 407 MLD of sewage is treated. As a result, 543 MLD of sewage flows into the Musi River daily. Hyderabad's current estimated water demand stands at 320 Million Gallons per Day (MGD). However, its installed capacity is merely 270 MGD; by 2020 the estimated demand will grow to 400 MGD. There is a huge gap between current supply and demand, and this is likely to widen by 2021. Developing new fresh water sources is a challenge for Hyderabad, as the fresh water sources are few and far from the city (about 150-200 km) and require excessive pumping. The constraints presented above make the conventional alternatives for supply augmentation unsustainable and unattractive. One such dependable and captive source of easily available water is treated sewage. With proper treatment, water of the desired quality can be recovered from the waste water (sewage) for recycling and reuse. The Hyderabad Amberpet sewage treatment plant, with a capacity of 339 MLD, is Asia's largest sewage treatment plant. Standard basic engineering modules of 30 MLD, 60 MLD, 120 MLD and 180 MLD for tertiary sewage treatment plants have been developed, and these are utilized for developing the sewage reclamation and reuse model in Asia's largest sewage treatment plant. This paper will focus on Hyderabad's water supply and demand, sewage generation and treatment, the technical aspects of tertiary sewage treatment, and the utilization of the developed standard modules for reclamation and reuse of treated sewage to overcome the deficit of 130 MGD as projected by 2021.
Keywords: water reclamation, reuse, Andhra Pradesh, Hyderabad, Musi river, sewage, demand and supply, recycle, Amberpet, 339 MLD, engineering modules, tertiary treatment
Procedia PDF Downloads 617
777 SARS-CoV-2: Prediction of Critical Charged Amino Acid Mutations
Authors: Atlal El-Assaad
Abstract:
Viruses change over time through mutations, resulting in new variants that may persist or disappear. A mutation refers to an actual change in the virus genetic sequence, and a variant is a viral genome that may contain one or more mutations. Critical mutations may cause the virus to be more transmissible, produce higher disease severity, and be more resistant to diagnostics, therapeutics, and vaccines. Thus, variants carrying such mutations may increase the risk to human health and are considered variants of concern (VOC). Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) - the positive-sense single-stranded RNA virus, contagious in humans, that caused coronavirus disease 2019 (COVID-19) - has been studied thoroughly, and several variants have been identified across the world with their corresponding mutations. SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins, but prior studies and vaccine development focused on genetic mutations in the S protein due to its vital role in allowing the virus to attach and fuse with the membrane of a host cell. Specifically, subunit S1 catalyzes attachment, whereas subunit S2 mediates fusion. In this perspective, we studied all charged amino acid mutations of the SARS-CoV-2 viral spike protein S1 when bound to antibody CC12.1 in a crystal structure and assessed the effect of the different mutations. We generated all missense mutants of the SARS-CoV-2 protein amino acids (AAs) within the SARS-CoV-2:CC12.1 complex model. To generate the family of mutants in each complex, we mutated every charged amino acid to each of the other charged amino acids (Lysine (K), Arginine (R), Glutamic Acid (E), and Aspartic Acid (D)) and studied the new binding of the complex after each mutation. We applied Poisson-Boltzmann electrostatic calculations feeding into free energy calculations to determine the effect of each mutation on binding. After analyzing our data, we identified charged amino acids key for binding. Furthermore, we validated those findings against published experimental genetic data. Our results are the first to propose in silico potential life-threatening mutations of SARS-CoV-2 beyond the mutations found in the five common variants found worldwide.
Keywords: SARS-CoV-2, variant, ionic amino acid, protein-protein interactions, missense mutation, AESOP
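As an illustration of the mutant-enumeration step described above, the sketch below lists every charged-to-charged single-point swap in a sequence. The toy S1 fragment is hypothetical; in the study, the positions come from the S1:CC12.1 crystal structure, and each mutant is subsequently scored with Poisson-Boltzmann electrostatics feeding free-energy calculations rather than simply enumerated.

```python
# Enumerate single-point mutants in which each charged residue is swapped to every
# other charged residue (K, R, E, D). Purely illustrative; not the authors' pipeline.

CHARGED = {"K", "R", "E", "D"}

def charged_mutants(sequence):
    """Yield (position, wild_type, mutant) for every charged-to-charged swap."""
    for pos, wt in enumerate(sequence, start=1):
        if wt not in CHARGED:
            continue
        for mut in sorted(CHARGED - {wt}):
            yield pos, wt, mut

if __name__ == "__main__":
    toy_s1_fragment = "NLDSKVGGNYNYLYRLFRKSNLKPFERDIST"   # hypothetical fragment
    mutants = list(charged_mutants(toy_s1_fragment))
    print(f"{len(mutants)} candidate mutants, e.g.:", mutants[:5])
```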
Procedia PDF Downloads 113
776 Stress-Strain Behavior of Banana Fiber Reinforced and Biochar Amended Compressed Stabilized Earth Blocks
Authors: Farnia Nayar Parshi, Mohammad Shariful Islam
Abstract:
Though earth construction is an ancient technology, researchers are working on increasing its strength by adding different types of stabilizers. Ordinary Portland cement for sandy soil and lime for clayey soil are very popular in practice, as well as being recommended by various authorities, for making stabilized blocks with satisfactory performance. The addition of these additives improves compressive strength but fails to improve ductility. The addition of both synthetic and natural fibers increases both compressive strength and ductility. Studies are being conducted to make earth blocks more cost-effective, energy-efficient and sustainable. In this experiment, banana fiber, an agricultural waste, and biochar are used to study the compressive stress-strain behavior of earth blocks made with four types of soil: low-plastic clay, sandy low-plastic clay, very fine sand, and medium-to-fine sand. Biochar is a charcoal-like carbon usually produced from organic or agricultural waste at high temperatures through a controlled condition called pyrolysis. In this experimental study, biochar was collected from BBI (Bangladesh Biochar Initiative), produced from wood flakes at around 400 degrees Celsius. Locally available PPC (Portland Pozzolana Cement) was used. Earth blocks of 5 cm × 5 cm × 5 cm were made with eight different combinations, such as bare soil, soil with 6% cement, soil with 6% cement and 5% biochar, soil with 6% cement, 5% biochar and 1% fiber, soil with 1% fiber, soil with 5% biochar and 1% fiber, and soil with 6% cement and 1% fiber. All samples were prepared with 10-12% water content. Uniaxial compressive strength tests were conducted on 21-day-old earth blocks. The stress-strain diagrams show that the addition of banana fiber improved compressive strength drastically, but the combined effect of fiber and biochar differs with soil type. For clayey soil, 6% cement and 1% fiber give a maximum compressive strength of 991 kPa, and for very fine sand, a combination of 5% biochar, 6% cement and 1% fiber gives a maximum compressive strength of 522 kPa as well as the best ductility. For medium-to-fine sand, 6% cement and 1% fiber give the best result, 1530 kPa, among the other combinations. The addition of fiber increases not only ductility but also compressive strength. The effect of biochar with fiber varies with the soil type.
Keywords: banana fiber, biochar, cement, compressed stabilized earth blocks, compressive strength
Procedia PDF Downloads 121
775 Quantum Coherence Sets the Quantum Speed Limit for Mixed States
Authors: Debasis Mondal, Chandan Datta, S. K. Sazim
Abstract:
Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that both of these quantities are related, and we thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve more and tighter evolution time bounds and to generalize them for mixed states. However, we are yet to know: (i) what is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, et cetera, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realizable bound is the need of the hour. It will be much more interesting if one can relate various properties of the states or operations, such as coherence, asymmetry, dimension, quantum correlations, et cetera, and the QSL. Although these understandings may help us to control and manipulate the speed of communication, apart from particular cases like the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) Can we relate such quantities with the QSL? In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of the quantum speed limit is how it behaves under classical mixing and partial elimination of states. This is because it may help us to properly choose a state or evolution operator to control the speed limit. In this paper, we try to address this question and show that the product of the time bound of the evolution and the quantum part of the uncertainty in energy, or the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase Skew information
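For readers unfamiliar with the central quantity, the Wigner-Yanase skew information of a state $\rho$ with respect to a Hamiltonian $H$ is

\[
I(\rho, H) \;=\; -\tfrac{1}{2}\,\mathrm{Tr}\!\left([\sqrt{\rho},\, H]^{2}\right),
\]

which vanishes when $\rho$ commutes with $H$ and reduces to the variance of $H$ for pure states. A Mandelstam-Tamm-type speed limit built on such a quantity has the schematic form

\[
\tau \;\gtrsim\; \frac{\hbar\,\Theta(\rho_{0},\rho_{\tau})}{\sqrt{2\, I(\rho, H)}},
\]

where $\Theta$ denotes a suitable distance between the initial and final states. The exact prefactor and distance measure depend on the metric chosen, so this expression is an illustration of the structure of such bounds rather than the paper's precise result.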
Procedia PDF Downloads 356
774 Management of Non-Revenue Municipal Water
Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu
Abstract:
The problem of non-revenue water (NRW) from municipal water distribution networks is common in many countries such as Turkey, where the average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) real or physical water losses, and 2) apparent or commercial water losses. Total water losses in Antalya city, Turkey, are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) with different numbers of service connections, ranging between a few connections and fewer than 3000 connections. The flow rate and water pressure to each DMA were continuously measured on-line by an accurate flow meter and a water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption, as given by the water meters, was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters at one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area for the prediction of water pressure variations at each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values. Therefore, pressure reducing valves (PRV) with constant head were installed to reduce the pressure to a suitable level that was determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure at the other DMAs cannot be reduced while complying with the minimum pressure requirement (3 bars) stated by the related standards. Results: Physical water losses were reduced considerably as a result of just reducing the water pressure. Further physical water loss reduction was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs that have considerable physical losses. Many bursts were detected, especially in the DMAs that have high physical water losses. The SCADA system was very useful for assessing the efficiency level of this method and for checking the quality of repairs. Regarding apparent water loss reduction, changing the customer water meters resulted in increasing water revenue by more than 20%. Conclusions: DMAs, SCADA, modelling, pressure management, leakage detection and accurate customer water meters are efficient tools for NRW reduction.
Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks
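The per-DMA accounting referred to above follows the standard IWA top-down water balance; the sketch below shows the arithmetic. All volumes are hypothetical placeholder figures (m3/month); in the study, they come from SCADA inflow metering and customer meter readings.

```python
# Illustrative IWA-style water balance for a single District Metered Area (DMA).

def iwa_water_balance(system_input, billed_metered, billed_unmetered,
                      unbilled_authorized, apparent_losses):
    """Return the main IWA water-balance components for one DMA (volumes per period)."""
    revenue_water = billed_metered + billed_unmetered
    authorized_consumption = revenue_water + unbilled_authorized
    non_revenue_water = system_input - revenue_water
    total_losses = system_input - authorized_consumption
    real_losses = total_losses - apparent_losses          # physical losses (leakage, bursts)
    return {
        "non_revenue_water": non_revenue_water,
        "total_losses": total_losses,
        "real_losses": real_losses,
        "nrw_percent": 100.0 * non_revenue_water / system_input,
    }

# Hypothetical monthly volumes for one DMA.
print(iwa_water_balance(system_input=50_000, billed_metered=26_000,
                        billed_unmetered=1_000, unbilled_authorized=500,
                        apparent_losses=4_500))
```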
Procedia PDF Downloads 406
773 Investigating the Neural Heterogeneity of Developmental Dyscalculia
Authors: Fengjuan Wang, Azilawati Jamaludin
Abstract:
Developmental Dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder with not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD and scored at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 and below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the 45 TD children's average data. Results showed that three out of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Overall, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established for the 45 children is tentative, the results from this study provide insights not only for future work on a "developmental brain norm" with reliable brain indicators but also for the viability of single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and interventions.
Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity
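To illustrate the single-case comparison used here, the sketch below implements the frequentist Crawford-Howell t-test, which asks whether one case's score deviates abnormally from a small control sample. The paper cites Crawford et al. (2010), whose procedure may differ in detail (e.g., Bayesian variants), so this is an assumption-laden illustration; the nodal-efficiency values are made up.

```python
from math import sqrt
from statistics import mean, stdev
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Test whether a single case differs from a control sample (two-tailed p)."""
    n = len(control_scores)
    m, s = mean(control_scores), stdev(control_scores)
    t = (case_score - m) / (s * sqrt((n + 1) / n))
    p_two_tailed = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p_two_tailed

# Hypothetical nodal-efficiency values: one DD case vs. 45 TD controls.
td_controls = [0.52 + 0.01 * (i % 7) for i in range(45)]
print(crawford_howell(0.41, td_controls))
```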
Procedia PDF Downloads 53
772 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be customized into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized. We can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered for any iteration. But the antenna parameters and geometries are too broad to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries; the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are to be noted here and evaluated for all possible conditions to get maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations, combinations, and data logging. MATLAB is also used to apply machine learning algorithms and to plot the data to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric parameters, and K-means and Hamming windowing are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
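The genetic-algorithm loop underlying such an optimizer can be sketched in a few lines. In the paper, the fitness of a candidate geometry comes from an HFSS simulation (gain, bandwidth, efficiency); in the sketch below, a synthetic placeholder fitness and hypothetical parameter names and bounds (slot width, flare aperture, flare rate) stand in for that call.

```python
import random

# Minimal genetic-algorithm sketch for geometry optimization; all parameter names,
# bounds and the fitness function are illustrative assumptions, not the paper's setup.

BOUNDS = {"slot_width_mm": (0.2, 2.0), "aperture_mm": (20.0, 60.0), "flare_rate": (0.05, 0.5)}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def fitness(c):
    # Placeholder: peaks when parameters sit mid-range; replace with an HFSS evaluation.
    return -sum(((c[k] - (lo + hi) / 2) / (hi - lo)) ** 2 for k, (lo, hi) in BOUNDS.items())

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(c, rate=0.2):
    return {k: (random.uniform(*BOUNDS[k]) if random.random() < rate else v)
            for k, v in c.items()}

def evolve(pop_size=30, generations=40, elite=5):
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

In practice, each fitness evaluation is an expensive electromagnetic simulation, which is why the paper parallelizes simulations and mines previously logged results rather than re-simulating every candidate.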
Procedia PDF Downloads 111
771 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor
Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst
Abstract:
Prediction of the two-phase water mixture level during fast depressurization of the Reactor Pressure Vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which at the beginning may be considered a conservative one does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. In case of a pressure decrease, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients, and the sudden emergence of void in the region occupied at the beginning by liquid, cause elevation of the two-phase mixture. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA), a dedicated test rig for simulation of KERENA, a new Boiling Water Reactor design by Framatome. The models are based on dynamic mass and energy equations. They are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam, or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core in a similar manner as it is simulated at INKA. The comparison of the simulation results against the reference data shows good agreement.
Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics
Procedia PDF Downloads 212
770 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to find the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies between several seconds and several minutes. All data are also labeled normal or abnormal. Due to the low positional accuracy and the time limit of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but due to the limited accuracy of time and the fact that some information in this signal is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
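The combination of linear and nonlinear HRV features feeding a classifier can be illustrated with a small sketch: time-domain features (SDNN, RMSSD) plus a simple descriptor of the Poincaré return map, evaluated with an SVM. The R-R data below are synthetic toy series; in the paper, R peaks come from the Pan-Tompkins detector applied to PhysioNet recordings, and the full feature set is richer.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms):
    """Linear (SDNN, RMSSD) and one nonlinear (Poincare SD1) HRV feature."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std()                         # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))     # short-term variability
    sd1 = np.sqrt(0.5) * diff.std()         # width of the Poincare return map
    return [sdnn, rmssd, sd1]

# Synthetic stand-in dataset: 0 = "normal", 1 = "abnormal" with reduced variability.
rng = np.random.default_rng(0)
X, y = [], []
for label, scale in [(0, 50.0), (1, 15.0)]:
    for _ in range(40):
        rr = 800 + rng.normal(0, scale, size=120)
        X.append(hrv_features(rr))
        y.append(label)

print(cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5).mean())
```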
Procedia PDF Downloads 264
769 Levels of Heavy Metals and Arsenic in Sediment and in Clarias gariepinus of Lake Ngami
Authors: Nashaat Mazrui, Oarabile Mogobe, Barbara Ngwenya, Ketlhatlogile Mosepele, Mangaliso Gondwe
Abstract:
Over the last several decades, the world has seen a rapid increase in activities such as deforestation, agriculture, and energy use. Subsequently, trace elements are being deposited into our water bodies, where they can accumulate to toxic levels in aquatic organisms and can be transferred to humans through fish consumption. Thus, though fish is a good source of essential minerals and omega-3 fatty acids, it can also be a source of toxic elements. Monitoring trace elements in fish is important for the proper management of aquatic systems and the protection of human health. The aim of this study was to determine concentrations of trace elements in sediment and in muscle tissues of Clarias gariepinus at Lake Ngami, in the Okavango Delta in northern Botswana, during low floods. The fish were bought from local fishermen, and samples of muscle tissue were acid-digested and analyzed for iron, zinc, copper, manganese, molybdenum, nickel, chromium, cadmium, lead, and arsenic using inductively coupled plasma optical emission spectroscopy (ICP-OES). Sediment samples were also collected and analyzed for the same elements and for organic matter content. Results show that in all samples iron was found in the greatest amount, while cadmium was below the detection limit. Generally, the concentrations of elements in sediment were higher than in fish, except for zinc and arsenic. While the concentration of zinc was similar in the two media, arsenic was almost 3 times higher in fish than in sediment. To evaluate the risk to human health from fish consumption, the target hazard quotient (THQ) and cancer risk for an average adult in Botswana, sub-Saharan Africa, and the riparian communities in the Okavango Delta were calculated for each element. All elements were found to be well below regulatory limits and do not pose a threat to human health, except arsenic. The results suggest that other benthic-feeding fish species could potentially have high arsenic levels too. This has serious implications for human health, especially for riparian households, for whom fish is a key component of food and nutrition security.
Keywords: arsenic, African sharptooth catfish, Okavango Delta, trace elements
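The non-carcinogenic risk screening mentioned above is typically done with the USEPA-style target hazard quotient; the sketch below shows the arithmetic. Every numeric input is a hypothetical placeholder, not data from this study, and the paper may use slightly different exposure assumptions.

```python
# Illustrative THQ screening calculation for a trace element in consumed fish.

def target_hazard_quotient(conc_mg_per_kg, intake_g_per_day, rfd_mg_per_kg_day,
                           body_weight_kg=60.0, exposure_freq_days=365,
                           exposure_years=70):
    averaging_days = 365 * exposure_years
    # Estimated daily dose in mg per kg body weight per day (1e-3 converts g to kg).
    edd = (exposure_freq_days * exposure_years * intake_g_per_day * conc_mg_per_kg
           * 1e-3) / (body_weight_kg * averaging_days)
    return edd / rfd_mg_per_kg_day   # THQ > 1 flags potential non-carcinogenic concern

# Example with made-up values: 0.8 mg/kg arsenic, 100 g fish/day,
# oral reference dose 3e-4 mg/kg/day for inorganic arsenic.
print(target_hazard_quotient(0.8, 100, 3e-4))
```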
Procedia PDF Downloads 193
768 Numerical Modelling of Wind Dispersal Seeds of Bromeliad Tillandsia recurvata L. (L.) Attached to Electric Power Lines
Authors: Bruna P. De Souza, Ricardo C. De Almeida
Abstract:
In some cities in the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes and other artificial supports. In this study, a numerical model was developed to simulate the seed dispersal of the Tillandsia recurvata species by wind, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, since the region is considered to be already infested. The model simulates the dispersal of each individual seed by integrating parameters from the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the 2012 to 2015 period. The dispersal model also incorporates the approximate number of bromeliads and source height data collected from the most infested electric power lines. The seeds' terminal velocity, which is an important input but was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determinant factor in transporting the seeds to distances beyond 200 meters as well as in introducing random variability into the seed dispersal process. Such variability was added to the model through the application of an inverse Fast Fourier Transform to the energy spectra of the wind velocity components, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of the bromeliad seed trajectory. Moreover, the atmospheric turbulence effect and dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area. It is important to mention that this model could be applied to any species and location as long as the seed's biological data and meteorological data for the region of interest are available.
Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind
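The spectral synthesis of turbulent fluctuations can be sketched as follows: draw random phases, assign each frequency bin an amplitude consistent with a prescribed one-sided energy spectrum, and transform back to the time domain with an inverse FFT. The spectral shape and parameters below are hypothetical placeholders, not the forms derived from the WRF micrometeorological output in the study.

```python
import numpy as np

def synthetic_fluctuations(n=4096, dt=0.1, sigma_u=0.8, length_scale=50.0, mean_wind=4.0):
    """Generate a zero-mean turbulent velocity fluctuation series from a spectrum."""
    freqs = np.fft.rfftfreq(n, d=dt)
    df = freqs[1] - freqs[0]
    # Von Karman-like one-sided spectral shape (illustrative placeholder form).
    spec = sigma_u**2 * (4 * length_scale / mean_wind) / \
           (1 + 70.8 * (freqs * length_scale / mean_wind) ** 2) ** (5 / 6)
    rng = np.random.default_rng(42)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    amp = np.sqrt(2.0 * spec * df)                   # amplitude of each cosine component
    fourier = 0.5 * n * amp * np.exp(1j * phases)    # rfft convention: X_k = (N/2) A_k e^{i phi}
    fourier[0] = 0.0                                 # enforce zero-mean fluctuations
    return np.fft.irfft(fourier, n=n)

u_prime = synthetic_fluctuations()
print(u_prime.std())   # should be of the order of sigma_u
```

In the dispersal model, such fluctuation series are added to the mean WRF wind at each time step of a seed's trajectory, which is what allows some seeds to travel well beyond the distance implied by the mean wind alone.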
Procedia PDF Downloads 142
767 Viability of EBT3 Film in Small Dimensions to Be Used for in-Vivo Dosimetry in Radiation Therapy
Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed
Abstract:
The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate the viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 film (Lot no. A05151201, Make: ISP) was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator for a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a field size of 10x10 cm2 for photons, and 100 cm SSD, 1.4 cm depth and a 15x15 cm2 applicator for electrons. The irradiated films were scanned in landscape orientation with a post-development time of 48 hours (minimum). Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation with different film dimensions ranging from 3 x 3 mm to 20 x 20 mm is very minimal, with a maximum standard deviation of 0.0058 in optical density for a dose level of 3000 cGy, and the standard deviation increases with the increase in dose level. So precautions must be taken while using the small-dimension films for higher doses. The analysis shows that there is insignificant variation in the absorbed dose with a change in dimension of the EBT3 film. The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need for recalibration for the particular dimension in use for dosimetric applications. However, for higher dose levels, one may need to calibrate the films for the particular dimension in use for higher accuracy. It was also noticed that the crystalline structure of the film got damaged at the edges while cutting the film, which can contribute to a wrong dose if the region of interest includes the damaged area of the film.
Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry
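A typical way such scanned films are converted to dose is via the net optical density of the exposed film relative to an unexposed piece, followed by a calibration fit; the sketch below illustrates this workflow. The pixel values and fit coefficients are hypothetical placeholders, not the calibration data of this study.

```python
import numpy as np

def net_optical_density(pv_unexposed, pv_exposed):
    """netOD from scanner pixel values of unexposed and exposed film."""
    return np.log10(np.asarray(pv_unexposed, float) / np.asarray(pv_exposed, float))

def dose_from_netod(net_od, a=950.0, b=3200.0, n=2.5):
    # Commonly used fit form D = a*netOD + b*netOD**n (coefficients are illustrative).
    return a * net_od + b * net_od ** n

pv_before, pv_after = 42000.0, 29500.0            # 16-bit scanner pixel values (made up)
net_od = net_optical_density(pv_before, pv_after)
print(f"netOD = {net_od:.4f}, estimated dose = {dose_from_netod(net_od):.1f} cGy")
```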
Procedia PDF Downloads 495
766 Sustainability and Awareness with Natural Dyes in Textile
Authors: Recep Karadag
Abstract:
Natural dyeing of textile materials began in prehistoric times and continued to be the main practice until the beginning of the 20th century; the first synthetic dyes were synthesized at the end of the 19th century. Despite the development of dyeing technologies and methods, natural dyeing has not advanced in recent years. In the face of the rapid advances of the synthetic dyestuff industries, natural dye processes did not develop, and therefore natural dyeing could not compete with synthetic dyes. At the same time, it was very difficult to dye large quantities of coloured textiles with natural dyes, and it was very difficult to obtain reproducible results in natural dyeing using classical and traditional processes. However, natural dyeing has been used to a limited extent in textile handicrafts up to now. The re-use of natural dyes to create awareness in textiles has become a very important point of view in recent years. Natural dyes have many awareness and sustainability properties. Natural dyes are more eco-friendly than synthetic dyes. Many natural dyes have antioxidant, antibacterial, antimicrobial, antifungal and anti-UV properties. It was thought in the past that only a limited number of colours could be obtained with natural dyes; on the contrary, the colour scale of natural dyes is very wide. Except for fluorescent colours, numerous colours can be obtained with natural dyes. The fastness properties of textiles dyed with natural dyes, such as light, washing and rubbing fastness, are good, and the fastness values can be improved depending on the dyeing processes. Thanks to these properties, mass production with natural dyes in textiles is possible. Therefore, a fabric dyeing machine was designed. This machine is very suitable for natural dyeing and mass production, and any dyeing machine can be modified for natural dyeing. In traditional natural dyeing processes, dye extraction and dyeing are carried out separately; with the designed machine, both procedures are combined. Firstly, the colouring compounds are extracted from the natural dye resources, then dyeing is carried out with the extracted colouring compounds. The colouring compounds are only moderately soluble in water. Less water is used for the extraction of colouring compounds from the dye resources and for dyeing with this new technique; on the contrary, a much larger quantity of water is needed to dissolve the colouring compounds in traditional dyeing. This dyeing technique is a very useful method for mass production with natural dyes, using less energy, less dye material, less water, etc. than traditional natural dyeing techniques. In this work, cotton, silk, linen and wool fabrics were dyed with some natural dye plants by this technique. According to the analyses, very good results were obtained with this new technique. These results demonstrate the sustainability and awareness potential of natural dyes for textiles.
Keywords: antibacterial, antimicrobial, natural dyes, sustainability
Procedia PDF Downloads 524
765 Averting a Financial Crisis through Regulation, Including Legislation
Authors: Maria Krambia-Kapardis, Andreas Kapardis
Abstract:
The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board, and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with an increasing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as shown by the preceding discussion, this is unlikely to be achieved in the foreseeable future, as 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remains a major challenge for governments, corporations, and professions alike.
Keywords: financial crisis, legislation, regulation, financial regulation
Procedia PDF Downloads 400764 Novel Electrospun Polymeric Nanofibers Loaded Different Medicaments as Drug Delivery Systems for Regenerative Endodontics
Authors: Nura Brimo, Dilek Cokeliler Serdaroglu, Tansel Uyar, Busra Uysal, Elif Bahar Cakici, Miris Dikmen, Zerrin Canturk
Abstract:
Background: A combination of antibiotics, including metronidazole (MET), ciprofloxacin (CIP), and minocycline (MINO), has been demonstrated to disinfect bacteria in necrotic teeth before regenerative procedures. It has been reported clinically that antibiotic pastes may lead to stem cell death and are difficult to remove from the canal system, which can limit the regenerative procedure. This study was designed to (1) synthesize nanofibrous webs containing various concentrations of different medicaments (triple, double, and calcium hydroxide, Ca(OH)₂), and (2) coat gutta-percha (GP) cones with these electrospun fibrous webs. Methods: Poly(vinylpyrrolidone) (PVP)-based electrospun fibrous webs were processed with low medicament concentrations. Scanning Electron Microscopy (SEM), Energy Dispersive X-Ray Spectroscopy (EDX), and X-Ray Photoelectron Spectroscopy (XPS) were carried out to investigate fiber morphology, confirm antibiotic incorporation, and characterize the GP-coated fibrous webs, respectively. The chemical and physical properties of dentine were examined via Fourier Transform Infrared Spectroscopy (FTIR) and Nano-SEM, respectively. The antimicrobial properties of the different fibrous webs were assessed against various bacteria by direct nanofiber/bacteria contact. Cytocompatibility was measured by applying the MTT method. Results: The mean fiber diameters of the medicament-containing experimental groups were in the nanometre range and significantly smaller than those of plain PVP fibers. EDX analysis confirmed the presence of the medicaments in the nanofibers. XPS analysis showed a complete coating of the GP cones with the fibers; FTIR and Nano-SEM showed no chemical or physical alteration of the dentine surface by the intracanal medicaments. Meanwhile, the nanofibrous webs led to a significant reduction in the percentage of viable bacteria compared with the negative control and PVP. Conclusion: Our findings suggest that GP cones coated with TA-NFs, DA-NFs, and Ca(OH)₂-NFs have significant potential for eliminating intracanal bacteria, show cell-friendly behavior, and offer features suitable for clinical use.Keywords: drug delivery, drug carrier, electrospinning, nano/microfibers, regenerative endodontic, morphology
Procedia PDF Downloads 112763 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. The pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices as functions of various factors. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts the PCI values. The model's input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
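To make the modelling step concrete, the short Python sketch below shows how a feed-forward network of the kind described in the abstract could map the 33 distress inputs (11 damage types at 3 severity levels) to a PCI value in the 0-100 range. It is only an illustration: the layer sizes, training settings, and the randomly generated data are assumptions, not the authors' actual architecture or dataset.

```python
# Minimal sketch (not the authors' code): a feed-forward network mapping 33 inputs,
# the percentage areas of 11 distress types at 3 severity levels, to a PCI in [0, 100].
import numpy as np
from tensorflow import keras

n_inputs = 11 * 3  # 11 distress types x 3 severity levels (low, medium, high)

model = keras.Sequential([
    keras.layers.Input(shape=(n_inputs,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    # A sigmoid output scaled by 100 keeps predictions inside the valid PCI range.
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# X: distress densities in percent of pavement area; y: PCI labels divided by 100.
# Both are random placeholders standing in for the 536,000 training samples.
X = np.random.uniform(0.0, 20.0, size=(1000, n_inputs))
y = np.random.uniform(0.0, 1.0, size=(1000, 1))
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)

pci_pred = 100.0 * model.predict(X[:5], verbose=0)  # back to the 0-100 scale
print(pci_pred.ravel())
```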
Procedia PDF Downloads 138762 An Appraisal of Blended Learning Approach for English Language Teaching in Saudi Arabia
Authors: H. Alqunayeer, S. Zamir
Abstract:
Blended learning, an ideal amalgamation of online learning and the traditional face-to-face approach, is a new approach that may result in outstanding outcomes in the realm of teaching and learning. The dexterity and effectiveness offered by the e-learning experience cannot be guaranteed in a traditional classroom, whereas one-to-one interaction, the essential element of learning, can only be found in a traditional classroom. In recent years, a spectacular expansion in the incorporation of technology into language teaching and learning has been observed in many universities of Saudi Arabia. Some universities recognize the importance of blending face-to-face and online instruction in language pedagogy, and Qassim University is one of the many universities adopting the Blackboard Learning Management System (LMS). The university adopted this new mode of teaching/learning in 2015. Although the experience is still immature, great pedagogical transformations are anticipated in the university through this new approach. This paper examines the role of blended language learning with particular reference to the influence of the Blackboard Learning Management System on the development of English language learning for EFL learners registered in the Bachelor of English language program. The paper aims at exploring three main areas: (i) the present status of blended learning in the educational process in Saudi Arabia, especially at Qassim University, by providing a survey report on the number of training courses on the Blackboard LMS conducted for the male and female teachers at various colleges of Qassim University; (ii) a survey on teachers' perceptions of the utility, application and outcomes of using the blended learning approach in teaching English language skills courses; (iii) the students' views on the efficiency of the blended learning approach in learning English language skills courses. Besides an analysis of students' limitations and challenges related to the experience of blended learning via Blackboard, the suggestions and recommendations offered by the language learners have also been considered. The study is empirical in nature. In order to gather data on the aforementioned areas, the survey questionnaire method was used: to study students' perceptions, a 5-point Likert-scale questionnaire was distributed to 200 students of the English department registered in the Bachelor in English program (level 5 through level 8). Teachers' views were surveyed by interviewing 25 EFL teachers skilled in using the Blackboard LMS in their lectures. In order to ensure the validity and reliability of the questionnaire, the inter-rater approach and Cronbach's alpha analysis were used, respectively. Analysis of variance (ANOVA) was used to analyze the students' perceptions of the productivity of the blended approach in learning English language skills. The analysis of feedback from Saudi teachers and students about the usefulness, ingenuity, and productivity of blended learning via the Blackboard LMS highlights the need to encourage and expand the implementation of this new approach in the field of English language teaching in Saudi Arabia, in order to foster a congenial learning atmosphere. Furthermore, it is hoped that the propositions and practical suggestions offered by the study will be useful for other similar learning environments.Keywords: blended learning, Blackboard learning management system, English as foreign language (EFL) learners, EFL teachers
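The reliability and inference steps mentioned in the abstract (Cronbach's alpha for the questionnaire and an ANOVA across groups of students) can be illustrated with a short Python sketch. The response data, the number of items, and the group structure below are invented for demonstration and do not come from the study.

```python
# Minimal sketch (not the study's analysis): Cronbach's alpha for 5-point Likert
# responses and a one-way ANOVA across four assumed study levels.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire items, each scored 1-5."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 20))   # 200 students, 20 Likert items (placeholder)
print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))

# One-way ANOVA on a mean perception score across four hypothetical levels (5-8).
groups = [rng.normal(3.5 + 0.1 * i, 0.6, size=50) for i in range(4)]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
```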
Procedia PDF Downloads 156761 Lipid Emulsion versus DigiFab in a Rat Model of Acute Digoxin Toxicity
Authors: Cansu Arslan Turan, Tuba Cimilli Ozturk, Ebru Unal Akoglu, Kemal Aygun, Ecem Deniz Kırkpantur, Ozge Ecmel Onur
Abstract:
Although the mechanism of action is not well known, Intravenous Lipid Emulsion (ILE) has been shown to be effective in the treatment of lipophilic drug intoxications. It is thought that ILE probably separates the lipophilic drugs from the target tissue by creating a lipid-rich compartment in the plasma. The second theory is that ILE provides energy to the myocardium through high-dose free fatty acids activating the voltage-gated calcium channels in the myocytes. In this study, the effects of ILE treatment on digoxin overdose, which is frequently observed in emergency departments, were investigated in an animal model in terms of cardiac side effects and survival. The study was carried out at Yeditepe University, Faculty of Medicine, Experimental Animals Research Center Labs in December 2015. Forty Sprague-Dawley rats weighing 300-400 g were randomly divided into 5 groups. As pre-treatment, the first group received saline, the second group received lipid, the third group received DigiFab, and the fourth group received DigiFab and lipid. Following that, digoxin was infused into all groups except the control group until death. First arrhythmia and cardiac arrest occurrence times were recorded. As no arrhythmia-causing medication was infused into it, Group 5 was excluded from the statistical analysis performed for the comparisons of first arrhythmia and death times. According to the results, although there was no significant difference in the statistical analysis comparing the four groups, when the rats exposed only to digoxin intoxication were compared with the rats pre-treated with ILE in terms of first arrhythmia and cardiac arrest occurrence times, a significant difference was observed between the groups. According to our results, using DigiFab treatment, intralipid treatment, or intralipid and DigiFab treatment for rats exposed to digoxin intoxication makes no significant difference in terms of first arrhythmia and death occurrence times. However, at the doses used in this study, it is not possible to conclude that ILE treatment is at least as successful as the known antidote. Since the statistical significance observed between the two groups was not confirmed in the inter-comparisons of all the groups, the study should be repeated with larger groups.Keywords: arrhythmia, cardiac arrest, DigiFab, digoxin intoxication
Procedia PDF Downloads 235760 Potency of Some Dietary Acidifiers on Productive Performance and Controlling Salmonella enteritidis in Broilers
Authors: Mohamed M. Zaki, Maha M. Hady
Abstract:
Salmonella spp. have been categorized as one of the world's biggest threats to human health, and poultry products are the most frequently incriminated sources. In Egypt, S. enteritidis and S. typhimurium were found to be the most prevalent serovars in poultry farms. It is recommended to eliminate salmonella from the living bird by combating salmonella contamination in feed in order to establish a healthy gut. Feed acidifiers are a group of feed additives containing low-molecular-weight organic acids and/or their salts, which act as performance promoters by lowering the pH in the gut, optimizing digestion, and inhibiting bacterial growth. The inclusion of organic acids in pure form, although effective in feed, is difficult to handle in feed mills, as they are corrosive and produce more losses during the pelleting process. The current study aimed to evaluate the impact of incorporating sodium diformate (SDF) and a commercial acidifier, CA (a mixture of butyric and propionic acids and their ammonium salts), at 0.4% dietary levels on broiler performance and the control of S. enteritidis infection. Two hundred and seventy unsexed Cobb chickens were allotted to one of three treatments (90/group): the control (no acidifier, C- & C+), the 0.4% SDF (SDF- & SDF+), and the 0.4% CA (CA- & CA+) dietary levels, for 35 days. Before the allocation of the groups, ten extra birds and a diet sample were bacteriologically examined to confirm the absence of salmonella contamination. The birds were raised in separate deep-litter pens and had free access to feed and water at all times. The experimentally formulated diets were kept at 40C. After 24 h of access to the different dietary treatments, all the birds in the positive groups (n=15/replicate) were inoculated intra-crop with 0.2 ml of a 24 h broth culture of S. enteritidis containing 1 × 10⁷ organisms, while the negative groups were inoculated with the same amount of negative broth; a second inoculation was done at 22 d of age. Cloacal swabs were collected individually from all birds 2 h pre-inoculation to confirm the absence of salmonella, and then 1, 3, 5, 7, and 21 days post-inoculation to recover salmonella. Performance parameters (body weight gain and feed efficiency) were calculated. Mortalities were recorded, and re-isolation of salmonella was performed to ensure it was the inoculated strain. The results revealed that dietary acidification with sodium diformate significantly improved broiler performance and tended to produce heavier birds compared to the negative control and CA groups. Moreover, the dietary inclusion of both acidifiers at a level of 0.4% was able to eliminate mortalities completely at the relevant inoculation time. Regarding the shedding of S. enteritidis in the positive groups, the SDF treatment resulted in a significant (p<0.05) cessation of shedding at 3 days post-inoculation compared to 7 days post-inoculation for the CA group. In conclusion, sodium diformate at a 0.4% dietary level in broiler diets has a valuable effect not only on broiler performance but also in eliminating S. enteritidis from the main source of salmonella contamination in poultry farms, namely feed.Keywords: acidifier, broilers, Salmonella spp, sodium diformate
Procedia PDF Downloads 288759 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks
Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci
Abstract:
The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by the SESAR JU, and two main solutions have been proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections. This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization
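As an illustration of the kind of building blocks such an optimization tool needs, the following Python sketch shows shortest-path routing on a small taxiway graph and a greedy tow-truck allocation rule. The node names, edge lengths, truck positions, and the greedy rule itself are assumptions made for demonstration, not the project's actual algorithm, which the abstract describes as a hybrid centralized/multi-agent approach.

```python
# Minimal sketch (not the project's tool): taxi routing on a weighted taxiway graph
# plus a greedy "closest idle truck" allocation. All data below are invented.
import networkx as nx

# Taxiway network: nodes are gates, intersections and runway holding points.
G = nx.Graph()
G.add_weighted_edges_from([
    ("gate_A1", "int_1", 250), ("int_1", "int_2", 400),
    ("int_2", "rwy_31_hold", 600), ("int_1", "int_3", 300),
    ("int_3", "rwy_31_hold", 800),
])

def taxi_route(graph, start, goal):
    """Minimum-distance taxi route (a stand-in for the optimisation tool)."""
    return nx.shortest_path(graph, start, goal, weight="weight")

def assign_truck(trucks, aircraft_node, graph):
    """Greedy allocation: pick the idle tow truck closest to the aircraft."""
    return min(trucks, key=lambda t: nx.shortest_path_length(
        graph, trucks[t], aircraft_node, weight="weight"))

trucks = {"truck_1": "int_3", "truck_2": "int_2"}   # assumed current truck positions
print(taxi_route(G, "gate_A1", "rwy_31_hold"))
print(assign_truck(trucks, "gate_A1", G))
```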
Procedia PDF Downloads 130758 Experimental Evaluation of Contact Interface Stiffness and Damping to Sustain Transients and Resonances
Authors: Krystof Kryniski, Asa Kassman Rudolphi, Su Zhao, Per Lindholm
Abstract:
ABB offers a range of turbochargers for diesel and gas engines from 500 kW to 80+ MW. These operate on ships, in power stations, generator sets, diesel locomotives and large off-highway vehicles. The units need to sustain harsh operating conditions and exposure to high speeds, high temperatures and varying loads. They are expected to work at over-critical speeds, effectively damping any transients and resonances encountered. Components are often connected via friction joints, and the designs of those interfaces need to account for surface roughness, texture, pre-stress, etc. to withstand fretting fatigue. Experience from the field contributed valuable input on component performance in harsh sea environments and their exposure to high temperature, speed and load conditions. A study of the tribological interactions of oxide formations provided an insight into the dynamic activities occurring between the surfaces, and oxidation was recognized as the dominant wear factor. Microscopic inspections of fatigue cracks on the turbine indicated insufficient damping and unrestrained structural stress leading to catastrophic failure if not prevented in time. The contact interface exhibits a strongly non-linear mechanism, and a piecewise approach was used to describe it. A set of samples representing combinations of materials, texture, surface and heat treatment was tested on a friction rig under a range of loads, frequencies and excitation amplitudes. A numerical technique was developed to extract the friction coefficient, tangential contact stiffness and damping. A vast amount of experimental data was processed with the multi-harmonic balance (MHB) method to categorize the components subjected to periodic excitations. At a pre-defined excitation level, both force and displacement formed semi-elliptical hysteresis curves having the same area and secant as the actual ones. By cross-correlating the in-phase and out-of-phase terms, respectively, it was possible to separate the elastic energy from the dissipation and derive the stiffness and damping characteristics.Keywords: contact interface, fatigue, rotor-dynamics, torsional resonances
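A much simplified illustration of how stiffness and damping can be read off a measured hysteresis loop is sketched below in Python. It uses a single synthetic harmonic cycle and extracts a secant stiffness and an equivalent viscous damping coefficient; the signal, frequency and parameter values are invented, and the single-harmonic processing here is only a stand-in for the multi-harmonic balance method described in the abstract.

```python
# Minimal sketch (not the authors' MHB code): secant stiffness and equivalent viscous
# damping from one synthetic force-displacement hysteresis loop under harmonic excitation.
import numpy as np

omega = 2 * np.pi * 50.0                      # excitation frequency [rad/s] (assumed)
t = np.linspace(0, 2 * np.pi / omega, 1000, endpoint=False)
X = 1e-5                                      # displacement amplitude [m] (assumed)
x = X * np.sin(omega * t)

# Synthetic response: in-phase (elastic) plus out-of-phase (dissipative) force.
k_true, c_true = 5e7, 2e3
f = k_true * x + c_true * X * omega * np.cos(omega * t)

# First-harmonic in-phase component gives the secant stiffness of the loop.
k_sec = 2 * np.mean(f * np.sin(omega * t)) / X

# Dissipated energy per cycle = area enclosed by the loop (trapezoidal line integral);
# equivalent viscous damping follows from E_d = pi * c_eq * omega * X^2.
E_d = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
c_eq = abs(E_d) / (np.pi * omega * X**2)

print(f"secant stiffness ~ {k_sec:.3e} N/m, equivalent damping ~ {c_eq:.3e} N*s/m")
```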
Procedia PDF Downloads 376757 A Re-Evaluation of Green Architecture and Its Contributions to Environmental Sustainability
Authors: Po-Ching Wang
Abstract:
Considering the notable effects of natural resource consumption and impacts on fragile ecosystems, reflection on contemporary sustainable design is critical. Nevertheless, the idea of 'green' has been misapplied and even abused, and, in fact, much damage to the environment has been done in its name. In the 1996 popular science fiction film Independence Day, an alien species, having exhausted the natural resources of one planet, moves on to another, a fairly obvious irony about contemporary human beings' irresponsible use of the Earth's natural resources in modern times. In fact, the human ambition to master nature and freely access the world's resources has long been inherent in manifestos evinced by the productions of the environmental design professions. Ron Herron's Walking City, an experimental architectural piece of 1964, is one example that comes to mind here. For this design concept, the architect imagined a gigantic nomadic urban aggregate that, by way of an insect-like robotic carrier, would move all over the world, on land and sea, to wherever its inhabitants want. Given the contemporary crisis regarding natural resources, ideas pertinent to structuring a sustainable environment have recently been attracting much interest in architecture, a field that has been accused of significantly contributing to ecosystem degradation. Great art, such as the Fallingwater building, has been regarded as nature-friendly, but its notion of 'green' might be inadequate in the face of the resource demands made by human populations today. This research suggests a more conservative and scrupulous attitude towards attempting to modify nature for architectural settings. Designs that pursue spiritual or metaphysical interconnections through anthropocentric aesthetics are not sufficient to benefit ecosystem integrity; though high-tech energy-saving processes may contribute to fine-scale sustainability, they may ultimately cause catastrophe on the global scale. Design with frugality is proposed in order to actively reduce environmental load. The aesthetic taste and ecological sensibility of the design professions and the public alike may have to be reshaped in order to make the goals of environmental sustainability viable.Keywords: anthropocentric aesthetic, aquarium sustainability, biosphere 2, ecological aesthetic, ecological footprint, frugal design
Procedia PDF Downloads 210756 Using Life Cycle Assessment in Potable Water Treatment Plant: A Colombian Case Study
Authors: Oscar Orlando Ortiz Rodriguez, Raquel A. Villamizar-G, Alexander Araque
Abstract:
There is a total of 1,027 municipal development plans in Colombia; 70% of municipalities have Potable Water Treatment Plants (PWTPs) in urban areas and 20% in rural areas. These PWTPs are typically supplied by surface waters (mainly rivers) and resort to gravity, pumping and/or mixed systems to bring the water from the catchment point, where the first stage of the potable water process takes place. Subsequently, a series of conventional methods are applied, consisting of a more or less standardized sequence of physicochemical and, sometimes, biological treatment processes which vary depending on the quality of the water that enters the plant. These processes require energy and chemical supplies in order to guarantee an adequate product for human consumption. Therefore, in this paper, we applied the environmental methodology of Life Cycle Assessment (LCA) to evaluate the environmental loads of a potable water treatment plant (PWTP) located in northeastern Colombia, following the international guidelines of ISO 14040. The different stages of the potable water process, from the catchment point through pumping to the distribution network, were thoroughly assessed. The functional unit was defined as 1 m³ of water treated. The data were analyzed through the Ecoinvent v.3.01 database and modeled and processed in the software LCA-Data Manager. The results showed that the largest impact in the plant was caused by Clarifloc (82%), followed by chlorine gas (13%) and power consumption (4%). In this context, the company responsible for the sustainability of the potable water service should ideally reduce these environmental loads during the potable water process. One strategy could be to reduce the use of Clarifloc by applying coadjuvants or other coagulant agents. Also, the preservation of the water source that supplies the treatment plant constitutes an important factor, since its deterioration confers unfavorable characteristics on the water to be treated. In conclusion, treatment processes and techniques, bioclimatic conditions and culturally driven consumption behavior vary from region to region. Furthermore, changes in treatment processes and techniques are likely to affect the environment during all stages of a plant's operation cycle.Keywords: climate change, environmental impact, life cycle assessment, treated water
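The contribution analysis reported above (Clarifloc, chlorine gas and electricity shares per functional unit) follows the basic LCA pattern of multiplying inventory amounts by characterization factors. The Python sketch below illustrates that calculation with invented placeholder values; the amounts and factors are assumptions and do not correspond to the Ecoinvent data used in the study.

```python
# Minimal sketch (not the study's model): per-functional-unit (1 m3) impact and
# contribution shares computed from an inventory and characterization factors.
inventory_per_m3 = {            # amount of each input used to treat 1 m3 (assumed)
    "clarifloc_kg": 0.020,
    "chlorine_gas_kg": 0.003,
    "electricity_kWh": 0.150,
}
gwp_factors = {                 # kg CO2-eq per unit of input (illustrative only)
    "clarifloc_kg": 2.5,
    "chlorine_gas_kg": 1.0,
    "electricity_kWh": 0.40,
}

impacts = {k: inventory_per_m3[k] * gwp_factors[k] for k in inventory_per_m3}
total = sum(impacts.values())
for name, value in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {value:.4f} kg CO2-eq  ({100 * value / total:.1f} %)")
print(f"total per m3       {total:.4f} kg CO2-eq")
```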
Procedia PDF Downloads 226755 A Biophysical Model of CRISPR/Cas9 on- and off-Target Binding for Rational Design of Guide RNAs
Authors: Iman Farasat, Howard M. Salis
Abstract:
The CRISPR/Cas9 system has revolutionized genome engineering by enabling site-directed and high-throughput genome editing, genome insertion, and gene knockdowns in several species, including bacteria, yeast, flies, worms, and human cell lines. This technology has the potential to enable human gene therapy to treat genetic diseases and cancer at the molecular level; however, the current CRISPR/Cas9 system suffers from seemingly sporadic off-target genome mutagenesis that prevents its use in gene therapy. A comprehensive mechanistic model that explains how CRISPR/Cas9 functions would enable the rational design of the guide-RNAs responsible for target site selection while minimizing unexpected genome mutagenesis. Here, we present the first quantitative model of the CRISPR/Cas9 genome mutagenesis system that predicts how guide-RNA sequences (crRNAs) control target site selection and cleavage activity. We used statistical thermodynamics and the law of mass action to develop a five-step biophysical model of Cas9 cleavage, and examined it in vivo and in vitro. To predict a crRNA's binding specificities and cleavage rates, we then compiled a nearest neighbor (NN) energy model that accounts for all possible base pairings and mismatches between the crRNA and the possible genomic DNA sites. These calculations correctly predicted crRNA specificity across 5518 sites. Our analysis reveals that Cas9 activity and specificity are anti-correlated, and the trade-off between them is the determining factor in performing an RNA-mediated cleavage with minimal off-targets. To find an optimal solution, we first created a scheme of safe-design criteria for Cas9 target selection by systematic analysis of available high-throughput measurements. We then used our biophysical model to determine the optimal Cas9 expression levels and timing that maximize on-target cleavage and minimize off-target activity. We successfully applied this approach in bacterial and mammalian cell lines to reduce off-target activity to near-background mutagenesis levels while maintaining high on-target cleavage rates.Keywords: biophysical model, CRISPR, Cas9, genome editing
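The core idea of scoring candidate genomic sites with an energy model and converting the result into a binding probability through Boltzmann weighting can be illustrated with the short Python sketch below. It is a deliberately simplified stand-in for the published five-step model: the position-dependent mismatch penalties, the treatment of the seed region, and the omission of PAM scoring are all assumptions made for demonstration only.

```python
# Minimal sketch (not the published model): sum free-energy penalties for crRNA/DNA
# mismatches at a site, then Boltzmann-weight the total dG to get relative binding.
import math

RT = 0.593  # kcal/mol at 25 C

def mismatch_penalty(position: int) -> float:
    """Illustrative penalty: PAM-proximal (seed) mismatches cost more (assumed values)."""
    return 3.0 if position >= 10 else 1.0   # positions 0..19, PAM assumed after position 19

def site_dG(crRNA: str, site: str) -> float:
    return sum(mismatch_penalty(i) for i, (a, b) in enumerate(zip(crRNA, site)) if a != b)

def relative_binding(crRNA: str, sites: list) -> dict:
    weights = {s: math.exp(-site_dG(crRNA, s) / RT) for s in sites}
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}

crRNA     = "GACGCATAAAGATGAGACGC"
on_target = "GACGCATAAAGATGAGACGC"
off_site  = "GACGCATAAAGATGAGACGA"   # one PAM-proximal mismatch
print(relative_binding(crRNA, [on_target, off_site]))
```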
Procedia PDF Downloads 406