Search results for: large amplitude
6093 Study of University Course Scheduling for Crowd Gathering Risk Prevention and Control in the Context of Routine Epidemic Prevention
Authors: Yuzhen Hu, Sirui Wang
Abstract:
Universities, as training bases for intellectual talent, host large numbers of students. Teaching is a primary activity in universities, and during the teaching process large numbers of people gather both inside and outside the teaching buildings, posing a high risk of close contact. The class schedule is the fundamental basis for teaching activities in universities and plays a crucial role in the management of teaching order. Different class schedules lead to varying degrees of indoor gathering and different movement trajectories of class attendees. In recent years, highly contagious diseases have frequently occurred worldwide, and how to reduce the risk of infection has remained a pressing issue of public safety. "Reducing gatherings" is one of the core measures in epidemic prevention and control, and in specific environments it can be achieved through scientific scheduling. Therefore, the goal of scientific prevention and control can be met by considering the reduction of the risk of excessive gathering of people when arranging the course schedule. Firstly, we address the issue of personnel gathering along the various pathways on campus and establish a nonlinear mathematical model with the goals of minimizing congestion and maximizing teaching effectiveness. Next, we design an improved genetic algorithm, incorporating real-time evacuation operations based on tracking search and multidimensional positive gradient cross-mutation operations, considering the characteristics of outdoor crowd evacuation. Finally, we apply undergraduate course data from a university in Harbin to conduct a case study. The case study compares and analyzes the effects of the algorithm improvements and the optimization of gathering situations, and explores the impact of path blocking on the degree of gathering on other pathways.
Keywords: the university timetabling problem, risk prevention, genetic algorithm, risk control
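As an illustration of the kind of evolutionary loop such timetabling work typically relies on, the following is a minimal Python sketch. It assumes a simple encoding in which each course is assigned a (room, timeslot) gene and fitness penalizes co-located gatherings; the problem sizes, operators, and penalty are illustrative assumptions, not the authors' model.

```python
import random

# Illustrative problem size (not taken from the paper)
N_COURSES, N_ROOMS, N_SLOTS = 20, 5, 10

def random_schedule():
    # One (room, timeslot) gene per course
    return [(random.randrange(N_ROOMS), random.randrange(N_SLOTS))
            for _ in range(N_COURSES)]

def congestion(schedule):
    # Penalize many courses sharing the same room/slot -- a stand-in
    # for the paper's gathering-risk objective
    counts = {}
    for gene in schedule:
        counts[gene] = counts.get(gene, 0) + 1
    return sum(c * (c - 1) for c in counts.values())

def crossover(a, b):
    cut = random.randrange(1, N_COURSES)
    return a[:cut] + b[cut:]

def mutate(schedule, rate=0.05):
    return [(random.randrange(N_ROOMS), random.randrange(N_SLOTS))
            if random.random() < rate else gene for gene in schedule]

def evolve(pop_size=50, generations=200):
    pop = [random_schedule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=congestion)          # lower congestion = fitter
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=congestion)

best = evolve()
print("best congestion penalty:", congestion(best))
```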
Procedia PDF Downloads 91
6092 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame
Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin
Abstract:
The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, much new-type equipment based on power electronics has been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with the problem above. Compared with traditional equipment such as generators, new-type controllable devices, represented by FACTS (Flexible AC Transmission System), have more accurate control ability and respond faster, but they are too expensive for wide use. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new-type controllable equipment on both the time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different space scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid vs. its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. By analysis, the interface power flow can be controlled by the generators, and the specific line power flow between the two-layer regions can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by the generators, the bigger the control margin of the TCSC; however, the total consumption of the generators is then much higher. Secondly, the coordination of different time scales is studied to further optimize the total consumption of the generators and the control margin of the TCSC, so that the minimum control cost can be acquired. The coordination method for two-layer ultra-short-term correction vs. AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-space-time scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the decrease of control cost and will provide a reference for following studies in this field.
Keywords: FACTS, multi-space-time frame, optimal control, TCSC
Procedia PDF Downloads 267
6091 A Smart Sensor Network Approach Using Affordable River Water Level Sensors
Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan
Abstract:
Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has brought sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store or process. The sensor unit must be intelligent enough to pre-process the collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation of smart sensor networks. For example, in a water level monitoring system, a weather forecast can be extracted from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or switch on the sleeping mode otherwise. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network at the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors in a network as a case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which hopefully inspires researchers working in the sensor research domain.
Keywords: smart sensing, internet of things, water level sensor, flooding
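As a toy illustration of the simplest of the three sensing methods mentioned above, the following Python sketch flags water-level samples that rise well above a rolling statistical baseline. The window size, threshold factor, and synthetic data are assumptions for illustration, not the paper's MoPBAS method.

```python
import numpy as np

def detect_events(levels, window=24, k=3.0):
    """Flag samples more than k standard deviations above a rolling
    baseline of the previous `window` samples (illustrative only)."""
    levels = np.asarray(levels, dtype=float)
    flags = np.zeros(len(levels), dtype=bool)
    for i in range(window, len(levels)):
        baseline = levels[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and levels[i] > mu + k * sigma:
            flags[i] = True
    return flags

# Synthetic example: a flat river level with a sudden flood peak
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(1.0, 0.02, 100),
                         rng.normal(2.5, 0.05, 10)])
print("flagged sample indices:", np.nonzero(detect_events(series))[0])
```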
Procedia PDF Downloads 382
6090 Integrated Genetic-A* Graph Search Algorithm Decision Model for Evaluating Cost and Quality of School Renovation Strategies
Authors: Yu-Ching Cheng, Yi-Kai Juan, Daniel Castro
Abstract:
Energy consumption of buildings has been an increasing concern for researchers and practitioners in the last decade. Sustainable building renovation can reduce energy consumption and carbon dioxide emissions; meanwhile, it can also extend existing buildings' useful life and facilitate environmental sustainability while providing social and economic benefits to society. School buildings are different from other designed spaces, as they are more crowded and host the largest portion of daily activities and occupants. Strategies that focus on reducing energy use while also improving the students' learning environment have become a significant subject in the development of sustainable school buildings. A decision model is developed in this study to solve complicated and large-scale combinatorial, discrete and determinate problems such as school renovation projects. The task of this model is to automatically search for the most cost-effective (lower-cost and higher-quality) renovation strategies. In this study, the search for optimal school building renovation solutions is by nature a large-scale zero-one programming determinate problem. A* is suitable for solving deterministic problems due to its stable and effective search process, while genetic algorithms (GA) provide opportunities to acquire global optimal solutions in a short time via their indeterminate, probability-based search process. These two algorithms are combined in this study to consider trade-offs between renovation cost and improved quality; the resulting decision model is able to evaluate current school environmental conditions and suggest an optimal scheme of sustainable school building renovation strategies. Through the adoption of this decision model, school managers can overcome existing limitations and transform school buildings into spaces more beneficial to students and friendly to the environment.
Keywords: decision model, school buildings, sustainable renovation, genetic algorithm, A* search algorithm
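As a reference point for the A* half of the hybrid, here is a minimal textbook A* search in Python on a toy grid. The grid stands in for a renovation-strategy search space; the cost and heuristic functions are illustrative assumptions rather than the paper's cost/quality formulation.

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Textbook A*: expand the node with lowest f = g + h."""
    open_set = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set,
                               (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None

# Toy 5x5 grid standing in for a renovation-strategy graph
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

print(a_star((0, 0), (4, 4), neighbors,
             cost=lambda a, b: 1,                               # unit edge cost
             heuristic=lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1])))
```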
Procedia PDF Downloads 119
6089 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics for clinical use and requires rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopic techniques, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges: (I) long sample pretreatment processes; (II) difficulties in the direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) the construction of diagnostic tools on material- and device-based platforms for real-case biomedical application. The development of chips with nanomaterials is promising for addressing these critical issues. Mass spectrometry (MS) has displayed high sensitivity and accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed for mass measurement in seconds and high sensitivity at low cost, suited to large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS, through a series of chips with gold nanoshells on the surface prepared by controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF) or exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.
Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 163
6088 Environmental Radioactivity Analysis by a Sequential Approach
Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab
Abstract:
Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and for the assessment of the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples could help to get around this difficulty, but, unfortunately, new issues are raised by gamma-ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, without quantification and in a short time, gamma rays from a low-count source. This method does not require a pulse-height spectrum acquisition, as usually adopted in gamma spectrometry measurements. It is based on a chronological record of each detected photon through simultaneous measurements of its energy ε and its arrival time τ at the detector, the parameter pair [ε,τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main object of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for appropriate health oversight of the public and of the concerned workers, the analysis has been extended to obtain a reliable quantification of the radionuclides present in environmental samples. For illustration, we consider as an example the problem of detection and quantification of 238U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by a Ge(Hp) semiconductor junction, of the 63 keV gamma rays emitted by 234Th (a progeny of 238U). The generated EMS series are analyzed by Bayesian inference. The application of the sequential Bayesian approach to environmental radioactivity analysis offers the possibility of reducing the measurement time without requiring large environmental samples, and consequently avoids the associated inconveniences. The work is still in progress.
Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method
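To make the sequential idea concrete, here is a minimal Python sketch of Bayesian evidence accumulation over an event-mode sequence: each detected event updates the log-odds that a 63 keV line source is present against a background-only hypothesis. The spectra, mixture weight, and energy range are invented for illustration and are not the authors' likelihood model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypotheses: H0 = flat background only, H1 = background + 63 keV line.
def likelihood(energy, source_present):
    background = 1.0 / 200.0                       # flat over 0-200 keV (assumed)
    if not source_present:
        return background
    line = np.exp(-0.5 * ((energy - 63.0) / 1.5) ** 2) / (1.5 * np.sqrt(2 * np.pi))
    return 0.7 * background + 0.3 * line           # mixture weight is assumed

def sequential_log_odds(energies, prior_odds=1.0):
    log_odds = np.log(prior_odds)
    for e in energies:                             # one update per EMS event
        log_odds += np.log(likelihood(e, True)) - np.log(likelihood(e, False))
    return log_odds

# Simulated EMS energies: mostly background plus some 63 keV events
events = np.concatenate([rng.uniform(0, 200, 80), rng.normal(63, 1.5, 20)])
print("posterior log-odds for source present:", sequential_log_odds(events))
```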
Procedia PDF Downloads 495
6087 Numerical Simulation and Analysis of Axially Restrained Steel Cellular Beams in Fire
Authors: Asal Pournaghshband
Abstract:
This paper presents the development of a finite element model to study the large-deflection behavior of restrained stainless steel cellular beams at elevated temperature. Cellular beams are widely used for the efficient utilization of raw materials to facilitate long spans with faster construction, resulting in a sustainable design solution that can enhance the performance and merit of any construction project. However, their load-carrying capacity is less than that of equivalent beams without openings, due to the shear-moment interaction that develops at the openings. In structural frames, owing to element continuity, such beams are restrained by their adjoining members, which has a substantial effect on their behavior in fire. Stainless steel has also become an integral part of the built environment due to its excellent corrosion resistance, whole life-cycle costs, and sustainability. This paper reports numerical investigations into the effect of structural continuity on the thermo-mechanical performance of restrained steel beams with circular and elongated circular web openings in fire. The numerical model is first validated using existing numerical results from the literature and then employed to perform a parametric study. Structural continuity is evaluated through the application of different levels of axial restraint to the response of carbon steel and stainless steel cellular beams in fire. The transit temperature of the stainless steel cellular beam is shown to be less affected by the level of axial stiffness than that of the equivalent carbon steel cellular beam. Overall, it was established that whereas stainless steel cellular beams show stages of behavior in fire similar to those of carbon steel cellular beams, they are capable of withstanding higher temperatures prior to the onset of catenary action in large deflection, despite the higher thermal expansion of the stainless steel material.
Keywords: axial restraint, catenary action, cellular beam, fire, numerical modeling, stainless steel, transit temperature
Procedia PDF Downloads 81
6086 Characterization of the Near-Wake of an Ahmed Body Profile
Authors: Stéphanie Pellerin, Bérengére Podvin, Luc Pastur
Abstract:
In the context of aerodynamic vehicles, the flow around an Ahmed body profile is simulated using the velocity-vorticity formulation of the Navier-Stokes equations, associated with a penalization method for solids and Large Eddy Simulation for turbulence. The study focuses both on the influence of the ground on the flow and on the dissymmetry of the wake, observed for a ground clearance greater than 10% of the body height H. Unsteady and mean flows are presented and analyzed. A POD study completes the analysis and gives information on the most energetic structures of the flow.
Keywords: Ahmed body, bi-stability, LES, near wake
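POD of the kind mentioned here is commonly computed from a snapshot matrix via the SVD. The following Python sketch shows the mechanics on synthetic data; in the study the columns would be LES velocity snapshots, whereas here they are fabricated.

```python
import numpy as np

# Proper Orthogonal Decomposition of a snapshot matrix via SVD
rng = np.random.default_rng(2)
n_points, n_snapshots = 500, 40
snapshots = rng.normal(size=(n_points, n_snapshots))   # columns = flow snapshots

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow                   # subtract the mean flow

# Left singular vectors = spatial POD modes; singular values^2 ~ modal energy
modes, sing_vals, _ = np.linalg.svd(fluctuations, full_matrices=False)
energy = sing_vals**2 / np.sum(sing_vals**2)
print("energy captured by first 3 modes:", energy[:3].sum())
```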
Procedia PDF Downloads 625
6085 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases
Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner
Abstract:
Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once these are approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. The adherence to basic safety principles in ISO 12807 is very pessimistic and is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that the intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration, because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty about the releasable fraction of material within the package ullage space, because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations
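A minimal numerical sketch of the kind of balance invoked above, keeping only Brownian coagulation and gravitational settling for a monodisperse aerosol: dN/dt = -K N² - (v_s/H) N, with v_s the Stokes settling velocity. All parameter values below are illustrative assumptions, not figures from the safety case.

```python
# Simplified aerosol number balance in a sealed package ullage
K = 5e-16        # m^3/s, Brownian coagulation coefficient (assumed)
rho_p = 2000.0   # kg/m^3, particle density (assumed)
d_p = 1e-6       # m, particle diameter (assumed)
mu = 1.8e-5      # Pa*s, air viscosity
g = 9.81         # m/s^2
H = 0.5          # m, settling height of the ullage space (assumed)

v_s = rho_p * g * d_p**2 / (18 * mu)   # Stokes settling velocity

def step(N, dt):
    # Explicit Euler step of dN/dt = -K*N^2 - (v_s/H)*N
    return N + dt * (-K * N**2 - (v_s / H) * N)

N, dt = 1e12, 1.0                      # particles/m^3, 1 s time step
for _ in range(3600):                  # integrate over one hour
    N = step(N, dt)
print(f"airborne concentration after 1 h: {N:.3e} per m^3")
```

Both terms deplete the airborne (and hence releasable) concentration over time, which is the basis of the argument for a lower releasable fraction.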
Procedia PDF Downloads 117
6084 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints
Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich
Abstract:
Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction, minimizing carbon emissions during transportation for a more sustainable life. This study focuses on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN-AW6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80°C hot water. It was found that processing single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope; at higher temperatures they were larger in size but fewer in number. An induction heating process was developed which not only successfully reduced or eliminated the voids in PA6 bonded joints but also significantly reduced the processing times for joining. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to increased void formation in the bond line. Subsequent single lap shear tests revealed that the increasing void content led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). Also, there was a 17% reduction in lap shear strength when the consolidation temperature was raised from 220˚C to 300˚C during induction heating. However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.
Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void
Procedia PDF Downloads 123
6083 Investigation of Shear Strength and Dilative Behavior of Coarse-Grained Samples Using Laboratory Tests and Machine Learning Technique
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is applied widely in designing and constructing these earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and the physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles and perform the tests, which cannot be counted as an exact estimation of the parameters and behavior of the original soil. This paper describes a new methodology that scales the particle grading distribution of a well-graded gravel sample down to a smaller sample that can be tested in an ordinary direct shear apparatus, in order to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed with 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Splines (MARS) technique was used to develop an equation for predicting shear strength and dilative behavior based on the size distribution of the coarse-grained soil particles. Also, an uncertainty analysis was performed in order to examine the reliability of the proposed equation.
Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis
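MARS models are built from hinge basis functions max(0, x - c) and max(0, c - x). The Python sketch below fits a fixed set of hinges by least squares to synthetic shear-test-like data; a full MARS implementation would also select the knots adaptively (forward pass) and prune them (backward pass). The knots and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)                    # e.g., a normalized confining pressure
# Piecewise-linear "truth" with a kink at x = 4, plus noise
y = np.where(x < 4, 30 + 1.5 * x, 36 + 0.3 * (x - 4)) + rng.normal(0, 0.5, 200)

def pos(z):
    # The hinge function underlying MARS basis terms
    return np.maximum(0.0, z)

knots = [2.0, 4.0, 6.0]                        # assumed knot locations
basis = [np.ones_like(x)]
for c in knots:
    basis.append(pos(x - c))
    basis.append(pos(c - x))
X = np.column_stack(basis)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit of the basis
pred = X @ coef
print("RMSE of hinge-basis fit:", np.sqrt(np.mean((pred - y) ** 2)))
```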
Procedia PDF Downloads 162
6082 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept
Authors: Ahmed El Naggar, Homyan Saleh
Abstract:
Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, that provides the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated through laboratory research, owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and to provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated from citizen science data. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and the residual energy at the chute's downstream end, all from a remote site. The prototype observations offered a full-scale proof of concept, while the laboratory results were efficiently validated against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics, and it offers a framework for a small collection of long-missing prototype data.
Keywords: aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, remote sensing, inception point, optical flow, turbulence, residual energy
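One plausible reading of the image-based velocimetry described above is dense optical flow between consecutive frames. The sketch below uses OpenCV's Farneback optical flow on two synthetic frames; the pixel-to-metre scaling and frame rate are assumed calibration values, not those of the study.

```python
import numpy as np
import cv2

# Two synthetic grayscale frames; the second is the first shifted 3 px
# to the right, mimicking advection of surface texture between frames.
rng = np.random.default_rng(4)
frame0 = (rng.random((240, 320)) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=3, axis=1)

# Farneback dense optical flow; positional args are
# (prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

pixels_per_metre = 100.0          # assumed calibration
fps = 30.0                        # assumed frame rate
u = flow[..., 0].mean() * fps / pixels_per_metre
print(f"mean streamwise surface velocity: {u:.2f} m/s")
```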
Procedia PDF Downloads 92
6081 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes
Authors: Taylor Harris
Abstract:
Gustilo-Anderson grade III tibial fractures are exceptionally difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by orthopedic and plastic surgeons. While utilizing an orthoplastics approach has decreased the rates of adverse outcomes in these injuries, there is a large amount of variation in exactly how an orthoplastics team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single-stage manner, but there is a paucity of large-scale studies to provide evidence supporting this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, in the hope of better informing an evidence-based orthoplastics approach to managing open tibial fractures. A systematic review of the literature was performed. Medline and Google Scholar were searched, and all studies published in English since 2000 were included. 103 studies were initially evaluated for inclusion, and the reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults managed with a single-stage orthoplastics approach were identified and evaluated with regard to the outcomes of interest. Exclusion criteria were studies with patients under 16 years old, case studies, systematic reviews, and meta-analyses. Primary outcomes of interest were the rates of deep infection and the rates of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection as an outcome, with rates ranging from 0.98% to 20% and a pooled rate of 7.34%. 7 studies reported rates of limb salvage, with a range of 96.25% to 100% and a pooled rate of 97.8%. 6 reported rates of non-union, with a range of 0% to 14% and a pooled rate of 6.6%. 6 reported time to bone union, with a range of 24 to 40.3 weeks and a pooled average of 34.2 weeks, and 4 reported rates of reoperation, ranging from 7% to 55%, with a pooled rate of 31.1%. The few studies that compared a single-stage to a multi-stage approach side by side unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with an orthoplastics approach carried out specifically in a single stage show low rates of adverse outcomes. Large-scale studies of orthoplastic collaboration not completed strictly in a single stage, or completed in multiple stages, have not reported outcomes as favorable. We recommend not only that orthopedic and plastic surgeons collaborate in the management of severe open tibial fractures, but also that they plan to carry out definitive fixation and coverage in a single stage for improved outcomes.
Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review
Procedia PDF Downloads 86
6080 Temperature Effect on Changing of Electrical Impedance and Permittivity of Ouargla (Algeria) Dunes Sand at Different Frequencies
Authors: Naamane Remita, Mohammed laïd Mechri, Nouredine Zekri, Smaïl Chihi
Abstract:
The goal of this study is the estimation of the real and imaginary components of both the electrical impedance and the permittivity (z', z'' and ε', ε'', respectively) of Ouargla dune sand at different temperatures and different frequencies, under an alternating (AC) excitation of 1 volt, using impedance spectroscopy (IS). This method is simple and non-destructive, and its results can frequently be correlated with a number of physical properties, with dielectric properties, and with the impact of composition on the electrical conductivity of solids. The experimental results revealed that the real part of the impedance is higher at higher temperatures in the lower frequency region and gradually decreases with increasing frequency. At high frequencies, all values of the real part of the impedance were positive. At low frequency, the values of the imaginary part were positive at all temperatures except 1200 degrees, where they were negative. At medium frequencies, the reactance values were negative at temperatures of 25, 400, 200 and 600 degrees and became positive at the remaining temperatures. At high frequencies of the order of MHz, the values of the imaginary part of the electrical impedance behaved opposite to what we recorded at the medium frequencies. The results showed that the electrical permittivity decreases with increasing frequency: at low frequency we recorded permittivity values of the order of 10^11, at medium frequencies of the order of 10^7, and at high frequencies of the order of 10^2. The real part of the electrical permittivity took large values at temperatures of 200 and 600 degrees Celsius at the lowest frequency, while the smallest permittivity value was recorded at 400 degrees Celsius at the highest frequency. The results also showed large values of the imaginary part of the electrical permittivity at the lowest frequency, decreasing as the frequency increases (the higher the frequency, the lower the values of the imaginary part of the electrical permittivity). The character of the electrical impedance variation indicates an opportunity to determine the polarization of Ouargla dune sand and to ascertain whether this material consumes or produces energy. It is also possible to identify a satisfactory equivalent electric circuit, whether inductive or capacitive.
Keywords: electrical impedance, electrical permittivity, temperature, impedance spectroscopy, dunes sand ouargla
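For readers unfamiliar with the IS quantities, the complex impedance and complex permittivity are linked through the empty-cell capacitance C0 by ε* = 1/(jωC0Z). The Python sketch below evaluates this relation for an assumed parallel R-C sample model; the cell geometry and R-C values are illustrative, not the measured sand data.

```python
import numpy as np

eps0 = 8.854e-12                      # F/m, vacuum permittivity
area, gap = 1e-4, 1e-3                # m^2, m (assumed cell geometry)
C0 = eps0 * area / gap                # empty-cell capacitance

f = np.logspace(1, 6, 6)              # 10 Hz to 1 MHz
omega = 2 * np.pi * f

# Model the sample as a parallel R-C element (a common first guess)
R, C = 1e6, 1e-10
Z = 1.0 / (1.0 / R + 1j * omega * C)  # complex impedance Z = Z' + jZ''

eps = 1.0 / (1j * omega * C0 * Z)     # complex permittivity eps* = eps' - j*eps''
for fi, zi, ei in zip(f, Z, eps):
    print(f"f={fi:10.1f} Hz  Z'={zi.real:12.1f}  Z''={zi.imag:14.1f}  eps'={ei.real:12.1f}")
```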
Procedia PDF Downloads 48
6079 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt
Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said
Abstract:
Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability. Architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study aims to evaluate the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized according to scale: large-scale, medium-scale, and small-scale. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the usage of performance-based simulation tools, and the usage of BIM tools for energy efficiency purposes. The results of the study revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their building design; even there, the application is limited to certain project types or to cases where the client requests it. The percentage is much lower among medium-scale firms, and such strategies are almost absent in the small-scale ones. This demonstrates the urgent need to raise the awareness of the Egyptian architectural design community of the great importance of implementing these methods, starting from the early stages of building design. Finally, the study proposes recommendations for such firms to be able to create a healthy built environment and improve the quality of life in emerging countries.
Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools
Procedia PDF Downloads 284
6078 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for the measurement of ET are point-based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for the estimation of ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for the direct measurement of latent heat flux (i.e., evapotranspiration) or sensible heat flux, the model-estimated values of ET were compared with reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water requirements, the estimation of the consumptive use of water by agriculture is very important, because agriculture is the main consumer of water. ET is also an essential term of the water balance, being a major pathway of loss for irrigation water and precipitation on cropland. As a large amount of irrigation water is lost through ET, its accurate estimation can be helpful for the efficient management of irrigation water. The results of this study can be used to analyse surface conditions, i.e. temperature, energy budgets, and related characteristics. Through this information we can monitor vegetation health and suitable agricultural conditions and can take steps to increase agricultural production.
Keywords: SEBS, remote sensing, evapotranspiration, ETa
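The surface energy balance underlying SEBS-type models partitions net radiation into soil, sensible, and latent heat fluxes, Rn = G + H + LE, and daily ET follows from the latent heat flux LE. A small worked example in Python, with flux values that are purely illustrative rather than from the study:

```python
# Surface energy balance: Rn = G + H + LE  =>  LE = Rn - G - H
Rn = 500.0        # W/m^2, net radiation (assumed)
G = 50.0          # W/m^2, soil heat flux (assumed)
H = 150.0         # W/m^2, sensible heat flux (assumed)

LE = Rn - G - H   # W/m^2, latent heat flux

L_v = 2.45e6      # J/kg, latent heat of vaporization of water
seconds = 12 * 3600                 # assume fluxes persist ~12 daylight hours
ET_mm_day = LE * seconds / L_v      # kg/m^2 of water, numerically = mm of water
print(f"LE = {LE:.0f} W/m^2, daily ET = {ET_mm_day:.1f} mm/day")
```

With these numbers LE = 300 W/m² gives roughly 5.3 mm/day, a plausible order of magnitude for irrigated cropland in a hot climate.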
Procedia PDF Downloads 333
6077 GIS Mapping of Sheep Population and Distribution Pattern in the Derived Savannah of Nigeria
Authors: Sosina Adedayo O., Babyemi Olaniyi J.
Abstract:
The location, population, and distribution pattern of sheep pose serious challenges to agribusiness investment and policy formulation in the livestock industry. There is a significant disconnect between farmers' needs and the policy framework for ameliorating the constraints on sheep production, and information on the population, production, and distribution pattern of sheep remains very scanty. A multi-stage sampling technique was used to elicit information from 180 purposively selected respondents in the study area, which comprised the Oluyole, Ona-ara, Akinyele, Egbeda, Ido and Ibarapa East LGAs. The GPS coordinates of the farmers' locations (distribution) and the average sheep herd sizes in Total Livestock Units (TLU) (population) were recorded, taking the longitude and latitude of the locations in question. The recorded GPS data for the study area were transferred into ArcGIS and processed using the ArcGIS model 10.0. Sheep production and distribution (TLU) ranged from 4.1 (Oluyole) to 25.0 (Ibarapa East), with Oluyole, Akinyele, Ona-ara and Egbeda having TLUs of 5, 7, 8 and 20, respectively. The herd sizes were classified as fewer than 8 (smallholders), 9-25 (medium), 26-50 (large), and above 50 (commercial). The majority (45%) of farmers were smallholders. The crude protein (CP, %) of the feed resources (FR) ranged from 5.81±0.26 (cassava leaf) to 24.91±0.91 (Amaranthus spinosus), NDF (%) ranged from 22.38±4.43 (Amaranthus spinosus) to 67.96±2.58 (Althemanthe dedentata), while ME ranged from 7.88±0.24 (Althemanthe dedentata) to 10.68±0.18 (cassava leaf). Smallholder sheep farmers were in the majority and evenly distributed across rural areas, owing to the abundant available feed resources (crop residues, tree crops, shrubs, natural pastures, and feed ingredients) coupled with the large expanse of land in the study area. Most of the available feed resources were below the protein requirement level for sheep; hence, supplementation is necessary for productivity. Bio-informatics can provide relevant information on sheep production for policy frameworks and intervention strategies.
Keywords: sheep enterprise, agribusiness investment, policy, bio-informatics, ecological zone
Procedia PDF Downloads 83
6076 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, helping to improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
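A toy illustration of the general idea behind high-concurrency prefetching (not the authors' EMA mechanism): if a transaction's read set is known or predicted in advance, its pages can be fetched concurrently before execution, so the transaction then runs entirely against in-memory copies. The page identifiers and the simulated 5 ms disk latency below are invented.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_page(page_id):
    time.sleep(0.005)                 # simulated disk latency per page
    return f"page-{page_id}"

def run_transaction(read_set):
    # Fetch every page of the (predicted) read set concurrently
    with ThreadPoolExecutor(max_workers=8) as pool:
        cache = dict(zip(read_set, pool.map(fetch_page, read_set)))
    # The transaction now executes against in-memory copies only
    return [cache[p] for p in read_set]

start = time.perf_counter()
run_transaction(list(range(16)))
print(f"prefetched 16 pages in {time.perf_counter() - start:.3f} s")
```

Fetched concurrently, the 16 pages cost roughly two latency periods rather than sixteen, which is the effect that lets a disk-based system approach in-memory throughput.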
Procedia PDF Downloads 418
6075 Tailoring the Parameters of the Quantum MDS Codes Constructed from Constacyclic Codes
Authors: Jaskarn Singh Bhullar, Divya Taneja, Manish Gupta, Rajesh Kumar Narula
Abstract:
The existence conditions of dual-containing constacyclic codes have opened a new path for finding quantum maximum distance separable (MDS) codes. Using these conditions, the parameters of quantum MDS codes of length n=(q²+1)/2 were improved. A class of quantum MDS codes of length n=(q²+q+1)/h, where h>1 is an odd prime, has also been constructed; these codes have large minimum distance and are new in the sense that they are not available in the literature.
Keywords: hermitian construction, constacyclic codes, cyclotomic cosets, quantum MDS codes, singleton bound
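For reference (a standard fact, stated here for context rather than taken from this paper), a quantum code with parameters [[n, k, d]] satisfies the quantum Singleton bound, and the quantum MDS codes discussed here are exactly the codes attaining it with equality:

```latex
k \le n - 2(d - 1),
\qquad
\text{quantum MDS} \iff k = n - 2d + 2 .
```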
Procedia PDF Downloads 390
6074 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging results. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrative software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets from time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for the successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
6073 The Impact of Heat Waves on Human Health: State of the Art in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio
Abstract:
The earth system is subject to a wide range of human activities that have changed the ecosystem more rapidly and extensively in the last five decades than ever before. These global changes have a large impact on human health. The relationship between extreme weather events and mortality is widely documented in different studies. In particular, a number of studies have investigated the relationship between climatological variations and the cardiovascular and respiratory systems. Researchers have become interested in evaluating the effect of environmental variations on the occurrence of different diseases (such as infarction, ischemic heart disease, asthma, respiratory problems, etc.) and on mortality. Among changes in weather conditions, heat waves have been used for investigating the association between weather conditions and cardiovascular and cerebrovascular events, using thermal indices that combine air temperature, relative humidity, and wind speed. The effects of heat waves on human health are mainly found in urban areas, and they are aggravated by the presence of atmospheric pollution. The consequences of these changes for human health are of growing concern. Meteorological conditions are a particularly important environmental aspect because cardiovascular diseases are more common among the elderly population, and such people are more sensitive to weather changes. In addition, heat waves, or extreme heat events, are predicted to increase in frequency, intensity, and duration with climate change. In this context, the connections between public health and climate change increasingly recognized by medical research are very important, because they might help in informing the public at large. Policy experts claim that a growing awareness of the relationship between public health and climate change could be key in breaking through political logjams impeding action on mitigation and adaptation. The aims of this study are to investigate the importance of interactions between weather variables and their effects on human health, focusing on Italy, and to highlight the need to define strategies and practical actions for monitoring, adaptation and mitigation of the phenomenon.
Keywords: climate change, illness, Italy, temperature, weather
Procedia PDF Downloads 247
6072 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as to the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
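A bare-bones matching pursuit in Python, to make the atomic-decomposition step concrete: at each iteration the atom most correlated with the residual is selected and its contribution subtracted. The random dictionary and three-atom signal are illustrative stand-ins for the paper's learned dictionary of Gabor-like envelope features.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_atoms = 128, 512
D = rng.normal(size=(n, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms

# Synthesize a 3-atom signal to recover
true_idx = rng.choice(n_atoms, 3, replace=False)
signal = D[:, true_idx] @ np.array([2.0, -1.5, 1.0])

residual = signal.copy()
decomposition = []                        # (atom index, weight) pairs
for _ in range(3):
    correlations = D.T @ residual         # inner product with every atom
    k = np.argmax(np.abs(correlations))   # greedily pick the best atom
    w = correlations[k]
    decomposition.append((k, w))
    residual = residual - w * D[:, k]     # subtract its contribution

print("selected atoms:", sorted(k for k, _ in decomposition))
print("true atoms:    ", sorted(true_idx))
print("residual norm: ", np.linalg.norm(residual))
```

The selected indices and weights form the sparse "weight space" vector that the abstract describes feeding into classification.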
Procedia PDF Downloads 290
6071 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that it can detect abnormal changes in structural behaviour in real time, using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
Procedia PDF Downloads 99
6070 Duration of Isolated Vowels in Infants with Cochlear Implants
Authors: Paris Binos
Abstract:
The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. First isolated vowels are transparent during the protophonic stage, as evidence of increased motor and linguistic control. Vowel duration is a crucial factor in the transition from prelexical speech to normal adult speech. Despite current knowledge of data for infants with normal hearing, more research is needed to unravel the production skills of early-implanted children. Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal-hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities, and the etiology of their deafness was unknown. Prior to implantation, the infants had an average unaided hearing loss of 95-110 dB HL, while post-implantation the PTA decreased to 10-38 dB HL. The current research offers a methodology for the processing of prelinguistic productions based on a combination of acoustical and auditory analyses. In this methodological framework, duration was measured on wideband spectrograms, from the onset of voicing to the end of the vowel. The end was marked by two co-occurring events: 1) the onset of aperiodicity, with a rapid change in amplitude in the waveform, and 2) a loss of formant energy. Cut-off levels of significance were set at 0.05 for all tests. Bonferroni post hoc tests indicated a significant difference between the mean vowel duration of infants wearing CIs and that of their normal-hearing peers: the mean vowel duration of the CI infants measured longer than that of the normal-hearing peers (p = 0.000). The current longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data for the Greek language. The weakness in CI performance described above is a challenge for future work in speech processing and CI processing strategies.
Keywords: cochlear implant, duration, spectrogram, vowel
Procedia PDF Downloads 261
6069 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age
Authors: S. N. Susenburger, D. Boecker
Abstract:
In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are at the frontier of facing one of the largest organizational and cultural changes in history. This paper elaborates on how a manufacturing giant transformed its corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and approaches of the Innovation Inside-Out/Outside-In strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and the transformation into an IoT company. This paper also outlines the core elements of an innovation culture and how the persona of the 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that it is crucial in which phases and at which maturity level the Innovation Inside-Out/Outside-In strategy is activated: the cultural aspects of the business and of the regional ecosystem need to be considered, as well as the cultural readiness of management and active contributors. The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed in order to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. This paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.
Keywords: innovation management, innovation culture, innovation methodologies, digital transformation
Procedia PDF Downloads 1466068 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting by building type and building age is common, among other reasons because this information is often easily available, and such segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, a simplified representation of the building stock could come at the expense of model accuracy. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.Keywords: building stock energy modelling, energy-savings, archetype
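To make the evaluation idea concrete, the toy sketch below (with invented numbers, not data from the study) groups individual building demands into type/age segments and compares each segment's archetype average with the spread of the buildings it replaces; the spread is the detail the archetype discards.

```python
# Toy illustration of archetype aggregation: segment buildings by type
# and construction period, then contrast the archetype's single averaged
# demand with the variation across the buildings it represents.
# All numbers are invented for illustration only.
from statistics import mean, pstdev
from collections import defaultdict

buildings = [  # (type, construction period, heat demand in kWh/m2/yr)
    ("single-family", "1961-1972", 152.0),
    ("single-family", "1961-1972", 187.5),
    ("single-family", "1961-1972", 121.3),
    ("apartment",     "1973-1978", 95.2),
    ("apartment",     "1973-1978", 110.8),
]

segments = defaultdict(list)
for btype, period, demand in buildings:
    segments[(btype, period)].append(demand)

for key, demands in segments.items():
    archetype = mean(demands)   # the archetype collapses the segment to one value
    spread = pstdev(demands)    # the detail lost by the aggregation
    print(f"{key}: archetype = {archetype:.1f} kWh/m2/yr, "
          f"std dev across buildings = {spread:.1f}")
```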
Procedia PDF Downloads 1546067 Window Opening Behavior in High-Density Housing Development in Subtropical Climate
Authors: Minjung Maing, Sibei Liu
Abstract:
This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The study used field observations with photo documentation of the four cardinal elevations (north, south, east, and west) of two large housing developments in a very dense urban area of approx. 46,000 persons per square kilometer within the city of Hong Kong. The targeted housing developments (A and B) are large public housing developments for lower-income residents, each with a population of about 13,000. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A versus 0% in development B. The surrounding amenities and the layout of the developments were also mapped to understand the activities available to residents. Photo documentation of the elevations was collected from November 2016 to February 2018 to cover a full spectrum of seasons, in both the morning and the afternoon. From the photographs, window opening behavior was measured by counting the number of open windows as a percentage of all windows on that facade. For each survey date, weather data (temperature, humidity, and wind speed) were recorded from weather stations located in the same region. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using ENVI-met, a simulation tool widely used by researchers studying urban climate. Four major conclusions can be drawn from the data analysis and simulation results. Firstly, there is little change in the amount of window opening across the seasons within a temperature range of 10 to 35 degrees Celsius, meaning that people who tend to open their windows do so consistently throughout the year and have a high tolerance of indoor thermal conditions. Secondly, on all four elevations, the lower-income development B opened more windows (almost two times more units) than the higher-income development A, meaning window opening behavior correlated strongly with income level. Thirdly, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not appear to affect the action of opening windows in most conditions; likewise, vertical wind speed cannot explain the window opening behavior of occupants. Fourthly, there is a slightly higher average of window opening on the south elevation than on the north elevation, which may be because the south elevation is well shaded from the high-angle sun in summer while admitting heat from the low-angle sun in winter. These findings provide insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.Keywords: high-density housing, subtropical climate, urban behavior, window opening
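A minimal sketch of the analysis implied above: the opening ratio on a facade is the count of open windows divided by all windows visible in a survey photo, and its relation to weather can be checked with a correlation coefficient. All values below are hypothetical, not observations from the study.

```python
# Sketch: compute the facade opening ratio per survey photo and its
# Pearson correlation with wind speed. Counts, facade size, and wind
# speeds are invented placeholders.
import numpy as np

open_counts   = np.array([118, 130, 104, 95, 122])   # open windows per survey photo
total_windows = 480                                  # windows visible on the facade
opening_ratio = open_counts / total_windows          # share of windows open

wind_speed = np.array([1.2, 3.5, 0.8, 4.1, 2.0])     # m/s at a nearby station

r = np.corrcoef(opening_ratio, wind_speed)[0, 1]     # Pearson correlation
print(f"opening ratio range: {opening_ratio.min():.2f}-{opening_ratio.max():.2f}")
print(f"correlation with wind speed: r = {r:.2f}")   # near zero -> weak link
```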
Procedia PDF Downloads 1256066 Evaluation of the Gasification Process for the Generation of Syngas Using Solid Waste at the Autónoma de Colombia University
Authors: Yeraldin Galindo, Soraida Mora
Abstract:
Solid urban waste represents one of the largest sources of global environmental pollution due to the large quantities produced every day; the elimination of such waste is thus a major problem for environmental authorities, who must look for alternatives that reduce the volume of waste while offering the possibility of energy recovery. At the Autónoma de Colombia University, approximately 423.27 kg/d of solid waste are generated, mainly paper, cardboard, and plastic. A large share of this waste ends up in the city's sanitary landfill, wasting its potential energy content; this, added to the emissions generated by its collection and transport, increases atmospheric pollution. One of the alternative processes used in recent years to generate electrical energy from solid waste such as paper, cardboard, plastic and, mainly, organic waste or biomass, replacing the use of fossil fuels, is gasification. Gasification is a thermal conversion process for biomass whose objective is to generate a combustible gas through a series of chemical reactions driven by the addition of heat and reaction agents. This project was developed with the intention of putting the waste (paper, cardboard, and plastic) produced inside the university to energetic use, employing it to generate a synthesis gas with a gasifier prototype. The gas produced was evaluated to determine its suitability for electricity generation or as a raw material for the chemical industry. Air was used as the gasifying agent. The synthesis gas was characterized by gas chromatography performed at the Chemical Engineering Laboratory of the National University of Colombia. Based on the results obtained, it was concluded that the gas generated is of acceptable quality in terms of the concentration of its components but has a low calorific value. For this reason, the syngas generated in this project is not viable for the production of electrical energy, but it is viable for the production of methanol via the Fischer-Tropsch cycle.Keywords: alternative energies, gasification, gasifying agent, solid urban waste, syngas
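The 'low calorific value' conclusion can be illustrated with a back-of-the-envelope calculation: the lower heating value (LHV) of a syngas is approximately the volume-weighted sum of the LHVs of its combustible components. The component LHVs below are standard approximate values, and the composition is hypothetical rather than the chromatography result from the study; air-blown gasification typically yields roughly 4-6 MJ/Nm3 because of nitrogen dilution.

```python
# Back-of-the-envelope syngas quality check: LHV of the gas mixture as
# the volume-weighted sum of its combustible components. Component LHVs
# are standard approximations; the composition is hypothetical.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # MJ/Nm3, approximate values

composition = {"H2": 0.12, "CO": 0.18, "CH4": 0.02,   # volume fractions
               "CO2": 0.14, "N2": 0.54}               # inerts contribute nothing

lhv_gas = sum(LHV.get(gas, 0.0) * frac for gas, frac in composition.items())
print(f"syngas LHV ~ {lhv_gas:.1f} MJ/Nm3")   # ~4.3 here: a low-grade fuel gas,
                                              # typical of air-blown gasification
```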
Procedia PDF Downloads 2586065 Banking Union: A New Step towards Completing the Economic and Monetary Union
Authors: Marijana Ivanov, Roman Šubić
Abstract:
The single rulebook, together with the Single Supervisory Mechanism and the Single Resolution Mechanism as the two main pillars of the banking union, represents an important step towards completing the Economic and Monetary Union. It should provide consistent application of common rules and administrative standards for the supervision, recovery, and resolution of banks, with the final aim that the former practice of bail-out is replaced by a bail-in system, through which bank failures are resolved with banks' own funds, i.e. at minimal cost to taxpayers and the real economy. It has to reduce the financial fragmentation recorded in the crisis years as a result of divergent risk premia, lending activity, and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of the monetary transmission channels, in particular the credit channels and flows of liquidity in the single interbank money market. However, contrary to all the positive expectations about the future functioning of the banking union, low and unbalanced economic growth rates remain a challenge for the maintenance of financial stability in the euro area, and this problem cannot be resolved by single supervision alone. In many countries, bank assets exceed GDP several times over, and large banks remain a matter of concern because of their systemic importance for individual countries and for the euro zone as a whole. The creation of the SSM and the SRM should increase the transparency of the banking system in the euro area and restore the confidence that was disturbed during the crisis. It would provide a new opportunity to strengthen the economic and financial systems of the peripheral countries. On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism, and other relevant institutions will be oriented primarily toward the large and significant banks (one half of which operate in the core and most important euro area countries), while it is questionable to what extent the common resolution funds will be used for the rescue of less significant institutions.Keywords: banking union, financial integration, single supervision mechanism (SSM)
Procedia PDF Downloads 4706064 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant
Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha
Abstract:
This research presents a real-world power system oscillation incident that occurred in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in a sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually grew over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation was caused by incorrect setting recovery in the voltage and reactive power control loop of the hybrid power plant controller (HPPC) after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm; a feed-forward (FF) configuration is used to bypass the Q controller in case communication is lost. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, no detailed explanation was given of why the FF control mode can cause instability in the hybrid farm, and the event had not been duplicated in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and to investigate the underlying behavior of the HPPC and the resulting oscillation mechanism during the incident. The outcome of this research will provide significant benefits for the safe operation of large-scale renewable energy generators and power networks.Keywords: PV, oscillation, modelling, wind
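The suspected mechanism, a stale feed-forward path acting on top of the restored closed loop, can be illustrated with a toy discrete-time simulation. All gains, dynamics, and the grid-sensitivity model below are invented for illustration and are not taken from the OEM's controller: the point is only that applying the Q reference through two paths at once raises the effective loop gain enough to turn a stable loop into a growing oscillation.

```python
# Toy sketch: voltage controller -> Q reference -> Q dispatch, with an
# optional stale feed-forward (FF) path. Gains, grid sensitivity, and
# time step are invented; this is an illustration of the mechanism,
# not a model of the actual HPPC.
import numpy as np

def simulate(ff_engaged, steps=120, kv=1.6, kq=0.9, sens=0.05):
    q, hist = 0.1, []            # small initial disturbance in dispatched Q (pu)
    for _ in range(steps):
        v = 1.0 + sens * q       # crude grid sensitivity: more Q raises voltage
        q_ref = kv * (1.0 - v)   # voltage controller output (target 1.0 pu)
        dq = kq * (q_ref - q)    # normal (slower) Q-controller path
        if ff_engaged:
            dq += q_ref - q      # stale FF path applies the reference a second time
        q += dq
        hist.append(abs(q))
    return hist

for mode in (False, True):
    h = simulate(mode)
    print(f"FF engaged={mode}: |Q| after 10 steps {h[9]:.3f}, "
          f"after 120 steps {h[-1]:.3f}")
# Without FF the disturbance decays; with FF the sign of Q flips each
# step with growing amplitude -- a sustained, growing oscillation.
```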
Procedia PDF Downloads 37