Search results for: battery energy storage efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14658

828 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, where their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator), and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, the new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to exploit the advantages of both kinds, improving both control ability and economic efficiency. Firstly, the coordination of different space scales of the grid is studied, focused on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid vs. its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by generators, and the power flow of specific lines between the two-layer regions can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by the generators, the larger the control margin of the TCSC; on the other hand, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to trade off the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method for two-layer ultra-short-term correction vs. AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-space-time scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the reduction of control cost and will provide a reference for subsequent studies in this field.
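The second stage above is solved with a genetic algorithm. As a rough illustration of that idea only (not the authors' model: the objective below is a made-up quadratic surrogate over normalized generator and TCSC setpoints), a minimal genetic algorithm can be sketched as:

```python
import random

def control_cost(x):
    """Hypothetical control-cost surrogate (illustrative only): penalize
    deviation of the generator setpoint and TCSC compensation from
    assumed optimal operating points."""
    gen, tcsc = x
    return (gen - 0.6) ** 2 + 0.5 * (tcsc - 0.3) ** 2

def genetic_algorithm(cost, bounds, pop_size=30, generations=60,
                      mutation_rate=0.2, seed=42):
    """Minimize `cost` over a box via tournament selection, uniform
    crossover and Gaussian mutation; returns the best individual seen."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        def pick():
            # Tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            # Uniform crossover
            child = [p1[i] if rng.random() < 0.5 else p2[i] for i in range(dim)]
            # Gaussian mutation, clipped to the bounds
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = children
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best = gen_best
    return best

best = genetic_algorithm(control_cost, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting the fitness function would instead come from the coordinated optimal control model evaluated through power-flow simulation.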

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

827 Microbial Activity and Greenhouse Gas (GHG) Emissions in Recovery Process in a Grassland of China

Authors: Qiushi Ning

Abstract:

Nitrogen (N) is an important limiting factor in various ecosystems, and the N deposition rate is increasing at an unprecedented pace due to anthropogenic activities. N deposition alters microbial growth and activity, and microbially mediated N cycling, by changing soil pH and the availability of N and carbon (C). CO2, CH4 and N2O are important greenhouse gases which threaten the sustainability and function of the ecosystem. With prolonged and increasing N enrichment, soil acidification and C limitation will be aggravated, and the microbial biomass will decline further. The soil acidification and lack of C induced by N addition are argued to be two important factors regulating microbial activity and growth, yet studies combining the effects of soil acidification and C limitation on the microbial community are scarce. In order to restore an ecosystem affected by chronic N loading, we determined the responses of microbial activity and GHG emissions to lime and glucose addition (control, 1‰ lime, 2‰ lime, glucose, 1‰ lime×glucose and 2‰ lime×glucose), used to alleviate soil acidification and supply a C resource in soils with N addition rates of 0-50 g N m–2 yr–1. The results showed no significant responses of soil respiration or microbial biomass (MBC and MBN) to lime addition; however, glucose substantially improved soil respiration and microbial biomass (MBC and MBN). The cumulative CO2 emission and microbial biomass of the lime×glucose treatments were not significantly higher than those of the glucose-only treatment. The glucose and lime×glucose treatments reduced the net mineralization and nitrification rates, because the C supply stimulated microbial growth, incorporating more inorganic N into the biomass, while the mineralization of organic N was relatively reduced. Glucose addition also increased CH4 and N2O emissions; CH4 emissions were regulated mainly by the C resource as a substrate for methanogens. However, N2O emissions were regulated by both the C resource and soil pH: C was an important energy source, and the increased soil pH could benefit the nitrifiers and denitrifiers which are the primary producers of N2O. Soil respiration and N2O emissions increased with increasing N addition rates in all glucose treatments, as the external C resource improved microbial N utilization. Compared with alleviating soil acidification, improving the availability of C increased microbial activity far more; therefore, C should be the main limiting factor in soils under long-term N loading. Most importantly, when organic C fertilization is used to improve the production of such ecosystems, the resulting GHG emissions and the consequent warming potential should be carefully considered.

Keywords: acidification and C limitation, greenhouse gas emission, microbial activity, N deposition

826 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring changes, and mapping irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured by surveying about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands and identifying the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. The innovation of this paper lies in categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and/or magnitudes of error in the different approaches across these three parts as reported by recent studies. Additionally, the concluding overview attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, error-prone thresholding and prediction models, and improvements in classification accuracy for mapping changes.
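One widely used canopy-temperature stress indicator of the kind such reviews cover is the empirical Crop Water Stress Index (CWSI). A minimal sketch, assuming wet (non-stressed) and dry (non-transpiring) reference temperatures are available, might look like:

```python
def crop_water_stress_index(t_canopy, t_wet, t_dry):
    """Empirical Crop Water Stress Index (CWSI):
    0 = fully watered (canopy as cool as the wet reference),
    1 = maximally stressed (canopy as warm as the dry reference).
    Temperatures in any consistent unit (e.g., degrees Celsius)."""
    if t_dry <= t_wet:
        raise ValueError("dry reference must exceed wet reference")
    cwsi = (t_canopy - t_wet) / (t_dry - t_wet)
    # Clamp to [0, 1] to absorb measurement noise outside the references
    return min(1.0, max(0.0, cwsi))

# Example: canopy at 28 C between references of 25 C (wet) and 35 C (dry)
stress = crop_water_stress_index(28.0, 25.0, 35.0)
```

In practice the references come from field baselines or energy-balance models; thresholding this index against a crop-specific limit is one way automated scheduling decisions are triggered.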

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

825 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferable to one large deviation, the penalty is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is selected for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The current subproblem tries to find columns with negative reduced cost and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in that graph, solved with a labeling algorithm. Since the penalty is quadratic, a method for handling the resulting non-additive shortest path problem within a labeling algorithm is proposed, and the corresponding dominance condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member which minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
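The subproblem described above, a resource-constrained shortest path solved by labeling with dominance, can be sketched for the standard additive-cost case. (The paper's quadratic, non-additive objective requires the modified dominance condition it proposes; the toy duty graph below is hypothetical.)

```python
from collections import defaultdict, namedtuple

Label = namedtuple("Label", ["cost", "resource", "path"])

def rcsp_labeling(graph, order, source, target, resource_limit):
    """Resource-constrained shortest path on a DAG by label extension.

    graph: {node: [(successor, cost, resource_use), ...]}
    order: the nodes in topological order (source first).
    """
    labels = defaultdict(list)
    labels[source] = [Label(0, 0, (source,))]
    for node in order:
        for lab in labels[node]:
            for succ, cost, res in graph.get(node, []):
                new = Label(lab.cost + cost, lab.resource + res,
                            lab.path + (succ,))
                if new.resource > resource_limit:
                    continue  # extension violates the resource constraint
                # Dominance: keep the new label only if no existing label
                # at succ is at least as good in both cost and resource.
                if any(o.cost <= new.cost and o.resource <= new.resource
                       for o in labels[succ]):
                    continue
                labels[succ] = [o for o in labels[succ]
                                if not (new.cost <= o.cost
                                        and new.resource <= o.resource)]
                labels[succ].append(new)
    return min(labels[target], key=lambda l: l.cost) if labels[target] else None

# Hypothetical 4-node duty graph: each edge carries (cost, resource use).
toy = {"s": [("a", 1, 2), ("b", 5, 1)],
       "a": [("t", 1, 2)],
       "b": [("t", 1, 1)]}
best = rcsp_labeling(toy, ["s", "a", "b", "t"], "s", "t", resource_limit=4)
```

With the limit at 4 the cheap path s-a-t (cost 2, resource 4) is feasible; tightening the limit to 3 forces the costlier but lighter path s-b-t, which is exactly the trade-off the dominance rule preserves.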

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

824 Identification of the Best Blend Composition of Natural Rubber-High Density Polyethylene Blends for Roofing Applications

Authors: W. V. W. H. Wickramaarachchi, S. Walpalage, S. M. Egodage

Abstract:

A thermoplastic elastomer (TPE) is a multifunctional polymeric material which possesses a combination of the excellent properties of its parent materials. Basically, a TPE has a rubber phase and a thermoplastic phase, which gives it the processability of thermoplastics. When the rubber phase is partially or fully crosslinked in the thermoplastic matrix, the TPE is called a thermoplastic elastomer vulcanizate (TPV). If the rubber phase is non-crosslinked, it is called a thermoplastic elastomer olefin (TPO). Nowadays, TPEs have been introduced into the commercial market in different products; however, the application of TPEs as roofing materials is limited. Of the commercially available roofing products made from different materials, only single-ply roofing membranes and plastic roofing sheets are produced from rubbers and plastics. Natural rubber (NR) and high density polyethylene (HDPE) are used individually in various industrial applications, each with some drawbacks. Therefore, this study focused on developing both TPO and TPV blends from NR and HDPE at different compositions and then identifying the best blend composition for use as a roofing material. A series of blends, varying the NR loading from 10 wt% to 50 wt% at 10 wt% intervals, was prepared using a twin screw extruder. Dicumyl peroxide was used as the crosslinker for the TPVs. The standard properties required of a roofing material, such as tensile properties, tear strength, hardness, impact strength, water absorption, swell/gel behaviour and thermal characteristics of the blends, were investigated. The change in tensile strength after exposure to UV radiation was also studied. The tensile strength, hardness, tear strength, melting temperature and gel content of the TPVs show higher values than those of the TPOs at every loading studied, while water absorption and swelling index show lower values, suggesting that TPVs are more suitable than TPOs for roofing applications. Most of the optimum properties were shown at the 10/90 (NR/HDPE) composition. However, the highest impact strength and gel content were shown at the 20/80 (NR/HDPE) composition. Impact strength, being an energy-absorbing property, is the most important for a roofing material in order to resist impact loads. Therefore, 20/80 (NR/HDPE) is identified as the best blend composition. The UV resistance and other properties required of a roofing material could be achieved by incorporating suitable additives into the TPVs.
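The swell/gel analysis mentioned above is typically gravimetric. A small sketch of the commonly used definitions (assumed here as standard practice, not taken from this paper) is:

```python
def gel_content(initial_mass, dried_gel_mass):
    """Gel content (%): the insoluble (crosslinked) fraction remaining
    after solvent extraction, relative to the initial sample mass.
    Masses in any consistent unit (e.g., grams)."""
    return 100.0 * dried_gel_mass / initial_mass

def swelling_index(swollen_mass, dried_gel_mass):
    """Swelling index: solvent uptake of the gel per unit dry gel mass
    after equilibrium swelling."""
    return (swollen_mass - dried_gel_mass) / dried_gel_mass

# Illustrative sample: 2.0 g blend leaves 1.5 g of dried gel,
# which swells to 3.0 g in solvent.
gel_pct = gel_content(2.0, 1.5)
swell = swelling_index(3.0, 1.5)
```

A higher gel content with a lower swelling index is consistent with the more tightly crosslinked rubber phase reported for the TPVs.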

Keywords: thermoplastic elastomer, natural rubber, high density polyethylene, roofing material

823 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings; these datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. It concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures: by utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality issues, modelling uncertainties, and computational complexity. To address these obstacles and improve the accuracy of estimates, further research and advances in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

822 Multi-Omics Integrative Analysis Coupled to Control Theory and Computational Simulation of a Genome-Scale Metabolic Model Reveal Controlling Biological Switches in Human Astrocytes under Palmitic Acid-Induced Lipotoxicity

Authors: Janneth Gonzalez, Andrés Pinzon Velasco, Maria Angarita

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuro-protective mechanisms. However, broad studies with a systemic point of view on the neurodegenerative role of PA and the neuro-protective mechanisms of tibolone are lacking. In this study, we integrated multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA inflammation). We also applied a control theory approach to identify the reactions that exert the most control in the astrocytic system. Our results suggest that PA modulates central and secondary metabolism, showing a switch in energy source use through inhibition of the folate cycle and fatty acid β‐oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA‐mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that may increase neurotoxicity and represent potential treatment targets. Finally, the overall framework of our approach facilitates the understanding of complex metabolic regulation, and it can be used for in silico exploration of the mechanisms of astrocytic cell regulation, directing more complex future experimental work in neurodegenerative diseases.

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics

821 Optical Characterization of Transition Metal Ion Doped ZnO Microspheres Synthesized via Laser Ablation in Air

Authors: Parvathy Anitha, Nilesh J. Vasa, M. S. Ramachandra Rao

Abstract:

ZnO is a semiconducting material with a direct wide band gap of 3.37 eV and a large exciton binding energy of 60 meV at room temperature. Microspheres with high sphericity and symmetry exhibit unique functionalities which make them excellent omnidirectional optical resonators. Hence, there is growing interest in the fabrication of single crystalline semiconductor microspheres, especially magnetic ZnO microspheres, as ZnO is a promising material for semiconductor device applications. ZnO is also non-toxic and biocompatible, implying that it is a potential material for biomedical applications. Room temperature photoluminescence (PL) spectra of the fabricated ZnO microspheres were measured at an excitation wavelength of 325 nm. The ultraviolet (UV) luminescence observed is attributed to the room-temperature free-exciton-related near-band-edge (NBE) emission in ZnO. Besides the NBE luminescence, weak and broad visible luminescence (~560 nm) was also observed. This broad emission band in the visible range is associated with oxygen vacancies related to structural defects. In transition metal (TM) ion-doped ZnO, the 3d-level emissions of the TM ions modify the inherent characteristic emissions of ZnO. A micron-sized ZnO crystal generally has a wurtzite structure with a natural hexagonal cross section, which can serve as a WGM (whispering gallery mode) lasing microcavity due to its high refractive index (~2.2). But hexagonal cavities suffer more optical loss at their corners than spherical structures; hence spheres may be a better candidate for achieving effective light confinement. In our study, highly smooth spherical microparticles with diameters ranging from ~4 to 6 μm were grown on different substrates. SEM (Scanning Electron Microscopy) and AFM (Atomic Force Microscopy) images show the presence of uniform, smooth-surfaced spheres. Raman scattering measurements from the fabricated samples at 488 nm excitation provide convincing support for the wurtzite structure of the prepared ZnO microspheres. WGM lasing studies on the TM-doped ZnO microparticles are in progress.
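For a circular or spherical whispering-gallery cavity, the resonances are set approximately by the condition that an integer number of wavelengths fits the optical circumference, m·λ = π·n·D. A small sketch of the mode wavelengths this simple condition predicts for a ZnO microsphere of the size reported here (the band limits below are illustrative assumptions):

```python
import math

def wgm_mode_wavelengths(diameter_um, refractive_index,
                         lam_min_um, lam_max_um):
    """Approximate whispering-gallery resonance wavelengths inside a
    band, from the circumference condition m * lambda = pi * n * D
    (ignores dispersion and polarization corrections)."""
    opl = math.pi * refractive_index * diameter_um  # optical path length
    m_max = int(opl / lam_min_um)                   # shortest wavelength
    m_min = max(1, math.ceil(opl / lam_max_um))     # longest wavelength
    return [opl / m for m in range(m_min, m_max + 1)]

# ZnO microsphere: D ~ 5 um, n ~ 2.2, scanning 500-600 nm
modes = wgm_mode_wavelengths(5.0, 2.2, 0.5, 0.6)
```

The mode spacing this gives (roughly λ²/(πnD), a few nanometres here) is what a WGM lasing spectrum from such a sphere would be checked against.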

Keywords: laser ablation, microcavity, photoluminescence, ZnO microsphere

820 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies

Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi

Abstract:

There is a need to develop a PET tracer that will enable the diagnosis and tracking of the progression of alpha-synucleinopathies (Parkinson's disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) in living subjects over time. Alpha-synuclein aggregates (a-syn), which are present in all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed some promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors were synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches in order to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments such as log D, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields reached 30%-72% in some cases. Molecular docking revealed possible binding sites in a-syn and the free energy of binding to those sites (-28.9 to -66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (> 100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values range from 2.34 to 2.88, which correlated with a free protein fraction of 0.28%-0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6 %ID/g - 7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g were obtained at 120 min p.i.). Analyses of the mouse brain 20 min p.i. revealed almost no radiometabolites in the brain in most cases. It can be concluded that the in silico study presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.

Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development

819 Management of Non-Revenue Municipal Water

Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu

Abstract:

The problem of non-revenue water (NRW) in municipal water distribution networks is common in many countries, such as Turkey, where average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) real or physical water losses, and 2) apparent or commercial water losses. Total water losses in Antalya city, Turkey are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs), whose number of service connections ranged from a few to less than 3000. The flow rate and water pressure to each DMA were measured continuously on-line by an accurate flow meter and a water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption as given by the water meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters in one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area for the prediction of water pressure variations in each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values. Therefore, pressure reducing valves (PRVs) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure in the other DMAs could not be reduced while complying with the minimum pressure requirement (3 bars) stated in the related standards. Results: Physical water losses were reduced considerably as a result of merely reducing water pressure. Further reduction in physical water losses was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs with considerable physical losses. Many bursts were detected, especially in the DMAs with high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and for checking the quality of repairs. Regarding apparent water loss reduction, changing the customer water meters increased water revenue by more than 20%. Conclusions: DMAs, SCADA, modelling, pressure management, leakage detection and accurate customer water meters are efficient tools for NRW reduction.
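The top level of the standard IWA water balance used per DMA can be sketched as follows (the volumes are illustrative, and the component breakdowns below apparent and real losses are omitted):

```python
def iwa_water_balance(system_input, billed_authorized,
                      unbilled_authorized, apparent_losses):
    """Top-level IWA water balance for a DMA (all volumes in m^3).

    NRW is system input minus billed authorized consumption; water
    losses are what remains after all authorized consumption; real
    (physical) losses are the remainder once apparent (commercial)
    losses are accounted for."""
    water_losses = system_input - billed_authorized - unbilled_authorized
    real_losses = water_losses - apparent_losses
    nrw = system_input - billed_authorized
    return {
        "non_revenue_water": nrw,
        "nrw_percent": 100.0 * nrw / system_input,
        "water_losses": water_losses,
        "real_losses": real_losses,
    }

# Illustrative DMA: 1000 m^3 input, 550 billed, 20 unbilled authorized,
# 80 estimated apparent losses (meter inaccuracy, unauthorized use)
balance = iwa_water_balance(1000.0, 550.0, 20.0, 80.0)
```

Computing this per DMA is what lets the study rank districts by physical losses and target pressure management and acoustic leak detection where they pay off most.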

Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks

818 Duration of Isolated Vowels in Infants with Cochlear Implants

Authors: Paris Binos

Abstract:

The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. The first isolated vowels become apparent during the protophonic stage as evidence of increasing motor and linguistic control. Vowel duration is a crucial factor in the transition from prelexical speech to normal adult speech. Despite current knowledge and data for infants with normal hearing, more research is needed to unravel the production skills of early-implanted children. Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities, and the etiology of their deafness was unknown. Prior to implantation, the infants had an average unaided hearing loss of 95-110 dB HL, while the post-implantation PTA decreased to 10-38 dB HL. The current research offers a methodology for processing prelinguistic productions based on a combination of acoustical and auditory analyses. Within this methodological framework, duration was measured on spectrograms based on wideband analysis, from the onset of voicing to the end of the vowel. The end was marked by two co-occurring events: 1) the onset of aperiodicity with a rapid change in amplitude in the waveform, and 2) a loss of formant energy. The cut-off level of significance was set at 0.05 for all tests. Bonferroni post hoc tests indicated that the difference between the mean vowel duration of infants wearing CIs and that of their normal hearing peers was significant: the mean vowel duration of the CI group was longer than that of the normal hearing peers (p = 0.000). The current longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data for the Greek language. The weakness in CI performance described above is a challenge for future work on speech processing and CI processing strategies.

Keywords: cochlear implant, duration, spectrogram, vowel

Procedia PDF Downloads 261
817 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality

Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas

Abstract:

Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring within a small volume, is considered one of the most important biological endpoints of ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulation) and experimental evidence suggests that the complexity of DNA damage, and therefore its repair resistance, increases with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often hampered by technical deficiencies, especially in cellular or tissue systems. Our groups have recently made significant progress towards identifying the key parameters for the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays at 0.3-1 keV/μm, α-particles at 116 keV/μm, and 36Ar ions at 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 and APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared with simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is applied to two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam, and RBE values for complex DNA damage induction are calculated for the tumor areas. 
These results reveal a disparity between theory and experiment and underline the necessity for implementing highly precise and more efficient experimental and simulation approaches.
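The RBE comparison above rests on the standard yield-ratio definition: the ratio of damage yields per unit dose for the test radiation versus a reference radiation such as γ-rays. A minimal sketch in Python; the yield numbers below are purely hypothetical placeholders, not values from the study:

```python
def rbe(yield_test, yield_ref):
    """Relative Biological Effectiveness for damage induction:
    ratio of lesion yields per unit dose (e.g. DSBs/Gy/cell),
    test radiation vs. a reference radiation such as gamma rays."""
    return yield_test / yield_ref

# Hypothetical yields (DSBs per Gy per cell) for illustration only
yields = {"gamma": 25.0, "alpha_116keV_um": 70.0, "Ar36_270keV_um": 85.0}
rbe_alpha = rbe(yields["alpha_116keV_um"], yields["gamma"])
```

An RBE above 1 indicates that the higher-LET radiation produces more lesions per unit dose than the γ-ray reference, consistent with the LET trend described in the abstract.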

Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy

Procedia PDF Downloads 325
816 Physical Activity, Mental Health, and Body Composition in College Students after COVID-19 Lockdown

Authors: Manuela Caciula, Luis Torres, Simion Tomoiaga

Abstract:

Introduction: SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), the cause of COVID-19, has wreaked havoc on all facets of higher education since its emergence in late 2019. College students, in particular, significantly reduced their daily energy expenditure and increased the time they spent sitting to attend online classes and complete their studies from home. This change, in combination with the associated COVID-19 lockdown, presumably decreased physical activity levels, increased mental health symptoms, and promoted unhealthy eating habits. Objectives: The main objective of this study was to determine the self-reported physical activity levels, mental health symptoms, and body composition of college students after the COVID-19 lockdown in order to develop future interventions for the overall improvement of health. Methods: All participants completed pre-existing, well-validated surveys for both physical activity (International Physical Activity Questionnaire, long form) and mental health (Hospital Anxiety and Depression Scale). Body composition was assessed in person with an InBody 570 device. Results: Of the 90 American college students (mean age = 22.52 ± 4.54 years; 50 females) who participated, 58% (N = 52) reported depressive and anxiety symptom scores consistent with heightened symptomatology, 17% (N = 15) with moderate borderline symptomatology, and 25% (N = 23) were asymptomatic. In regard to physical activity, 79% (N = 71) of the students were highly physically active, 18% (N = 16) were moderately active, and 3% (N = 3) reported low levels of physical activity. Additionally, 46% (N = 41) of the students had an unhealthy body fat percentage based on World Health Organization recommendations. 
Strong, significant relationships were found between anxiety and depression symptomatology and body fat percentage (P = .003) and skeletal muscle mass (P = .015), with said symptomatology increasing with added body fat and decreasing with added skeletal muscle mass. Conclusions: Future health interventions for American college students should be focused on strategies to reduce stress, anxiety, and depressive characteristics, as well as nutritional information on healthy eating, regardless of self-reported physical activity levels.
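The three symptom categories reported above are consistent with the conventional HADS subscale cut-offs (0-7 normal, 8-10 borderline, 11-21 abnormal/heightened); whether the study used exactly these thresholds is an assumption here. A minimal sketch:

```python
def hads_category(score):
    """Classify a HADS subscale score (0-21) using the conventional
    cut-offs: 0-7 normal, 8-10 borderline, 11-21 heightened.
    (Assumed cut-offs; the study may have applied its own thresholds.)"""
    if not 0 <= score <= 21:
        raise ValueError("HADS subscale scores range from 0 to 21")
    if score <= 7:
        return "normal"
    if score <= 10:
        return "borderline"
    return "heightened"
```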

Keywords: physical activity, mental health, body composition, COVID-19

Procedia PDF Downloads 98
815 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is a specific learning difficulty involving persistent challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD through case studies using functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III), with both subtest scores at 90 or below. The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network in the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions between regions) to reveal the architectural organization of the network. Next, the single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the averaged data of the 45 TD children. 
Results showed that three of the nine DD children deviated significantly from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Taken together, the present study was able to distill differences in the architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and a single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is notable that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established from the 45 TD children is tentative, the results provide insights not only for future work towards a "developmental brain norm" with reliable brain indicators but also into the viability of the single-case methodology, which could be used to detect differential brain indicators in children with DD for early detection and intervention.
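Single-case comparisons of this kind are conventionally computed with the Crawford-Howell modified t-test, which treats the control sample's mean and SD as estimates rather than population parameters. A sketch with illustrative numbers (not the study's actual data):

```python
import math

def crawford_howell_t(case_score, control_mean, control_sd, n_controls):
    """Crawford-Howell t-statistic for comparing a single case against
    a small normative sample; evaluated against t with n-1 degrees of
    freedom. The (n+1)/n correction accounts for sampling error in the
    control mean and SD."""
    denom = control_sd * math.sqrt((n_controls + 1) / n_controls)
    return (case_score - control_mean) / denom

# Illustrative: a case's nodal-efficiency score vs. 45 TD controls
t = crawford_howell_t(0.32, 0.45, 0.05, 45)
```

A case is flagged as deviant when the resulting t exceeds the critical value for n-1 (here 44) degrees of freedom at the chosen alpha level.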

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 53
814 EU-SOLARIS: The European Infrastructure for Concentrated Solar Thermal and Solar Chemistry Technologies

Authors: Vassiliki Drosou, Theoni Oikonomou

Abstract:

EU-SOLARIS will form a new legal entity to explore and implement improved rules and procedures for Research Infrastructures (RI) for Concentrated Solar Thermal (CST) and solar chemistry technologies, in order to optimize RI development and R&D coordination. It is expected to be the first of its kind, in which industrial needs and private funding will play a significant role. The key to the success of the EU-SOLARIS initiative will be the establishment of a new governance body, aided by sustainable financial models. EU-SOLARIS is expected to be an important tool, providing the most complete, high-quality scientific infrastructure portfolio at the international level and facilitating researchers' access to highly specialised research infrastructure through a single access point. This will be accomplished by linking the scientific communities, industry, and universities involved in the CST sector. The access offered by EU-SOLARIS will guarantee direct contact between experienced scientists and newcomers and interested students. The set of RIs participating in EU-SOLARIS will offer access to state-of-the-art infrastructures and high-quality services, enabling users to conduct high-quality research. Access to these facilities will contribute to the enhancement of the European research area by: -Opening installations to European and non-European scientists, from both academia and industry, thus improving co-operation. -Improving scientific critical mass in domains where knowledge is now widely dispersed. -Generating strong Europe-wide R&D project consortia, increasing the competitiveness of each member alone. EU-SOLARIS is being created in the framework of a European project, co-funded by the 7th Framework Programme of the European Union, whose aim is to foster and promote the scientific and technological development of CST and solar chemistry technologies. 
The primary objective of EU-SOLARIS is to contribute to improving the state of the art of these technologies, with the aim of preserving and reinforcing European leadership in this field, for which EU-SOLARIS is expected to be a valuable instrument. The scope, activities, objectives, current status, and vision of EU-SOLARIS are presented in this article, along with the rules, processes, and criteria regulating access to the research infrastructures included in EU-SOLARIS.

Keywords: concentrated solar thermal (CST) technology, renewable energy sources, research infrastructures, solar chemistry

Procedia PDF Downloads 238
813 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, many geometries remain to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. An evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions (elements of the function's domain) and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation through recombination and mutation. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data produced by the optimization; the weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtain the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated under all possible conditions to obtain the maxima and minima for a given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulation. HFSS is chosen for the simulations and results. 
MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions directly from MATLAB, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot-line characteristic impedance, stripline impedance, slot-line width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation pattern, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
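The evolutionary loop described above (fitness evaluation, selection, recombination, mutation) can be sketched in a few lines without HFSS in the loop. This is a toy stand-in, in Python for portability: the fitness function below is a placeholder for the simulated antenna figure of merit, and all parameter choices (population size, mutation rate) are illustrative assumptions:

```python
import random

def evolve(fitness, n_params, pop_size=30, generations=40,
           mutation_rate=0.1, seed=42):
    """Minimal real-coded genetic algorithm: tournament selection,
    one-point crossover, Gaussian mutation. In the paper's workflow
    the fitness call would be an HFSS simulation invoked via its API."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: keep the fitter candidate
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = p1[:cut] + p2[cut:]  # one-point crossover
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate
                     else g for g in child]  # Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy fitness: maximize -(sum of squares), optimum at the origin
best = evolve(lambda x: -sum(g * g for g in x), n_params=3)
```

In the actual workflow, each fitness evaluation is expensive (a full-wave simulation), which is why the abstract emphasizes parallel simulation runs and data logging for surrogate learning.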

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 110
812 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers seeking the best method for detecting normal signals versus abnormal ones. The data cover both genders, recording times vary from several seconds to several minutes, and every record is labeled normal or abnormal. Because of the limited positional accuracy and time span of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals versus the abnormal ones. 
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUCs of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yields better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its accuracy is limited in time and some of its information is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
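The HRV pipeline above, from R-R intervals to a return (Poincaré) map and features, can be illustrated with two standard time-domain measures; the paper's actual feature set is broader and includes nonlinear measures:

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV features from successive R-R intervals (ms):
    SDNN (overall variability) and RMSSD (short-term variability),
    plus the return-map pairs (RR_n, RR_n+1) used for Poincare plots."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    poincare = list(zip(rr_ms, rr_ms[1:]))
    return sdnn, rmssd, poincare

sdnn, rmssd, pm = hrv_features([800, 810, 790, 800])
```

Such features, linear and nonlinear alike, form the input vector that the MLP and SVM classifiers discriminate on.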

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 262
811 Exploring the Potential of Modular Housing Designs for the Emergency Housing Need in Türkiye after the February Earthquake in 2023

Authors: Hailemikael Negussie, Sebla Arın Ensarioğlu

Abstract:

In February 2023, Southeastern Türkiye and Northwestern Syria were hit by two consecutive high-magnitude earthquakes, leaving thousands dead and thousands more homeless. The housing crisis in the affected areas calls for a fast and qualified solution. Among the possible solutions is the use of modular designs to rebuild the affected cities. Modular designs are prefabricated building components that can be quickly and efficiently assembled on-site, making them well suited to building structures faster and with higher quality. These structures are flexible, adaptable, and customizable to meet the specific needs of the inhabitants, in addition to being more energy-efficient and sustainable. The prefabricated nature also ensures that product quality can easily be controlled. The collapse of most buildings during the earthquakes was found to be due to a lack of quality control during the construction stage; using modular designs allows greater control over the quality of the construction materials being used. The use of modular designs for a project of this scale presents some challenges, including the high upfront cost of designing and manufacturing the components. However, if implemented correctly, modular designs can offer an effective and efficient solution to the urgent housing need. The aim of this paper is to explore the potential of modular housing for mid- and long-term earthquake-resistant housing needs in the affected disaster zones after the earthquakes of February 2023. Within the scope of this paper, the adaptability of modular, prefabricated housing designs to the post-disaster environment and the advantages and disadvantages of this system will be examined, along with elements such as the current conditions of the region where the destruction occurred, climatic data, and topographic factors. 
Additionally, the paper will examine examples of similar local and international modular post-earthquake housing projects. The region is projected to enter a rapid reconstruction phase in the coming period. Therefore, this paper will present a proposal for a system that can support safe and healthy urbanization policies, without creating new grievances, while meeting the housing needs of the people in the affected regions.

Keywords: post-disaster housing, earthquake-resistant design, modular design, housing, Türkiye

Procedia PDF Downloads 88
810 Averting a Financial Crisis through Regulation, Including Legislation

Authors: Maria Krambia-Kapardis, Andreas Kapardis

Abstract:

The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with a growing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, governments need to balance the two approaches to financial regulation in their endeavor to achieve both economic efficiency and financial stability. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence, or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. 
In addition, the regulation of professions, including accountants and auditors, plays a crucial role in the financial management of companies. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as the preceding discussion shows, this is unlikely to be achieved in the foreseeable future, since 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remains a major challenge for governments, corporations, and professions alike.

Keywords: financial crisis, legislation, regulation, financial regulation

Procedia PDF Downloads 398
809 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, a pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMSs as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANNs), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and in dealing with large amounts of uncertain data. 
Typical regression models, which require a pre-defined relationship, can be replaced by an ANN, which has been found to be an appropriate tool for predicting the various pavement performance indices as functions of different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by The National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into a PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are the damages related to alligator cracking, swelling, rutting, and potholes.
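The model's 33-element input (11 damage types at 3 severity levels) can be sketched as a simple feature encoding. The names and ordering below are illustrative assumptions, not the authors' exact coding:

```python
# Hypothetical ordering of the 11 damage types and 3 severity levels
DAMAGE_TYPES = ["alligator_cracking", "swelling", "rutting", "block_cracking",
                "long_transverse_cracking", "edge_cracking", "shoving",
                "raveling", "potholes", "patching", "lane_drop_off"]
SEVERITIES = ["low", "medium", "high"]

def feature_vector(distress_pct):
    """Flatten {(damage_type, severity): % of pavement area} into the
    33-element input vector (11 x 3) an ANN of the kind described
    above would consume; unreported distresses default to 0."""
    return [distress_pct.get((d, s), 0.0)
            for d in DAMAGE_TYPES for s in SEVERITIES]

vec = feature_vector({("potholes", "high"): 2.5, ("rutting", "low"): 8.0})
```

The network then maps this vector to a single output clamped to the PCI range of 0-100.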

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
808 Synthesis of Deformed Nuclei 260Rf, 261Rf and 262Rf in the Decay of 266Rf*Formed via Different Fusion Reactions: Entrance Channel Effects

Authors: Niyti, Aman Deep, Rajesh Kharab, Sahila Chopra, Raj. K. Gupta

Abstract:

Relatively long-lived transactinide elements (i.e., elements with atomic number Z ≥ 104) up to Z = 108 have been produced in nuclear reactions between low-Z projectiles (C to Al) and actinide targets. Cross sections have been observed to decrease steeply with increasing Z. Recently, production cross sections of several picobarns have been reported for comparatively neutron-rich nuclides of elements 112 through 118 produced via hot fusion reactions of 48Ca with actinide targets. Some of these heavy nuclides are reported to have lifetimes on the order of seconds or longer. The relatively high cross sections in these hot fusion reactions are not fully understood, which has renewed interest in systematic studies of heavy-ion reactions with actinide targets. The main aim of this work is to understand the dynamics of the hot fusion reactions 18O+248Cm and 22Ne+244Pu (carried out at RIKEN and TASCA, respectively) using the collective clusterization technique, by studying the decay of the compound nucleus 266Rf* into the 4n, 5n, and 6n neutron evaporation channels. Here we extend our earlier study of the excitation functions (EFs) of 266Rf*, formed in the fusion reaction 18O+248Cm, based on the Dynamical Cluster-decay Model (DCM) with the pocket formula for the nuclear proximity potential, to other nuclear interaction potentials derived from the Skyrme energy density formalism (SEDF) within the semiclassical extended Thomas-Fermi (ETF) approach, and we also study entrance-channel effects by considering the synthesis of 266Rf* in the 22Ne+244Pu reaction. The Skyrme forces used are the old force SIII and the new forces GSkI and KDE0(v1). The EFs for the production of the 260Rf, 261Rf, and 262Rf isotopes via the 6n, 5n, and 4n decay channels from the 266Rf* compound nucleus are studied at Elab = 88.2 to 125 MeV, including quadrupole deformations β2i and 'hot-optimum' orientations θi. 
The calculations are made within the DCM, where the neck length ∆R, representing the relative separation distance between the two fragments and/or clusters Ai, is the only parameter and assimilates the neck-formation effects.

Keywords: entrance channel effects, fusion reactions, skyrme force, superheavy nucleus

Procedia PDF Downloads 253
807 Water Reclamation and Reuse in Asia’s Largest Sewage Treatment Plant

Authors: Naveen Porika, Snigdho Majumdar, Niraj Sethi

Abstract:

Water, food, and energy security are emerging as increasingly important and vital issues for India and the world. The Hyderabad urban agglomeration (HUA), centered on the capital city of Andhra Pradesh State in India, is the country's sixth largest, with a population of about 8.2 million. The Musi River, a tributary of the Krishna River, flows from west to east right through the heart of Hyderabad. About 80% of the water used by people is released back as sewage, which flows into the Musi every day with detrimental effects on the environment and the people downstream of the city. The average daily sewage generated in Hyderabad city is 950 Million Litres per Day (MLD); however, treatment capacity exists for only 541 MLD, and only 407 MLD of sewage is actually treated. As a result, 543 MLD of sewage flows into the Musi river daily. Hyderabad's current estimated water demand stands at 320 Million Gallons per Day (MGD), while its installed capacity is merely 270 MGD; by 2020 the estimated demand will grow to 400 MGD. There is a huge gap between current supply and demand, and this is likely to widen by 2021. Developing new fresh water sources is a challenge for Hyderabad, as such sources are few and far from the city (about 150-200 km) and require excessive pumping. The constraints presented above make the conventional alternatives for supply augmentation unsustainable and unattractive. One dependable and captive source of easily available water is treated sewage: with proper treatment, water of the desired quality can be recovered from wastewater for recycling and reuse. Hyderabad's Amberpet sewage treatment plant, with a capacity of 339 MLD, is Asia's largest. Standard basic engineering modules of 30 MLD, 60 MLD, 120 MLD, and 180 MLD for tertiary sewage treatment have been developed and are utilized to develop a sewage reclamation and reuse model for this plant. 
This paper focuses on Hyderabad's water supply and demand, sewage generation and treatment, the technical aspects of tertiary sewage treatment, and the utilization of the developed standard modules for reclamation and reuse of treated sewage to overcome the deficit of 130 MGD projected by 2021.
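The supply-demand arithmetic quoted in the abstract can be sketched as follows. All quantities are taken from the text; the only added assumption is the standard imperial-gallon conversion factor:

```python
# Sketch of the sewage balance and the water supply/demand gap quoted above.
MLD_PER_MGD = 4.546  # 1 million (imperial) gallons per day ~ 4.546 MLD

sewage_generated = 950   # MLD produced daily
sewage_treated = 407     # MLD actually treated
untreated = sewage_generated - sewage_treated
print(f"Untreated sewage entering the Musi: {untreated} MLD")  # 543 MLD

demand_2021 = 400        # MGD, projected demand
installed = 270          # MGD, installed supply capacity
deficit_mgd = demand_2021 - installed
print(f"Projected 2021 deficit: {deficit_mgd} MGD "
      f"(about {deficit_mgd * MLD_PER_MGD:.0f} MLD)")          # 130 MGD
```

The 130 MGD deficit is exactly the shortfall the reclamation-and-reuse modules are meant to cover, which is why the untreated 543 MLD stream is an attractive captive source.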

Keywords: water reclamation, reuse, Andhra Pradesh, Hyderabad, Musi River, sewage, demand and supply, recycle, Amberpet, 339 MLD, engineering modules, tertiary treatment

Procedia PDF Downloads 617
806 SARS-CoV-2: Prediction of Critical Charged Amino Acid Mutations

Authors: Atlal El-Assaad

Abstract:

Viruses change over time through mutations, resulting in new variants that may persist or disappear. A mutation refers to an actual change in the virus's genetic sequence, while a variant is a viral genome that may contain one or more mutations. Critical mutations may make the virus more transmissible, cause more severe disease, and render it more resistant to diagnostics, therapeutics, and vaccines. Thus, variants carrying such mutations may increase the risk to human health and are considered variants of concern (VOC). Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) - the contagious, positive-sense single-stranded RNA virus that caused coronavirus disease 2019 (COVID-19) - has been studied thoroughly, and several variants have been identified across the world along with their corresponding mutations. SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins, but prior studies and vaccine development have focused on genetic mutations in the S protein due to its vital role in allowing the virus to attach to and fuse with the membrane of a host cell. Specifically, subunit S1 catalyzes attachment, whereas subunit S2 mediates fusion. In this perspective, we studied all charged amino acid mutations of the SARS-CoV-2 viral spike protein S1 when bound to antibody CC12.1 in a crystal structure and assessed the effect of the different mutations. We generated all missense mutants of the charged amino acids (AAs) within the SARS-CoV-2:CC12.1 complex model. To generate the family of mutants in each complex, we mutated every charged amino acid to each of the other charged amino acids (lysine (K), arginine (R), glutamic acid (E), and aspartic acid (D)) and studied the new binding of the complex after each mutation. We applied Poisson-Boltzmann electrostatic calculations feeding into free energy calculations to determine the effect of each mutation on binding.
After analyzing our data, we identified the charged amino acids that are key for binding. Furthermore, we validated these findings against published experimental genetic data. Our results are the first to propose, in silico, potential life-threatening mutations of SARS-CoV-2 beyond the mutations present in the five common variants found worldwide.
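The charged-residue scan described above can be sketched in a few lines. This is not the authors' code: it only illustrates the enumeration step, in which every charged position is mutated to each of the other three charged residues; the sequence fragment is a hypothetical placeholder.

```python
# Enumerate all charged-residue missense mutants of a protein sequence,
# mirroring the mutation scan described in the abstract (K, R, E, D only).

CHARGED = ["K", "R", "E", "D"]  # Lys, Arg, Glu, Asp

def charged_mutants(sequence):
    """Yield (position, wild_type, mutant) for every charged residue."""
    for pos, wt in enumerate(sequence):
        if wt in CHARGED:
            for mut in CHARGED:
                if mut != wt:
                    yield (pos, wt, mut)

# Hypothetical spike-like fragment, for illustration only:
fragment = "NKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQ"
mutants = list(charged_mutants(fragment))
# Each charged site contributes exactly 3 mutants (e.g. K -> R, E, D);
# each mutant complex would then be scored by Poisson-Boltzmann/free-energy
# calculations to rank its effect on antibody binding.
```

In the actual study the scoring step, not shown here, is what distinguishes neutral substitutions from the critical ones.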

Keywords: SARS-CoV-2, variant, ionic amino acid, protein-protein interactions, missense mutation, AESOP

Procedia PDF Downloads 113
805 Stress-Strain Behavior of Banana Fiber Reinforced and Biochar Amended Compressed Stabilized Earth Blocks

Authors: Farnia Nayar Parshi, Mohammad Shariful Islam

Abstract:

Though earth construction is an ancient technology, researchers are working on increasing its strength by adding different types of stabilizers. Ordinary Portland cement for sandy soil and lime for clayey soil are very popular practices, recommended by various authorities for making stabilized blocks with satisfactory performance. The addition of these additives improves compressive strength but fails to improve ductility, whereas the addition of both synthetic and natural fibers increases both compressive strength and ductility. Studies are being conducted to make earth blocks more cost-effective, energy-efficient and sustainable. In this experiment, banana fiber, an agricultural waste, and biochar are used to study the compressive stress-strain behavior of earth blocks made with four types of soil: low plastic clay, sandy low plastic clay, very fine sand, and medium-to-fine sand. Biochar is a charcoal-like carbon usually produced from organic or agricultural waste at high temperatures under controlled conditions, a process called pyrolysis. In this experimental study, biochar produced from wood flakes at around 400 °C was collected from BBI (Bangladesh Biochar Initiative). Locally available PPC (Portland pozzolana cement) was used. Earth blocks of 5 cm × 5 cm × 5 cm were made with eight different combinations, such as bare soil; soil with 6% cement; soil with 6% cement and 5% biochar; soil with 6% cement, 5% biochar and 1% fiber; soil with 1% fiber; soil with 5% biochar and 1% fiber; and soil with 6% cement and 1% fiber. All samples were prepared with 10-12% water content. Uniaxial compressive strength tests were conducted on 21-day-old earth blocks. The stress-strain diagrams show that the addition of banana fiber improved compressive strength drastically, but the combined effect of fiber and biochar differs with soil type.
For clayey soil, 6% cement and 1% fiber give a maximum compressive strength of 991 kPa, and for very fine sand, a combination of 5% biochar, 6% cement and 1% fiber gives a maximum compressive strength of 522 kPa as well as the best ductility. For medium-to-fine sand, 6% cement and 1% fiber give the best result among all combinations, 1530 kPa. The addition of fiber increases not only ductility but also compressive strength. The effect of biochar with fiber varies with the soil type.

Keywords: banana fiber, biochar, cement, compressed stabilized earth blocks, compressive strength

Procedia PDF Downloads 121
804 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. The Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve tighter evolution time bounds and to generalize them to mixed states. However, we are yet to know: (i) What is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is the need of the hour. It would be much more interesting if one could relate various properties of states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such understanding may help us to control and manipulate the speed of communication, apart from particular cases such as the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) Can we relate such quantities to the QSL?
In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of the quantum speed limit is how it behaves under classical mixing and partial elimination of states, since this may help us to choose a state or an evolution operator appropriately in order to control the speed limit. We address this question and show that the product of the evolution time bound and the quantum part of the uncertainty in energy, i.e., the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
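For reference, the "quantum part of the uncertainty" invoked above is the Wigner-Yanase skew information, which for a state $\rho$ evolving under a Hamiltonian (or, more generally, the generator of the evolution) $H$ reads

```latex
I(\rho, H) \;=\; -\tfrac{1}{2}\,\operatorname{Tr}\!\left(\left[\sqrt{\rho},\, H\right]^{2}\right).
```

For pure states it reduces to the ordinary variance of $H$, and it vanishes exactly when $\rho$ commutes with $H$, i.e. when the state carries no coherence (asymmetry) with respect to the evolution operator; this is what makes it a natural candidate for setting the speed limit.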

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase skew information

Procedia PDF Downloads 353
803 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor

Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst

Abstract:

Prediction of the two-phase water mixture level during fast depressurization of the reactor pressure vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which at first may be considered conservative does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. When the pressure decreases, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients, together with the sudden emergence of void in the region initially occupied by liquid, cause elevation of the two-phase mixture. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA), a dedicated test rig for simulation of KERENA, a new boiling water reactor design by Framatome. The models are based on dynamic mass and energy equations. They are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam, or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core, in a similar manner to how it is simulated at INKA.
The comparison of the simulation results against the reference data shows good agreement.
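The mechanism behind the level swell can be illustrated with a back-of-the-envelope estimate. The sketch below is not the paper's Modelica model: it is a single-volume, isenthalpic flash calculation with rounded steam-table saturation values, using the homogeneous (no-slip) void fraction, i.e. the simplest of the void-fraction approaches such models compare.

```python
# Estimate the void fraction and mixture-level swell caused by a sudden
# depressurization of saturated liquid water (isenthalpic flash).
# Rounded saturation properties: {pressure_bar: (h_f, h_fg, v_f, v_g)},
# with enthalpies in kJ/kg and specific volumes in m^3/kg.
SAT = {
    70: (1267.0, 1505.1, 0.001351, 0.02737),
    60: (1213.3, 1571.0, 0.001319, 0.03244),
}

def flash_void_fraction(p_start, p_end):
    h_start = SAT[p_start][0]                    # sat. liquid enthalpy before
    h_f, h_fg, v_f, v_g = SAT[p_end]
    x = (h_start - h_f) / h_fg                   # steam quality after flash
    alpha = x * v_g / (x * v_g + (1 - x) * v_f)  # homogeneous void fraction
    return x, alpha

x, alpha = flash_void_fraction(70, 60)           # 70 bar -> 60 bar transient
collapsed_level = 2.0                            # m of liquid (collapse level)
swell_level = collapsed_level / (1 - alpha)      # mixture level with void held up
```

Even a modest 10 bar drop flashes only a few percent of the mass to steam, yet the large vapor specific volume yields a void fraction near 0.5 and nearly doubles the mixture level, which is why the void-fraction model chosen dominates the predicted swell.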

Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics

Procedia PDF Downloads 210
802 An Appraisal of Blended Learning Approach for English Language Teaching in Saudi Arabia

Authors: H. Alqunayeer, S. Zamir

Abstract:

Blended learning, an amalgamation of online learning and the traditional face-to-face approach, is a new approach that may result in outstanding outcomes in the realm of teaching and learning. The dexterity and effectiveness offered by the e-learning experience cannot be guaranteed in a traditional classroom, whereas one-to-one interaction, an essential element of learning, can only be found in a traditional classroom. In recent years, a spectacular expansion in the incorporation of technology into language teaching and learning has been observed in many universities of Saudi Arabia. Some universities recognize the importance of blending face-to-face and online instruction in language pedagogy; Qassim University is one of the many universities adopting the Blackboard learning management system (LMS), having introduced this new mode of teaching and learning in 2015. Although the experience is still immature, great pedagogical transformations are anticipated in the university through this new approach. This paper examines the role of blended language learning, with particular reference to the influence of the Blackboard LMS on the development of English language learning for EFL learners registered in the Bachelor of English language program. The paper explores three main areas: (i) the present status of blended learning in the educational process in Saudi Arabia, especially at Qassim University, through a survey report on the number of Blackboard LMS training courses conducted for the male and female teachers at the various colleges of Qassim University; (ii) a survey of teachers' perceptions of the utility, application and outcomes of using the blended learning approach in teaching English language skills courses; and (iii) the students' views on the efficiency of the blended learning approach in learning English language skills courses.
In addition, the students' limitations and challenges related to the experience of blended learning via Blackboard are analyzed, and the suggestions and recommendations offered by the language learners are considered. The study is empirical in nature. In order to gather data on the aforementioned areas, the survey questionnaire method was used: to study students' perceptions, a 5-point Likert-scale questionnaire was distributed to 200 students of the English department registered in the Bachelor of English program (levels 5 through 8). Teachers' views were surveyed by interviewing 25 EFL teachers skilled in using the Blackboard LMS in their lectures. To ensure the validity and reliability of the questionnaire, the inter-rater approach and Cronbach's alpha analysis were used, respectively. Analysis of variance (ANOVA) was used to analyze the students' perceptions of the productivity of the blended approach in learning English language skills. The analysis of feedback from Saudi teachers and students about the usefulness, ingenuity, and productivity of blended learning via the Blackboard LMS highlights the need to encourage and expand the implementation of this new approach in English language teaching in Saudi Arabia, in order to foster a congenial learning environment. Furthermore, it is hoped that the propositions and practical suggestions offered by the study will be useful for other similar learning environments.
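The reliability check mentioned above, Cronbach's alpha for a Likert-scale questionnaire, follows directly from its standard definition. The sketch below uses hypothetical responses, not the study's data:

```python
# Cronbach's alpha from its standard formula:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
from statistics import variance

def cronbach_alpha(scores):
    """scores: list of rows, one per respondent; columns are questionnaire items."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variance
    total_var = variance([sum(row) for row in scores])   # variance of totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items):
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
]
alpha = cronbach_alpha(responses)  # values >= 0.7 are commonly deemed acceptable
```

A high alpha indicates that the items measure the same underlying construct consistently, which is the precondition for the subsequent ANOVA on the pooled perception scores.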

Keywords: blended learning, Blackboard learning management system, English as a foreign language (EFL) learners, EFL teachers

Procedia PDF Downloads 156
801 Levels of Heavy Metals and Arsenic in Sediment and in Clarias gariepinus of Lake Ngami

Authors: Nashaat Mazrui, Oarabile Mogobe, Barbara Ngwenya, Ketlhatlogile Mosepele, Mangaliso Gondwe

Abstract:

Over the last several decades, the world has seen a rapid increase in activities such as deforestation, agriculture, and energy use. Subsequently, trace elements are being deposited into our water bodies, where they can accumulate to toxic levels in aquatic organisms and be transferred to humans through fish consumption. Thus, though fish is a good source of essential minerals and omega-3 fatty acids, it can also be a source of toxic elements. Monitoring trace elements in fish is important for the proper management of aquatic systems and the protection of human health. The aim of this study was to determine the concentrations of trace elements in sediment and in muscle tissues of Clarias gariepinus at Lake Ngami, in the Okavango Delta in northern Botswana, during low floods. The fish were bought from local fishermen, and samples of muscle tissue were acid-digested and analyzed for iron, zinc, copper, manganese, molybdenum, nickel, chromium, cadmium, lead, and arsenic using inductively coupled plasma optical emission spectroscopy (ICP-OES). Sediment samples were also collected and analyzed for the same elements and for organic matter content. The results show that in all samples, iron was found in the greatest amount, while cadmium was below the detection limit. Generally, the concentrations of elements were higher in sediment than in fish, except for zinc and arsenic: while the concentration of zinc was similar in the two media, arsenic was almost three times higher in fish than in sediment. To evaluate the risk to human health from fish consumption, the target hazard quotient (THQ) and cancer risk for an average adult in Botswana, sub-Saharan Africa, and the riparian communities in the Okavango Delta were calculated for each element. All elements except arsenic were found to be well below regulatory limits and do not pose a threat to human health. The results suggest that other benthic-feeding fish species could potentially have high arsenic levels too.
This has serious implications for human health, especially for riparian households, for whom fish is a key component of food and nutrition security.
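The THQ screening used above follows the standard USEPA-style formula. The sketch below shows its structure; the arsenic concentration and intake parameters are illustrative placeholders, not the study's measured values:

```python
# Target hazard quotient (THQ) for chronic, non-carcinogenic exposure via
# fish consumption. A THQ above 1 flags a potential health risk.

def target_hazard_quotient(c_metal, fir=50.0, ef=365, ed=70,
                           rfd=3e-4, bw=60.0, ta=365 * 70):
    """
    c_metal : element concentration in fish muscle, mg/kg wet weight
    fir     : fish ingestion rate, g/person/day
    ef      : exposure frequency, days/year
    ed      : exposure duration, years
    rfd     : oral reference dose, mg/kg/day (3e-4 for inorganic arsenic)
    bw      : body weight, kg
    ta      : averaging time for non-carcinogens, days
    """
    # 1e-3 converts the g/day ingestion rate to kg/day.
    return (ef * ed * fir * c_metal) / (rfd * bw * ta) * 1e-3

thq_as = target_hazard_quotient(c_metal=0.5)  # hypothetical 0.5 mg/kg arsenic
```

With these placeholder inputs the quotient exceeds 1, illustrating how a modest arsenic concentration combined with a low reference dose and regular fish consumption can drive the risk flag, exactly the pattern reported for arsenic in the study.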

Keywords: arsenic, African sharptooth catfish, Okavango Delta, trace elements

Procedia PDF Downloads 192
800 Numerical Modelling of Wind Dispersal of Seeds of the Bromeliad Tillandsia recurvata (L.) L. Attached to Electric Power Lines

Authors: Bruna P. De Souza, Ricardo C. De Almeida

Abstract:

In some cities of the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes and other artificial supports. In this study, a numerical model was developed to simulate the wind dispersal of seeds of the species Tillandsia recurvata, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, since the region is considered already infested. The model simulates the dispersal of each individual seed, integrating parameters of the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the period 2012 to 2015. The dispersal model also incorporates the approximate number of bromeliads and source-height data collected from the most infested electric power lines. The seeds' terminal velocity, an important input that was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determinant factor both in transporting the seeds to distances beyond 200 meters and in introducing random variability into the seed dispersal process. This variability was added to the model through the application of an inverse fast Fourier transform to the energy spectra of the wind velocity components, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of the bromeliad seed trajectory. Moreover, the effect of atmospheric turbulence and the dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area.
It is important to mention that this model could be applied to any species and location, as long as the seeds' biological data and meteorological data for the region of interest are available.
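The random-phase inverse-FFT step described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the spectrum shape is a generic inertial-range placeholder rather than the boundary-layer spectrum estimated from WRF output, and the inverse DFT is written out directly for clarity.

```python
# Synthesize a zero-mean turbulent velocity fluctuation time series by giving
# a prescribed energy spectrum random phases and inverse-transforming it.
import cmath
import math
import random

random.seed(42)
N = 64                      # samples in the synthetic series
dt = 1.0                    # s, sampling interval

def spectrum(f):
    """Placeholder one-sided spectrum, f^(-5/3) inertial-range shape."""
    return f ** (-5.0 / 3.0)

# Fourier coefficients: amplitude from the spectrum, phase uniformly random.
coeffs = [0.0] * N          # f = 0 component kept at zero -> zero-mean signal
for k in range(1, N // 2):
    f = k / (N * dt)
    amp = math.sqrt(spectrum(f))
    phase = random.uniform(0.0, 2.0 * math.pi)
    coeffs[k] = amp * cmath.exp(1j * phase)
    coeffs[N - k] = coeffs[k].conjugate()   # Hermitian symmetry -> real signal

# Inverse DFT (numpy.fft.ifft would normally be used instead).
u_prime = [sum(coeffs[k] * cmath.exp(2j * math.pi * k * n / N)
               for k in range(N)).real / N
           for n in range(N)]
# u_prime is a zero-mean fluctuation to superpose on the WRF mean wind
# when integrating each seed's trajectory.
```

Because each realization draws fresh random phases, repeating the synthesis for every seed reproduces the stochastic spread of trajectories that turbulence introduces into the dispersal process.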

Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind

Procedia PDF Downloads 141
799 Potency of Some Dietary Acidifiers on Productive Performance and Controlling Salmonella enteritidis in Broilers

Authors: Mohamed M. Zaki, Maha M. Hady

Abstract:

Salmonella spp. have been categorized as among the world's biggest threats to human health, and poultry products are the most frequently incriminated sources. In Egypt, S. enteritidis and S. typhimurium have been found to be the most prevalent serovars in poultry farms. It is recommended to eliminate salmonella from the living bird by combating salmonella contamination in feed in order to establish a healthy gut. Feed acidifiers are a group of feed additives containing low-molecular-weight organic acids and/or their salts, which act as performance promoters by lowering the pH in the gut, optimizing digestion and inhibiting bacterial growth. Organic acids in pure form, though effective in feed, are difficult to handle in feed mills, as they are corrosive and produce greater losses during the pelleting process. The current study aimed to evaluate the impact of incorporating sodium diformate (SDF) and a commercial acidifier, CA (a mixture of butyric and propionic acids and their ammonium salts), at 0.4% dietary levels on broiler performance and on the control of S. enteritidis infection. Two hundred and seventy unsexed Cobb chickens were allotted to one of three treatments (90 per group): the control (no acidifier; C- and C+), the 0.4% SDF (SDF- and SDF+), and the 0.4% CA (CA- and CA+) dietary levels, for 35 days. Before allocation of the groups, ten extra birds and a diet sample were bacteriologically examined to confirm the absence of salmonella contamination. The birds were raised in deep-litter separated pens and had free access to feed and water at all times. The experimentally formulated diets were kept at 4 °C. After 24 h of access to the different dietary treatments, all the birds in the positive groups (n = 15 per replicate) were inoculated intra-crop with 0.2 ml of a 24 h broth culture of S. enteritidis containing 1 × 10⁷ organisms, while the negative groups were inoculated with the same amount of negative broth; a second inoculation was done at 22 d of age.
Cloacal swabs were collected individually from all birds 2 h pre-inoculation to confirm the absence of salmonella, and then 1, 3, 5, 7, and 21 days post-inoculation to recover salmonella. Performance parameters (body weight gain and feed efficiency) were calculated. Mortalities were recorded, and re-isolation of salmonella was carried out to confirm that the recovered organisms were the inoculated strain. The results revealed that dietary acidification with sodium diformate significantly improved broiler performance and tended to produce heavier birds compared to the negative control and CA groups. Moreover, the dietary inclusion of both acidifiers at the 0.4% level eliminated mortalities completely at the relevant inoculation times. Regarding the shedding of S. enteritidis in the positive groups, the SDF treatment resulted in a significant (p<0.05) cessation of shedding at 3 days post-inoculation, compared to 7 days post-inoculation for the CA group. In conclusion, sodium diformate at a 0.4% dietary level in broiler diets has a valuable effect not only on broiler performance but also in eliminating S. enteritidis from feed, the main source of salmonella contamination in poultry farms.
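The two performance parameters calculated in the trial, body weight gain and feed efficiency, are simple ratios. The sketch below uses hypothetical numbers, not the study's data, with feed efficiency expressed as the feed conversion ratio (FCR):

```python
# Body weight gain and feed conversion ratio for a broiler trial period.

def performance(initial_bw_g, final_bw_g, feed_intake_g):
    gain = final_bw_g - initial_bw_g   # body weight gain, g/bird
    fcr = feed_intake_g / gain         # g feed per g gain; lower is better
    return gain, fcr

# Hypothetical 35-day pen averages for two groups:
gain_sdf, fcr_sdf = performance(42, 2100, 3290)  # sodium diformate group
gain_ctl, fcr_ctl = performance(42, 1980, 3295)  # control group
# A lower FCR for the SDF group would reflect the improved feed efficiency
# reported for dietary acidification.
```

Comparing FCR rather than raw intake normalizes for the heavier birds the acidified diets tend to produce.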

Keywords: acidifiers, broilers, Salmonella spp., sodium diformate

Procedia PDF Downloads 285