Search results for: photovoltaic power generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8838

6468 Comparison of the Effects of Continuous Flow Microwave Pre-Treatment with Different Intensities on the Anaerobic Digestion of Sewage Sludge for Sustainable Energy Recovery from Sewage Treatment Plant

Authors: D. Hephzibah, P. Kumaran, N. M. Saifuddin

Abstract:

Anaerobic digestion is a well-known technique for sustainable energy recovery from sewage sludge. However, sludge digestion is restricted by certain factors. Pre-treatment methods have been established in various publications as a promising way to improve the digestibility of sewage sludge and to enhance the biogas generated, which can be used for energy recovery. In this study, continuous flow microwave (MW) pre-treatments with different intensities were compared using 5 L semi-continuous digesters at a hydraulic retention time of 27 days. We focused on the effects of MW intensity on the sludge solubilization, sludge digestibility, and biogas production of the untreated and MW pre-treated sludge. The MW pre-treatment increased the ratio of soluble to total chemical oxygen demand (sCOD/tCOD) and the volatile fatty acid (VFA) concentration. The total volatile solid (TVS) removal efficiency and tCOD removal efficiency also increased during digestion of the MW pre-treated sewage sludge compared with the untreated sewage sludge, and the biogas yield rose accordingly. A higher MW power level and longer irradiation time generally enhanced biogas generation, which has potential for sustainable energy recovery from sewage treatment plants. However, the net energy balance shows that the MW pre-treatment leads to negative net energy production.
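The abstract's closing point, that MW pre-treatment can still yield negative net energy, comes down to a simple balance between the electricity recoverable from the extra biogas and the microwave electrical input. A minimal sketch of that balance; all values are hypothetical illustrations, not the paper's data:

```python
# Hypothetical net energy balance for MW pre-treatment (illustrative values only).

def net_energy(extra_biogas_m3, ch4_fraction, lhv_ch4_kwh_per_m3,
               chp_efficiency, mw_power_kw, irradiation_h):
    """Return net energy (kWh): recovered electricity minus MW input."""
    recovered = extra_biogas_m3 * ch4_fraction * lhv_ch4_kwh_per_m3 * chp_efficiency
    consumed = mw_power_kw * irradiation_h
    return recovered - consumed

# Illustrative numbers (not from the paper): 2 m^3 extra biogas, 60% CH4,
# LHV of methane ~9.97 kWh/m^3, 35% CHP electrical efficiency,
# a 1 kW magnetron running for 6 h.
balance = net_energy(2.0, 0.60, 9.97, 0.35, 1.0, 6.0)
print(f"Net energy balance: {balance:.2f} kWh")  # negative -> net energy loss
```

With these illustrative inputs the pre-treatment consumes more electricity than the extra biogas returns, mirroring the negative balance the abstract reports.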

Keywords: anaerobic digestion, biogas, microwave pre-treatment, sewage sludge

Procedia PDF Downloads 304
6467 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data.
SPARK provides a next-generation biomedical data management solution that is based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e. importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting edge analysis approaches that have previously been out of reach for many medical researchers.

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 254
6466 Generation of Catalytic Films of Zeolite Y and ZSM-5 on FeCrAlloy Metal

Authors: Rana Th. A. Al-Rubaye, Arthur A. Garforth

Abstract:

This work details the generation of thin films of structured zeolite catalysts (ZSM-5 and Y) onto the surface of a metal substrate (FeCrAlloy) using in-situ hydrothermal synthesis. In addition, the zeolite Y is post-synthetically modified by acidified ammonium ion exchange to generate US-Y. Finally, the catalytic activity of the structured ZSM-5 catalyst film (Si/Al = 11, thickness 146 µm) and the structured US-Y catalyst film (Si/Al = 8, thickness 23 µm) was compared with the pelleted powder forms of ZSM-5 and US-Y catalysts of similar Si/Al ratios. The structured catalyst films have been characterised using a range of techniques, including X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray analysis (EDX) and thermogravimetric analysis (TGA). The transition from oxide-on-alloy wires to hydrothermally synthesised, uniformly zeolite-coated surfaces was followed using SEM and XRD. In addition, the robustness of the prepared coating was confirmed by subjecting it to thermal cycling (ambient to 550°C). The cracking of n-heptane over the pellets and structured catalysts for both ZSM-5 and Y zeolite showed very similar product selectivities for similar amounts of catalyst, with an apparent activation energy of around 60 kJ mol⁻¹. This paper demonstrates that structured catalysts can be manufactured with excellent zeolite adherence and, when suitably activated/modified, give comparable cracking results to the pelleted powder forms. These structured catalysts will improve temperature distribution in highly exothermic and endothermic catalysed processes.
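An apparent activation energy of about 60 kJ mol⁻¹ would typically be extracted from the slope of an Arrhenius plot of cracking rate constants. A minimal sketch of that calculation; the rate constants and temperatures below are hypothetical, chosen only so the numbers work out:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def apparent_ea(k1, T1, k2, T2):
    """Apparent activation energy (J/mol) from two rate constants via the
    Arrhenius relation: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical n-heptane cracking rate constants at 673 K and 723 K, with k2
# constructed so the slope reproduces an apparent Ea of roughly 60 kJ/mol.
k1, T1 = 1.0, 673.0
k2 = k1 * math.exp(-(60e3 / R) * (1 / 723.0 - 1 / 673.0))
print(f"Ea ≈ {apparent_ea(k1, T1, k2, 723.0) / 1e3:.1f} kJ/mol")
```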

Keywords: FeCrAlloy, structured catalyst, zeolite Y, zeolite ZSM-5

Procedia PDF Downloads 366
6465 India-Afghanistan Relations Post-9/11

Authors: Saifurahman Fayiz

Abstract:

Afghanistan's geo-strategic and geo-political location has long commanded the attention of Indian government policy. Afghanistan maintains durable and wide-ranging economic, historical, military, and cultural ties with its neighbour India, which has enjoyed friendly relations with Afghanistan since 1947. After the collapse of the Taliban regime, India and Afghanistan resumed diplomatic relations, and the relationship between the two countries remained friendly and stable. The objective of this research is to study India-Afghanistan relations from 2001 to 2021 from different aspects. The study employs a qualitative, descriptive research method. The findings suggest that India should expand its soft power in Afghanistan and re-evaluate its foreign policy there.

Keywords: relation, policy, soft power, sector

Procedia PDF Downloads 150
6464 An Investigation into the Isolation and Bandwidth Characteristics of X-Band Chireix Power Amplifier Combiners

Authors: Daniel P. Clayton, Edward A. Ball

Abstract:

This paper describes an investigation into the isolation characteristics and bandwidth performance of RF combiners that are used as part of Chireix PA architectures, designed for use in the X-Band range of frequencies. Combiner designs investigated are the typical Chireix and Wilkinson configurations which also include simulation of the Wilkinson using manufacturer’s data for the isolation resistor. Another simulation was the less common approach of using a Branchline coupler to form the combiner, as well as simulation results from adding an additional stage. This paper presents the findings of this investigation and compares the bandwidth performance and isolation characteristics to determine suitability.

Keywords: bandwidth, Chireix, couplers, outphasing, power amplifiers, Wilkinson, X-Band

Procedia PDF Downloads 243
6463 Sensitivity Analysis of External-Rotor Permanent Magnet Assisted Synchronous Reluctance Motor

Authors: Hadi Aghazadeh, Seyed Ebrahim Afjei, Alireza Siadatan

Abstract:

In this paper, a proper approach is taken to assess a set of the most effective rotor design parameters for an external-rotor permanent magnet assisted synchronous reluctance motor (PMaSynRM) and therefore to tackle the design complexity of the rotor structure. There are different advantages for introducing permanent magnets into the rotor flux barriers, some of which are to saturate the rotor iron ribs, to increase the motor torque density and to improve the power factor. Moreover, the d-axis and q-axis inductances are of great importance to simultaneously achieve maximum developed torque and low torque ripple. Therefore, sensitivity analysis of the rotor geometry of an 8-pole external-rotor permanent magnet assisted synchronous reluctance motor is performed. Several magnetically accurate finite element analyses (FEA) are conducted to characterize the electromagnetic performance of the motor. The analyses validate torque and power factor equations for the proposed external-rotor motor. Based upon the obtained results and due to an additional term, permanent magnet torque, added to the reluctance torque, the electromagnetic torque of the PMaSynRM increases.
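The claim that a permanent magnet term adds to the reluctance torque can be illustrated with the textbook dq-frame torque expression for a PM-assisted SynRM. The machine parameters below are hypothetical placeholders, not the paper's FEA-validated values:

```python
def pma_synrm_torque(p, lam_pm, Ld, Lq, id_, iq):
    """Textbook dq-frame electromagnetic torque (Nm):
    T = (3/2) * p * (lam_pm * iq + (Ld - Lq) * id * iq),
    i.e. a PM term plus the reluctance (saliency) term."""
    return 1.5 * p * (lam_pm * iq + (Ld - Lq) * id_ * iq)

# Hypothetical parameters: 4 pole pairs (an 8-pole machine), inductances in H,
# currents in A, PM flux linkage in Wb.
t_rel = pma_synrm_torque(4, 0.0, 0.030, 0.008, 10.0, 10.0)   # pure SynRM
t_pma = pma_synrm_torque(4, 0.05, 0.030, 0.008, 10.0, 10.0)  # PM-assisted
print(t_rel, t_pma)  # the PM term increases the developed torque
```

The same expression also shows why the d- and q-axis inductances matter: the reluctance term scales with (Ld − Lq), so the flux-barrier geometry that sets the saliency directly sets the torque.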

Keywords: permanent magnet assisted synchronous reluctance motor, flux barrier, flux carrier, electromagnetic torque, power factor

Procedia PDF Downloads 315
6462 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Containing in Cooking Fumes

Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai

Abstract:

Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, and some of these have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers’ lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessment for cooking workers. An approach is proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train directly under the exhaust fan under the total-enclosure condition and the normal operating condition, respectively. Subtracting the concentration collected under the normal operating condition (representing the hood-collected concentration) from that under the total-enclosure condition (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes, but fitted with a filter pre-coated with DNPH, followed by a 2,4-DNPH cartridge, for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively.
The obtained generation rates of both PAHs and aldehydes were applied to the near-field/ far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration), and helpers (the estimated far-field concentration). For validating purposes, both PAHs and aldehydes samplings were conducted simultaneously using the same sampling train at both near-field and far-field sites of the testing chamber. The sampling results, together with the use of the mixed-effect model, were used to calibrate the estimated near-field/ far-field exposures. In the present study, the obtained emission rates were further converted to emission factor of both PAHs and aldehydes according to the amount of food oil consumed. Applying the long-term food oil consumption records, the emission rates for both PAHs and aldehydes were determined, and the long-term exposure databanks for cooks (the estimated near-field concentration), and helpers (the estimated far-field concentration) were then determined. Results show that the proposed approach was adequate to determine the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/ far-field exposures, though were significantly different from that obtained from the field, can be calibrated using the mixed effect model. Finally, the established long-term data bank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
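The near-field/far-field model used above to estimate cook and helper exposures is commonly formulated as a steady-state two-zone mass balance. A minimal sketch; the generation rate and airflow values are hypothetical, not the study's measurements:

```python
def nf_ff_steady_state(G, Q, beta):
    """Steady-state near-field / far-field (two-zone) concentrations.

    G:    contaminant generation rate (mg/min)
    Q:    room supply/exhaust flow (m^3/min)
    beta: near-field <-> far-field inter-zone airflow (m^3/min)

    Standard two-zone result: C_FF = G/Q and C_NF = G*(1/Q + 1/beta).
    """
    c_ff = G / Q
    c_nf = G * (1.0 / Q + 1.0 / beta)
    return c_nf, c_ff

# Hypothetical values: 0.5 mg/min PAH generation, 20 m^3/min ventilation,
# 5 m^3/min inter-zone flow around the cook's position at the stove.
c_nf, c_ff = nf_ff_steady_state(0.5, 20.0, 5.0)
print(f"cook (NF): {c_nf:.3f} mg/m^3, helper (FF): {c_ff:.3f} mg/m^3")
```

The near-field (cook) concentration always exceeds the far-field (helper) concentration by the term G/beta, which is why the study calibrates the two estimates separately against chamber measurements.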

Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)

Procedia PDF Downloads 123
6461 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films

Authors: Julian Iliev

Abstract:

The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and that its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault’s concept of docile bodies, linking the fictional and contemporary worlds. This paradigm is scrutinized through the surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies, and the comparison illustrates the effects of body manipulation portrayed in the movies and their relevance to contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J. Bizzocchi, J. Tanenbaum, P. Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which a media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights the scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and the fields compares directly with Foucault’s concept of docile bodies. By utilizing Foucault’s study as a lens, the research will unearth hidden components of and insights into the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) a command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod, aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum. Thus, the body is transformed into the ultimate object of control: minimum movement correlates with maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables.
This ultimate supervision of body activity reduces the body’s purpose to mere functioning. If a body does not function as an energy source, then it’s unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films. Even though their system might lack sophistication, it makes up for it in the elevation of functionality. Further, surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy to the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison combined with surveillance literature yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the consistent data generation in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films with contemporary surveillance practices.

Keywords: docile bodies, film trilogies, Matrix movies, Michel Foucault, privacy loss, surveillance

Procedia PDF Downloads 75
6460 Adhesion of Sputtered Copper Thin Films Deposited on Flexible Substrates

Authors: Rwei-Ching Chang, Bo-Yu Su

Abstract:

Adhesion of copper thin films deposited on a polyethylene terephthalate substrate by direct current sputtering with different sputtering parameters is discussed in this work. The effects of plasma treatment for 0, 5, and 10 minutes on the thin film properties are investigated first. Argon flow rates of 40, 50, and 60 standard cubic centimeters per minute (sccm), deposition powers of 30, 40, and 50 W, and film thicknesses of 100, 200, and 300 nm are also discussed. A 3-dimensional surface profilometer, a micro-scratch machine, and an optical microscope are used to characterize the thin film properties. The results show that the duration of plasma treatment of the polyethylene terephthalate surface affects the roughness and critical load of the films. The critical load increases with plasma treatment time: when the treatment time was extended from 5 to 10 minutes, the adhesion increased from 8.20 mN to 13.67 mN. When the argon flow rate is decreased from 60 sccm to 40 sccm, the adhesion increases from 8.27 mN to 13.67 mN. Adhesion also improves at higher power, increasing from 13.67 mN to 25.07 mN as the power rises from 30 W to 50 W, and from 13.67 mN to 21.41 mN as the film thickness increases from 100 nm to 300 nm. Comparing all the deposition parameters indicates that changes in power and thickness yield the greatest improvement in film adhesion.

Keywords: flexible substrate, sputtering, adhesion, copper thin film

Procedia PDF Downloads 118
6459 An Ultra-Low Output Impedance Power Amplifier for Tx Array in 7-Tesla Magnetic Resonance Imaging

Authors: Ashraf Abuelhaija, Klaus Solbach

Abstract:

In ultra-high-field MRI scanners (3 T and higher), parallel RF transmission techniques using multiple RF chains with multiple transmit elements are a promising approach to overcoming the high-field MRI challenges of inhomogeneity in the RF magnetic field and SAR. However, mutual coupling between the transmit array elements disturbs the desirable independent control of the RF waveforms for each element. This contribution demonstrates an 18 dB improvement in decoupling (isolation) performance due to the very low output impedance of our 1 kW power amplifier.

Keywords: EM coupling, inter-element isolation, magnetic resonance imaging (MRI), parallel transmit

Procedia PDF Downloads 480
6458 Improved Multilevel Inverter with Hybrid Power Selector and Solar Panel Cleaner in a Solar System

Authors: S. Oladoyinbo, A. A. Tijani

Abstract:

Multilevel inverters (MLI) are used in high-power applications. There are three main types of MLI: diode-clamped, flying-capacitor, and cascaded. A cascaded MLI requires the fewest components to achieve a given number of voltage levels, while the flying-capacitor MLI has the minimum harmonic distortion. By combining the advantages of the cascaded H-bridge and flying-capacitor topologies, however, an improved MLI can be achieved with fewer components and better performance. In this paper, an improved MLI is presented by asymmetrically integrating a flying capacitor into a cascaded H-bridge MLI and by adding an auxiliary transformer to the main transformer, decreasing the total harmonic distortion (THD) while increasing the number of output voltage levels. Furthermore, the system incorporates a hybrid time- and climate-based solar panel cleaner and power selector, which intelligently manages the input of the MLI and cleans the solar panel weekly, ensuring that environmental effects on the panel are reduced to a minimum.
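The benefit of asymmetric DC sources in a cascaded MLI can be seen by counting the attainable output levels: each H-bridge cell contributes −V, 0, or +V of its own source voltage. A small sketch with hypothetical source voltages (not the paper's design values):

```python
from itertools import product

def output_levels(dc_sources):
    """Distinct output voltage levels of a cascaded H-bridge chain, where each
    cell switches its DC source to -V, 0, or +V."""
    return sorted({sum(s * v for s, v in zip(states, dc_sources))
                   for states in product((-1, 0, 1), repeat=len(dc_sources))})

# Symmetric two-cell cascade vs a hypothetical binary (1:2) asymmetric cascade.
print(len(output_levels([100, 100])))  # 5 levels
print(len(output_levels([100, 200])))  # 7 levels from the same cell count
```

Asymmetric source ratios thus raise the level count, and hence lower THD, without adding cells, which is the motivation for the asymmetric integration described above.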

Keywords: multilevel inverter, total harmonics distortion, cascaded h-bridge inverter, flying capacitor

Procedia PDF Downloads 348
6457 Frequency- and Content-Based Tag Cloud Font Distribution Algorithm

Authors: Ágnes Bogárdi-Mészöly, Takeshi Hashimoto, Shohei Yokoyama, Hiroshi Ishikawa

Abstract:

The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them. Tag clouds provide a rough impression of the relative importance of each tag within the overall cloud in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. A font distribution algorithm has been proposed to calculate a novel metric based on frequency and content, and to assign classes from this metric based on a power-law distribution and percentages. The suggested algorithm has been validated and verified on the tag cloud of a real-world thesis portal.
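Frequency-based font sizing, the baseline that the proposed frequency-and-content metric extends, is often implemented by log-scaling tag counts into a fixed number of font classes. A simplified sketch; the tag counts and the class mapping are illustrative, not the paper's metric:

```python
import math

def font_classes(tag_counts, n_classes=5):
    """Map each tag to a font class 1..n_classes using log-scaled frequency,
    a common simplification of frequency-based tag cloud sizing."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    span = (hi - lo) or 1.0                 # avoid 0-division if all counts equal
    return {tag: 1 + int((n_classes - 1) * (math.log(c) - lo) / span)
            for tag, c in tag_counts.items()}

# Hypothetical tag frequencies from a thesis portal.
counts = {"python": 120, "gan": 40, "zeolite": 8, "foucault": 2, "doa": 15}
print(font_classes(counts))  # most frequent tag gets the largest class
```

The paper's contribution replaces the raw count here with a combined frequency-and-content score and derives the class thresholds from a power-law fit rather than a uniform log scale.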

Keywords: tag cloud, font distribution algorithm, frequency-based, content-based, power law

Procedia PDF Downloads 487
6456 Turbulent Channel Flow Synthesis using Generative Adversarial Networks

Authors: John M. Lyne, K. Andrea Scott

Abstract:

In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require large numbers of grid nodes to appropriately resolve all scales of energy transfer. Due to the size of these databases, sharing these datasets amongst the academic community is a challenge. Recent work has investigated the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolution using a neural network. Recently, Generative Adversarial Networks (GANs) have grown in popularity with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples whose distribution is ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are in reasonable agreement with those of the DNS data, demonstrating that GANs can produce the intricate multi-scale phenomena of turbulence.

Keywords: computational fluid dynamics, channel flow, turbulence, generative adversarial network

Procedia PDF Downloads 188
6455 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart-city efficiency, energy management, human and equipment resource management, and quality of life, and allow better utilization of resources for customers. One difficulty on this path is the use of, interfacing with, and linking between software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management, and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to demonstrate a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological problems. Energy management is one of the most important matters within smart houses in smart cities and communities, because of the sensitivity of energy systems, the reduction of energy wastage, and the maximization of required-energy utilization. In particular, the energy consumption of smart houses is considerable in the economic balance and energy management of a smart city, as it enables significant energy savings and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters, and other major elements, interfacing between software, hardware devices, and IT technologies.
A second aim is to enhance energy management through energy savings within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for customers within the smart city, combined with control over energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart-city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence on the model. The results of this model can be used for comparison and benchmarking against a traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, given the expense and expected shortage of natural resources in the near future, the limited research in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as key indicators to select the most effective variables or devices during the construction and design phases.
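Comparing an efficiency variable between smart-city and traditional-housing samples, as described above, can be done with a two-sample test such as Welch's t. A sketch with hypothetical daily energy-use data, not the study's observations:

```python
import math

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent samples
    with possibly unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Hypothetical daily energy use (kWh): smart houses vs traditional housing.
smart = [8.1, 7.9, 8.4, 7.6, 8.0, 7.8]
trad = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4]
t, df = welch_t(smart, trad)
print(f"t = {t:.2f}, df = {df:.1f}")  # strongly negative -> lower smart-house use
```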

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 99
6454 Synchrotron Radiation and Inverse Compton Scattering in Astrophysical Plasma

Authors: S. S. Sathiesh

Abstract:

The aim of this project is to study the synchrotron and inverse Compton scattering radiation mechanisms. Theoretically, we discussed the spectral energy distribution for both. A program was written in Fortran 90 to plot the power-law spectrum of synchrotron radiation. The importance of the power-law spectrum was discussed, and it was studied in order to infer physical parameters from model fitting. We also discussed how to infer the physical parameters from the theoretically drawn graph: we have seen how one can infer B (the magnetic field of the source), γmin, γmax, and the spectral indices (p1, p2) while fitting the curve to the observed data.
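The project's plotting was done in Fortran 90; as a Python sketch, the broken power-law parameterisation commonly used for synchrotron spectral energy distributions shows the curve being fitted. The break frequency, indices p1 and p2, and normalisation below are hypothetical:

```python
import numpy as np

def broken_power_law(nu, nu_break, p1, p2, norm=1.0):
    """Broken power-law flux density F(nu): slope -p1 below the break and -p2
    above it, continuous at nu_break (a common synchrotron SED form)."""
    nu = np.asarray(nu, dtype=float)
    below = norm * (nu / nu_break) ** (-p1)
    above = norm * (nu / nu_break) ** (-p2)
    return np.where(nu < nu_break, below, above)

# Hypothetical spectrum: break at 1e12 Hz, indices p1 = 0.5, p2 = 1.5.
nu = np.logspace(8, 18, 6)                 # Hz
flux = broken_power_law(nu, 1e12, 0.5, 1.5)
print(np.log10(flux))                      # steeper decline above the break
```

Fitting this form to observed data yields p1, p2, and the break frequency, from which B, γmin, and γmax can be inferred via the synchrotron relations the abstract refers to.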

Keywords: blazars/quasars, beaming, synchrotron radiation, Synchrotron Self Compton, inverse Compton scattering, mrk421

Procedia PDF Downloads 404
6453 Dual-Channel Multi-Band Spectral Subtraction Algorithm Dedicated to a Bilateral Cochlear Implant

Authors: Fathi Kallel, Ahmed Ben Hamida, Christian Berger-Vachon

Abstract:

In this paper, a speech enhancement algorithm based on the Multi-Band Spectral Subtraction (MBSS) principle is evaluated for Bilateral Cochlear Implant (BCI) users. Specifically, a dual-channel noise power spectral estimation algorithm using the Power Spectral Densities (PSD) and Cross Power Spectral Densities (CPSD) of the observed signals is studied. The enhanced speech signal is obtained using the Dual-Channel Multi-Band Spectral Subtraction (DC-MBSS) algorithm. For performance evaluation, an objective speech assessment test relying on the Perceptual Evaluation of Speech Quality (PESQ) score is performed to determine the optimal number of frequency bands needed in the DC-MBSS algorithm. To evaluate speech intelligibility, subjective listening tests were conducted with three deafened BCI patients. Experimental results obtained using the French Lafon database corrupted by additive babble noise at different Signal-to-Noise Ratios (SNR) showed that the DC-MBSS algorithm improves speech understanding for single and multiple interfering noise sources.
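The MBSS principle subtracts an SNR-dependent over-estimate of the noise PSD in each frequency band, with a spectral floor to limit musical noise. A simplified single-frame sketch; the band edges, over-subtraction rule, and signals below are illustrative, not the DC-MBSS implementation:

```python
import numpy as np

def mbss_subtract(noisy_psd, noise_psd, band_edges, beta=0.01):
    """Per-band spectral subtraction with an SNR-dependent over-subtraction
    factor and a spectral floor (simplified single-frame sketch)."""
    clean = np.empty_like(noisy_psd)
    for lo, hi in band_edges:
        band = slice(lo, hi)
        snr_db = 10 * np.log10(np.sum(noisy_psd[band]) / np.sum(noise_psd[band]))
        over = max(4.0 - 0.15 * snr_db, 1.0)   # hypothetical over-subtraction rule
        sub = noisy_psd[band] - over * noise_psd[band]
        clean[band] = np.maximum(sub, beta * noisy_psd[band])  # musical-noise floor
    return clean

rng = np.random.default_rng(0)
noise_psd = np.full(256, 0.1)             # flat babble-like noise estimate
noisy_psd = noise_psd + rng.random(256)   # crude stand-in for noisy speech
bands = [(0, 64), (64, 128), (128, 256)]  # illustrative band edges
enhanced = mbss_subtract(noisy_psd, noise_psd, bands)
print(enhanced.min(), enhanced.max())
```

In the dual-channel case described above, the per-band noise PSD estimate would come from the PSDs and CPSD of the two observed channels rather than being assumed known.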

Keywords: speech enhancement, spectral subtraction, noise estimation, cochlear implant

Procedia PDF Downloads 535
6452 Moisture Impact on the Utilization of Recycled Concrete Fine Aggregate to Produce Mortar

Authors: Rahimullah Habibzai

Abstract:

One way to achieve a sustainable concrete industry, reduce the exploitation of natural aggregate resources, and mitigate the environmental burden of waste concrete is to use recycled concrete aggregate. The utilization of low-quality fine aggregate, including recycled concrete sand produced by crushing waste concrete, has recently become a popular and challenging research topic. This study provides a scientific basis for promoting the application of concrete waste as fine aggregate in producing concrete through a comprehensive laboratory program. The mechanical properties of mortar made from recycled concrete fine aggregate (RCFA) produced by pulse-power crushing of concrete waste are satisfactory, making it suitable for use in the construction industry. Better treatment of RCFA particles and enhancement of their quality would make it possible to utilize them in producing structural concrete. Pulse power discharge technology is proposed in this research to produce RCFA; compared to other recycling methods, it is a more effective and promising technique for generating medium- to high-quality recycled concrete fine aggregate with a reduced amount of powder, mitigating the environmental burden and saving more space.

Keywords: construction and demolition waste, recycled concrete fine aggregate, pulse power discharge

Procedia PDF Downloads 132
6451 An Eigen-Approach for Estimating the Direction-of-Arrival of an Unknown Number of Signals

Authors: Dia I. Abu-Al-Nadi, M. J. Mismar, T. H. Ismail

Abstract:

A technique for estimating the direction-of-arrival (DOA) of unknown number of source signals is presented using the eigen-approach. The eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix yields the minimum output power of the array. Also, the array polynomial with this eigenvector possesses roots on the unit circle. Therefore, the pseudo-spectrum is found by perturbing the phases of the roots one by one and calculating the corresponding array output power. The results indicate that the DOAs and the number of source signals are estimated accurately in the presence of a wide range of input noise levels.
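A stripped-down numerical sketch of the eigen-approach is shown below; for brevity it evaluates a Pisarenko-style pseudo-spectrum by scanning candidate angles directly rather than by perturbing the phases of the polynomial roots as the paper does, and the single simulated source is illustrative only:

```python
import numpy as np

def pseudo_spectrum(snapshots, scan_deg, d=0.5):
    """Eigen-approach pseudo-spectrum (Pisarenko-style sketch).

    The eigenvector of the autocorrelation matrix with the minimum
    eigenvalue minimises the array output power, so steering vectors of
    true sources are (nearly) orthogonal to it; 1/|a(theta)^H v|^2
    therefore peaks at the directions of arrival.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vec = np.linalg.eigh(R)          # eigenvalues in ascending order
    v_min = vec[:, 0]                   # minimum-eigenvalue eigenvector
    m = np.arange(snapshots.shape[0])
    spec = []
    for th in scan_deg:
        a = np.exp(2j * np.pi * d * m * np.sin(np.radians(th)))
        spec.append(1.0 / (abs(a.conj() @ v_min) ** 2 + 1e-12))
    return np.array(spec)

# Simulated 8-element half-wavelength array with one source at +20 degrees.
rng = np.random.default_rng(0)
idx = np.arange(8)
a_true = np.exp(2j * np.pi * 0.5 * idx * np.sin(np.radians(20.0)))
sig = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.001 * (rng.standard_normal((8, 200))
                 + 1j * rng.standard_normal((8, 200)))
x = np.outer(a_true, sig) + noise
angles = np.arange(-90, 91)
spec = pseudo_spectrum(x, angles)
```

The pseudo-spectrum exhibits a sharp peak at the true direction of arrival; the paper's root-phase perturbation refines exactly this idea on the array polynomial.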

Keywords: array signal processing, direction-of-arrival, antenna arrays, eigenvalues, eigenvectors, Lagrange multiplier

Procedia PDF Downloads 322
6450 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data in the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 293
6449 Is there Anything Useful in That? High Value Product Extraction from Artemisia annua L. in the Spent Leaf and Waste Streams

Authors: Anike Akinrinlade

Abstract:

The world population is estimated to grow from 7.1 billion to 9.22 billion by 2075, an increase of roughly 30% over the current global population. Much of this demographic change will take place in the less developed regions. There are currently 54 countries classified as having 'low-middle income' economies that need new ways to generate valuable products from the resources available to them. Artemisia annua L. is widely used for the extraction of the phytochemical artemisinin, which accounts for around 0.01 to 1.4% of the dry weight of the plant. Artemisinin is used in the treatment of malaria, a disease rampant in sub-Saharan Africa and many other countries. Once artemisinin has been extracted, the spent leaf and waste streams are disposed of as waste. A feasibility study was carried out looking at increasing the biomass value of A. annua by designing a biorefinery in which the spent leaf and waste streams are utilized for high-value product generation. Quercetin, ferulic acid, dihydroartemisinic acid, artemisinic acid, and artemisinin were screened for in the waste stream samples and the spent leaf. The analytical results showed that artemisinin, artemisinic acid, and dihydroartemisinic acid were present in the waste extracts, as well as camphor and arteannuin B. Ongoing efforts are looking at using more industrially relevant solvents to extract the phytochemicals from the waste fractions and at investigating how microwave pyrolysis of spent leaf can be utilized to generate bio-products.

Keywords: high value product generation, bioinformatics, biomedicine, waste streams, spent leaf

Procedia PDF Downloads 331
6448 Principal Component Analysis Applied to Electric Power Systems: A Practical Guide for Algorithms

Authors: John Morales, Eduardo Orduña

Abstract:

Principal Component Analysis (PCA) theory is currently used to develop algorithms for Electric Power Systems (EPS). In this context, this paper presents a practical tutorial on the technique, detailing its concept and the on-line and off-line mathematical foundations that are necessary and desirable in EPS algorithms. Features of the eigenvectors that are very useful for real-time processing are explained, showing how it is possible to select these parameters through direct optimization. To illustrate the application of PCA to off-line and on-line signals, a step-by-step example using Matlab commands is presented. Finally, a list of different approaches using PCA is given, along with some works that could be analyzed using this tutorial.
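Although the tutorial itself uses Matlab, the core off-line PCA computation it walks through can be sketched in Python; the toy data below are illustrative only:

```python
import numpy as np

def pca(X, k):
    """Off-line PCA via eigendecomposition of the sample covariance.

    X : (n_samples, n_features) data matrix
    k : number of principal components to keep
    Returns the top-k eigenvectors (columns of W) and the scores Z.
    """
    Xc = X - X.mean(axis=0)                 # centre each feature
    C = Xc.T @ Xc / (X.shape[0] - 1)        # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(C)      # eigenvalues in ascending order
    W = eigvec[:, ::-1][:, :k]              # keep the k largest components
    return W, Xc @ W

# Toy data lying on a line: the first component captures all the spread.
X = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
W, Z = pca(X, k=2)
```

In an EPS application, the rows of X would be time samples of measured signals (e.g. fault waveforms), and on-line processing would project each new sample onto the pre-computed eigenvectors W.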

Keywords: practical guide, on-line, off-line, algorithms, faults

Procedia PDF Downloads 546
6447 Synthesis of Pyrimidine-Based Polymers Consist of 2-{4-[4,6-Bis-(4-Hexyl-Thiophen-2-yl)-Pyrimidin-2-yl]-Phenyl}-Thiazolo[5,4-B]Pyridine with Deep HOMO Level for Photovoltaics

Authors: Hyehyeon Lee, Jiwon Yu, Juwon Kim, Raquel Kristina Leoni Tumiar, Taewon Kim, Juae Kim, Hongsuk Suh

Abstract:

Photovoltaics, which offer many advantages in cost, easy processing, and light weight, have attracted attention. We synthesized pyrimidine-based conjugated polymers containing 2-{4-[4,6-bis-(4-hexyl-thiophen-2-yl)-pyrimidin-2-yl]-phenyl}-thiazolo[5,4-b]pyridine (pPTP), a powerful electron-withdrawing unit, and introduced them into polymer solar cells (PSCs). By Stille polymerization, we prepared the conjugated polymers pPTPBDT-12, pPTPBDT-EH, pPTPBDTT-EH, and pPTPTTI. The HOMO (Highest Occupied Molecular Orbital) energy levels of the four polymers were at -5.61 ~ -5.89 eV, and their LUMO (Lowest Unoccupied Molecular Orbital) energy levels were at -3.95 ~ -4.09 eV. The device based on pPTPBDT-12 and PC71BM (1:2) showed a V_oc of 0.67 V, a J_sc of 1.33 mA/cm², and a fill factor (FF) of 0.25, giving a power conversion efficiency (PCE) of 0.23%. The device based on pPTPBDT-EH and PC71BM (1:2) showed a V_oc of 0.72 V, a J_sc of 2.56 mA/cm², and a FF of 0.30, giving a PCE of 0.56%. The device based on pPTPBDTT-EH and PC71BM (1:2) showed a V_oc of 0.72 V, a J_sc of 3.61 mA/cm², and a FF of 0.29, giving a PCE of 0.74%. The device based on pPTPTTI and PC71BM (1:2) showed a V_oc of 0.83 V, a J_sc of 4.41 mA/cm², and a FF of 0.31, giving a PCE of 1.13%. Among these polymers, pPTPTTI gave the best efficiency. Their optical properties were measured, and the results show that pyrimidine-based polymers, especially pPTPTTI, hold great promise as the donor of the active layer.
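The efficiencies quoted above follow from the standard relation PCE = V_oc × J_sc × FF / P_in; a quick sketch (assuming the standard AM1.5G input power of 100 mW/cm², which the abstract does not state explicitly) reproduces the reported 1.13% for pPTPTTI:

```python
def pce(v_oc, j_sc, ff, p_in=100.0):
    """Power conversion efficiency in percent.

    v_oc in V, j_sc in mA/cm^2, ff dimensionless, p_in in mW/cm^2
    (100 mW/cm^2 is the standard AM1.5G illumination).
    """
    return 100.0 * v_oc * j_sc * ff / p_in

# Reported pPTPTTI device values from the abstract.
eff = pce(0.83, 4.41, 0.31)
```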

Keywords: polymer solar cells, pyrimidine-based polymers, photovoltaics, conjugated polymer

Procedia PDF Downloads 182
6446 Decommissioning of Nuclear Power Plants: The Current Position and Requirements

Authors: A. Stifi, S. Gentes

Abstract:

Undoubtedly, from a construction perspective, the use of explosives can remove a large facility such as a 40-storey building, which took almost 3 to 4 years to construct, in a few minutes. Usually the deconstruction or decommissioning phase, the last phase of the life cycle of any facility, is considered to be the shortest. However, this proves to be wrong in the case of nuclear power plants. Statistics show that in the last 30 years the construction of a nuclear power plant took an average of 6 years, whereas the decommissioning of such plants is estimated to take a decade or more. This paper is about the decommissioning phase of nuclear power plants, which needs more attention and encouragement from research institutes as well as the nuclear industry. Currently, there are 437 nuclear power reactors in operation and 70 reactors under construction. Around 139 nuclear facilities have already been shut down and are in different decommissioning stages, and approximately 347 nuclear reactors will enter the decommissioning phase in the next 20 years (assuming an operating life of 40 years per reactor). This fact raises two questions: (1) How far are the nuclear and construction industries ready to face the challenges of decommissioning projects? (2) What is required for safe and reliable decommissioning project delivery? Decommissioning projects for nuclear facilities across the globe suffer severe time and budget overruns. The decommissioning processes are largely executed by manual labour, while regulations are observed to change accordingly. In terms of research and development, some research projects and activities are being carried out in this area, but much more is required.
The near future of decommissioning can be improved through a sustainable development strategy in which all stakeholders agree to implement innovative technologies, especially for the dismantling and decontamination processes, and to deliver reliable and safe decommissioning. The scope for technology transfer from other industries should be explored; for example, remotely operated robotic technologies used in the automobile and production industries to reduce time and improve efficiency and safety could be tried here. However, although innovative technologies are in high demand, they alone are not enough: the implementation of creative and innovative management methodologies should also be investigated and applied. Lean Management, with its main concept of 'elimination of waste within a process', is a suitable example here. Thus, cooperation between international organisations and related industries, together with knowledge-sharing, may serve as a key factor for successful decommissioning projects.

Keywords: decommissioning of nuclear facilities, innovative technology, innovative management, sustainable development

Procedia PDF Downloads 457
6445 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing

Authors: Ahmed E. Hodaib, Muhammed A. Hashem

Abstract:

High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate friction with the ground and minimize the aerodynamic drag acting on the vehicle. Owing to the complexity and high power requirements of electromagnetic levitation, we instead make use of the air in front of the capsule, which produces the majority of the drag: it is compressed in two stages, and a proportion of it is injected through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is drawn in by the compressor, where it is diffused to decrease the Mach number (to around 0.8) to be suitable for the compressor inlet. The air is then compressed and intercooled, then split: one fraction is expanded through a tail nozzle to contribute to generating thrust; the other is compressed again. Bleed air from the two compressors is used to maintain a constant pressure in an air tank, which supplies the air for levitation. Dividing the total mass flow rate increases the achievable speed (Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are to be used to obtain the optimum values of all the design variables for given design inputs.
Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are all considered. The variations of these variables are studied against changes in capsule velocity and air pressure.

Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing

Procedia PDF Downloads 316
6444 Oscillating Water Column Wave Energy Converter with Deep Water Reactance

Authors: William C. Alexander

Abstract:

The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the 'stabilizer'; a hollow cylinder at the top of the device, with said cylinder having a bottom open to the sea and a sealed top save for an orifice which leads to an air turbine; and a long, narrow rod connecting said stabilizer with said cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of floatation in the cylinder keep the device upright in the sea. The floatation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater moves up and down within the cylinder, producing the 'oscillating water column'. This causes the air pressure within the cylinder to alternate between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through the orifice in the top cover. An air turbine situated within or immediately adjacent to said orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. The oscillating air pressure produces large up and down forces on the cylinder. These large forces are opposed, through the rod, by the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep water reactance. The cylinder and stabilizer form a spring-mass system with a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size of the stabilizer (and the water mass within it) determines said resonant frequency.
Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a large span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on the said rod by having the stabilizer shaped like a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions. The cylinder may be 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12 second period, with a relatively flat power curve between 5 and 16 second wave periods, as will be suitable for an open-ocean location. This is nominally 10 times higher power than similar-sized WEC spar buoys as reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters.
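Under the simplifying assumptions that the cylinder waterplane provides the hydrostatic stiffness and that the seawater retained in the stabilizer dominates the oscillating mass (structural mass and hydrodynamic added mass neglected), the heave resonance for the quoted dimensions can be estimated as:

```python
import math

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def heave_period(cyl_diameter, stab_diameter):
    """Natural heave period of the cylinder-stabilizer system.

    The cylinder waterplane supplies the hydrostatic 'spring'
    k = rho*g*A; the seawater in the spherical stabilizer supplies
    the oscillating mass (structural and added mass ignored).
    """
    area = math.pi * (cyl_diameter / 2.0) ** 2          # waterplane area
    k = RHO * G * area                                  # stiffness, N/m
    mass = RHO * (4.0 / 3.0) * math.pi * (stab_diameter / 2.0) ** 3
    return 2.0 * math.pi * math.sqrt(mass / k)

# Dimensions quoted in the abstract: 16 m cylinder, 25 m stabilizer.
T = heave_period(16.0, 25.0)
```

This lands at roughly 12.8 s, consistent with the 12-second design wave period quoted in the abstract.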

Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer

Procedia PDF Downloads 94
6443 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope along with predictions of the maximum force the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg.
Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings in the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given only a 5% difference between F₀ and the obtained IsoMax values, while a linear model is best suited to predicting V₀.
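The linear force-velocity model underlying this kind of profiling, F(v) = F₀ − (F₀/V₀)·v with Pmax = F₀V₀/4, can be fitted with an ordinary least-squares sketch; the load data below are invented for illustration, not taken from the study:

```python
import numpy as np

def fv_profile(force, velocity):
    """Fit the linear model F(v) = F0 - (F0/V0)*v by least squares.

    Returns F0 (force at zero velocity), V0 (velocity at zero force),
    the slope Sfv, and the theoretical maximal power Pmax = F0*V0/4.
    """
    slope, intercept = np.polyfit(velocity, force, 1)
    f0 = intercept
    v0 = -intercept / slope
    return f0, v0, slope, f0 * v0 / 4.0

# Invented loads: force (N) at five mean concentric velocities (m/s).
v = np.array([0.3, 0.6, 0.9, 1.2, 1.5])
f = np.array([2700.0, 2400.0, 2100.0, 1800.0, 1500.0])
F0, V0, Sfv, Pmax = fv_profile(f, v)
```

Because F₀ and V₀ are extrapolations beyond the measured loads, their reliability depends strongly on how close the measured velocities get to the intercepts, which is exactly the issue the study addresses.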

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 38
6442 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as a primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift with rising temperatures have significantly expanded the geographic range of these species. These versatile vectors also host Chikungunya, Zika, and yellow fever viruses. The biggest challenge endemic countries now face is the upsurge in co-infections reported with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate the disease burden, knowledge is needed on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the species, A. albopictus being a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue (1-4) serotypes and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titer observed at day 16 post-infection. When exposed to dual infections, A. aegypti supported all combinations of viruses, whereas A. albopictus sustained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to controls (P_ANOVA < 0.001).
In dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (P_Bonferroni < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti becomes infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it can be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 184
6441 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm which combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that jointly minimizes the number of sensors and the error on conductor temperature, determining the optimal sensor placement simultaneously. The data of 345 kV transmission lines and hourly weather data, from the Taiwan Power Company and the Central Weather Bureau (CWB) respectively, are used by the proposed method. The simulated results indicate that the number of sensors can be reduced using the proposed optimal placement method while an acceptable error on conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
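The PSO half of the hybrid algorithm can be illustrated with a minimal global-best implementation; the quadratic stand-in objective below replaces the study's temperature-error objective, and all coefficients are conventional defaults rather than the authors' settings:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=1):
    """Minimal global-best particle swarm optimisation (minimisation).

    Velocity update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + vel
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Quadratic stand-in objective with a known optimum at (1, 2).
best, best_val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2, dim=2)
```

In the sensor placement setting, each particle would instead encode a candidate set of sensor locations, and the objective would combine the sensor count with the POD-reconstructed conductor temperature error.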

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 419
6440 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is significantly increasing, and major investments are being channelled into the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on the prediction of wind speed, which is by nature highly stochastic. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark, provided by the Technical University of Denmark, represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE). The results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals for wind speed forecasting models.
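A single-fidelity, numpy-only sketch of the GPR step can convey the basic mechanics (the study's deep multi-fidelity model and EWT denoising are well beyond a few lines); here a sinusoid stands in for a denoised wind-speed signal, and all kernel hyperparameters are illustrative:

```python
import numpy as np

def gpr_predict(x_train, y_train, x_test, length=1.0, sigma_f=1.0, noise=1e-3):
    """Gaussian process regression with an RBF kernel (single fidelity).

    Returns the posterior mean and standard deviation at x_test.
    """
    def kern(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = kern(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = kern(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = kern(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Stand-in for a denoised wind-speed series: interpolate at t = 5.0
# and extrapolate half a unit beyond the data at t = 10.5.
t = np.linspace(0.0, 10.0, 50)
u = np.sin(t)
mean, std = gpr_predict(t, u, np.array([5.0, 10.5]))
```

The posterior standard deviation grows outside the training window, which is exactly the uncertainty behaviour that makes GPR attractive for temporal extrapolation of wind speed.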

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 126
6439 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several Arabian Gulf countries. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising, zero-emission (green) solution to this problem. The main objective of this work is to conduct a feasibility study of small autonomous water desalination units powered by photovoltaic modules, a green renewable energy resource, to be employed in isolated zones as a source of drinking water for scattered societies where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study, where the Renewable Energy Center exists and is equipped with sensors to assess the availability of solar energy throughout the year. The study included two types of feed water: brackish well water and the seawater of coastal regions. In the case of well water, two versions of the desalination unit are involved in the study: the first version operates during the day only, while the second also operates at night, which requires an energy storage system such as batteries to provide the necessary electric power after dark. According to the results of the feasibility study, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water, while in the case of seawater the capital costs are extremely high and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided.
In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for the batteries and extra running cost for their replacement, which makes its unit water price uncompetitive not only with the day-only unit but also with conventional units powered by diesel generators (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price per cubic meter of water produced by the day-night unit matches that of the day-only unit provided that the day-night unit operates for a 50% longer period.
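Per-cubic-meter cost comparisons of this kind are typically computed as a levelized cost of water using a capital recovery factor; the figures in this sketch are invented placeholders, not the study's data:

```python
def levelized_cost_of_water(capex, opex_per_year, rate, years, m3_per_day):
    """Levelized cost of water (currency units per m^3).

    The capital recovery factor CRF = r(1+r)^n / ((1+r)^n - 1)
    annualises the capital cost over the unit's lifetime.
    """
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = capex * crf + opex_per_year
    return annual_cost / (m3_per_day * 365.0)

# Invented brackish-water unit: $15,000 capex, $600/yr O&M,
# 5% discount rate, 20-year life, 5 m^3/day of product water.
cost = levelized_cost_of_water(15000.0, 600.0, 0.05, 20, 5.0)
```

Adding battery storage raises both `capex` and `opex_per_year` (periodic replacement), so the day-night unit only matches the day-only price if the extra cost is offset by proportionally higher daily production, as the abstract concludes.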

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 433