Search results for: parallel operation
552 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems
Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell
Abstract:
Today’s facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate and complete information to perform their day-to-day activities and to provide their senior management with accurate information for the decision-making process. Currently, various technology platforms, data repositories, or database systems such as Computer-Aided Facility Management (CAFM) are used for these purposes in different facilities. In most current practices, the data is extracted from paper construction documents and re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for use by owners and facility managers. Recently, software vendors developed add-in applications to generate the COBie spreadsheet automatically. However, most of these add-in applications are capable of generating only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data which cannot be generated by these COBie add-ins is essential for the facilities manager’s day-to-day activities, such as the job sheet, which includes preventive maintenance schedules. To facilitate a seamless data transfer between BIM models and facilities management systems, we developed a framework that enables automated data generation using the data extracted directly from BIM models to an external web database, and then enables different stakeholders to access the external web database to enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the required asset data for efficient facilities management operations. The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, the proposed framework supplements the existing body of knowledge in the facilities management domain by providing a novel framework that facilitates seamless data transfer between BIM models and facilities management systems.
Keywords: building information modeling, BIM, facilities management systems, interoperability, information management
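As a rough illustration of the data flow the framework describes (BIM model to external web database to COBie spreadsheet), the sketch below pushes asset records extracted from a BIM model into a web database over HTTP and later pulls them back into COBie-style rows. The endpoint URL, field names and the example records are hypothetical placeholders, not part of the authors' implementation.

```python
import csv
import requests  # assumed available; any HTTP client would do

DB_URL = "https://example.org/api/assets"  # hypothetical external web database endpoint

def push_assets(assets):
    """Upload asset records extracted from the BIM model to the external web database."""
    for asset in assets:
        requests.post(DB_URL, json=asset, timeout=10).raise_for_status()

def export_cobie(path):
    """Download enriched records and write a minimal COBie 'Component'-style sheet."""
    rows = requests.get(DB_URL, timeout=10).json()
    fields = ["Name", "TypeName", "Space", "SerialNumber", "InstallationDate"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for row in rows:
            writer.writerow({k: row.get(k, "") for k in fields})

# Usage with a hypothetical record exported from the BIM authoring tool:
push_assets([{"Name": "AHU-01", "TypeName": "Air Handling Unit", "Space": "Roof",
              "SerialNumber": "", "InstallationDate": ""}])
export_cobie("cobie_component.csv")
```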
Procedia PDF Downloads 116
551 Post-Combustion CO₂ Capture: From Membrane Synthesis to Module Intensification
Authors: Imran Khan Swati, Mohammad Younas
Abstract:
This work aims to explore the potential applications of polymeric hydrophobic membranes and green ionic liquids (ILs). Protic and aprotic ILs were synthesized in the lab, characterized, and tested for CO₂/N₂ and CO₂/CH₄ separation using hydrophobic polymeric membranes via supported ionic liquid membranes (SILMs). The ILs were verified by FTIR spectroscopy. The SILMs were stable at room temperature up to 0.5 MPa. Among all the ILs, [BSmim][tos] had the greatest solubility coefficient and permeability for CO₂. At 0.5 MPa, the IL [BSmim][tos] was found to have a selectivity of 56.2 and 47.5 for pure CO₂/N₂ and CO₂/CH₄, respectively. The ILs synthesized for this study are ranked as [BSmim][tos]>[BSmpy][tos]>[Bmim][Cl]>[Bpy][Cl] based on their SILM separation performance. Furthermore, the high selectivity values of [BSmim][tos] and [BSmpy][tos] support the use of ILs for CO₂ separation using SILMs. The study was extended to synthesize and test the ammonium-based ILs [2-HEA][f] and [2-HEA][Hs]. These ILs achieved 50 % lower selectivity for CO₂/N₂ compared to [BSmim][tos] and [BSmpy][tos]. Nevertheless, the CO₂ permeability achieved with [2-HEA][f] and [2-HEA][Hs] is more than 20 times higher than that of [BSmim][tos] and [BSmpy][tos]. Later, the CO₂/N₂ permeability and selectivity study was extended using a flat sheet membrane contactor with recirculated IL. The effects of contact angle, liquid entry pressure (LEP), initial CO₂ concentration, and type of solvent and membrane material on the CO₂ capture efficiency and membrane wetting in the post-combustion capture (PCC) process have been experimentally investigated and evaluated. Polytetrafluoroethylene (PTFE) has shown the most hydrophobic property with 6-170 loss in the contact angle. Furthermore, [Omim][BF4] and [Bmim][BF6] exhibited only 5-8 % loss in LEP using PTFE membrane support. A CO₂ capture efficiency of 80.8-99.8 % has been achieved in different combinations of ILs and membrane support, keeping all other variables constant. When the CO₂ concentration was increased from 15 to 45 vol. %, a nearly three-fold increase in the CO₂ mass transfer flux was observed. The combination of [Omim][BF4] and the PTFE membrane showed good long-term stability, with only a 20 % loss in CO₂ capture efficiency over 480 min of continuous operation. A 3-D simulation model for non-dispersive solvent absorption in membrane contactors provides insight into the optimum design of a separation system for a specific application, minimizing the overall cost and making the process environmentally friendly.
Keywords: post-combustion CO₂ capture, membrane synthesis, process development, permeability and selectivity, ionic liquids
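For readers unfamiliar with the figures of merit quoted above, the permeability and ideal selectivity of a supported ionic liquid membrane are conventionally defined as follows (standard gas-permeation relations; the symbols are generic, not the authors' notation):

```latex
P_i = \frac{J_i\,\ell}{\Delta p_i}, \qquad
\alpha_{\mathrm{CO_2/N_2}} = \frac{P_{\mathrm{CO_2}}}{P_{\mathrm{N_2}}},
```

where J_i is the steady-state flux of gas i through a membrane of thickness ℓ under a partial-pressure difference Δp_i. The reported value α = 56.2 for [BSmim][tos] thus means CO₂ permeates about 56 times faster than N₂ under the same driving force.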
Procedia PDF Downloads 70
550 Application of Geotube® Method for Sludge Handling in Adaro Coal Mine
Authors: Ezman Fitriansyah, Lestari Diah Restu, Wawan
Abstract:
Adaro coal mine in South Kalimantan, Indonesia, maintains a catchment area of approximately 15,000 ha for its mine operation. As an open pit surface coal mine with a high erosion rate, the mine water in Adaro coal mine contains high TSS that needs to be treated before being released to rivers. For the treatment process, Adaro operates 21 settling ponds equipped with a combination of physical and chemical systems to separate solids and water and to ensure the discharged water complies with regional environmental quality standards. However, the sludge created by the sedimentation process gradually reduces the settling ponds' capacity. Therefore, regular maintenance activities are required to recover and maintain the ponds' capacity. Trucking and direct dredging had been the most common methods to handle sludge in Adaro, but the main problem in applying these two methods is the excessive area required for drying pond construction. To solve this problem, Adaro implements an alternative method called Geotube®. The principle of the Geotube® method is that the sludge contained in the settling ponds is pumped into Geotube® containers which have been designed to release water and retain mud flocks. During the pumping process, flocculant chemicals are injected into the sludge to form bigger mud flocks. Due to the difference in particle size, the mud flocks settle in the container whilst the water continues to flow out through the container’s pores. Compared to the trucking and direct dredging methods, this method provides three advantages: less space required to operate, an increase in overburden waste dump volume, and an increase in water treatment speed and quality. Based on the evaluation results, the Geotube® method needs only one eighth of the space required by the other methods. From the geotechnical assessment conducted by Adaro, the potential loss of waste dump volume capacity prior to the implementation of the Geotube® method was 26.7%. The water treatment of TSS in well-maintained ponds is 16% more efficient.
Keywords: geotube, mine water, settling pond, sludge handling, wastewater treatment
Procedia PDF Downloads 200
549 The Characteristics of the Operating Parameters of the Vertical Axis Wind Turbine for the Selected Wind Speed
Authors: Zdzislaw Kaminski, Zbigniew Czyz
Abstract:
The paper discusses the results of research into a wind turbine with a vertical axis of rotation, performed in the open-return wind tunnel Gunt HM 170 at the laboratory of the Department of Thermodynamics, Fluid Mechanics and Propulsion Aviation Systems of Lublin University of Technology. Wind tunnel experiments are a necessary step in constructing any new type of wind turbine, to validate design assumptions and numerical results. This research focused on a rotor with blades capable of modifying their working surfaces, i.e. the surfaces absorbing wind kinetic energy. The operation of this rotor is based on adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis. If this angle α increases, the working surface which absorbs wind kinetic energy also increases. The study was performed on scaled and geometrically similar models, with the similarity criteria relevant for this type of research preserved. The rotors with varied angular apertures of their blades were printed for the research with a powder 3D printer, ZPrinter® 450. This paper presents the research results for a selected flow speed of 6.5 m/s for three angular apertures of the rotor blades, i.e. 30°, 60° and 90°, at varied rotational speeds. The test stand enables the turbine rotor to be braked to achieve the required speed, and the airflow speed and torque to be recorded. Accordingly, the torque and power were plotted as a function of airflow. The rotor with its adjustable blades enables turbine power to be adjusted within a wide range of wind speeds. A variable angular aperture of the blade working surfaces α in a wind turbine enables us to control the speed of the turbine and consequently its output power. Reducing the angular aperture of the working surfaces results in reduced speed and, if a special current generator is applied, reduced electrical output power. Adjusting the speed by changing angle α enables the maximum load acting on the rotor blades to be controlled. The solution under study is a kind of safeguard against damage to the turbine due to possibly high wind speeds.
Keywords: drive torque, renewable energy, power, wind turbine, wind tunnel
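The quantities plotted in the study relate through the standard rotor relations (textbook definitions, not values or notation taken from the paper): mechanical power follows from torque and rotational speed, and the power coefficient normalizes it by the kinetic power of the wind passing through the swept area A:

```latex
P = T\,\omega, \qquad
\lambda = \frac{\omega R}{v}, \qquad
C_P = \frac{P}{\tfrac{1}{2}\,\rho\,A\,v^{3}},
```

so, for example, at v = 6.5 m/s with air density ρ ≈ 1.2 kg/m³, a model rotor sweeping an assumed A = 0.05 m² has only about 0.5 × 1.2 × 0.05 × 6.5³ ≈ 8.2 W of wind power available, which bounds the torque-speed products that can be measured on the test stand.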
Procedia PDF Downloads 258
548 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses
Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi
Abstract:
The insect order Thysanoptera includes small insects commonly called thrips. As insect vectors, only thrips are capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), affecting various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors for tospoviruses. Frankliniella schultzei, which is reported to act as a vector for at least five tospoviruses, has been suspected to be a species complex comprising more than one species. It is one of the historical unresolved issues, where two species, namely F. schultzei Trybom and F. sulphurea Schmutz, were erected from South Africa and Sri Lanka respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form). However, these two have been considered valid species by some thrips workers. Parallel studies have indicated that the brown form of schultzei is a vector for tospoviruses while the yellow form is a non-vector. However, recent studies have shown that yellow populations have also been documented as vectors. In view of all these facts, it is highly important to have a clear understanding of whether these colour forms represent true species or merely different populations with different vector-carrying capacities, and whether there is some hidden diversity in the 'Frankliniella schultzei species complex'. In this study, we aim to examine the 'Frankliniella schultzei species complex' through a molecular lens, with DNA data from India, Australia and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. For the COI dataset, there were seventy-four sequences, out of which data on fifty-five were generated in the current study and the others were retrieved from NCBI. All four tree construction methods (neighbor-joining, maximum parsimony, maximum likelihood and Bayesian analysis) yielded the same tree topology and produced five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, out of which data on thirty-nine were generated in the current study and the others were retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support values/posterior probabilities. Here we could not retrieve one cryptic species from South Africa, as we could not generate rDNA data from South Africa and rDNA sequences from the African region were not available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, general mixed Yule-coalescent, and Poisson tree processes) also supported the phylogenetic data and produced 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets respectively. These results indicate the likelihood that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of these two species and comparison with type specimens.
Keywords: DNA barcoding, species complex, thrips, species delimitation
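The "high genetic divergence" reported between the candidate species is typically expressed as a pairwise distance between aligned barcode fragments. A minimal sketch of the simplest such measure, the uncorrected p-distance, is shown below; the sequences are toy placeholders, not data from the study, and published barcoding work usually applies corrected distances (e.g. Kimura 2-parameter) in dedicated software.

```python
from itertools import combinations

def p_distance(seq1, seq2):
    """Uncorrected pairwise distance: fraction of differing sites, ignoring gaps/ambiguities."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper()) if a in "ACGT" and b in "ACGT"]
    if not pairs:
        return float("nan")
    return sum(a != b for a, b in pairs) / len(pairs)

# Toy aligned COI fragments (illustrative only)
sequences = {
    "schultzei_dark_IN1":  "ATGGCATTCCCACGAATAAATAACATAAGATTCTGA",
    "schultzei_pale_AU1":  "ATGGCATTCCCACGAATAAATAATATAAGATTTTGA",
    "outgroup_sample":     "ATGACATTCCCTCGTATAAACAACATAAGATTCTGA",
}

for (n1, s1), (n2, s2) in combinations(sequences.items(), 2):
    print(f"{n1} vs {n2}: {p_distance(s1, s2):.3f}")
```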
Procedia PDF Downloads 128
547 Numerical Investigation of Effect of Throat Design on the Performance of a Rectangular Ramjet Intake
Authors: Subrat Partha Sarathi Pattnaik, Rajan N.K.S.
Abstract:
Integrated rocket ramjet engines are highly suitable for long-range missile applications. Designing fixed-geometry intakes for such missiles that can operate efficiently over a range of operating conditions is a highly challenging task. Hence, the present study aims to evaluate the effect of throat design on the performance of a rectangular mixed compression intake for operation in the Mach number range of 1.8 – 2.5. The analysis has been carried out at four different Mach numbers of 1.8, 2, 2.2 and 2.5 and two angles of attack of +5 and +10 degrees. For the throat design, three different throat heights have been considered, one corresponding to a 3-external-shock design and two heights corresponding to a 2-external-shock design, leading to different internal contraction ratios. The on-design Mach number for the study is M 2.2. To obtain the viscous flow field in the intake, the theoretical designs have been considered for computational fluid dynamic analysis, for which the Favre-averaged Navier-Stokes (FANS) equations with the two-equation SST k-ω model have been solved. The analysis shows that for zero angle of attack, at on-design and high off-design Mach number operation, the three-ramp design leads to a higher total pressure recovery (TPR) compared to the two-ramp design at both contraction ratios while maintaining the same mass flow ratio (MFR). At low off-design Mach numbers, however, the total pressure shows the opposite trend: it is maximum for the two-ramp low-contraction-ratio design due to lower shock loss across the external shocks. Similarly, the MFR is higher for the low-contraction-ratio design as the external ramp shocks move closer to the cowl. At both angle-of-attack conditions and over the complete range of Mach numbers, the total pressure recovery and mass flow ratios are highest for the two-ramp low-contraction design, due to lower stagnation pressure loss across the detached bow shock formed at the ramp and lower mass spillage. Hence, the low-contraction design is found to be suitable for higher off-design performance.
Keywords: internal contraction ratio, mass flow ratio, mixed compression intake, performance, supersonic flows
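The two performance metrics compared above are conventionally defined as follows (standard intake definitions; the symbols are generic rather than the authors' notation):

```latex
\mathrm{TPR} = \frac{p_{0,\mathrm{exit}}}{p_{0,\infty}}, \qquad
\mathrm{MFR} = \frac{\dot{m}_{\mathrm{captured}}}{\rho_{\infty} V_{\infty} A_{c}} = \frac{A_{0}}{A_{c}},
```

where p₀ is the stagnation pressure, A_c the cowl (capture) area and A₀ the free-stream tube area actually swallowed by the intake; shock and viscous losses lower the TPR, while spillage ahead of the cowl lowers the MFR.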
Procedia PDF Downloads 108
546 Sustainable Development Change within Our Environs
Authors: Akinwale Adeyinka
Abstract:
Critical natural resources such as clean groundwater, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. With over 6 billion people on earth, and almost a quarter of a million added each day, the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to earth’s life support systems. In addition, the world faces an onslaught of other environmental threats including global climate change, global warming, intensified acid rain, stratospheric ozone depletion and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to increase the scale of human activities to a level that underlies all these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, earth’s basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that our relationship to the earth has changed so utterly, we have to see that change and understand its implications. There are actually two aspects to this challenge. The first is to realize that our power to harm the earth can indeed have global and even permanent effects. The second is to realize that the only way to understand our new role as a co-architect of nature is to see ourselves as part of a complex system that does operate according to the same simple rules of cause and effect we are used to. Understanding the physical and biological dimensions of the earth system is therefore an important precondition for making sensible policy to protect our environment, because sustainable development is a matter of reconciling respect for the environment, social equity and economic profitability. We also strongly believe that environmental protection is naturally about reducing air and water pollution, but it also includes improving the environmental performance of existing processes. That is why we should always keep at the heart of our business that the environmental problem is not so much our effect on the environment as our relationship with the environment. We should always think of being environmentally friendly in our operations.
Keywords: stratospheric ozone depletion, climate change, global warming, social equity and economic profitability
Procedia PDF Downloads 337
545 As a Secure Bridge Country about Oil and Gas Sources Transfer after Arab Spring: Turkey
Authors: Fatih Ercin Guney, Hami Karagol
Abstract:
Day by day, humanity's energy needs increase, and facilitating access to energy sources for energy-importing countries is of great importance in terms of both economic and political security. The geographical location of the oil-exporting countries in the Middle East (Iran, Iraq, Kuwait, Libya, Saudi Arabia, United Arab Emirates, Qatar) is today evaluated in light of the emerging Arab Spring (from Tunisia to Egypt) and freedom struggles (in Syria), together with security issues arising from terrorist activities (ISIS). Given these developments concerning limited natural resources, energy and its transportation, which worry the developing countries, the question is how energy from the region can be transferred safely. In the region north of the Black Sea, the conflict that began between Russia and Ukraine (2010), affecting the relevant sections of the power transmission line from Russia to Europe, is considered to have strengthened the hand of the east in both economic and political terms. With the growing need for safe access from the west, new energy transmission lines through Turkey are being pursued, and western interest is considered to have shifted back to the Mediterranean and the Middle East. Also, Russia, Iran and China (the three axes of the east) generally carry out parallel policies on energy, the economy and security, both in the United Nations Security Council (two of the five permanent members being Russia and China) and in the Shanghai Cooperation Organization. In addition, tension in the Eastern Mediterranean region is rapidly increasing over the search for new oil and natural gas sources by Israel, Egypt, Cyprus and Lebanon. This paper argues that new energy corridor(s) are needed to transfer sources (oil and natural gas) to Europe from east to west. The West therefore needs either a safe bridge country in the region to transfer natural sources to Europe, or the discovery of new natural sources in the extraterritorial waters of the Eastern Mediterranean region; in both cases, secure transfer corridors from the region to Europe must be evaluated. Even if new natural sources can be discovered, they still have to be transferred in a safe manner. This paper discusses Turkey's importance as a leading country in the region, in terms of both politics and safe energy transfer, as a bridge country between its south and north, and why natural sources should be transferred over Turkey even though diplomatic issues have occurred, for example Cyprus's membership in the European Union, the duration of Turkey's membership candidacy, and the Israel-Cyprus-Egypt-Lebanon searches for new natural sources in the Mediterranean. However, the political balance in the Middle East is changing quickly because of the lack of democratic governments in the region, so it is considered that alliances formed around natural resource exploration may not be long-term relationships, due to the sharing of sources after discoveries. After evaluating these causes and reasons, the paper aims to reach a foresight about the future of the region for secure energy transfer.
Keywords: Middle East, natural gas, oil, Turkey
Procedia PDF Downloads 297
544 Rehabilitation of Dilapidated Buildings in Morocco: Turning Urban Challenges into Opportunities
Authors: Derradji A., Ben El Mamoun M., Zakaria E., Charadi I. Anrur
Abstract:
The issue of dilapidated buildings represents a significant opportunity for constructive and beneficial interventions in Morocco. Faced with challenges associated with aging constructions and rapid urbanization, the country is committed to developing innovative strategies aimed at revitalizing urban areas and enhancing the sustainability of infrastructure, thereby ensuring citizens' safety. Through targeted investments in the renovation and modernization of existing buildings, Morocco aims to stimulate job creation, boost the local economy, and improve the quality of life for residents. Additionally, the integration of sustainable construction standards and the strengthening of regulations will promote resilient and environmentally friendly urban development. In this proactive perspective, LABOTEST has been commissioned by the National Agency for Urban Renewal (ANRUR) to conduct an in-depth study. This study focuses on the technical expertise of 1800 buildings identified as dilapidated in the prefectures of Rabat and Skhirat-Témara following an initial clearance operation. The primary objective of this initiative is to conduct a comprehensive diagnosis of these buildings and define the necessary interventions to eliminate potential risks while ensuring appropriate treatment. The article presents the adopted intervention methodology, taking into account the social dimensions involved, as well as the results of the technical expertise. These results include the classification of buildings according to their degree of urgency and recommendations for appropriate conservatory measures. Additionally, different pathologies are identified and accompanied by specific treatment proposals for each type of building. Since this study, the adopted approach has been generalized to the entire territory of Morocco. LABOTEST has been solicited by other cities such as Casablanca, Chefchaouen, Ouazzane, Azilal, Bejaad, and Demnate. This extension of the initiative demonstrates Morocco's commitment to addressing urban challenges in a proactive and inclusive manner. These efforts also illustrate the endeavors undertaken to transform urban challenges into opportunities for sustainable development and socio-economic progress for the entire population.
Keywords: building, dilapidated, rehabilitation, Morocco
Procedia PDF Downloads 64
543 Signal Processing Techniques for Adaptive Beamforming with Robustness
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Adaptive beamforming using an antenna array of sensors is useful in the process of adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. For conventional adaptive array beamforming, we require prior information of either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer will be deteriorated by any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer against the steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vector of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used for appropriately finding the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with the existing robust techniques.
Keywords: adaptive beamforming, robustness, signal blocking, steering angle error
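As a rough illustration of the kind of processing described (a projection matrix built from steering vectors over an angular tolerance sector is used to refine the presumed steering vector before the adaptive weights are formed), here is a minimal NumPy sketch. It is a generic robust MVDR beamformer under the stated assumptions, not the authors' exact algorithm; the array geometry, sector width and thresholds are arbitrary illustrative choices.

```python
import numpy as np

def steering_vector(theta_deg, n_ant, d=0.5):
    """Uniform linear array steering vector; d is element spacing in wavelengths."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_ant))

def robust_mvdr_weights(snapshots, theta_presumed, tol_deg=5.0):
    """Refine the presumed steering vector inside an angular tolerance sector,
    then form MVDR weights with it (illustrative, not the authors' exact method)."""
    n_ant = snapshots.shape[0]
    # Sample covariance of the received array data
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Subspace spanned by steering vectors over the presumed-angle tolerance range
    thetas = np.linspace(theta_presumed - tol_deg, theta_presumed + tol_deg, 41)
    A = np.stack([steering_vector(t, n_ant) for t in thetas], axis=1)
    eigvals, eigvecs = np.linalg.eigh(A @ A.conj().T)
    U = eigvecs[:, eigvals > 1e-3 * eigvals.max()]   # dominant sector subspace
    P = U @ U.conj().T                               # projection matrix for the sector
    # Project the presumed steering vector onto the sector subspace
    a_hat = P @ steering_vector(theta_presumed, n_ant)
    a_hat *= np.sqrt(n_ant) / np.linalg.norm(a_hat)
    # MVDR (Capon) weights with the refined steering vector
    R_inv_a = np.linalg.solve(R, a_hat)
    return R_inv_a / (a_hat.conj() @ R_inv_a)

# Usage with synthetic data: desired signal actually at 12 deg, presumed at 10 deg
rng = np.random.default_rng(0)
n_ant, n_snap = 8, 500
s = steering_vector(12.0, n_ant)[:, None] * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2)
w = robust_mvdr_weights(s + noise, theta_presumed=10.0)
print(abs(w.conj() @ steering_vector(12.0, n_ant)))  # response toward the true direction
```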
Procedia PDF Downloads 125
542 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises
Authors: Eliane Abdo, Olivier Colot
Abstract:
SMEs are always considered a national priority due to their contribution to job creation, innovation and growth. Once the start-up phase is crossed with encouraging results, the company enters the growth phase. In order to improve its competitiveness and maintain and increase its market share, the company finds it necessary, even obligatory, to develop its tangible and intangible investments. SMEs are generally closely held companies with a special and critical financial situation, limited resources and difficulty accessing the capital markets; their shareholders constantly live in a conflict between their independence and their need to increase capital, which leads to the entry of new shareholders. Capital structure has always been considered the core of research in corporate finance; moreover, the financial crisis and its repercussions on credit availability, especially for SMEs, have made SME financing a hot topic. On the other hand, financial theories do not provide answers to capital structure questions; they offer tools and modes of financing that are more accessible to larger companies. Yet SMEs' capital structure cannot be independent of their governance structure. Classic financial theory supposes independence between the investment decision and the financing decision. Thus, investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms, and to check whether they are constrained by the conventional solutions used by large companies. This research therefore focuses on the analysis of SMEs' resource structure in parallel with their investment structure, in order to highlight a link between their asset and liability structures. We founded our conceptual model on two main theoretical frameworks, the pecking order theory and the trade-off theory, taking into consideration SMEs' characteristics. Our data were generated from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investment over a period of 10 years (2007-2016). The results show dependence between equity and internal financing in the case of intangible investment development. Moreover, this type of business is constrained in its access to financial debt, since the guarantees provided are not sufficient to meet the banks' requirements. However, for tangible investment development, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing. This is consistent with the pecking order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones. However, they always prefer internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase of financial debt. Thus, the pecking order theory's predictions seem to be the most plausible. Consequently, SMEs rely primarily on self-financing and then go into debt as a priority to finance their financial deficit.
Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory
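A minimal sketch of the kind of panel regression described, assuming a firm-year data set; the variable names and specification are illustrative placeholders, not the DIANE fields or the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative firm-year panel; column names are assumptions
df = pd.read_csv("sme_panel.csv")  # columns: firm, year, debt_ratio, tang_invest,
                                   # intang_invest, profitability, size, internal_financing

# Pooled OLS with year effects and firm-clustered errors as a simple stand-in:
# does new financial debt follow tangible rather than intangible investment?
model = smf.ols(
    "debt_ratio ~ tang_invest + intang_invest + profitability + size"
    " + internal_financing + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})

print(model.summary())
# Under the pecking order theory one would expect negative coefficients on
# profitability and internal_financing, and a positive one on tang_invest.
```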
Procedia PDF Downloads 113
541 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran
Authors: S. Mani, M. Kafil, E. Asadi
Abstract:
Condition Monitoring (CM) of electrical equipment has gained remarkable importance during recent years, due to huge production losses, substantial imposed costs and increases in vulnerability, risk and uncertainty levels. Power cables feed numerous electrical equipment such as transformers, motors, and electric furnaces; thus their condition assessment is of very great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cables' performance and limit their uptime, and provides a comprehensive framework entailing practical CM guidelines for the maintenance of cables in industries. The multilevel CM framework presented in this study covers performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and covers short-term and long-term threats to the operation and longevity of power cables. The study, after concisely overviewing the concept of CM, thoroughly investigates five major areas: power quality; insulation quality features of partial discharges, tan delta and voltage withstand capabilities; sheath faults; shield currents; and environmental features of temperature and humidity. It also elaborates the interconnections and mutual impacts between those areas, using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines presented in the study are applied to the specific case of Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC), the largest steel company in the MENA region, in Iran. Specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.
Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality
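One of the insulation-quality features mentioned above, the dissipation factor (tan delta), is conventionally defined from the equivalent parallel RC model of the insulation (a textbook relation, not a formula taken from the paper):

```latex
\tan\delta = \frac{I_{R}}{I_{C}} = \frac{1}{\omega R_{p} C_{p}},
```

where I_R and I_C are the resistive and capacitive components of the current drawn by the insulation at angular frequency ω; an ageing or moisture-affected cable shows a rising tan δ because the resistive (loss) current grows relative to the capacitive one.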
Procedia PDF Downloads 228
540 Hot Carrier Photocurrent as a Candidate for an Intrinsic Loss in a Single Junction Solar Cell
Authors: Jonas Gradauskas, Oleksandr Masalskyi, Ihor Zharchenko
Abstract:
The advancement in improving the efficiency of conventional solar cells toward the Shockley-Queisser limit seems to be slowing down or reaching a point of saturation. The challenges hindering the reduction of this efficiency gap can be categorized into extrinsic and intrinsic losses, with the former being theoretically avoidable. Among the five intrinsic losses, two, namely the below-Eg loss (resulting from non-absorption of photons with energy below the semiconductor bandgap) and the thermalization loss, contribute approximately 55% of the overall lost fraction of solar radiation at energy bandgap values corresponding to silicon and gallium arsenide. Efforts to minimize the disparity between theoretically predicted and experimentally achieved efficiencies in solar cells necessitate the integration of innovative physical concepts. Hot carriers (HC) present a contemporary approach to addressing this challenge. The significance of hot carriers in photovoltaics is not fully understood. Although their excess energy is thought to affect a cell's performance indirectly through thermalization loss, where the excess energy heats the lattice and leads to efficiency loss, evidence suggests the presence of hot carriers in solar cells. Despite their exceptionally brief lifespan, tangible benefits arise from their existence. The study highlights direct experimental evidence of the hot carrier effect induced by both below- and above-bandgap radiation in a single-junction solar cell. Photocurrent flowing across silicon and GaAs p-n junctions is analyzed. The photoresponse consists, on the whole, of three components caused by electron-hole pair generation, hot carriers, and lattice heating. The last two components counteract the conventional electron-hole-generation current required for successful solar cell operation. Also, a model of the temperature coefficient of the voltage change of the current-voltage characteristic is used to obtain the hot carrier temperature. The distribution of cold and hot carriers is analyzed with regard to the potential barrier height of the p-n junction. These discoveries contribute to a better understanding of hot carrier phenomena in photovoltaic devices and are likely to prompt a reevaluation of intrinsic losses in solar cells.
Keywords: solar cell, hot carriers, intrinsic losses, efficiency, photocurrent
Procedia PDF Downloads 65
539 Association Between Renewable Energy and Community Forest User Group: A Case of Siranchowk Rural Municipality, Nepal
Authors: Prem Bahadur Giri, Mathinee Yucharoen
Abstract:
Community forest user groups (CFUGs) have been the cornerstone of forest management efforts in Nepal. Due to the lack of a smooth transition into the local governance structure in 2017, policy instruments have not been effectively cascaded to the local level, creating ambiguity and inconsistency in forest governance. Descriptive mixed-method research was performed with community users and stakeholders of the Tarpakha community forest, Siranchowk Rural Municipality, to understand the role of the political economy in CFUG management. A household survey was conducted among 100 households (who are also existing members of the Tarpakha CFUG) to understand and document their energy consumption preferences and practices. Likewise, ten key informant interviews and five focus group discussions with municipality and forest management officials were also conducted to gain a wider overview of the factors and the political, socio-economic, and religious contexts behind the utilization of renewable energy for sustainable development. Findings from our study suggest that only 3% of households use biogas as their main source of energy. The rest of the households mention liquid petroleum gas (LPG), electricity, and firewood as their major sources of energy for domestic purposes. Community members highlighted the difficulty in accessing firewood due to strict regulations from the CFUG, the lack of cattle and manpower to rear cattle to produce cow dung (for biogas), and the lack of technical expertise at the community level for the operation and maintenance of solar energy, among other challenges. Likewise, key informants mentioned policy loopholes at both the federal and local levels, especially with regard to the promotion of alternative or renewable energy, as there are no clear mandates and provisions to regulate the renewable energy industry. The study recommends an in-depth study on the feasibility of renewable energy sources, especially in the context of CFUGs, where biodiversity conservation aspects need to be equally taken into consideration when thinking of the promotion and expansion of renewable energy sources.
Keywords: community forest, renewable energy, sustainable development, Nepal
Procedia PDF Downloads 14
538 Index of Suitability for Culex pipiens sl. Mosquitoes in Portugal Mainland
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, REVIVE team
Abstract:
The environment of the mosquito complex Culex pipiens sl. in mainland Portugal is evaluated based on its abundance, using a georeferenced data set collected during seven years (2006-2012) from May to October. The suitability of the different regions can be delineated using the relative abundance areas; the suitability index is directly proportional to disease transmission risk and allows mitigation measures to be focused in order to avoid outbreaks of vector-borne diseases. The interest in the Culex pipiens complex is justified by its medical importance: the females bite all warm-blooded vertebrates and are involved in the circulation of several arboviruses of concern to human health, like West Nile virus, iridoviruses, rheoviruses and parvoviruses. The abundance of Culex pipiens mosquitoes was documented systematically all over the territory by the local health services, in a long-duration programme running since 2006. The environmental factors used to characterize the vector habitat are land use/land cover, distance to cartographed water bodies, altitude and latitude. The focus is on the mosquito females, whose gonotrophic cycle of mate-bloodmeal-oviposition is responsible for virus transmission; their abundance is the key for planning non-aggressive prophylactic countermeasures that may eliminate the transmission risk and simultaneously avoid environmental degradation by chemicals. Meteorological parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures) and daily total rainfall were gathered from the weather station network for the same dates and crossed with the standardized female abundance in a geographic information system (GIS). The mean capture and the percentage of above-average captures related to each variable are used as criteria to compute a threshold for each meteorological parameter; the difference of the mean capture above/below the threshold was statistically assessed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the meaningful thresholds for each parameter. The intersection of the maps of all the parameters obtained for each month shows the evolution of the suitable meteorological conditions through the mosquito season, considered to be May to October, although the first and last months are less relevant. In parallel, mean and above-average captures were related to the physiographic parameters: the land use/land cover classes most relevant in each month, the preferred altitudes and the most frequent distance to water bodies, a factor closely related to mosquito biology. The maps produced with these results were crossed with the previously segmented meteorological maps, in order to obtain an index of suitability for the complex Culex pipiens evaluated all over the country, and its evolution from the beginning to the end of the mosquito season.
Keywords: suitability index, Culex pipiens, habitat evolution, GIS model
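A minimal sketch of the map-combination step described above (thresholding each interpolated parameter raster and intersecting the resulting masks into a monthly suitability layer), using plain NumPy arrays as stand-ins for the GIS rasters. The thresholds, ranges and array shapes are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative monthly rasters (rows x cols grids), e.g. interpolated from weather stations
shape = (200, 150)
rng = np.random.default_rng(1)
temperature = rng.uniform(10, 30, shape)      # mean daily temperature, deg C
humidity = rng.uniform(30, 95, shape)         # relative humidity, %
rainfall = rng.uniform(0, 12, shape)          # daily total rainfall, mm
dist_water = rng.uniform(0, 5000, shape)      # distance to water bodies, m

# Illustrative thresholds, as would be derived from the above-average-capture analysis
masks = [
    temperature >= 18.0,
    humidity >= 55.0,
    rainfall <= 8.0,
    dist_water <= 2000.0,
]

# Intersection of all favourable conditions gives a binary suitability layer;
# summing the masks instead gives a graded 0..4 index
suitability = np.logical_and.reduce(masks)
graded_index = np.sum(masks, axis=0)

print("favourable cells:", suitability.mean().round(3),
      "mean graded index:", graded_index.mean().round(2))
```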
Procedia PDF Downloads 576
537 A Fast Method for Graphene-Supported Pd-Co Nanostructures as Catalyst toward Ethanol Oxidation in Alkaline Media
Authors: Amir Shafiee Kisomi, Mehrdad Mofidi
Abstract:
Nowadays, fuel cells, as a promising alternative power source, have been widely studied owing to their safety, high energy density, low operating temperatures, renewable capability and low emission of environmental pollutants. Core-shell nanoparticles can be broadly described as a combination of a shell (outer layer material) and a core (inner material), and their characteristics depend greatly on the dimensions and composition of the core and shell. In addition, a change in the constituent materials or in the ratio of core to shell can create their special noble characteristics. In this study, a fast technique for the fabrication of a Pd-Co/G/GCE modified electrode is presented. A thermal decomposition reaction of cobalt (II) formate salt over the surface of a graphene/glassy carbon electrode (G/GCE) is utilized for the synthesis of Co nanoparticles. The Pd-Co nanoparticles decorated on the graphene are created by the following method: (1) thermal decomposition reaction of cobalt (II) formate salt and (2) galvanic replacement of Co by Pd²⁺. The physical and electrochemical performance of the as-prepared Pd-Co/G electrocatalyst is studied by Field Emission Scanning Electron Microscopy (FESEM), Energy Dispersive X-ray Spectroscopy (EDS), Cyclic Voltammetry (CV), and Chronoamperometry (CHA). Galvanic replacement is utilized as a facile and spontaneous approach for the growth of Pd nanostructures. The Pd-Co/G is used as an anode catalyst for ethanol oxidation in alkaline media. The Pd-Co/G not only delivered a much higher current density (262.3 mA cm⁻²) compared to the Pd/C catalyst (32.1 mA cm⁻²), but also demonstrated a negative shift of the onset oxidation potential (-0.480 vs -0.460 mV) in the forward sweep. Moreover, the novel Pd-Co/G electrocatalyst exhibits a large electrochemically active surface area (ECSA), lower apparent activation energy (Ea), and higher levels of durability and poisoning tolerance compared to the Pd/C catalyst. The paper demonstrates that the catalytic activity and stability of the Pd-Co/G electrocatalyst are higher than those of the Pd/C electrocatalyst toward ethanol oxidation in alkaline media.
Keywords: thermal decomposition, nanostructures, galvanic replacement, electrocatalyst, ethanol oxidation, alkaline media
Procedia PDF Downloads 153
536 An Inorganic Nanofiber/Polymeric Microfiber Network Membrane for High-Performance Oil/Water Separation
Authors: Zhaoyang Liu
Abstract:
It has been highly desirable to develop a high-performance membrane for separating oil/water emulsions with the combined features of high water flux, high oil separation efficiency, and high mechanical stability. Here, we demonstrate a design for high-performance membranes constructed with ultra-long titanate nanofibers (over 30 µm in length) and cellulose microfibers. An integrated network membrane was achieved with these ultra-long nano/microfibers, in contrast to the non-integrated membrane constructed with carbon nanotubes (5 µm in length) and cellulose microfibers. The morphological properties of the prepared membranes were characterized by an FEI Quanta 400 (Hillsboro, OR, United States) environmental scanning electron microscope (ESEM). The hydrophilicity, underwater oleophobicity and oil adhesion properties of the membranes were examined using an advanced goniometer (Rame-hart model 500, Succasunna, NJ, USA). More specifically, the hydrophilicity of the membranes was investigated by analyzing the spreading process of water into the membranes. A filtration device (Nalgene 300-4050, Rochester, NY, USA) with an effective membrane area of 11.3 cm² was used for evaluating the separation properties of the fabricated membranes. The prepared oil-in-water emulsions were poured into the filtration device. The separation process was driven under vacuum with a constant pressure of 5 kPa. The filtrate was collected, and the oil content in water was measured by a Shimadzu total organic carbon (TOC) analyzer (Nakagyo-ku, Kyoto, Japan) to examine the separation efficiency. The water flux (J) of the membrane was calculated by measuring the time needed to collect a given volume of permeate. This network membrane demonstrated good mechanical flexibility and robustness, which are critical for practical applications. This network membrane also showed high separation efficiency (99.9%) for oil/water emulsions with oil droplet sizes down to 3 µm and, at the same time, a high water permeation flux (6.8 × 10³ L m⁻² h⁻¹ bar⁻¹) at low operating pressure. The high water flux is attributed to the interconnected scaffold-like structure throughout the whole membrane, while the high oil separation efficiency is attributed to the nanofiber-made nanoporous selective layer. Moreover, the economic materials and low-cost fabrication process of this membrane indicate its great potential for large-scale industrial applications.
Keywords: membrane, inorganic nanofibers, oil/water separation, emulsions
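The pressure-normalized water flux quoted above is conventionally computed as (a standard definition; the numerical example is back-calculated for illustration, not a reported measurement):

```latex
J = \frac{V}{A\,\Delta t\,\Delta P},
```

so, for instance, collecting roughly 96 mL of permeate through the 11.3 cm² (1.13 × 10⁻³ m²) membrane in 15 minutes (0.25 h) at 0.05 bar (5 kPa) gives J = 0.096 / (1.13 × 10⁻³ × 0.25 × 0.05) ≈ 6.8 × 10³ L m⁻² h⁻¹ bar⁻¹, the order of magnitude reported for the network membrane.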
Procedia PDF Downloads 173
535 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators
Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang
Abstract:
High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a conical-shape ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space charge forces within the electron bunch, ultra-high vacuum conditions to preserve the photocathode quantum efficiency, no field emission to prevent gas load when field emitted electrons impact the vacuum chamber, and finally no voltage breakdown for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200kV dc with a 10 cm long insulator, and a larger version at 300kV dc with 20 cm long insulator. Plans to develop a third photogun operating at 400kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point junction screening electrodes, mechanical polishing to achieve mirror-like surface finish and high voltage conditioning procedures with Kr gas to extinguish field emission.
Keywords: electron guns, high voltage techniques, insulators, vacuum insulation
Procedia PDF Downloads 113
534 Health Monitoring of Composite Pile Construction Using Fiber Bragg Gratings Sensor Arrays
Authors: B. Atli-Veltin, A. Vosteen, D. Megan, A. Jedynska, L. K. Cheng
Abstract:
Composite materials combine the advantages of being lightweight and possessing high strength. This is of particular interest for the development of large constructions, e.g., aircraft, space applications, wind turbines, etc. One of the shortcomings of using composite materials is the complex nature of the failure mechanisms, which makes it difficult to predict the remaining lifetime. Therefore, condition and health monitoring are essential when using composite materials for critical parts of a construction. Different types of sensors are used or developed to monitor composite structures. These include ultrasonic, thermography, shearography and fiber optic sensors. The first three technologies are complex and mostly used for measurement in the laboratory or during maintenance of the construction. Optical fiber sensors can be surface mounted or embedded in the composite construction to provide the unique advantage of in-operation measurement of mechanical strain and other parameters of interest. This is identified as a promising technology for Structural Health Monitoring (SHM) or Prognostic Health Monitoring (PHM) of composite constructions. Among the different fiber optic sensing technologies, the Fiber Bragg Grating (FBG) sensor is the most mature and widely used. FBG sensors can be realized in an array configuration with many FBGs in a single optical fiber. In the current project, different aspects of using embedded FBGs for composite wind turbine monitoring are investigated. The activities are divided into two parts. Firstly, an FBG-embedded carbon composite laminate is subjected to tensile and bending loading to investigate the response of FBGs placed in different orientations with respect to the fiber. Secondly, the demonstration of using an FBG sensor array for temperature and strain sensing and monitoring of a 5 m long scale model of a glass fiber mono-pile is investigated. Two different FBG types are used: special in-house fibers and off-the-shelf ones. The results from the first part of the study show that the FBG sensors survive the conditions during the production of the laminate. The test results from the tensile and bending experiments indicate that the sensors successfully respond to the change of strain. The measurements from the sensors will be correlated with the strain gauges that are placed on the surface of the laminates.
Keywords: Fiber Bragg Gratings, embedded sensors, health monitoring, wind turbine towers
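For context, the strain and temperature sensitivity of an FBG follows from the Bragg condition (standard FBG relations, not equations taken from the paper):

```latex
\lambda_{B} = 2\,n_{\mathrm{eff}}\,\Lambda, \qquad
\frac{\Delta\lambda_{B}}{\lambda_{B}} = (1 - p_{e})\,\varepsilon + (\alpha_{\Lambda} + \alpha_{n})\,\Delta T,
```

where n_eff is the effective refractive index, Λ the grating period, p_e the effective photo-elastic coefficient, and α_Λ, α_n the thermal expansion and thermo-optic coefficients. For a typical silica fiber at λ_B ≈ 1550 nm this gives roughly 1.2 pm of wavelength shift per microstrain, which is why separating strain from temperature (e.g. with an unstrained reference grating) is part of the measurement design.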
Procedia PDF Downloads 243
533 Automatic Vertical Wicking Tester Based on Optoelectronic Techniques
Authors: Chi-Wai Kan, Kam-Hong Chau, Ho-Shing Law
Abstract:
Wicking properties are important for textile finishing and wear comfort. Good wicking properties can ensure uniformity and efficiency of textile treatment. In view of wear comfort, quick-wicking fabrics facilitate the evaporation of sweat; therefore, the wetness sensation of the skin is minimised to prevent discomfort. The testing method for vertical wicking was standardised by the American Association of Textile Chemists and Colorists (AATCC) in 2011. The traditional vertical wicking test involves human error in observing fast-changing and/or unclear wicking heights. This study introduces optoelectronic devices to achieve an automatic Vertical Wicking Tester (VWT) and reduce human error. The VWT can record the wicking time and wicking height of samples. By reducing the difficulty of manual judgment, the reliability of the vertical wicking experiment is greatly increased. Furthermore, labour is greatly decreased by using the VWT. The automatic measurement of the VWT uses optoelectronic devices to trace the liquid wicking with a simple operation procedure. The optoelectronic devices detect the colour difference between dry and wet samples. This allows high sensitivity to a difference in irradiance down to 10 μW/cm². Therefore, the VWT is capable of testing dark fabric. The VWT gives a wicking distance (wicking height) with 1 mm resolution and a wicking time with one-second resolution. Acknowledgment: This is a research project of HKRITA funded by the Innovation and Technology Fund (ITF) with the title "Development of an Automatic Measuring System for Vertical Wicking" (ITP/055/20TP). The authors would like to thank the ITF for the financial support. Any opinions, findings, conclusions or recommendations expressed in this material/event (or by members of the project team) do not reflect the views of the Government of the Hong Kong Special Administrative Region, the Innovation and Technology Commission or the Panel of Assessors for the Innovation and Technology Support Programme of the Innovation and Technology Fund and the Hong Kong Research Institute of Textiles and Apparel. Also, we would like to thank Lai Tak Enterprises Limited, Kingis Development Limited and Wing Yue Textile Company Limited for their support and sponsorship.
Keywords: AATCC method, comfort, textile measurement, wetness sensation
Procedia PDF Downloads 101
532 Conceptualizing Conflict in the Gray Zone: A Comparative Analysis of Diplomatic, Military and Political Lenses
Authors: John Hardy, Paul Lushenko
Abstract:
The twenty-first century international security order has been fraught with challenges to the credibility and stability of the post-Cold War status quo. Although the American-led international system has rarely been threatened directly by dissatisfied states, an underlying challenge to the international security order has emerged in the form of a slow-burning abnegation of small but significant aspects of the status quo. Meanwhile, those security challenges which have threatened to destabilize order in the international system have not clearly belonged to the traditional notions of diplomacy and armed conflict. Instead, the main antagonists have been both states and non-state actors, the issues have crossed national and international boundaries, and contestation has occurred in a 'gray zone' between peace and war. Gray zone conflicts are not easily categorized as military operations, national security policies or political strategies, because they often include elements of diplomacy, military operations, and statecraft in complex combinations. This study applies three approaches to conceptualizing the gray zone in which many contemporary conflicts take place. The first approach frames gray zone conflicts as a form of coercive diplomacy, in which armed force is used to add credibility and commitment to political threats. The second approach frames gray zone conflicts as a form of discrete military operation, in which armed force is used sparingly and is limited to a specific issue. The third approach frames gray zone conflicts as a form of proxy war, in which armed force is used by or through third parties, rather than directly between belligerents. The study finds that each approach to conceptualizing the gray zone accounts for only a narrow range of issues which fall within the gap between traditional notions of peace and war. However, in combination, all three approaches are useful in explicating the gray zone and understanding the character of contemporary security challenges which defy simple categorization. These findings suggest that coercive diplomacy, discrete military operations, and proxy warfare provide three overlapping lenses for conceptualizing the gray zone and for understanding the gray zone conflicts which threaten international security in the early twenty-first century.
Keywords: gray zone, international security, military operations, national security, strategy
Procedia PDF Downloads 159
531 Building Information Modeling Acting as Protagonist and Link between the Virtual Environment and the Real-World for Efficiency in Building Production
Authors: Cristiane R. Magalhaes
Abstract:
Advances in Information and Communication Technologies (ICT) have led to changes in different sectors, particularly in the architecture, engineering, construction, and operation (AECO) industry. In this context, the advent of BIM (Building Information Modeling) has brought a number of opportunities in the field of the digital architectural design process, bringing integrated design concepts that impact the development, elaboration, coordination, and management of ventures. The project scope has begun to contemplate, from its earliest stage, the third dimension, by means of virtual environments (VEs) composed of models containing different specialties, substituting the two-dimensional products. The possibility of simulating the construction process of a venture in a VE starts at the beginning of the design process, offering, through new technologies, many possibilities beyond geometrical digital modeling. This is a significant change and relates not only to form, but also to how information is appropriated in architectural and engineering models and exchanged among professionals. In order to achieve the main objective of this work, the Design Science Research Method will be adopted to elaborate an artifact containing strategies for the application and use of ICTs in BIM flows, from the pre-construction stage through to the execution of the building. This article intends to discuss and investigate how BIM can be extended to the site, acting as a protagonist and link between the virtual environments and the real world, as well as its contribution to the integration of the value chain and the consequent increase of efficiency in the production of the building. The virtualization of the design process has reached high levels of development through the use of BIM. It is therefore essential that the lessons learned with the virtual models be transposed to the actual building production, increasing precision and efficiency. Thus, this paper discusses how the Fourth Industrial Revolution has impacted property developments and how BIM could be the propellant acting as the main fuel and link between the virtual environment and the real production, for the structuring of flows, information management and efficiency in this process. The results obtained are partial and not definitive up to the date of this publication. This research is part of the development of a doctoral thesis, which focuses on the discussion of the impact of digital transformation on the construction of residential buildings in Brazil.
Keywords: building information modeling, building production, digital transformation, ICT
Procedia PDF Downloads 122530 Promoting Public Participation in the Digital Memory Project: Experience from My Peking Memory Project(MPMP)
Authors: Xiaoshuang Jia, Huiling Feng, Li Niu, Wei Hai
Abstract:
Led by the Humanistic Beijing Studies Center at Renmin University of China, My Peking Memory Project (MPMP) is a long-term digital memory project grounded in public participation that enables the cultural and intellectual memory of Beijing to be collected, organized, preserved and promoted for discovery and research. Taking digital memory as a new way, MPMP is an important part of the Peking Memory Project (PMP), which is aimed at using digital technologies to protect and (re)present the cultural heritage in Beijing. The key outcome of MPMP is the co-building of a total digital collection of knowledge assets about Beijing. Institutional memories are central to Beijing’s collection and consist of the official published documentary content of Beijing. These already fall under the archival collection purview. The advances in information and communication technology and the knowledge from social memory theory have allowed us to collect more comprehensively beyond institutional collections. It is now possible to engage citizens on a large scale to collect private memories through crowdsourcing in digital formats. Private memories go beyond official published content to include personal narratives, some of which are just in people’s minds until they are captured by MPMP. One aim of MPMP is to engage individuals, communities, groups or institutions who have formed memories and content about Beijing and would like to contribute them. The project hopes to build a culture of remembering, and it believes ‘Every Memory Matters’. Digital memory contribution was achieved through the development of the MPMP. To reduce barriers to digital contribution and promote high public participation, MPMP has explored transcription services for digital ingestion, a mobile platform, and offline activities such as holding social forums. MPMP has also cooperated with the ‘Implementation Plan of Support Plan for Growth of Talents in Renmin University of China’ to obtain manpower and intellectual support. After six months of operation, MPMP now has more than 2,000 memories added and 7 Special Memory Collections online. The work of MPMP has ultimately helped to highlight its important role in safeguarding the documentary heritage and intellectual memory of Beijing.Keywords: digital memory, public participation, MPMP, cultural heritage, collection
Procedia PDF Downloads 169529 PWM Harmonic Injection and Frequency-Modulated Triangular Carrier to Improve the Lives of the Transformers
Authors: Mario J. Meco-Gutierrez, Francisco Perez-Hidalgo, Juan R. Heredia-Larrubia, Antonio Ruiz-Gonzalez, Francisco Vargas-Merino
Abstract:
More and more applications use power inverters connected to transformers, for example, facilities connecting renewable generation to the power grid. It is well known that the output signal of power inverters is not a pure sine wave. The harmonic content produces negative effects, one of which is the heating of electrical machines and, therefore, affects the life of the machines. The decrease in transformer life can be calculated by the Arrhenius or Montsinger equation. Analyzing this expression, any (long-term) decrease of transformer temperature by 6 °C to 7 °C doubles its life expectancy. Methodologies: This work presents the technique of pulse width modulation (PWM) with harmonic injection and a frequency-modulated triangular carrier. This technique is used to improve the quality of the output voltage signal of PWM-controlled power inverters. The proposed technique increases the fundamental term and significantly reduces low-order harmonics with the same number of commutations per unit time as sine PWM control. To achieve this, the modulating wave is compared to a triangular carrier with variable frequency over the period of the modulator. Therefore, it is advantageous for the modulating signal to have a large amount of sinusoidal “information” in the areas of greater sampling. A triangular signal with a frequency that varies over the modulator’s period is used as a carrier, for obtaining more samples in the area with the greatest slope. A power inverter controlled by the proposed PWM technique is connected to a transformer. Results: In order to verify the derived thermal parameters under different operation conditions, another ambient and loading scenario is involved for further verification, which was sampled from the same power transformer. Temperatures of different parts of the transformer are presented for each PWM control technique analyzed. The temperature is assessed for each PWM control technique, and hence the life of the transformer is calculated for each technique. Conclusion: This paper analyzes the transformer heating produced by this technique and compares it with other forms of PWM control. It can be seen that a reduction in harmonic content produces less transformer heating and, therefore, an increase in the life of the transformer.Keywords: heating, power-inverter, PWM, transformer
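As an illustration of the carrier construction described above, the sketch below builds a triangular carrier whose frequency varies over the modulator period so that more samples fall where the sine modulating wave has the greatest slope, then compares it against the modulating wave to produce the PWM gating signal, and evaluates the Montsinger-style rule that a sustained temperature drop of about 6 °C roughly doubles transformer life. This is not the authors' implementation; the fundamental frequency, nominal carrier frequency, modulation depth, and carrier-frequency profile are all assumed for illustration.

```python
import numpy as np

# Assumed illustrative parameters (not taken from the paper)
f_mod = 50.0        # modulating (fundamental) frequency, Hz
f_c_base = 2000.0   # nominal carrier frequency, Hz
depth = 0.5         # fraction by which the carrier frequency is modulated
t = np.linspace(0.0, 1.0 / f_mod, 20000, endpoint=False)

# Sine modulating wave (unity amplitude)
modulating = np.sin(2 * np.pi * f_mod * t)

# Carrier frequency varies over the modulator period: higher where the
# modulating wave has the greatest slope (near its zero crossings).
f_c_inst = f_c_base * (1.0 + depth * np.abs(np.cos(2 * np.pi * f_mod * t)))

# Integrate the instantaneous frequency to obtain the carrier phase, then
# map the phase to a +/-1 triangular waveform.
phase = 2 * np.pi * np.cumsum(f_c_inst) * (t[1] - t[0])
carrier = 2.0 / np.pi * np.arcsin(np.sin(phase))

# PWM output: compare modulating wave with the frequency-modulated carrier.
pwm = np.where(modulating >= carrier, 1.0, -1.0)

# Montsinger-style rule of thumb: each ~6 C sustained temperature reduction
# roughly doubles insulation life (doubling step of 6 C assumed here).
def life_multiplier(delta_T_celsius, doubling_step=6.0):
    return 2.0 ** (delta_T_celsius / doubling_step)

print("Life multiplier for a 7 C reduction:", round(life_multiplier(7.0), 2))
```

The key design choice mirrored here is that the carrier, not the modulating wave, carries the frequency modulation, so the switching pattern samples the sinusoid more densely in its steepest regions without increasing the total number of commutations per fundamental period.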
Procedia PDF Downloads 412528 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser
Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt
Abstract:
This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain with dimensions 400 m long, 200 m wide and an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction – perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution and seafloor concentration is significant. The effect of ambient flow structure and the subsequent influence on jet dynamics is discussed, along with the implications for using these different simulation approaches to inform regulatory decisions.Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet
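For readers unfamiliar with the empirical scaling arguments mentioned above, the brief sketch below shows how a densimetric Froude number is typically formed for a negatively buoyant port discharge and how terminal rise height and return-point dilution are then estimated from length-scale coefficients of the kind used in design assessments. The port diameter, discharge velocity, densities, and coefficient values are assumed placeholders, not the GCDP design values or the paper's results.

```python
import math

# Assumed illustrative inputs (not the GCDP design values)
d = 0.2             # port diameter, m
u0 = 4.0            # port discharge velocity, m/s
rho_brine = 1050.0  # brine density, kg/m^3
rho_amb = 1025.0    # ambient seawater density, kg/m^3
g = 9.81

# Reduced gravity and densimetric Froude number for a dense discharge
g_prime = g * (rho_brine - rho_amb) / rho_amb
Fr = u0 / math.sqrt(g_prime * d)

# Length-scale style estimates: terminal rise height scales with d*Fr and
# return-point dilution scales with Fr. The coefficients below are assumed
# placeholders of the order reported for inclined dense jets.
C_rise = 2.2
C_dilution = 1.6
z_terminal = C_rise * d * Fr      # rise above the port, m
S_return = C_dilution * Fr        # dilution at the seafloor return point

print(f"Fr = {Fr:.1f}, rise = {z_terminal:.1f} m, dilution = {S_return:.0f}:1")
```

Relations of this form are what quiescent-ambient CFD results are usually benchmarked against; the crossflow cases discussed in the abstract fall outside their assumptions, which is why the dynamic ambient simulations matter.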
Procedia PDF Downloads 214527 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
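As a concrete, if simplified, illustration of the cost- and latency-aware workload placement discussed above, the sketch below scores a set of hypothetical cloud regions by a weighted combination of egress cost, expected transfer latency, and compute price and picks the cheapest placement. All provider names, prices, latencies, and weights are invented for illustration; a production placement engine would draw these from live pricing and monitoring data.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    egress_cost_per_gb: float    # USD/GB (hypothetical)
    latency_ms: float            # expected transfer latency (hypothetical)
    compute_cost_per_hour: float # USD/hour (hypothetical)

def placement_score(r: Region, data_gb: float, hours: float,
                    w_cost: float = 0.6, w_latency: float = 0.4) -> float:
    # Lower is better: weighted sum of monetary cost and a latency penalty.
    monetary = data_gb * r.egress_cost_per_gb + hours * r.compute_cost_per_hour
    return w_cost * monetary + w_latency * r.latency_ms

regions = [
    Region("cloudA-us-east", 0.09, 40.0, 3.2),
    Region("cloudB-eu-west", 0.08, 95.0, 2.9),
    Region("cloudC-ap-south", 0.11, 180.0, 2.4),
]

# Decide where to place a 500 GB, 10-hour batch workload.
best = min(regions, key=lambda r: placement_score(r, data_gb=500, hours=10))
print("Place workload in:", best.name)
```

The same scoring idea extends naturally to the predictive cost modeling the abstract mentions: the static prices would simply be replaced by forecast values.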
Procedia PDF Downloads 68526 Nuclear Resistance Movements: Case Study of India
Authors: Shivani Yadav
Abstract:
The paper illustrates the dynamics of nuclear resistance movements in India and how people's power rises in response to subversion of justice and suppression of human rights. The need for democratizing nuclear policy runs implicitly through the demands of the people protesting against nuclear programmes. The paper analyses the rationale behind developing nuclear energy according to the mainstream development model adopted by the state. Whether the prevalent nuclear discourse includes people’s ambitions and addresses local concerns or not is discussed. Primarily, the nuclear movements across India comprise two types of actors, i.e., the local population and the urban interlocutors. The first type of actor is the local population, comprising the people residing in the vicinity of the nuclear site who are affected by its construction, presence and operation. They have very immediate concerns about nuclear energy projects but also have an ideological stand against producing nuclear energy. The second type of actor is the urban interlocutors, the intellectuals and nuclear activists who have a principled stand against nuclear energy and help to aggregate the aims and goals of the movement on various platforms. The paper focuses on the nuclear resistance movements at five sites in India: Koodankulam (Tamil Nadu), Jaitapur (Maharashtra), Haripur (West Bengal), Mithivirdi (Gujarat) and Gorakhpur (Haryana). The origin, development, role of major actors and mass media coverage of all these movements are discussed in depth. Major observations from the Indian case include: first, nuclear policy discussions in India are confined to elite circles; second, concepts like national security and national interest are used to suppress dissent against mainstream policies; and third, India’s energy policies focus on economic concerns while ignoring the human implications of such policies. In conclusion, the paper observes that the anti-nuclear movements question not just the feasibility of nuclear power but also its exclusionary nature when it comes to people’s participation in policy making, endangering of the ecology, violation of human rights, etc. The character of these protests is non-violent, with an aim to produce more inclusive policy debates and democratic dialogues.Keywords: anti-nuclear movements, Koodankulam nuclear power plant, non-violent resistance, nuclear resistance movements, social movements
Procedia PDF Downloads 148525 Plasma-Assisted Decomposition of Cyclohexane in a Dielectric Barrier Discharge Reactor
Authors: Usman Dahiru, Faisal Saleem, Kui Zhang, Adam Harvey
Abstract:
Volatile organic compounds (VOCs) are atmospheric contaminants predominantly derived from petroleum spills, solvent usage, agricultural processes, and the automobile and chemical processing industries, and can be detrimental to the environment and human health. Environmental problems such as the formation of photochemical smog, organic aerosols, and global warming are associated with VOC emissions. Research has shown a clear relationship between VOC emissions and cancer. In recent years, stricter emission regulations, especially in industrialized countries, have been put in place around the world to restrict VOC emissions. Non-thermal plasmas (NTPs) are a promising technology for reducing VOC emissions by converting them into less toxic, environmentally friendly species. The dielectric barrier discharge (DBD) plasma is of interest due to its flexibility, moderate capital cost, and ease of operation under ambient conditions. In this study, a dielectric barrier discharge (DBD) reactor has been developed for the decomposition of cyclohexane (as a VOC model compound) using nitrogen, dry air, and humidified air carrier gases. The effects of specific input energy (1.2-3.0 kJ/L), residence time (1.2-2.3 s) and concentration (220-520 ppm) were investigated. It was demonstrated that the removal efficiency of cyclohexane increased with increasing plasma power and residence time. The removal of cyclohexane decreased with increasing cyclohexane inlet concentration at fixed plasma power and residence time. The decomposition products included H₂, CO₂, H₂O, lower hydrocarbons (C₁-C₅) and solid residue. The highest removal efficiency (98.2%) was observed at a specific input energy of 3.0 kJ/L and a residence time of 2.3 s in humidified air plasma. The effect of humidity was investigated to determine whether it could reduce the formation of solid residue in the DBD reactor. It was observed that the solid residue completely disappeared in humidified air plasma. Furthermore, the presence of OH radicals due to humidification not only increased the removal efficiency of cyclohexane but also improved product selectivity. This work demonstrates that cyclohexane can be converted to smaller molecules in a dielectric barrier discharge (DBD) non-thermal plasma reactor by varying plasma power (SIE), residence time, reactor configuration, and carrier gas.Keywords: cyclohexane, dielectric barrier discharge reactor, non-thermal plasma, removal efficiency
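The operating variables reported above are related by simple definitions that are standard in non-thermal plasma studies: specific input energy is discharge power divided by gas flow rate, residence time is discharge volume divided by flow rate, and removal efficiency compares inlet and outlet VOC concentrations. The short sketch below applies these standard definitions to assumed illustrative values of power, flow rate, and reactor volume; they are not the authors' reported operating conditions.

```python
def specific_input_energy_kj_per_l(power_w: float, flow_l_per_min: float) -> float:
    # SIE = discharge power / gas flow rate, expressed in kJ/L
    flow_l_per_s = flow_l_per_min / 60.0
    return power_w / flow_l_per_s / 1000.0

def residence_time_s(discharge_volume_ml: float, flow_l_per_min: float) -> float:
    # Residence time = discharge volume / volumetric flow rate
    flow_ml_per_s = flow_l_per_min * 1000.0 / 60.0
    return discharge_volume_ml / flow_ml_per_s

def removal_efficiency_pct(c_in_ppm: float, c_out_ppm: float) -> float:
    # Fraction of the inlet VOC concentration destroyed in the reactor
    return (c_in_ppm - c_out_ppm) / c_in_ppm * 100.0

# Assumed illustrative operating point (not the paper's exact conditions)
power = 10.0   # W of plasma power
flow = 0.2     # L/min of carrier gas
volume = 7.5   # mL of discharge zone
print(f"SIE = {specific_input_energy_kj_per_l(power, flow):.1f} kJ/L")
print(f"Residence time = {residence_time_s(volume, flow):.2f} s")
print(f"Removal = {removal_efficiency_pct(400.0, 7.2):.1f} %")
```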
Procedia PDF Downloads 136524 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. We are allowed to use a large number of random filters at the cost of one scalar unknown for each filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required in the computation of convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with the random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has a high potential in the application of a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
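To make the idea of fixed random kernels with one learnable scalar per filter more concrete, here is a minimal NumPy sketch of a single forward layer. The filter sizes, filter counts, and initialization are assumptions chosen for illustration, and the back-propagation machinery of the full model is omitted; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_filter_bank(sizes, filters_per_size):
    """Fixed random kernels of varying sizes; these are never trained."""
    bank = []
    for k in sizes:
        for _ in range(filters_per_size):
            bank.append(rng.standard_normal((k, k)) / k)  # scale by size
    return bank

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def random_kernel_layer(image, bank, scales):
    """Each fixed filter response is weighted by one learnable scalar."""
    responses = []
    for kernel, s in zip(bank, scales):
        r = s * conv2d_valid(image, kernel)
        responses.append(np.maximum(r, 0.0))  # ReLU non-linearity
    return responses

# Assumed setup: 3x3, 5x5 and 9x9 random kernels to probe multiple scales.
bank = random_filter_bank(sizes=(3, 5, 9), filters_per_size=4)
scales = np.ones(len(bank))   # the only unknowns to be optimized
image = rng.standard_normal((32, 32))
feature_maps = random_kernel_layer(image, bank, scales)
print(len(feature_maps), "feature maps at mixed scales")
```

The point of the construction is visible in the parameter count: twelve filters of mixed sizes contribute only twelve trainable scalars, whereas training the kernels themselves would add well over a thousand weights.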
Procedia PDF Downloads 290523 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band
Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman
Abstract:
In this paper, vessel detection using an artificial neural network is proposed in order to automatically construct the vessel detection model from satellite imagery of the day/night band (DNB) products of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The goal of our research is the establishment of a vessel detection method using the satellite imagery of the DNB in order to monitor the change of vessel activity over a wide region. Temporal vessel monitoring is very important to detect events and understand the circumstances within the maritime environment. For vessel locating and detection techniques, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been researched. However, each data source has some lack of information due to uncertain operation or limitations of continuous observation. Therefore, the fusion of effective data and methods is important for monitoring the maritime environment in the future. DNB is one of the effective data sources for detecting small vessels, such as fishing ships, that are difficult to observe in AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images have moderate resolution and are influenced by cloud, but the same regions can be observed every day. The DNB sensor can observe the lights produced by various artifacts such as vehicles and buildings at night and can detect small vessels from fishing lights on the open water. However, the modeling of vessel detection using DNB is very difficult, since complex atmospheric and lunar conditions should be considered due to the strong influence of lunar reflection from cloud on DNB. Therefore, an artificial neural network was applied to learn the vessel detection model. As a feature for vessel detection, the brightness temperature at 3.7 μm (BT3.7) was additionally used because BT3.7 can serve as a parameter of atmospheric conditions.Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite
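A minimal sketch of the kind of supervised classifier this approach implies is given below: it pairs a DNB radiance feature with BT3.7 as an atmospheric-condition feature and trains a small feed-forward network to label pixels as vessel or non-vessel. The synthetic data, feature distributions, and network size are assumptions made for illustration and do not reproduce the authors' model or their VIIRS training data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 2000

# Synthetic stand-ins for per-pixel features (purely illustrative):
#   dnb  - DNB nighttime radiance (lit vessels appear as bright point sources)
#   bt37 - brightness temperature at 3.7 um, a proxy for atmospheric conditions
is_vessel = rng.integers(0, 2, size=n)
dnb = rng.normal(loc=np.where(is_vessel == 1, 5.0, 1.0), scale=1.0)
bt37 = rng.normal(loc=np.where(is_vessel == 1, 285.0, 280.0), scale=3.0)
X = np.column_stack([dnb, bt37])
y = is_vessel

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Small feed-forward network on the two features.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                      random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

In a real pipeline, the labels would come from co-located AIS reports or manual annotation, and the features would be extracted pixel-by-pixel from the VIIRS granules rather than simulated.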
Procedia PDF Downloads 235