Search results for: wind power density
1463 Water Governance Perspectives on the Urmia Lake Restoration Process: Challenges and Achievements
Authors: Jalil Salimi, Mandana Asadi, Naser Fathi
Abstract:
Urmia Lake (UL) has undergone a significant decline in water levels, resulting in severe environmental, socioeconomic, and health-related challenges. This paper examines the restoration process of UL from a water governance perspective. By applying a water governance model, the study evaluates the process based on six selected principles: stakeholder engagement, transparency and accountability, effectiveness, equitable water use, adaptation capacity, and water usage efficiency. The dominance of structural and physicalist approaches to water governance has led to a weak understanding of social and environmental issues, contributing to social crises. Urgent efforts are required to address the water crisis and reform water governance in the country, making water-related issues a top national priority. The UL restoration process has achieved significant milestones, including stakeholder consensus, scientific and participatory planning, environmental vision, intergenerational justice considerations, improved institutional environment for NGOs, investments in water infrastructure, transparency promotion, environmental effectiveness, and local issue resolutions. However, challenges remain, such as power distribution imbalances, bureaucratic administration, weak conflict resolution mechanisms, financial constraints, accountability issues, limited attention to social concerns, overreliance on structural solutions, legislative shortcomings, program inflexibility, and uncertainty management weaknesses. Addressing these weaknesses and challenges is crucial for the successful restoration and sustainable governance of UL.
Keywords: evaluation, restoration process, Urmia Lake, water governance, water resource management
Procedia PDF Downloads 69
1462 Trade Openness, Productivity Growth and Economic Growth: Nigeria’s Experience
Authors: S. O. Okoro
Abstract:
Some words become the catchphrase of a particular decade. Globalization, openness, and privatization are certainly among the most frequent encapsulations of the 1990s: the market is ‘in’, the state is ‘out’. In the 1970s, there were many political economists who spoke of autarky as one possible response to global economic forces: be self-contained, go it alone, put up barriers to transnational forces, put in place an import-substitution industrialization policy, and grow domestic industries. In the 1990s, the emasculation of the state is by no means complete, but there is an acceptance that the state’s power is circumscribed by forces beyond its control and potential leverage. Autarky is no longer a policy option. Nigeria, since its emergence as an independent nation, has evolved two macroeconomic management regimes of the interventionist and market-friendly styles. This paper investigates Nigeria’s growth performance over the periods incorporating these two regimes and finds that there is no structural break in Total Factor Productivity (TFP) growth; moreover, TFP growth over the entire period of study, 1970-2012, is negligible, and hence growth can only be achieved by unsustainable factor accumulation. Another important finding of this work is that the openness-human capital interaction term has a significant impact on TFP growth, but the sign of the estimated coefficient does not meet its theoretical expectation, because the negative coefficient on human capital outweighs the positive openness effect. The poor quality of human capital is considered to have given rise to this. Given these results, a massive investment in the education sector is required. The investment should be targeted at reforms that go beyond mere structural reforms to a reform agenda that will improve the quality of human capital in Nigeria.
Keywords: globalization, emasculation, openness and privatization, total factor productivity
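The TFP growth discussed above is conventionally measured as the Solow residual from growth accounting. A minimal sketch, assuming a Cobb-Douglas technology with an illustrative capital share `alpha`; the sample numbers are invented for illustration, not Nigerian data:

```python
# Growth-accounting (Solow residual) sketch: TFP growth is the part of
# output growth not explained by growth in capital and labour inputs.
# Assumes Y = A * K^alpha * L^(1-alpha); alpha and all numbers below are
# illustrative assumptions, not estimates from the study.
import math

def tfp_growth(y0, y1, k0, k1, l0, l1, alpha=0.35):
    """Return TFP growth as the Solow residual in log-differences."""
    dy = math.log(y1 / y0)          # output growth
    dk = math.log(k1 / k0)          # capital growth
    dl = math.log(l1 / l0)          # labour growth
    return dy - alpha * dk - (1 - alpha) * dl

# Example: 5% output growth driven almost entirely by factor accumulation,
# leaving a negligible residual - the situation the abstract describes.
g = tfp_growth(y0=100, y1=105, k0=50, k1=53, l0=40, l1=41.2, alpha=0.35)
print(round(g, 4))
```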
Procedia PDF Downloads 243
1461 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community
Authors: Mohamed Ghorab
Abstract:
Locally generated and distributed thermal and electrical energy systems are envisioned in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and incorporated in energy distribution between the hubs. The energy generated from each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there are interaction and energy exchange between the hubs. Network energy system and CO2 optimization among six hubs representing a Canadian community are investigated in this study. Three different scenarios of technology systems are studied to meet both thermal and electrical demand loads for the six hubs. The conventional system is used as the first technology system and a reference case study. The conventional system includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second technology system includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and Organic Rankine Cycle (ORC) systems, where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide an annual total cost saving of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP & ORC systems) provides a 32.5% saving in CO2 emission compared to the conventional system, versus 9.3% for scenario 2 (CHP system).
Keywords: distributed energy resources, network energy system, optimization, microgeneration system
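The hub-level trade-off the scenarios explore (boiler heat vs. CHP cogeneration vs. grid imports) can be sketched as a toy single-hub dispatch problem. All demands, efficiencies, and prices below are illustrative assumptions; the study itself solves a far richer six-hub model in GAMS:

```python
# Toy single-hub dispatch sketch: meet thermal and electrical demand at
# minimum cost using a boiler, a CHP unit, and grid imports.
# Every number here is an illustrative assumption, not a value from the study.
H_DEM, E_DEM = 100.0, 60.0          # thermal / electrical demand (kWh)
ETA_B = 0.90                        # boiler thermal efficiency
ETA_CHP_H, ETA_CHP_E = 0.45, 0.35   # CHP heat / electric efficiencies
C_FUEL, C_GRID = 0.04, 0.12         # $/kWh fuel and grid electricity

def total_cost(chp_fuel):
    """Cost of serving both demands for a given CHP fuel input."""
    heat_chp = ETA_CHP_H * chp_fuel
    elec_chp = ETA_CHP_E * chp_fuel
    boiler_fuel = max(0.0, H_DEM - heat_chp) / ETA_B   # boiler covers the rest
    grid = max(0.0, E_DEM - elec_chp)                  # grid covers the rest
    return C_FUEL * (chp_fuel + boiler_fuel) + C_GRID * grid

# Brute-force search over CHP fuel input (a stand-in for the LP/MIP in GAMS)
best = min((total_cost(f), f) for f in [i * 0.5 for i in range(0, 500)])
print(f"min cost ${best[0]:.2f} at CHP fuel {best[1]:.1f} kWh")
```

In this toy case the optimum sizes the CHP to just cover the electrical demand, mirroring the abstract's finding that cogeneration scenarios beat the boiler-plus-grid reference.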
Procedia PDF Downloads 192
1460 CsPbBr₃@MOF-5-Based Single Drop Microextraction for in-situ Fluorescence Colorimetric Detection of Dechlorination Reaction
Authors: Yanxue Shang, Jingbin Zeng
Abstract:
Chlorobenzene homologues (CBHs) are a category of environmental pollutants that cannot be ignored. They can stay in the environment for a long period and are potentially carcinogenic. The traditional approach to degrading CBHs is dechlorination followed by sample preparation and analysis. This is not only time-consuming and laborious, but the detection and analysis steps also depend on large-scale instruments, and therefore cannot achieve rapid, low-cost detection. Compared with traditional sensing methods, colorimetric sensing is simpler and more convenient. In recent years, chromaticity sensors based on fluorescence have attracted more and more attention. Compared with sensing methods based on changes in fluorescence intensity, changes in color gradients are easier to recognize by the naked eye. Accordingly, this work proposes to use single drop microextraction (SDME) technology to solve the above problems. After the dechlorination reaction is completed, the organic droplet extracts Cl⁻ and simultaneously realizes fluorescence colorimetric sensing. This method integrates sample processing and visual in-situ detection, simplifying the detection process. As a fluorescence colorimetric sensing material, CsPbBr₃ was encapsulated in MOF-5 to construct a CsPbBr₃@MOF-5 fluorescence colorimetric composite. The fluorescence colorimetric sensor was then constructed by dispersing the composite in SDME organic droplets. When the Br⁻ in CsPbBr₃ exchanges with Cl⁻ produced by the dechlorination reactions, it is converted into CsPbCl₃, and the fluorescence of the single SDME droplet changes from green to blue emission, thereby enabling visual observation. Therein, SDME concentrates and enriches Cl⁻, replacing sample pretreatment, and the fluorescence color change of CsPbBr₃@MOF-5 can replace the detection process of large-scale instruments to achieve real-time rapid detection.
Due to the adsorption ability of MOF-5, it can not only improve the stability of CsPbBr₃ but also induce the adsorption of Cl⁻, simultaneously accelerating the exchange of Br⁻ and Cl⁻ in CsPbBr₃ and the detection of Cl⁻. The adsorption process was verified by density functional theory (DFT) calculations. This method exhibits exceptional linearity for Cl⁻ in the range of 10⁻² - 10⁻⁶ M (10000 μM - 1 μM) with a limit of detection of 10⁻⁷ M. Subsequently, the dechlorination reactions of different kinds of CBHs were also carried out with this method, all with satisfactory detection ability. The accuracy was also verified by gas chromatography (GC), which showed that the SDME method developed in this work is highly reliable. In summary, the in-situ visualization method for dechlorination reaction detection combines sample processing with fluorescence colorimetric sensing. Thus, the strategy researched herein represents a promising method for the visual detection of dechlorination reactions and can be extended for applications in environments, chemical industries, and foods.
Keywords: chlorobenzene homologues, colorimetric sensor, metal halide perovskite, metal-organic frameworks, single drop microextraction
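The reported linear range (10⁻² - 10⁻⁶ M) implies a calibration of the colorimetric response against log concentration. A minimal sketch of such a log-linear fit and its inversion; the response values are synthetic illustrations, not data from the paper:

```python
# Calibration sketch for a reported linear range of 1e-6 to 1e-2 M Cl-:
# a ratiometric colorimetric signal fitted against log10 concentration.
# The `resp` values are synthetic illustrations, not measured data.
import math

conc = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]    # mol/L standards
resp = [0.11, 0.24, 0.39, 0.52, 0.66]    # e.g. blue/green intensity ratio

x = [math.log10(c) for c in conc]
n = len(x)
mx, my = sum(x) / n, sum(resp) / n
# Ordinary least-squares slope and intercept in log space
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, resp)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

def predict_logc(signal):
    """Invert the calibration: estimate log10([Cl-]) from a signal."""
    return (signal - intercept) / slope

print(round(slope, 4), round(predict_logc(0.45), 2))
```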
Procedia PDF Downloads 144
1459 Expounding the Evolution of the Proto-Femme Fatale and Its Correlation with the New Woman: A Close Study of David Mamet's Oleanna
Authors: Silvia Elias
Abstract:
The 'Femme Fatale' figure has become synonymous with a mysterious and seductive woman whose charms captivate her lovers into bonds of irresistible desire, often leading them to compromise or downfall. Originally, a Femme Fatale typically uses her beauty to lead men to their destruction, but in modern literature, she represents a direct attack on traditional womanhood and the nuclear family, as she refuses to abide by the pillars of mainstream society, creating an image of a strong independent woman who defies the control of men and rejects the institution of the family. This research aims at discussing the differences and similarities between the femme fatale and the New Woman and how they are perceived by the audience. There is often confusion between the characteristics that define a New Woman and a Femme Fatale, since both women desire independence, challenge typical gender role casting, push against the limits of the patriarchal society, and take control of their sexuality. The study of the femme fatale remains appealing in modern times because the fear of gender equality gives life to modern femme fatale versions, and post-modern literary works introduce their readers to new versions of the deadly seductress, one that does not fully depend on her looks to destroy men. The idea behind writing this paper was born from reading David Mamet's two-character play Oleanna (1992) and tracing the main female protagonist/antagonist's transformation from a helpless, inarticulate girl into a powerful, controlling negotiator who knows how to lead a bargain and maintain the upper hand.
Keywords: Circe, David, Eve, evolution, feminist, femme fatale, gender, Mamet, new, Odysseus, Oleanna, power, Salome, schema, seduction, temptress, woman
Procedia PDF Downloads 456
1458 Tracing Digital Traces of Phatic Communion in #Mooc
Authors: Judith Enriquez-Gibson
Abstract:
This paper meddles with the notion of phatic communion introduced 90 years ago by Malinowski, a Polish-born British anthropologist. It explores the phatic in Twitter within the contents of tweets related to moocs (massive online open courses) as a topic or trend. It is not about moocs, though. It is about practices that could easily be hidden or neglected if we let big or massive topics take the lead, or if we simply follow the computational or secret codes behind Twitter itself and third-party software analytics. It draws from media and cultural studies. Though at first it appears data-driven, as I submitted data collection and analytics into the hands of a third-party software, Twitonomy, the aim is to follow how phatic communion might be practised in a social media site such as Twitter. Lurking becomes its research method to analyse mooc-related tweets. A total of 3,000 tweets were collected on 11 October 2013 (UK timezone). The emphasis of lurking is to engage with Twitter as a system of connectivity. One interesting finding is that a click is in fact a phatic practice. A click breaks the silence. A click on one of the mooc websites is actually a tweet. A tweet was posted on behalf of a user who simply chose to click, without formulating the text and perhaps without knowing that it contains #mooc. Surely, this mechanism is not about reciprocity. To break the silence, users did not use words. They just clicked the ‘tweet button’ on a mooc website. A click performs and maintains connectivity, with Twitter as the medium in attendance in our everyday lives, available when needed to be of service. In conclusion, the phatic culture of breaking silence in Twitter does not have to submit to the power of code and analytics. It is a matter of human code.
Keywords: click, Twitter, phatic communion, social media data, mooc
Procedia PDF Downloads 414
1457 Examining the Predicting Effect of Mindfulness on Psychological Well-Being among Undergraduate Students
Authors: Piyanee Klainin-Yobas, Debbie Ramirez, Zenaida Fernandez, Jenneth Sarmiento, Wareerat Thanoi, Jeanette Ignacio, Ying Lau
Abstract:
In many countries, university students experience various stressors that may negatively affect their psychological well-being (PWB). Hence, they are at risk for physical and mental problems. This research aimed to examine the predicting effects of mindfulness, self-efficacy, and social support on psychological well-being among undergraduate students. A non-experimental study was conducted at a university in the Philippines. All students enrolled in undergraduate programs were eligible for this study unless they had chronic medical or mental health problems. Power analysis was used to calculate an adequate sample size, and a convenience sample of 630 students was recruited. Data were collected through online self-reported questionnaires from 2013 to 2015. All self-reported scales used in this study had sound psychometric properties. Descriptive statistics, correlational analyses, and structural equation modeling were performed to analyze the research data. Results showed that the participants were mostly Filipino, female, Christian, and in Schools of Nursing. Mindfulness, self-efficacy, support from family, support from friends, and support from significant others were significant predictors of psychological well-being. Mindfulness was the strongest predictor of positive psychological well-being, whereas self-efficacy was the strongest predictor of negative psychological well-being. In conclusion, findings from this study add knowledge to the existing literature regarding the predictors of psychological well-being. Psychosocial interventions, with a focus on strengthening mindfulness and self-efficacy, could be delivered to undergraduate students to help them enhance psychological well-being. More studies can be undertaken to test the interventions, and multi-centered research can be conducted to enhance the generalizability of research findings.
Keywords: mindfulness, self-efficacy, social support, psychological wellbeing
Procedia PDF Downloads 431
1456 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas in the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain deposition at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than three inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes with time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.
Keywords: flood risk, nuisance flooding, urban flooding, FMEA
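The FMEA-style combination of PoNF and CoNF described above can be sketched as a risk priority number. The 1-10 rating scales and the example ratings are illustrative assumptions, not calibrated values from the study:

```python
# Sketch of an FMEA-style nuisance-flood risk score, following the paper's
# framing of risk as a function of PoNF and CoNF. The 1-10 rating scales
# and the example ratings below are illustrative, not calibrated values.
def nuisance_flood_risk(ponf, conf, detection=1):
    """Risk priority number: occurrence x consequence x (optional) detection,
    each rated on a 1-10 FMEA scale."""
    for rating in (ponf, conf, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must lie in [1, 10]")
    return ponf * conf * detection

# Example: frequent minor flooding (PoNF=7) with moderate community
# disruption (CoNF=4) and fair detection/monitoring (2).
print(nuisance_flood_risk(ponf=7, conf=4, detection=2))
```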
Procedia PDF Downloads 100
1455 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
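The approximate-deconvolution idea behind DDM-type models can be illustrated in one dimension: van Cittert iterations approximately invert a low-pass filter to recover subfilter content from a filtered field. A minimal sketch with a simple three-point filter on a periodic domain; the filter and the test field are illustrative, not the study's configurations:

```python
# 1D sketch of approximate deconvolution: van Cittert iterations
# u <- u + (f - G u) recover an unfiltered field from its filtered version f.
# Periodic domain and a simple three-point low-pass filter; illustrative only.
import math

def filt(u, a=0.25):
    """Symmetric low-pass filter with stencil (a, 1-2a, a), periodic."""
    n = len(u)
    return [a * u[i - 1] + (1 - 2 * a) * u[i] + a * u[(i + 1) % n]
            for i in range(n)]

def van_cittert(f, iters=5):
    """Approximate inverse filtering by fixed-point iteration."""
    u = list(f)
    for _ in range(iters):
        gu = filt(u)
        u = [ui + (fi - gi) for ui, fi, gi in zip(u, f, gu)]
    return u

n = 64
truth = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(8 * math.pi * i / n)
         for i in range(n)]
f = filt(truth)                      # "grid-filtered" field
rec = van_cittert(f, iters=5)        # deconvolved field
err0 = max(abs(t - fi) for t, fi in zip(truth, f))    # error before
err5 = max(abs(t - ri) for t, ri in zip(truth, rec))  # error after
print(err5 < err0)
```

Each iteration multiplies the per-mode error by (1 - G), so the recovery converges for all modes the filter does not annihilate, which is why resolution of subfilter dynamics (the FGR effect above) matters.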
Procedia PDF Downloads 76
1454 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturers' requirements and specifications: higher densities and better performances, faster time to market and longer lifetime, newer materials and mixed buildups. From the very beginning of the PCB industry up to recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to precisely characterize the base materials (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints, by means of the finite element method for example. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, on the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, support of CNES, Thales Alenia Space and Cimulec is acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
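For reference, the in-plane orthotropic elastic behaviour measured in such tests is commonly expressed through a compliance matrix built from engineering constants. A minimal plane-stress sketch; the constants are illustrative glass/epoxy-like values, not the laminate properties measured in the study:

```python
# Plane-stress orthotropic compliance sketch: strain = S . stress.
# E1, E2, NU12, G12 are illustrative glass/epoxy-like values, not the
# measured properties of the PCB laminate characterized in the study.
E1, E2 = 25.0e9, 22.0e9      # warp / weft Young's moduli (Pa)
NU12 = 0.15                  # major in-plane Poisson's ratio
G12 = 4.0e9                  # in-plane shear modulus (Pa)

S = [[1 / E1, -NU12 / E1, 0.0],
     [-NU12 / E1, 1 / E2, 0.0],
     [0.0, 0.0, 1 / G12]]    # symmetric by the reciprocity relation

def strains(stress):
    """epsilon_i = sum_j S_ij sigma_j, with stress = [s11, s22, s12]."""
    return [sum(S[i][j] * stress[j] for j in range(3)) for i in range(3)]

eps = strains([50e6, 0.0, 0.0])   # uniaxial 50 MPa along the warp direction
nu_measured = -eps[1] / eps[0]    # recover nu12 from the strain ratio,
print(round(nu_measured, 3))      # as DIC-based uniaxial testing would
```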
Procedia PDF Downloads 205
1453 Modernizer'ness as Madness: A Comparative Historical Study of Emperor Tewodros II of Ethiopia and Sultan Selim III of Ottoman Turkey's Modernization Reforms
Authors: Seid Ahmed Mohammed, Nedim Yalansiz
Abstract:
Many historians have given little attention to historical comparison as a method of study, remaining staunch supporters of their own national historical research methods. But this approach lacks the means to analyze worldwide dynamics of events in comparative perspective. Dynamics like revolution, modernization, and societal change and transformation need broader analysis to deepen our historical knowledge by comparing and contrasting the causes, courses, and consequences of such historical developments in the world at large. This paper focuses upon the dynamics of modernization and the challenge of modernity faced by the old regimes. Countries like Turkey, Ethiopia, China, Russia, Iran, Afghanistan, and Thailand faced almost the same dynamics in confronting the challenge of modernity. In such countries, the old regimes tried to introduce modernization and ‘reform from above’ in order to tackle the gradual decline of empires that faced strong challenges from the outside world. Another similarity is that, as the rulers attempted to introduce modernization reforms, the old traditional and religious institutions strongly opposed them, since the reforms alienated the power and prestige of the traditional classes. Likewise, the rulers introduced modernization to maintain their own unique socio-cultural and religious dynamics, not as a borrowing and acculturation from the West through complete destruction of their own. Therefore, this paper attempts a comparative analysis of two modernizers, Tewodros II (1855-1868) of Ethiopia and Sultan Selim III (1789-1807) of Ottoman Turkey, who tried to modernize their empires and who ultimately paid with their lives as a result of modernization.
Keywords: comparative history, Ethiopia, modernization, Ottoman Turkey
Procedia PDF Downloads 207
1452 Study on the Thermal Mixing of Steam and Coolant in the Hybrid Safety Injection Tank
Authors: Sung Uk Ryu, Byoung Gook Jeon, Sung-Jae Yi, Dong-Jin Euh
Abstract:
In passive safety injection systems of nuclear power plants, such as the Core Makeup Tank (CMT) and the Hybrid Safety Injection Tank, various thermal-hydraulic phenomena occur, including the direct contact condensation of steam and the thermal stratification of coolant. These phenomena are also closely related to the performance of the system. The coolant injection and pressure-equalizing timings of the tank depend on the condensation rate of the steam injected into it. The steam injected into the tank from the upper nozzle penetrates the coolant and induces direct contact condensation. In the present study, the direct contact condensation of steam and the thermal mixing between the steam and coolant were examined by using the Particle Image Velocimetry (PIV) technique. In particular, by altering the size of the nozzle from which the steam is injected, the influence of steam injection velocity on thermal mixing with the coolant and on condensation was examined, while also investigating the influence of condensation on the pressure variation inside the tank. Even though the amounts of steam inserted were the same in the three different nozzle size conditions, it was found that the velocity of pressure rise becomes lower as the steam injection area decreases. Also, as the steam injection area increases, the zone within which the coolant’s temperature decreases becomes thinner; thereby, the amount of steam condensed by direct contact condensation also decreases. The results derived from the present study can be utilized for the detailed design of a passive safety injection system, as well as for modeling the direct contact condensation triggered by the steam jet’s penetration into the coolant.
Keywords: passive safety injection systems, steam penetration, direct contact condensation, particle image velocimetry
Procedia PDF Downloads 396
1451 Design and Development of Tandem Dynamometer for Testing and Validation of Motor Performance Parameters
Authors: Vedansh More, Lalatendu Bal, Ronak Panchal, Atharva Kulkarni
Abstract:
The project aims at developing a cost-effective test bench capable of testing and validating the complete powertrain package of an electric vehicle. An Emrax 228 high-voltage synchronous motor was selected as the prime mover for the study. A tandem-type dynamometer was developed, comprising two loading methods: inertial, using standard inertia rollers, and absorptive, using a separately excited DC generator with resistive coils. The absorptive loading of the prime mover was achieved by implementing a converter circuit through which the duty cycle of the input field voltage was controlled. This control was efficacious in changing the magnetic flux and hence the generated voltage, which was ultimately dropped across resistive coils assembled in a load bank in an all-parallel configuration. The prime mover and loading elements were connected via a chain drive with a 2:1 reduction ratio, which allows flexibility in the placement of components and a relaxed rating of the DC generator. The development will not only aid in the determination of essential characteristics like torque-RPM, power-RPM, torque factor, RPM factor, heat loads of devices, and battery pack state-of-charge efficiency, but also provide a significant financial advantage over existing versions of dynamometers with its cost-effective solution.
Keywords: absorptive load, chain drive, chordal action, DC generator, dynamometer, electric vehicle, inertia rollers, load bank, powertrain, pulse width modulation, reduction ratio, road load, testbench
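Two of the relations described above, the 2:1 chain reduction and the PWM duty control of the generator field voltage, can be sketched numerically. The supply voltage, drive efficiency, and operating point below are illustrative assumptions, not Emrax 228 datasheet values:

```python
# Sketch of two relations from the test-bench description: the 2:1 chain
# reduction between prime mover and generator, and PWM duty control of the
# DC generator's field voltage (hence flux and generated EMF).
# All numbers are illustrative assumptions, not datasheet values.
def through_reduction(motor_rpm, motor_torque, ratio=2.0, efficiency=0.98):
    """Across a reduction: speed divides by the ratio, torque multiplies."""
    return motor_rpm / ratio, motor_torque * ratio * efficiency

def field_voltage(duty, v_supply=48.0):
    """Average field voltage from a buck-style PWM converter: V = D * Vs."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must be in [0, 1]")
    return duty * v_supply

rpm, torque = through_reduction(motor_rpm=4000, motor_torque=100)
print(rpm, torque, field_voltage(0.6))
```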
Procedia PDF Downloads 233
1450 Experimental and Numerical Performance Analysis for Steam Jet Ejectors
Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed
Abstract:
Steam ejectors are the heart of most desalination systems that employ vacuum. Systems that employ low-grade thermal energy sources, like solar and geothermal energy, use the ejector to drive the system instead of high-grade electric energy. The jet ejector creates vacuum by employing the flow of steam or air and exploiting the severe pressure drop at the outlet of the main nozzle. The present work involves developing a one-dimensional mathematical model for designing jet ejectors and transforming it into computer code using the Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated using an existing ejector operating at the Abu Qir power station. A prototype has been designed according to the one-dimensional model and attached to a special test bench to be tested before using it in the solar desalination pilot plant. The tested ejector will be responsible for the startup evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype has shown good agreement with the results of the code. In addition, a numerical analysis has been applied to one of the designed geometries to give, on the one hand, an image of the pressure and velocity distribution inside the ejector and, on the other, to show the difference in results between the two-dimensional ideal-gas model and the real prototype. The commercial edition of ANSYS Fluent v.14 software is used to solve the two-dimensional axisymmetric case.
Keywords: solar energy, jet ejector, vacuum, evaporating effects
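One building block of such a one-dimensional ejector design code is the isentropic nozzle relation for choked flow through the main nozzle. A minimal sketch assuming ideal-gas behaviour with an approximate specific-heat ratio for steam; the actual EES model in the study is far more complete:

```python
# Isentropic-nozzle relations used in 1D ejector design codes:
# the critical (choking) pressure ratio and the exit Mach number of an
# ideal gas. GAMMA is an approximation for superheated steam; the study's
# EES model is far more complete than this fragment.
import math

GAMMA = 1.3  # approximate ratio of specific heats for superheated steam

def critical_pressure_ratio(g=GAMMA):
    """p*/p0 at the throat of a choked converging nozzle."""
    return (2 / (g + 1)) ** (g / (g - 1))

def exit_mach(p0_over_pe, g=GAMMA):
    """Mach number from the isentropic stagnation-to-static pressure ratio."""
    return math.sqrt(2 / (g - 1) * (p0_over_pe ** ((g - 1) / g) - 1))

# Throat chokes once p_back/p0 drops below ~0.55 for steam; a 10:1
# expansion across the primary nozzle gives a supersonic exit jet.
print(round(critical_pressure_ratio(), 3), round(exit_mach(10.0), 2))
```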
Procedia PDF Downloads 621
1449 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for people to interact over the internet, e.g. via e-mail and ‘instant messaging’. This paper analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person – one decision – one vote), a new voting system is designed where agents with opposed preferences are endowed with a given number of votes to distribute freely among some issues. The paper uses ideas from machine learning, artificial intelligence, and soft computing to provide a model of the development of voting system response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior.
Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
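As a minimal sketch of the rough-set machinery involved (hypothetical data, not the paper's model), the lower and upper approximations of a target set of voters can be computed from the equivalence classes induced by their observable attributes:

```python
def approximations(equiv_classes, target):
    """Rough-set lower/upper approximations of `target` with respect to
    a partition of the universe into equivalence classes (sets)."""
    lower, upper = set(), set()
    for block in equiv_classes:
        if block <= target:   # block lies entirely inside the target
            lower |= block
        if block & target:    # block overlaps the target at all
            upper |= block
    return lower, upper
```

The gap between the two approximations (the boundary region) measures how imprecisely the available attributes describe the voters of interest.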
Procedia PDF Downloads 395
1448 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One of the approaches commonly implemented is known as ‘direct numerical simulation’, DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, known as the ‘Kolmogorov scale’. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for both steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capabilities will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
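For reference, one explicit time step of the viscous Burgers equation, the validation problem named above, can be sketched with a plain first-order finite-difference scheme (a simple textbook scheme for comparison purposes, not the IDS itself):

```python
def burgers_step(u, dx, dt, nu):
    """One explicit step of u_t + u*u_x = nu*u_xx on a 1-D grid.
    First-order upwind convection (assumes u >= 0) plus central
    diffusion; crude zero-gradient boundary treatment."""
    un = list(u)
    out = list(un)
    for i in range(1, len(un) - 1):
        conv = un[i] * (un[i] - un[i - 1]) / dx
        diff = nu * (un[i + 1] - 2.0 * un[i] + un[i - 1]) / dx ** 2
        out[i] = un[i] - dt * conv + dt * diff
    out[0], out[-1] = out[1], out[-2]
    return out
```

Long-time integration with such a low-order scheme accumulates numerical diffusion, which is precisely the kind of error a higher-fidelity scheme like the IDS is intended to avoid.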
Procedia PDF Downloads 139
1447 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil
Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Further, the chemical vapor deposition (CVD) technique makes it possible to synthesize large-area, transferable graphene. This abstract describes the use of Cu foil with high surface free energy (SFE) and a high density of nano-scale surface kinks (a rough surface) for CVD graphene growth. This is the opposite of the modern practice of using smooth catalytic surfaces for high-quality graphene growth, but the controllable rough morphology opens a route to fast synthesis (less than 50 s, with a short annealing process) of graphene as a continuous film, compared with the conventional, longer process (30 min growth). The experiments showed that the high-SFE condition and the surface kinks on the Cu(100) crystal plane of the catalytic surface enabled the synthesis of graphene with a highly monolayer, continuous nature, because they promote the adsorption of C species at high concentration, which in turn drives faster nucleation and growth of graphene. The fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no, or negligible, formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form the rough, high-SFE (54.92 mJm-2) Cu foil. This surface was used to grow graphene by the CVD technique at 1000 °C for 50 s. The introduced kink-like high-SFE points on the Cu(100) crystal plane enabled faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared with other, smoother, low-SFE Cu surfaces, such as a surface smoothed by the redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm).
Although the high-SFE condition was favorable for synthesizing graphene with a monolayer, continuous nature, it failed to maintain a clean surface (the surface contained amorphous C clusters) and a defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the film and overcome the aforementioned problems. Different CVD atmospheres, such as CH4 and H2, were used; a negligible change in graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, as the ID/IG ratio of the graphene was reduced to 0.21 after post-annealing in H2 gas. In addition to the change in defectiveness, FE-SEM images showed a reduction in C-cluster contamination of the surface. High-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but fail to provide defect-free graphene. Further, a plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce the defectiveness.
Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy
Procedia PDF Downloads 244
1446 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction
Authors: Kudzanayi Chiteka, Wellington Makondo
Abstract:
The whole world is on a drive to go green owing to the negative effects of burning fossil fuels. There is therefore an immediate need to identify and utilise alternative renewable energy sources. Among these energy sources, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity are entirely dependent on solar radiation. For planning purposes, solar radiation values should be known in advance so that the necessary arrangements can be made to minimise the negative effects of the absence of solar radiation due to cloud cover and other naturally occurring phenomena. This research focused on the prediction of Global Horizontal Irradiation (GHI) values for the sixth day given values for the past five days. Artificial intelligence techniques were used. Three models were developed, based on Support Vector Machines, Radial Basis Function networks, and a Feed-Forward Back-Propagation artificial neural network. Results revealed that Support Vector Machines give the best results compared with the other two, with a mean absolute percentage error (MAPE) of 2%, a Mean Absolute Error (MAE) of 0.05 kWh/m²/day, a root mean square (RMS) error of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other predictive models had MAPEs of 4.5% and 6%, respectively, for the Radial Basis Function and Feed-Forward Back-Propagation artificial neural network, with coefficients of determination of 0.975 and 0.970, respectively. It was found that prediction of GHI values for future days is possible using artificial-intelligence-based predictive models.
Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models
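The error metrics used to compare the three models follow directly from their standard definitions; a minimal sketch (plain Python, not the authors' code) is:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error, in the units of the data (kWh/m^2/day here)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error, in the units of the data."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

Applying these three functions to each model's day-six predictions against the measured GHI values reproduces the comparison reported in the abstract.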
Procedia PDF Downloads 274
1445 Sharing Tacit Knowledge: The Essence of Knowledge Management
Authors: Ayesha Khatun
Abstract:
In the 21st century, where markets are unstable, technologies rapidly proliferate, competitors multiply, products and services become obsolete almost overnight, and customers demand low-cost, high-value products, leveraging and harnessing knowledge is not just a potential source of competitive advantage but a necessity in technology-based and information-intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing it among the individuals in the organization so that employees can make the best use of it towards achieving organizational goals. Knowledge is not a discrete object. It is embedded in people and so difficult to transfer outside the immediate context that it becomes a major competitive advantage. However, internal transfer of knowledge among employees is essential to maximize the use of knowledge available in the organization in an unstructured manner. But as knowledge is the source of competitive advantage for the organization, it is also the source of competitive advantage for individuals. People think that knowledge is power and that sharing it may lead to loss of their competitive position. Moreover, the very nature of tacit knowledge poses many difficulties in sharing it. Yet sharing tacit knowledge is the vital part of the knowledge management process, because it is tacit knowledge that is inimitable. Knowledge management has been made synonymous with the use of software and technology, leading to the management of explicit knowledge only, ignoring personal interaction and the forming of informal networks, which are considered the most successful means of sharing tacit knowledge. Factors responsible for effective sharing of tacit knowledge are grouped into three categories: individual, organizational, and technological. Different factors under each category have been identified.
Creating a positive organizational culture, encouraging personal interaction, and practicing a reward system are some of the strategies that can help to overcome many of the barriers to effective sharing of tacit knowledge. The methodology applied here is entirely secondary: an extensive review of the relevant literature has been undertaken for the purpose.
Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing
Procedia PDF Downloads 400
1444 Causes of Pokir in the Budgeting Process: Case Study in the Province of Jakarta, Indonesia
Authors: Tri Nopiyanto, Rahardhyani Dwiannisa, Arief Ismaryanto
Abstract:
A main issue for any region seeking development is whether the executive, legislative, and judicial branches of government are able to work together. Under certain conditions, however, these branches become sources of conflict, especially the executive and legislative branches. One example is the conflict between the local government and the legislative board (DPRD) in the Province of Jakarta in 2015. The cause of this conflict was the occurrence of pokir (pokok pikiran, or budgeting ideas). Pokir is driven by a budgeting plan arranged by the DPRD that is supposed to be sourced from the aspirations of the people and delivered 5 months before the legalization of the Local Government Budget (APBD); the current condition in Jakarta, however, is that pokir is a project of the DPRD members themselves, delivered just 3 days before legalization in order to facilitate the interests of members of the legislature. This paper discusses how pokir happens and what factors cause it, using the political budgeting theory of Andy Norton and Diane Elson to analyze the issue. The method used in this paper is qualitative, involving in-depth interviews, an experimental questionnaire, and literature studies. The results of this research are that pokir occurs because of the distribution of power among DPRD members, between parties, and between the executive and legislative boards. Besides that, pokir also occurs because of the lack of popular participation in the budgeting process and its monitoring. This paper also found that pokir happens because the budgeting system is not able to provide a clean budgeting process, enabling the creation of slots for adding pokir to the budget. Pokir also affects the development of Jakarta, which goes through stagnation.
This research recommends the implementation of e-budgeting to prevent the occurrence of pokir in the Province of Jakarta.
Keywords: legislative and executive board, Jakarta, political budgeting, Pokir
Procedia PDF Downloads 273
1443 Return on Investment of a VFD Drive for Centrifugal Pump
Authors: Benhaddadi M., Déry D.
Abstract:
Electric motors are the single biggest consumer of electricity, and their consumption is projected to more than double by 2050. Meanwhile, existing technologies offer the potential to reduce motor energy demand by up to 30%, yet the know-how to realise energy savings is not extensively applied. That is why the authors first conducted a detailed analysis of the regulation of the electric motor market in North America. To illustrate the colossal energy savings potential permitted by the VFD, the authors built an experimental setup, based on a centrifugal pump, simultaneously equipped with regulating throttle valves and a variable frequency drive (VFD). The experimental results obtained for a 1.5 HP motor pump are extended to other motor powers, as centrifugal pumps of different power may have similar operational characteristics when located in a similar kind of process, permitting simulations for 5 HP and 100 HP motors. According to the results obtained, VFDs tend to be most cost-effective when fitted to larger motor pumps, to motors with a higher duty cycle, and to motors spending more relative time operating at lower than full load. The energy saving permitted by VFD use is huge, and the payback period for the drive investment is short. Nonetheless, it is important to highlight that there is no general rule of thumb for the impact of the relative time spent operating at lower than full load. Indeed, in terms of energy-saving differences, 50% flow regulation is tremendously better than 75% regulation, but only slightly better than 25% regulation. Two main distinct reasons can explain this somewhat unanticipated result: the characteristics of the process, and the drop in efficiency when the motor operates at low speed.
Keywords: motor, drive, energy efficiency, centrifugal pump
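The cube law behind these savings can be illustrated with a hypothetical back-of-the-envelope calculation. The sketch below applies the ideal pump affinity laws (power proportional to the cube of flow); as the abstract notes, real motor efficiency drops at low speed, so actual savings are smaller than this idealization suggests:

```python
def vfd_input_power_kw(p_full_kw, flow_fraction, drive_eff=1.0):
    """Idealized affinity-law pump power at reduced flow: P ~ Q^3."""
    return p_full_kw * flow_fraction ** 3 / drive_eff

def annual_savings_kwh(p_full_kw, flow_fraction, hours_per_year):
    """Energy saved versus full-speed operation, assuming (for this
    sketch only) that the throttled baseline draws full-speed power."""
    return (p_full_kw - vfd_input_power_kw(p_full_kw, flow_fraction)) * hours_per_year
```

For example, a 10 kW pump slowed to 50% flow ideally draws only 1.25 kW, which is the origin of the dramatic savings at partial load reported in the abstract.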
Procedia PDF Downloads 74
1442 Strategies for Public Space Utilization
Authors: Ben Levenger
Abstract:
Social life revolves around a central meeting place or gathering space. It is where the community integrates, earns social skills, and ultimately becomes part of the community. Following this premise, public spaces are among the most important spaces that downtowns offer, providing locations for people to be witnessed, heard, and, most importantly, seamlessly integrated into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper will dive deep into analyzing, designing, and implementing public space design for small plazas or gathering spaces. These spaces often require a detailed level of study, followed by a broad stroke of design implementation, allowing for adaptability. This paper will highlight how to assess needs, define needed types of spaces, outline a program for spaces, detail elements of design to meet the needs, assess your new space, and plan for change. This study will provide participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism for the sake of urbanism is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to further solidify the purpose of these efforts and justify the infrastructure or construction costs. To ground this work quantitatively, we will take a deep dive into case studies highlighting economic impacts.
These case studies will highlight the financial impact on an area, measuring the following metrics: rental rates (per square meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased housing density. The economic impact results will be grouped by community size in three tiers: under 10,000 in population, 10,001 to 75,000 in population, and over 75,000 in population. Through this classification breakdown, participants can gauge the impact in communities similar to those in which they work or for which they are responsible. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, will be discussed. Metrics that document the economic impact of each enhancement will be presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive; they are welcome to download the data and use it for their own purposes.
Keywords: downtown, economic development, planning, strategic
Procedia PDF Downloads 86
1441 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites
Authors: B. Yaman, G. Acikbas, N. Calis Acikbas
Abstract:
Epoxy-based materials have advantages in tribological applications due to their unique properties, such as light weight, self-lubrication capacity, and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reduced production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with different particle sizes, below 90 µm and in the range 150-250 µm, were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. Microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was determined by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. Hardness values were measured using Shore-D hardness, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed, and no agglomerations were encountered in the epoxy resin. Mechanical test results showed that the hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates, and wear mechanisms by ball-on-disk contact in dry rotational sliding at room temperature against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23–25°C) with a humidity of 40 ± 5% under dry-sliding conditions. The contact radius was set to 5 mm at a linear speed of 30 cm/s for the geometry used in this study.
In all the experiments, a constant test load of 3 N was applied at a frequency of 8 Hz over a 400 m wear distance. The friction coefficient of the samples was recorded online from the variation in the tangential force. The steady-state CoFs ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles by a stylus profilometer. The wear volumes were calculated by integrating these 2D surface areas over the diameter. Specific wear rates were computed by dividing the wear volume by the applied load and sliding distance. According to the experimental results, porcelain waste can be suggested as a potential filler for epoxy resin composites, as it allows improved mechanical and tribological properties while also reducing production cost.
Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties
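The specific wear rate computation described above can be sketched as follows (hypothetical numbers; the cross-section area is what the 2-D stylus profile provides, and the volume is obtained by sweeping it around the circular track):

```python
import math

def specific_wear_rate(track_area_mm2, track_radius_mm, load_n, distance_m):
    """k = V / (F * s) in mm^3/(N*m): the measured 2-D cross-section of
    the wear track is swept around the circular sliding track of radius
    `track_radius_mm` to obtain the wear volume V."""
    volume_mm3 = track_area_mm2 * 2.0 * math.pi * track_radius_mm
    return volume_mm3 / (load_n * distance_m)
```

With the test conditions above (3 N load, 400 m distance, 5 mm track radius), a measured profile area plugs straight into this expression.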
Procedia PDF Downloads 196
1440 Growth of Metal Oxide (Tio2/Ag) Thin Films Sputtered by Hipims Effective in Bacterial Inactivation: Plasma Chemistry and Energetic
Authors: O. Baghriche, A. Zertal, C. Pulgarin, J. Kiwi, R. Sanjines
Abstract:
High-Power Impulse Magnetron Sputtering (HIPIMS) is a technology that belongs to the field of ionized PVD of thin films. This study presents the first complete report on ultrathin TiO2/Ag nano-particulate films sputtered by highly ionized pulsed plasma magnetron sputtering (HIPIMS) leading to fast bacterial loss of viability. The Ag and TiO2/Ag sputtered films induced complete Escherichia coli inactivation in the dark, which was not observed in the case of TiO2. When Ag was present, the bacterial inactivation was accelerated under low-intensity simulated solar light, which has implications for a practical technology. The design, preparation, testing, and surface characterization of these innovative films are described in this study. The HIPIMS-sputtered composite films present appreciable savings in metals compared with films obtained by conventional sputtering methods. HIPIMS sputtering induces a strong interaction with the rugous polyester 3-D structure due to the higher fraction of Ag ions (M+) attained in the magnetron chamber. The immiscibility of Ag and TiO2 in the TiO2/Ag films is shown by High Angular Dark Field (HAADF) microscopy. The ionization degree of the film-forming species is significantly increased, and film growth is assisted by an intense ion flux. Reports have revealed significant enhancement of film properties when the HIPIMS technology is used. However, a decrease in the deposition rate, as compared with the conventional pulsed DC magnetron sputtering (DCMSP) process, is commonly observed during HIPIMS.
Keywords: E. coli, HIPIMS, bacterial inactivation, sputtering
Procedia PDF Downloads 301
1439 Synergy Surface Modification for High Performance Li-Rich Cathode
Authors: Aipeng Zhu, Yun Zhang
Abstract:
Growing environmental problems, together with the exhaustion of energy resources, place urgent demands on developing high-energy-density batteries. Considering factors including capacity, resources, and the environment, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be among the most promising cathode materials for next-generation Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial application is hindered by crucial drawbacks such as poor rate performance, limited cycle life, and continuous fading of the discharge potential. Decades of extensive study have brought significant achievements in improving their cyclability and rate performance, but they still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation. In this process, metal ions can dissolve in the electrolyte, and the surface phase change can hinder the intercalation/deintercalation of Li ions, resulting in low capacity retention and low working voltage. To optimize the LLO cathode material, surface coating is an efficient method; considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, due to the low initial Coulombic efficiency (ICE), the pristine LLOs were pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method. The as-prepared precursor was then thoroughly mixed with Li₂CO₃ and calcined in air at 500 ℃ for 5 h and 900 ℃ for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then stirred in 0.1 ml/g KMnO₄ solution for 3 h.
The product was filtered, washed with water, and dried in an oven. The LLOs obtained were dispersed in Al(NO₃)₃ solution, and the mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLOs. After lyophilization, the material was calcined at 500 ℃ for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting a mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone, with a mass ratio of 80:15:5, onto aluminum foil. Electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) than pristine LNMO (76% and 80%, respectively). The modified material suggests that KMnO₄ pretreatment and Al₂O₃ coating can increase the ICE and cycling stability.
Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃
Procedia PDF Downloads 133
1438 The Effect of Hydroxyl Ethyl Cellulose (HEC) and Hydrophobically-Modified Alkali Soluble Emulsions (HASE) on the Properties and Quality of Water Based Paints
Authors: Haleden Chiririwa, Sandile S. Gwebu
Abstract:
The coatings industry is a million-dollar business that is easy and inexpensive to set up, yet it is growing very slowly in developing countries; this study developed a paint formulation that gives better quality and good application properties. The effect of rheology modifiers, i.e. the non-ionic polymer hydrophobically-modified ethoxylated urethanes (HEUR), the anionic polymer hydrophobically-modified alkali soluble emulsions (HASE), and hydroxyl ethyl cellulose (HEC), on the quality and properties of water-based paints has been investigated. HEC provides the in-can viscosity and increases open working time, HASE improves application properties like spatter resistance and brush loading, and HEUR provides excellent scrub resistance. Four paint recipes were prepared using different thickeners: HEC, HASE (Carbopol), and cellulose nitrate; the fourth formulation was thickened with a combination of HASE and HEC, aimed at improving quality while at the same time reducing cost. The four samples were subjected to quality tests such as viscosity, sag resistance, volatile matter, tinter effect, drying times, hiding power, scrub resistance, and stability on storage. Environmental factors were incorporated in an attempt to formulate an economical and green product. Hydroxyl ethyl cellulose and cellulose nitrate gave high quality and good paint properties. HEC and cellulose nitrate showed stability on storage, whereas the Carbopol thickener was very unstable.
Keywords: properties, thickeners, rheology modifiers, water based paints
Procedia PDF Downloads 268
1437 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification
Authors: Chaouki Ghenai
Abstract:
A computational fluid dynamics analysis of the burning of syngas fuels, derived from a biomass and plastic solid waste mixture through a gasification process, is presented in this paper. The syngas fuel is burned in a gas turbine can combustor. A gas turbine can combustor with swirl is designed to burn the fuel efficiently and reduce the emissions. The main objective is to test the impact of alternative syngas fuel compositions and lower heating value on the combustion performance and emissions. The syngas fuel is produced by blending Palm Kernel Shell (PKS) with polyethylene (PE) waste via catalytic steam gasification (fluidized bed reactor). A high-hydrogen-content syngas fuel was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H2, 8.53% CO, 4.39% CO2, and 10.90% CH4. The lower heating value of the syngas fuel is LHV = 15.98 MJ/m3. Three fuels were tested in this study: natural gas (100% CH4), the syngas fuel, and pure hydrogen (100% H2). The power from the combustor was kept constant for all the fuels tested. The effects of the syngas fuel composition and lower heating value on the flame shape, gas temperature, and mass of carbon dioxide (CO2) and nitrogen oxides (NOX) per unit of energy generation are presented in this paper. The results show an increase in the peak flame temperature and NO mass fractions for the syngas and hydrogen fuels compared with natural gas combustion. Lower average CO2 emissions at the exit of the combustor are obtained for the syngas compared with the natural gas fuel.
Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy
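The mixture LHV follows from the volumetric composition as a volume-weighted sum of the component heating values. The sketch below uses approximate, handbook-order component LHVs (in MJ/m³) that are assumed for illustration, not taken from the paper:

```python
# Approximate volumetric lower heating values, MJ/m^3 (assumed
# illustrative values, not from the paper).
LHV_COMPONENT = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "CO2": 0.0}

def mixture_lhv(vol_fractions):
    """Volume-weighted LHV of a gas blend, MJ/m^3."""
    return sum(frac * LHV_COMPONENT[gas]
               for gas, frac in vol_fractions.items())

# The reported PKS + 30% PE syngas composition:
syngas = {"H2": 0.762, "CO": 0.0853, "CO2": 0.0439, "CH4": 0.1090}
```

With these assumed component values the blend comes to roughly 13.2 MJ/m³, the same order as the reported 15.98 MJ/m³; the gap reflects the approximate component LHVs and reference conditions used here.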
Procedia PDF Downloads 593
1436 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy
Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan
Abstract:
For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline ‘First Come First Served with an m policy’, i.e. the FCFS-m policy. Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j=1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be installed if and only if the queue length exceeds the threshold value m. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 while the system size is strictly less than T=(m+2) (with μ2=0), and to μ1+μ2 (with μ2>0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues.
To conclude, this investigation has a viable application: the results for M/M1+M2/1 queues have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.
Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue
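The threshold behaviour described above (server-2 serving only while the queue exceeds m) can be illustrated with a small discrete-event simulation. This is a sketch under one reading of the policy, not the paper's analytical model: it assumes server-2 picks up a customer only at dispatch instants when more than m customers are waiting, and it estimates the mean sojourn time empirically rather than from the steady-state distributions derived in the paper.

```python
import heapq
import random

def simulate_fcfs_m(lam, mu1, mu2, m, n_customers=20_000, seed=7):
    """Estimate the mean sojourn time under one reading of the FCFS-m policy:
    a single FCFS line, server-1 used whenever free, server-2 used only
    when more than m customers are waiting at a dispatch instant."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), 0, "arrival", None)]  # (time, seq, kind, data)
    seq = 1
    queue = []                    # arrival instants of waiting customers (FCFS)
    free = {1: True, 2: True}
    arrivals = completions = 0
    total_sojourn = 0.0

    def dispatch(now):
        nonlocal seq
        while queue:
            if free[1]:                       # server-1 preferred when available
                server = 1
            elif free[2] and len(queue) > m:  # server-2 only past the threshold
                server = 2
            else:
                return
            arrived = queue.pop(0)
            free[server] = False
            rate = mu1 if server == 1 else mu2
            heapq.heappush(events, (now + rng.expovariate(rate), seq,
                                    "done", (server, arrived)))
            seq += 1

    while completions < n_customers:
        now, _, kind, data = heapq.heappop(events)
        if kind == "arrival":
            arrivals += 1
            queue.append(now)
            if arrivals < n_customers:
                heapq.heappush(events, (now + rng.expovariate(lam), seq,
                                        "arrival", None))
                seq += 1
            dispatch(now)
        else:
            server, arrived = data
            free[server] = True
            total_sojourn += now - arrived
            completions += 1
            dispatch(now)
    return total_sojourn / completions
```

For example, with λ = 1.5, μ1 = μ2 = 1.0 and m = 2 (stable since λ < μ1 + μ2), sweeping μ2 in such a simulation is one crude way to explore the cost trade-off that the paper resolves analytically when tracing the optimal μ*2.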
Procedia PDF Downloads 363
1435 Automated Parking System
Authors: N. Arunraj, C. P. V. Paul, D. M. D. Jayawardena, W. N. D. Fernando
Abstract:
Traffic congestion with increasing numbers of vehicles is already a serious issue for many countries, and the absence of sufficient parking spaces adds to it. Motorists are forced to wait in long queues to park their vehicles, adding to the inconvenience of waiting for a slot allocation, which is done manually along with the parking payment calculation. In Sri Lanka, parking systems currently use barcode technology to identify vehicles at both the entrance and exit points, while customer management is handled manually. A parking space is generally subdivided permanently according to vehicle type. Here, again, is an issue: parking spaces are not utilized to the maximum, and the current arrangement leaves room for unutilized slots. Accordingly, there is a need to manage the parking space dynamically: as a vehicle enters the parking area, an available space has to be assigned to it according to its vehicle type. The proposed system, Automated Parking System (APS), provides an automated solution using RFID technology to identify vehicles, while an algorithm manages the space allocation dynamically, so there is no permanent slot allocation per vehicle type. A desktop application manages the customers, a web application manages external users and their reservations, and an Android application shows the nearest parking area from the user's current location. APS is built using Java and PHP, and it uses LED panels to guide the user inside the parking area to find the allocated parking slot accurately. The system ensures efficient performance, saving precious time for the customer; compared with current parking systems, APS interacts with users and increases customer satisfaction.
Keywords: RFID, Android, web based system, barcode, algorithm, LED panels
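The abstract's core idea, dynamic allocation with no permanent slot-to-type binding, can be sketched as a smallest-fit assignment over free slots. This is an illustrative sketch only: the size classes, slot capacities, and smallest-fit rule are assumptions, not details taken from the APS paper, and the RFID read is represented simply by a vehicle tag string.

```python
class ParkingAllocator:
    """Sketch of APS-style dynamic allocation: slots are not permanently
    tied to a vehicle type; any free slot large enough for the arriving
    vehicle may be assigned, smallest suitable slot first."""
    SIZE = {"motorcycle": 1, "car": 2, "van": 3}   # illustrative size classes

    def __init__(self, slots):
        # slots: {slot_id: capacity}, e.g. {"A1": 1, "B1": 2, "C1": 3}
        self.capacity = dict(slots)
        self.free = set(slots)
        self.assigned = {}                          # vehicle tag -> slot_id

    def enter(self, tag, vehicle_type):
        """Called when an RFID tag is read at the gate; returns a slot or None."""
        need = self.SIZE[vehicle_type]
        candidates = [s for s in self.free if self.capacity[s] >= need]
        if not candidates:
            return None                             # lot full for this type
        slot = min(candidates, key=lambda s: self.capacity[s])  # smallest fit
        self.free.discard(slot)
        self.assigned[tag] = slot
        return slot

    def exit(self, tag):
        """Free the slot when the tag is read at the exit point."""
        slot = self.assigned.pop(tag)
        self.free.add(slot)
        return slot
```

The smallest-fit choice is one way to realise the abstract's goal of maximising utilisation: a motorcycle never occupies a van-sized slot while a smaller free slot exists.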
Procedia PDF Downloads 600
1434 Comparative Parametric and Emission Characteristics of Single Cylinder Spark Ignition Engine Using Gasoline, Ethanol, and H₂O as Micro Emulsion Fuels
Authors: Ufaith Qadri, M. Marouf Wani
Abstract:
In this paper, the performance and emission characteristics of a single cylinder Spark Ignition engine have been investigated. The research is based on the application of micro emulsions as fuel in a gasoline engine. We have analysed many micro emulsion compositions in various proportions to predict the performance of the Spark Ignition engine. This new technology of fuel modification is emerging rapidly: much research is being done on micro emulsion fuels in Compression Ignition engines, but micro emulsion fuel used in a Gasoline engine is very rare, and the use of micro emulsions as fuel in a Spark Ignition engine is virtually unexplored. Our main goal, therefore, is to examine the performance and emission characteristics of micro emulsions as fuel in Spark Ignition engines and to find which composition is most efficient. In this research, various micro emulsion fuels with differing compositions were used for all three blends, and their performance and emission characteristics were predicted in the AVL Boost software. Conventional gasoline fuel at 90%, 85%, and 80% was blended with the co-surfactant ethanol in different proportions, and water was used as an additive to obtain a crystal clear, transparent micro emulsion fuel that is thermodynamically stable. Comparing engine performance, the power is similar for the micro emulsion fuels and conventional gasoline, while torque and BMEP increase for all the micro emulsion fuels. The micro emulsion fuels show higher thermal efficiency and lower specific fuel consumption for all compositions compared to gasoline. Carbon monoxide and hydrocarbon emissions were also measured; the results show that emissions decrease for all micro emulsion compositions, proving them the most efficient fuels in terms of both performance and emission characteristics.
Keywords: AVL Boost, emissions, microemulsions, performance, Spark Ignition (SI) engine
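The comparison of thermal efficiency and specific fuel consumption across blends rests on the standard relation between brake specific fuel consumption (BSFC), fuel heating value, and brake thermal efficiency. The sketch below shows that relation for an illustrative gasoline/ethanol/water blend; the component LHVs are standard literature values and the blend ratio and BSFC figure are assumptions for illustration, not the compositions or results simulated in the paper.

```python
# Hedged sketch: blend LHV and brake thermal efficiency (BTE).
# Component LHVs in MJ/kg are approximate literature values, not from the paper.
LHV = {"gasoline": 44.0, "ethanol": 26.8, "water": 0.0}

def blend_lhv(mass_fractions):
    """Mass-fraction-weighted lower heating value of a fuel blend, MJ/kg."""
    return sum(frac * LHV[c] for c, frac in mass_fractions.items())

def brake_thermal_efficiency(bsfc_g_per_kwh, lhv_mj_per_kg):
    """BTE = brake work out / fuel energy in; 1 kWh of brake work = 3.6 MJ,
    fuel energy per kWh = BSFC [g/kWh] * LHV [MJ/kg] / 1000."""
    return 3600.0 / (bsfc_g_per_kwh * lhv_mj_per_kg)

# Illustrative blend: 85% gasoline, 10% ethanol co-surfactant, 5% water additive
blend = {"gasoline": 0.85, "ethanol": 0.10, "water": 0.05}
lhv = blend_lhv(blend)
print(f"blend LHV = {lhv:.2f} MJ/kg")
print(f"BTE at an assumed 280 g/kWh: {brake_thermal_efficiency(280, lhv):.1%}")
```

This also shows why BSFC alone can mislead when comparing fuels of different heating value: the efficiency comparison must be made on an energy basis, as the relation above makes explicit.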
Procedia PDF Downloads 264