Search results for: intelligent distribution grids
1527 Water Governance Perspectives on the Urmia Lake Restoration Process: Challenges and Achievements
Authors: Jalil Salimi, Mandana Asadi, Naser Fathi
Abstract:
Urmia Lake (UL) has undergone a significant decline in water levels, resulting in severe environmental, socioeconomic, and health-related challenges. This paper examines the restoration process of UL from a water governance perspective. By applying a water governance model, the study evaluates the process based on six selected principles: stakeholder engagement, transparency and accountability, effectiveness, equitable water use, adaptation capacity, and water usage efficiency. The dominance of structural and physicalist approaches to water governance has led to a weak understanding of social and environmental issues, contributing to social crises. Urgent efforts are required to address the water crisis and reform water governance in the country, making water-related issues a top national priority. The UL restoration process has achieved significant milestones, including stakeholder consensus, scientific and participatory planning, environmental vision, intergenerational justice considerations, improved institutional environment for NGOs, investments in water infrastructure, transparency promotion, environmental effectiveness, and local issue resolutions. However, challenges remain, such as power distribution imbalances, bureaucratic administration, weak conflict resolution mechanisms, financial constraints, accountability issues, limited attention to social concerns, overreliance on structural solutions, legislative shortcomings, program inflexibility, and uncertainty management weaknesses. Addressing these weaknesses and challenges is crucial for the successful restoration and sustainable governance of UL.
Keywords: evaluation, restoration process, Urmia Lake, water governance, water resource management
Procedia PDF Downloads 67
1526 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community
Authors: Mohamed Ghorab
Abstract:
Locally generated and distributed systems for thermal and electrical energy are envisaged in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between the hubs. The energy generated from each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization among six hubs representing a Canadian community are investigated in this study. Three different scenarios of technology systems are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first technology system and as the reference case study. The conventional system includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second technology system includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and Organic Rankine Cycle (ORC) systems, where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, whereas scenario 2 (CHP system) provides 9.3%.
Keywords: distributed energy resources, network energy system, optimization, microgeneration system
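The cost trade-off between the scenarios can be illustrated with a small dispatch optimization. The sketch below is not the authors' GAMS model; it is a minimal pure-Python grid search over the CHP fuel input for a single hub, with illustrative efficiencies, prices, and demand loads (all assumed, not taken from the paper).

```python
# Toy single-hub dispatch: choose the CHP fuel input, then top up heat with a
# boiler and power from the grid. All parameter values are assumptions.
GAS_PRICE = 0.03   # $/kWh fuel (assumed)
GRID_PRICE = 0.12  # $/kWh electricity (assumed)
ETA_BOILER = 0.90  # boiler thermal efficiency (assumed)
ETA_CHP_TH = 0.45  # CHP heat output per kWh fuel (assumed)
ETA_CHP_EL = 0.35  # CHP power output per kWh fuel (assumed)

def dispatch_cost(chp_fuel, heat_demand, elec_demand):
    """Total energy cost when the CHP burns `chp_fuel` kWh of gas."""
    heat_gap = max(heat_demand - ETA_CHP_TH * chp_fuel, 0.0)
    elec_gap = max(elec_demand - ETA_CHP_EL * chp_fuel, 0.0)
    boiler_fuel = heat_gap / ETA_BOILER
    return GAS_PRICE * (chp_fuel + boiler_fuel) + GRID_PRICE * elec_gap

def optimise(heat_demand, elec_demand, step=1.0, fuel_max=5000.0):
    """Coarse grid search standing in for the GAMS optimisation."""
    return min((dispatch_cost(f * step, heat_demand, elec_demand), f * step)
               for f in range(int(fuel_max / step) + 1))

conventional = dispatch_cost(0.0, 1000.0, 600.0)   # scenario-1 analogue
optimal_cost, chp_fuel = optimise(1000.0, 600.0)   # scenario-2 analogue
print(conventional, optimal_cost, chp_fuel)
```

With these assumed prices the CHP scenario undercuts the boiler-plus-grid baseline, mirroring the direction (though not the exact percentages) of the savings reported in the abstract.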
Procedia PDF Downloads 190
1525 Association of Calcium Intake Adequacy with Wealth Indices among Selected Female Adults Living in Depressed and Non-Depressed Area in Metro Manila, Philippines
Authors: Maria Viktoria Melgo
Abstract:
This study aimed to determine the possible association between calcium intake and wealth indices of selected female adults. Specifically, it aimed to: a) determine the calcium intake adequacy of the respondents; b) determine the relationship, if any, between calcium intake adequacy, area, and wealth indices. The study used a survey design and employed convenience sampling in selecting participants. Two hundred females aged 20-64 years old from depressed and non-depressed areas were covered in the study. Data collected were calcium intake, taken from two 24-hour food recalls and a Food Frequency Questionnaire (FFQ), and wealth indices, using housing characteristics, household assets, and access to utilities and infrastructure. Descriptive statistics and the chi-square test were used to determine the frequency distribution and the association between the given variables, respectively, using Statistical Package for Social Sciences (SPSS) and OpenEpi software. The results showed that 86% of respondents in the depressed area had an inadequate calcium intake, while 78% of respondents in the non-depressed area had an adequate calcium intake. Most wealth indices showed no significant relationship with calcium intake adequacy and area, but appliance ownership and the main material of the house showed a significant relationship to calcium intake adequacy by area. The study recommends that the Local Government Unit (LGU) should provide seminars or nutrition education that will further enhance the knowledge of the people in the community. The study also recommends conducting a similar study with a larger sample size and a different location, whether urban or rural, and including anthropometric measurements of the respondents.
Keywords: association, calcium intake adequacy, Metro Manila, Philippines, wealth indices
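The area-by-adequacy association described above is a standard chi-square test of independence on a 2x2 table. The sketch below recomputes the statistic from first principles on illustrative counts back-derived from the reported percentages, assuming 100 respondents per area (an even split the abstract does not state explicitly).

```python
# 2x2 chi-square test of independence, computed by hand.
# Counts are illustrative: 86% inadequate of an assumed 100 in the depressed
# area, 78% adequate of an assumed 100 in the non-depressed area (n = 200).
observed = [
    [14, 86],  # depressed area: [adequate, inadequate]
    [78, 22],  # non-depressed area
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n  # expected under independence
        chi2 += (o - e) ** 2 / e

# One degree of freedom for a 2x2 table; 3.841 is the 5% critical value.
significant = chi2 > 3.841
print(round(chi2, 3), significant)
```

In practice the same result comes from `scipy.stats.chi2_contingency`; the by-hand version simply shows what SPSS/OpenEpi compute.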
Procedia PDF Downloads 196
1524 Evaluation of Forming Properties on AA 5052 Aluminium Alloy by Incremental Forming
Authors: A. Anbu Raj, V. Mugendiren
Abstract:
Sheet metal forming is a vital manufacturing process used in the automobile, aerospace, and agricultural industries, among others. Incremental forming is a promising process providing a short and inexpensive way of forming complex three-dimensional parts without using a die. The aim of this research is to study the forming behaviour of AA 5052 aluminium alloy using incremental forming and also to study the forming limit diagram (FLD) of cone shapes in AA 5052 aluminium alloy at room temperature and various annealing temperatures. Initially, the surface roughness and wall thickness obtained through incremental forming of AA 5052 aluminium alloy sheet at room temperature are optimized by controlling the effects of the forming parameters. The central composite design (CCD) was utilized to plan the experiments. The step depth, feed rate, and spindle speed were considered as input parameters in this study. The surface roughness and wall thickness were used as output responses. The process performances, such as average thickness and surface roughness, were evaluated. The optimized results are taken for minimum surface roughness and maximum wall thickness. The optimal results are determined based on response surface methodology and the analysis of variance. The FLD of AA 5052 aluminium alloy is constructed at room temperature and various annealing temperatures using the optimized process parameters from the response surface methodology. The cone has higher formability than the square pyramid and a higher wall thickness distribution. Finally, the FLDs of the cone and square pyramid shapes at room temperature and various annealing temperatures are compared experimentally and simulated with Abaqus software.
Keywords: incremental forming, response surface methodology, optimization, wall thickness, surface roughness
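Response surface methodology fits a second-order polynomial to the measured responses and then optimizes over the fitted surface. A minimal NumPy sketch of that fitting step, using a synthetic two-factor design and made-up coefficients (not the paper's data or factor levels):

```python
import numpy as np

# Second-order response surface:
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# x1, x2 are coded factor levels (e.g. step depth and feed rate);
# the coefficients below are invented for the demonstration.
true_b = np.array([2.0, 0.5, -0.3, 0.1, 0.2, -0.05])

levels = [-1.0, 0.0, 1.0]
X = np.array([[1, x1, x2, x1**2, x2**2, x1 * x2]
              for x1 in levels for x2 in levels])
y = X @ true_b  # noiseless synthetic "surface roughness" measurements

# Least-squares fit of the quadratic model (what RSM software does internally)
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b_hat, 6))
```

With noiseless data on a full 3^2 factorial, the least-squares fit recovers the generating coefficients exactly; with real measurements, ANOVA on the fitted terms identifies which factors matter.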
Procedia PDF Downloads 338
1523 Determination and Distribution of Formation Thickness Using Seismic and Well Data in Baga/Lake Sub-basin, Chad Basin Nigeria
Authors: Gabriel Efomeh Omolaiye, Olatunji Seminu, Jimoh Ajadi, Yusuf Ayoola Jimoh
Abstract:
The Nigerian part of the Chad Basin has to date remained one of the least studied basins, with few published scholarly works, compared to other basins such as the Niger Delta, Dahomey, etc. This work was undertaken through the integration of 3D seismic interpretation and well data analysis of eight wells fairly distributed in block A, Baga/Lake sub-basin in the Borno basin, with the aim of determining the thicknesses of the Chad, Kerri-Kerri, Fika, and Gongila Formations in the sub-basin. The Da-1 well (type well) used in this study was subdivided into stratigraphic units based on the regional stratigraphic subdivision of the Chad basin and was later correlated with the other wells using similarity of observed log responses. The combined density and sonic logs were used to generate synthetic seismograms for seismic-to-well ties. Five horizons were mapped, representing the tops of the formations on the 3D seismic data covering the block; an average velocity function with a maximum error/residual of 0.48% was adopted in the time-to-depth conversion of all the generated maps. There is a general thickening of sediments from the west to the east, and the estimated thicknesses of the various formations in the Baga/Lake sub-basin are: Chad Formation (400-750 m), Kerri-Kerri Formation (300-1200 m), Fika Formation (300-2200 m), and Gongila Formation (100-1300 m). The thickness of the Bima Formation could not be established because the deepest well (Da-1) terminates within the formation. This is a modification to previous and widely referenced studies of over four decades that based the estimation of formation thickness within the study area on outcrops observed at different locations and on the use of few well data.
Keywords: Baga/Lake sub-basin, Chad basin, formation thickness, seismic, velocity
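The average velocity function used for time-to-depth conversion can be sketched as follows: fit a linear average-velocity trend v(t) = v0 + k*t from well ties, then convert two-way times to depth via z = v(t)*t/2. The tie points below are synthetic, not from the Baga/Lake wells.

```python
# Fit a linear average-velocity function from (two-way time, depth) well ties,
# then convert horizon times to depth. Tie points are synthetic examples.
ties = [(0.5, 487.5), (1.0, 1050.0), (1.5, 1687.5)]  # (TWT in s, depth in m)

# Average velocity at each tie: v = 2z / t (t is two-way time)
ts = [t for t, _ in ties]
vs = [2.0 * z / t for t, z in ties]

# Closed-form least squares for v(t) = v0 + k*t
n = len(ties)
t_mean = sum(ts) / n
v_mean = sum(vs) / n
k = sum((t - t_mean) * (v - v_mean) for t, v in zip(ts, vs)) \
    / sum((t - t_mean) ** 2 for t in ts)
v0 = v_mean - k * t_mean

def depth(twt):
    """Depth from two-way time using the fitted average-velocity function."""
    return (v0 + k * twt) * twt / 2.0

print(round(v0, 3), round(k, 3), round(depth(1.2), 3))
```

The residual between predicted and tie depths, expressed as a percentage, is the error figure quoted in the abstract (0.48% maximum in the authors' case).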
Procedia PDF Downloads 186
1522 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding the driving of autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and in time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of Fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because Fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network are also short-lived, detaching and joining frequently. When an end user or Fog Node wants to access, read, or write data stored in another Fog Node, a new protocol becomes necessary to access/manage the data stored in the fog devices, as a conventional static way of managing the data does not work in Fog networks. The proposed solution discusses a protocol that acts by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the Fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the Fog node using Reinforcement Learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
Procedia PDF Downloads 59
1521 Plasmonic Nanoshells Based Metabolite Detection for in-vitro Metabolic Diagnostics and Therapeutic Evaluation
Authors: Deepanjali Gurav, Kun Qian
Abstract:
In-vitro metabolic diagnosis relies on designed materials-based analytical platforms for the detection of selected metabolites in biological samples, which has a key role in disease detection and therapeutic evaluation in clinics. However, the basic challenge lies in developing a simple approach for metabolic analysis in bio-samples with high sample complexity and low molecular abundance. In this work, we report a designer plasmonic nanoshell based platform for direct detection of small metabolites in clinical samples for in-vitro metabolic diagnostics. We first synthesized a series of plasmonic core-shell particles with tunable nanoshell structures. The optimized plasmonic nanoshells, as new matrices, allowed fast, multiplex, sensitive, and selective LDI MS (laser desorption/ionization mass spectrometry) detection of small metabolites in 0.5 μL of bio-fluids without enrichment or purification. Furthermore, coupled with isotopic quantification of selected metabolites, we demonstrated the use of these plasmonic nanoshells for disease detection and therapeutic evaluation in clinics. For disease detection, we identified patients with postoperative brain infection through glucose quantitation and daily monitoring by cerebrospinal fluid (CSF) analysis. For therapeutic evaluation, we investigated drug distribution in blood and CSF systems and validated the function and permeability of blood-brain/CSF barriers during therapeutic treatment of patients with cerebral edema for a pharmacokinetic study. Our work sheds light on the design of materials for high-performance metabolic analysis and precision diagnostics in real cases.
Keywords: plasmonic nanoparticles, metabolites, fingerprinting, mass spectrometry, in-vitro diagnostics
Procedia PDF Downloads 138
1520 Experimental and Numerical Performance Analysis for Steam Jet Ejectors
Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed
Abstract:
Steam ejectors are the heart of most desalination systems that employ vacuum. Systems that employ low-grade thermal energy sources, like solar and geothermal energy, use the ejector to drive the system instead of high-grade electric energy. The jet ejector is used to create vacuum by employing the flow of steam or air and using the severe pressure drop at the outlet of the main nozzle. The present work involves developing a one-dimensional mathematical model for designing jet ejectors and transforming it into computer code using Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated against an existing model operating at the Abu-Qir power station. A prototype has been designed according to the one-dimensional model and attached to a special test bench to be tested before being used in the solar desalination pilot plant. The tested ejector will be responsible for the start-up evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype has shown good agreement with the results of the code. In addition, a numerical analysis has been applied to one of the designed geometries to give, on the one hand, an image of the pressure and velocity distribution inside the ejector and, on the other, to show the difference in results between the two-dimensional ideal-gas model and the real prototype. The commercial edition of ANSYS Fluent v.14 software is used to solve the two-dimensional axisymmetric case.
Keywords: solar energy, jet ejector, vacuum, evaporating effects
Procedia PDF Downloads 621
1519 Assessing the Potential of a Waste Material for Cement Replacement and the Effect of Its Fineness in Soft Soil Stabilisation
Authors: Hassnen M. Jafer, W. Atherton, F. Ruddock
Abstract:
This paper presents the results of experimental work to investigate the suitability of a waste material (WM) for soft soil stabilisation. In addition, the effect of the particle size distribution (PSD) of the waste material on its performance as a soil stabiliser was investigated. The WM used in this study is produced from the incineration processes in a domestic energy power plant, and it is available in two different grades of fineness (coarse waste material (CWM) and fine waste material (FWM)). An intermediate-plasticity silty clayey soil with medium organic matter content was used in this study. The suitability of the CWM and FWM to improve the physical and engineering properties of the selected soil was evaluated based on the results obtained from the consistency limits and compaction characteristics (optimum moisture content (OMC) and maximum dry density (MDD)), along with the unconfined compressive strength (UCS) test. Different percentages of CWM were added to the soft soil (3, 6, 9, 12, and 15%) to produce various admixtures. The UCS test was then carried out on specimens under different curing periods (zero, 7, 14, and 28 days) to find the optimum percentage of CWM. The optimum and the two adjacent percentages (either side of the optimum content) were used for FWM to evaluate the effect of the fineness of the WM on the UCS of the stabilised soil. Results indicated that both types of WM used in this study improved the physical properties of the soft soil, with the index of plasticity (IP) decreasing significantly: from 21 to 13.64 with 12% of CWM and to 13.10 with 15% of FWM. The results of the unconfined compressive strength test indicated that 12% of CWM was the optimum; this percentage increased the UCS value from 202 kPa to 500 kPa for samples cured for 28 days, approximately 2.5 times the UCS value of the untreated soil. Moreover, at this percentage the FWM provided 1.4 times the UCS value of the CWM-stabilised soil, recording just under 700 kPa after 28 days of curing.
Keywords: soft soil stabilisation, waste materials, fineness, unconfined compressive strength
Procedia PDF Downloads 271
1518 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth
Authors: Caroline Atef Shoukry Tadros
Abstract:
Massive urbanization growth threatens the sustainability of cities and the quality of city life. This raises the need for an alternative model of sustainability, so we need to plan future cities in a smarter way, with smarter mobility. Smart Mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of Smart Mobility planning applications are the following. Mobility-as-a-service: this is a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce the reliance on private cars, optimize the use of existing infrastructure, and provide more choices and convenience for travelers. MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: this is a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction. Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: this is a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust. ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: this is a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance. Waymo is a company that develops and operates autonomous vehicles for various purposes, such as ride-hailing, delivery, and trucking. These are some of the ways that Smart Mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, such as regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying Smart Mobility planning applications.
Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, AI artificial intelligence, AR augmented reality, VR virtual reality, robotics, CPS cyber physical systems, citizens design science
Procedia PDF Downloads 73
1517 Geophysical Methods of Mapping Groundwater Aquifer System: Perspectives and Inferences From Lisana Area, Western Margin of the Central Main Ethiopian Rift
Authors: Esubalew Yehualaw Melaku, Tigistu Haile Eritro
Abstract:
In this study, two basic geophysical methods are applied for mapping the groundwater aquifer system in the Lisana area along the Guder River, northeast of Hosanna town, near the western margin of the Central Main Ethiopian Rift. The main target of the study is to map the potential aquifer zone and investigate the groundwater potential for current and future development of the resource in the Gode area. The geophysical methods employed in this study are Vertical Electrical Sounding (VES) and magnetic survey techniques. Electrical sounding was used to examine and map the depth to the potential aquifer zone of the groundwater and its distribution over the area. On the other hand, the magnetic survey was used to delineate contacts between lithologic units and geological structures. The 2D magnetic modeling and the geoelectric sections are used for the identification of weak zones, which control the groundwater flow and storage system. The geophysical survey comprises twelve VES readings collected using a Schlumberger array along six profile lines and more than four hundred (400) magnetic readings at about 10 m station intervals along four profiles and at 20 m intervals along three random profiles. The study revealed that the potential aquifer in the area lies at depths ranging from 45 m to 92 m. This corresponds to the highly weathered/fractured ignimbrite and pumice layer with sandy soil, which is the main water-bearing horizon. Overall, the neighbourhoods of four VES points, VES-2, VES-3, VES-10, and VES-11, show good water-bearing zones in the study area.
Keywords: vertical electrical sounding, magnetic survey, aquifer, groundwater potential
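For context, a VES with a Schlumberger array converts each measured resistance V/I into an apparent resistivity through a geometric factor that depends on the electrode spacings. A minimal sketch of that standard conversion, with illustrative spacings (the survey's actual spacings are not given in the abstract):

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn_half, v_over_i):
    """Apparent resistivity (ohm.m) for a Schlumberger array.

    ab_half:  half current-electrode spacing AB/2 (m)
    mn_half:  half potential-electrode spacing MN/2 (m)
    v_over_i: measured resistance V/I (ohm)
    """
    # Standard Schlumberger geometric factor K = pi * (L^2 - l^2) / (2l)
    k = math.pi * (ab_half ** 2 - mn_half ** 2) / (2.0 * mn_half)
    return k * v_over_i

# Illustrative sounding reading: AB/2 = 100 m, MN/2 = 10 m, V/I = 0.1 ohm
rho_a = schlumberger_apparent_resistivity(100.0, 10.0, 0.1)
print(round(rho_a, 2))
```

Repeating this for increasing AB/2 gives the sounding curve that is then inverted into the layered geoelectric section used to pick the aquifer depth range.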
Procedia PDF Downloads 79
1516 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks
Authors: Alaa Allakany, Koji Okamura
Abstract:
Multicast is an efficient and scalable technology for data distribution that optimizes network resources. However, in the IP network, the responsibility for the management of multicast groups is distributed among network routers, which causes some limitations, such as delays in processing group events, high bandwidth consumption, and redundant tree calculation. Software-Defined Networking (SDN), represented by OpenFlow, has been presented as a solution for many problems; in SDN, the control plane and data plane are separated by shifting the control and management to a remote centralized controller, and the routers are used as forwarders only. In this paper, we propose a fast switching mechanism for handling link failures in the multicast tree, based on the Tabu Search heuristic algorithm, by modifying the functions of the OpenFlow switch so that it switches quickly to the backup subtree rather than reporting to the controller. In this work, we implement a multicasting OpenFlow controller; this centralized controller is a core part of our multicasting approach and is responsible for (1) constructing the multicast tree and (2) handling multicast group events and maintaining multicast state. Finally, we modify the OpenFlow switch functions for fast switching to backup paths. Forwarders forward the multicast packets based on multicast routing entries generated by the centralized controller. Tabu Search is used as the heuristic algorithm for constructing a near-optimum multicast tree and for keeping the tree near-optimum when members join or leave the multicast group (group events).
Keywords: multicast tree, software defined networks, tabu search, OpenFlow
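As background, a generic Tabu Search skeleton looks like the sketch below; it is not the authors' tree-construction code. The solution here is a toy bit vector (standing in for a set of tree-link selections) and the cost is the Hamming distance to a known optimum, so the search's behaviour is easy to verify.

```python
import random

def tabu_search(cost, n_bits, iters=100, tenure=3, seed=0):
    """Minimise cost(bits) by best-improvement bit flips with a tabu list."""
    rng = random.Random(seed)
    current = [0] * n_bits
    best, best_cost = current[:], cost(current)
    tabu = {}  # bit index -> last iteration at which the move is still tabu

    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            neighbour = current[:]
            neighbour[i] ^= 1
            c = cost(neighbour)
            # Aspiration criterion: accept a tabu move if it beats the best
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, rng.random(), i, neighbour))
        if not candidates:
            continue
        c, _, i, current = min(candidates)  # best admissible neighbour
        tabu[i] = it + tenure               # forbid reversing this move
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Toy objective: Hamming distance to a hidden target configuration
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
dist = lambda bits: sum(a != b for a, b in zip(bits, target))
best, best_cost = tabu_search(dist, len(target))
print(best_cost)
```

In the paper's setting the bit vector would encode candidate tree links, the cost would be the multicast tree weight subject to connectivity, and the tabu list would prevent immediately re-adding a just-removed link after a group event.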
Procedia PDF Downloads 263
1515 Impact of Unusual Dust Event on Regional Climate in India
Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad
Abstract:
A severe dust storm generated by a western disturbance over northern Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant deterioration of air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at a very high value of around 1018 μgm-3 during the dust storm hours of May 30, 2014 at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the Single Scattering Albedo (SSA) and in the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event made the aerosol optical state more absorbing. The single scattering albedo, refractive index, volume size distribution, and asymmetry parameter (ASY) values suggested that dust aerosols predominated over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in the radiative flux at the surface level caused significant cooling at the surface. The Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 Wm-2 to -82 Wm-2, along with an increase in the heating of the atmosphere from 15 Wm-2 to 92 Wm-2 and a forcing of -2 Wm-2 to 10 Wm-2 at the top of the atmosphere.
Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer
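The three forcing figures quoted above are related by a simple identity: the atmospheric forcing is the difference between the top-of-atmosphere (TOA) and surface forcings, each of which is itself the change in net flux caused by the aerosol. A minimal check using the abstract's endpoint values:

```python
def aerosol_forcing(net_flux_with, net_flux_without):
    """Direct aerosol radiative forcing: net flux with minus without aerosol."""
    return net_flux_with - net_flux_without

def atmospheric_forcing(toa_forcing, surface_forcing):
    """Energy absorbed in the atmospheric column (W/m^2)."""
    return toa_forcing - surface_forcing

# Endpoint values reported in the abstract: -82 W/m^2 at the surface and
# +10 W/m^2 at the top of the atmosphere during peak dust loading.
atm = atmospheric_forcing(10.0, -82.0)
print(atm)
```

The result, 92 W/m^2, matches the upper end of the atmospheric heating range the abstract reports, confirming the internal consistency of the quoted extremes.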
Procedia PDF Downloads 377
1514 Causes of Pokir in the Budgeting Process: Case Study in the Province of Jakarta, Indonesia
Authors: Tri Nopiyanto, Rahardhyani Dwiannisa, Arief Ismaryanto
Abstract:
One main requirement for a region to achieve development is that the branches of government, consisting of the executive, legislative, and judicial boards, are able to work together. However, under certain conditions these boards become sources of conflict, especially the executive and legislative boards. One example of such conflict is between the Local Government and the Legislative Board (DPRD) in the Province of Jakarta in 2015. The cause of this conflict is the occurrence of pokir (pokok pikiran, or budgeting ideas). Pokir is driven by a budgeting plan arranged by the DPRD that is supposed to be sourced from the aspirations of the people and delivered 5 months before the legalization of the Local Government Budget (APBD); the current condition in Jakarta, however, is that pokir is a project of the DPRD members themselves, delivered just 3 days before the legalization in order to facilitate the interests of the members of the legislative. This paper discusses how pokir happens and what factors cause it, using the political budgeting theory of Andy Norton and Diane Elson to analyze the issue. The method used in this paper is qualitative; the methods involved are in-depth interviews, questionnaires, and literature studies. The results of this research are that pokir occurs because of the distribution of power among DPRD members, between parties, and between the executive and legislative boards. Pokir also occurs because of the lack of people's participation in the budgeting process and its monitoring. In addition, this paper found that pokir happens because the budgeting system is unable to ensure a clean budgeting process, which enables the creation of certain slots for adding pokir into the budgets. Pokir also contributes to the stagnation of Jakarta's development. This research recommends the implementation of e-budgeting to prevent the occurrence of pokir in the Province of Jakarta.
Keywords: legislative and executive board, Jakarta, political budgeting, Pokir
Procedia PDF Downloads 270
1513 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) deals with both substitution synergies (the exchange of waste materials, fatal energy, and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on the intensification of use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and pricing schedule) to commit to investments in capacities and transactions. Our model investigates the decision of two actors trying to choose cooperatively a level of infrastructure capacity oversizing in order to set a plug-and-play offer to a potential entrant, whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period. The entrant's willingness-to-pay for access to the infrastructure depends upon its standalone cost and the capacity gap that it must complete in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractible from the entrant, we derive the expected complement cost function, which helps us define the investors' objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant's requirements. We then derive the general form of the solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy that a public actor would apply to foster such sharing synergies.
Keywords: capacity, cooperation, industrial symbiosis, pricing
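Under a uniform requirement distribution, the expected complement has a closed form that exhibits exactly the decreasing, convex shape described above: with the entrant's requirement X ~ Uniform(0, b) and oversizing delta, E[max(X - delta, 0)] = (b - delta)^2 / (2b). The sketch below uses a unit complement cost; the paper's cost multipliers are omitted, so it illustrates the shape of the curve rather than the full objective function.

```python
def expected_complement(delta, b):
    """Closed form E[max(X - delta, 0)] for X ~ Uniform(0, b), 0 <= delta <= b."""
    return (b - delta) ** 2 / (2.0 * b)

def expected_complement_numeric(delta, b, steps=100_000):
    """Midpoint-rule check of the same expectation."""
    dx = b / steps
    return sum(max((i + 0.5) * dx - delta, 0.0) for i in range(steps)) * dx / b

b = 10.0
vals = [expected_complement(d, b) for d in (2.0, 4.0, 6.0)]
print(vals)                              # decreasing in the oversizing level
print(vals[0] + vals[2] - 2 * vals[1] > 0)  # positive second difference: convex
```

The positive second difference confirms convexity, and the numeric integral agrees with the closed form, which is the property that shapes the investors' objective function in the model.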
Procedia PDF Downloads 212
1512 Use of Social Media in PR: A Change of Trend
Authors: Tang Mui Joo, Chan Eang Teng
Abstract:
The use of social media has become more defined. It is now widely used for business purposes, and more marketers are using social media as a tool to enhance their businesses. At the same time, more and more people spend their time in mobile apps, engaging with social media sites such as YouTube, Facebook, Twitter and others. Social media has even become common in Public Relations (PR), where it is now the number one platform for creating and sharing content. In view of this, social media has changed the rules of PR, bringing new challenges and opportunities to the profession. Although corporate websites, chat-rooms, email customer response facilities and electronic news release distribution are now viewed as standard aspects of PR practice, many PR practitioners are still struggling with the impact of new media, even though the implementation of social media is potentially reducing the cost of communication. To the point: PR practitioners are not fully embracing new media, they are ill-equipped to do so, and they fear the technology. Social media has become a new style of communication characterized by conversation and community. It is a platform that allows individuals to interact with one another and build relationships with each other. In the business world, therefore, consumers are able to interact with companies that have joined social media and, based on their experiences with social networking site interactions, are also exposed to personal interaction while communicating. This paper studies the impact of social media on PR and explores the potential changes in PR practice in a developing country like Malaysia. Eventually, the study reflects on how PR practitioners are actually using social media in the country. This paper is based on two theories in the development of its research foundation.
Media Ecology Theory supports the analysis of social media's impact on and changes to PR, while Social Penetration Theory reflects on how social media is used among PR practitioners. The research collected its data through a survey of PR practitioners. The results show that PR professionals value social media more than they actually use it, and that the way organizations communicate has changed due to the transformation brought by social media.
Keywords: new media, social media, PR, change of trend, communication, digital culture
Procedia PDF Downloads 321
1511 Study on the Impact of Power Fluctuation, Hydrogen Utilization, and Fuel Cell Stack Orientation on the Performance Sensitivity of PEM Fuel Cell
Authors: Majid Ali, Xinfang Jin, Victor Eniola, Henning Hoene
Abstract:
The performance of proton exchange membrane (PEM) fuel cells is sensitive to several factors, including power fluctuations, hydrogen utilization, and the orientation of the fuel cell stack. In this study, we investigate the impact of these factors on the performance of a PEM fuel cell. We start by analyzing the power fluctuations that are typical of renewable energy systems and their effects on the performance of a 50 W fuel cell. Next, we examine the hydrogen utilization rate (0-1000 mL/min) and its impact on the cell's efficiency and durability. Finally, we investigate the orientation of the fuel cell stack (three different positions), which can significantly affect the cell's lifetime and overall performance. Our analysis is based on experimental results, which have been further validated by comparison with simulations and manufacturer data. Our results indicate that power fluctuations can cause significant variations in the fuel cell's voltage and current, leading to a reduction in its performance. Moreover, we show that increasing the hydrogen utilization rate beyond a certain threshold can lead to a decrease in the fuel cell's efficiency. Finally, our analysis demonstrates that the orientation of the fuel cell stack can affect its performance and lifetime due to the non-uniform distribution of reactants and products. In summary, our study highlights the importance of considering power fluctuations, hydrogen utilization, and stack orientation in designing and optimizing PEM fuel cell systems. The findings of this study can be useful for researchers and engineers working on the development of fuel cell systems for various applications, including transportation, stationary power generation, and portable devices.
Keywords: fuel cell, proton exchange membrane, renewable energy, power fluctuation, experimental
Procedia PDF Downloads 135
1510 Factors Affecting Special Core Analysis Resistivity Parameters
Authors: Hassan Sbiga
Abstract:
Laboratory measurement methods were applied to core samples selected from three different fields (A, B, and C) of the Nubian Sandstone Formation in the central graben reservoirs of Libya. These measurements were conducted in order to determine the factors that affect resistivity parameters, and to investigate the effect of rock heterogeneity and wettability on these parameters. This included determining the saturation exponent (n) in the laboratory at two stages: the first before wettability measurements were conducted on the samples, and the second after the wettability measurements, in order to detect any effect on the saturation exponent. Another objective of this work was to quantify experimentally the pore and porosity types (macro- and micro-porosity) that have an effect on the electrical properties, by integrating capillary pressure curves with other routine and special core analyses. These experiments were made for the first time to obtain a relation between pore size distribution and the saturation exponent n. Changes were observed in the formation resistivity factor and cementation exponent due to ambient conditions and changes of overburden pressure. The cementation exponent also decreased from GHE-5 to GHE-8. Changes were also observed in the saturation exponent (n) and water saturation (Sw) before and after the wettability measurement. Samples with an oil-wet tendency have higher irreducible brine saturation and higher Archie saturation exponent values than samples with a uniformly water-wet surface. The experimental results indicate that there is a good relation between resistivity and pore type depending on the pore size. When oil begins to penetrate micro-pore systems in measurements of resistivity index versus brine saturation (after the wettability measurement), a significant change in the slope of the resistivity index relationship occurs.
Keywords: part of thesis, cementation, wettability, resistivity
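For readers unfamiliar with the saturation exponent, the standard Archie relation behind these measurements can be sketched as follows. This is background material on the textbook relation, not code or data from the study:

```python
# Archie's relation links the resistivity index I = R_t / R_o to brine
# saturation S_w through the saturation exponent n:
#   I = S_w ** (-n),  i.e.  log I = -n * log S_w,
# so n is the (negative) slope of the log-log resistivity-index curve,
# and a change in that slope signals a change in the conduction regime.
import math

def saturation_exponent(sw, resistivity_index):
    """Estimate n from one (S_w, I) point via Archie's relation I = S_w^-n."""
    return -math.log(resistivity_index) / math.log(sw)

# A sample with n = 2 at S_w = 0.5 has I = 0.5^-2 = 4:
assert abs(saturation_exponent(0.5, 4.0) - 2.0) < 1e-9
```

The abstract's observation of a slope change when oil enters the micro-pore system corresponds to n no longer being a single constant over the whole saturation range.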
Procedia PDF Downloads 246
1509 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible, with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
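The final detection step can be sketched with a plain CUSUM on a Gaussian mean shift. This is a minimal illustration of the cumulative sum test only; the evidence-theory fusion and the pignistic probability ratio described in the abstract are not reproduced, and all numbers are synthetic.

```python
def cusum(samples, mu0, mu1, sigma, h):
    """Return the index at which CUSUM declares a change, or None.

    Accumulates the log-likelihood ratio of N(mu1, sigma) vs N(mu0, sigma),
    resetting at zero, and alarms when the statistic crosses threshold h.
    """
    s = 0.0
    for t, x in enumerate(samples):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s > h:
            return t
    return None

pre = [0.0] * 20    # samples from the pre-change mean mu0 = 0
post = [2.0] * 10   # samples after the mean shifts to mu1 = 2
alarm = cusum(pre + post, mu0=0.0, mu1=2.0, sigma=1.0, h=8.0)
assert alarm == 24  # change at t = 20 is declared after 5 post-change samples
```

The trade-off the abstract refers to is visible in the threshold h: raising it lowers the false alarm rate but lengthens the detection delay.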
Procedia PDF Downloads 334
1508 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia, volatile renewable energy sources into the grid. Moreover, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases, and thus an aggregate model is necessary for a quick response that prevents excessive frequency deviation and the occurrence of any blackout. In this work, an aggregate of EVs is modeled as a big virtual battery in each area, treating various aspects of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generator shed its excess kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. The percentage of EVs' contribution to EFR will thus be characterized in future work as the goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
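The idea of lumping the connected EVs into one virtual battery can be sketched as follows; all parameter names and values (per-EV power limit, battery capacity, SOC range) are illustrative assumptions, not figures from the paper.

```python
import random

def aggregate_virtual_battery(n_evs, p_max_kw=7.0, e_cap_kwh=40.0, seed=1):
    """Summarize n_evs connected EVs with stochastic initial SOC as one battery."""
    rng = random.Random(seed)
    socs = [rng.uniform(0.2, 0.9) for _ in range(n_evs)]  # initial SOC per EV
    return {
        "discharge_limit_kw": n_evs * p_max_kw,                      # total shunt capacity
        "energy_available_kwh": sum(s * e_cap_kwh for s in socs),    # dischargeable energy
        "headroom_kwh": sum((1.0 - s) * e_cap_kwh for s in socs),    # chargeable energy
    }

agg = aggregate_virtual_battery(1000)
assert agg["discharge_limit_kw"] == 7000.0
```

A supervisory controller would use bounds like these, rather than per-vehicle state, to compute the prompt emergency control signal the abstract describes.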
Procedia PDF Downloads 100
1507 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on climate change and water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
Procedia PDF Downloads 406
1506 Radio Labeling and Characterization of Cysteine and Its Derivatives with Tc99m and Their Bio-Distribution
Authors: Rabia Ashfaq, Saeed Iqbal, Atiq ur Rehman, Irfanullah Khan
Abstract:
An extensive series of radiopharmaceuticals has been explored in order to discover a better brain tumour diagnostic agent. Tc99m labelling of cysteine and its derivatives in liposomes shows effective tagging of about 70% to 80%. Due to their microscopic size, the liposomes successfully cross the blood-brain barrier in 2 minutes, with uptake gradually decreasing over 5 to 15 minutes. HMPAO labelled with Tc99m is another important radiopharmaceutical used to study brain perfusion, but it comes with the flaw that it is only functional during epilepsy. 1,1-ECD is used in pure form in the Tc99m-ECD formulation, because it not only tends to cross the blood-brain barrier but can also be metabolized into a form that is easily entrapped in the human brain. Radiolabelling of cysteine with Tc99m at room temperature was attempted but yielded poor results. Hence, cysteine derivatives with salicylaldehyde were prepared, giving about 75% yield for the ligand. In order to perform its radiolabelling, DMSO was selected as a suitable solvent, and the physical parameters were determined. Elemental analysis gave results for the ligand remarkably similar to those reported in the literature. The IR spectrum of the ligand in DMSO indicated the absence of an S-H stretch and the presence of an N-H vibration. Thermal analysis of the ligand further suggested its decomposition pattern, with no distinct melting point. Radiolabelling of the ligand was then performed, producing excellent results of up to 88% labelling at pH 5.0. After validating the product's reproducibility, animal trials using a rabbit were performed: the radiopharmaceutical was injected into the rabbit, and dynamic as well as static studies were performed under SPECT. These showed considerable uptake in the kidneys and liver, making the compound suitable for hepatobiliary studies.
Keywords: mercapto compounds, 99mTc radiolabeling, salicylaldicysteine, thiazolidine
Procedia PDF Downloads 344
1505 Novel Liposomal Nanocarriers For Long-term Tumor Imaging
Authors: Mohamad Ahrari, Kayvan Sadri, Mahmoud Reza Jafari
Abstract:
PEGylated liposomes have a smaller volume of distribution and decreased clearance; consequently, owing to their prolonged presence in the bloodstream and their stability during this period, these liposomes can be applied for imaging tumoral sites. The purpose of this study is to develop an appropriate radiopharmaceutical agent for long-term imaging, for the improved diagnosis and evaluation of tumors. In this study, liposomal formulations encapsulating albumin were synthesized by the solvent evaporation method along with homogenization, and their characteristics were assessed. These liposomes were then labeled by the Philips method, and the stability of the labeled liposomes in serum, as well as their biodistribution and gamma scintigraphy in C26 colon carcinoma tumor-bearing mice, were studied. The study of the liposomal characteristics showed that they are capable of accumulating in tumor sites based on the EPR phenomenon. These liposomes also show high stability in retaining the encapsulated albumin over a long time. In the biodistribution study in mice, the liposomes accumulated mostly in the kidney, liver, spleen, and tumor sites and, even after the formulations had cleared from the bloodstream, remained at high levels in these organs for up to 96 hours. In gamma scintigraphy as well, organs with high activity accumulation were visible as hot spots from the early hours up to 96 hours. It is concluded that a PEGylated liposomal formulation encapsulating albumin can be labeled with 111In-oxine to obtain a stable formulation for long-term imaging, which provides more favorable conditions for the evaluation of tumors and will enable earlier diagnosis.
Keywords: nano liposome, 111In-oxine, imaging, biodistribution, tumor
Procedia PDF Downloads 113
1504 Brand Preferences in Saudi Arabia: Explorative Study in Jeddah
Authors: Badr Alharbi
Abstract:
There is significant debate on the evolution of retail marketing as an economy matures. In penetrating new markets, global brands are efficient at establishing a presence and replacing less effective competitors through superior advertising, pricing and, sometimes, quality. However, national brands adapt over time and may either partner with global brands in distribution and services or compete directly and more efficiently in the new, open market. This explorative study investigates brand preferences in Saudi Arabia. As a conservative society that is nevertheless highly commercialised, Saudi Arabia may have fragmenting markets, with consumer preferences and rejections based on country of origin, globalisation, or perhaps regionalisation. To investigate this, an online survey was distributed to Saudis in Jeddah to gather data on their preferences for travel, technology, clothes and accessories, eating out, vehicles, and influential brands. The results from 710 valid responses show distinct regional and national brand preferences among the young Saudi men who contributed to the survey. Apart from a preference for Saudi food providers, airline preferences were for the United Arab Emirates, holiday preferences for Europe, study and work preferences for the United States, hotel preferences United States-based, car preferences Japanese, and clothing preferences United States-based. The results were broadly in line with international research findings; however, the study participants differed from previous Arab research findings by describing themselves as innovative in their purchase selections, rarely loyal (with the exception of Apple products), and continually seeking new brand experiences. This survey contributes to an understanding of evolving Saudi consumer preferences.
Keywords: Saudi marketing, globalisation, country of origin, brand preferences
Procedia PDF Downloads 277
1503 Economic and Financial Crime, Forensic Accounting and Sustainable Development Goals (SDGs): Bibliometric Analysis
Authors: Monica Violeta Achim, Sorin Nicolae Borlea
Abstract:
The aim of this work is to stress the need to enhance the role of forensic accounting in fighting economic and financial crime, in the context of the new international regulatory movements in this area promoted by the International Federation of Accountants (IFAC). Corruption, money laundering, tax evasion and other frauds significantly hamper economic growth and human development and, ultimately, the UN Sustainable Development Goals. The present paper also stresses the role of good governance in fighting fraud, in order to achieve the most suitable sustainable development of society. In this view, we conducted a bibliometric systematic review on forensic accounting and its contribution to fraud detection and prevention, and on their relationship with good governance and the Sustainable Development Goals (SDGs). Two powerful bibliometric visualization tools, VOSviewer and CiteSpace, are used to analyze the published papers identified in the Scopus and Web of Science databases over time. Our findings reveal the main red flags identified in the literature as tools used in forensic accounting, the evolution of interest in the topic over time, its distribution across countries, and its connectivity with patterns of good governance. Visual designs and scientific maps are used to present these findings. Our findings are useful for managers and policymakers, providing important avenues that may help in reaching the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, in the area of using forensic accounting to prevent fraud.
Keywords: forensic accounting, frauds, red flags, SDGs
Procedia PDF Downloads 140
1502 Saponins vs Anthraquinones: Different Chemicals, Similar Ecological Roles in Marine Symbioses
Authors: Guillaume Caulier, Lola Brasseur, Patrick Flammang, Pascal Gerbaux, Igor Eeckhaut
Abstract:
Saponins and quinones are two major groups of secondary metabolites widely distributed in the biosphere. More specifically, triterpenoid saponins and anthraquinones are mainly found in a wide variety of plants, bacteria and fungi. In the animal kingdom, these natural organic compounds are rare and only found in small quantities in arthropods, marine sponges and echinoderms. In this last group, triterpenoid saponins are specific to holothuroids (sea cucumbers), while anthraquinones are the chemical signature of crinoids (feather stars). Depending on the species, they present different molecular cocktails. Despite presenting different chemical properties, these molecules share numerous similarities. This study compares the biological distribution, the pharmacological effects and the ecological roles of holothuroid saponins and crinoid anthraquinones. Both have been defined as allomones repelling predators and parasites (i.e. chemical defense) and have interesting pharmacological properties (e.g. anti-bacterial, anti-fungal, anti-cancer). Our study investigates the chemical ecology of two symbiotic association models: the snapping shrimp Synalpheus stimpsonii associated with crinoids, and the Harlequin crab Lissocarcinus orbicularis associated with holothuroids. Using behavioral experiments in olfactometers, chemical extractions and mass spectrometry analyses, we discovered that saponins and anthraquinones present a second ecological role: the attraction of obligatory symbionts towards their hosts. They can, therefore, be defined as kairomones. This highlights a new paradigm in marine chemical ecology: chemical repellents are attractants to obligatory symbionts because they constitute host-specific chemical signatures.
Keywords: anthraquinones, kairomones, marine symbiosis, saponins, attractant
Procedia PDF Downloads 199
1501 Hybrid Capture Resolves the Phylogeny of the Pantropically Distributed Zanthoxylum (Rutaceae) and Reveals an Old World Origin
Authors: Lee Ping Ang, Salvatore Tomasello, Jun Wen, Marc S. Appelhans
Abstract:
With about 225 species, Zanthoxylum L. is the second most species-rich genus in Rutaceae, and it is the only genus in the family with a pantropical distribution. Economically, it is used in several Asian countries as traditional medicine and spice. In the past, Zanthoxylum was divided into two genera, the temperate Zanthoxylum sensu stricto (s.s.) and the (sub)tropical Fagara, due to the large differences in flower morphology: heterochlamydeous in Fagara and homochlamydeous in Zanthoxylum s.s. The genus is much understudied, and previous phylogenetic studies using Sanger sequencing did not resolve the relationships sufficiently. In this study, we use Hybrid Capture with a specially designed bait set for Zanthoxylum to sequence 347 putatively single-copy genes. The taxon sampling has been largely improved compared to previous studies, and the preliminary results are based on 371 specimens representing 133 species from all continents and major island groups. Our preliminary results reveal a tree topology similar to that of previous studies, while providing more detail along the backbone of the phylogeny. The phylogenetic tree consists of four main clades: A) an African/Malagasy clade; B) the Z. asiaticum clade, consisting of widespread species occurring in (sub)tropical Asia and Africa as well as Madagascar; C) an Asian/Pacific clade; and D) an American clade, which also includes the temperate Asian species. The merging of Fagara and Zanthoxylum is supported by our results, and the homochlamydeous flowers of Zanthoxylum s.s. are likely derived from heterochlamydeous flowers. Several of the morphologically defined sections within Zanthoxylum are not monophyletic. The study dissemination will (1) introduce the framework of this project, (2) present preliminary results, and (3) report on the ongoing progress of the study.
Keywords: Zanthoxylum, phylogenomic, hybrid capture, pantropical
Procedia PDF Downloads 72
1500 Geographic Information System (GIS) for Structural Typology of Buildings
Authors: Néstor Iván Rojas, Wilson Medina Sierra
Abstract:
The management of spatial information for some neighborhoods in the city of Tunja is described through a Geographic Information System (GIS), in relation to the structural typology of the buildings. The use of GIS provides tools that facilitate the capture, processing, analysis and dissemination of cartographic information, as a product of the quality evaluation of the building classification, and allows the development of a method that unifies and standardizes information processes. The project aims to generate a geographic database that is useful to the entities responsible for planning, disaster prevention, and the care of vulnerable populations; it also seeks to serve as a basis for seismic vulnerability studies that can contribute to a study of urban seismic microzonation. The methodology consists of capturing the plan, including road names, neighborhoods, blocks and buildings, to which attributes were added from the evaluation of each dwelling: the number of inhabitants and their classification, the year of construction, the predominant structural system, the type of mezzanine slab and its state of favorability, the presence of geotechnical problems, the type of roof, the use of each building, and damage to structural and non-structural elements. These data are tabulated in a spreadsheet that includes the cadastral number, through which they are systematically linked to the respective building, which also carries that attribute. A geo-referenced database is obtained, from which graphical outputs are generated, producing thematic maps for each evaluated attribute that clearly show the spatial distribution of the information obtained. Using GIS offers important advantages for spatial information management and facilitates consultation and updating. The usefulness of the project is recognized as a basis for studies on planning and prevention issues.
Keywords: microzonation, buildings, geo-processing, cadastral number
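The cadastral-number linkage the methodology describes can be sketched as a simple key join; the field names and values here are illustrative assumptions, not data from the project.

```python
# Geometry layer keyed by cadastral number (geometry itself omitted):
buildings = {
    "01-0234": {"block": "M12", "neighborhood": "Centro"},
    "01-0235": {"block": "M12", "neighborhood": "Centro"},
}

# One evaluated dwelling per spreadsheet row, carrying the shared key:
survey = [
    {"cadastral": "01-0234", "year_built": 1978, "structural_system": "confined masonry"},
    {"cadastral": "01-0235", "year_built": 2004, "structural_system": "reinforced concrete frame"},
]

# Attach the survey attributes to the matching building via the cadastral number:
for row in survey:
    buildings[row["cadastral"]].update(
        {k: v for k, v in row.items() if k != "cadastral"}
    )

assert buildings["01-0234"]["structural_system"] == "confined masonry"
```

In a GIS this join happens between the attribute table and the geometry layer; the enriched layer is then symbolized to produce the thematic maps described above.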
Procedia PDF Downloads 334
1499 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence
Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti
Abstract:
In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out its multiple functions, the DMO can leverage a collective intelligence that comes from the ability to pool the information, the explicit and tacit knowledge, and the relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has large volumes of data at its disposal, much of it at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy.
The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadatation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, the websites of accommodation facilities and online newspapers); and (4) definition of the set of indicators and construction of the information base (specific definition of indicators and of the procedures for data acquisition, transformation, and analysis). The resulting framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each divided into three domains; each domain gathers a specific information need, represented by a set of questions to be answered through the analysis of the available indicators. The framework is highly flexible in the European context, given that it can be customized for each destination by adapting the part related to internal sources.
Application to the case study led to the creation of a decision support system that allows:
• integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for the ingestion of social and web information;
• reading and interpretation of data and metadata through guided navigation paths in the key of digital storytelling;
• implementation of complex analysis capabilities through the use of data mining algorithms, such as those for the prediction of tourist flows.
Keywords: collective intelligence, data framework, destination management, smart tourism
Procedia PDF Downloads 121
1498 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling
Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski
Abstract:
A high-power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance, high reliability and ignition certainty. This work shows methods of simulating the ignition process and its impact on engine parameters. For given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, the larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take a longer time to propagate. In order to shorten the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multicycle engine processes was developed. The CFD model consists of ECFM-3Z combustion and species transport models. The relative ignition timing difference between the two spark sources is constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature and rate of heat release was analyzed, and the most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations in the pre-ignition period were also analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: CFD, combustion, ignition, simulation, timing
Procedia PDF Downloads 296