Search results for: simplified conceptual models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7923

4203 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and the plan to end the sale of gasoline and diesel cars by 2035 will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework totally implemented in Python, therefore ensuring third-party verification even on large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module covering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector, with a techno-economic characterization based on public literature, to produce insightful energy scenarios and especially to cope with the very long time scale analyzed. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to translate them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
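TEMOA's actual formulation is far richer, but its core principle, minimizing total system cost subject to demand and emission constraints, can be illustrated with a minimal linear program. A sketch in Python using scipy; the technologies, costs, and limits below are invented for illustration and are not taken from the TEMOA-Europe dataset:

```python
from scipy.optimize import linprog

# Minimal sketch of energy-system cost minimization in the spirit of
# TEMOA-style models. All numbers are illustrative placeholders.
# Decision variables: generation (TWh) from [gas, wind, solar].
cost = [60.0, 45.0, 40.0]          # unit cost per technology -> objective

# Emission constraint: sum(e_i * x_i) <= cap (emission factor per unit).
emission = [[0.35, 0.0, 0.0]]
cap = [50.0]

# Demand constraint: total generation must equal demand.
demand = [[1.0, 1.0, 1.0]]
need = [500.0]

# Capacity bounds per technology.
bounds = [(0, 400), (0, 250), (0, 200)]

res = linprog(cost, A_ub=emission, b_ub=cap,
              A_eq=demand, b_eq=need, bounds=bounds)
print(res.x, res.fun)              # optimal mix and total system cost
```

Tightening the emission cap in this toy problem shifts the optimal mix away from gas, which is the same mechanism by which Green Deal constraints reshape scenarios in the full model.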

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 91
4202 Implementation and Validation of a Damage-Friction Constitutive Model for Concrete

Authors: L. Madouni, M. Ould Ouali, N. E. Hannachi

Abstract:

Two constitutive models for concrete are available in ABAQUS/Explicit, the Brittle Cracking Model and the Concrete Damaged Plasticity Model, and their suitability and limitations are well known. The aim of the present paper is to implement a damage-friction concrete constitutive model and to evaluate its performance by comparing the predicted response with experimental data. The constitutive formulation of this material model is reviewed. In order to obtain consistent results, parameter identification and calibration for the model have been performed. Several numerical simulations are presented in this paper, whose results validate the capability of the proposed model to reproduce the typical nonlinear behavior of concrete structures under different monotonic and cyclic load conditions. The results of the evaluation will be used for recommendations concerning the application and further improvement of the investigated model.
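The paper's damage-friction formulation is not reproduced in the abstract; as a purely illustrative stand-in, a minimal one-dimensional scalar damage law shows the general structure such constitutive models share, an elastic response degraded by a history-driven damage variable d in [0, 1]. All parameter values below are assumptions, not the paper's calibrated values:

```python
import numpy as np

# 1D scalar-damage sketch: sigma = (1 - d) * E * eps, with exponential
# damage evolution once the strain history exceeds a threshold eps0.
E = 30e9        # Young's modulus (Pa), typical order for concrete
eps0 = 1e-4     # damage-initiation strain
beta = 5e3      # controls post-peak softening rate

def stress(eps_history):
    kappa = 0.0                 # history variable: largest strain seen so far
    out = []
    for eps in eps_history:
        kappa = max(kappa, abs(eps))
        if kappa <= eps0:
            d = 0.0
        else:
            d = 1.0 - (eps0 / kappa) * np.exp(-beta * (kappa - eps0))
        out.append((1.0 - d) * E * eps)
    return np.array(out)

strain = np.linspace(0, 8e-4, 50)
print(stress(strain)[-5:])      # softening branch of the response
```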

Keywords: Abaqus, concrete, constitutive model, numerical simulation

Procedia PDF Downloads 349
4201 Examining the Usefulness of an ESP Textbook for Information Technology: Learner Perspectives

Authors: Yun-Husan Huang

Abstract:

Many English for Specific Purposes (ESP) textbooks are distributed globally, as their content development often involves compromises between commercial and pedagogical demands. Therefore, the regional applicability and usefulness of globally published ESP textbooks have received much debate. For ESP instructors, textbook selection is definitely a priority consideration in curriculum design. An appropriate ESP textbook can facilitate teaching and learning, while an inappropriate one may cause a disaster for both teachers and students. This study aims to investigate the regional applicability and usefulness of an ESP textbook for information technology (IT). Participants were 51 sophomores majoring in Applied Informatics and Multimedia at a university in Taiwan. As they were non-English majors, their English proficiency was mostly at elementary and elementary-to-intermediate levels. The course was offered for two semesters, and the textbook selected was Oxford English for Information Technology. At the end of the course, the students were required to complete a survey offering five choices (Very Easy, Easy, Neutral, Difficult, and Very Difficult) for each item. Based on the content design of the textbook, the survey investigated how the students viewed the difficulty of the grammar, listening, speaking, reading, and writing materials of the textbook. Results reveal that only 22% of the students found the grammar section difficult or very difficult. For listening, 71% responded difficult or very difficult. For general reading, 55% responded difficult or very difficult. For speaking, 56% responded difficult or very difficult. For writing, 78% responded difficult or very difficult. For advanced reading, 90% reported difficult or very difficult. These results indicate that, except for the grammar section, more than half of the students found the textbook contents difficult in terms of listening, speaking, reading, and writing materials. Such contradictory results between the easy grammar section and the difficult four language-skills sections imply that the textbook designers do not understand well the English learning background of regional ESP learners. For the participants, the learning contents of the grammar section were at the general grammar level of junior high school, while the learning contents of the four language-skills sections were closer to the levels of college English majors. Implications from the findings are drawn for instructors and textbook designers. First, existing ESP textbooks for IT are few, and thus textbook selections for instructors are insufficient. Second, existing globally published textbooks for IT cannot be applied to learners of all English proficiency levels, especially the low level. Third, with limited textbook selections, instructors should modify the selected textbook contents or supplement extra ESP materials to meet the proficiency level of target learners. Fourth, local ESP publishers should collaborate with local ESP instructors, who understand best the learning background of their students, in order to develop appropriate ESP textbooks for local learners. In conclusion, even though the instructor reduced the learning contents and simplified the tests in the curriculum design, the students still found the textbook difficult. This implies that, in addition to the instructor's professional experience, there is a need to understand the usefulness of the textbook from learner perspectives.

Keywords: ESP textbooks, ESP materials, ESP textbook design, learner perspectives on ESP textbooks

Procedia PDF Downloads 325
4200 Applying Innovation in FP Counselling: Results from A360 Amplify Matasan Matan Arewa Implementation of Counseling for Choice to Improve Contraceptive Adoption and Continuation among Married Adolescent Girls (15-19 years) in Northern Nigeria

Authors: Bulama Alhaji Alhassan, Roselyn Odeh, Rakiya Idris Labaran, Dorcas Yemi Danladi, Faith Ochonu

Abstract:

Introduction: Contraceptive use has numerous health benefits, such as preventing unplanned pregnancies and thereby supporting women to achieve their life goals, maintaining the ideal amount of time between pregnancies, lowering the death rate for both mothers and children, and generally enhancing the lives of women and children. Despite the numerous advantages of modern contraception and numerous initiatives by the government and development partners to promote its adoption, Nigeria's use of these methods has remained persistently low. Counseling about contraception is essential to providing high-quality care; ensuring informed choice and voluntarism in family planning is the key. The goal of the contraceptive counseling approach known as Counseling for Choice (C4C) is to ensure that people have the agency and voice to choose the contraceptive methods that best suit their requirements by altering the way both clients and providers engage in family planning counseling sessions. Aim: To evaluate the effect of Counseling for Choice on modern contraceptive adoption and continuation among married adolescent girls aged 15-19 years in 61 health facilities, within a 6-month period in Northern Nigeria. Methodology: Data from the NDHIS was obtained from selected facilities before and after commencement of the C4C intervention: 36 facilities in Kaduna and 25 in Nasarawa, the Matasan Matan Arewa (MMA) core implementation states, taking into consideration the specific period of initiation of the intervention. Six months after deployment of C4C, data was obtained from the same facilities for post-intervention analysis. Data was analyzed in SPSS using a paired sample t-test. Result: C4C resulted in improved access to FP services, increasing contraceptive adoption and continued use by 15% and 27%, respectively (p<0.05), in Nasarawa state. In Kaduna state, we observed 11% and 28% improvements in adoption and continued use, respectively, also with statistical significance (p<0.05). The increase is highly correlated (0.99 in Nasarawa and 0.75 in Kaduna) with the C4C intervention, where the provider uses the NORMAL and 3Ws rubric to explain to the client, in a simplified manner, what to do with the chosen method, what to expect with the method, and when to return for a refill. Conclusion: In Northern Nigeria, it was observed that most clients discontinue their methods due to bleeding side effects, which was related to a lack of appropriate and comprehensive information during counselling about what to expect with the client's method. With the intervention of the program, through capacity strengthening of PHC providers in counselling skills using Counseling for Choice, modern contraceptive uptake has improved among young married women in Northern Nigeria.
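The pre/post comparison was run in SPSS; the same paired sample t-test can be sketched in Python with scipy on hypothetical facility-level counts (the numbers below are invented, not NDHIS data):

```python
from scipy import stats

# Hypothetical facility-level adoption counts before/after the C4C
# intervention; the real analysis used NDHIS data from 61 facilities.
pre  = [12, 9, 15, 7, 11, 14, 8, 10, 13, 6]
post = [16, 11, 19, 9, 12, 18, 10, 14, 15, 9]

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```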

Keywords: continuation, counselling, uptake, adolescents, modern contraception, implementation

Procedia PDF Downloads 54
4199 Simulation of Surge Protection for a Direct Current Circuit

Authors: Pedro Luis Ferrer Penalver, Edmundo da Silva Braga

Abstract:

In this paper, the performance of a simple surge protection circuit for a direct current circuit was simulated. The protection circuit was developed from modified electric macro models of a gas discharge tube and a transient voltage suppressor diode. Moreover, a combination wave generator circuit was used as the source of energy surges. The simulations showed that the presented circuit ensures immunity corresponding to test level IV of the IEC 61000-4-5:2014 international standard. The developed circuit can be modified to meet the requirements of any other equipment to be protected. Similarly, the parameters of the combination wave generator can be changed to provide different surge amplitudes.
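For readers unfamiliar with the standard, the open-circuit voltage wave of the IEC 61000-4-5 combination wave generator (1.2/50 µs) is often approximated as a double exponential; a sketch with commonly cited approximate time constants, not values taken from the standard itself:

```python
import numpy as np

# Double-exponential approximation of the 1.2/50 us open-circuit wave:
#   v(t) = k * Vp * (exp(-a*t) - exp(-b*t))
# a and b below are commonly cited approximations giving a ~1.2 us front
# time and ~50 us time to half value.
a, b = 1.47e4, 2.47e6          # 1/s
t = np.linspace(0, 100e-6, 2001)
wave = np.exp(-a * t) - np.exp(-b * t)
k = 1.0 / wave.max()           # normalize so the peak equals Vp
Vp = 4000.0                    # test level IV open-circuit peak: 4 kV
v = k * Vp * wave

print(f"peak = {v.max():.0f} V at t = {t[v.argmax()]*1e6:.2f} us")
```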

Keywords: combination wave generator, IEC 61000-4-5, Pspice simulation, surge protection

Procedia PDF Downloads 312
4198 The Use of Computer-Aided Design in Small Contractors in a Local Area of Korea

Authors: Myunghoun Jang

Abstract:

A survey of small-size contractors in Jeju was conducted to investigate college graduates' computer-aided design (CAD) competence. Most small-size contractors use CAD software to review and update drawings submitted by an architect. This research also analyzed the curriculum of architectural engineering in several national universities. The CAD classes have 4 or 6 hours per week and primarily use AutoCAD. This paper proposes that a CAD class needs 6 hours per week, that 2D drawing should remain the main theme in the curriculum, and that exercises to make 3D models should also be included in the CAD class. An improved method of evaluating reports and exercise results, for example an Internet cafe and real-time feedback using smartphones, is also necessary.

Keywords: CAD (Computer Aided Design), CAD education, education improvement, small-size contractor

Procedia PDF Downloads 256
4197 Numerical Investigation of Flow Past a Staggered Tube Bundle

Authors: Kerkouri Abdelkadir

Abstract:

Numerical calculation of turbulent flows is one of the most prominent modern interests in various engineering applications. Given the difficulty of predicting, tracking and studying such flows with computational fluid dynamics (CFD), in this paper we present a numerical study of flow past a staggered tube bundle, using the CFD code ANSYS FLUENT with several turbulence models: the k-ε, k-ω and SST approaches. The flow is modeled based on experimental studies. The predictions of mean velocities are in very good agreement with detailed LDA (Laser Doppler Anemometry) measurements performed at 8 stations along the depth of the array. The sizes of the recirculation zones behind the cylinders are also predicted. The simulations are conducted at a Reynolds number of 12858, chosen to match the experimental conditions.

Keywords: flow, tube bundle, ANSYS Fluent, CFD, turbulence, LDA, RANS (k-ε, k-ω, SST)

Procedia PDF Downloads 150
4196 An East-West Trans-Cultural Study: Zen Enlightenment in Asian and John Cage's Visual Arts

Authors: Yu-Shun Elisa Pong

Abstract:

American composer John Cage (1912-1992) is an influential figure in the musical, visual and performing arts after World War II and has also been claimed as a forerunner of the Western avant-garde in the artistic field. The crucial factors that contributed to his highly acclaimed achievements include Zen enlightenment, which he received mainly from the Japanese Zen master D. T. Suzuki (1870-1966). As a kind of reflection on and afterthought of this Zen inspiration, John Cage created various forms of art, among which his visual art has recently attracted more and more attention and discussion, especially from the perspective of Zen. John Cage started to create visual art works when he was 66 years old and continued until his death. The quality and quantity of the works are worthy of in-depth study: 667 pieces of prints, 114 pieces of watercolor, and about 150 pieces of sketches. Cage's stylistic changes during the 14 years of creation are quite obvious, and the Zen elements in the later works seem to be omnipresent. Based on comparative artistic study, a historical and conceptual view of Zen art as it formed initially in traditional Chinese and Japanese visual arts will be discussed. Then, representative Chinese and Japanese Zen works will be examined, and the technical aspect as well as a stylistic analysis will be presented. Finally, a comprehensive comparison of the original Oriental Zen works with John Cage's works, focusing on influence and artistic transformation, will be addressed. Masterpieces from the Zen tradition by Chinese artists such as Liang Kai (d. 1210) and Ma Yuan (1160-1225) of the Southern Sung Dynasty, and by Japanese artists such as Sesshū (1420-1506), Miyamoto Musashi (1584-1645) and others, will be discussed. In the current study, these art works from different periods of the historical development of Zen will serve as the basis of analogy, interpretation, and criticism of Cage's visual art works. Through the perspective of Zen authenticity from Asia, we see how John Cage appropriated Eastern culture in his innovation, which changed the art world forever. It is believed that through a transition from inter-, cross-, toward trans-cultural inspiration, John Cage set up a unique pathway of artistic innovation.

Keywords: John Cage, Chinese Zen art, Japanese Zen art, visual art

Procedia PDF Downloads 505
4195 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management

Authors: A. Giannakopoulos, S. B. Buckley

Abstract:

Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. In South Africa, for example, it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on findings by the author and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is no doubt by now in almost anyone's mind that mathematics is involved, to a greater or lesser extent, in all fields, from symbols to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured problems. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and that diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the author to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving, and treating problem solving as a product, will empower the employee through further training to tackle analogical and complex problems. The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, and about how they are connected to and advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation) and of knowledge creation, sharing, dissemination and storage, the five pillars of knowledge management (KM). The model also expands on Gray's framework on KM practices and problem solving, and opens the door to a new approach to training employees in general and domain-specific problem areas that can be adapted in any type of organisation.

Keywords: critical thinking, knowledge management, mathematics, problem solving

Procedia PDF Downloads 581
4194 Entrepreneurship Education Revised: Merging a Theory-Based and Action-Based Framework for Entrepreneurial Narratives' Impact as an Awareness-Raising Teaching Tool

Authors: Katharina Fellnhofer, Kaisu Puumalainen

Abstract:

Despite the current worldwide increasing interest in entrepreneurship education (EE), little attention has been paid to innovative web-based approaches such as the narrative approach, i.e., telling individual stories of entrepreneurs via multimedia, for demonstrating the impact on individuals' attitudes towards entrepreneurship. In addition, there is no consensus in this research discipline regarding the effective content of teaching materials and tools. Therefore, a qualitative, hypothesis-generating research contribution is required that draws new insights from published works in the EE field of research to serve future research related to multimedia entrepreneurial narratives. Against this background, our effort will focus on finding support for the following introductory statement: multimedia success and failure stories of real entrepreneurs show potential to change perceptions towards entrepreneurship in a positive way. The proposed qualitative conceptual paper will introduce the underlying background for this research framework. As a qualitative, hypothesis-generating research contribution, it aims at drawing new insights from published works in the EE field related to entrepreneurial narratives to serve future research. By means of the triangulation of multiple theories, we will lay the foundation for multimedia-based entrepreneurial narratives, applying a learning-through-multimedia-real-entrepreneurial-narratives pedagogical tool to facilitate entrepreneurship. Our effort will help to demystify how value-oriented entrepreneurs telling their stories via multimedia can simultaneously enhance EE. The paper will therefore build new bridges between well-cited theoretical constructs to form a robust research framework. Overall, the intended contribution seeks to emphasize future research on currently under-researched issues in the EE sphere, which are considered essential not only to academia but also to business and society, having future job-providing, growth-oriented entrepreneurs in mind. The authors would like to thank the Austrian Science Fund FWF: [J3740 – G27].

Keywords: entrepreneurship education, entrepreneurial attitudes and perceptions, entrepreneurial intention, entrepreneurial narratives

Procedia PDF Downloads 236
4193 Analyzing Competitive Advantage of Internet of Things and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve connections between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data to use to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts that define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: internet of things, data analytics, smart cities, competitive advantage

Procedia PDF Downloads 82
4192 Analysing the Cost of Immigrants to the National Health System in Eastern Macedonia and Thrace

Authors: T. Theodosiou, P. Polychronidou, A. G. Karasavvoglou

Abstract:

In recent years, the number of immigrants in Greece has increased dramatically. Their impact on the National Health System (NHS) has not yet been thoroughly investigated. This paper analyses the cost of immigrants to the NHS hospitals of the region of Eastern Macedonia and Thrace. The data were collected from 2005 to 2011 from five different hospitals and are analysed using linear mixed effects models in order to investigate the effects of nationality and year on the cost of hospitalization and treatment. The results show that patients of Greek nationality generally have a higher mean cost of hospitalization compared to immigrants, and that there is an increasing trend in the cost, except for the year 2010.
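A linear mixed-effects analysis of this kind can be sketched in Python with statsmodels; the file and column names below are hypothetical placeholders for the hospital data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the mixed-effects analysis: fixed effects for nationality and
# year, random intercept per hospital. "hospital_costs.csv" and its columns
# are hypothetical stand-ins for the 2005-2011 data from five hospitals.
df = pd.read_csv("hospital_costs.csv")

model = smf.mixedlm("cost ~ nationality + year", data=df,
                    groups=df["hospital"])
result = model.fit()
print(result.summary())
```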

Keywords: cost, Eastern Macedonia and Thrace, immigrants, national health system

Procedia PDF Downloads 232
4191 A Strategic Communication Design Model for Indigenous Knowledge Management

Authors: Dilina Janadith Nawarathne

Abstract:

This article presents the initial development of a communication model (Model_isi) as a means of gathering, preserving and transferring indigenous knowledge in the field of knowledge management. The article first discusses the need for an appropriate complementary model for indigenous knowledge management that differs from existing methods and models. The paper then presents the newly developed model for indigenous knowledge management, which was generated by blending key aspects of different disciplines and can be implemented as a complementary approach to the existing scientific method. The paper further demonstrates the effectiveness of the developed method by reflecting upon a pilot demonstration carried out with selected indigenous communities of Sri Lanka.

Keywords: indigenous knowledge management, knowledge transfer, tacit knowledge, research model, Asian-centric philosophy

Procedia PDF Downloads 464
4190 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within less than a few centimeters, and only insignificant differences among orientation angles, within less than three arc-seconds, were found. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
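The final DEM-differencing step reduces to subtracting two co-registered elevation grids and summarizing the residuals; a minimal numpy sketch on synthetic stand-in grids:

```python
import numpy as np

# DEM differencing on illustrative arrays standing in for the SfM and
# photogrammetric DEMs (elevations in meters on a common grid).
dem_sfm = np.random.normal(100.0, 5.0, (500, 500))
dem_photo = dem_sfm + np.random.normal(0.02, 0.01, (500, 500))  # ~cm shift

diff = dem_sfm - dem_photo
print(f"mean shift = {diff.mean()*100:.1f} cm, "
      f"RMSE = {np.sqrt((diff**2).mean())*100:.1f} cm")
```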

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 273
4189 Epistemic Uncertainty Analysis of Queue with Vacations

Authors: Baya Takhedmit, Karim Abbas, Sofiane Ouazine

Abstract:

Vacation queues are often employed to model many real situations such as computer systems, communication networks, manufacturing and production systems, transportation systems and so forth. These queueing models are usually solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). In this paper, we consider the M/G/1/N queue with server vacations and exhaustive discipline, where we assume that the vacation parameter values are uncertain. We use the Taylor series expansion approach to estimate the expectation and variance of the model output due to epistemic uncertainties in the model input parameters.
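The Taylor series approach propagates input uncertainty through the model: for an output f(θ) with uncertain parameter θ of mean μ and variance σ², E[f(θ)] ≈ f(μ) + ½f″(μ)σ² and Var[f(θ)] ≈ f′(μ)²σ². A numeric sketch with a toy performance function standing in for the queueing output:

```python
import numpy as np

# Second-order Taylor propagation of epistemic parameter uncertainty.
# f is a toy stand-in; in the paper it would be an M/G/1/N vacation-queue
# performance measure as a function of the uncertain vacation parameter.
def f(theta):
    return 1.0 / (1.0 + theta)

mu, sigma2 = 0.5, 0.01            # estimated mean/variance of the parameter
h = 1e-5
f1 = (f(mu + h) - f(mu - h)) / (2 * h)            # f'(mu), central difference
f2 = (f(mu + h) - 2 * f(mu) + f(mu - h)) / h**2   # f''(mu)

mean_out = f(mu) + 0.5 * f2 * sigma2
var_out = f1**2 * sigma2
print(f"E[f] ~= {mean_out:.5f}, Var[f] ~= {var_out:.6f}")
```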

Keywords: epistemic uncertainty, M/G/1/N queue with vacations, non-parametric sensitivity analysis, Taylor series expansion

Procedia PDF Downloads 418
4188 Simulation of Wet Scrubbers for Flue Gas Desulfurization

Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra

Abstract:

Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model, where all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have all been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases through source terms, with the two software packages coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results. The CFD model needed to be fast to evaluate in order to apply this optimization routine, which required approximately 1000 simulations. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows for exploiting the benefits of both software packages.

Keywords: desulfurization, discrete phase, scrubber, wall film

Procedia PDF Downloads 242
4187 Attention Problems among Adolescents: Examining Educational Environments

Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte

Abstract:

This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA). Two thousand eight hundred and ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems. Multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade and the number of close friends were included in this study as predictive variables. The analysis results indicated that educational environments and extracurricular activities are important factors which influence students' attention problems.

Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling

Procedia PDF Downloads 262
4186 Aerodynamic Analysis of a Frontal Deflector for Vehicles

Authors: C. Malça, N. Alves, A. Mateus

Abstract:

This work was one of the tasks of the Manufacturing2Client project, whose objective was to develop a frontal deflector to be commercialized in the automotive industry, using new design and manufacturing methods. In this task in particular, the goal was to develop the ability to computationally predict the aerodynamic influence of the flow around vehicles, in an effort to reduce fuel consumption in vehicles from class 3 to 8. With this aim, two deflector models were developed and their aerodynamic performance analyzed. The aerodynamic study was done using the Computational Fluid Dynamics (CFD) software Ansys CFX and allowed the calculation of the drag coefficient caused by the vehicle motion for the different configurations considered. Moreover, the reduction of diesel consumption and carbon dioxide (CO2) emissions associated with the optimized deflector geometry could be assessed.

Keywords: aerodynamic analysis, CFD, CO2 emissions, drag coefficient, frontal deflector, fuel consumption

Procedia PDF Downloads 395
4185 The Experimental and Modeling Adsorption Properties of Sr2+ on Raw and Purified Bentonite

Authors: A. A. Khodadadi, S. C. Ravaj, B. D. Tavildari, M. B. Abdolahi

Abstract:

The adsorption properties of a local bentonite (Semnan, Iran) and of purified bentonite prepared from it towards Sr2+ adsorption were investigated by batch equilibration. The influence of equilibration time, adsorption isotherms, adsorption kinetics, solution pH, and the presence of EDTA and NaCl on these properties was studied and discussed. Kinetic data were found to be well fitted by a pseudo-second-order kinetic model. Sr2+ is preferentially adsorbed by both bentonite and purified bentonite. The D-R isotherm model fits the experimental data better than the other adsorption isotherm models. The maximum adsorption of Sr2+, corresponding to the highest negative charge density on the surface of the adsorbent, was seen at pH 12. The presence of EDTA and NaCl decreased the amount of Sr2+ adsorbed.
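The pseudo-second-order model referenced here is q_t = k·qe²·t / (1 + k·qe·t); a fitting sketch in Python with invented uptake data standing in for the bentonite measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetic fit. Times and uptakes are illustrative,
# not the Semnan bentonite data.
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # min
qt = np.array([8.1, 12.4, 16.8, 19.9, 21.2, 22.0, 22.3])  # mg/g

def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

(qe, k), _ = curve_fit(pso, t, qt, p0=[qt.max(), 0.01])
print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg*min)")
```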

Keywords: bentonite, purified bentonite, Sr2+, equilibrium isotherm, kinetics

Procedia PDF Downloads 362
4184 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by central banks, and it has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that were traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with a VAR-Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during a tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the period of study into pre-crisis (2002 to 2007) and post-crisis (2007 to 2015) periods and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found most of the liquidity variables to be significant throughout the sample period. This reveals the predictability of monetary policy on aggregate market liquidity. This study contributes to the existing body of literature by documenting strong predictability of monetary policy on stock liquidity in an emerging economy with an order-driven market making system like India. Most previous studies have been carried out in developing economies with quote-driven or hybrid market making systems, and their results are ambiguous across different periods. In an eclectic sense, this study may be considered a baseline study for further identifying the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
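A VAR analysis of this kind can be sketched with statsmodels; the file and column names below are hypothetical placeholders for the monetary policy and liquidity series:

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Sketch of the multivariate VAR: monetary policy variables alongside a
# liquidity measure. "liquidity_monthly.csv" and its columns are invented
# stand-ins for the 2002-2015 NSE series.
df = pd.read_csv("liquidity_monthly.csv",
                 usecols=["call_money_rate", "base_money_growth", "amihud"])

model = VAR(df)
res = model.fit(maxlags=12, ic="aic")      # lag order selected by AIC
print(res.test_causality("amihud", ["call_money_rate"]).summary())
irf = res.irf(10)                          # impulse responses, 10 periods
```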

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 362
4183 Nyaya, Buddhist School Controversy regarding the Laksana of Pratyaksa: Causal versus Conceptual Analysis

Authors: Maitreyee Datta

Abstract:

The Buddhist lakṣaņa of pratyakṣa pramā is not the result of a causal analysis of its genesis. The Naiyāyikas, on the other hand, have provided the lakṣaņa of pratyakṣa in terms of a causal analysis of it. Thus, though philosophers in these two systems have discussed the nature of pratyakṣa pramā (perception) in detail, their treatments and understanding of it vary according to their respective understandings of pramā and pramāņa and their relationship. In the Nyāya school, the definition (lakṣaņa) of perception (pratyakṣa) has been given in terms of the process by virtue of which it is generated. Thus, the Naiyāyikas provide a causal account of perception (pratyakṣa) by virtue of their lakṣaņa of it. But in Buddhist epistemology, perception has been defined by virtue of the nature of perceptual knowledge (pratyakṣa pramā), which is devoid of any vikalpa or conceptual construction. The two schools differ due to their different metaphysical presuppositions, which determine their epistemological pursuits. The Naiyāyikas admitted pramā and pramāņa as separate events, and they took pramāņa to be the cause of pramā. These presuppositions enabled them to provide a lakṣaņa of pratyakṣa pramā in terms of the causes by which it is generated. Why did the Buddhist epistemologists define perception by the unique nature of perceptual knowledge instead of the process by which it is generated? This question will be addressed and dealt with in the present paper. In doing so, the distinctive purpose of Buddhist philosophy will be identified, which will enable us to find an answer to the above question. This enterprise will also reveal the close relationship between some basic Buddhist presuppositions, such as pratityasamutpādavāda and kṣaņikavāda, and Buddhist epistemological positions. In other words, their distinctive notion of pramā (knowledge) indicates their unique epistemological position, which is found to comply with their basic philosophical presuppositions. The first section of the paper will present the Buddhist epistemologists' lakṣaņa of pratyakṣa. An analysis of the lakṣaņa will be given in clear terms to reveal the nature of pratyakṣa as an instance of pramā. In the second section, an effort will be made to identify the uniqueness of such a definition, articulating how the basic Buddhist presuppositions and their unique epistemological positions are related. In the third section of the paper, an effort will be made to compare the Nyāya epistemologists' position regarding pratyakṣa with that of the Buddhist epistemologists.

Keywords: laksana, prama, pramana, pratyksa

Procedia PDF Downloads 131
4180 Adsorption Studies of Lead from Aqueous Solutions on Coconut Shell Activated Carbon

Authors: G. E. Sharaf El-Deen, S. E. A. Sharaf El-Deen

Abstract:

Activated carbon was prepared from coconut shell (ACS), a discarded agricultural waste, to produce a bioadsorbent through simple and environmentally friendly processes. This activated-carbon-based biosorbent was evaluated for the adsorptive removal of lead from water. The characterisation results showed that this biosorbent had a very high specific surface area and abundant functional groups. The adsorption equilibrium data were well described by the Langmuir model, whilst the kinetics data were described by pseudo-first-order, pseudo-second-order and intraparticle diffusion models. The adsorption process could best be described by pseudo-second-order kinetics.

Keywords: coconut shell, activated carbon, adsorption isotherm and kinetics, lead removal

Procedia PDF Downloads 290
4181 On Periodic Integer-Valued Moving Average Models

Authors: Aries Nawel, Bentarzi Mohamed

Abstract:

This paper deals with the study of some probabilistic and statistical properties of a Periodic Integer-Valued Moving Average model (PINMA_{S}(q)). Closed forms of the mean, the second moment and the periodic autocovariance function are obtained. Furthermore, the time reversibility of the model is discussed in detail. Moreover, estimates of the underlying parameters are obtained by the Yule-Walker method, the Conditional Least Squares (CLS) method and the Weighted Conditional Least Squares (WCLS) method. A simulation study is carried out to evaluate the performance of the estimation methods, and an application to a real data set is provided.
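An integer-valued moving average process is built from binomial thinning: β∘N counts the survivors of N trials with success probability β. A simulation sketch of a periodic INMA(1) with illustrative seasonal parameters (the paper's PINMA_{S}(q) is more general):

```python
import numpy as np

rng = np.random.default_rng(0)

# Periodic INMA(1) via binomial thinning:
#   X_t = beta_s o eps_{t-1} + eps_t,  beta_s o N ~ Binomial(N, beta_s),
# with thinning probability beta_s and Poisson innovation mean lam_s
# varying over the period. All parameter values are illustrative.
S = 4                                  # period length (e.g., quarters)
beta = [0.2, 0.5, 0.3, 0.6]            # seasonal thinning probabilities
lam = [3.0, 1.5, 2.0, 4.0]             # seasonal innovation means

T = 10_000
eps = [rng.poisson(lam[t % S]) for t in range(T)]
x = [eps[0]] + [rng.binomial(eps[t - 1], beta[t % S]) + eps[t]
                for t in range(1, T)]

# Seasonal sample means should settle near lam_s + beta_s * lam_{s-1}.
x = np.array(x)
print([x[s::S].mean().round(2) for s in range(S)])
```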

Keywords: periodic integer-valued moving average, periodically correlated process, time reversibility, count data

Procedia PDF Downloads 174
4180 Rheological Evaluation of Various Indigenous Gums

Authors: Yogita Weikey, Shobha Lata Sinha, Satish Kumar Dewangan

Abstract:

In the present investigation, the rheology of three different natural gums has been evaluated experimentally using an MCR 102 rheometer. Various samples were prepared by varying the concentration of the solid gum powder. Their non-Newtonian behavior is observed in the consistency plots and in the viscosity variation plots for the different solid concentrations. The viscosity-shear rate curves of the gums are similar, and the behavior is shear thinning: the gums show pseudoplastic behavior. The values of k and n are calculated using various models. Results show that the Herschel-Bulkley rheological model is the most reliable for describing shear stress as a function of shear rate. R² values are also calculated to support the model selection.
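The Herschel-Bulkley model referenced here is τ = τ₀ + K·γ̇ⁿ, with n < 1 indicating shear thinning; a fitting sketch with invented shear data standing in for the rheometer measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Herschel-Bulkley fit, tau = tau0 + K * gamma_dot**n. The data points are
# illustrative, not the MCR 102 measurements.
gamma_dot = np.array([1, 5, 10, 50, 100, 300, 500], dtype=float)  # 1/s
tau = np.array([2.4, 3.9, 4.8, 8.6, 11.0, 17.5, 21.4])            # Pa

def herschel_bulkley(g, tau0, K, n):
    return tau0 + K * g**n

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                            p0=[1.0, 1.0, 0.5], bounds=(0, np.inf))
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa*s^n, n = {n:.2f}")
# n < 1 confirms the pseudoplastic (shear-thinning) behavior.
```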

Keywords: bentonite, Indian gum, non-Newtonian model, rheology

Procedia PDF Downloads 294
4179 Closest Possible Neighbor of a Different Class: Explaining a Model Using a Neighbor Migrating Generator

Authors: Hassan Eshkiki, Benjamin Mora

Abstract:

The Neighbor Migrating Generator is a simple and efficient approach to finding the closest potential neighbor(s) with a different label for a given instance, without the need to calibrate any kernel settings. This allows determining and explaining the most important features that influence an AI model. It can be used either to migrate a specific sample to the class decision boundary of the original model within a close neighborhood of that sample, or to identify global features that can help localise neighbor classes. The proposed technique works by minimizing a loss function that is divided into two components which are independently weighted according to three parameters α, β, and ω, with α being self-adjusting. Results show that this approach is superior to past techniques in detecting the smallest changes in the feature space and may also point out issues in models, such as over-fitting.
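The exact NMG loss and its α/β/ω weighting are the authors'; the general shape of such a counterfactual search, a boundary-crossing term plus a weighted distance term minimized by gradient descent, can be sketched on a toy classifier:

```python
import numpy as np

# Generic counterfactual-migration sketch (not the paper's exact loss):
# push the model output across the decision boundary while staying close
# to the original instance.
w, b = np.array([1.5, -2.0]), 0.3          # toy logistic classifier

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def loss(x, x0, target=1.0, beta=0.5):
    return (predict(x) - target) ** 2 + beta * np.sum((x - x0) ** 2)

x0 = np.array([-1.0, 0.5])                 # instance to migrate (class 0)
x, lr = x0.copy(), 0.1
for _ in range(500):                       # numerical gradient descent
    g = np.array([(loss(x + e, x0) - loss(x - e, x0)) / 2e-5
                  for e in np.eye(2) * 1e-5])
    x -= lr * g

print(x0, predict(x0))                     # original point and score
print(x, predict(x))                       # migrated neighbor near boundary
```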

Keywords: explainable AI, EX AI, feature importance, counterfactual explanations

Procedia PDF Downloads 144
4178 Air Breakdown Voltage Prediction in Post-arcing Conditions for Compact Circuit Breakers

Authors: Jing Nan

Abstract:

The air breakdown voltage in compact circuit breakers is a critical factor in the design and reliability of electrical distribution systems. This voltage determines the threshold at which the air insulation between conductors fails or 'breaks down', leading to an arc. The phenomenon is highly sensitive to the conditions within the breaker, such as the temperature and the distance between electrodes. Typically, air breakdown voltage models have been reliable for predicting failure under standard operational temperatures. However, in post-arcing conditions, where temperatures can soar above 2000 K, these models face challenges due to the complex physics of ionization and electron behaviour at such high-energy states. Building upon the foundational understanding that the breakdown mechanism is initiated by free electrons and propelled by electric fields, which lead to ionization and, potentially, to avalanche or streamer formation, we acknowledge the complexity introduced by high-temperature environments. Recognizing the limitations of existing experimental data, a notable research gap exists in the accurate prediction of breakdown voltage at the elevated temperatures typically observed post-arcing, where temperatures exceed 2000 K. To bridge this knowledge gap, we present a method that integrates gap distance and high-temperature effects into the air breakdown voltage assessment. The proposed model is grounded in the physics of ionization, accounting for the dynamic behaviour of free electrons which, under intense electric fields at elevated temperatures, lead to thermal ionization and potentially reach the threshold for streamer formation given by Meek's criterion. Employing the Saha equation, our model calculates equilibrium electron densities, adapting to the atmospheric pressure and the hot-temperature regions indicative of post-arc conditions. Our model is rigorously validated against established experimental data, demonstrating substantial improvements in predicting air breakdown voltage in the high-temperature regime. This work significantly improves the predictive power for air breakdown voltage under conditions that closely mimic operational stressors in compact circuit breakers. Looking ahead, the proposed methods are poised for further exploration in alternative insulating media, such as SF6, enhancing the model's utility for a broader range of insulation technologies and contributing to the future of high-temperature electrical insulation research.
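The Saha equation referenced here relates the equilibrium electron density to temperature: n_e n_i / n_n = 2 (g_i/g_n) (2π m_e k_B T / h²)^{3/2} exp(−E_i / k_B T). A single-species sketch for hot air at atmospheric pressure, using atomic nitrogen's ionization energy and a unit statistical-weight ratio as simplifying assumptions (the paper's model is more detailed):

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
me = 9.1093837e-31     # electron mass, kg
h = 6.62607015e-34     # Planck constant, J*s
eV = 1.602176634e-19   # J per eV

def saha_ne(T, p=101325.0, Ei_eV=14.53):
    """Equilibrium electron density (m^-3) for a single-species gas."""
    n = p / (kB * T)                          # total density, ideal gas
    S = 2 * (2 * np.pi * me * kB * T / h**2) ** 1.5 \
        * np.exp(-Ei_eV * eV / (kB * T))
    # Saha with n_e = n_i: n_e^2 / (n - n_e) = S  ->  quadratic in n_e.
    return (-S + np.sqrt(S**2 + 4 * S * n)) / 2

for T in (2000, 4000, 8000):
    print(T, "K -> n_e ~", f"{saha_ne(T):.3e}", "m^-3")
```

The steep growth of n_e with temperature is what degrades the insulating strength of the post-arc gap.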

Keywords: air breakdown voltage, high-temperature insulation, compact circuit breakers, electrical discharge, Saha equation

Procedia PDF Downloads 66
4177 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs as part of their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent purchases the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities for a respondent are calculated, thereby determining his or her choice; once the choices of all respondents are known, an estimate of market share is obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choices. This study adopts a learning-curve assumption to predict the future price of EVs. Based on the learning curve method and past EV price data, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future EV prices together with the other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained from a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
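Phase 3 can be sketched compactly: draw future EV prices from the learning-curve distribution, recompute each respondent's utilities, and record the implied share. All utilities and distribution parameters below are invented placeholders for the survey estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo over an uncertain future EV price; each respondent picks the
# vehicle with the higher total utility. Part-worths and the price
# distribution are illustrative stand-ins for the estimated values.
n_resp, n_draws = 300, 1000
u_ev_base = rng.normal(1.0, 1.5, n_resp)       # respondent EV-vs-ICEV utility
price_coef = -rng.uniform(0.5, 2.0, n_resp)    # disutility per price unit
icev_price = 1.0                               # normalized ICEV price

ev_prices = rng.normal(0.9, 0.15, n_draws)     # learning-curve price forecast
shares = []
for p in ev_prices:
    u_ev = u_ev_base + price_coef * p
    u_icev = price_coef * icev_price
    shares.append(np.mean(u_ev > u_icev))      # fraction choosing the EV

shares = np.array(shares)
print(f"EV share: mean {shares.mean():.1%}, 90% interval "
      f"[{np.quantile(shares, 0.05):.1%}, {np.quantile(shares, 0.95):.1%}]")
```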

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 53
4176 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might instead find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim at analyzing how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie in the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' production mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 144
4175 Mathematical Modelling of Different Types of Body Support Surface for Pressure Ulcer Prevention

Authors: Mahbub C. Mishu, Venktesh N. Dubey, Tamas Hickish, Jonathan Cole

Abstract:

Pressure ulcers are a common problem for today's healthcare industry. They occur due to external loads applied to the skin. When a subject is immobile for a long period of time and a continuous load is applied to a particular area of the body, blood flow is reduced and, as a result, a pressure ulcer develops. The body support surface has a significant role in preventing ulceration, so it is important to know the characteristics of support surfaces under loading conditions. In this paper, we present mathematical models of different types of viscoelastic materials and show the validation of our simulation results against experiments.
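One of the classic spring-dashpot idealizations of a viscoelastic support surface is the Kelvin-Voigt model, σ(t) = E·ε(t) + η·dε/dt; a sketch with illustrative foam-like parameters, not values fitted to the paper's experiments:

```python
import numpy as np

# Kelvin-Voigt response to an imposed compression history. E and eta are
# illustrative orders of magnitude for a soft support material.
E, eta = 50e3, 2e3                      # Pa and Pa*s
t = np.linspace(0, 5, 501)
eps = 0.2 * (1 - np.exp(-2 * t))        # compression ramping to 20%
deps = np.gradient(eps, t)              # strain rate
sigma = E * eps + eta * deps            # total stress: elastic + viscous

print(f"initial stress {sigma[0]/1e3:.1f} kPa, "
      f"steady state {sigma[-1]/1e3:.1f} kPa")
```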

Keywords: pressure ulcer, viscoelastic material, mathematical model, experimental validation

Procedia PDF Downloads 297
4174 Methods for Business Process Simulation Based on Petri Nets

Authors: K. Shoylekova, K. Grigorova

Abstract:

Petri nets were the first standard for business process modeling, which is most probably one of the core reasons why all standards created afterwards are designed so that they can be mapped onto Petri nets. The paper presents a business process repository based on a universal database. The repository provides the possibility of storing the data about a given process in three different ways. The business process repository is developed with regard to the transformation of a given model into a Petri net so that it can be easily simulated. Two different techniques for business process simulation based on Petri nets, Yasper and Woflan, are discussed, and their advantages and drawbacks are outlined. The way of simulating business process models stored in the business process repository is shown.
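The token-game semantics that tools like Yasper and Woflan build on can be sketched in a few lines: a transition is enabled when its input places hold enough tokens, and firing it moves tokens from inputs to outputs. The toy net below is illustrative, not one of the repository's stored models:

```python
# Minimal Petri-net token-game simulation: fire enabled transitions until
# none remains. A toy two-step process (submit -> approve).
places = {"start": 1, "review": 0, "done": 0}
transitions = [
    {"in": {"start": 1}, "out": {"review": 1}},   # submit
    {"in": {"review": 1}, "out": {"done": 1}},    # approve
]

def enabled(t):
    return all(places[p] >= n for p, n in t["in"].items())

fired = True
while fired:
    fired = False
    for t in transitions:
        if enabled(t):
            for p, n in t["in"].items():
                places[p] -= n                    # consume input tokens
            for p, n in t["out"].items():
                places[p] += n                    # produce output tokens
            fired = True

print(places)   # {'start': 0, 'review': 0, 'done': 1}
```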

Keywords: business process repository, petri nets, simulation, Woflan, Yasper

Procedia PDF Downloads 355