Search results for: Philips mathematical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17208

9768 An Islamic Microfinance Business Model in Bangladesh and Its Role in Poverty Alleviation

Authors: Abul Hassan

Abstract:

The present socio-economic context and the state of women’s wellbeing in Bangladesh impose many constraints on women’s involvement in income-generating activities. Different studies have shown that the implementation of World Bank structural adjustment policies has had mixed impacts on women and their wellbeing. Islamic microfinance programmes in Bangladesh, which involve poor people and especially women, are used as a tool to combat poverty. Women are specifically targeted by Islamic microfinance under the rural development scheme of Islami Bank Bangladesh, which provides interest-free loans to women’s groups. The programme has a multiplier effect since women invest largely in their households. The aim of this research is twofold: firstly, to confirm or refute a positive link between Islamic microfinance and the socio-economic wellbeing of women in Bangladesh and, secondly, to explore the context in which Islamic microfinance programmes function in Bangladesh and the way their performance can be improved. Based on a structured questionnaire survey, this study addressed two research questions: (1) What can be expected from the offer of Islamic microfinance on the welfare of recipients, and (2) Under what conditions would such an offer be more beneficial? The main result of this study shows that the increase in women’s income and assets played a very important role in enhancing women’s economic independence and sense of self-confidence. An important policy recommendation is that it is necessary to redirect Islamic microfinance towards diversified developmental activities that will contribute, in the long run, to the improvement of the wellbeing of the recipients.

Keywords: business model, Islamic microfinance, women’s wellbeing

Procedia PDF Downloads 375
9767 Impact of Blended Learning in Interior Architecture Programs in Academia: A Case Study of Arcora Garage Academy from Turkey

Authors: Arzu Firlarer, Duygu Gocmen, Gokhan Uysal

Abstract:

There is currently a growing trend among universities towards blended learning. Blended learning is becoming increasingly important in higher education, with the aims of better accomplishing course learning objectives, meeting students’ changing needs and promoting effective learning in both a theoretical and a practical dimension, as in the interior architecture discipline. However, the practical dimension of the discipline cannot be fully supported in the university environment. During the undergraduate program, the practical training, which two different internship programs try to support, cannot fully meet the requirements of blended learning. The shortcoming of the education program frequently expressed by our graduates and employers lies in the practical knowledge and skills dimension of the profession. After a series of meetings for curriculum studies, interviews with the chambers of the profession and meetings with interior architects, a gap between the theoretical and practical training modules is seen as a problem in all interior architecture departments. It is thought that this gap can be closed by a new education model formed through university-industry cooperation within the concept of blended learning. In this context, it is considered that theoretical and applied knowledge accumulation can be provided by creating industry-supported educational environments at the university. In the application process of the interior architecture discipline, the use of materials and technical competence will only be possible with the cooperation of industry and the participation of students in production/manufacture processes as observers and practitioners. Wood manufacturing is an important part of interior architecture applications. Wood production is a sustainable structural process in which production details, material knowledge, and process details can be observed in the most effective way. From this point of view, after theoretical training about wooden materials, wood applications and production processes is given to the students, practical training in production/manufacture planning is supported by active participation and observation in the processes. With this blended model, we aimed to develop a training model in which theoretical and practical knowledge related to the production of wood works is conveyed in a meaningful, lasting way by means of university-industry cooperation. The project is carried out in Ankara by Arcora Architecture and Furniture Company and the Başkent University Department of Interior Design, where university-industry cooperation is realized. Within the scope of the project, the video of each week’s lecture is recorded and prepared to be disseminated through digital media such as Udemy. In this sense, the program is developed not only by the project participants but also by other institutions and people who are trained and practice in the field of design. Both academicians from the university and craftsmen with at least 15 years of experience in the wood, metal and dye sectors are preparing new training reference documents for interior architecture undergraduate programs. These reference documents will be a model for the interior architecture departments of other universities and will be used for creating an online education module.

Keywords: blended learning, interior design, sustainable training, effective learning

Procedia PDF Downloads 124
9766 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair

Authors: Seyedvahid Najafi, Viliam Makis

Abstract:

In recent years, maintenance optimization has attracted special attention due to the growth of industrial systems complexity. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operations' reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. Unit 1 hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2, considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ, but instead of general repair, replacement is performed. Results show that policy Ⅰ leads to lower average cost compared with policy Ⅱ. 
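
As a rough illustration of the modeling ingredients described above (not the authors' SMDP optimization itself), the sketch below simulates gamma-process deterioration for unit 1 and applies proportional-hazards control limits; all numerical parameters are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters -- the abstract does not report numerical values.
shape_rate, scale = 0.5, 1.0      # gamma-process increments per inspection interval
h0, gamma_cov = 0.01, 0.08        # baseline hazard and covariate coefficient of the PHM
warning_limit, failure_limit = 0.05, 0.15   # hazard-rate control limits for unit 1
dt = 1.0                          # inspection interval

def simulate_unit1(max_steps=200):
    """Simulate unit-1 deterioration (gamma process) and return the first times
    the PHM hazard rate crosses each control limit."""
    z, t = 0.0, 0.0
    t_warning = None
    for _ in range(max_steps):
        t += dt
        z += rng.gamma(shape_rate * dt, scale)           # stationary gamma increments
        hazard = h0 * np.exp(gamma_cov * z)               # PHM: h(t|z) = h0 * exp(gamma * z)
        if t_warning is None and hazard >= warning_limit:
            t_warning = t                                 # opportunistic-maintenance trigger
        if hazard >= failure_limit:
            return t_warning, t                           # preventive-intervention trigger
    return t_warning, t

times = [simulate_unit1() for _ in range(1000)]
print("mean time to warning limit :", np.mean([w for w, f in times if w is not None]))
print("mean time to failure limit :", np.mean([f for _, f in times]))
```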

Keywords: condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems

Procedia PDF Downloads 104
9765 Thermal Analysis and Optimization of a High-Speed Permanent Magnet Synchronous Motor with Toroidal Windings

Authors: Yuan Wan, Shumei Cui, Shaopeng Wu

Abstract:

Toroidal windings are used to reduce the axial length of the motor, so as to suit applications that impose severe restrictions on axial length. However, slotting in the outer edge of the stator decreases the heat-dissipation capacity of the water cooling of the housing. Besides, the windings in the outer slots increase the copper loss, which further increases the difficulty of heat dissipation in the motor. At present, carbon-fiber composite retaining sleeves are increasingly mounted over the magnets to ensure rotor strength at high speeds. Due to the poor thermal conductivity of the carbon-fiber sleeve, cooling of the rotor becomes very difficult, which may result in irreversible demagnetization of the magnets at excessively high temperature. It is therefore necessary to analyze the temperature rise of such a motor. This paper builds a computational fluid dynamics (CFD) model of a toroidal-winding high-speed permanent magnet synchronous motor (PMSM) with water cooling of the housing and forced air cooling of the rotor. Thermal analysis was carried out based on the model, and the factors that affect the temperature rise were investigated. Thermal optimization of the prototype was then achieved. Finally, a small-size prototype was manufactured and the thermal analysis results were verified.

Keywords: thermal analysis, temperature rise, toroidal windings, high-speed PMSM, CFD

Procedia PDF Downloads 479
9764 Co-Gasification Process for Green and Blue Hydrogen Production: Innovative Process Development, Economic Analysis, and Exergy Assessment

Authors: Yousaf Ayub

Abstract:

A co-gasification process, which involves the utilization of both biomass and plastic waste, has been developed to enable the production of blue and green hydrogen. To support this endeavor, an Aspen Plus simulation model has been meticulously created, and sustainability analysis is being conducted, focusing on economic viability, energy efficiency, advanced exergy considerations, and exergoeconomics evaluations. In terms of economic analysis, the process has demonstrated strong economic sustainability, as evidenced by an internal rate of return (IRR) of 8% at a process efficiency level of 70%. At present, the process has the potential to generate approximately 1100 kWh of electric power, with any excess electricity, beyond meeting the process requirements, capable of being harnessed for green hydrogen production via an alkaline electrolysis cell (AEC). This surplus electricity translates to a potential daily hydrogen production of around 200 kg. The exergy analysis of the model highlights that the gasifier component exhibits the lowest exergy efficiency, resulting in the highest energy losses, amounting to approximately 40%. Additionally, advanced exergy analysis findings pinpoint the gasifier as the primary source of exergy destruction, totaling around 9000 kW, with associated exergoeconomics costs amounting to 6500 $/h. Consequently, improving the gasifier's performance is a critical focal point for enhancing the overall sustainability of the process, encompassing energy, exergy, and economic considerations.
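
The internal rate of return quoted above can in principle be reproduced from a discounted cash-flow series; the sketch below shows the calculation with purely hypothetical cash flows, since the abstract does not report the underlying figures.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change of the NPV)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical cash flows (million USD): capital outlay followed by 20 years of net revenue.
cashflows = [-50.0] + [5.6] * 20
print(f"IRR = {irr(cashflows):.1%}")   # roughly 9% for these assumed figures
```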

Keywords: blue hydrogen, green hydrogen, co-gasification, waste valorization, exergy analysis

Procedia PDF Downloads 43
9763 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five, the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
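
The incremental-validity comparison described above boils down to the gain in R² when BESSA scores are added to a BFF regression. A minimal sketch with synthetic stand-in data (the real item-level data are not available here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1178                                   # pooled sample size reported in the abstract

# Synthetic stand-in data: 5 BFF trait scores, 5 BESSA skill scores, and life satisfaction.
bff = rng.normal(size=(n, 5))
bessa = 0.5 * bff + rng.normal(size=(n, 5))            # skills correlate ~0.4-0.6 with traits
life_sat = bff @ np.full(5, 0.3) + bessa @ np.full(5, 0.15) + rng.normal(size=n)

r2_bff = LinearRegression().fit(bff, life_sat).score(bff, life_sat)
both = np.hstack([bff, bessa])
r2_both = LinearRegression().fit(both, life_sat).score(both, life_sat)

print(f"R2 (BFF only)    : {r2_bff:.3f}")
print(f"R2 (BFF + BESSA) : {r2_both:.3f}")
print(f"Incremental R2   : {r2_both - r2_bff:.3f}")    # the gain BESSA adds over BFF
```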

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 82
9762 Conceptualizing Personalized Learning: Review of Literature 2007-2017

Authors: Ruthanne Tobin

Abstract:

As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice & choice) in their students? c) What kinds of systems, processes and procedures are being used to guide the innovation? d) How is learning organized, monitored and assessed? e) What role do inquiry based models play? f) How do teachers integrate the three types of knowledge: Content, pedagogical and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? One finding of the review shows that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies in order to address students’ needs, interests and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase conceptual clarity that has hampered both the theorizing and the classroom implementation of a personalized learning model.

Keywords: curriculum change, educational innovation, personalized learning, school reform

Procedia PDF Downloads 204
9761 Study of Superconducting Patch Printed on Electric-Magnetic Substrate Materials

Authors: Fortaki Tarek, S. Bedra

Abstract:

In this paper, the effects of both uniaxial anisotropy in the substrate and a high Tc superconducting patch on the resonant frequency, half-power bandwidth, and radiation patterns are investigated using an electric field integral equation and the spectral domain Green’s function. The analysis is based on a full electromagnetic wave model in which London’s equations and the Gorter-Casimir two-fluid model are incorporated to investigate the resonant and radiation characteristics of a high Tc superconducting rectangular microstrip patch in the case where the patch is printed on electric-magnetic uniaxially anisotropic substrate materials. The stationary phase technique has been used for computing the radiation electric field. The obtained results demonstrate a considerable improvement in the half-power bandwidth of the rectangular microstrip patch when a superconducting patch is used instead of a perfectly conducting one. Further results show that a high Tc superconducting rectangular microstrip patch on a uniaxial substrate with properly selected electric and magnetic anisotropy ratios is more advantageous than one on an isotropic substrate, exhibiting a wider bandwidth and better radiation characteristics. This behavior agrees with that observed experimentally for superconducting patches on isotropic substrates. The calculated results have been compared with measured ones available in the literature, and excellent agreement has been found.

Keywords: high Tc superconducting microstrip patch, electric-magnetic anisotropic substrate, Galerkin method, surface complex impedance with boundary conditions, radiation patterns

Procedia PDF Downloads 427
9760 Rocket Launch Simulation for a Multi-Mode Failure Prediction Analysis

Authors: Mennatallah M. Hussein, Olivier de Weck

Abstract:

The advancement of space exploration demands a robust space launch services program capable of reliably propelling payloads into orbit. Despite rigorous testing and quality assurance, launch failures still occur, leading to significant financial losses and jeopardizing mission objectives. Traditional failure prediction methods often lack the sophistication to account for multi-mode failure scenarios, as well as the predictive capability in complex dynamic systems. Traditional approaches also rely on expert judgment, leading to variability in risk prioritization and mitigation strategies. Hence, there is a pressing need for robust approaches that enhance launch vehicle reliability from lift-off until it reaches its parking orbit through comprehensive simulation techniques. In this study, the developed model proposes a multi-mode launch vehicle simulation framework for predicting failure scenarios when incorporating new technologies, such as new propulsion systems or advanced staging separation mechanisms in the launch system. To this end, the model combined a 6-DOF system dynamics with comprehensive data analysis to simulate multiple failure modes impacting launch performance. The simulator utilizes high-fidelity physics-based simulations to capture the complex interactions between different subsystems and environmental conditions.
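
The abstract describes a 6-DOF, high-fidelity framework; as a much-reduced illustration of how a single failure mode can be injected into an ascent simulation, the sketch below integrates a vertical point-mass trajectory with an assumed premature thrust cutoff. All vehicle parameters are invented for illustration.

```python
import numpy as np

def ascent(thrust_cutoff_s, dt=0.1, t_end=120.0):
    """Vertical point-mass ascent with an injected failure mode: thrust loss at a given time.
    All vehicle parameters below are assumed for illustration only."""
    g, rho0, H = 9.81, 1.225, 8500.0           # gravity, sea-level density, scale height
    m_dry, m_prop, mdot = 20e3, 80e3, 400.0    # dry mass [kg], propellant [kg], flow [kg/s]
    thrust, cd, area = 1.6e6, 0.3, 10.0        # thrust [N], drag coefficient, area [m^2]
    m, v, h = m_dry + m_prop, 0.0, 0.0
    for t in np.arange(0.0, t_end, dt):
        burning = (m > m_dry) and (t < thrust_cutoff_s)
        f_thrust = thrust if burning else 0.0
        drag = 0.5 * rho0 * np.exp(-h / H) * cd * area * v * abs(v)
        a = (f_thrust - drag) / m - g
        v += a * dt
        h += v * dt
        if burning:
            m -= mdot * dt
        if h < 0:                              # vehicle has fallen back before t_end
            return t, 0.0
    return t_end, h

for cutoff in (200.0, 60.0):                   # nominal burn vs. premature thrust loss at 60 s
    t, h = ascent(cutoff)
    print(f"thrust cutoff {cutoff:5.1f} s -> altitude {h/1000:7.1f} km at t = {t:5.1f} s")
```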

Keywords: launch vehicle, failure prediction, propulsion anomalies, rocket launch simulation, rocket dynamics

Procedia PDF Downloads 9
9759 Performance Evaluation of Reinforced Concrete Framed Structure with Steel Bracing and Supplemental Energy Dissipation

Authors: Swanand Patil, Pankaj Agarwal

Abstract:

In the past few decades, seismic performance objectives have shifted from earthquake resistance to earthquake resilience of structures, especially for lifeline buildings. Features such as negligible post-earthquake damage and replaceable damaged components make energy dissipating systems a valid choice for a seismically resilient building. In this study, various energy dissipation devices are applied to an eight-storey moment-resisting RC building model. The energy dissipating devices include both hysteresis-based and viscous types of devices. The seismic response of the building is obtained for different positioning and mechanical properties of the devices. The investigation is extended to a deficiently ductile RC frame as well. The performance assessment is done on the basis of drift ratio, mode shapes and displacement response of the model structures. Nonlinear dynamic analysis shows a largely improved displacement response. The damping devices improve the displacement response more efficiently in the deficiently ductile frames than in the perfectly moment-resisting frames. This finding is important considering the number of deficient buildings in India and the world. The placement and mechanical properties of the dampers prove to be a crucial part in modelling, analyzing and designing structures with supplemental energy dissipation.

Keywords: earthquake resilient structures, lifeline buildings, retrofitting of structures, supplemental energy dissipation

Procedia PDF Downloads 335
9758 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
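
A minimal NumPy sketch of the idea (not the BindsNET implementation used in the study): a pairwise "synaptic relation" matrix modulates each incoming spike by the recent activity of neighbouring synapses, so a forward and a reversed input sequence produce different peak membrane potentials. All parameter values are assumed.

```python
import numpy as np

n_in = 5                         # five input neurons, as in the illustration above
w = np.full(n_in, 1.0)           # synaptic weights
relation = np.triu(np.random.default_rng(2).uniform(-0.5, 0.5, (n_in, n_in)), 1)
relation += relation.T           # symmetric "synaptic relation" variables between synapse pairs
tau_m, tau_stp, dt = 20.0, 10.0, 1.0

def response(order):
    """Peak membrane potential of one neuron when the inputs fire in the given order."""
    v, trace, peak = 0.0, np.zeros(n_in), 0.0
    for step in range(60):
        v *= np.exp(-dt / tau_m)                 # leaky integration at the soma
        trace *= np.exp(-dt / tau_stp)           # decaying trace of recent pre-synaptic activity
        mod = relation @ trace                   # neighbouring activity potentiates/depresses a synapse
        if step < len(order) * 10 and step % 10 == 0:
            i = order[step // 10]                # one input spike every 10 ms
            v += w[i] * (1.0 + mod[i])           # post-synaptic impact depends on other synapses
            trace[i] += 1.0
        peak = max(peak, v)
    return peak

forward = [0, 1, 2, 3, 4]
print("forward  sequence peak:", round(response(forward), 3))
print("reversed sequence peak:", round(response(forward[::-1]), 3))
```

With the relation matrix set to zero the two peaks would coincide, which is exactly the order-insensitivity of the plain LIF neuron described above.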

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 115
9757 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System

Authors: Zhang Kehan, Du Luona

Abstract:

Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, transfer distance, etc., for improving the efficiency of the WPT system, which is a decisive factor in practical applications. Moreover, coil structures such as the spiral circular coil and the helical coil with variable distance between two turns also have indispensable effects on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable distance between two turns, and experimental results show that the efficiency of the spiral circular coil with an optimum distance between two turns is the highest. According to the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ as a measure of the efficiency of spiral circular coil and helical coil WPT systems. If the distance between two turns s is too small, proximity effect theory shows that the induced current in the conductor, caused by the variable flux created by the current flowing in the skin of the neighbouring conductor, is in the opposite direction to the source current and has an appreciable impact on the coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. The aforementioned analysis shows that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is one vital issue in designing WPT system structures. Limited by the system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm. In other words, the number of turns N changes with s. In both the spiral circular and helical structures, the distance between each two turns in the secondary coil is set to a constant value of 1 mm to guarantee that R₂ is not variable. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ when the distance between each two turns in the primary coil, sₚ, varies from 0 mm to 10 mm. In the two structure models, the distance between the primary and secondary coils is 50 mm and the wire diameter is 1.5 mm. The numbers of turns in the secondary coil are 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm, respectively, for which the value of M²/R₁ is the largest. The spiral circular coil is obviously the first choice for designing the WPT system, since the value of M²/R₁ in the spiral circular coil is larger than that in the helical coil under the same conditions.
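
A small sketch of how the M²/R₁ figure of merit and the link efficiency could be screened for candidate turn spacings; the efficiency expression is one common textbook form for a series-series compensated system at resonance, and all numerical values (M, R₁, R₂, load, frequency) are assumed rather than taken from the study.

```python
import numpy as np

def ss_efficiency(M, R1, R2, RL, f=85e3):
    """Link efficiency of a series-series compensated WPT link at resonance.
    eta = (w^2 M^2 RL) / ((R2 + RL) * (R1 * (R2 + RL) + w^2 M^2))  -- one common textbook form."""
    w = 2 * np.pi * f
    return (w**2 * M**2 * RL) / ((R2 + RL) * (R1 * (R2 + RL) + w**2 * M**2))

# Hypothetical sweep: the turn spacing s changes both the primary resistance R1 (proximity
# effect) and the mutual inductance M; the values below are assumed for illustration only.
for s_mm, M_uH, R1_ohm in [(0.5, 12.0, 0.50), (1.0, 11.5, 0.32), (2.0, 10.8, 0.25), (4.0, 9.5, 0.22)]:
    fom = (M_uH * 1e-6) ** 2 / R1_ohm                      # the M^2/R1 figure of merit
    eta = ss_efficiency(M_uH * 1e-6, R1_ohm, R2=0.25, RL=10.0)
    print(f"s = {s_mm:3.1f} mm  M^2/R1 = {fom:.2e}  efficiency = {eta:.3f}")
```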

Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer

Procedia PDF Downloads 330
9756 Non-Coplanar Nuclei in Heavy-Ion Reactions

Authors: Sahila Chopra, Hemdeep, Arshdeep Kaur, Raj K. Gupta

Abstract:

In recent times, we have noticed an interesting and important role of the non-coplanar degree of freedom (Φ ≠ 0°) in heavy-ion reactions. Using the dynamical cluster-decay model (DCM) with the Φ degree of freedom included, we have studied three compound systems, ²⁴⁶Bk*, ¹⁶⁴Yb* and ¹⁰⁵Ag*. Here, within the DCM with the pocket formula for the nuclear proximity potential, we look for the effects of including compact, non-coplanar configurations (Φc ≠ 0°) on the non-compound nucleus (nCN) contribution in the total fusion cross section σfus. For ²⁴⁶Bk*, formed in the ¹¹B+²³⁵U and ¹⁴N+²³²Th reaction channels, the DCM with coplanar nuclei (Φc = 0°) shows an nCN contribution for the ¹¹B+²³⁵U channel, but none for the ¹⁴N+²³²Th channel, and including Φ gives both reaction channels as pure compound nucleus decays. In the case of ¹⁶⁴Yb*, formed in ⁶⁴Ni+¹⁰⁰Mo, the small nCN effects for Φ = 0° are reduced to almost zero for Φ ≠ 0°. Interestingly, however, ¹⁰⁵Ag* for Φ = 0° shows a small nCN contribution, which gets strongly enhanced for Φ ≠ 0°, such that the characteristic property of PCN presents a change of behaviour, like that from a strongly fissioning superheavy element to a weakly fissioning nucleus; note that ¹⁰⁵Ag* is a weakly fissioning nucleus and Psurv behaves like that of a weakly fissioning nucleus for both Φ = 0° and Φ ≠ 0°. Apparently, Φ is presenting itself as a good degree of freedom in the DCM.

Keywords: dynamical cluster-decay model, fusion cross sections, non-compound nucleus effects, non-coplanarity

Procedia PDF Downloads 286
9755 Cryogenic Separation of CO2 from Molten Carbonate Fuel Cell Anode Outlet—Experimental Guidelines

Authors: Jarosław Milewski, Rafał Bernat

Abstract:

This paper presents an analysis of using a cryogenic separation unit (CSU) for recovering fuel from the anode off-gas of molten carbonate fuel cells (MCFCs) in order to upgrade the efficiency of the unit. In the proposed solution, the CSU is used for condensing water and carbon dioxide out of the anode off-gas and re-cycling the rest of the stream to the anode, saving a certain amount of fuel (at least 30%). The resulting system efficiency is increased considerably. The CSU consumes power, so this solution carries an energy penalty as well; on the other hand, the MCFC generates a large amount of heat at elevated temperature, and thus part of the CSU can be based on an absorption chiller. In all cases, a high amount of fuel is obtained after condensation of water and carbon dioxide and re-cycled to the anode inlet. Based on mathematical modeling done previously, the concept and guidelines for the forthcoming experimental investigations are presented in this paper. During the planned experiments, an existing single-cell laboratory stand will be equipped with a re-cycle device (a fan, a peristaltic pump, etc.). In parallel, a mixture of anode off-gas will be cooled down to determine the proper temperature for the separation of water and carbon dioxide.

Keywords: cryogenic separation, experiments, fuel cells, molten carbonate fuel cells

Procedia PDF Downloads 232
9754 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance by using its strength in the distribution channel while the market share of independent agents is decreasing. Starting with the main business model of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market by balance sheet indicators and by the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis starting with variables and indicators to define the classification of business models. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The result of the analysis is a representation of the main business models characterized by their profiles on the indicators. In this way, an unsupervised analysis is developed that has the limitation of its judgmental dimension, based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
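
A minimal sketch of the Ward-clustering step described above, using hypothetical balance-sheet indicators (the ANIA data are not reproduced here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(3)

# Hypothetical balance-sheet indicators for 30 life bancassurance companies
# (e.g., premium growth, expense ratio, investment yield, solvency ratio).
X = rng.normal(size=(30, 4))
X[:10] += 2.0                      # build in two rough "business model" groups for illustration
Xz = zscore(X, axis=0)             # standardise the indicators before clustering

Z = linkage(Xz, method="ward")     # Ward's minimum-variance hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into 3 profiles

for k in np.unique(labels):
    profile = Xz[labels == k].mean(axis=0)
    print(f"business model {k}: n = {(labels == k).sum():2d}, mean indicator profile = {np.round(profile, 2)}")
```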

Keywords: bancassurance, business model, non life bancassurance, insurance business value drivers

Procedia PDF Downloads 284
9753 Discursive Examination of 8th Grade Students’ Geometric Thinking Levels

Authors: Ferdağ Çulhan, Emine Gaye Çontay

Abstract:

Geometric thinking levels created by Van Hiele are used to determine students' progress in geometric thinking. Many studies have been conducted on geometric thinking levels, and they have taken their place in teaching curricula over time. It is thought that geometric thinking levels, which have become so important in teaching, can be examined in depth. In order to make an in-depth analysis, it was decided that the most appropriate method was discourse analysis. In this study, the focus is on examining the geometric thinking levels of 8th grade students from a discursive point of view. Sfard's (2008) "commognitive" theory will be used to conduct the discursive analysis. The "Global Van Hiele Questionnaire" created by Patkin (2014) and translated into Turkish for this research will be used. The "Global Van Hiele Questionnaire" contains questions from the sub-learning domains of triangles and quadrilaterals, circles, and geometric objects. It has a wider scope than many other "Van Hiele Questionnaires". The "Global Van Hiele Questionnaire" will be applied to 8th grade students. Then, the geometric thinking levels of the students will be determined and interviews will be held with two students from each of the 1st, 2nd and 3rd levels. The interviews will be recorded and the students' discourses will be examined. By evaluating the relations between the students' geometric thinking levels and their discourses, it will be examined how much their discourse reflects their level of thinking. In this way, it is thought that students' geometric thinking processes can be better understood.

Keywords: mathematical discourses, commognitive framework, geometric thinking levels, van hiele

Procedia PDF Downloads 115
9752 Numerical Simulation of a Point Absorber Wave Energy Converter Using OpenFOAM in Indian Scenario

Authors: Pooja Verma, Sumana Ghosh

Abstract:

There is a growing need for alternative ways of power generation worldwide. The reasons can be attributed to the limited resources of fossil fuels, environmental pollution, the increasing cost of conventional fuels, and the lower efficiency of energy conversion in existing systems. In this context, one of the potential alternatives for power generation is wave energy. However, it is difficult to estimate the amount of electrical energy generated in an irregular sea condition by experimental or analytical methods. Therefore, in this work, a numerical wave tank is developed using the computational fluid dynamics software OpenFOAM. In this software, a specific utility known as waves2Foam is used to carry out the simulation work. The computational domain is a tank of dimensions 5 m × 1.5 m × 1 m with a floating object of dimensions 0.5 m × 0.2 m × 0.2 m. Regular waves are generated at the inlet of the wave tank according to Stokes second-order theory. The main objective of the present study is to validate the numerical model against existing experimental data. It shows good agreement with existing experimental data on floater displacement. Later, the model is exploited to estimate the energy extraction due to the movement of such a point absorber in real sea conditions. Scaled-down wave properties such as wave height and wavelength are used as input parameters. Seasonal variations are also considered.
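
For reference, the inlet wave can be generated from the second-order Stokes surface elevation; the sketch below uses one common textbook form of that expression together with the linear dispersion relation, with tank and wave parameters assumed for illustration.

```python
import numpy as np

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation w^2 = g k tanh(k h) by fixed-point iteration."""
    w = 2 * np.pi / T
    k = w**2 / g                              # deep-water first guess
    for _ in range(100):
        k = w**2 / (g * np.tanh(k * h))
    return k

def eta_stokes2(x, t, H, T, h):
    """Free-surface elevation of a second-order Stokes wave (one common textbook form)."""
    k = wavenumber(T, h)
    w = 2 * np.pi / T
    theta = k * x - w * t
    first = 0.5 * H * np.cos(theta)
    second = (k * H**2 / 16.0) * np.cosh(k * h) * (2 + np.cosh(2 * k * h)) / np.sinh(k * h) ** 3 * np.cos(2 * theta)
    return first + second

# Assumed scaled-down wave in a 1 m deep tank (values for illustration only).
t = np.linspace(0, 4.0, 9)
print(np.round(eta_stokes2(x=2.5, t=t, H=0.1, T=2.0, h=1.0), 4))
```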

Keywords: OpenFOAM, numerical wave tank, regular waves, floating object, point absorber

Procedia PDF Downloads 342
9751 A Systematic Review Examining the Experimental Methodology behind in vivo Testing of Hiatus Hernia and Diaphragmatic Hernia Mesh

Authors: Whitehead-Clarke T., Beynon V., Banks J., Karanjia R., Mudera V., Windsor A., Kureshi A.

Abstract:

Introduction: Mesh implants are regularly used to help repair both hiatus hernias (HH) and diaphragmatic hernias (DH). In vivo studies are used to test not only mesh safety but increasingly comparative efficacy. Our work examines the field of in vivo mesh testing for HH and DH models to establish current practices and standards. Method: This systematic review was registered with PROSPERO. Medline and Embase databases were searched for relevant in vivo studies. 44 articles were identified and underwent abstract review, where 22 were excluded. 4 further studies were excluded after full text review – leaving 18 to undergo data extraction. Results: Of 18 studies identified, 9 used an in vivo HH model and 9 a DH model. 5 studies undertook mechanical testing on tissue samples – all uniaxial in nature. Testing strip widths ranged from 1-20mm (median 3mm). Testing speeds varied from 1.5-60mm/minute. Upon histology, the most commonly assessed structural and cellular factors were neovascularization and macrophages, respectively (n=9 each). Structural analysis was mostly qualitative, where cellular analysis was equally likely to be quantitative. 11 studies assessed adhesion formation, of which 8 used one of four scoring systems. 8 studies measured mesh shrinkage. Discussion: In vivo studies assessing mesh for HH and DH repair are uncommon. Within this relatively young field, we encourage surgical and materials testing institutions to discuss its standardisation.

Keywords: hiatus, diaphragmatic, hernia, mesh, materials testing, in vivo

Procedia PDF Downloads 202
9750 Evaluation of Kabul BRT Route Network with Application of Integrated Land-use and Transportation Model

Authors: Mustafa Mutahari, Nao Sugiki, Kojiro Matsuo

Abstract:

The four decades of war, lack of job opportunities, poverty, lack of services, and natural disasters in different provinces of Afghanistan have contributed to a rapid increase in the population of Kabul, the capital city of Afghanistan. A population census has not been conducted since 1979, the first and last population census in Afghanistan. However, according to population estimations by Afghan authorities, the population of Kabul is estimated at more than 4 million people, whereas the city was designed for two million people. Although the major transport mode of Kabul residents is public transport, the responsible authorities within the country have failed to supply the required transportation systems for the city. Besides, informal resettlement, lack of intersection control devices, the presence of illegal vendors on streets, the presence of illegal and unstandardized on-street parking and bus stops, unprofessional driver behavior, weak traffic law enforcement, and blocked roads and sidewalks have contributed to the extreme traffic congestion of Kabul. In 2018, the government of Afghanistan approved the Kabul City Urban Design Framework (KUDF), a vision for the future of Kabul, which provides strategies and design guidance at different scales to direct urban development. Considering the traffic congestion of the city and its budget limitations, the KUDF proposes a BRT route network with seven lines to reduce the traffic congestion, and more than 50% of the Kabul population is expected to benefit from this service. Based on the KUDF, it is planned to increase the BRT mode share from 0% to 17% and later to 30% in the medium-term and long-term planning scenarios, respectively. Therefore, a detailed research study is needed to evaluate the proposed system before the implementation stage starts. The integrated land-use and transport model is an effective tool to evaluate the Kabul BRT because of its future assessment capabilities, which take into account the interaction between land use and transportation. This research aims to analyze and evaluate the proposed BRT route network with the application of an integrated land-use and transportation model. The research estimates the population distribution and travel behavior of Kabul at a small zonal scale. The actual road network and detailed land-use data of the city are used to perform the analysis. The BRT corridors are evaluated not only considering their impacts on the spatial interactions in the city's transportation system but also their impacts on spatial developments. Therefore, the BRT is evaluated against scenarios of improving the Kabul transportation system based on the distribution of land use or spatial developments, the planned development typology, and the population distribution of the city. The impacts of the improved transport system on the BRT network are analyzed and the BRT network is evaluated accordingly. In addition, the research also focuses on the spatial accessibility of BRT stops, corridors, and BRT line beneficiaries, and each BRT stop and corridor is evaluated in terms of both access and geographic coverage.
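
A minimal sketch of the geographic-coverage part of the accessibility evaluation: the share of zonal population within an assumed walking radius of a BRT stop. Coordinates, populations and the 500 m threshold are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical small-zone population centroids (lat, lon, population) and BRT stop locations.
zones = [(34.530, 69.170, 12000), (34.540, 69.180, 8000), (34.555, 69.210, 15000), (34.500, 69.130, 9000)]
stops = [(34.532, 69.172), (34.542, 69.179)]
walk_radius_m = 500.0                       # assumed walking-access threshold

covered = sum(pop for lat, lon, pop in zones
              if any(haversine_m(lat, lon, s_lat, s_lon) <= walk_radius_m for s_lat, s_lon in stops))
total = sum(pop for _, _, pop in zones)
print(f"population within {walk_radius_m:.0f} m of a BRT stop: {covered} of {total} ({covered / total:.0%})")
```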

Keywords: accessibility, BRT, integrated land-use and transport model, travel behavior, spatial development

Procedia PDF Downloads 194
9749 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University

Authors: Esther Pearl Palmer

Abstract:

Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative for strengthening the country’s economy and increasing competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching, learning and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component of, and prerequisite for, employment in HRM. The purpose of this research is to reflect on the subject HRIS developed by lecturers at the Central University of Technology, Free State (CUT), with the intention to actively engage students in real-world learning activities and increase their employability. The Enrichment Triad Model (ETM) was used as a theoretical framework to develop the subject, as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics and students. The study follows a mixed-method approach to reflect on the learning experiences of industry, academics and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies about student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase the work-related skills of students. Findings from the study show that students, academics and industry alike regard educational opportunities that incorporate active, work-related learning experiences as enhancing student engagement in learning and rendering students more employable.

Keywords: digital competence, enriched triad model, human resource information systems, student engagement, triad pedagogy

Procedia PDF Downloads 78
9748 The Post-Hegemony of Post-Capitalism: Towards a Political Theory of Open Cooperativism

Authors: Vangelis Papadimitropoulos

Abstract:

The paper is part of the research project “Techno-Social Innovation in the Collaborative Economy'', funded by the Hellenic Foundation of Research and Innovation for the years 2022-2024. The research project examines the normative and empirical conditions of grassroots technologically driven innovation, potentially enabling the transition towards a commons-oriented post-capitalist economy. The project carries out a conceptually led and empirically grounded multi-case study of the digital commons, open-source technologies, platform cooperatives, open cooperatives and Distributed Autonomous Organizations (DAOs) on the Blockchain. The methodological scope of research is interdisciplinary inasmuch as it comprises political theory, economics, sustainability science and computer science, among others. The research draws specifically on Michel Bauwens and Vasilis Kostakis' model of open cooperativism between the commons, ethical market entities and a partner state. Bauwens and Kostakis advocate for a commons-based counter-hegemonic post-capitalist transition beyond and against neoliberalism. The research further employs Laclau and Mouffe's discourse theory of hegemony to introduce a post-hegemonic conceptualization of the model of open cooperativism. Thus, the paper aims to outline the theoretical contribution of the research project to contemporary political theory debates on post-capitalism and the collaborative economy.

Keywords: open cooperativism, techno-social innovation, post-hegemony, post-capitalism

Procedia PDF Downloads 49
9747 Minimum Vertices Dominating Set Algorithm for Secret Sharing Scheme

Authors: N. M. G. Al-Saidi, K. A. Kadhim, N. A. Rajab

Abstract:

Over the past decades, computer networks and data communication systems have been developing fast, so the necessity to protect transmitted data has become a challenging issue, and data security is a serious problem nowadays. A secret sharing scheme is a method that allows a master key to be distributed among a finite set of participants in such a way that only certain authorized subsets of participants can reconstruct the original master key. To create a secret sharing scheme, many mathematical structures have been used; the most widely used structure is the one that is based on graph theory (the graph access structure). Subsequently, many researchers have tried to find efficient schemes based on graph access structures. In this paper, we propose a novel, efficient construction of a perfect secret sharing scheme for a uniform access structure. The dominating set of vertices in a regular graph is used for this construction in the following way: each vertex represents a participant and each minimum independent dominating subset represents a minimal qualified subset. Some relations between the dominating set, graph order and regularity are established and can be used to demonstrate the possibility of using dominating sets to construct a secret sharing scheme. The information rate, which is used as a measure of the efficiency of such systems, is calculated to show that the proposed method has improved values.
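
A small sketch of the central construction step, finding minimum independent dominating sets of a regular graph by exhaustive search; the Petersen graph is used here only as a convenient small example, not as a graph from the paper.

```python
from itertools import combinations

# Adjacency list of the Petersen graph, a 3-regular graph on 10 vertices, used here as a
# small stand-in for the regular graphs discussed above.
adj = {i: set() for i in range(10)}
for i in range(5):
    adj[i] |= {(i + 1) % 5, (i - 1) % 5, i + 5}          # outer 5-cycle plus spoke
    adj[i + 5] |= {i, 5 + (i + 2) % 5, 5 + (i - 2) % 5}  # inner pentagram

def is_independent_dominating(candidate):
    s = set(candidate)
    independent = all(adj[u].isdisjoint(s) for u in s)    # no two chosen vertices are adjacent
    dominating = all(u in s or adj[u] & s for u in adj)   # every vertex is chosen or has a chosen neighbour
    return independent and dominating

def minimum_independent_dominating_sets():
    for size in range(1, len(adj) + 1):
        found = [set(c) for c in combinations(adj, size) if is_independent_dominating(c)]
        if found:
            return found
    return []

sets_ = minimum_independent_dominating_sets()
print("minimum size:", len(sets_[0]))            # each such set acts as one minimal qualified subset
print("example sets:", sets_[:3])
```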

Keywords: secret sharing scheme, dominating set, information rate, access structure, rank

Procedia PDF Downloads 380
9746 Potential and Techno-Economic Analysis of Hydrogen Production from Portuguese Solid Recovered Fuels

Authors: A. Ribeiro, N. Pacheco, M. Soares, N. Valério, L. Nascimento, A. Silva, C. Vilarinho, J. Carvalho

Abstract:

Hydrogen will play a key role in changing the current global energy paradigm, which is associated with the high use of fossil fuels and the release of greenhouse gases. This work intended to identify and quantify the potential of solid recovered fuels (SRF) existing in Portugal and to project the cost of hydrogen produced through their steam gasification in different scenarios, associated with the size or capacity of the plant and the existence of carbon capture and storage (CCS) systems. Therefore, a techno-economic analysis was performed using an ASPEN-based model, the H2A Hydrogen Production Model Version 3.2018. Regarding the production of SRF, it was possible to verify an annual production of more than 200 thousand tons of SRF in Portugal in 2019. The results of the techno-economic simulations showed that in the scenarios containing a high (200,000 tons/year) and a medium (40,000 tons/year) amount of SRF, the cost of hydrogen production was competitive with current hydrogen prices. The results indicate that scenarios 1 and 2, which use 200,000 tons of SRF per year, have the lowest hydrogen production costs, 1.22 USD/kg H2 and 1.63 USD/kg H2, respectively. The cost of producing hydrogen without carbon capture and storage (CCS) systems with a medium amount of SRF (40,000 tons/year) was 1.70 USD/kg H2. In turn, scenarios 5 (without CCS) and 6 (with CCS), which use only 683 tons of SRF from urban sources, have the highest costs, 6.54 USD/kg H2 and 908.97 USD/kg H2, respectively. Therefore, it was possible to conclude that there is a huge potential for the use of SRF for the production of hydrogen through steam gasification in Portugal.

Keywords: gasification, hydrogen, solid recovered fuels, techno-economic analysis, waste-to-energy

Procedia PDF Downloads 109
9745 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated contents (UGC) significantly change the way customers behave (e.g., shop, travel), and a pressing need to handle the overwhelming amount of various UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective for leveraging textual information to detect the problems or issues that a certain management suffers from. In this paper, we employ text mining with Latent Dirichlet Allocation (LDA) on a popular online review site dedicated to complaints from users. We find that the employed LDA efficiently detects customer complaints, and a further inspection with a visualization technique is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them accordingly in a timely manner given the limited amount of resources. The findings provide managerial insights into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also highlights several insights from applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in the R program from beginning (data collection in R) to end (LDA analysis in R), since such instruction is still largely undocumented. In this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
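
The paper implements the workflow in R; purely as an illustration of the same LDA pipeline, the sketch below uses Python and scikit-learn on a handful of invented complaint snippets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical complaint snippets standing in for scraped user-generated content.
reviews = [
    "waited forty minutes for delivery and the food arrived cold",
    "delivery driver was late again and order was missing items",
    "the app keeps crashing when I try to pay with my card",
    "payment failed twice and I was charged twice for one order",
    "customer service never answered my refund request",
    "no reply from support after a week, refund still pending",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)                      # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")                    # candidate failure categories
```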

Keywords: latent dirichlet allocation, R program, text mining, topic model, user generated contents, visualization

Procedia PDF Downloads 177
9744 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown whether surveillance is based on serological tests because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses which exhibited at least one positive result in serology using viral neutralization test between 2006 and 2013 were used for analysis (n=1,645). Data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to consider seroconversion as a change in antibody titer from negative to at least 32 or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks) estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting the antibody titer could be useful regarding other human and animal diseases and zoonosis when there is a lack of accurate information in the literature about the serological response in naturally infected subjects. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system of equine viral arteritis is relatively high and supports its relevance to prevent the disease spreading.
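
A simplified sketch of the unilist zero-truncated binomial idea, assuming a single common detection probability and a Horvitz-Thompson-style total (the study itself uses a Bayesian fit with credible intervals, which is not reproduced here); the data below are simulated stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Hypothetical data: for each detected outbreak (town), the number of horses tested (n)
# and the number of seroconversions observed (y >= 1, hence the zero truncation).
n = rng.integers(2, 15, size=177)                        # 177 detected outbreaks, as above
y = np.array([max(1, rng.binomial(ni, 0.15)) for ni in n])

def neg_loglik(p):
    """Zero-truncated binomial log-likelihood with a common detection probability p."""
    ll = (y * np.log(p) + (n - y) * np.log(1 - p)
          - np.log(1 - (1 - p) ** n))                    # condition on y >= 1
    return -ll.sum()

p_hat = minimize_scalar(neg_loglik, bounds=(1e-4, 0.999), method="bounded").x
pi = 1 - (1 - p_hat) ** n                                # probability a town with n tested horses is detected
N_hat = np.sum(1.0 / pi)                                 # Horvitz-Thompson-style estimate of all outbreaks
print(f"p_hat = {p_hat:.3f}, estimated total outbreaks = {N_hat:.0f}, "
      f"sensitivity ~ {len(n) / N_hat:.0%}")
```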

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 282
9743 Anti-Anxiety Activity of Ethyl Acetate Extract of Flowers Nerium indicum

Authors: Deepak Suresh Mohale, Anil V. Chandewar

Abstract:

Anxiety is defined as an exaggerated feeling of apprehension, uncertainty and fear. Nerium indicum is a well-known ornamental and medicinal plant belonging to the family Apocynaceae. A wide spectrum of biological activities has been reported for various constituents isolated from different parts of the plant. This study was conducted to investigate the antianxiety activity of the flower extract. Flowers were collected, dried in shade and coarsely powdered. The powdered material was extracted with ethyl acetate by the maceration process. The flower extract obtained was subsequently dried in an oven at 40-50 °C. This extract was then tested for antianxiety activity at low and high doses using the elevated plus maze and the light and dark models. Rats showed increased open-arm entries and time spent in the open arms of the elevated plus maze after treatment with low and high doses of the Nerium indicum flower extract compared with their respective control groups. In the light and dark model, light-box entries and time spent in the light box increased after treatment with low and high doses of the extract compared with the respective control groups. From these results, it is concluded that the ethyl acetate extract of Nerium indicum flowers possesses antianxiety activity at both low and high doses.

Keywords: antianxiety, anxiety, kaner, nerium indicum, social isolation

Procedia PDF Downloads 381
9742 Effect of Quenching Medium on the Hardness of Dual Phase Steel Heat Treated at a High Temperature

Authors: Tebogo Mabotsa, Tamba Jamiru, David Ibrahim

Abstract:

Dual phase (DP) steel consists essentially of fine-grained equiaxed ferrite and a dispersion of martensite. Martensite is the primary precipitate in DP steels and provides the main resistance to dislocation motion within the material. The objective of this paper is to present a relation between the intercritical annealing holding time and the hardness of a dual phase steel. The initial heat treatment involved heating the specimens to 1000 °C and holding the samples at that temperature for 30 minutes. After the initial heat treatment, the samples were heated to 770 °C and held at constant temperature for 30, 60, and 90 minutes, respectively. After heating and holding the samples in the austenite-ferrite phase field, the samples were quenched in water, brine, and oil for each holding time. The experimental results show that an equation for predicting the hardness of a dual phase steel as a function of the intercritical holding time is possible. The relation between the intercritical annealing holding time and the hardness of a dual phase steel heat treated at high temperatures is parabolic in nature. Theoretically, the model is dependent on the cooling rate, because the model differs for each quenching medium; therefore, a universal hardness equation could be derived in which the cooling rate is a variable factor.
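
A minimal sketch of fitting the parabolic hardness-versus-holding-time relation for each quenching medium; the hardness values below are invented placeholders, since the measurements are not reported in the abstract.

```python
import numpy as np

# Hypothetical hardness readings (HV) for each quenching medium at the three holding times;
# the actual measurements are not reproduced in the abstract.
holding_min = np.array([30, 60, 90])
hardness = {
    "water": np.array([310, 335, 325]),
    "brine": np.array([325, 350, 338]),
    "oil":   np.array([280, 300, 292]),
}

for medium, hv in hardness.items():
    a, b, c = np.polyfit(holding_min, hv, deg=2)          # HV = a*t^2 + b*t + c (parabolic in t)
    t_peak = -b / (2 * a)                                 # holding time of maximum hardness
    print(f"{medium:5s}: HV = {a:+.4f} t^2 {b:+.3f} t {c:+.1f},  peak near t = {t_peak:.0f} min")
```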

Keywords: quenching medium, annealing temperature, dual phase steel, martensite

Procedia PDF Downloads 69
9741 Characterization of InGaAsP/InP Quantum Well Lasers

Authors: K. Melouk, M. Dellakrachaï

Abstract:

An analytical formula for the optical gain, based on a simple parabolic-band model and introducing theoretical expressions for the quantized energy levels, is presented. The model used in this treatment takes into account the effects of intraband relaxation. It is shown, as a result, that the gain for the TE mode is larger than that for the TM mode and that the presence of acceptor impurities increases the peak gain.

Keywords: InGaAsP, laser, quantum well, semiconductor

Procedia PDF Downloads 359
9740 Computational Fluid Dynamics Simulation of Reservoir for Dwell Time Prediction

Authors: Nitin Dewangan, Nitin Kattula, Megha Anawat

Abstract:

The hydraulic reservoir is a key component in mobile construction vehicles; most off-road earth-moving construction machinery requires large side-mounted hydraulic reservoirs. Their construction is very non-uniform, and designers use such designs to utilize the space available under the vehicle. There is no way to assess the space utilization of the reservoir by oil and the validity of the design except through virtual simulation. Computational fluid dynamics (CFD) helps to predict the reservoir space utilization by vortex mapping, path-line plots and dwell time prediction, to make sure the design is valid and efficient for the vehicle. The dwell time acceptance criterion for an effective reservoir design is 15 seconds. The paper describes the hydraulic reservoir simulation, which is carried out using the CFD tool AcuSolve with an automated meshing strategy. Free-surface flow and a moving reference mesh are used to define the oil level inside the reservoir. The first baseline design was not able to meet the acceptance criterion, i.e., its dwell time was below 15 seconds, because the oil entry and exit ports were very close to each other. CFD is used to redefine the port locations for the reservoir so that the oil dwell time increases. CFD analysis also proposed a baffle design for effective space utilization. The final design proposed through the CFD analysis is used for physical validation on the machine.

Keywords: reservoir, turbulence model, transient model, level set, free-surface flow, moving frame of reference

Procedia PDF Downloads 136
9739 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine

Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are superior in terms of efficiency, reliability and adaptability. Most research and development up till now has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to maximize acceleration while reducing exhaust emissions to meet international standards. In high torque, low-speed engines the requirements are altogether different. These types of engines are mostly used in the maritime industry, the agriculture industry, static engines, compressors, etc. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions, and one of the most effective ways to overcome these issues is efficient combustion in the engine cylinder; the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process is of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to capture the effect of aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin–Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, which is an open-source flow solver. Spray characteristics like spray penetration, liquid length, spray cone angle and Sauter mean diameter (SMD) were validated by comparing the results of OpenFOAM and MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray. All these properties favor early evaporation of the fuel, which enhances engine efficiency.
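
As a small side illustration of one of the validated spray characteristics, the Sauter mean diameter of a droplet sample can be computed directly from its definition; the droplet-size distributions below are assumed, not simulation output.

```python
import numpy as np

def sauter_mean_diameter(d):
    """Sauter mean diameter (SMD, d32): total droplet volume over total droplet surface,
    i.e. sum(d^3) / sum(d^2) for a sample of droplet diameters d."""
    d = np.asarray(d, dtype=float)
    return (d ** 3).sum() / (d ** 2).sum()

# Hypothetical post-breakup droplet diameters (micrometres) from two model variants.
rng = np.random.default_rng(5)
d_kh = rng.lognormal(mean=np.log(18.0), sigma=0.35, size=5000)      # KH only
d_kh_act = rng.lognormal(mean=np.log(14.0), sigma=0.35, size=5000)  # KH-ACT: enhanced breakup

print(f"SMD, KH model     : {sauter_mean_diameter(d_kh):6.1f} um")
print(f"SMD, KH-ACT model : {sauter_mean_diameter(d_kh_act):6.1f} um")   # smaller droplets expected
```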

Keywords: Kelvin–Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence

Procedia PDF Downloads 197