Search results for: modal assurance criterion
97 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer
Authors: Ke Zhang, Yongli Zhu
Abstract:
Due to rapid load growth in today’s highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system whose transmission-line network topology is complex. Moreover, it is located in the complex terrain of mountains and grasslands, which increases the possibility of transmission line faults, makes the fault location difficult to find after a fault, and results in an extremely serious phenomenon of wind curtailment. In order to solve these problems, a fault location method for multi-terminal transmission lines based on wind farm characteristics and an improved single-ended traveling wave positioning method is proposed. By studying the zero-sequence current characteristics using the properties of the grounding transformer (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point. Therefore, the interval containing the fault point is obtained by determining the path of the zero-sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. However, this article uses an improved traveling wave method: positioning accuracy is improved by combining the single-ended traveling wave method with double-ended electrical data. Furthermore, a method of calculating the traveling wave velocity is derived from the above improvements (in theory, it is the actual wave velocity). The improved traveling wave velocity calculation further increases the positioning accuracy. Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. This method overcomes the shortcomings of the traditional method in locating faults on wind farm transmission lines. In addition, it is more accurate than the traditional fixed-wave-velocity method in calculating the traveling wave velocity: it can calculate the wave velocity in real time according to site conditions and thus solves the problem that the traveling wave velocity cannot be updated with the environment in real time. The method is verified in PSCAD/EMTDC.
Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm
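A minimal sketch of the double-ended traveling-wave principle that underlies the improvement described in the abstract above (combining arrival times from both terminals and estimating the wave velocity from the line itself); the function names, units and numerical values are illustrative assumptions, not the authors' implementation:

```python
def wave_velocity_from_external_event(line_length_km, t_arrive_a_s, t_arrive_b_s):
    """Estimate the wave velocity from a disturbance originating outside the line:
    the wavefront traverses the whole line, so v = L / |tA - tB|."""
    return line_length_km / abs(t_arrive_a_s - t_arrive_b_s)


def locate_fault_double_ended(line_length_km, t_arrive_a_s, t_arrive_b_s, velocity_km_s):
    """Classical synchronized two-terminal formula, distance measured from terminal A:
    d = (L + v * (tA - tB)) / 2."""
    return 0.5 * (line_length_km + velocity_km_s * (t_arrive_a_s - t_arrive_b_s))


# Illustrative numbers only: a 300 km line and a wave speed close to the speed of light.
v = wave_velocity_from_external_event(300.0, 0.0, 300.0 / 2.9e5)   # ~2.9e5 km/s
d = locate_fault_double_ended(300.0, 100.0 / v, 200.0 / v, v)       # fault 100 km from A
print(f"wave velocity ~ {v:.3e} km/s, fault located {d:.1f} km from terminal A")
```

In practice the arrival times would come from time-synchronized wavefront detectors at each terminal; estimating the velocity from a traversing disturbance is one common way to avoid assuming a fixed wave speed.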
Procedia PDF Downloads 263
96 A Case Study of Wildlife Crime in Bangladesh
Authors: M. Golam Rabbi
Abstract:
The theme of wildlife crime is unique in Bangladesh. Prior to 2010, wildlife crime was not designated as a crime, unlike other offenses. The Forest Department and other enforcement agencies were not in full swing to uncover organized crime at that time and recorded few cases, lumped together with forest crime. However, after the establishment of the Wildlife Crime Control Unit in 2012, a total of 374 offenses were detected, with 566 offenders, and 37,039 wildlife specimens and trophies were seized up to November 2016. Most offenses seem to be committed outside the forests, where the presence of forest staff is minimal. The total detection percentage of offenses is not known, but offenders are not identified in 60% of detected cases (UDOR). Only 20% of cases have been decided by the courts even after eight years; the conviction rate of the total disposals is 70.65%. Six months’ imprisonment and a BDT 5,000 fine seems to be the modal penalty. The monetary value of wildlife crime in the country is approximately $0.72M per year, with the maximum value, around $0.45M, accounted for by reptiles, especially high-level trafficking of geckos and turtles. The most common seizures of wildlife are birds (mynas, munias, parakeets, lorikeets, water birds, etc.), which are in domestic demand as pets. Some other wildlife, such as turtles, lizards and small mammals, are also on the list. Venison and migratory waterbirds are often seized, for which there is a large demand for consumption at the aristocratic level. Due to the porous border and weak enforcement in the border region, poachers use this route for trafficking geckos, turtles and tortoises, snakes, venom, tigers and their body parts, spotted deer skin, pangolins, etc. These have very high demand in East Asian countries for so-called medicinal purposes. A recent survey also demonstrates new routes for illegal trade and trafficking: for instance, tigers and deer poached from the Sundarbans, the largest mangrove tract on the planet, are moved to Thailand through the Bay of Bengal, while shark fins and ray fish pass through Chittagong seaport and directly by sea routes to Myanmar and Thailand. However, a good number of offense records demonstrate the transit route from India to South and South East Asian countries. Star tortoises and Hamilton’s turtles are smuggled in from India, mostly seized at the Benapole border of Jessore and Hazrat Shahjalal International Airport in Dhaka, in very large numbers for onward transmission to East Asian countries. Most of the wildlife trade routes lead to China, Thailand, Malaysia, and Myanmar. Most surprisingly, African ivory was recently seized in Bangladesh, which was meant to be trafficked to South-East Asia. However, the Forest Department is working to fight wildlife poaching, illegal trade and trafficking in collaboration with other law enforcement agencies. The department needs a clear mandate and to build technical capabilities for identifying, seizing and holding specimens. The department also needs to step out of the forests and must develop the capacity to surveil and patrol all sensitive locations across the country.
Keywords: Bangladesh forest department, Sundarban, tiger, wildlife crime, wildlife trafficking
Procedia PDF Downloads 307
95 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits
Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea
Abstract:
Hybrid capacitor configurations are now of increasing interest to overcome the current energy limitations of supercapacitors based entirely on non-Faradaic charge storage. Among them, Li-ion capacitors, which include a negative battery-type lithium intercalation electrode and a positive capacitor-type electrode, have achieved tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost is persistently a major criterion in making supercapacitors a more viable energy solution, with electrode materials being a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are currently under development as a less expensive and more accessible alternative to Li-ion based battery electrodes. In this work, we present both a lithium and a sodium ion capacitor (LIC & NIC) entirely based on electrodes prepared from carbon materials derived from recycled olive pits. Around 1 million tonnes of olive pit waste is generated worldwide every year, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low-specific-surface-area semigraphitic hard carbon to be used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs Na/Na+ in 1M NaPF6 and 350 mAh/g vs Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high-specific-surface-area (about 2000 m2 g-1) activated carbon that is further used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs. Na/Na+ and 95 mAh/g vs. Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC. For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and the NIC demonstrate considerable improvements in energy density over their EDLC counterpart. The NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg of active material and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg, while the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device overcoming the limited energy density of the corresponding double layer capacitors.
Keywords: hybrid supercapacitor, Na-Ion capacitor, supercapacitor, Li-Ion capacitor, EDLC
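As a rough illustration of how cell-level figures of the kind reported above can be estimated from electrode data, the sketch below balances the charge of the battery-type negative electrode against the capacitor-type positive electrode and evaluates the active-material energy density; this idealized estimate ignores inactive mass and kinetic losses (so it lands above the measured 110 Wh/kg), and the approach itself is an assumption rather than the authors' procedure:

```python
def matched_mass_ratio(q_pos_mAh_g, q_neg_mAh_g):
    """Mass of negative electrode per gram of positive electrode so both store the same charge."""
    return q_pos_mAh_g / q_neg_mAh_g


def specific_energy_Wh_kg(q_pos_mAh_g, q_neg_mAh_g, v_max, v_min):
    """Idealized active-material-only energy density, assuming a linear (capacitor-like)
    discharge of the cell voltage between v_max and v_min."""
    m_neg = matched_mass_ratio(q_pos_mAh_g, q_neg_mAh_g)   # g of negative per 1 g of positive
    cell_capacity = q_pos_mAh_g / (1.0 + m_neg)            # mAh per g of both electrodes
    v_avg = 0.5 * (v_max + v_min)
    return cell_capacity * v_avg                           # mAh/g * V = mWh/g = Wh/kg


# Electrode values quoted in the abstract for the Li-ion capacitor (LIC).
print(specific_energy_Wh_kg(q_pos_mAh_g=95.0, q_neg_mAh_g=350.0, v_max=4.2, v_min=1.5))
```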
Procedia PDF Downloads 201
94 Intertemporal Individual Preferences for Climate Change Intergenerational Investments – Estimating the Social Discount Rate for Poland
Authors: Monika Foltyn-Zarychta
Abstract:
Climate change mitigation investment activities are inevitably extended extremely far in time. The project cycle does not last mere decades – sometimes it stretches over hundreds of years, and the project outcomes impact several generations. The longevity of those activities raises multiple problems in the appraisal procedure. One of the pivotal issues is the choice of the discount rate, which tremendously affects the net present value criterion. The paper aims at estimating the value of the social discount rate for intergenerational investment projects in Poland based on individual intertemporal preferences. The analysis is based on a questionnaire surveying Polish citizens and designed as a contingent valuation study. The analysis aimed at answering two questions: 1) whether the value of the individual discount rate declines with increased time of delay, and 2) whether the value of the individual discount rate changes with increased spatial distance toward the gainers of the project. The valuation questions were designed to identify the respondent’s indifference point between lives saved today and in the future due to a hypothetical project mitigating climate change. Several delays of the project effects (10, 30, 90 and 150 years) were used to test the decline in value with time. The variability with regard to distance was tested by asking respondents to estimate their indifference point separately for gainers in Poland and in Latvia. The results show that as the time delay increases, the average discount rate value decreases from 15.32% for a 10-year delay to 2.75% for a 150-year delay. Similar values were estimated for Latvian beneficiaries. It should also be noted that the average volatility, measured by the standard deviation, also decreased with time delay. However, the results did not show any statistically significant difference in discount rate values for Polish and Latvian gainers. The decline of the discount rate with time demonstrates the possible economic efficiency of the intergenerational effects of climate change mitigation projects and may support the assumption of altruistic behavior of the present generation toward future people. Furthermore, this can be backed up by the same discount rate level declared by Polish respondents for spatially distant Latvian gainers. Climate change activities usually require significant outlays, and the payback period is extremely long. The more precise the variables in the appraisal are, the more trustworthy and rational the investment decision is. The discount rate estimations for Poland add to the vivid discussion concerning the issue of climate change and intergenerational justice.
Keywords: climate change, social discount rate, investment appraisal, intergenerational justice
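A minimal sketch of how an individual discount rate can be backed out of a stated indifference point under exponential discounting, consistent with the declining pattern reported above; the indifference values used below are hypothetical placeholders, not the survey data:

```python
def implied_exponential_rate(lives_now, lives_future, delay_years):
    """Discount rate r making a respondent indifferent between saving `lives_now` today
    and `lives_future` after `delay_years`:
    lives_now = lives_future / (1 + r)**t  =>  r = (lives_future/lives_now)**(1/t) - 1."""
    return (lives_future / lives_now) ** (1.0 / delay_years) - 1.0


# Illustrative indifference points (hypothetical, not the survey responses).
for delay, future_lives in [(10, 400), (30, 900), (90, 2000), (150, 5000)]:
    r = implied_exponential_rate(100, future_lives, delay)
    print(f"delay {delay:>3} years: implied individual rate {r:.2%}")
```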
Procedia PDF Downloads 238
93 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
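For reference, the integral boundary-layer quantities mentioned above can be computed from a measured wall-normal velocity profile as sketched below; the 1/7-power-law profile is a synthetic stand-in for the PIV data, not the experimental result:

```python
import numpy as np


def integral_thicknesses(y, u, u_inf):
    """Displacement thickness, momentum thickness and form factor from a profile u(y)."""
    ratio = u / u_inf
    delta_star = np.trapz(1.0 - ratio, y)          # displacement thickness
    theta = np.trapz(ratio * (1.0 - ratio), y)     # momentum thickness
    return delta_star, theta, delta_star / theta   # form factor H = delta*/theta


# Synthetic turbulent-like 1/7-power-law profile, purely for demonstration.
delta = 0.02                                # assumed boundary layer thickness [m]
y = np.linspace(1e-6, delta, 200)
u = 30.0 * (y / delta) ** (1.0 / 7.0)       # assumed free-stream velocity 30 m/s
print(integral_thicknesses(y, u, 30.0))     # H close to 1.3 for an equilibrium turbulent layer
```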
Procedia PDF Downloads 305
92 Prismatic Bifurcation Study of a Functionally Graded Dielectric Elastomeric Tube Using Linearized Incremental Theory of Deformations
Authors: Sanjeet Patra, Soham Roychowdhury
Abstract:
In recent times, functionally graded dielectric elastomers (FGDE) have gained significant attention within the realm of soft actuation due to their dual capacity to exert highly localized stresses while maintaining their compliant characteristics under electro-mechanical loading. Nevertheless, the full potential of dielectric elastomers (DE) has not been explored due to their susceptibility to instabilities when subjected to electro-mechanical loads. As a result, the study and analysis of such instabilities become crucial for the design and realization of dielectric actuators. Prismatic bifurcation is a type of instability that has been recognized in DE tubes. Although several studies have reported analyses of prismatic bifurcation in isotropic DE tubes, studies on the prismatic bifurcation of FGDE tubes are scarce. Therefore, this paper aims to determine the onset of prismatic bifurcations in an incompressible FGDE tube subjected to electrical loading across the thickness of the tube and internal pressurization. The analysis has been conducted by imposing two axial boundary conditions on the tube, specifically axially free ends and axially clamped ends. Additionally, the rigidity modulus of the tube has been linearly graded in the thickness direction, where the inner surface of the tube has a lower stiffness than the outer surface. The static equilibrium equations for the deformation of the axisymmetric tube are derived and solved using a numerical technique. The condition for prismatic bifurcation of the axisymmetric static equilibrium solutions has been obtained by using the linearized incremental constitutive equations. Two modes of bifurcation, corresponding to two different non-circular cross-sectional geometries, have been explored in this study. The outcomes reveal that the FGDE tube experiences prismatic bifurcation before the Hessian criterion of failure is satisfied. It is observed that the lower mode of bifurcation can be triggered at a lower critical voltage than the higher mode of bifurcation. Furthermore, tubes with a larger stiffness gradient require higher critical voltages to trigger the bifurcation. Moreover, with an increase in the stiffness gradient, a linear variation of the critical voltage with the thickness of the tube is observed. It has been found that, on applying internal pressure to a tube of low thickness, the tube becomes less susceptible to bifurcations. A thicker tube with axially free ends is found to be more stable than an axially clamped tube at the higher mode of bifurcation.
Keywords: critical voltage, functionally graded dielectric elastomer, linearized incremental approach, modulus of rigidity, prismatic bifurcation
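A plausible explicit form of the through-thickness grading described above (inner surface more compliant than the outer) is a linear law in the radial coordinate; the notation below is generic and assumed, not necessarily the authors':

```latex
\mu(R) = \mu_{\mathrm{in}} + \left(\mu_{\mathrm{out}} - \mu_{\mathrm{in}}\right)\frac{R - R_{\mathrm{in}}}{R_{\mathrm{out}} - R_{\mathrm{in}}},
\qquad \mu_{\mathrm{out}} > \mu_{\mathrm{in}}, \quad R_{\mathrm{in}} \le R \le R_{\mathrm{out}}
```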
Procedia PDF Downloads 77
91 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)
Authors: Arlene López-Sampson, Tony Page, Betsy Jackes
Abstract:
The Aquilaria genus produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack. The resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate the relationship between leaf-productivity traits and tree growth. Specifically, 28 trees, comprising 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model averaging technique based on Akaike’s information criterion (AIC) was applied to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded. Approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day-1. There were no statistical differences between diametric classes in leaf expansion rate or the number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D). However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), values of δ13C (true13C) and δ15N (true15N), leaf area (LA), specific leaf area (SLA) and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA and NL.week could explain 45% (R2 = 0.4573) of the variability in D. The leaf traits studied gave a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
Keywords: 13C, petiole length, specific leaf area, tree growth
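A minimal sketch of AIC-based model averaging of the kind referred to above, using Akaike weights to combine candidate regression models; the AIC values and predicted diameters are hypothetical placeholders, not the study's data:

```python
import numpy as np


def akaike_weights(aic_values):
    """Akaike weights w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()


def model_averaged_prediction(predictions, aic_values):
    """Average predictions of candidate models, weighted by their Akaike weights."""
    w = akaike_weights(aic_values)
    return np.average(np.asarray(predictions, dtype=float), axis=0, weights=w)


# Hypothetical AICs and predicted diameters (mm) from three candidate models.
aics = [152.3, 154.1, 159.8]
preds = [[41.0, 55.2, 60.1], [40.2, 54.8, 61.0], [43.5, 53.0, 58.7]]
print(akaike_weights(aics))
print(model_averaged_prediction(preds, aics))
```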
Procedia PDF Downloads 509
90 A Regulator's Assessment of Consumer Risk When Evaluating a User Test for an Umbrella Brand Name in an over the Counter Medicine
Authors: A. Bhatt, C. Bassi, H. Farragher, J. Musk
Abstract:
Background: All medicines placed on the EU market are legally required to be accompanied by labelling and a package leaflet, which provide comprehensive information enabling their safe and appropriate use. Mock-ups with the results of assessments using a target patient group must be submitted with a marketing authorisation application. Consumers need confidence in non-prescription, OTC medicines in order to manage their minor ailments, and umbrella brands assist purchasing decisions by allowing easy identification within a particular therapeutic area. A number of regulatory agencies have risk management tools and guidelines to assist in developing umbrella brands for OTC medicines; however, assessment and decision making are subjective and inconsistent. This study presents an evaluation in the UK following the US FDA warning concerning methaemoglobinaemia after 21 reported cases (11 children under 2 years) caused by OTC oral analgesics containing benzocaine. Methods: A standard face-to-face testing methodology of 25 structured, task-based user interviews, using a standard questionnaire and rating scale in consumers aged 15-91 years, was conducted independently between June and October 2015 in their homes. It was evaluated whether individuals could discriminate between the labelling, safety information and warnings on the cartons and PILs of 3 different OTC medicine packs with the same umbrella name. Each pack was presented with a different information hierarchy using different coloured cartons and containing the 3 different active ingredients: benzocaine (oromucosal spray) and two lozenges containing 2,4-dichlorobenzyl alcohol with amylmetacresol, and hexylresorcinol, respectively (for the symptomatic relief of sore throat pain). The test was designed to determine whether the warnings on the carton and leaflet were prominent and accessible enough to alert users that one product contained benzocaine, to the risk of methaemoglobinaemia, and to refer them to the leaflet for the signs of the condition and what to do should it occur. Results: Two consumers did not locate the warnings on the side of the pack but eventually found them on the back, and two suggestions were made to further improve the accessibility of the methaemoglobinaemia warning. Using a gold pack design for the oromucosal spray, all consumers could differentiate between the 3 drugs, minimum age particulars, pharmaceutical form and the risk factor methaemoglobinaemia. The warnings for benzocaine were deemed to be clear or very clear; the appearance of the 3 packs was either very well differentiated or quite well differentiated. The PIL test passed on all criteria. All consumers could use the product correctly and identify risk factors, ensuring that the critical information necessary for safe use was legible and easily accessible so that confusion and errors were minimised. Conclusion: Patients with known methaemoglobinaemia are likely to be vigilant in checking for benzocaine-containing products, despite similar umbrella brand names across a range of active ingredients. Despite these findings, the package design and spray format were not deemed sufficient to mitigate the potential safety risks associated with differences in target populations and contraindications when submitted to the Regulatory Agency. Although risk management tools are increasingly being used by agencies to provide objective assurance of package safety, further transparency, reduction in subjectivity and proportionate risk assessment should be demonstrated.
Keywords: labelling, OTC, risk, user testing
Procedia PDF Downloads 309
89 Study of Ion Density Distribution and Sheath Thickness in Warm Electronegative Plasma
Authors: Rajat Dhawan, Hitendra K. Malik
Abstract:
Electronegative plasmas, comprising electrons, positive ions, and negative ions, are advantageous for their expanding applications in industry. In plasma cleaning, plasma etching, and plasma deposition processes, electronegative plasmas are preferred because of the relatively lower potential developed on the surface of the material under investigation. Also, the presence of negative ions avoids irregularity in etching shapes and enhances material processing during fabrication. The interaction of a metallic conducting surface with plasma must therefore be understood for these applications. A metallic conducting probe immersed in a plasma results in the formation of a thin layer of charged species around the probe, called a sheath. The density of the ions embedded on the surface of the material and the sheath thickness are important parameters for the surface-plasma interaction. The sheath thickness gives information about the plasma region affected by the conducting surface/probe. Knowledge of the density of ions in the sheath region is advantageous in plasma nitriding, and their temperature is equally important as it strongly influences the thickness of the modified layer during surface-plasma interaction. In the present work, we consider a negatively biased metallic probe immersed in a warm electronegative plasma. For this system, we adopt the continuity and momentum transfer equations for both the positive and negative ions, whereas the electrons are described by the Boltzmann distribution. Finally, we use Poisson’s equation. Spherical geometry is assumed for a small probe radius. Poisson’s equation, together with the continuity and momentum transfer equations and proper boundary conditions, reveals the behaviour of the potential surrounding the conducting metallic probe. In turn, it yields the density profiles of the charged species and, most importantly, the thickness of the sheath. All calculations are performed keeping in mind the well-known Bohm sheath criterion. We found that the positive ion density decreases with an increase in positive ion temperature, whereas it increases with higher negative ion temperature. The positive ion density decreases as we move away from the center of the probe and is found to show a discontinuity at a particular distance from the center of the probe. The distance where the discontinuity occurs is designated as the sheath edge, i.e., the point where the sheath ends. These results are beneficial for industrial applications, as the density of ions embedded on the material surface is strongly affected by the temperature of the plasma species, which in turn has a drastic influence on surface properties such as hardness and corrosion resistance.
Keywords: electronegative plasmas, plasma surface interaction, positive ion density, sheath thickness
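For reference, a common fluid formulation of the system described above (Boltzmann electrons, warm positive and negative ion fluids, spherically symmetric probe) can be written as below; the notation is generic and may differ from the authors':

```latex
n_e = n_{e0}\exp\!\left(\frac{e\phi}{k_B T_e}\right), \qquad
\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2} n_{\pm} v_{\pm}\right) = 0, \qquad
m_{\pm} n_{\pm} v_{\pm}\frac{dv_{\pm}}{dr} = \mp\, e\, n_{\pm}\frac{d\phi}{dr} - \frac{dp_{\pm}}{dr}, \qquad
\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\frac{d\phi}{dr}\right) = -\frac{e}{\varepsilon_{0}}\left(n_{+} - n_{-} - n_{e}\right)
```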
Procedia PDF Downloads 133
88 Analysis of Minimizing Investment Risks in Power and Energy Business Development by Combining Total Quality Management and International Financing Institutions Project Management Tools
Authors: M. Radunovic
Abstract:
The region of Southeastern Europe has a substantial energy resource potential and is witnessing an increasing rate of power and energy project investments. This comes as a result of countries harmonizing their legal frameworks and market regulations to conform to those of the European Union, enabling direct private investments. Funding in the power and energy market in this region originates from various sources and investment entities, including commercial and institutional ones. Risk anticipation and assessment are crucial to project success, especially given the long exploitation period of projects in the power and energy domain, as well as the wide range of stakeholders involved. This paper analyzes the possibility of combined application of tools used in total quality management and by international financing institutions for project planning, execution and evaluation, with the goal of anticipating, assessing and minimizing the risks that might occur in the development and execution phases of a power and energy project in the market of Southeastern Europe. The history of successful project management and investments in both industry and the institutional sector provides sufficient experience, guidance and internationally adopted tools for proper project assessment of investments in power and energy. The business environment of Southeastern Europe provides immense potential for developing power and energy projects of various magnitudes, depending on stakeholders’ interest. Diversification of investment sources provides assurance that there is interest and commitment to invest in this market. Global economic and political developments will intensify the pace of investments in the upcoming period. The proposed approach accounts for key parameters that contribute to the sustainability and profitability of a project, including the technological, educational, social and economic gaps between the Southeastern European region and Western Europe, market trends in equipment design and production at a global level, an environmentally friendly approach to renewable energy sources as well as conventional power generation systems, and finally the long-term effect of the One Belt One Road Initiative, led by the People’s Republic of China, on the power and energy market of this region in the upcoming period. The analysis will outline the key benefits of the approach as well as the accompanying constraints. In parallel, it will provide an overview of dominant threats and opportunities in the present and future business environment and their influence on the proposed application. Through concrete examples, the full potential of this approach will be presented along with the necessary improvements that need to be implemented. The number of power and energy projects being developed in Southeastern Europe will increase in the upcoming period. Proper risk analysis will lead to minimizing project failures. The proposed combination of reliable project planning tools from different investment areas can prove to be beneficial in future power and energy investments and guarantee their sustainability and profitability.
Keywords: capital investments, lean six sigma, logical framework approach, logical framework matrix, one belt one road initiative, project management tools, quality function deployment, Southeastern Europe, total quality management
Procedia PDF Downloads 109
87 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality
Authors: Christian T. Lystbaek
Abstract:
Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single and stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, which is enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of users-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps. In the first step, identification of values, values were identified in the discussions. Through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we ascertained the degree to which the meaning attributed to each value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences in meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity of actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, and a list of more specific values.
Keywords: public services innovation management, public value, co-creation, action research
Procedia PDF Downloads 279
86 Sprinting Beyond Sexism and Gender Stereotypes: Indian Women Fans' Experiences in the Sports Fandom
Authors: Siddhi Deshpande, Jo Jo Chacko Eapen
Abstract:
Although almost half of India’s female population engages in watching sports, their experiences in sports fandom are concealed by ‘traditional masculinity,’ leading to potential exclusion and harassment. To explore these experiences in depth, this qualitative study aims to understand what coping strategies Indian women fans employ to sustain their team identification. Employing criterion sampling, participants were screened using the Sport Spectator Identification Scale (SSIS) to assess team identification and a Brief Sexism Questionnaire to confirm participants’ experience with sexism, as this aligns with the purpose of the study. The participants were Indian women who had been following a sport for more than eight years, were fluent in English, and were not professionals in sports. Ten highly identified fans with gendered experiences were recruited for one-on-one, semi-structured, in-depth interviews. The data were analyzed using Interpretive Phenomenological Analysis (IPA) to understand the lived experiences of women fans facing sexism and gender stereotypes, revealing the superordinate themes of (1) Ontogenesis and Emotional Investment; (2) Gendered Expectations and Sexism; (3) Coping Strategies and Resilience; (4) Identity, Femininity, Empowerment; and (5) Advocacy for Equality and Inclusivity. The findings show that Indian women fans experience social exclusion, harassment, sexualization, and commodification in both online and offline fandoms, where they are disproportionately targeted with threats, misogynistic comments, and attraction-based assumptions, and have their ‘authenticity’ as fans questioned because of their gender. Women fans alternate between the proactive strategies of assertiveness, humor, and knowledge demonstration and the defensive strategies of selective engagement, self-regulatory censorship, and desensitization to deal with sexism. In this interplay, the integration of women’s ‘fan identity’ with their self-concept showcases how being a sports fan adds meaning to their lives despite constant scrutiny in a male-dominated space, reflecting that femininity and sports should coexist. As a result, they find refuge in female fan communities owing to their shared experiences in the fandom and advocate for an equal and inclusive environment where sports are above gender, and not the other way around. A key practical implication of this research is enabling sports organizations to develop inclusive fan engagement policies that actively encourage female fan participation. This includes sensitizing stadium staff and security personnel, promoting gender-neutral language, and, most importantly, establishing safety protocols to protect female fans from adverse experiences in the fandom.
Keywords: coping strategies, female sports fans, femininity, gendered experiences, team identification
Procedia PDF Downloads 49
85 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function is chosen for the wall treatment, and the standard k-ε turbulence model gives the results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the results of the numerical modeling are in good agreement with the experimental data, but with increasing water level, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical, spillway, numerical model, boundary conditions
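For completeness, the standard k-ε model referred to above is usually written, in incompressible form, as:

```latex
\nu_t = C_\mu \frac{k^{2}}{\varepsilon}, \qquad
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\frac{\varepsilon^{2}}{k}
```

with the standard constants C_mu = 0.09, C_1e = 1.44, C_2e = 1.92, sigma_k = 1.0 and sigma_e = 1.3; the specific solver settings of the study (schemes, wall functions) are as described in the abstract above.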
Procedia PDF Downloads 86
84 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony – a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has its own history of growth, that is, a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, following the ‘classic’ pathway of the writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthesis vocabulary – necessarily claims its own macrotemporal form, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm for the articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy that the composer developed throughout practically his entire creation (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.
Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 345
83 Spatial Deictics in Face-to-Face Communication: Findings in Baltic Languages
Authors: Gintare Judzentyte
Abstract:
The present research aims to discuss the semantics and pragmatics of spatial deictics (deictic adverbs of place and demonstrative pronouns) in the Baltic languages: in spoken Lithuanian and in spoken Latvian. The following objectives have been identified to achieve this aim: 1) to determine the usage of adverbs of place in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 2) to determine the usage of demonstrative pronouns in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 3) to compare the systems of the two spoken languages and to identify the main tendencies. As the meanings of demonstratives (adverbs of place and demonstrative pronouns) are context-bound, it is necessary to verify their usage in spontaneous interaction. Besides, deictic gestures play a very important role in face-to-face communication. Therefore, an experimental method is necessary to collect the data. Video material representing spoken Lithuanian and spoken Latvian was recorded by means of a qualitative interview method (a semi-structured interview: empirical research is all about asking the right questions). The collected material was transcribed and evaluated taking into account several approaches: 1) physical distance (location of the referent, visual accessibility of the referent); 2) deictic gestures (the combination of language and gesture is especially characteristic of the exophoric use); 3) representation of mental spaces in physical space (a speaker sometimes wishes to mark something that is physically close as psychologically distant and vice versa). The analysis of the collected data revealed that in face-to-face communication the participants choose deictic adverbs of place instead of demonstrative pronouns to locate/identify entities in situations where demonstrative pronouns would be expected, both in spoken Lithuanian and in spoken Latvian. The analysis showed that visual accessibility of the referent is very important in face-to-face communication, but the main criterion while localizing objects and entities is the need for contrast: lith. čia ‘here’, šis ‘this’, latv. šeit ‘here’, šis ‘this’ usually identify distant entities and are used instead of the distal demonstratives (lith. ten ‘there’, tas ‘that’, latv. tur ‘there’, tas ‘that’), because the referred objects/subjects contrast with further entities. Furthermore, the interlocutors in examples from spontaneously situated interaction usually extend their space and can refer to a ‘distal’ object/subject with a ‘proximal’ demonstrative based on psychological choice. As the research on the spoken Baltic languages confirmed, the choice of spatial deictics in face-to-face communication is strongly affected by a complex of criteria. Although there are some main tendencies, the exact meaning of spatial deictics in the spoken Baltic languages is revealed and relevant only in a certain context.
Keywords: Baltic languages, face-to-face communication, pragmatics, semantics, spatial deictics
Procedia PDF Downloads 289
82 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model and was applied to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database in order to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
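A minimal sketch of the type of ANN regression behind such gas geothermometers: a small feed-forward network maps logarithmic gas concentrations to bottomhole temperature and is scored by RMSE and R. It uses scikit-learn's MLPRegressor (not the Levenberg-Marquardt scheme of the original study), and the synthetic data are placeholders, not the compiled database:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in data: ln(CO2) and ln(H2S) in mmol/mol, and measured BHT in deg C.
X = rng.uniform([3.0, -1.0], [7.0, 3.0], size=(200, 2))
bht_measured = 120.0 + 25.0 * X[:, 0] + 10.0 * X[:, 1] + rng.normal(0.0, 5.0, 200)

# Small hidden layer, as in typical gas-geothermometer ANNs.
ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(X, bht_measured)

bht_ann = ann.predict(X)
rmse = mean_squared_error(bht_measured, bht_ann) ** 0.5
r = np.corrcoef(bht_measured, bht_ann)[0, 1]
print(f"RMSE = {rmse:.1f} degC, R = {r:.3f}")
```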
Procedia PDF Downloads 352
81 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. However, 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome the limitation of delamination, improve toughness, etc. To predict the permeability based on the available pore spaces between the yarns, unit-cell-based computational fluid dynamics models have been using the Stokes-Darcy formulation. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcement when estimating its permeability. Furthermore, to reduce the computational effort of dual-scale flow, scale separation criteria based on the ratio of yarn permeability to yarn spacing were also proposed to distinguish the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the key parameter for identifying the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in the plain weave as well as the position of the binder yarn and the number of in-plane yarn layers in the 3D woven fabric. The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns described by Gebart’s model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
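The intra-yarn (micro-scale) permeabilities assigned to the yarns from Gebart's model can be sketched as below; the constants follow Gebart's (1992) expressions for quadratic and hexagonal fibre packings, while the filament radius and fibre volume fraction are illustrative assumptions, not the values used in the study:

```python
import math


def gebart_permeability(fiber_radius_m, vf, packing="hexagonal"):
    """Gebart (1992) intra-yarn permeabilities [m^2] along and across the fibres."""
    if packing == "hexagonal":
        c = 53.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
        vf_max = math.pi / (2.0 * math.sqrt(3.0))
    else:  # quadratic packing
        c = 57.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(2.0))
        vf_max = math.pi / 4.0
    k_parallel = (8.0 * fiber_radius_m**2 / c) * (1.0 - vf) ** 3 / vf**2
    k_perpendicular = c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * fiber_radius_m**2
    return k_parallel, k_perpendicular


# Illustrative yarn: 8.5 um filament radius, 60% intra-yarn fibre volume fraction.
print(gebart_permeability(8.5e-6, 0.60, packing="hexagonal"))
```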
Procedia PDF Downloads 96
80 Exploitation Pattern of Atlantic Bonito in West African Waters: Case Study of the Bonito Stock in Senegalese Waters
Authors: Ousmane Sarr
Abstract:
The Senegalese coasts have a high productivity of fishery resources due to the frequent, intense upwelling that occurs along the coast, driven by the maritime trade winds, which makes its waters rich in nutrients. Fishing plays a primordial role in Senegal's socioeconomic plans and food security. However, a global diagnosis of the Senegalese maritime fishing sector has highlighted the challenges this sector encounters. Among these concerns, some significant stocks, a priority target for artisanal fishing, need further assessment. If no efforts are made in this direction, most stocks will be overexploited or even in decline. It is in this context that this research was initiated. This investigation aimed to apply a multi-model approach (LBB, the catch-only-based CMSY model and its most recent version (CMSY++), JABBA, and JABBA-Select) to assess the stock of Atlantic bonito, Sarda sarda (Bloch, 1793), in the Senegalese Exclusive Economic Zone (SEEZ). Available catch, effort, and size data for Atlantic bonito over 15 years (2004-2018) were used to calculate the nominal and standardized CPUE, size-frequency distribution, and lengths at retention (50% and 95% selectivity) of the species. These results were employed as input parameters for the stock assessment models mentioned above to define the stock status of this species in this region of the Atlantic Ocean. The LBB model indicated a healthy Atlantic bonito stock status, with B/BMSY values ranging from 1.3 to 1.6 and B/B0 values varying from 0.47 to 0.61 for the main scenarios performed (BON_AFG_CL, BON_GN_Length, and BON_PS_Length). The results estimated by LBB are consistent with those obtained by CMSY. The CMSY model results demonstrate that the SEEZ Atlantic bonito stock is in a sound condition in the final year of the main scenarios analyzed (BON, BON-bt, BON-GN-bt, and BON-PS-bt), with sustainable relative stock biomass (B2018/BMSY = 1.13 to 1.3) and fishing pressure levels (F2018/FMSY = 0.52 to 1.43). The B/BMSY and F/FMSY results for the JABBA model ranged between 2.01 to 2.14 and 0.47 to 0.33, respectively, while the estimates from JABBA-Select ranged from 1.91 to 1.92 and 0.52 to 0.54. The Kobe plot results of the base case scenarios showed a 75% to 89% probability of being in the green area, indicating sustainable fishing pressure and a healthy Atlantic bonito stock size capable of producing high yields close to the MSY. Based on the stock assessment results, this study highlighted scientific advice for temporary management measures. This study suggests an improvement of the selectivity parameters of longlines and purse seines and a temporary prohibition of the use of sleeping nets in the fishery for the Atlantic bonito stock in the SEEZ, based on the results of the length-based models. Although these actions are temporary, they can be essential to reduce or avoid intense pressure on the Atlantic bonito stock in the SEEZ. However, it is necessary to establish harvest control rules to provide coherent and solid scientific information that leads to appropriate decision-making for the rational and sustainable exploitation of Atlantic bonito in the SEEZ and the Eastern Atlantic Ocean.
Keywords: multi-model approach, stock assessment, atlantic bonito, SEEZ
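A minimal sketch of the Schaefer surplus-production dynamics that underlie CMSY-type assessments, with the usual reference points BMSY = k/2, FMSY = r/2 and MSY = rk/4; the parameter values below are placeholders, not the estimates obtained for the SEEZ bonito stock:

```python
def schaefer_projection(b0, r, k, catches):
    """Schaefer surplus-production update: B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]."""
    biomass = [b0]
    for catch in catches:
        b = biomass[-1]
        biomass.append(max(b + r * b * (1.0 - b / k) - catch, 1e-6))
    return biomass


r, k = 0.8, 50_000.0            # illustrative intrinsic growth rate and carrying capacity (t)
b_msy, f_msy, msy = k / 2.0, r / 2.0, r * k / 4.0
trajectory = schaefer_projection(b0=0.8 * k, r=r, k=k, catches=[6000.0] * 15)
print(f"MSY = {msy:.0f} t, B/Bmsy in final year = {trajectory[-1] / b_msy:.2f}")
```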
Procedia PDF Downloads 62
79 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows
Authors: C. D. Ellis, H. Xia, X. Chen
Abstract:
Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are being made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors, and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. The hybrid LES-RANS approach has been applied to a blunt leading edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented for two turbulence models: the one-equation Spalart-Allmaras model and Menter’s two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid scale LES modelling. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. Validation of the flow was obtained by assessing the mean velocity profiles in comparison to similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores. The qualitative structure of the flow was compared with experiments at a similar Reynolds number. This comparison identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability. Through this instability, the flow progressed into hairpin-like structures, elongating as they advanced downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field. Both POD fields identified that the most energetic fluctuations occurred in the separated and recirculation region of the flow. Later modes of the surface temperature field identified these fluctuations as dominating the time-mean region of maximum heat transfer and flow reattachment. In addition to the current research, work will be conducted on tracking the movement of the vortex cores and the location and magnitude of temperature hot spots upon the plate. This information will support the POD and statistical analysis performed to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer. Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics
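The POD analysis described above can be carried out as a snapshot POD via the singular value decomposition. A minimal sketch with synthetic snapshot data (the array sizes are placeholders, not the study's grid or sampling):

```python
import numpy as np

# Minimal sketch of snapshot POD via the SVD, applied to a matrix of fluctuation
# snapshots (rows: spatial points, columns: time samples). Data here are synthetic.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((5000, 400))     # e.g. surface-temperature field over time

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field                   # remove the time mean

# Economy-size SVD: columns of U are spatial POD modes, S**2 gives modal energy.
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = S**2 / np.sum(S**2)

print("energy captured by first 5 modes:", energy[:5].round(3))
print("cumulative energy of first 20 modes:", energy[:20].sum().round(3))
```

The relative decay of the energy spectrum is what distinguishes the full flow field (slow decay, many energetic modes) from the surface temperature field in the comparison above.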
Procedia PDF Downloads 231
78 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning the optimization of resources (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, namely a column-type object with connections forming an adaptive 3D surface, by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived in order to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance. Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
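As an illustration of the Lindenmayer-system procedure mentioned above, a minimal string-rewriting sketch follows; the production rules are hypothetical and stand in for a geometric grammar:

```python
# Minimal sketch of the generative principle behind a Lindenmayer system: a seed
# string is rewritten repeatedly by production rules, and each generation can later
# be interpreted as geometry. The rules here are hypothetical.
RULES = {"A": "AB", "B": "A"}   # simple rewriting grammar

def generate(axiom: str, rules: dict, iterations: int) -> str:
    state = axiom
    for _ in range(iterations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

for n in range(5):
    print(n, generate("A", RULES, n))
# Each generated string could be mapped to column segments, branch angles, or panel
# subdivisions, and the resulting variants filtered by a technical or aesthetic
# selection criterion, as described in the abstract.
```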
Procedia PDF Downloads 377
77 A Study on Development Strategies of Marine Leisure Tourism Using AHP
Authors: Da-Hye Jang, Woo-Jeong Cho
Abstract:
Marine leisure tourism contributes greatly to the national economies of countries bordered by the sea, and many countries use marine tourism to create added value. The interest and investment of central and local governments in marine leisure tourism, a growing major trend within marine tourism, are steadily increasing. However, indiscriminate investment in marine leisure tourism, such as duplicated businesses, wastes limited resources. In other words, central and local governments need to select and concentrate on the goals they pursue by setting priorities among marine leisure tourism policies. The purpose of this study is to analyze supplier-side development strategies for marine leisure tourism and thus provide a comprehensive and rational framework for developing marine leisure tourism. In order to achieve this purpose, this study analyzes the priorities of each evaluation criterion of marine leisure tourism development policies using the Analytic Hierarchy Process. A questionnaire was used as the survey tool and was developed based on previous studies, government reports, regional reports, related theses, and the literature on marine leisure tourism. The questionnaire was constructed by verifying the validity of its contents with an expert group related to marine leisure tourism after conducting the first and second preliminary surveys. The AHP survey was administered to experts (university professors, researchers, field specialists, and related public officials) from April 6, 2018 to April 30, 2018, in person or by e-mail. This study distributed 123 questionnaires, and 68 valid questionnaires were used for data analysis. As a result, 4 factors with 12 detailed strategies were analyzed using Excel. The extracted factors of the development strategies for marine leisure tourism consist of 4 factors: infrastructure, popularization, law & system improvement, and advancement. In conclusion, the results of the pairwise comparison of the four major factors at the first level ranked them as infrastructure, popularization, law & system improvement, and advancement, in that order. Second, marine waterfront space maintenance had higher priority than marina facilities expansion and the establishment of a marine leisure education center. Third, marine leisure safety·culture improvement had higher priority than strengthening experience·education programs and the upkeep and open promotion event. Fourth, specialization·cluster of marine leisure tourism had higher priority than a business support system for marine leisure tourism. Fifth, the revision of the water-related leisure activities safety act had higher priority than the enactment of a marine tourism promotion act and the fostering of the marina service industry. Finally, marine waterfront space maintenance was the most important development plan to boost marine leisure tourism. Keywords: marine leisure tourism, marine leisure, marine tourism, analytic hierarchy process
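The AHP priority derivation used above can be sketched minimally: a hypothetical 4x4 pairwise comparison matrix for the four factors, the principal-eigenvector weights, and Saaty's consistency ratio. The judgments in the matrix are invented for illustration and are not the survey data:

```python
import numpy as np

# Minimal AHP sketch: derive priority weights from a pairwise comparison matrix
# and check judgment consistency. The comparison values below are hypothetical.
A = np.array([
    [1.0, 3.0, 4.0, 5.0],   # infrastructure
    [1/3, 1.0, 2.0, 3.0],   # popularization
    [1/4, 1/2, 1.0, 2.0],   # law & system improvement
    [1/5, 1/3, 1/2, 1.0],   # advancement
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
cr = ci / 0.90                                # random index ~0.90 for n = 4 (Saaty)
print("weights:", weights.round(3), "CR:", round(cr, 3))  # CR < 0.1 is commonly accepted
```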
Procedia PDF Downloads 165
76 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence
Authors: Katherine Maurer
Abstract:
Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk for many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of the children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data on the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure would predict greater risk of ITFV. Methods: The study utilized 3 panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing parental IPV, and young adult IPV perpetration and victimization. Results: Consistent with previous findings, past-12-month incidence rates of the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. The vast majority of reported acts of violence, whether minor or severe, were in the 1-3 range in the past 12 months. Reported frequencies of more than 5 times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than acts of severe violence. Overall, regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both child physical abuse and physical IPV, there are at least two classes when frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested. Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling
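The hurdle regression referenced above splits the outcome into a binary part (any violence vs. none) and a count part for the positive frequencies. A minimal sketch on simulated data, where the exposure variable, coefficients, and data-generating choices are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

# Minimal hurdle count model sketch: a logistic "hurdle" for any act vs. none, plus a
# zero-truncated Poisson for how many acts occur once the hurdle is crossed. Simulated data.
rng = np.random.default_rng(1)
n = 338
x = rng.standard_normal(n)                           # e.g. childhood exposure score (synthetic)
X = np.column_stack([np.ones(n), x])

any_act = rng.binomial(1, expit(-1.0 + 0.8 * x))     # hurdle part
lam = np.exp(0.3 + 0.4 * x)
counts = np.where(any_act == 1, rng.poisson(lam) + 1, 0)   # crude positive counts

def nll_logit(beta):
    p = expit(X @ beta)
    return -np.sum(any_act * np.log(p) + (1 - any_act) * np.log1p(-p))

def nll_trunc_poisson(beta):
    mask = counts > 0
    lam_i = np.exp(X[mask] @ beta)
    y = counts[mask]
    return -np.sum(y * np.log(lam_i) - lam_i - gammaln(y + 1) - np.log1p(-np.exp(-lam_i)))

b_hurdle = minimize(nll_logit, np.zeros(2)).x
b_count = minimize(nll_trunc_poisson, np.zeros(2)).x
print("hurdle (logit) coefficients:", b_hurdle.round(2))
print("count (truncated Poisson) coefficients:", b_count.round(2))
```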
Procedia PDF Downloads 243
75 Unpacking the Spatial Outcomes of Public Transportation in a Developing Country Context: The Case of Johannesburg
Authors: Adedayo B. Adegbaju, Carel B. Schoeman, Ilse M. Schoeman
Abstract:
The unique urban contexts that emanated from the apartheid history of South Africa informed the transport landscape of the City of Johannesburg. Apartheid's divisive spatial planning and land use management policies promoted sprawl and separated workers from job opportunities. This was further exacerbated by poor funding of public transport and road designs that encouraged the use of private cars. However, the democratization of the country in 1994 and the hosting of the 2010 FIFA World Cup provided a new impetus to the city's public transport-oriented urban planning. At the same time, the state's new approach to policy formulation, which treats the provision of public transport as one of the tools to end years of marginalization and inequality, soon began to be largely reflected in the planning decisions of other spheres of government. The Rea Vaya BRT and the Gautrain were implemented by the municipal and provincial governments, respectively, to demonstrate strong political will and commitment to the new policy direction. While the Gautrain was implemented to facilitate elite movement within Gauteng and to crowd investments and economic growth around station nodes, the BRT was provided for previously marginalized public transport users to offer a sustainable alternative to the dominant minibus taxi. The aim of this research is to evaluate the spatial impacts of the Gautrain and Rea Vaya BRT on the City of Johannesburg and to inform future outcomes by determining the existing potentials. By using the case study approach with a focus on the BRT and fast rail in a metropolitan context, the triangulation research method, which combines various data collection methods, was used to determine the research outcomes. The use of interviews, questionnaires, field observation, and databases such as REX, Quantec, StatsSA, the GCRO observatory, national and provincial household travel surveys, and the quality of life surveys provided the basis for data collection. The research concludes that the Gautrain has demonstrated that viable alternatives to the private car can be provided, with satisfactory feedback from users; while some of its station nodes (Sandton, Rosebank) have shown promise for transit-oriented development, one of the project's key objectives, the other stations have been unable to stimulate growth due to reasons such as the non-implementation of their urban design frameworks and the lack of public sector investment required to attract private investors. The Rea Vaya BRT continues to be expanded in spite of both its inability to induce modal change and its low ridership figures. The research identifies factors like the low peak-to-base ratio, pricing, and the city's disjointed urban fabric as some of the reasons for its below-average performance. By drawing from the highlights and limitations, the study recommends that public transport provision should be institutionally integrated across and within spheres of government. Similarly, harmonization of the funding structure and a better understanding of users' needs and travel patterns, underpinned by continuity of policy direction and objectives, will equally promote optimal outcomes. Keywords: bus rapid transit, Gautrain, Rea Vaya, sustainable transport, spatial and transport planning, transit oriented development
Procedia PDF Downloads 114
74 Fields of Power, Visual Culture, and the Artistic Practice of Two 'Unseen' Women of Central Brazil
Authors: Carolina Brandão Piva
Abstract:
In our visual culture, images play a newly significant role at the basis of a complex dialogue between imagination, creativity, and social practice. Insofar as imagination has broken out of the 'special expressive space of art' to become a part of the quotidian mental work of ordinary people, it is pertinent to recognize that visual representation can no longer be treated as if it belonged to a domain detached from everyday life or exclusively 'centered' within the limited frame of 'art history.' The approach of Visual Culture as a field of study is, in this sense, indispensable to comprehend that not only 'the image,' but also 'the imagined' and 'the imaginary' are produced in the plurality of social interactions; crucially, this assertion directs us to something new in contemporary cultural processes, namely that both imagination and image production constitute a social practice. This paper starts from this approach and seeks to examine the artistic practice of two women from the State of Goiás, Brazil, who are ordinary citizens with their daily activities and narratives but are also dedicated to the production of visuality. With no formal training from art schools, branded or otherwise, Maria Aparecida de Souza Pires deploys the 'waste disposal' of daily life—from car tires to old work clothes—as a trampoline for art; also adept at sourcing raw materials collected from her surroundings, she manipulates raw hewn wood, tree trunks, plant life, and various other pieces she collects from nature, giving them new meaning and possibility. Hilda Freire works with sculptures in clay, using different scales and styles; her art focuses on representations of women and pays homage to unprivileged groups such as practitioners of African-Brazilian religions, blue-collar workers, poor live-in housekeepers, and so forth. Although they have never been acknowledged by any mainstream art institution in Brazil, whose 'criterion of value' still favors formally trained artists, Maria Aparecida de Souza Pires and Hilda Freire have produced visualities that instigate 'new ways of seeing' and merit cultural significance in many ways. Their artworks neither descend from a 'traditional' medium nor depend on 'canonical viewing settings' of visual representation; rather, they consist in producing relationships with the world which do not result in 'seeing more,' but in seeing 'at least differently.' From this perspective, the paper finally demonstrates that grouping this kind of artistic production under the label of 'mere craft' has much more to do with who is privileged within the fields of power in the art system, who we see and who we do not see, and whose imagination of what is fed by which visual images in Brazilian contemporary society. Keywords: visual culture, artistic practice, women's art in the Brazilian State of Goiás, Maria Aparecida de Souza Pires, Hilda Freire
Procedia PDF Downloads 152
73 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) the potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) the requirements set by regulatory agencies (e.g., WSA section 11 applications). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is formed by integrating wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach in the Okanagan area for many years. Keywords: wave modelling, wind-wave, extreme value analysis, marina
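For contrast with the simplified fetch approach mentioned above, a minimal sketch of a JONSWAP-type fetch-limited estimate of significant wave height and peak period follows; the coefficients are the commonly cited textbook growth relations, and the wind and fetch inputs are hypothetical:

```python
import math

# Minimal sketch of a simplified fetch-limited estimate (the kind of analytical
# shortcut the integrated framework is compared against). Inputs are hypothetical.
g = 9.81

def fetch_limited_wave(u10_ms, fetch_m):
    x_nd = g * fetch_m / u10_ms**2                 # dimensionless fetch
    hs = 1.6e-3 * math.sqrt(x_nd) * u10_ms**2 / g  # significant wave height [m]
    tp = 0.286 * x_nd ** (1.0 / 3.0) * u10_ms / g  # peak period [s]
    return hs, tp

hs, tp = fetch_limited_wave(u10_ms=15.0, fetch_m=20_000.0)   # 15 m/s wind over 20 km fetch
print(f"Hs ~ {hs:.2f} m, Tp ~ {tp:.1f} s")
```

Such an estimate uses a single straight-line fetch and ignores lake geometry, refraction, and bottom effects, which is precisely where the full-lake modelling approach differs.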
Procedia PDF Downloads 84
72 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording
Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen
Abstract:
It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access. Hence, it gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers used is optimized for lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. The cavitation is detected by pump performance measurements, vibration measurements, and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers sampling at 131 kHz. The frequency-domain vibration data (up to 20 kHz) and the time-domain data are analyzed, as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but the head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings, and simulation results. The data indicate that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications. Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration
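The time-domain versus frequency-domain processing described above can be sketched minimally as follows; the accelerometer signal is synthetic, the 131 kHz sample rate and 20 kHz band are taken from the abstract, and the RMS window length is an assumption:

```python
import numpy as np

# Minimal sketch of the vibration processing: windowed RMS of the time signal
# (to catch abrupt peaks from cloud collapses) and an amplitude spectrum to 20 kHz.
fs = 131_000                                     # sample rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
signal = 0.2 * np.sin(2 * np.pi * 3000 * t) \
         + 0.05 * np.random.default_rng(2).standard_normal(t.size)   # synthetic signal

# Windowed RMS over 10 ms blocks (assumed window length).
win = int(0.010 * fs)
n_win = t.size // win
rms = np.sqrt(np.mean(signal[: n_win * win].reshape(n_win, win) ** 2, axis=1))

# One-sided amplitude spectrum, truncated to 20 kHz.
spec = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
keep = freqs <= 20_000

print("overall RMS:", rms.mean().round(4))
print("dominant frequency [Hz]:", freqs[keep][np.argmax(spec[keep])])
```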
Procedia PDF Downloads 180
71 Exploring Problem-Based Learning and University-Industry Collaborations for Fostering Students’ Entrepreneurial Skills: A Qualitative Study in a German Urban Setting
Authors: Eylem Tas
Abstract:
This empirical study aims to explore the development of students' entrepreneurial skills through problem-based learning within the context of university-industry collaborations (UICs) in curriculum co-design and co-delivery (CDD). The research question guiding this study is: "How do problem-based learning and university-industry collaborations influence the development of students' entrepreneurial skills in the context of curriculum co-design and co-delivery?” To address this question, the study was conducted in a big city in Germany and involved interviews with stakeholders from various industries, including the private sector, government agencies (govt), and non-governmental organizations (NGOs). These stakeholders had established collaborative partnerships with the targeted university for projects encompassing entrepreneurial development aspects in CDD. The study sought to gain insights into the intricacies and subtleties of UIC dynamics and their impact on fostering entrepreneurial skills. Qualitative content analysis, based on Mayring's guidelines, was employed to analyze the interview transcriptions. Through an iterative process of manual coding, 442 codes were generated, resulting in two main sections: "the role of problem-based learning and UIC in fostering entrepreneurship" and "challenges and requirements of problem-based learning within UIC for systematical entrepreneurship development.” The chosen experimental approach of semi-structured interviews was justified by its capacity to provide in-depth perspectives and rich data from stakeholders with firsthand experience in UICs in CDD. By enlisting participants with diverse backgrounds, industries, and company sizes, the study ensured a comprehensive and heterogeneous sample, enhancing the credibility of the findings. The first section of the analysis delved into problem-based learning and entrepreneurial self-confidence to gain a deeper understanding of UIC dynamics from an industry standpoint. It explored factors influencing problem-based learning, alignment of students' learning styles and preferences with the experiential learning approach, specific activities and strategies, and the role of mentorship from industry professionals in fostering entrepreneurial self-confidence. The second section focused on various interactions within UICs, including communication, knowledge exchange, and collaboration. It identified key elements, patterns, and dynamics of interaction, highlighting challenges and limitations. Additionally, the section emphasized success stories and notable outcomes related to UICs' positive impact on students' entrepreneurial journeys. Overall, this research contributes valuable insights into the dynamics of UICs and their role in fostering students' entrepreneurial skills. UICs face challenges in communication and establishing a common language. Transparency, adaptability, and regular communication are vital for successful collaboration. Realistic expectation management and clearly defined frameworks are crucial. Responsible data handling requires data assurance and confidentiality agreements, emphasizing the importance of trust-based relationships when dealing with data sharing and handling issues. The identified key factors and challenges provide a foundation for universities and industrial partners to develop more effective UIC strategies for enhancing students' entrepreneurial capabilities and preparing them for success in today's digital age labor market. 
The study underscores the significance of collaborative learning and transparent communication in UICs for entrepreneurial development in CDD. Keywords: collaborative learning, curriculum co-design and co-delivery, entrepreneurial skills, problem-based learning, university-industry collaborations
Procedia PDF Downloads 60
70 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain deposition at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day, 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates. Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities regarding their flood-risk assessment and gives insight into flood management actions and watershed development. Keywords: flood risk, nuisance flooding, urban flooding, FMEA
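The FMEA-style framing above, in which risk is a function of CoNF and PoNF, can be sketched minimally as a rating product in the spirit of a classic risk priority number. The 1-10 scales, the optional detection rating, and the example neighborhood ratings are hypothetical, not the study's calibrated criteria:

```python
# Minimal FMEA-style sketch of a nuisance-flood risk score combining the Probability
# of Nuisance Flooding (PoNF) with its Consequence (CoNF). Scales and ratings are hypothetical.
def nuisance_flood_risk(ponf: int, conseq: int, detection: int = 1) -> int:
    """Classic FMEA multiplies occurrence, severity and detection ratings."""
    for rating in (ponf, conseq, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings are expected on a 1-10 scale")
    return ponf * conseq * detection

neighborhoods = {
    # name: (PoNF from rainfall/tide exceedance frequency, CoNF from mitigation level)
    "low-lying coastal block": (8, 7),
    "inland block with upgraded drainage": (4, 3),
}
for name, (ponf, conseq) in neighborhoods.items():
    print(name, "->", nuisance_flood_risk(ponf, conseq))
```

In the study's terms, PoNF would be driven by the frequency and duration of threshold-exceeding rainfall and tide events, and CoNF by the community's mitigation and damage-avoidance preparation.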
Procedia PDF Downloads 100
69 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as 'direct numerical simulation', DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered preliminary results for both steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields that are very accurate. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equation, with the goal of analyzing the solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions. Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
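The viscous Burgers equation mentioned above, u_t + u u_x = ν u_xx, is a common unsteady benchmark. The details of the Integro-Differential Scheme are not reproduced here, so the following is a generic explicit finite-difference sketch of the benchmark problem only, not the IDS itself; the grid, viscosity, and initial condition are illustrative:

```python
import numpy as np

# Generic explicit finite-difference sketch of the viscous Burgers equation,
# u_t + u*u_x = nu*u_xx, on a periodic domain. Not the Integro-Differential Scheme.
nx, nu = 201, 0.01
x = np.linspace(0.0, 2.0 * np.pi, nx)
dx = x[1] - x[0]
u = np.sin(x) + 1.5                       # smooth initial wave that steepens in time

dt = 0.2 * min(dx / np.max(np.abs(u)), dx**2 / (2 * nu))   # conservative time step
for _ in range(2000):
    un = u.copy()
    # first-order upwind convection (u > 0 everywhere here) + central diffusion
    conv = un * (un - np.roll(un, 1)) / dx
    diff = nu * (np.roll(un, -1) - 2 * un + np.roll(un, 1)) / dx**2
    u = un + dt * (diff - conv)

print("max gradient after steepening:", np.max(np.abs(np.gradient(u, dx))).round(2))
```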
Procedia PDF Downloads 137
68 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is, therefore, the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is decomposed into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, the catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless. For example, phase change phenomena need the co-presence of the energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are described by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example the purity of the product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature. Keywords: phenomena, process intensification, process models, process options
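The combinatorial step described above can be sketched minimally: enumerate phenomena combinations, screen out infeasible ones with simple rules, and encode the survivors as binaries for a model-generation step. The phenomena names and screening rules below are illustrative only, not the paper's actual knowledge base:

```python
from itertools import combinations

# Minimal sketch of phenomena-combination generation and screening, with the
# surviving options encoded as binary vectors for a model generator.
PHENOMENA = ["reaction", "vapour_liquid_equilibrium", "liquid_liquid_equilibrium",
             "phase_change", "energy_transfer", "mass_transfer"]

def feasible(combo: frozenset) -> bool:
    # Example rule from the text: phase change requires the energy transfer phenomenon.
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    # A combination should accomplish at least one function (reaction or separation).
    separations = {"vapour_liquid_equilibrium", "liquid_liquid_equilibrium"}
    return "reaction" in combo or bool(combo & separations)

options = []
for size in range(1, len(PHENOMENA) + 1):
    for combo in combinations(PHENOMENA, size):
        s = frozenset(combo)
        if feasible(s):
            options.append([1 if p in s else 0 for p in PHENOMENA])

print(f"{len(options)} feasible phenomena combinations out of {2**len(PHENOMENA) - 1}")
print("example binary encoding:", options[0])
```

In the full methodology these binary vectors would be mapped onto functions and assembled into the superstructure before model generation and screening against a performance criterion.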
Procedia PDF Downloads 232