Search results for: direct steam generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6740

1040 The Proposal of a Shared Mobility City Index to Support Investment Decision Making for Carsharing

Authors: S. Murr, S. Phillips

Abstract:

One of the biggest challenges in entering a market with a carsharing or any other shared mobility (SM) service is sound investment decision-making. To support this process, the authors propose a city index evaluating different criteria. The goal of such an index is to benchmark cities along a set of external measures to answer the two main challenges: financial viability and understanding a city's specific requirements. The authors consulted several shared mobility projects and industry experts to create such a Shared Mobility City Index (SMCI). The current proposal of the SMCI consists of 11 individual index measures: general data (demographics, geography, climate and city culture), shared mobility landscape (current SM providers, public transit options, commuting patterns and driving culture) and political vision and goals (vision of the mayor, sustainability plan, bylaws/tenders supporting SM). To evaluate the suitability of the index, 16 cities on the East Coast of North America were selected and secondary research was conducted. The main sources of this study were census data, organisational records, independent press releases and informational websites. Only non-academic sources were used because the relevant data for the chosen cities is not published in academia. Applying the index measures to the selected cities resulted in three major findings. Firstly, density (number of inhabitants divided by city area) is not an indicator of the number of SM services offered: the city with the lowest density has five bike and carsharing options. Secondly, there is a direct correlation between commuting patterns and how many shared mobility services are offered: New York, Toronto and Washington DC have the highest public transit ridership and the most shared mobility providers. Lastly, all surveyed cities except one support shared mobility in their sustainability plans. The current version of the shared mobility index is proving to be a practical tool for evaluating cities and for understanding functional, political, social and environmental considerations. More cities will have to be evaluated to refine the criteria further. However, the current version of the index can already be used to assess cities on their suitability for shared mobility services and will assist investors in deciding which city is a financially viable market.
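
As a rough illustration of how such an index can aggregate its measures, the sketch below computes a weighted sum of normalized city scores. The measure names, scores, and weights are hypothetical and are not the authors' values.

```python
# Minimal sketch of a city index as a normalized weighted sum.
# Measures are assumed to be pre-normalized to [0, 1]; the names and
# weights are illustrative, not the SMCI's actual criteria.
measures = {"transit_ridership": 0.78, "sm_providers": 0.60, "policy_support": 1.00}
weights = {"transit_ridership": 0.4, "sm_providers": 0.3, "policy_support": 0.3}

def smci_score(measures: dict, weights: dict) -> float:
    """Weighted sum of normalized measures."""
    return sum(weights[k] * measures[k] for k in weights)

print(f"SMCI score: {smci_score(measures, weights):.2f}")  # e.g. 0.79
```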

Keywords: carsharing, transportation, urban planning, shared mobility city index

Procedia PDF Downloads 297
1039 Synthesis and Characterization of AFe₂O₄ (A = Ca, Co, Cu) Nano-Spinels: Application to Hydrogen Photochemical Production under Visible Light Irradiation

Authors: H. Medjadji, A. Boulahouache, N. Salhi, A. Boudjemaa, M. Trari

Abstract:

Hydrogen from renewable sources, such as solar, is referred to as green hydrogen. The water-splitting process using semiconductors as photocatalysts has attracted significant attention due to its potential for addressing the energy crisis and environmental pollution. Spinel ferrites of the MFe₂O₄ type have attracted broad interest in diverse energy conversion processes, including fuel cells and photoelectrocatalytic water splitting. This work focuses on preparing iron-based nano-spinels AFe₂O₄ (A = Ca, Co, Cu) as photocatalysts using the nitrate method. These materials were characterized both physically and optically and subsequently tested for hydrogen generation under visible light irradiation. Various techniques were used to investigate the properties of the materials, including TGA-DTA, X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), UV-visible spectroscopy, scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM-EDX), and X-ray photoelectron spectroscopy (XPS). XRD analysis confirmed the formation of pure phases at 850°C, with crystallite sizes of 31 nm for CaFe₂O₄, 27 nm for CoFe₂O₄, and 40 nm for CuFe₂O₄. The energy gaps, calculated from recorded diffuse reflectance data, are 1.85 eV for CaFe₂O₄, 1.27 eV for CoFe₂O₄, and 1.64 eV for CuFe₂O₄. SEM micrographs showed homogeneous grains with uniform shapes and medium porosity in all samples. EDX elemental analysis confirmed the absence of any contaminating elements, highlighting the high purity of the materials prepared via the nitrate route. XPS spectra revealed the presence of Fe³⁺ and O in all samples. Additionally, XPS analysis revealed the presence of Ca²⁺, Co²⁺, and Cu²⁺ on the surfaces of the CaFe₂O₄, CoFe₂O₄, and CuFe₂O₄ spinels, respectively. The photocatalytic activity was successfully evaluated by measuring H₂ evolution through the water-splitting process. The best performance was achieved with CaFe₂O₄ in a neutral medium (pH ~ 7), yielding 189 µmol at an optimal temperature of ~50°C. The highest hydrogen production for CoFe₂O₄ and CuFe₂O₄ was obtained at pH ~ 12, with releases of 65 and 85 µmol, respectively, under visible light irradiation at the same optimal temperature. Various conditions were investigated, including the pH of the solution, the use of hole scavengers, and recyclability.
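
Crystallite sizes such as those quoted above are typically obtained from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). A minimal sketch follows, assuming Cu Kα radiation and illustrative peak values rather than the paper's data.

```python
import math

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: a spinel reflection at 2-theta = 35.5 degrees with a
# FWHM of 0.30 degrees gives roughly 28 nm, the order of magnitude reported.
print(f"D = {scherrer_size(35.5, 0.30):.1f} nm")
```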

Keywords: hydrogen, MFe₂O₄, nitrate route, spinel ferrite

Procedia PDF Downloads 29
1038 An Analysis of Employee Attitudes to Organisational Change Management Practices When Adopting New Technologies Within the Architectural, Engineering, and Construction Industry: A Case Study

Authors: Hannah O'Sullivan, Esther Quinn

Abstract:

Purpose: The Architectural, Engineering, and Construction (AEC) industry has historically struggled to adapt to change. Although the ability to innovate and successfully implement organizational change has been demonstrated to be critical in achieving a sustainable competitive advantage in the industry, many AEC organizations continue to struggle when effecting organizational change. One prominent area of organizational change that presents many challenges in the industry is the adoption of new forms of technology, for example, Building Information Modelling (BIM). Certain Organisational Change Management (OCM) practices have been proven to be effective in supporting organizations to adopt change, but little research has been carried out on diverging employee attitudes to change relative to their roles within the organization. The purpose of this research study is to examine how OCM practices influence employee attitudes to change when adopting new forms of technology and to analyze the diverging employee perspectives within an organization on the importance of different OCM strategies. Methodology: Adopting an interview-based approach, a case study was carried out on a large, prominent Irish construction organization that is currently adopting a new technology platform for its projects. Qualitative methods were used to gain insight into differing perspectives on the utilization of various OCM practices and their efficacy when adopting a new form of technology on projects. Change agents implementing the organizational change gave insight into their intentions with the technology rollout strategy, while other employees were interviewed to understand how this rollout strategy was received and the challenges that were encountered. Findings: The results of this research study are currently being finalized; however, it is expected that employees in different roles will value different OCM practices above others. Findings and conclusions will be determined within the coming weeks. Value: This study will contribute to the body of knowledge relating to the introduction of new technologies, including BIM, to AEC organizations. It will also contribute to the field of organizational change management, providing insight into methods of introducing change that will be most effective for different employees based on their roles and levels of experience within the industry. The focus of this study steers away from traditional studies of the barriers to adopting BIM at an organizational level and centers on the direct effect on employees when a company changes the technology platform being used.

Keywords: architectural, engineering, and construction (AEC) industry, Building Information Modelling, case study, challenges, employee perspectives, organisational change management

Procedia PDF Downloads 62
1037 Na-Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application

Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng

Abstract:

In the past two decades, concern about protecting skin from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays reaching the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents are among the most significant tools for protecting the skin from UV irradiation, and they are either organic or inorganic. The development of inorganic UV blockers is essential, as they provide efficient UV protection over a wider spectrum than organic filters. Furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which absorb UV irradiation. Nowadays, most inorganic UV-blocking filters are based on titanium dioxide (TiO₂) and zinc oxide (ZnO). ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation due to many advantages, such as its modest refractive index (2.0), its absorption of the small fraction of solar radiation in the UV range at wavelengths of 385 nm or less, its high probability of recombination of photogenerated carriers (electrons and holes), large direct band gap, high exciton binding energy, non-hazardous nature, and high chemical and physical stability, which make it transparent in the visible region while providing UV-protective activity. A significant issue for ZnO use in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to render the material non-photocatalytic through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO by using inorganic surface modifiers. Doping ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with different metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. These doped materials generally exhibit greater photocatalytic activity than undoped ZnO in visible light. Metal doping can therefore be an effective technique for modifying the photocatalytic activity of ZnO. In the current work, however, we successfully reduced the photocatalytic activity of ZnO through Na doping, with samples fabricated via sol-gel and hydrothermal methods.

Keywords: photocatalytic, ROS, UVA, ZnO

Procedia PDF Downloads 138
1036 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks interact constantly with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, according to their dangerousness (high, medium, or low), types A, B and C have been established, recommending for each a different type of audible and visual signaling and gates, as well as horizontal and vertical signage. This classification is based on a weighting but, regrettably, it is not explained how the weight values were obtained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries consider a broader classification, so that each crossing is addressed more precisely and equipment costs are adjusted accordingly. Lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied, three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs bearing the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. Such an implementation is unaffordable for most municipal governments. Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An approach for improving the classification of grade crossings is then suggested. This approach must be based on design criteria, the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing, accidents involving motorized and non-motorized vehicles, land use and land management, type of area, and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 103
1035 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed well enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved in other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the major critical components of these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the conventional assumption, routinely made in quantum optics, that only the off-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. It is, however, no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are also discussed.
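
One way to see the claimed mechanism is to write the dipole operator with unequal diagonal elements, a minimal schematic form consistent with the abstract rather than the authors' full model:

```latex
\hat{d} = d_{11}\,|1\rangle\langle 1| + d_{22}\,|2\rangle\langle 2|
        + d_{12}\left(|1\rangle\langle 2| + |2\rangle\langle 1|\right),
\qquad d_{11} \neq d_{22}.
```

Under a monochromatic drive at frequency ω, the populations oscillate at the Rabi frequency Ω ≪ ω, so the permanent-dipole part of the mean dipole, d₂₂ + (d₁₁ − d₂₂)ρ₁₁(t), oscillates at Ω and can therefore radiate at a frequency that may fall in the terahertz range.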

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 265
1034 Genetic Advance versus Environmental Impact toward Sustainable Protein, Wet Gluten and Zeleny Sedimentation in Bread and Durum Wheat

Authors: Gordana Branković, Dejan Dodig, Vesna Pajić, Vesna Kandić, Desimir Knežević, Nenad Đurić

Abstract:

Wheat grain quality properties are influenced by genotype, environmental conditions and genotype × environment interaction (GEI). The increasing demand for more nutritious wheat products will direct future breeding programmes. Therefore, the aim of this investigation was to determine: i) the variability of protein content (PC), wet gluten content (WG) and Zeleny sedimentation volume (ZS); ii) the components of variance, broad-sense heritability (hb²), and expected genetic advance as percent of mean (GAM) for PC, WG, and ZS; and iii) correlations between PC, WG, ZS, and the most important agronomic traits, in order to assess expected breeding success versus environmental impact for these quality traits. The plant material consisted of 30 genotypes of bread wheat (Triticum aestivum L. ssp. aestivum) and durum wheat (Triticum durum Desf.). The trials were sown at three test locations in Serbia: Rimski Šančevi, Zemun Polje and Padinska Skela, during 2010-2011 and 2011-2012. The experiments were set up as a randomized complete block design with four replications. Each plot consisted of five rows of 1 m² (5 × 0.2 m × 1 m). PC, WG and ZS were determined by near-infrared spectroscopy (NIRS) with the Infraneo analyser (Chopin Technologies, France). PC, WG and ZS in bread wheat were in the ranges 13.4-16.4%, 22.8-30.3%, and 39.4-67.1 mL, respectively, and in durum wheat, 15.3-18.1%, 28.9-36.3%, and 37.4-48.3 mL, respectively. The dominant component of variance for PC, WG, and ZS in bread wheat was genotype, with genetic variance/GEI variance (VG/VG×E) ratios of 3.2, 2.9 and 1.0, respectively, whereas in durum wheat it was GEI, with VG/VG×E ratios of 0.70, 0.69 and 0.49, respectively. The hb² and GAM values for PC, WG and ZS in bread wheat were 94.9% and 12.6%, 93.7% and 18.4%, and 86.2% and 28.1%, respectively, and in durum wheat, 80.7% and 7.6%, 79.7% and 10.2%, and 74% and 11.2%, respectively. The statistically significant correlations most consistent across the six environments were, for bread wheat, between PC and spike length (-0.312 to -0.637); PC, WG, ZS and grain number per spike (-0.320 to -0.620; -0.369 to -0.567; -0.301 to -0.378, respectively); and PC and grain thickness (0.338 to 0.566); and, for durum wheat, between PC, WG, ZS and yield (-0.290 to -0.690; -0.433 to -0.753; -0.297 to -0.660, respectively); PC and plant height (-0.314 to -0.521); PC, WG and spike length (-0.298 to -0.597; -0.293 to -0.627, respectively); PC, WG and grain thickness (0.260 to 0.575; 0.269 to 0.498, respectively); and PC, WG and grain vitreousness (0.278 to 0.665; 0.357 to 0.690, respectively). Breeding success can be anticipated for ZS in bread wheat due to coupled high values of hb² and GAM, suggesting the existence of additive genetic effects, and also for WG in bread wheat, due to very high hb² and medium-high GAM. The small to medium negative correlations between PC, WG, ZS, and yield or yield components indicate the difficulty of selecting simultaneously for high quality and yield, unless the linkage of the particular genetic arrangements involved can be broken by recombination.
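
For readers unfamiliar with these statistics, the sketch below computes broad-sense heritability and GAM from variance components in the standard way; the numeric inputs are illustrative, not the study's estimates.

```python
# Minimal sketch assuming the standard multi-environment definitions:
#   h_b^2 = V_G / V_P,  with  V_P = V_G + V_GE / e + V_e / (r * e)
#   GAM   = 100 * k * h_b^2 * sqrt(V_P) / mean,  k = 2.06 at 5% selection
def broad_sense_h2(v_g: float, v_ge: float, v_e: float, n_env: int, n_reps: int):
    v_p = v_g + v_ge / n_env + v_e / (n_env * n_reps)
    return v_g / v_p, v_p

def gam(h2: float, v_p: float, trait_mean: float, k: float = 2.06) -> float:
    return 100 * k * h2 * v_p**0.5 / trait_mean

# Illustrative inputs: 6 environments (3 sites x 2 years), 4 replications.
h2, v_p = broad_sense_h2(v_g=1.2, v_ge=0.4, v_e=0.6, n_env=6, n_reps=4)
print(f"h_b^2 = {100 * h2:.1f}%, GAM = {gam(h2, v_p, trait_mean=15.0):.1f}%")
```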

Keywords: bread and durum wheat, genetic advance, protein and wet gluten content, Zeleny sedimentation volume

Procedia PDF Downloads 248
1033 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement of not overshooting 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that all emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate only for reducing emissions below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions. This is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

Procedia PDF Downloads 114
1032 The Implementation of a Nurse-Driven Palliative Care Trigger Tool

Authors: Sawyer Spurry

Abstract:

Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a nurse-driven palliative trigger tool to prompt the need for specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU-population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consults, given the presence of triggers, were collected via electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care include decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).

Keywords: palliative care, nursing, quality improvement, trigger tool

Procedia PDF Downloads 187
1031 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychological Capital

Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon

Abstract:

Work-family conflict (WFC) has been an active research area for the past decades. WFC harms individuals and organizations and is ultimately expected to impose losses on the company in the long run. Research on WFC has mainly focused on its effects on organizational effectiveness and job attitudes such as job satisfaction, organizational commitment, and turnover intention. This study differs from previous research in its consequence variable: we selected the positive job attitude of work engagement as a consequence of WFC. The primary purpose of this research is to identify the negative effects of WFC, starting from the recognition that research on the direct influence of WFC on work engagement is lacking. Based on conservation of resources (COR) theory and the job demands-resources (JD-R) model, an empirical model examining the negative effects of WFC, with emotional exhaustion as the link between WFC and work engagement, was proposed and validated. It was also analyzed how much positive psychological capital may buffer the negative effects arising from WFC within this relationship, and a mediated moderation model, in which positive psychological capital moderates the indirect effect of WFC on work engagement through emotional exhaustion, was verified. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education, and other sectors, of which 389 responses were used in the statistical analysis. The data were analyzed with SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0; hierarchical regression analysis and bootstrapping were used for hypothesis testing. Results showed that all hypotheses were supported. First, WFC had a negative effect on work engagement; specifically, work interference with family (WIF) had more negative effects than family interference with work (FIW). Second, emotional exhaustion was found to mediate the relationship between WFC and work engagement. Third, positive psychological capital was shown to moderate the relationship between WFC and emotional exhaustion. Fourth, in the integrated test of mediated moderation, positive psychological capital was demonstrated to buffer the relationships among WFC, emotional exhaustion, and work engagement. Across all hypotheses, WIF showed more negative effects than FIW. Finally, we discuss the theoretical and practical implications for research on and management of WFC, and propose limitations and directions for future research.
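
As an illustration of the bootstrapping step, the sketch below estimates a percentile confidence interval for the indirect effect (WFC to emotional exhaustion to work engagement) on simulated data. The variable names and effect sizes are hypothetical; the study itself used the SPSS PROCESS macro.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 389  # sample size as in the study; the data below are simulated
wfc = rng.normal(size=n)
exhaustion = 0.5 * wfc + rng.normal(size=n)                       # path a
engagement = -0.4 * exhaustion - 0.1 * wfc + rng.normal(size=n)   # paths b, c'

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    design = np.column_stack([np.ones_like(m), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of y ~ m | x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample with replacement
    boot.append(indirect_effect(wfc[idx], exhaustion[idx], engagement[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation
```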

Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict

Procedia PDF Downloads 217
1030 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (sequential quadratic programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equal-risk-contribution, minimum-volatility and minimum-value-at-risk portfolios) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
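
For reference, the standard formula aggregates the sub-module SCRs s through a fixed correlation matrix C as SCR_mkt = sqrt(sᵀ C s). The sketch below uses illustrative sub-module values and correlations (the regulatory matrix depends, for instance, on the direction of the interest rate shock) and is not the authors' implementation.

```python
import numpy as np

# Sub-module SCRs for one candidate portfolio (illustrative monetary units).
submodules = ["interest", "equity", "property", "spread", "fx", "concentration"]
s = np.array([10.0, 40.0, 5.0, 12.0, 8.0, 3.0])

# Illustrative correlation matrix, loosely patterned on the standard formula.
C = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

scr_mkt = np.sqrt(s @ C @ s)
print(f"Market SCR = {scr_mkt:.1f}")
# Each s_i is itself a non-smooth function of the asset allocation, which is
# why the overall criterion is non-convex and non-differentiable.
```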

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 113
1029 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and in gas turbines for power generation confront several atmospheric contaminants at various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count, and the corresponding surface area in order to verify the filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigating their initial pressure drop response and fractional efficiencies. The pleating densities used were 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured at ten flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. This experimental work has highlighted the underlying reasons for the reduction in filter permeability with increasing face velocity and pleat density. The surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corners of the pleats, and/or compression of the filtration medium. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached; beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count satisfying both the initial pressure drop and the efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action, and not on the total surface area the pleat density provides.
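
To see why pleat density and face velocity interact, the sketch below converts a volumetric flow rate and an assumed pleat geometry into a nominal media face velocity; the panel dimensions are illustrative, and the paper's point is precisely that the effective area is smaller than this geometric value.

```python
# Minimal sketch: nominal media face velocity from flow rate and pleat geometry.
# Panel dimensions and pleat depth are assumptions, not the tested filters' specs.
def face_velocity(q_m3_h: float, pleats_per_100mm: int,
                  panel_w: float = 0.6, panel_h: float = 0.6,
                  pleat_depth: float = 0.05) -> float:
    n_pleats = pleats_per_100mm * panel_w * 10      # pleats across the panel width
    area_m2 = 2 * n_pleats * pleat_depth * panel_h  # two media faces per pleat
    return (q_m3_h / 3600) / area_m2                # m/s through the media

for ppd in (28, 30, 32, 34):
    print(f"{ppd} pleats/100 mm: {face_velocity(3000, ppd):.3f} m/s")
# Pleat crowding and medium compression reduce the *effective* area below
# this geometric value, raising the real face velocity and pressure drop.
```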

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 132
1028 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today

Authors: Sangeet Anand

Abstract:

Body art and tattooing have been practiced as forms of self-expression for centuries, and this study analyzes the pertinence of tattoo culture to our everyday lives and our ancient past. Individuals of different cultures represent ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes and can be as simple as the makeup one puts on every day or something more permanent, such as a tattoo. In the long run, individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our own cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us an identity in an ever-changing world. The rise of hippie culture and the bodily decoration displayed by this fad have made it seem as though body art is a relatively new visual language; quite to the contrary, it is not. Through the exploration of cultural symbols, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter (art and symbolism) and distills it into a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, encompassing a subject and interviewer as well as source material. The major findings of this study reveal a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and their surrounding environments. Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. With this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more cross-cultural analysis.

Keywords: natural, symbolism, tattoo, terrestrial

Procedia PDF Downloads 103
1027 Hepatoprotective Action of Emblica officinalis Linn. against Radiation and Lead Induced Changes in Swiss Albino Mice

Authors: R. K. Purohit

Abstract:

Ionizing radiation induces cellular damage through direct ionization of DNA and other cellular targets and indirectly via reactive oxygen species, which may include effects from epigenetic changes. The need of the hour, therefore, is to search for an ideal radioprotector that could minimize the deleterious and damaging effects caused by ionizing radiation. Radioprotectors are agents that reduce the effects of radiation on cells when applied prior to radiation exposure. The aim of this study was to assess the efficacy of Emblica officinalis in reducing radiation- and lead-induced changes in the mouse liver. For the present experiment, healthy male Swiss albino mice (6-8 weeks old) were selected and maintained under standard conditions of temperature and light. Fruit extract of Emblica was fed orally at a dose of 0.01 ml/animal/day. The animals were divided into seven groups according to treatment: lead acetate solution as drinking water (group II), exposure to 3.5 or 7.0 Gy gamma radiation (group III), or combined treatment with radiation and lead acetate (group IV). The animals of the experimental groups (V, VI and VII) were administered Emblica extract for seven days prior to the radiation, lead acetate, or combined treatment, respectively. Animals from all groups were sacrificed by cervical dislocation at post-treatment intervals of 1, 2, 4, 7, 14 and 28 days. After sacrifice, pieces of liver were taken out, and some were kept at -20°C for biochemical assays. The histopathological changes included cytoplasmic degranulation, vacuolation, hyperaemia, and pycnotic and crenated nuclei. The changes observed in the control groups were compared with the respective experimental groups. An increase in total proteins, glycogen, acid phosphatase activity, alkaline phosphatase activity and RNA was observed up to day 14 in the non-drug-treated groups and up to day 7 in the Emblica-treated groups; thereafter, the values declined up to day 28 without reaching normal. Cholesterol and DNA showed a decreasing trend up to day 14 in the non-drug-treated groups and day 7 in the drug-treated groups; thereafter, the values rose up to day 28. The changes were dose dependent, and a synergistic effect was observed after the combined treatment with radiation and lead acetate. The livers of Emblica-treated animals exhibited less severe damage compared with non-drug-treated animals at all corresponding intervals. An earlier and faster recovery was also noticed in Emblica-pretreated animals. Thus, it appears that Emblica is potent enough to check lead- and radiation-induced hepatic lesions in Swiss albino mice.

Keywords: radiation, lead, emblica, mice, liver

Procedia PDF Downloads 318
1026 Personality Composition in Senior Management Teams: The Importance of Homogeneity in Dynamic Managerial Capabilities

Authors: Shelley Harrington

Abstract:

As a result of increasingly dynamic business environments, the creation and fostering of dynamic capabilities [those capabilities that enable sustained competitive success despite dynamism, through the awareness and reconfiguration of internal and external competencies], supported by organisational learning [itself a dynamic capability], has gained increasing momentum in the research arena. Presenting findings funded by the Economic and Social Research Council, this paper investigates the extent to which Senior Management Team (SMT) personality (at the trait and facet level) is associated with the creation of dynamic managerial capabilities at the team level and with effective organisational learning/knowledge sharing within the firm. In doing so, this research highlights the importance of micro-foundations in organisational psychology and specifically in dynamic capabilities, a field which to date has largely ignored the importance of psychology in understanding these important and necessary capabilities. Using a direct measure of personality (NEO PI-3) at the trait and facet level across 32 high-technology and finance firms in the UK, their CEOs (N=32) and their complete SMTs [N=212], a new measure of dynamic managerial capabilities at the team level was created and statistically validated for use in this work. A quantitative methodology was employed, with regression and gap analysis used to establish the empirical foundations of personality as a micro-foundation of dynamic capabilities. The results found that personality homogeneity within the SMT was required to strengthen the dynamic managerial capabilities of sensing, seizing and transforming, which in turn was required to reflect strong organisational learning at middle management level [N=533]. In particular, it was found that the greater the difference [t-score gaps] between the personality profiles of a Chief Executive Officer (CEO) and their complete, collective SMT, the lower the resulting self-reported dynamic managerial capabilities. For example, the larger the difference between a CEO's level of dutifulness, a facet contributing to the definition of conscientiousness, and their SMT's level of dutifulness, the lower the reported level of transforming, a capability fundamental to strategic change in a dynamic business environment. This directly questions recent trends, particularly in upper echelons research, that highlight the need for heterogeneity within teams. In doing so, it successfully positions personality as a micro-foundation of dynamic capabilities, thus contributing to recent discussions within the strategic management field calling for the need to empirically explore dynamic capabilities at such a level.
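
A minimal sketch of the gap computation described above follows, assuming profiles are expressed as standardized t-scores; the numbers are illustrative, not the study's data.

```python
import numpy as np

traits = ["neuroticism", "extraversion", "openness",
          "agreeableness", "conscientiousness"]
ceo = np.array([45.0, 60.0, 55.0, 50.0, 65.0])   # CEO t-scores (illustrative)
smt = np.array([[50, 55, 52, 48, 58],            # one row per SMT member
                [48, 58, 50, 52, 60],
                [55, 50, 47, 49, 62]], dtype=float)

gaps = np.abs(ceo - smt.mean(axis=0))            # per-trait CEO-SMT t-score gap
print(dict(zip(traits, gaps.round(1))), "| mean gap:", gaps.mean().round(2))
# The study then relates such gaps to team-level dynamic managerial capability
# scores, expecting larger gaps to predict lower capability ratings.
```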

Keywords: dynamic managerial capabilities, senior management teams, personality, dynamism

Procedia PDF Downloads 262
1025 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth have little in common in their preferences for institutional arrangements that resolve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth enthusiasts, referred to here as Simons after the economist Julian Simon, and growth skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth enthusiasts and growth skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that degrows, would still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 180
1024 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scour, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled computational fluid dynamics and discrete element model (CFD-DEM) has been developed to simulate the scour process, directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on the assessment of scour depth and time, which is key to managing the failure risk of bridge infrastructure.
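
A minimal sketch of the particle side of such a coupling is given below, assuming Stokes drag from the resolved flow field and a linear spring-dashpot contact model; the parameters are illustrative, and this is not the authors' implementation. In the full model, the RANS solver supplies the local fluid velocity, and the particles feed momentum back to the flow.

```python
import numpy as np

def dem_step(x, v, u_fluid, dt, d=2e-3, rho_p=2650.0, mu=1e-3, kn=1e4, cn=5.0):
    """One explicit DEM step for N particles in 2-D (positions x, velocities v)."""
    m = rho_p * np.pi * d**3 / 6                   # particle mass (sphere)
    f = 3 * np.pi * mu * d * (u_fluid - v)         # Stokes drag from the flow
    for i in range(len(x)):                        # pairwise contact forces
        for j in range(i + 1, len(x)):
            rij = x[j] - x[i]
            dist = np.linalg.norm(rij)
            overlap = d - dist
            if overlap > 0:                        # particles in contact
                n_hat = rij / dist
                fc = kn * overlap * n_hat - cn * (v[j] - v[i])
                f[i] -= fc                         # repulsion + damping on i
                f[j] += fc
    v_new = v + dt * (f / m + np.array([0.0, -9.81]))  # forces + gravity
    return x + dt * v_new, v_new

# Two touching particles in a slow horizontal current (illustrative values).
x = np.array([[0.0, 0.0], [1.5e-3, 0.0]])
v = np.zeros_like(x)
x, v = dem_step(x, v, u_fluid=np.array([0.1, 0.0]), dt=1e-5)
```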

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 127
1023 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues under extreme compression conditions have gained more and more attention in the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high-energy-density regime are still lacking. In macroscopic hydrodynamics, three numerical methods, namely direct numerical simulation (DNS), large eddy simulation (LES) and the Reynolds-averaged Navier-Stokes (RANS) equations, have achieved relatively acceptable consensus in the low-energy-density regime. In the high-energy-density regime, however, they cannot be applied directly, due to the occurrence of dissociation and ionization and dramatic changes in the equation of state and thermodynamic properties, which may render the governing equations invalid in some coupled situations. At the micro/meso scale, by contrast, methods based on molecular dynamics (MD) and the Monte Carlo (MC) model have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability of a metallic lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has some limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate ionization due to its 'ab initio' character; 2) owing to computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, relaxation-time choices, etc., and a series of tests was conducted to determine optimized parameters; 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micromechanism of the RMI in the high-energy-density regime.
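
As a contrast between the two methods, classical MD advances atoms under a predefined interatomic potential. A minimal velocity-Verlet step with a Lennard-Jones pair potential is sketched below in reduced units; this is a generic illustration, not the study's Li-H2 potentials.

```python
import numpy as np

def lj_forces(x, eps=1.0, sig=1.0):
    """Pairwise Lennard-Jones forces for particle positions x, shape (N, dim)."""
    f = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = x[i] - x[j]
            r2 = np.dot(r, r)
            s6 = (sig * sig / r2) ** 3
            fij = 24 * eps * (2 * s6 * s6 - s6) / r2 * r  # -dU/dr along r
            f[i] += fij
            f[j] -= fij
    return f

def verlet_step(x, v, dt=1e-3, m=1.0):
    """One velocity-Verlet integration step."""
    f = lj_forces(x)
    v_half = v + 0.5 * dt * f / m
    x_new = x + dt * v_half
    return x_new, v_half + 0.5 * dt * lj_forces(x_new) / m

# Two particles near the potential minimum (reduced units, illustrative).
x = np.array([[0.0, 0.0], [1.2, 0.0]])
v = np.zeros_like(x)
x, v = verlet_step(x, v)
```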

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 223
1022 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat in the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral-oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial in industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results for the turning of Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips were subject to uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
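
A full factorial design simply runs every combination of factor levels. The sketch below reproduces a 24-run layout under assumed factors (4 fluids x 2 tools x 3 speeds); this is one plausible reading of the design, not necessarily the authors' exact factors.

```python
from itertools import product

# Hypothetical factor levels chosen only to yield 4 x 2 x 3 = 24 runs.
fluids = ["Vasco1000", "VO-A", "VO-B", "VO-C"]
tools = ["uncoated coarse-grain WC", "coated WC"]
speeds_m_min = [60, 90, 120]

runs = list(product(fluids, tools, speeds_m_min))
print(len(runs), "tests")            # 24
for run in runs[:3]:
    print(run)                       # responses: Ra, flank wear, chip thickness
```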

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 273
1021 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid-state systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that point to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations of general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), with external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential to point out optimal driving configurations. The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
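
The following sketch illustrates the generic mechanism described above: a slow internal state relaxing toward a drive-dependent equilibrium modulates the conductance and pinches the current-voltage loop. The state equation and parameter values are illustrative, not the paper's specific model.

```python
import numpy as np

def iv_loop(freq=1.0, tau=0.2, g0=1.0, g1=0.8, cycles=3, steps=3000):
    """Current response of a conductance modulated by a slowly relaxing state."""
    t = np.linspace(0.0, cycles / freq, steps)
    dt = t[1] - t[0]
    V = np.sin(2 * np.pi * freq * t)
    x, I = 0.0, np.empty_like(t)
    for k, v in enumerate(V):
        x_eq = v * v                   # assumed drive-dependent equilibrium
        x += dt * (x_eq - x) / tau     # relaxation with characteristic time tau
        I[k] = (g0 + g1 * x) * v       # conductance modulated by the state x
    return V, I

V, I = iv_loop()
# When 1/tau is comparable to the drive frequency, I(V) traces a pinched
# hysteresis loop; for tau -> 0 or tau -> infinity it collapses to a line.
```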

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 90
1020 Exploring Coping Strategies among Caregivers of Children Who Have Survived Cancer

Authors: Noor Ismael, Somaya Malkawi, Sherin Al Awady, Taleb Ismael

Abstract:

Background/Significance: Cancer is a serious health condition that affects individuals’ quality of life during and after the course of the condition. Children who have survived cancer and their caregivers may deal with residual physical, cognitive, or social disabilities. There is little research on caregivers’ health and wellbeing after cancer, and to the authors’ best knowledge, there is no specific research on how caregivers cope with everyday stressors after cancer. Therefore, this study aimed to explore the coping strategies that caregivers of children who have survived cancer utilize to overcome everyday stressors. Methods: This study utilized a descriptive survey design. The sample consisted of 103 caregivers who visited the health and wellness clinic at a national cancer center (additional demographics are presented in the results). The sample included caregivers of children who had been off cancer treatments for at least two years at the beginning of data collection. The institution’s review board approved this study. Caregivers who agreed to participate completed the survey, which collected caregiver-reported demographic information and the Brief COPE, a measure of how frequently caregivers engage in certain coping strategies. The Brief COPE consists of 14 coping sub-scales: self-distraction, active coping, denial, substance use, use of emotional support, use of instrumental support, behavioral disengagement, venting, positive reframing, planning, humor, acceptance, religion, and self-blame. Data analyses included calculating scores for the fourteen coping sub-scales and analyzing the frequencies of demographics and coping strategies. Results: Of the 103 caregivers who participated in this study, 62% were mothers, 80% were married, 45% had finished high school, 50% did not work outside the house, and 60% had low family income. Results showed that religious coping (66%) and acceptance (60%) were the most utilized coping strategies, followed by positive reframing (45%), active coping (44%), and planning (43%). The least utilized coping strategies in our sample were humor (5%), behavioral disengagement (8%), and substance use (10%). Conclusions: Caregivers of children who have survived cancer mostly utilize religious coping and acceptance in dealing with everyday stressors. Because these coping strategies do not directly resolve stressors, as active coping and planning do, it is important to support caregivers in choosing and implementing effective coping strategies. Knowing from our results that some caregivers may utilize substance use as a coping strategy, which has negative health effects on caregivers and their children, direct interventions must target these caregivers and their families.
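The Brief COPE scoring step described in the methods can be sketched as follows; the item-to-sub-scale key shown is the standard published mapping for the 28-item instrument, but treat it as an assumption when adapting it to a specific administration:

```python
# A minimal sketch of Brief COPE scoring: each of the 14 sub-scales
# is scored from its two items (the instrument has 28 items).
import pandas as pd

subscales = {
    "self_distraction": [1, 19], "active_coping": [2, 7], "denial": [3, 8],
    "substance_use": [4, 11], "emotional_support": [5, 15],
    "instrumental_support": [10, 23], "behavioral_disengagement": [6, 16],
    "venting": [9, 21], "positive_reframing": [12, 17], "planning": [14, 25],
    "humor": [18, 28], "acceptance": [20, 24], "religion": [22, 27],
    "self_blame": [13, 26],
}

def score_brief_cope(df: pd.DataFrame) -> pd.DataFrame:
    """df has columns item_1 .. item_28 (responses 1-4); returns sub-scale sums."""
    return pd.DataFrame({
        name: df[[f"item_{i}" for i in items]].sum(axis=1)
        for name, items in subscales.items()
    })
```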

Keywords: caregivers, cancer, stress, coping

Procedia PDF Downloads 165
1019 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions

Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe

Abstract:

Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many points of view, consumers wonder how the nutritional composition of cultured fish compares with that of fish caught from the wild. Therefore, the aim of this study was to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. The fish were caught from the wild or obtained from commercial cage-culture companies. Eight fish were obtained for each group; the average weights were 245.8±13.5 g for the cultured and 149.4±13.3 g for the wild samples. All samples were stored frozen (-18 °C), and analyses were carried out in triplicate using homogenized boneless fish fillets. Proximate compositions (protein, ash, moisture, and lipid) were determined. The fatty acid composition was analyzed with a Clarus 500 GC with autosampler (Perkin-Elmer, USA). Statistically significant differences in proximate composition were found between the cage-cultured and wild samples of sharpsnout seabream. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA), and PUFA contents of cultured and wild sharpsnout seabream were also significantly different, and the ω3/ω6 ratio was higher in the cultured group. In particular, the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One reason for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors, and whether the fish are farmed or wild. The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils; in our study, the cultured sharpsnout seabream appears to be more nutritious in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey, under grant no FBA-2016-5780.
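The class sums and ω3/ω6 ratio used to compare the groups reduce to simple arithmetic over the fatty-acid profile; a minimal sketch follows, with placeholder percentages rather than the study's measured values:

```python
# Summing fatty acids by class and computing the ω3/ω6 nutritional index.
OMEGA3 = {"18:3n-3", "20:5n-3", "22:6n-3"}   # ALA, EPA, DHA
OMEGA6 = {"18:2n-6", "20:4n-6"}              # LA, ARA

fillet = {"16:0": 18.2, "18:1n-9": 24.5, "18:2n-6": 8.1,
          "18:3n-3": 1.2, "20:5n-3": 6.3, "22:6n-3": 10.4, "20:4n-6": 1.9}

omega3 = sum(v for k, v in fillet.items() if k in OMEGA3)
omega6 = sum(v for k, v in fillet.items() if k in OMEGA6)
print(f"ω3/ω6 = {omega3 / omega6:.2f}")  # higher values indicate better nutritional quality
```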

Keywords: Diplodus puntazzo, cage-cultured, PUFA, fatty acid

Procedia PDF Downloads 260
1018 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the type of processing used to derive it, and the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the superficial nature of the DEM, so that the sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks performed to obtain the point cloud used as the reference (PCref) for evaluating the quality of the PCpro. Each PCref consists of a 50 × 50 m patch obtained by registering four different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km²; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 m; the scanner was mounted inverted so that the characteristic shadow circle present when the scanner is upright does not appear. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
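The cloud-to-cloud comparison mentioned at the end can be sketched with nearest-neighbour distances from each PCpro point to the PCref patch; the sketch below uses the Open3D library, and the file names are placeholders:

```python
# Cloud-to-cloud comparison: per-point distances from the LiDAR product
# to the terrestrial-scan reference patch.
import numpy as np
import open3d as o3d

pc_ref = o3d.io.read_point_cloud("pcref_patch.ply")   # terrestrial-scan reference patch
pc_pro = o3d.io.read_point_cloud("pcpro_patch.ply")   # LiDAR product clipped to the patch

d = np.asarray(pc_pro.compute_point_cloud_distance(pc_ref))
print(f"mean={d.mean():.3f} m  RMSE={np.sqrt((d**2).mean()):.3f} m  "
      f"p95={np.percentile(d, 95):.3f} m")
```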

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 96
1017 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India

Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.

Abstract:

Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater, seacoast mining, etc. The geological and hydrogeological conditions, including groundwater heads and the groundwater pumping pattern in coastal areas, also influence the magnitude of seawater intrusion. However, the problem can be remediated by preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in the salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that most of the 25 groundwater samples collected from the study area contain high concentrations of total dissolved solids (TDS), with Na and Cl as the major ionic fractions. The Cl/HCO₃ ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines and at Mitiyala village, Lotpur, and Lunsapur village, at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques, in combination with the Analytical Hierarchy Process (AHP) and considering hydrogeological factors, geographical features, drainage pattern, water quality, and the geophysical results for the study area, were used to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated. AHP in combination with weighted overlay analysis is an appropriate method to identify rainwater harvesting potential zones. The suitability map can further serve as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for the direct use of harvested rainwater.
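The AHP weighting behind the suitability model follows the standard eigenvector procedure; a minimal sketch is given below, with an illustrative pairwise comparison matrix and criteria that are assumptions rather than the study's actual inputs:

```python
# AHP: derive criterion weights from a pairwise comparison matrix
# and check the consistency ratio before the weighted overlay.
import numpy as np

criteria = ["slope", "drainage density", "land use", "water quality"]
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector (criterion weights)

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print(dict(zip(criteria, w.round(3))), "CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable
```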

Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion

Procedia PDF Downloads 168
1016 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints

Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno

Abstract:

Around the world, many countries have committed to decarbonizing their electricity systems. Under this global drive, converter-interfaced generators (CIG), such as wind and photovoltaic generation, appear as cornerstones for achieving these energy targets. Despite its benefits, the increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are the limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provides in power system operation, its integration mitigates the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit current, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints into the BESS allocation, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on a reduced model of the Chilean electrical system. The results show that integrating BESS with stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and for coordinating their placement in future converter-dominated power systems.
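A highly simplified version of such an allocation problem can be written as a linear program: minimize BESS cost subject to a per-bus strength floor and a system-wide flexibility requirement. The sketch below is illustrative only; the linear strength model, gains, and all numbers are assumptions, not the paper's formulation:

```python
# Toy BESS allocation: choose MW per candidate bus at minimum cost while
# keeping every bus above a short-circuit-level (strength) floor.
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.0, 1.2, 0.9])        # relative cost per MW at buses A, B, C
scc0 = np.array([900.0, 600.0, 750.0])  # existing short-circuit level, MVA
gain = np.array([3.0, 3.0, 3.0])        # assumed MVA of strength added per MW of BESS
scc_min, flex_req = 800.0, 120.0        # strength floor (MVA), flexibility need (MW)

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub
A_ub = np.vstack([-np.diag(gain),       # strength: scc0 + gain*x >= scc_min
                  -np.ones((1, 3))])    # flexibility: sum(x) >= flex_req
b_ub = np.concatenate([scc0 - scc_min, [-flex_req]])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 200)] * 3)
print(res.x.round(1), "MW per bus; cost =", round(res.fun, 1))
```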

Keywords: battery energy storage, power system stability, system strength, weak power system

Procedia PDF Downloads 58
1015 Gender and Total Compensation, in an ‘Age’ of Disruption

Authors: Daniel J. Patricio Jiménez

Abstract:

The term 'total compensation' refers to salary, training, innovation, development and, of course, motivation; total compensation is an open and flexible system that must facilitate work-life balance and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a special role in this. Spain, in its statutory and collective-bargaining development, has not given a sufficient response to new phenomena such as bonuses, stock options, or fringe benefits (constructed dogmatically and through court decisions), nor to the new digital reality, where cryptocurrency, new collaborative models, and new forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD 902/2020 on 14 April 2021, certain measures are necessary under the principle of pay transparency; the valuation of jobs, the pay register (RD 6/2019), and the pay audit are examples of this. Analyzing the methodologies, and in particular how the factors are determined and weighted so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect reality, and since the beginning of the pandemic there has been clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs, and, based on internal equity, it reflects the competitiveness of the company in terms of human capital. Spain has lost, and has not recovered, the relative weight of its wages; this has a direct impact on competitiveness and, consequently, on the precariousness of employment and, undoubtedly, on levels of extreme poverty. Training is becoming a strategic factor more than ever; the new digital reality requires every component of the system to be connected. Transversality is imposed on us, forcing us to redefine content and respond to the demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when pandemics, silence, and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority and must be treated and communicated in an inclusive way, because doing so increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.
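The pay-register arithmetic implied by RD 902/2020, averaged pay by job category disaggregated by sex, can be sketched as follows; the column names and figures are assumptions for illustration:

```python
# A minimal pay-register sketch: mean and median total compensation
# by job category and sex, and the resulting mean gap per category.
import pandas as pd

df = pd.DataFrame({
    "category": ["analyst", "analyst", "analyst", "manager", "manager", "manager"],
    "sex":      ["F", "M", "F", "F", "M", "M"],
    "total_compensation": [31000, 34000, 32000, 52000, 58000, 56000],
})

stats = (df.groupby(["category", "sex"])["total_compensation"]
           .agg(["mean", "median"])
           .unstack("sex"))
gap = 100 * (stats[("mean", "M")] - stats[("mean", "F")]) / stats[("mean", "M")]
print(gap.round(1))  # mean gap (%) per category; Spanish law requires
                     # justification when one sex's average exceeds the other's by 25%+
```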

Keywords: diversity, gender gap, human resources, sustainability

Procedia PDF Downloads 165
1014 The Acquisition of /r/ By Setswana-Learning Children

Authors: Keneilwe Matlhaku

Abstract:

Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulation in the acquisition of rhotics. This article provides a detailed analysis of the early development of a rhotic phoneme, the apical trill /r/, by monolingual Setswana (Tswana S30) children between 1 and 4 years of age. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution by dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months, with no or only minimal disruption of their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children’s data. Phon also enabled the generation of the children’s phonological inventories for comparison with adult target IPA forms. We explain the children’s patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We show how the substitution patterns observed in the data can be captured by considering the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in their production. These considerations, in turn, highlight some of the most central challenges the child faces in learning these auditory-articulatory mappings. We also provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, showing that the variability and volatility of the substitution patterns of /r/ are also brought about by properties of the children’s ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana.
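The tallying of substitution patterns that underlies such an analysis can be sketched as below; the classification map and token pairs are illustrative assumptions, not the study's Phon output:

```python
# Classifying what replaced the target trill /r/ across child productions.
from collections import Counter

substitution_class = {"w": "gliding", "j": "gliding", "d": "coronal stopping",
                      "t": "coronal stopping", "l": "lateralization",
                      "ð": "dental fricative", "ʁ": "uvular fricative",
                      "": "deletion"}

# (target segment, child's realization) pairs extracted per /r/ token
tokens = [("r", "l"), ("r", "w"), ("r", ""), ("r", "d"), ("r", "r"), ("r", "l")]

patterns = Counter(
    substitution_class.get(produced, "other")
    for target, produced in tokens if produced != target
)
print(patterns)  # e.g. Counter({'lateralization': 2, 'gliding': 1, ...})
```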

Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping

Procedia PDF Downloads 28
1013 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions

Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka

Abstract:

In recent years, graphene-based materials have found more and more applications in biological science. As thin, tough, transparent, and chemically resistant materials, they appear to be very good candidates for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into the possibility of their application in cancer therapy. Currently, analyses of their potential use in photothermal therapy and as drug carriers are most commonly performed, and the direct anticancer properties of graphene-based materials are also being tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (the macroscale). In this type of cell culture, however, the cells grow on a synthetic surface under static conditions; for this reason, macroscale cell culture does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, have been proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present an evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver, and colon cancer cell lines in a microfluidic system, in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified by coating with fibronectin (2D, monolayer) or poly(vinyl alcohol) (3D, spheroids). After spheroid creation (3D) and cell attachment (2D, monolayer), the selected concentrations of GO were introduced into the microsystems. Monolayer and spheroid viability/proliferation were then checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, morphological changes of the cells were determined by microscopic analysis. Additionally, on the last day of the culture, differential staining with calcein AM and propidium iodide was performed. We noted that GO influenced the viability of all tested cell lines in both the monolayer and spheroid arrangements. We showed that GO caused a greater decrease in viability/proliferation for spheroids than for monolayers (observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometries of the microchambers for 2D and 3D cell cultures; probably, GO was flushed out of the flat microchambers used for 2D culture. These results were also confirmed by the differential staining. Comparing our results with studies conducted at the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
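The viability readout described above reduces to normalizing blank-corrected treated signals against untreated controls; a minimal sketch with placeholder fluorescence values follows:

```python
# Viability relative to untreated control, from fluorescence readings
# of an alamarBlue-type assay after blank subtraction.
import numpy as np

blank = 120.0                                   # medium-only wells
control = np.array([5200.0, 5100.0, 5350.0])    # untreated cells
treated = np.array([3150.0, 3020.0, 3300.0])    # GO-exposed cells

viability = 100 * (treated - blank).mean() / (control - blank).mean()
print(f"viability = {viability:.1f}% of control")
```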

Keywords: cytotoxicity, graphene oxide, monolayer, spheroid

Procedia PDF Downloads 123
1012 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment

Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska

Abstract:

The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages and for various work trades. In 2016, the project received financing from the National Center for Research and Development under research grant PBS3/A2/19/2015. The data, calculations, and analyses discussed in this paper were created during the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project is the assessment of workers’ visual concentration on sight zones, as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers’ observation zones. SMI Eye Tracking Glasses are a tool that allows us to analyze, in real time, where a worker’s eyesight is concentrated and, consequently, to build a map of the worker’s gaze concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers took part in the experiment, including the monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also be useful in more advanced analyses. Thanks to this technology, we have verified not only the main focus of workers’ eyes during work on or next to scaffolding, but also which changes in the surrounding environment during a shift influenced their concentration. This study has shown that workers’ eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors’ findings, we observed that the reflective parts of the scaffolding were not better recognized by workers in their direct workplaces. We noticed that red curbs were the only well-recognized parts, and only on very few scaffolds; surprisingly, across a large number of samples, we did not record any significant concentration on those curbs. Conclusion: We have found the eye-tracking method useful for constructing the risk perception and worker behavior sub-modules of the SURAM model. We have also found that a worker’s initial stress and visual working conditions seem to be more predictive for assessing a developing risky situation or an accident than other parameters relating to the work environment.
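The gaze-concentration figure reported above (the share of shift time spent on work-related zones) can be sketched as a dwell-time computation over areas of interest (AOIs); the AOI rectangles and fixation records below are placeholders, not SMI exports:

```python
# Share of fixation time falling inside work-related areas of interest.
work_aois = {"scaffold": (0, 0, 400, 600), "tools": (400, 0, 600, 200),
             "coworker": (600, 0, 800, 300)}   # (x0, y0, x1, y1) in pixels

fixations = [(120, 340, 0.45), (520, 150, 0.30), (700, 500, 0.25),
             (200, 100, 0.60)]                  # (x, y, duration_s)

def in_aoi(x, y):
    return any(x0 <= x < x1 and y0 <= y < y1
               for x0, y0, x1, y1 in work_aois.values())

total = sum(d for _, _, d in fixations)
on_task = sum(d for x, y, d in fixations if in_aoi(x, y))
print(f"{100 * on_task / total:.1f}% of fixation time on work-related AOIs")
```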

Keywords: accident assessment model, eye tracking, occupational safety, scaffolding

Procedia PDF Downloads 194
1011 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs

Authors: Osamede Asowata, Christo Pienaar, Johan Bekker

Abstract:

Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation, and higher system-level energy harvesting capability under shaded or PV manufacturing mismatch conditions, as compared with single or multi-string inverters. Novel residential-scale PV arrays are commonly connected to the grid either by a single DC-AC inverter connected to a series, parallel, or series-parallel string of PV panels, or by many small DC-AC inverters which connect one or two panels directly to the AC grid. With increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using metal oxide semiconductor field-effect transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current, and low output AC-current distortion, with no need for split capacitors. The detailed power stage operating principles, the pulse width modulation scheme, the multilevel bootstrap power supply, and the integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs operate efficiently in the system, but also that the ground leakage current issues are alleviated in the proposed inverter and that a maximum efficiency of 98% is achieved for the associated driver circuit. This, in turn, motivates a possible photovoltaic panel switching technique, which would help to reduce the effect of cloud movements and improve the overall efficiency of the system.
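The sinusoidal PWM scheme mentioned above can be sketched as a comparison between a 50 Hz reference and a triangular carrier; the frequencies and modulation index below are assumptions, not the prototype's values:

```python
# Sinusoidal PWM sketch: gate signals from comparing a 50 Hz reference
# with a high-frequency triangular carrier.
import numpy as np

fs, f_ref, f_carrier, m = 1_000_000, 50.0, 20_000.0, 0.9
t = np.arange(0, 0.02, 1 / fs)                      # one fundamental period

ref = m * np.sin(2 * np.pi * f_ref * t)             # modulating reference
carrier = 2 * np.abs(2 * ((f_carrier * t) % 1) - 1) - 1  # triangle in [-1, 1]

gate_high = (ref > carrier).astype(int)             # high-side MOSFET gate
gate_low = 1 - gate_high                            # complementary gate (dead time omitted)
print(f"duty over period: {gate_high.mean():.3f}")  # ~0.5 for a sinusoidal reference
```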

Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter

Procedia PDF Downloads 119