Search results for: direct shear
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4441

631 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River

Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko

Abstract:

Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing the need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River in New Jersey. The study, performed over a two-year period, included in-depth field evaluation of both the groundwater and surface water systems, supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model, based upon a bathymetric and flow study of the river, was used to simulate contaminant concentrations over space within the river.
The modeling results helped demonstrate that because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
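The unit conversion behind the daily mass-loading estimate described above is simple enough to sketch. The discharge rate, concentration, and river flow below are hypothetical round numbers, not values from the study, and the fully mixed increment is a far cruder idealization than the MODFLOW/MT3DMS and dispersion modeling the authors actually used:

```python
def daily_mass_loading(discharge_m3_per_day: float, conc_mg_per_l: float) -> float:
    """Daily contaminant mass load (kg/day) from groundwater discharge.

    1 m^3 = 1000 L and 1 mg = 1e-6 kg, so (mg/L) * (m^3/day) * 1e-3 -> kg/day.
    """
    return discharge_m3_per_day * conc_mg_per_l * 1e-3

def fully_mixed_river_concentration(load_kg_per_day: float,
                                    river_flow_m3_per_day: float) -> float:
    """Idealized fully-mixed concentration increment in the river (mg/L)."""
    return load_kg_per_day * 1e3 / river_flow_m3_per_day

# Hypothetical numbers: 500 m^3/day of discharge at 2 mg/L into a river
# carrying 5e6 m^3/day of flow.
load = daily_mass_loading(500.0, 2.0)                  # 1.0 kg/day
delta_c = fully_mixed_river_concentration(load, 5e6)   # 0.0002 mg/L
```

Even this back-of-the-envelope version shows why a large tidal river can dilute a small landfill discharge below measurable levels, which is the qualitative conclusion of the study.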

Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling

630 Dialectic Relationship between Urban Pattern Structural Methods and Construction Materials in Traditional Settlements

Authors: Sawsan Domi

Abstract:

Urban patterns of traditional settlements can be identified in various ways, one of them being a three-dimensional 'reading' of the urban web: the density of structures, the construction materials, and the colors used. The objectives of this study are to interpret and understand the relation between the formation of traditional settlements and the shape and structure of their structural methods. The study first considered the components of the historical neighborhood, which reflect the social and economic influences on the urban planning pattern. It then analyzed the main components of the old neighborhood: the urban patterns and street systems, the traditional architectural elements, and the construction materials and their usage. The 'Hamasa' neighborhood in the 'Al Buraimi' Governorate is considered one of the most important archaeological sites in the Sultanate of Oman. The vivid features of this archaeological site are a living witness to the genius of the Omani people and their unique architecture. 'Hamasa' is also considered the oldest human settlement in the 'Al Buraimi' Governorate; it used to be the gathering area for Arab and Omani tribes coming from other governorates of Oman. In this old settlement, local characters were created to meet the climatic conditions and the social and religious requirements of life. Traditional buildings were built of materials that were available in the surrounding environment and within hand's reach. The historical component contained four main separate neighborhoods. The morphological structure of 'Hamasa' was characterized by a continuous and densely built-up pattern, featuring close interdependence between the spatial and functional patterns. The streets linked the plots, the marketplace, and the open areas. Consequently, the traditional fabric had narrow streets with one- and two-storey houses.
The materials used in the buildings of historical 'Hamasa' are the traditionally used materials, cleverly employed in the construction of local facilities. Most of these materials are locally made and formed and were used by the locals. The 'Hamasa' neighborhood is an example for analyzing urban patterns and geometrical features: the old 'Hamasa' retains the patterns of its old settlements, and its urban patterns are defined by both form and structure. The traditional architecture of the 'Hamasa' neighborhood evolved as a direct result of its climatic conditions. The study finds that the neighborhood is characterized by the construction materials used, the scale of the residential structures, and the street system; together these form the urban pattern of the settlement.

Keywords: urban pattern, construction materials, neighborhood, architectural elements, historical

629 The Proposal of a Shared Mobility City Index to Support Investment Decision Making for Carsharing

Authors: S. Murr, S. Phillips

Abstract:

One of the biggest challenges when entering a market with a carsharing or any other shared mobility (SM) service is sound investment decision-making. To support this process, the authors argue that a city index evaluating different criteria is necessary. The goal of such an index is to benchmark cities along a set of external measures to address the two main challenges: assessing financial viability and understanding each city's specific requirements. The authors have consulted several shared mobility projects and industry experts to create such a Shared Mobility City Index (SMCI). The current proposal of the SMCI consists of 11 individual index measures: general data (demographics, geography, climate, and city culture), shared mobility landscape (current SM providers, public transit options, commuting patterns, and driving culture), and political vision and goals (vision of the mayor, sustainability plan, bylaws/tenders supporting SM). To evaluate the suitability of the index, 16 cities on the East Coast of North America were selected and secondary research was conducted. The main sources of this study were census data, organisational records, independent press releases, and informational websites. Only non-academic sources were used because the relevant data for the chosen cities is not published in academia. Applying the index measures to the selected cities resulted in three major findings. Firstly, density (number of inhabitants divided by city area) is not an indicator of the number of SM services offered: the city with the lowest density has five bike- and carsharing options. Secondly, there is a direct correlation between commuting patterns and how many shared mobility services are offered: New York, Toronto, and Washington DC have the highest public transit ridership and the most shared mobility providers. Lastly, all surveyed cities except one support shared mobility in their sustainability plans.
The current version of the shared mobility index is proving to be a practical tool for evaluating cities and for understanding functional, political, social, and environmental considerations. More cities will have to be evaluated to refine the criteria further. Nevertheless, the current version of the index can be used to assess cities on their suitability for shared mobility services and will assist investors in deciding which cities are financially viable markets.
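The abstract does not publish the SMCI's scoring scheme. Purely as an illustration of how measures like these can be combined into a composite city score, the sketch below min-max-normalizes a few invented city measures and applies invented weights; none of the measure names, values, or weights come from the paper:

```python
def min_max_normalize(values):
    """Scale raw measures to [0, 1] across the benchmarked cities."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(city_measures, weights):
    """Weighted sum of normalized measures.

    `city_measures` maps measure name -> list of raw values (one per city);
    `weights` maps measure name -> weight (assumed to sum to 1).
    """
    names = list(city_measures)
    normalized = {n: min_max_normalize(city_measures[n]) for n in names}
    n_cities = len(next(iter(city_measures.values())))
    return [sum(weights[n] * normalized[n][i] for n in names)
            for i in range(n_cities)]

# Three hypothetical cities and two SMCI-style measures (values invented).
measures = {
    "transit_ridership": [100.0, 250.0, 400.0],
    "population_density": [1500.0, 3000.0, 4500.0],
}
weights = {"transit_ridership": 0.6, "population_density": 0.4}
scores = composite_index(measures, weights)  # highest score = best candidate
```

Min-max normalization keeps every measure on a comparable scale, so the weights alone express how much each criterion matters to the investment decision.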

Keywords: carsharing, transportation, urban planning, shared mobility city index

628 An Analysis of Employee Attitudes to Organisational Change Management Practices When Adopting New Technologies Within the Architectural, Engineering, and Construction Industry: A Case Study

Authors: Hannah O'Sullivan, Esther Quinn

Abstract:

Purpose: The Architectural, Engineering, and Construction (AEC) industry has historically struggled to adapt to change. Although the ability to innovate and successfully implement organizational change has been demonstrated to be critical to achieving a sustainable competitive advantage in the industry, many AEC organizations continue to struggle when effecting organizational change. One prominent area of organizational change that presents many challenges in the industry is the adoption of new forms of technology, for example, Building Information Modelling (BIM). Certain Organisational Change Management (OCM) practices have been proven to be effective in supporting organizations as they adopt change, but little research has been carried out on how employee attitudes to change diverge relative to employees' roles within the organization. The purpose of this research study is to examine how OCM practices influence employee attitudes to change when adopting new forms of technology and to analyze the diverging employee perspectives within an organization on the importance of different OCM strategies. Methodology: Adopting an interview-based approach, a case study was carried out on a large, prominent Irish construction organization that is currently adopting a new technology platform for its projects. Qualitative methods were used to gain insight into differing perspectives on the utilization of various OCM practices and their efficacy when adopting a new form of technology on projects. The change agents implementing the organizational change gave insight into their intentions for the technology rollout strategy, while other employees were interviewed to understand how this rollout strategy was received and the challenges that were encountered. Findings: The results of this research study are currently being finalized; however, it is expected that employees in different roles will value different OCM practices above others.
Findings and conclusions will be determined within the coming weeks. Value: This study will contribute to the body of knowledge relating to the introduction of new technologies, including BIM, to AEC organizations. It will also contribute to the field of organizational change management, providing insight into methods of introducing change that will be most effective for different employees based on their roles and levels of experience within the industry. The focus of this study steers away from traditional studies of the barriers to adopting BIM in its first instance at an organizational level and centers on the direct effect on employees when a company changes the technology platform being used.

Keywords: architectural, engineering, and construction (AEC) industry, Building Information Modelling, case study, challenges, employee perspectives, organisational change management

627 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application

Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng

Abstract:

In the past two decades, concern about protecting skin from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays reaching the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents, which may be either organic or inorganic, are one of the most significant tools for protecting the skin from UV irradiation. Developing inorganic UV blockers is essential because they provide efficient UV protection over a wider spectrum than organic filters; furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which absorb the UV irradiation. Nowadays, most inorganic UV-blocking filters are based on titanium dioxide (TiO2) and zinc oxide (ZnO). ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation thanks to many advantages, such as its modest refractive index (2.0), absorption of a small fraction of solar radiation in the UV range at wavelengths equal to or less than 385 nm, high recombination probability of photogenerated carriers (electrons and holes), large direct band gap, high exciton binding energy, non-hazardous nature, and high chemical and physical stability, which make it transparent in the visible region while providing UV protection. A significant issue for ZnO use in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to make a non-photocatalytic material through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO by using inorganic surface modifiers.
Doping ZnO with different metals is another way to modify its photocatalytic activity. Successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has recently been reported. Most of these doped materials, however, exhibit greater photocatalytic activity than undoped ZnO under visible light; metal doping can nevertheless be an effective technique for modifying the photocatalytic activity of ZnO. In the current work, we successfully reduced the photocatalytic activity of ZnO through Na doping, with materials fabricated via sol-gel and hydrothermal methods.

Keywords: photocatalytic, ROS, UVA, ZnO

626 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks are in constant interaction with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, according to their dangerousness (high, medium, or low), types A, B, and C have been established, with a different type of audible and visual signaling and gates, as well as horizontal and vertical signage, recommended for each. This classification is based on a weighting, but regrettably it is not explained how the weight values were obtained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries use broader classifications, so that attention to each crossing is addressed more precisely and equipment costs are adjusted. The lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied: three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs carrying the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. Such an implementation is unaffordable for most municipal governments.
Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An improved approach to classifying grade crossings is then suggested. This approach must be based on design criteria; the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing; accidents involving motorized and non-motorized vehicles; land use and land management; the type of area; and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
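A weighting-based A/B/C classification of the kind the abstract criticizes can be sketched as a weighted danger score mapped onto class thresholds. The factor names, weights, and thresholds below are entirely invented for illustration; the Mexican regulation's actual weight values are precisely what the authors say is not published:

```python
# Hypothetical factors and weights -- not the regulation's actual scheme.
WEIGHTS = {
    "train_traffic": 0.30,      # daily train movements, normalized to [0, 1]
    "road_traffic": 0.30,       # daily road vehicles, normalized to [0, 1]
    "pedestrian_flow": 0.20,
    "accident_history": 0.20,
}

def crossing_score(factors):
    """Weighted danger score in [0, 1] for one grade crossing."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

def classify(score):
    """Map a danger score to the A/B/C danger classes used in Mexico.

    Thresholds are illustrative only; class A = high danger (gates plus
    audible/visual signals), B = medium, C = low.
    """
    if score >= 0.66:
        return "A"
    if score >= 0.33:
        return "B"
    return "C"

# A busy urban crossing with a poor accident record (values invented):
crossing = {"train_traffic": 0.9, "road_traffic": 0.8,
            "pedestrian_flow": 0.5, "accident_history": 0.7}
klass = classify(crossing_score(crossing))
```

The authors' point is that unless the weights and thresholds are derived transparently from factors like these, crossings end up over- or under-equipped; the sketch only makes the structure of such a scheme concrete.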

Keywords: accidents, grade crossing, railroad, traffic safety

625 Genetic Advance versus Environmental Impact toward Sustainable Protein, Wet Gluten and Zeleny Sedimentation in Bread and Durum Wheat

Authors: Gordana Branković, Dejan Dodig, Vesna Pajić, Vesna Kandić, Desimir Knežević, Nenad Đurić

Abstract:

The wheat grain quality properties are influenced by genotype, environmental conditions, and genotype × environment interaction (GEI). The increasing demand for more nutritious wheat products will direct future breeding programmes. Therefore, the aim of this investigation was to determine: i) the variability of the protein content (PC), wet gluten content (WG), and Zeleny sedimentation volume (ZS); ii) the components of variance, heritability in a broad sense (hb2), and expected genetic advance as percent of mean (GAM) for PC, WG, and ZS; iii) the correlations between PC, WG, ZS, and the most important agronomic traits; in order to assess expected breeding success versus environmental impact for these quality traits. The plant material consisted of 30 genotypes of bread wheat (Triticum aestivum L. ssp. aestivum) and durum wheat (Triticum durum Desf.). The trials were sown at three test locations in Serbia: Rimski Šančevi, Zemun Polje, and Padinska Skela, during 2010-2011 and 2011-2012. The experiments were set up as a randomized complete block design with four replications. Each plot consisted of five rows of 1 m2 (5 × 0.2 m × 1 m). PC, WG, and ZS were determined by near-infrared spectrometry (NIRS) with the Infraneo analyser (Chopin Technologies, France). PC, WG, and ZS in bread wheat were in the ranges 13.4-16.4%, 22.8-30.3%, and 39.4-67.1 mL, respectively, and in durum wheat in the ranges 15.3-18.1%, 28.9-36.3%, and 37.4-48.3 mL, respectively. The dominant component of variance for PC, WG, and ZS in bread wheat was genotype, with genetic variance/GEI variance (VG/VG × E) ratios of 3.2, 2.9, and 1.0, respectively, whereas in durum wheat it was GEI, with VG/VG × E ratios of 0.70, 0.69, and 0.49, respectively. hb2 and GAM values for PC, WG, and ZS in bread wheat were 94.9% and 12.6%, 93.7% and 18.4%, and 86.2% and 28.1%, respectively, and in durum wheat, 80.7% and 7.6%, 79.7% and 10.2%, and 74% and 11.2%, respectively.
The most consistent statistically significant correlations across the six environments, for bread wheat, were between PC and spike length (-0.312 to -0.637); PC, WG, ZS and grain number per spike (-0.320 to -0.620; -0.369 to -0.567; -0.301 to -0.378, respectively); and PC and grain thickness (0.338 to 0.566). For durum wheat, they were between PC, WG, ZS and yield (-0.290 to -0.690; -0.433 to -0.753; -0.297 to -0.660, respectively); PC and plant height (-0.314 to -0.521); PC, WG and spike length (-0.298 to -0.597; -0.293 to -0.627, respectively); PC, WG and grain thickness (0.260 to 0.575; 0.269 to 0.498, respectively); and PC, WG and grain vitreousness (0.278 to 0.665; 0.357 to 0.690, respectively). Breeding success can be anticipated for ZS in bread wheat, due to coupled high values of hb2 and GAM suggesting the existence of additive genetic effects, and also for WG in bread wheat, due to a very high hb2 and a medium-high GAM. The small and medium negative correlations between PC, WG, ZS, and yield or its components indicate the difficulty of selecting simultaneously for high quality and high yield, unless the linkages underlying particular genetic arrangements can be broken by recombination.
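The hb2 and GAM figures quoted above follow the standard quantitative-genetics formulas hb2 = VG/VP and GAM = k·sqrt(VP)·hb2/mean·100. A minimal sketch, with invented variance components and assuming k = 2.06 (the selection differential for 5% selection intensity; the paper does not state which intensity it used):

```python
import math

def broad_sense_heritability(v_g: float, v_p: float) -> float:
    """h_b^2 = V_G / V_P: genotypic variance over phenotypic variance."""
    return v_g / v_p

def genetic_advance_percent_of_mean(v_g: float, v_p: float,
                                    trait_mean: float, k: float = 2.06) -> float:
    """GAM = k * sqrt(V_P) * h_b^2 / mean * 100.

    k = 2.06 corresponds to a 5% selection intensity (an assumption here).
    """
    h2 = broad_sense_heritability(v_g, v_p)
    return k * math.sqrt(v_p) * h2 / trait_mean * 100.0

# Invented components for a protein-content-like trait (mean 15%):
h2 = broad_sense_heritability(v_g=4.0, v_p=5.0)                 # 0.8
gam = genetic_advance_percent_of_mean(4.0, 5.0, trait_mean=15.0)
```

This pairing is exactly why the abstract reads hb2 and GAM together: a high hb2 with a low GAM (as for durum PC) still signals limited expected gain per selection cycle.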

Keywords: bread and durum wheat, genetic advance, protein and wet gluten content, Zeleny sedimentation volume

624 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement not to overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments: because these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions; the level individuals emit in order to appear in public without shame; the level of a fair share of individual emissions in the global budget; and finally the baseline of net zero emissions.
At each level of emissions, different subjects are assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame: individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions, and may not legitimately achieve these reductions by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to the NETs needed to achieve net zero emissions demands technology not affordable to individuals, there is also no full individual responsibility to achieve net zero emissions; this is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

623 The Implementation of a Nurse-Driven Palliative Care Trigger Tool

Authors: Sawyer Spurry

Abstract:

Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative triggers (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consults, given the presence of triggers, were collected via an electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period.
Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. Effective palliative care results in decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).

Keywords: palliative care, nursing, quality improvement, trigger tool

622 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychology Capital

Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon

Abstract:

Work-family conflict (WFC) has been an active research area for the past decades. WFC harms individuals and organizations and is ultimately expected to bring the cost of losses to the company in the long run. WFC research has mainly focused on effects on organizational effectiveness and job attitudes, through variables such as job satisfaction, organizational commitment, and turnover intention. This study differs from previous research in its choice of consequence variable: we selected the positive job attitude of work engagement as a consequence of WFC. The primary purpose of this research is to identify the negative effects of WFC, starting from the recognition that research on the direct influence of WFC on work engagement is lacking. Based on conservation of resources (COR) theory and the job demands-resources (JD-R) model, an empirical model examining the negative effects of WFC, with emotional exhaustion as the link between WFC and work engagement, was proposed and validated. The study also analyzed how much positive psychological capital may buffer the negative effects arising from WFC, and verified a mediated moderation model in which positive psychological capital moderates the indirect effect of WFC on work engagement through emotional exhaustion. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education services, and other sectors, of which 389 were used in the statistical analysis. The data were analyzed with SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0; hierarchical regression analysis and the bootstrapping method were used for hypothesis testing. Results showed that all hypotheses are supported. First, WFC showed a negative effect on work engagement.
Specifically, work interference with family (WIF) had a more negative effect than family interference with work (FIW). Second, emotional exhaustion was found to mediate the relationship between WFC and work engagement. Third, positive psychological capital was shown to moderate the relationship between WFC and emotional exhaustion. Fourth, in the integrated test of mediated moderation, positive psychological capital was demonstrated to buffer the relationships among WFC, emotional exhaustion, and work engagement. Across all hypotheses, WIF consistently showed more negative effects than FIW. Finally, we discuss the theoretical and practical implications for research and management of WFC, and propose limitations and future research directions.

Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict

621 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland

Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka

Abstract:

Meat and its products are considered food products, having the most unfavorable effect on the environment that requires rational management of these products and waste, originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit the losses and food wastage and the food waste in the whole meat sector. The link to basic production includes obtaining raw meat, i.e., animal breeding, management, and transport of animals to the slaughterhouse. Food is any substance or product, intended to be consumed by humans. It was determined (for the needs of the present studies) when the raw material is considered as a food. It is the moment when the animals are prepared to loading with the aim to be transported to a slaughterhouse and utilized for food purposes. The aim of the studies was to determine the reasons for loss generation in the basic production of the meat sector in Poland during the years 2017 – 2018. The studies on food losses and waste in the meat sector in basic production were carried out in two areas: red meat i.e., pork and beef and poultry meat. The studies of basic production were conducted in the period of March-May 2019 at the territory of the whole country on a representative trial of 278 farms, including 102 pork production, 55–beef production, and 121 poultry meat production. The surveys were carried out with the utilization of questionnaires by the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. Research results indicate that it is followed that any losses were not recorded during the preparation, loading, and transport of the animals to the slaughterhouse in 33% of the visited farms. In the farms where the losses were indicated, the crushing and suffocations, occurring during the production of pigs, beef cattle and poultry, were the main reasons for these losses. They constituted ca. 
40% of the reported reasons. Stress generated by loading and transport accounted for 16-17% of the loss reasons, depending on the season of the year. In poultry production, inappropriate conditions of loading and transportation additionally caused 10.7% of the losses in 2017 and 11.8% in 2018. Diseases were one of the reasons for losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize losses throughout the whole production chain. Appropriate measures can be introduced, connected mainly with suitable conditions and methods of animal loading and transport.

Keywords: food losses, food waste, livestock production, meat sector

Procedia PDF Downloads 106
620 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR, so as to reduce non-invested capital, but also to ensure stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
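The aggregation-and-minimization idea above can be sketched in a few lines. The correlation matrix, the per-asset stress losses, and the use of SciPy's SLSQP (standing in for the bundle/BFGS-SQP pair the authors combine) are all illustrative assumptions, not the paper's actual data or solvers:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Solvency II style aggregation: sub-module SCRs are combined
# through a fixed correlation matrix (placeholder values, not the regulatory matrix).
CORR = np.array([[1.0, 0.5, 0.25],
                 [0.5, 1.0, 0.25],
                 [0.25, 0.25, 1.0]])

# Assumed per-asset stress losses for three sub-modules (equity, spread, FX).
STRESS = np.array([[0.39, 0.10, 0.00],   # asset A
                   [0.22, 0.05, 0.25],   # asset B
                   [0.00, 0.15, 0.10]])  # asset C

def market_scr(w):
    """Aggregate sub-module SCRs: sqrt(s^T C s), with s the portfolio's stress losses."""
    s = STRESS.T @ w
    return float(np.sqrt(s @ CORR @ s))

# Minimize the SCR over long-only, fully invested portfolios.
n = STRESS.shape[0]
res = minimize(market_scr, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(res.x, market_scr(res.x))
```

Even this toy version shows the concentration effect the abstract reports: the solver piles weight onto the asset with the mildest equity stress, which is why the authors add constraints or a volatility co-objective.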

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 93
619 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today

Authors: Sangeet Anand

Abstract:

Body art and tattooing have been practiced as forms of self-expression for centuries, and this study analyzes the pertinence of tattoo culture in our everyday lives and in the ancient past. Individuals of different cultures represent ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes and range from something as simple as the makeup put on every day to something more permanent, such as a tattoo. In the long run, individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our respective cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us a plausible identity in an ever-changing world. The rise of hippie culture today, and the bodily decoration displayed by this fad, has made it seem as though body art is a relatively new visual language. Quite to the contrary, it is not. Through the exploration of cultural symbols, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter, art and symbolism, and distills it into a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, encompassing a subject and an interviewer as well as source material. The major findings of this study reveal a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and the surrounding environments.
Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. Following this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is thus an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more cross-cultural analysis.

Keywords: natural, symbolism, tattoo, terrestrial

Procedia PDF Downloads 80
618 Hepatoprotective Action of Emblica officinalis Linn. against Radiation and Lead Induced Changes in Swiss Albino Mice

Authors: R. K. Purohit

Abstract:

Ionizing radiation induces cellular damage through direct ionization of DNA and other cellular targets and indirectly via reactive oxygen species; the effects may also include epigenetic changes. The need of the hour is therefore to search for an ideal radioprotector that could minimize the deleterious and damaging effects caused by ionizing radiation. Radioprotectors are agents which reduce the radiation effects on cells when applied prior to radiation exposure. The aim of this study was to assess the efficacy of Emblica officinalis in reducing radiation- and lead-induced changes in the mouse liver. For the present experiment, healthy male Swiss albino mice (6-8 weeks old) were selected and maintained under standard conditions of temperature and light. Fruit extract of Emblica was fed orally at a dose of 0.01 ml/animal/day. The animals were divided into seven groups according to the treatment: lead acetate solution as drinking water (group II), exposure to 3.5 or 7.0 Gy gamma radiation (group III), or combined treatment of radiation and lead acetate (group IV). The animals of the experimental groups were administered Emblica extract for seven days prior to radiation or lead acetate treatment (groups V, VI, and VII, respectively). Animals from all the groups were sacrificed by cervical dislocation at post-treatment intervals of 1, 2, 4, 7, 14, and 28 days. After sacrifice, pieces of liver were taken out, and some of them were kept at -20°C for different biochemical parameters. The histopathological changes included cytoplasmic degranulation, vacuolation, hyperaemia, and pycnotic and crenated nuclei. The changes observed in the control groups were compared with the respective experimental groups. An increase in total proteins, glycogen, acid phosphatase and alkaline phosphatase activity, and RNA was observed up to day 14 in the non-drug-treated groups and up to day 7 in the Emblica-treated groups; thereafter the values declined up to day 28 without reaching normal.
The values of cholesterol and DNA showed a decreasing trend up to day 14 in the non-drug-treated groups and up to day 7 in the drug-treated groups; thereafter the values rose up to day 28. The biochemical changes, observed as increases or decreases in these values, were found to be dose dependent. After the combined treatment of radiation and lead acetate, synergistic effects were observed. The liver of Emblica-treated animals exhibited less severe damage than that of non-drug-treated animals at all corresponding intervals. An earlier and faster recovery was also noticed in Emblica-pretreated animals. Thus, it appears that Emblica is potent enough to check lead- and radiation-induced hepatic lesions in Swiss albino mice.

Keywords: radiation, lead, Emblica, mice, liver

Procedia PDF Downloads 294
617 Personality Composition in Senior Management Teams: The Importance of Homogeneity in Dynamic Managerial Capabilities

Authors: Shelley Harrington

Abstract:

As a result of increasingly dynamic business environments, the creation and fostering of dynamic capabilities [those capabilities that enable sustained competitive success despite dynamism, through the awareness and reconfiguration of internal and external competencies], supported by organisational learning [itself a dynamic capability], has gained prevalent momentum in the research arena. Presenting findings funded by the Economic and Social Research Council, this paper investigates the extent to which Senior Management Team (SMT) personality (at the trait and facet level) is associated with the creation of dynamic managerial capabilities at the team level and with effective organisational learning/knowledge sharing within the firm. In doing so, this research highlights the importance of micro-foundations in organisational psychology and specifically in dynamic capabilities, a field which to date has largely ignored the importance of psychology in understanding these important and necessary capabilities. Using a direct measure of personality (NEO PI-3) at the trait and facet level across 32 high-technology and finance firms in the UK, their CEOs [N=32] and their complete SMTs [N=212], a new measure of dynamic managerial capabilities at the team level was created and statistically validated for use within the work. A quantitative methodology was employed, with regression and gap analysis being used to show the empirical foundations of personality as a micro-foundation of dynamic capabilities. This study found that personality homogeneity within the SMT was required to strengthen the dynamic managerial capabilities of sensing, seizing, and transforming, which in turn was required to reflect strong organisational learning at middle-management level [N=533].
In particular, it was found that the greater the difference [t-score gap] between the personality profile of a Chief Executive Officer (CEO) and that of their complete, collective SMT, the lower the resulting self-reported level of dynamic managerial capabilities. For example, the larger the difference between a CEO's level of dutifulness, a facet contributing to the definition of conscientiousness, and their SMT's level of dutifulness, the lower the reported level of transforming, a capability fundamental to strategic change in a dynamic business environment. This directly questions recent trends, particularly in upper-echelons research, highlighting the need for heterogeneity within teams. In doing so, it successfully positions personality as a micro-foundation of dynamic capabilities, thus contributing to recent discussions within the strategic management field calling for the need to empirically explore dynamic capabilities at such a level.
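The gap analysis described above can be sketched roughly as follows. The data here are synthetic (32 hypothetical firms, one facet) and the negative slope linking the CEO-SMT gap to the "transforming" capability is assumed purely to illustrate the reported direction of the relationship:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 32 firms, one personality facet (e.g. dutifulness) as T-scores.
n_firms = 32
ceo = rng.normal(50, 10, n_firms)        # CEO facet T-scores
smt_mean = rng.normal(50, 10, n_firms)   # mean facet T-score of the rest of the SMT

gap = np.abs(ceo - smt_mean)             # CEO-vs-team homogeneity gap

# Simulate the reported direction: larger gap -> lower self-reported
# "transforming" capability, with noise so the relation is not perfect.
transforming = 5.0 - 0.05 * gap + rng.normal(0, 0.3, n_firms)

r = np.corrcoef(gap, transforming)[0, 1]
print(f"gap-capability correlation: {r:.2f}")
```

In the actual study the gap would be computed per facet from NEO PI-3 profiles and regressed against the validated team-level capability measure; this sketch only shows the shape of that computation.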

Keywords: dynamic managerial capabilities, senior management teams, personality, dynamism

Procedia PDF Downloads 238
616 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between both opposing schools can be found in the Bloomington School's concept of polycentricity. Growth enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it provides clarification of what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are, both for them and for the other side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth enthusiasts and growth skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of the literature already accept that polycentric forms of governance, such as markets, the rule of law, and federalism, are an important part of economic growth.
Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 150
615 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process; it directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles for which the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact to the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.
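The DEM side of such a coupling can be sketched for a single sediment grain: it feels quadratic fluid drag from the local CFD cell velocity, buoyant weight, and a spring-dashpot normal contact with the bed. All parameters (grain size, stiffness, drag coefficient, flow velocity) are illustrative, not the authors' model, and tangential friction is omitted for brevity:

```python
import numpy as np

# Illustrative physical parameters (SI units).
RHO_W, RHO_S = 1000.0, 2650.0         # water / sediment density, kg/m^3
D = 1e-3                               # grain diameter, m
M = RHO_S * np.pi * D**3 / 6.0         # grain mass
K_N, C_N = 1e4, 0.05                   # contact stiffness (N/m), damping (N s/m)
G = 9.81

def drag(u_fluid, v_particle, cd=0.44):
    """Quadratic drag on a sphere from the local (CFD-cell) fluid velocity."""
    rel = u_fluid - v_particle
    area = np.pi * D**2 / 4.0
    return 0.5 * RHO_W * cd * area * np.linalg.norm(rel) * rel

def step(pos, vel, u_fluid, dt=1e-5):
    """One semi-implicit Euler DEM step; the bed is the plane z = 0."""
    f = drag(u_fluid, vel) + np.array(
        [0.0, 0.0, -(RHO_S - RHO_W) * np.pi * D**3 / 6.0 * G])  # buoyant weight
    overlap = D / 2.0 - pos[2]          # penetration of the grain into the bed
    if overlap > 0.0:                   # spring-dashpot normal contact
        f[2] += K_N * overlap - C_N * vel[2]
    vel = vel + dt * f / M
    return pos + dt * vel, vel

# A grain resting on the bed, swept by a 0.5 m/s near-bed flow for 10 ms.
pos, vel = np.array([0.0, 0.0, D / 2.0]), np.zeros(3)
for _ in range(1000):
    pos, vel = step(pos, vel, u_fluid=np.array([0.5, 0.0, 0.0]))
print(pos, vel)
```

A full CFD-DEM solver does this for millions of grains, adds tangential contact and rolling resistance, and feeds the particle drag back into the RANS momentum equation; the sketch only shows the per-particle force balance.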

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 105
614 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues at extreme compression conditions have gained more and more attention in the last ten years because of the engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution under conditions of the high-energy-density regime are still lacking. In terms of macro hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES), and the Reynolds-averaged Navier-Stokes (RANS) equations, have obtained relatively acceptable consensus under the conditions of the low-energy-density regime. Under the conditions of the high-energy-density regime, however, they cannot be applied directly due to the occurrence of dissociation, ionization, dramatic changes of the equation of state, thermodynamic properties, etc., which may make the governing equations invalid in some coupled situations. In view of the micro/meso scale regime, however, methods based on Molecular Dynamics (MD) as well as Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov Instability (RMI) of a metal lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method based on predefined potential functions has limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process due to its 'ab initio' feature; 2) due to computational cost, the eFF-MD results are also influenced by simulation domain dimensions, boundary conditions, relaxation time choices, etc.
Series of tests have been conducted to determine the optimized parameters; and 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micromechanism of RMI under conditions of the high-energy-density regime.
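The "classical MD with predefined potentials" approach whose limits point 1) discusses can be illustrated at its smallest scale: two atoms under a Lennard-Jones pair potential integrated with velocity Verlet (reduced units, masses of 1). This is purely a sketch of the method class; eFF-MD replaces the fixed potential with explicit electron dynamics, which is exactly what lets it capture ionization:

```python
import numpy as np

EPS, SIG = 1.0, 1.0   # Lennard-Jones well depth and length scale (reduced units)

def lj_force(r_vec):
    """Lennard-Jones force on the particle at the tail of r_vec."""
    r2 = np.dot(r_vec, r_vec)
    sr6 = (SIG**2 / r2) ** 3
    return 24.0 * EPS * (2.0 * sr6**2 - sr6) / r2 * r_vec

def verlet(x, v, dt=1e-3, steps=2000):
    """Velocity-Verlet integration for two particles; x, v have shape (2, 3)."""
    f = np.array([lj_force(x[0] - x[1]), lj_force(x[1] - x[0])])
    for _ in range(steps):
        v += 0.5 * dt * f              # half kick (unit masses)
        x += dt * v                    # drift
        f = np.array([lj_force(x[0] - x[1]), lj_force(x[1] - x[0])])
        v += 0.5 * dt * f              # half kick with the new forces
    return x, v

# Head-on approach: the pair climbs the repulsive wall and rebounds.
x = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
v = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
x, v = verlet(x, v)
print(x[1, 0] - x[0, 0], v[0, 0], v[1, 0])
```

A shock-loading simulation like the one in the abstract scales this kernel to millions of atoms with a piston boundary driving the compression; the integrator and force loop are the same in spirit.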

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 206
613 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations between intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations.
The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Buttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
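The role of the relaxation time highlighted above can be sketched with a minimal toy model (not the authors' formalism): the conductance g relaxes toward a drive-dependent equilibrium g_eq(V) with a finite time constant tau, and when the drive period is comparable to tau the I-V curve opens into a hysteresis loop, the memristive fingerprint. All parameters are illustrative:

```python
import numpy as np

TAU = 1.0                      # relaxation time (s), assumed
G0, ALPHA = 1.0, 0.5           # base conductance and drive sensitivity, assumed

def g_eq(v):
    """Symmetric, field-activated equilibrium conductance."""
    return G0 * (1.0 + ALPHA * v**2)

def sweep(freq, cycles=3, n=4000):
    """Drive with a sinusoidal voltage and integrate dg/dt = (g_eq(V) - g)/tau."""
    t = np.linspace(0.0, cycles / freq, n)
    dt = t[1] - t[0]
    v = np.sin(2.0 * np.pi * freq * t)
    g = np.empty_like(t)
    g[0] = g_eq(0.0)
    for i in range(1, n):
        g[i] = g[i - 1] + dt * (g_eq(v[i - 1]) - g[i - 1]) / TAU
    return v, g * v                # current I = g V

v, i = sweep(freq=0.3)
# At this drive frequency the up- and down-sweep currents differ at the same
# voltage: the enclosed I-V loop area is the memory signature.
```

Driving much faster or much slower than 1/tau collapses the loop, which is the "adequate set of driving inputs" condition in the abstract: memory is only discernible when the drive probes the intrinsic relaxation scale.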

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 66
612 Exploring Coping Strategies among Caregivers of Children Who Have Survived Cancer

Authors: Noor Ismael, Somaya Malkawi, Sherin Al Awady, Taleb Ismael

Abstract:

Background/Significance: Cancer is a serious health condition that affects individuals' quality of life during and after the course of the condition. Children who have survived cancer and their caregivers may deal with residual physical, cognitive, or social disabilities. There is little research on caregivers' health and wellbeing after cancer, and to the authors' best knowledge, there is no specific research on how caregivers cope with everyday stressors after cancer. Therefore, this study aimed to explore the coping strategies that caregivers of children who have survived cancer utilize to overcome everyday stressors. Methods: This study utilized a descriptive survey design. The sample consisted of 103 caregivers who visited the health and wellness clinic at a national cancer center (additional demographics are presented in the results). The sample included caregivers of children who had been off cancer treatments for at least two years at the beginning of data collection. The institutional review board approved this study. Caregivers who agreed to participate completed the survey, which collected caregiver-reported demographic information and the Brief COPE, which measures caregivers' frequency of engaging in certain coping strategies. The Brief COPE consists of 14 coping subscales: self-distraction, active coping, denial, substance use, use of emotional support, use of instrumental support, behavioral disengagement, venting, positive reframing, planning, humor, acceptance, religion, and self-blame. Data analyses included calculating subscale scores for the fourteen coping strategies and analyzing the frequencies of demographics and coping strategies. Results: Of the 103 caregivers who participated in this study, 62% were mothers, 80% were married, 45% had finished high school, 50% did not work outside the house, and 60% had low family income.
Results showed that religious coping (66%) and acceptance (60%) were the most utilized coping strategies, followed by positive reframing (45%), active coping (44%), and planning (43%). The least utilized coping strategies in our sample were humor (5%), behavioral disengagement (8%), and substance use (10%). Conclusions: Caregivers of children who have survived cancer mostly utilize religious coping and acceptance in dealing with everyday stressors. Because these coping strategies do not directly solve stressors, as the active coping and planning strategies do, it is important to support caregivers in choosing and implementing effective coping strategies. Knowing from our results that some caregivers may utilize substance use as a coping strategy, which has negative health effects on caregivers and their children, there must be direct interventions that target these caregivers and their families.
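The subscale-scoring step behind percentages like the ones above can be sketched as follows. The Brief COPE rates 28 items from 1 to 4, two items per subscale; the item-subscale mapping here is abbreviated to three of the 14 subscales, the responses are hypothetical, and the "utilized" cutoff (score above the subscale midpoint) is an assumption for illustration, not necessarily the study's criterion:

```python
# Abbreviated, hypothetical item-to-subscale mapping (two items per subscale).
ITEM_MAP = {
    "religion": ("item22", "item27"),
    "acceptance": ("item20", "item24"),
    "substance_use": ("item4", "item11"),
}

# Hypothetical responses from four caregivers, each item rated 1-4.
respondents = [
    {"item22": 4, "item27": 4, "item20": 3, "item24": 4, "item4": 1, "item11": 1},
    {"item22": 4, "item27": 3, "item20": 4, "item24": 3, "item4": 1, "item11": 1},
    {"item22": 3, "item27": 4, "item20": 2, "item24": 3, "item4": 2, "item11": 1},
    {"item22": 1, "item27": 2, "item20": 2, "item24": 1, "item4": 1, "item11": 1},
]

def utilization(sub):
    """Share of respondents whose subscale score exceeds the midpoint (5 of 8)."""
    a, b = ITEM_MAP[sub]
    return sum(r[a] + r[b] > 5 for r in respondents) / len(respondents)

for sub in ITEM_MAP:
    print(sub, utilization(sub))
```

With the full 14-subscale mapping and 103 respondents, the same loop yields the utilization percentages reported in the results.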

Keywords: caregivers, cancer, stress, coping

Procedia PDF Downloads 142
611 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions

Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe

Abstract:

Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many different points of view, consumers wonder how the nutritional composition of fish reared under culture conditions compares with that of fish caught from the wild. The aim of this study was therefore to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. The wild fish were caught at sea, and the cultured fish were obtained from commercial cage-culture companies. Eight fish were obtained for each group, and the average weights of the samples were 245.8±13.5 g for cultured and 149.4±13.3 g for wild samples. All samples were stored in a freezer (-18 °C), and analyses were carried out in triplicate using homogenized boneless fish fillets. Proximate compositions (protein, ash, moisture, and lipid) were determined. The fatty acid composition was analyzed with a GC Clarus 500 with auto sampler (Perkin-Elmer, USA). Statistically significant differences were found between the groups in terms of the proximate compositions of cage-cultured and wild samples of sharpsnout seabream. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA), and PUFA amounts of cultured and wild sharpsnout seabream were also significantly different, and the ω3/ω6 ratio was higher in the cultured group. In particular, the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One of the reasons for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors, and whether the fish are farmed or wild.
The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of the cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils; in our study, the cultured sharpsnout seabream appears to be more nutritious in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey, under grant no. FBA-2016-5780.
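The ω3/ω6 index mentioned above is computed by summing the GC-reported ω-3 and ω-6 fatty-acid percentages and taking their ratio. The fatty-acid profiles below are hypothetical placeholders (chosen so that the cultured ratio comes out higher, as the study reports), not the measured values:

```python
# Hypothetical fatty-acid profiles as % of total fatty acids.
cultured = {"18:3n-3": 1.2, "20:5n-3": 6.8, "22:6n-3": 9.5,   # omega-3
            "18:2n-6": 8.1, "20:4n-6": 1.0}                   # omega-6
wild = {"18:3n-3": 0.4, "20:5n-3": 5.0, "22:6n-3": 11.0,
        "18:2n-6": 6.5, "20:4n-6": 3.0}

def omega_ratio(profile):
    """omega-3 / omega-6 ratio from a {fatty_acid: percent} profile."""
    n3 = sum(v for k, v in profile.items() if k.endswith("n-3"))
    n6 = sum(v for k, v in profile.items() if k.endswith("n-6"))
    return n3 / n6

print(omega_ratio(cultured), omega_ratio(wild))
```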

Keywords: Diplodus puntazzo, cage cultured, PUFA, fatty acid

Procedia PDF Downloads 232
610 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of DEMs may depend on several factors, such as the data source, the capture method, the type of processing used to derive them, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, and whose sampling is therefore superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks performed to obtain the point cloud used as the reference (PCref) against which the PCpro quality is evaluated. Each PCref consists of a 50 x 50 m patch obtained by registering scans from 4 different stations. The study area was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the scanner was mounted upside down so as to avoid the characteristic shadow circle that appears when the scanner is in the upright position.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
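The cloud-to-cloud comparison mentioned last can be sketched as a nearest-neighbor distance computation: for every PCpro point inside a patch, find the closest PCref point and summarize the distances. The clouds below are synthetic stand-ins (a sloping plane with an assumed vertical bias and noise), not the Navarra data:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Reference patch: dense TLS points on a gently sloping 50 x 50 m plane.
xy_ref = rng.uniform(0.0, 50.0, (200_000, 2))
pc_ref = np.column_stack([xy_ref, 0.02 * xy_ref[:, 0]])

# Product cloud: ~14 pts/m^2 over the patch, with a vertical bias plus noise.
xy_pro = rng.uniform(0.0, 50.0, (35_000, 2))
z_pro = 0.02 * xy_pro[:, 0] + 0.05 + rng.normal(0.0, 0.03, len(xy_pro))
pc_pro = np.column_stack([xy_pro, z_pro])

# For each PCpro point, distance to its nearest PCref point (C2C check).
tree = cKDTree(pc_ref)
d, _ = tree.query(pc_pro, k=1)

rmse = float(np.sqrt(np.mean(d**2)))
print(f"C2C RMSE: {rmse:.3f} m, 95th percentile: {np.quantile(d, 0.95):.3f} m")
```

Because the reference cloud is much denser than the product cloud, the nearest-neighbor distance is dominated by the vertical discrepancy, which is exactly the quantity the patch-based evaluation targets.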

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 73
609 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India

Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.

Abstract:

Sea water intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater, seacoast mining, etc. The geological and hydrogeological conditions, including groundwater heads and groundwater pumping patterns in coastal areas, also influence the magnitude of seawater intrusion. However, this problem can be remediated by taking preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in the salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, of the 25 groundwater samples collected from the study area, most contained high concentrations of Total Dissolved Solids (TDS) with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates salt water contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of sea water contamination. From the inverted resistivity sections, low resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines, Mitiyala village, Lotpur, and Lunsapur village at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques combined with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality, and the geophysical results for the study area, were used to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated.
AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can be further used as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for direct use of harvested rainwater.
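In AHP, the factor weights for the weighted overlay come from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks that the judgments are coherent. The sketch below illustrates that computation only; the matrix entries, the implied factor ordering, and the random index are textbook assumptions, not the weights used in this study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four suitability factors
# (e.g. drainage, slope, soil, land use) on Saaty's 1-9 scale.
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()              # normalized factor weights for the overlay

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)          # consistency index
RI = 0.90                             # Saaty's random index for n = 4
CR = CI / RI                          # consistency ratio; < 0.10 is conventionally acceptable
print(weights, CR)
```

Each raster layer is then reclassified to a common suitability scale and combined cell by cell with these weights, which is what the ArcGIS weighted overlay tool automates.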

Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion

Procedia PDF Downloads 146
608 Gender and Total Compensation, in an ‘Age’ of Disruption

Authors: Daniel J. Patricio Jiménez

Abstract:

The term 'total compensation' refers to salary, training, innovation, development, and, of course, motivation; total compensation is an open and flexible system which must facilitate personal and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a 'special' role in this. Spain, in its statutory and conventional development, has not given a sufficient response to new phenomena such as 'bonuses', 'stock options' or 'fringe benefits' (constructed dogmatically and by court decisions), or to the new digital reality, where cryptocurrency, new collaborative models, and new forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD.902/2020 on 14 April 2021, certain measures are necessary under the principle of salary transparency; the valuation of jobs, the pay register (Rd. 6/2019), and the pay audit are examples of this. Analyzing the methodologies, and in particular the determination and weighting of the factors, so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect the reality, and since the beginning of the pandemic, there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; it is that which, based on internal equity, reflects the competitiveness of the company in terms of human capital. Spain has lost, and has not recovered, the relative weight of its wages; this is having a direct impact on our competitiveness and, consequently, on the precariousness of employment and, undoubtedly, on the levels of extreme poverty.
Training is becoming, more than ever, a strategic factor. The new digital reality requires that each component of the system be connected; transversality is imposed on us, forcing us to redefine content and to respond to the new demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when the pandemic, silence, and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority, and that it must be treated and communicated in an inclusive way, because doing so increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.

Keywords: diversity, gender gap, human resources, sustainability

Procedia PDF Downloads 134
607 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions

Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka

Abstract:

In recent years, graphene-based materials have found more and more applications in biological science. As thin, tough, transparent, and chemically resistant materials, they appear to be very good candidates for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into their possible application in cancer therapy. Currently, most analyses concern their potential use in photothermal therapy and as drug carriers. Moreover, the direct anticancer properties of graphene-based materials are also being tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on a synthetic surface under static conditions; for this reason, cell culture in the macroscale does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, have been proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver, and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified with fibronectin (2D, monolayer) and poly(vinyl alcohol) (3D, spheroids) coatings. After spheroid creation (3D) and cell attachment (2D, monolayer), the selected concentrations of GO were introduced into the microsystems. Then, monolayer and spheroid viability/proliferation were checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, the morphological changes of the cells were determined using microscopic analysis. Additionally, on the last day of the culture, differential staining using Calcein AM and propidium iodide was performed. We were able to note that GO has an influence on the viability of all tested cell lines in both the monolayer and spheroid arrangements.
We showed that GO caused a greater decrease in viability/proliferation for spheroids than for monolayers (this was observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometries of the microchambers used for the 2D and 3D cell cultures; probably, GO was washed out of the flat microchambers used for 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
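The alamarBlue viability/proliferation readout reported above reduces to a blank-corrected fluorescence ratio against the untreated control. A minimal sketch of that calculation follows; the fluorescence values are invented for illustration and are not the study's measurements:

```python
def percent_viability(fluor_sample, fluor_control, fluor_blank):
    """alamarBlue readout: viability of treated cells as a percentage of the
    untreated control, after subtracting the cell-free blank signal."""
    return 100.0 * (fluor_sample - fluor_blank) / (fluor_control - fluor_blank)

# hypothetical plate-reader fluorescence values (arbitrary units)
blank = 500.0         # medium + alamarBlue reagent, no cells
control = 12500.0     # untreated culture
go_treated = 6500.0   # culture exposed to graphene oxide
print(percent_viability(go_treated, control, blank))  # -> 50.0
```

A viability of 50% at a given concentration would correspond to that dose being near the IC50 for the conditions tested.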

Keywords: cytotoxicity, graphene oxide, monolayer, spheroid

Procedia PDF Downloads 102
606 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment

Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska

Abstract:

The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages and across various work trades. In 2016, the project received financing from the National Center for Research and Development under Research Grant PBS3/A2/19/2015. The data, calculations, and analyses discussed in this paper were created as a result of the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project is the assessment of workers' visual concentration on sight zones, as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses is a tool that allows us to analyze in real time where eyesight is concentrated and, consequently, to build a map of a worker's eyesight concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers took part in the experiment, which included monitoring of typical parameters of the work regimen, workload, microclimate, noise, vibration, etc. The full equipment can also be useful in more advanced analyses. With this technology, we verified not only the main focus of workers' eyes during work on or next to scaffolding, but also which changes in the surrounding environment during their shift influenced their concentration. The study showed that workers' eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffolding were not better recognized by workers in their direct workplaces.
We noticed that the red curbs were the only well-recognized parts, and only on very few scaffolds; surprisingly, across numerous samples, we did not record any significant number of fixations on those curbs. Conclusion: We found the eye-tracking method useful for the construction of the SURAM model in the risk perception and worker's behavior sub-modules. We also found that a worker's initial stress and the visual conditions of the work seem to be more predictive for the assessment of a developing risky situation or an accident than other parameters relating to the work environment.
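The 45.75% figure above is a dwell-time proportion: the share of total fixation time falling inside the work-related areas of interest (AOIs). The sketch below assumes rectangular AOIs in scene-camera pixel coordinates and a simple (x, y, duration) fixation log; this is an illustrative simplification, not the SMI software's actual export format:

```python
def aoi_dwell_fraction(fixations, work_aois):
    """Fraction of total fixation time spent inside any work-related AOI.
    Each fixation is (x, y, duration_ms); each AOI is (xmin, ymin, xmax, ymax)."""
    def inside(x, y, a):
        return a[0] <= x <= a[2] and a[1] <= y <= a[3]
    total = sum(d for _, _, d in fixations)
    in_aoi = sum(d for x, y, d in fixations
                 if any(inside(x, y, a) for a in work_aois))
    return in_aoi / total if total else 0.0

# hypothetical fixation log and three work-related AOIs (pixel coordinates)
fixations = [(100, 200, 300), (400, 120, 200), (800, 600, 500)]
aois = [(0, 0, 300, 300), (350, 100, 500, 250), (900, 700, 1000, 800)]
print(aoi_dwell_fraction(fixations, aois))  # -> 0.5
```

In practice, scene-camera AOIs move with head motion, so real analyses map fixations onto reference images of the scaffold before aggregating dwell time.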

Keywords: accident assessment model, eye tracking, occupational safety, scaffolding

Procedia PDF Downloads 172
605 A Semiotic Analysis of the Changes in the Visual Sign System of International Advertisements in the Arab World

Authors: Nabil Mohammed Nasser Salem

Abstract:

International advertisements targeting the Arab world are usually modified to be compatible with the conservative culture of many Arab countries. The portrayal of female models in international advertisements in Arab magazines avoids direct sexual representation. Arab culture is guided by religious teachings and social restrictions that prohibit the display of many parts of the female body. Exposure of the shoulders, arms, armpits, cleavage, legs, thighs, etc., is usually avoided in international advertisements published in Arab magazines, and exposure of parts of the female body other than the face and hands may be considered offensive in many Arab countries. Although extensive research has been conducted on Arabic advertisements, to the best of our knowledge there are no publications in the literature that address the recent changes in the visual sign system of international advertisements in Arab magazines using semiotics as a research method. The present study aims to analyze the changes in the visual sign system of international advertisements published in Arab magazines that promote female fragrances. It analyzes the differences in the sexual representation of the same female models in selected advertisements from different periods. The magazines were randomly selected from the period between 2000 and 2019, based on their availability and popularity. The study focuses on the Dior Jadore advertisements because they reflect important changes in the appearance of the same female model between 2000 and 2019. The results of the study show important changes in the sexual representation of the same female body. The Dior Jadore advertisement of 2000 shows only the head of the female model; the model is modestly portrayed, reflecting clear cultural and religious restrictions on the sexual representation of the female body.
The results show that the same female model is portrayed differently in the Dior Jadore advertisements from the period 2005 to 2019: these versions show more of the parts of the female body that were covered in the older versions and carry stronger sexual representations. The study makes an important contribution by filling a gap in the literature, extending semiotic research to the recent visual changes in the sign system of international advertisements published in Arab magazines during an important period in the history of international advertising targeting the Arab world, as these changes reflect shifts in the sexual representation of female models.

Keywords: Arab magazine, female body, international advertisements, semiotics, sexual representation

Procedia PDF Downloads 56
604 Pathway Linking Early Use of Electronic Device and Psychosocial Wellbeing in Early Childhood

Authors: Rosa S. Wong, Keith T.S. Tung, Winnie W. Y. Tso, King-Wa Fu, Nirmala Rao, Patrick Ip

Abstract:

Electronic devices have become an essential part of our lives. Various reports have highlighted the alarming usage of electronic devices at early ages and its long-term developmental consequences. More sedentary screen time has been associated with increased adiposity, worse cognitive and motor development, and worse psychosocial health. Apart from the problems caused by children's own screen time, parents today often pay less attention to their children because of hand-held devices, and some anecdotes suggest that distracted parenting has a negative impact on the parent-child relationship. This study examined whether distracted parenting detrimentally affected parent-child activities, which may, in turn, impair children's psychosocial health. In 2018/19, we recruited a cohort of preschoolers from 32 local kindergartens in Tin Shui Wai and Sham Shui Po for a 5-year programme aiming to build stronger foundations for children from disadvantaged backgrounds through an integrated support model involving the medical, education, and social service sectors. A comprehensive set of questionnaires was used to survey parents on how often they were distracted while parenting and how often they engaged in learning and recreational activities with their children. Furthermore, they were asked to report the amount of their children's screen time and their children's psychosocial problems. Mediation analyses were performed to test the direct and indirect effects of electronic device-distracted parenting on children's psychosocial problems. This study recruited 873 children (448 females and 425 males, average age: 3.42±0.35). Longer screen time was associated with more psychosocial difficulties (adjusted B=0.37, 95% CI: 0.12 to 0.62, p=0.004). Children's screen time correlated positively with electronic device-distracted parenting (r=0.369, p < .01).
We also found that electronic device-distracted parenting was associated with more hyperactivity/inattention problems (adjusted B=0.66, p < 0.01), fewer prosocial behaviors (adjusted B=-0.74, p < 0.01), and more emotional symptoms (adjusted B=0.61, p < 0.001) in children. Further analyses showed that electronic device-distracted parenting exerted its influence both directly and indirectly through parent-child interactions, but to different extents depending on the outcome under investigation (38.8% for hyperactivity/inattention, 31.3% for prosocial behavior, and 15.6% for emotional symptoms). We found that parents' use of devices and children's own screen time both have negative effects on children's psychosocial health. It is important for parents to set "device-free times" each day so as to ensure enough relaxed downtime for connecting with children and responding to their needs.
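The direct/indirect split reported above follows the standard product-of-coefficients approach to mediation: regress the mediator on the predictor (path a), the outcome on both (paths b and c'), and take a*b as the indirect effect. The sketch below runs this on synthetic data with made-up coefficients; the variable names only mirror the study's constructs and are not its data:

```python
import numpy as np

def ols_slope(x, y):
    """Slope from a simple least-squares regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def mediation(x, m, y):
    """Product-of-coefficients mediation: total effect c, path a (x -> mediator),
    path b (mediator -> y controlling for x), and direct effect c'."""
    c = ols_slope(x, y)                                   # total effect
    a = ols_slope(x, m)                                   # x -> mediator
    X = np.column_stack([np.ones_like(x), x, m])
    cprime, b = np.linalg.lstsq(X, y, rcond=None)[0][1:]  # direct effect and path b
    indirect = a * b
    return c, cprime, indirect, indirect / c              # last value: proportion mediated

# synthetic data: distracted parenting (x) reduces parent-child activities (m),
# which in turn worsens the child outcome (y)
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
m = -0.5 * x + rng.normal(size=2000)
y = 0.4 * x - 0.6 * m + rng.normal(size=2000)
c, cprime, indirect, prop = mediation(x, m, y)
print(c, cprime, indirect, prop)
```

With these generating coefficients, the true indirect effect is (-0.5)(-0.6) = 0.3 out of a total effect of 0.7, so the recovered proportion mediated should be near 43%, analogous to the percentages reported per outcome above.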

Keywords: early childhood, electronic device, psychosocial wellbeing, parenting

Procedia PDF Downloads 133
603 Insights on the Social-Economic Implications of the Blue Economy Concept on Coastal Tourism in Tonga

Authors: Amelia Faotusia

Abstract:

The blue economy concept was coined by Pacific nations in recognition of the importance of sustainably managing their extensive marine territories. This is especially important for major ocean-based economic sectors of Pacific economies, such as coastal tourism. There is an absence of research, however, on the key ways in which the blue economy concept has emerged in discourse and public policy in Pacific countries, as well as how it articulates with coastal tourism. This research helps to fill such a gap with a specific focus on Tonga through the application of a post-positivist research approach to conduct a desktop study of relevant national documents and qualitative interviews with relevant government staff, civil society organizations, and tourism operators. The findings of the research reflect the importance of institutional integration and partnerships for a successful blue economy transition and are presented in the form of two case studies corresponding to two sub-sectors of Tonga’s coastal tourism sector: (i) the whale-watching and swimming industry, and (ii) beach resorts and restaurants. A thematic analysis applied to the interview data of both cases then enabled the identification of key areas and issues for socio-economic policy intervention and recommendations in support of blue economy transitions in Tonga’s coastal tourism sector. Examples of the relevant areas and issues that emerged included the importance of foreign direct investment, local market access, community-based special management areas, as well as the need to address the anthropogenic impacts of tropical cyclones, whale tourism, plastic litter on coastal assets, and ecosystems. 
Policy and practical interventions in support of addressing such issues include a proposed restructuring of the whale-watching and swimming licensing system; integration of climate resilience, adaptation, and capacity building as priorities of local blue economy interventions; as well as strengthening of the economic sustainability dimension of blue economy policies. Finally, this research also revealed the need for further specificity and research on the influence and value of local Tongan culture and traditional knowledge, particularly within existing customary marine tenure systems, on Tonga’s national and sectoral blue economy policies and transitions.

Keywords: blue economy, coastal tourism, integrated ocean management, ecosystem resilience

Procedia PDF Downloads 63
602 Study of Bis(Trifluoromethylsulfonyl)Imide Based Ionic Liquids by Gas Chromatography

Authors: F. Mutelet, L. Cesari

Abstract:

The development of safer and more environmentally friendly processes and products is needed to achieve sustainable production and consumption patterns. Ionic liquids, which are of great interest to the chemical and related industries because of their attractive properties as solvents, should be considered. Ionic liquids are composed of an asymmetric, bulky organic cation and a weakly coordinating organic or inorganic anion. The large number of possible combinations makes it possible to 'fine tune' the solvent properties for a specific purpose. The physical and chemical properties of ionic liquids are influenced not only by the nature of the cation and of the cation substituents but also by the polarity and size of the anion. These features give ionic liquids numerous applications in organic synthesis, separation processes, and electrochemistry. Separation processes require a good knowledge of the behavior of organic compounds in ionic liquids. Gas chromatography is a useful tool for estimating the interactions between organic compounds and ionic liquids; indeed, retention data may be used to determine the infinite dilution thermodynamic properties of volatile organic compounds in ionic liquids. Among these, the activity coefficient at infinite dilution is a direct measure of the solute-ionic liquid interaction. In this work, infinite dilution thermodynamic properties of volatile organic compounds in specific bis(trifluoromethylsulfonyl)imide based ionic liquids, measured by gas chromatography, are presented. It was found that apolar compounds are not miscible with this family of ionic liquids. As expected, the solubility of organic compounds is related to their polarity and hydrogen-bonding ability. From the activity coefficient data, the performance of these ionic liquids was evaluated for different separation processes (benzene/heptane, thiophene/heptane, and pyridine/heptane).
The results indicate that ionic liquids may be used for the extraction of polar compounds (aromatics, alcohols, pyridine, thiophene, tetrahydrofuran) from aliphatic media. For example, 1-benzylpyridinium bis(trifluoromethylsulfonyl)imide and 1-cyclohexylmethyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide are more efficient for the extraction of aromatics or pyridine from aliphatics than classical solvents. Ionic liquids with long alkyl chains present high capacity values, but their selectivity values are low. In conclusion, we have demonstrated that grafting a polar chain (for example, benzyl or cyclohexyl) onto the cation of bis(trifluoromethylsulfonyl)imide based ILs considerably increases their performance in separation processes.
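The selectivity and capacity values used to rank these ionic liquids follow directly from the activity coefficients at infinite dilution: selectivity is the ratio of the two solutes' coefficients, and capacity is the reciprocal of the extracted solute's coefficient. The gamma values below are invented for illustration and are not the measured data:

```python
def selectivity(gamma1_inf, gamma2_inf):
    """Infinite-dilution selectivity S12 = gamma1 / gamma2 for separating
    solute 2 (e.g. benzene) from solute 1 (e.g. heptane)."""
    return gamma1_inf / gamma2_inf

def capacity(gamma2_inf):
    """Infinite-dilution capacity k2 = 1 / gamma2 for the extracted solute."""
    return 1.0 / gamma2_inf

# hypothetical activity coefficients at infinite dilution in an IL
gamma_heptane = 25.0   # aliphatic: strongly non-ideal, poorly solvated
gamma_benzene = 0.8    # aromatic: well solvated by the IL

print(selectivity(gamma_heptane, gamma_benzene))  # -> 31.25
print(capacity(gamma_benzene))                    # -> 1.25
```

This captures the trade-off noted above: lengthening the cation's alkyl chain lowers both activity coefficients, raising capacity while eroding selectivity.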

Keywords: interaction organic solvent-ionic liquid, gas chromatography, solvation model, COSMO-RS

Procedia PDF Downloads 79