Search results for: direct insertion
659 Nanomaterials for Archaeological Stone Conservation: Re-Assembly of Archaeological Heavy Stones Using Epoxy Resin Modified with Clay Nanoparticles
Authors: Sayed Mansour, Mohammad Aldoasri, Nagib Elmarzugi, Nadia A. Al-Mouallimi
Abstract:
The large archaeological stones used in the construction of ancient Pharaonic tombs, temples, obelisks, and other sculptures are continually subject to physico-mechanical deterioration and destructive forces, which can leave them partially or totally broken. Reassembling this type of artifact represents a major challenge for conservators. Researchers have recently turned to new technologies to improve the properties of the traditional adhesive materials and techniques used to reassemble broken monumental stone. Epoxy resins are used extensively in stone conservation and in the reassembly of broken stone because of their outstanding mechanical properties, and introducing nanoparticles into polymeric adhesives at low loadings can substantially improve their mechanical performance in structural joints and large objects. The aim of this study is to evaluate the effectiveness of clay nanoparticles in enhancing the performance of the epoxy adhesives used in the reassembly of massive archaeological stone by adding appropriate amounts of those nanoparticles. The nanoparticle-reinforced epoxy nanocomposite was prepared by direct melt mixing at a nanoparticle content of 3% (w/v), moulded into rectangular samples, and used as an adhesive for experimental stone samples. Scanning electron microscopy (SEM) was employed to investigate the morphology of the prepared nanocomposites and the distribution of nanoparticles within them. The stability and efficiency of the prepared epoxy nanocomposites, and of stone-block assemblies bonded with the newly formulated adhesives, were tested by artificially aging the samples under different environmental conditions. The effect of incorporating clay nanoparticles on the mechanical properties of the epoxy adhesives was evaluated before and after aging by tensile, compressive, and elongation tests.
The morphological studies revealed that the mixing of epoxy and nanoparticles was successful: a relatively homogeneous morphology with good dispersion was obtained at low nanoparticle loadings in the epoxy matrix. The results show that the epoxy-clay nanocomposites exhibited superior tensile, compressive, and elongation strength. Moreover, adding nano-clay to the epoxy markedly improved the mechanical properties of the stone joints in all states compared with pure epoxy resin.
Keywords: epoxy resins, nanocomposites, clay nanoparticles, re-assembly, archaeological massive stones, mechanical properties
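The 3% (w/v) loading quoted above fixes the filler mass per resin volume. As a quick illustration of that convention (the batch volume below is assumed, not taken from the study), the required nanoparticle mass can be computed as:

```python
# Illustrative helper (not from the paper): mass of nanofiller needed for
# a given %(w/v) loading, the convention used for the 3% clay content.

def filler_mass_g(percent_w_v, resin_volume_ml):
    """% (w/v) means grams of filler per 100 mL of resin."""
    return percent_w_v / 100.0 * resin_volume_ml

# Example: a hypothetical 250 mL epoxy batch at the 3% (w/v) loading.
m = filler_mass_g(3.0, 250.0)
print(f"{m:.1f} g of clay nanoparticles")  # 7.5 g
```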
Procedia PDF Downloads 113
658 The Impact of Research Anxiety on Research Orientation and Interest in Research Courses in Social Work Students
Authors: Daniel Gredig, Annabelle Bartelsen-Raemy
Abstract:
Social work professionals should underpin their decisions with scientific knowledge and research findings. Hence, research is used as a framework for social work education, and research courses have become a taken-for-granted component of study programmes. However, it has been acknowledged that social work students hold negative beliefs and attitudes about research courses and frequently fear them. Against this background, the present study aimed to establish the relationship between students' fear of research courses, their research orientation, and their interest in research courses. We hypothesized that fear predicts interest in research courses, and that research orientation (the perceived importance of research, its attributed usefulness for social work practice, and its perceived unbiased nature) is a mediating variable. In 2014, 2015, and 2016, we invited students enrolled in a bachelor programme in social work in Switzerland to participate in the study during their introduction day, which takes place two weeks before the programme starts. For data collection, we used an anonymous, self-administered online questionnaire filled in on site. Data were analysed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 708 students enrolled in a social work bachelor programme: 501 female, 184 male, and 5 intersex, aged 19–56, with various entitlements to study, and registered in three different programme modes (full-time study; part-time study with field placements in blocks; part-time study with concurrent field placement). The analysis showed that interest in research courses was predicted by fear of research courses (β = -0.29) as well as by the perceived importance (β = 0.27), attributed usefulness (β = 0.15), and perceived unbiased nature of research (β = 0.08).
These three research-orientation variables were, in turn, predicted by fear of research courses (β = -0.10, β = -0.23, and β = -0.13, respectively). Moreover, interest was predicted by age (β = 0.13). Fear of research courses was predicted by age (β = -0.10), female gender (β = 0.28), and having completed a general baccalaureate (β = -0.09) (GFI = 0.997, AGFI = 0.988, SRMR = 0.016, CMIN/df = 0.946, adj. R² = 0.312). The findings evidence both a direct and a mediated impact of fear on interest in research courses among entering first-year students in a social work bachelor programme, highlighting one of the challenges that social work education within a research framework has to meet. Considerable efforts have been made to address students' research orientation; these findings show that, in addition, research anxiety in the form of fear of research courses should be considered and addressed by teachers when conceptualizing research courses.
Keywords: research anxiety, research courses, research interest, research orientation, social work students, teaching
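The reported standardized coefficients allow the mediated effect to be reconstructed by the usual path-tracing rule: multiply the coefficients along each indirect path and sum over the mediators. A minimal sketch (not the authors' code), using only the β values quoted above:

```python
# Sketch only: combining the reported standardized path coefficients by
# path tracing to estimate the mediated (indirect) effect of fear of
# research courses on interest in research courses.

direct = -0.29  # fear -> interest (reported direct path)

# (fear -> mediator, mediator -> interest), as reported in the abstract
paths = {
    "perceived importance":  (-0.10, 0.27),
    "attributed usefulness": (-0.23, 0.15),
    "unbiased nature":       (-0.13, 0.08),
}

indirect = {m: a * b for m, (a, b) in paths.items()}
total_indirect = sum(indirect.values())   # sum of the three products
total_effect = direct + total_indirect    # direct + mediated

for m, e in sorted(indirect.items()):
    print(f"indirect via {m}: {e:+.4f}")
print(f"total indirect effect: {total_indirect:+.4f}")
print(f"total effect of fear:  {total_effect:+.4f}")
```

Each indirect path is negative, so mediation reinforces rather than offsets the direct negative effect of fear on interest.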
Procedia PDF Downloads 188
657 Evidence-Based Practices in Education: A General Review of the Literature on Elementary Classroom Setting
Authors: Carolina S. Correia, Thalita V. Thomé, Andersen Boniolo, Dhayana I. Veiga
Abstract:
Evidence-based practice (EBP) in education is a set of principles and practices used to inform educational policy; it involves integrating professional expertise in education with the best available empirical evidence when making decisions about how to deliver instruction. The purpose of this presentation is to describe and characterize studies of EBP in education in the elementary classroom setting. The data presented here are part of an ongoing systematic review. Articles were searched and selected from four academic databases: ProQuest, SciELO, ScienceDirect, and CAPES. The search terms were 'evidence-based practices' or 'program effectiveness', combined with 'education', 'teaching', 'teaching practices', or 'teaching methods'. Articles were included according to the following criteria: the studies were explicitly described as evidence-based or discussed the most effective practices in education, and they discussed teaching practices in the classroom context at the elementary school level. Document excerpts were extracted and recorded in Excel, organized by reference, descriptors, abstract, purpose, setting, participants, type of teaching practice, study design, and main results. A total of 1,185 articles were retrieved: 569 from the ProQuest Research Library, 216 from CAPES, 251 from ScienceDirect, and 149 from SciELO. Of these, 178 were deemed potentially relevant, and after duplicates were removed, 140 articles remained for analysis: 47 theoretical studies and 93 empirical articles. The following research designs were identified: longitudinal intervention studies, cluster-randomized trials, meta-analyses, and pretest-posttest studies. Of the 140 articles, 103 addressed regular school teaching and 37 addressed special education teaching practices.
Several studies used the following teaching methods: active learning, content acquisition podcasts (CAP), precision teaching (PT), mediated reading practice, speech-therapist programs, and peer-assisted learning strategies (PALS). The countries of origin of the studies were the United States, the United Kingdom, Panama, Sweden, Scotland, South Korea, Argentina, Chile, New Zealand, and Brunei. As this is an ongoing project, representative findings will be discussed, providing further insight into the best teaching practices in the elementary classroom setting.
Keywords: best practices, children, evidence-based education, elementary school, teaching methods
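The screening arithmetic above can be kept honest with a few lines of bookkeeping; this sketch (not the authors' code) simply recomputes the totals reported in the abstract:

```python
# Minimal bookkeeping sketch for the systematic-review screening counts
# reported in the abstract: per-database retrieval, the potentially
# relevant subset, and the final corpus after deduplication.

retrieved = {
    "ProQuest Research Library": 569,
    "CAPES": 216,
    "ScienceDirect": 251,
    "SciELO": 149,
}
total_retrieved = sum(retrieved.values())   # should be 1,185

potentially_relevant = 178
final_included = 140                        # after duplicate removal
duplicates_removed = potentially_relevant - final_included

theoretical, empirical = 47, 93             # reported split of the 140

print(f"retrieved: {total_retrieved}")          # 1185
print(f"duplicates removed: {duplicates_removed}")  # 38
assert theoretical + empirical == final_included
```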
Procedia PDF Downloads 334
656 CFD Modeling of Stripper Ash Cooler of Circulating Fluidized Bed
Authors: Ravi Inder Singh
Abstract:
Owing to their high heat-transfer rates, high carbon-utilization efficiency, fuel flexibility, and other advantages, numerous circulating fluidized bed (CFB) boilers have been built in India over the last decade. Companies such as BHEL, ISGEC, Thermax, Cethar Limited, and Enmas GB Power Systems Projects Limited are manufacturing CFBC boilers and installing units throughout India. Because of their complexity, many problems exist in CFBC units, and only a few have been reported. Agglomeration, i.e., clinker formation in the riser, the loop-seal leg, and the stripper ash cooler, is one problem the industry faces, and proper documentation of it is rarely found in the literature. CFB boiler bottom ash contains a large amount of physical heat: when the boiler combusts low-calorie fuel, the ash content is normally more than 40%, and the physical heat loss is approximately 3% if the bottom ash is discharged without cooling. In addition, red-hot bottom ash is unsuitable for mechanized handling and transportation, as the upper temperature limit of ash-handling machinery is 200 °C. A bottom ash cooler (BAC) is therefore often used to treat the high-temperature bottom ash, reclaiming its heat and making the ash easier to handle and transport. As a key auxiliary device of CFB boilers, the BAC has a direct influence on the secure and economic operation of the boiler. Many kinds of BAC have been developed alongside the continuous improvement of large-scale CFB boilers, including the water-cooled ash-cooling screw, the rolling-cylinder ash cooler (RAC), and the fluidized bed ash cooler (FBAC). In this study, a prototype of a novel stripper ash cooler is examined. The circulating fluidized bed ash cooler (CFBAC) combines the major technical features of the spouted bed and the bubbling bed and can achieve selective discharge of the bottom ash. The novel stripper ash cooler studied here is a bubbling bed built as a transparent cold test rig.
Cold testing was chosen because high temperatures are difficult to create and maintain at the laboratory scale. The aim of the study is to determine the flow pattern inside the stripper ash cooler. The cold-rig prototype is similar to the stripper ash cooler used in industry and was built by scaling down some of its parameters. The performance of the fluidized bed ash cooler is studied using this cold experimental bench, with the air flow rate, the particle size of the solids, and the air distributor type considered the key operating parameters of a fluidized bed ash cooler (FBAC).
Keywords: CFD, Eulerian-Eulerian, Eulerian-Lagrangian model, parallel simulations
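To give a feel for why bottom-ash heat recovery matters, the sensible heat carried by the ash follows the simple balance Q = ṁ·cp·ΔT. A back-of-the-envelope sketch, with all numerical values assumed rather than taken from the study:

```python
# Illustrative back-of-the-envelope sketch (values assumed, not from the
# paper): the sensible heat carried by hot bottom ash, which a bottom
# ash cooler (BAC) aims to reclaim.

def ash_sensible_heat_kw(m_dot_kg_s, t_ash_c, t_out_c, cp_kj_kgk=0.84):
    """Heat flow (kW) released in cooling ash from t_ash_c to t_out_c.

    cp_kj_kgk ~0.84 kJ/(kg.K) is a typical coal-ash value (assumed).
    """
    return m_dot_kg_s * cp_kj_kgk * (t_ash_c - t_out_c)

# Example: 2 kg/s of bottom ash cooled from 900 C down to the 200 C
# handling limit mentioned in the abstract.
q = ash_sensible_heat_kw(2.0, 900.0, 200.0)
print(f"recoverable heat: {q:.0f} kW")  # 2 * 0.84 * 700 = 1176 kW
```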
Procedia PDF Downloads 510
655 Radio Frequency Heating of Iron-Filled Carbon Nanotubes for Cancer Treatment
Authors: L. Szymanski, S. Wiak, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska
Abstract:
More than one hundred different types of cancer exist, and therefore no single treatment is offered to people struggling with this disease. The treatment proposed to a patient depends on a variety of factors, such as the type of cancer diagnosed, the stage of the disease, its location in the body, and the patient's personal preferences. None of the commonly known methods of fighting cancer is recognised as a perfect cure; however, great advances in this field have been made over the last few decades. Once patients are diagnosed with cancer, they need medical care and professional treatment for months, and in many cases years. The principal modes of treatment offered by medical centres are radiotherapy, chemotherapy, and surgery; these can be applied separately or in combination, and the relative contribution of each is usually determined by a medical specialist in agreement with the patient. In addition to the conventional treatment options, complementary and alternative therapies are increasingly being integrated into mainstream care. One promising cancer modality is hyperthermia therapy, which is based on exposing body tissues to high temperatures. This treatment is still being investigated and is not widely available in hospitals and oncological centres. There are two kinds of hyperthermia therapy, with direct and indirect heating: the first is not commonly used because of its low efficiency and invasiveness, while the second is being investigated in depth, and a variety of methods have been developed, including ultrasound, infrared saunas, induction heating, and magnetic hyperthermia. The aim of this work was to examine the possibility of heating magnetic nanoparticles under an electromagnetic field for cancer treatment. For this purpose, multiwalled carbon nanotubes used as nanocarriers for iron particles were investigated for their heating properties.
The samples were subjected to an alternating electromagnetic field at frequencies between 110 and 619 kHz, and samples with various concentrations of carbon nanotubes were examined. The lowest frequency, 110 kHz, combined with the sample containing 10 wt% carbon nanotubes, produced the most effective heating. The paper also describes hyperthermia therapy as a means of enhancing currently available cancer treatments and outlines the most widely applied conventional modalities, such as radiation and chemotherapy. The characteristics of magnetic hyperthermia show how the most common obstacles of conventional modalities, such as invasiveness and lack of selectivity, can be overcome, which explains the growing interest in this treatment.
Keywords: hyperthermia, carbon nanotubes, colon cancer cells, ligands
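Heating experiments of this kind are usually summarized by the specific absorption rate (SAR), obtained from the initial slope of the temperature-time curve. A hedged sketch of that standard calculation, with all numerical values assumed rather than taken from the paper:

```python
# Hedged sketch (not from the paper): the specific absorption rate (SAR),
# the usual figure of merit for magnetic-hyperthermia heating, computed
# from the initial slope of a temperature-time curve. Values assumed.

def sar_w_per_g(c_sample_j_k, m_magnetic_g, dT_dt_k_s):
    """SAR = (sample heat capacity / magnetic-material mass) * dT/dt."""
    return c_sample_j_k / m_magnetic_g * dT_dt_k_s

# Example: 1 mL aqueous suspension (~4.18 J/K), 10 mg of Fe-filled CNTs,
# and an initial heating slope of 0.05 K/s under the applied AC field.
sar = sar_w_per_g(4.18, 0.010, 0.05)
print(f"SAR ~ {sar:.0f} W/g")  # 4.18 / 0.010 * 0.05 = ~21 W/g
```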
Procedia PDF Downloads 266
654 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India
Authors: Bhaskar Basu
Abstract:
Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies, comprising social networking sites (SNSs), offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and to analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were surveyed about their preferences among different social networking sites and their perceptions of, and attitudes towards, these sites. A questionnaire with three major sections was designed, validated, and distributed among a sample of students, the research method being descriptive in nature. The questions addressed to the students concerned time commitment, reasons for usage, the nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning; the survey was further supplemented with a focus group discussion to analyze the findings. The paper notes the resistance to adopting new technology among a section of business school faculty who are staunch supporters of classical "face-to-face" instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another, and universities could take advantage of these new ways in which students communicate. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes.
Using this medium opens new ways of academically oriented interaction, in which faculty could discover more about students' interests while students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for "blended learning" in business schools going forward.
Keywords: business school, India, learning, social media, social networking, university
Procedia PDF Downloads 264
653 Studying Together Affects Perceived Social Distance but Not Stereotypes: Nursing Students' Perception of Their Intergroup Relationship
Authors: Michal Alon-Tirosh, Dorit Hadar-Shoval
Abstract:
Social psychology theories, such as intergroup contact theory, contend that bringing members of different social groups into contact is a promising approach to improving intergroup relations. The heterogeneous nature of the nursing profession generates encounters between members of different social groups. The social relations that nursing students develop with their peers during their years of study, and the meanings they ascribe to these contacts, may affect the success of their nursing careers. Jewish-Arab relations in Israel are the product of an ongoing conflict and are characterized by negative stereotyped perceptions and mutual suspicion. Nursing education is often the first situation in which Jewish and Arab nursing students have direct, long-term contact with people from the other group, and these encounters present a significant challenge. The current study explores whether this contact between Jewish and Arab nursing students during their academic studies improves their perception of the intergroup relationship, examining the students' perceptions of the social relations between the two groups. We examine the attribution of stereotypes (positive and negative) and the willingness to engage in social interactions with individuals from the other group. The study hypothesis is that academic seniority (beginning vs. advanced students) is related to perceptions of the relations between the two groups, as manifested in the attribution of positive and negative stereotypes and the willingness to reduce the social distance between the two groups. Method: One hundred and eighty Jewish and Arab nursing students (111 Jewish and 69 Arab) completed questionnaires examining their perceptions of the social relations between the two groups.
The questionnaires were administered at two different points in the students' studies (to beginning students and to those at more advanced stages). Results: No differences were found between beginning and advanced students with respect to stereotypes. However, advanced students expressed greater willingness to reduce social distance than beginning students did. Conclusions: The findings indicate that bringing members of different social groups into contact may improve some aspects of intergroup relations. They suggest that different aspects of perceived social relations are influenced by different contexts: the students' specific context (joint studies and, in the future, joint work) and the broader general context of relations between the groups. Accordingly, it is recommended that programs aimed at improving relations between social groups focus on willingness to cooperate and on reducing social distance rather than on attempts to eliminate stereotypes.
Keywords: nursing education, perceived social relations, social distance, stereotypes
Procedia PDF Downloads 104
652 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contaminated sites such as landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing the need to protect them from water quality degradation. This paper presents a case study of the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems and was supplemented by computer modeling. The analysis required delineating a representative average daily groundwater discharge from the landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced in groundwater levels during the pumping test were filtered out using an advanced algorithm, and aquifer parameter values were then estimated using conventional curve-matching techniques. The hydraulic conductivity values estimated from individual observation wells closely agree with the tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts: MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the landfill to the Raritan River shoreline, and a surface water dispersion model based on a bathymetric and flow study of the river was used to simulate contaminant concentrations over space within the river.
The modeling results helped demonstrate that, because of natural attenuation, the landfill does not have a measurable impact on the river, a conclusion confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
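The abstract does not disclose the tidal-filtering algorithm used; one common baseline is a centered moving average whose window spans one tidal period, which suppresses the oscillation while preserving the pumping-induced trend. A minimal sketch on synthetic data (all parameters assumed, not from the study):

```python
# Hedged sketch (not the authors' algorithm): removing a dominant tidal
# oscillation from a water-level record with a centered moving average
# whose window spans ~one tidal period, so that a slow drawdown trend
# becomes visible. Synthetic data; every parameter here is assumed.

import math

DT_H = 0.25                     # sample interval, hours (assumed)
TIDE_H = 12.42                  # M2 tidal period, hours
WINDOW = round(TIDE_H / DT_H)   # samples per tidal cycle (~50)

t = [i * DT_H for i in range(400)]
# synthetic record: slow linear drawdown plus a tidal oscillation
level = [10.0 - 0.01 * ti + 0.3 * math.sin(2 * math.pi * ti / TIDE_H)
         for ti in t]

def moving_average(x, w):
    """Centered moving average; windows shrink at the record edges."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

trend = moving_average(level, WINDOW)

# Away from the edges, the filtered record tracks the drawdown trend
# closely, while the raw record still carries the tidal swing.
i = 200
print(abs(trend[i] - (10.0 - 0.01 * t[i])))  # small residual
print(abs(level[i] - (10.0 - 0.01 * t[i])))  # tidal deviation
```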
Procedia PDF Downloads 260
651 Dialectic Relationship between Urban Pattern Structural Methods and Construction Materials in Traditional Settlements
Authors: Sawsan Domi
Abstract:
Identifying the urban patterns of traditional settlements can be performed in various ways, one of which is the three-dimensional 'reading' of the urban web: the density of structures, the construction materials, and the colors used. The objectives of this study are to interpret and understand the relation between the formation of traditional settlements and the shape and structure of their structural methods. The study first considered the components of the historical neighborhood, which reflect social and economic effects on the urban planning pattern, and then analyzed the main components of the old neighborhood: the urban patterns and street systems, the traditional architectural elements, and the construction materials and their usage. The 'Hamasa' neighborhood in the 'Al Buraimi' Governorate is considered one of the most important archaeological sites in the Sultanate of Oman; the vivid features of this site are a living witness to the genius of the Omani people and their unique architecture. Hamasa is also considered the oldest human settlement in the Al Buraimi Governorate, and it used to be the gathering place for Arab and Omani tribes coming from other governorates of Oman. In this old settlement, local characteristics were created to meet the problems of the climate and the social and religious requirements of life, and traditional buildings were built of materials that were available in the surrounding environment and within hand's reach. The historical component contained four main separate neighborhoods. The morphological structure of Hamasa was characterized by a continuous and densely built-up pattern, featuring close interdependence between the spatial and functional patterns. The streets linked the plots, the marketplace, and the open areas; consequently, the traditional fabric had narrow streets with one- and two-storey houses.
The materials used in the buildings of historical Hamasa are the traditional ones: most are locally made and formed, were used by the local people, and were cleverly employed in the construction of local facilities. The Hamasa neighborhood is thus an example for analyzing urban patterns and geometrical features. Old Hamasa retains the patterns of its old settlements, with urban patterns defined by both form and structure, and its traditional architecture evolved as a direct result of the local climatic conditions. The study finds that the neighborhood is characterized by the construction materials used, the scope of the residential structures, and the street system, all of which together form the urban pattern of the settlement.
Keywords: urban pattern, construction materials, neighborhood, architectural elements, historical
Procedia PDF Downloads 97
650 The Proposal of a Shared Mobility City Index to Support Investment Decision Making for Carsharing
Authors: S. Murr, S. Phillips
Abstract:
One of the biggest challenges in entering a market with a carsharing or any other shared mobility (SM) service is sound investment decision-making. To support this process, the authors argue that a city index evaluating different criteria is necessary. The goal of such an index is to benchmark cities along a set of external measures addressing the two main challenges: financial viability and an understanding of each city's specific requirements. The authors consulted several shared mobility projects and industry experts to create such a Shared Mobility City Index (SMCI). The current proposal of the SMCI consists of 11 individual index measures: general data (demographics, geography, climate, and city culture), the shared mobility landscape (current SM providers, public transit options, commuting patterns, and driving culture), and political vision and goals (the mayor's vision, the sustainability plan, and bylaws or tenders supporting SM). To evaluate the suitability of the index, 16 cities on the East Coast of North America were selected and secondary research was conducted. The main sources of this study were census data, organisational records, independent press releases, and informational websites; only non-academic sources were used because the relevant data for the chosen cities is not published in academia. Applying the index measures to the selected cities produced three major findings. First, population density (number of inhabitants divided by city area) is not an indicator of the number of SM services offered: the city with the lowest density has five bike- and carsharing options. Second, there is a direct correlation between commuting patterns and the number of shared mobility services offered: New York, Toronto, and Washington, DC have the highest public transit ridership and the most shared mobility providers. Third, all surveyed cities except one support shared mobility in their sustainability plans.
The current version of the shared mobility index is proving to be a practical tool for evaluating cities and for understanding functional, political, social, and environmental considerations. More cities will have to be evaluated to refine the criteria further; however, the current version can already be used to assess cities' suitability for shared mobility services and will assist investors in deciding which city is a financially viable market.
Keywords: carsharing, transportation, urban planning, shared mobility city index
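The abstract does not specify how the 11 measures would be aggregated; a common choice for such benchmarks is a weighted sum of normalized scores. A minimal sketch under that assumption, with hypothetical cities, measures, and weights:

```python
# Illustrative sketch only: the scoring scheme, cities, measures, and
# weights below are assumed, not taken from the paper. It shows one
# plausible way a benchmark index like the proposed SMCI could rank
# cities: a weighted sum of 0-1 normalized measure scores.

cities = {  # hypothetical normalized scores for three of the 11 measures
    "City A": {"transit_ridership": 0.9, "sm_providers": 0.8, "policy_support": 1.0},
    "City B": {"transit_ridership": 0.4, "sm_providers": 0.3, "policy_support": 0.6},
}
weights = {"transit_ridership": 0.5, "sm_providers": 0.3, "policy_support": 0.2}

def smci_score(measures, weights):
    """Weighted sum of normalized measures (weights sum to 1)."""
    return sum(weights[k] * measures[k] for k in weights)

ranking = sorted(cities, key=lambda c: smci_score(cities[c], weights),
                 reverse=True)
print(ranking)  # City A ranks above City B
```

In practice the weights themselves would need to be justified, e.g. calibrated against the performance of existing carsharing markets.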
Procedia PDF Downloads 303
649 An Analysis of Employee Attitudes to Organisational Change Management Practices When Adopting New Technologies Within the Architectural, Engineering, and Construction Industry: A Case Study
Authors: Hannah O'Sullivan, Esther Quinn
Abstract:
Purpose: The Architectural, Engineering, and Construction (AEC) industry has historically struggled to adapt to change. Although the ability to innovate and successfully implement organizational change has been shown to be critical to achieving a sustainable competitive advantage in the industry, many AEC organizations continue to struggle when effecting organizational change. One prominent area of organizational change that presents many challenges in the industry is the adoption of new forms of technology, for example, Building Information Modelling (BIM). Certain organisational change management (OCM) practices have been proven effective in supporting organizations through change, but little research has been carried out on how employee attitudes to change diverge relative to employees' roles within the organization. The purpose of this research study is to examine how OCM practices influence employee attitudes to change when adopting new forms of technology, and to analyze the diverging perspectives within an organization on the importance of different OCM strategies. Methodology: Adopting an interview-based approach, a case study was carried out on a large, prominent Irish construction organization that is currently adopting a new technology platform for its projects. Qualitative methods were used to gain insight into differing perspectives on the utilization of various OCM practices and their efficacy when adopting a new form of technology on projects. The change agents implementing the organizational change gave insight into their intentions for the technology rollout strategy, while other employees were interviewed to understand how this rollout strategy was received and the challenges that were encountered. Findings: The results of this research study are currently being finalized; however, it is expected that employees in different roles will value different OCM practices above others.
Findings and conclusions will be determined within the coming weeks. Value: This study will contribute to the body of knowledge relating to the introduction of new technologies, including BIM, to AEC organizations. It will also contribute to the field of organizational change management, providing insight into the methods of introducing change that will be most effective for different employees based on their roles and levels of experience within the industry. The focus of this study steers away from traditional studies of the barriers to adopting BIM at the organizational level and centers on the direct effect on employees when a company changes the technology platform in use.
Keywords: architectural, engineering, and construction (AEC) industry, Building Information Modelling, case study, challenges, employee perspectives, organisational change management
Procedia PDF Downloads 69
648 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application
Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng
Abstract:
In the past two decades, concern about protecting the skin from ultraviolet (UV) radiation has attracted considerable attention because of the increased intensity of UV rays reaching the Earth's surface as a result of the breakdown of the ozone layer. UVA in particular has recently attracted attention since, in comparison to UVB, it penetrates deeply into the skin, raising significant health concerns. Sunscreen agents, either organic or inorganic, are one of the principal tools for protecting the skin from UV irradiation. Developing inorganic UV blockers is essential because, unlike organic filters, they provide efficient UV protection over a wide spectrum; furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, whereas organic blockers only absorb UV irradiation. Nowadays, most inorganic UV-blocking filters are based on TiO2 and ZnO. ZnO can provide protection in the UVA range and is attractive for sunscreen formulation for many reasons: its modest refractive index (2.0), its absorption of only a small fraction of solar radiation in the UV range at wavelengths of 385 nm or below, the high recombination probability of its photogenerated carriers (electrons and holes), its large direct band gap, its high exciton binding energy, its non-hazardous nature, and its strong chemical and physical stability, which together make it transparent in the visible region while retaining UV-protective activity. A significant issue for the use of ZnO in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to render the material non-photocatalytic by modifying it with other metals, and several efforts have been made to deactivate the photocatalytic activity of ZnO using inorganic surface modifiers.
The doping of ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. These materials exhibit greater performance than undoped ZnO towards increasing its photocatalytic activity in visible light. Therefore, metal doping can be an effective technique to modify the photocatalytic activity of ZnO. In the current work, by contrast, we successfully reduced the photocatalytic activity of ZnO through Na-doped ZnO fabricated via sol-gel and hydrothermal methods.
Keywords: photocatalytic, ROS, UVA, ZnO
Procedia PDF Downloads 143
647 The Effect of Ambient Temperature on the Performance of the Simple and Modified Cycle Gas Turbine Plants
Authors: Ogbe E. E., Ossia. C. V., Saturday. E. G., Ezekwe M. C.
Abstract:
The disparity in power output between a simple and a modified gas turbine plant becomes noticeable when the gas turbine operates under local environmental conditions that deviate from the standard ISO specifications. Extensive research and literature have demonstrated a well-known direct correlation between ambient temperature and the power output of a gas turbine plant. In this study, the Omotosho gas turbine plant was modified into three different configurations to improve its performance and reduce its fuel consumption and emission rate. Aspen HYSYS software was used to simulate both the simple (Omotosho) and the three modified gas turbine plants. The input parameters considered include ambient temperature, air mass flow rate, fuel mass flow rate, water mass flow rate, turbine inlet temperature, compressor efficiency, and turbine efficiency, while the output parameters considered are thermal efficiency, specific fuel consumption, heat rate, emission rate, compressor power, turbine power, and power output. The three modified gas turbine power plants incorporate an inlet air cooling system and a heat recovery steam generator. The variations between the modifications are due to additional components or enhancements alongside these: the first modification has an additional turbine, the second has an additional combustion chamber, and the third has both an additional turbine and an additional combustion chamber. This paper clearly shows the effects of ambient temperature on both the simple and the three modified gas turbine plants. For every 10 K increase in ambient temperature, there is an approximate power output reduction of 3977 kW, 4795 kW, 4681 kW, and 4793 kW for the simple gas turbine and the first, second, and third modifications, respectively. 
Also, for every 10 K increase in temperature, there is a thermal efficiency decrease of 1.22%, 1.45%, 1.43%, and 1.44% for the simple gas turbine and the first, second, and third modifications, respectively. Low ambient temperature will help save fuel, an important consideration given the present high price of fuel in Nigeria: for every 10 K increase in temperature, there is a specific fuel consumption increase of 0.0074 kg/kWh, 0.0051 kg/kWh, 0.0061 kg/kWh, and 0.0057 kg/kWh for the simple gas turbine and the first, second, and third modifications, respectively. These findings will aid in accurately evaluating local power generating plants, particularly in hotter regions, for installing gas turbine inlet air cooling (GTIAC) systems.
Keywords: Aspen HYSYS software, Brayton cycle, modified gas turbine, power plant, simple gas turbine, thermal efficiency
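As an illustration, the reported sensitivities can be folded into a simple linear model. Only the per-10 K slopes below come from the abstract; the ISO reference temperature is the standard 288.15 K, and the baseline power and efficiency inputs are hypothetical placeholders, since the abstract does not report ISO-condition baselines.

```python
# Linear sensitivity sketch of ambient-temperature effects on a gas turbine,
# using the per-10-K slopes reported in the abstract. Baseline (ISO) power and
# efficiency are hypothetical placeholders, not values from the study.
ISO_T = 288.15  # K, ISO ambient reference temperature

# configuration -> (power change, kW per +10 K; efficiency change, %-points per +10 K)
SLOPES = {
    "simple": (-3977, -1.22),
    "first_mod": (-4795, -1.45),
    "second_mod": (-4681, -1.43),
    "third_mod": (-4793, -1.44),
}

def estimate(config, ambient_t_k, base_power_kw, base_eff_pct):
    """Extrapolate power output (kW) and thermal efficiency (%) linearly
    from ISO conditions to the given ambient temperature."""
    d_power, d_eff = SLOPES[config]
    steps = (ambient_t_k - ISO_T) / 10.0
    return base_power_kw + d_power * steps, base_eff_pct + d_eff * steps
```

For example, at 308.15 K (20 K above ISO), a simple-cycle plant with a hypothetical 100 MW ISO rating would lose roughly 8 MW under this linear model.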
Procedia PDF Downloads 31
646 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico
Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano
Abstract:
In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks constantly interact with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, crossings are classified according to their dangerousness (high, medium, or low) as types A, B, and C, with different audible and visual signaling, gates, and horizontal and vertical signage recommended for each. This classification is based on a weighting, but regrettably, how the weight values were obtained is not explained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries use a broader classification, so that each crossing is addressed more precisely and equipment costs are adjusted accordingly. The lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied: three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signals with the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. This implementation is unaffordable for most municipal governments. 
Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An approach for improving the classification of grade crossings is then suggested. This approach must be based on design criteria; characteristics of adjacent roads or intersections that can influence traffic flow through the crossing; accidents involving motorized and non-motorized vehicles; land use and land management; type of area; and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
Keywords: accidents, grade crossing, railroad, traffic safety
Procedia PDF Downloads 108
645 Genetic Advance versus Environmental Impact toward Sustainable Protein, Wet Gluten and Zeleny Sedimentation in Bread and Durum Wheat
Authors: Gordana Branković, Dejan Dodig, Vesna Pajić, Vesna Kandić, Desimir Knežević, Nenad Đurić
Abstract:
Wheat grain quality properties are influenced by genotype, environmental conditions, and genotype × environment interaction (GEI). The increasing demand for more nutritious wheat products will direct future breeding programmes. Therefore, the aim of this investigation was to determine: i) variability of the protein content (PC), wet gluten content (WG), and Zeleny sedimentation volume (ZS); ii) components of variance, heritability in a broad sense (hb2), and expected genetic advance as percent of mean (GAM) for PC, WG, and ZS; iii) correlations between PC, WG, ZS, and the most important agronomic traits; in order to assess expected breeding success versus environmental impact for these quality traits. The plant material consisted of 30 genotypes of bread wheat (Triticum aestivum L. ssp. aestivum) and durum wheat (Triticum durum Desf.). The trials were sown at three test locations in Serbia: Rimski Šančevi, Zemun Polje, and Padinska Skela, during 2010-2011 and 2011-2012. The experiments were set up as a randomized complete block design with four replications. The plot consisted of five rows of 1 m2 (5 × 0.2 m × 1 m). PC, WG, and ZS were determined by near-infrared spectrometry (NIRS) with the Infraneo analyser (Chopin Technologies, France). PC, WG, and ZS in bread wheat were in the ranges 13.4-16.4%, 22.8-30.3%, and 39.4-67.1 mL, respectively, and in durum wheat, 15.3-18.1%, 28.9-36.3%, and 37.4-48.3 mL, respectively. The dominant component of variance for PC, WG, and ZS in bread wheat was genotype, with genetic variance/GEI variance (VG/VG × E) ratios of 3.2, 2.9, and 1.0, respectively; in durum wheat it was GEI, with VG/VG × E ratios of 0.70, 0.69, and 0.49, respectively. The hb2 and GAM values for PC, WG, and ZS in bread wheat were 94.9% and 12.6%, 93.7% and 18.4%, and 86.2% and 28.1%, respectively, and in durum wheat, 80.7% and 7.6%, 79.7% and 10.2%, and 74% and 11.2%, respectively. 
The most consistent statistically significant correlations across the six environments, for bread wheat, were between PC and spike length (-0.312 to -0.637); PC, WG, ZS and grain number per spike (-0.320 to -0.620; -0.369 to -0.567; -0.301 to -0.378, respectively); and PC and grain thickness (0.338 to 0.566). For durum wheat, they were between PC, WG, ZS and yield (-0.290 to -0.690; -0.433 to -0.753; -0.297 to -0.660, respectively); PC and plant height (-0.314 to -0.521); PC, WG and spike length (-0.298 to -0.597; -0.293 to -0.627, respectively); PC, WG and grain thickness (0.260 to 0.575; 0.269 to 0.498, respectively); and PC, WG and grain vitreousness (0.278 to 0.665; 0.357 to 0.690, respectively). Breeding success can be anticipated for ZS in bread wheat due to coupled high values of hb2 and GAM, suggesting the existence of additive genetic effects, and also for WG in bread wheat, due to very high hb2 and medium-high GAM. The small to medium negative correlations between PC, WG, ZS, and yield or yield components indicate the difficulty of selecting simultaneously for high quality and yield, since linkages between particular genetic arrangements would have to be broken by recombination.
Keywords: bread and durum wheat, genetic advance, protein and wet gluten content, Zeleny sedimentation volume
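For readers unfamiliar with the statistic, the expected genetic advance as percent of mean (GAM) reported above is conventionally computed from heritability, the phenotypic standard deviation, and the trait mean. The sketch below uses the standard textbook formula with a 5% selection intensity; it is an illustration under that assumption, not the study's own code, and the example inputs are hypothetical.

```python
def genetic_advance_pct(h2, sigma_p, mean, k=2.06):
    """Expected genetic advance as percent of mean (GAM), via the
    conventional formulas GA = k * h^2 * sigma_p and GAM = 100 * GA / mean,
    where k = 2.06 is the standardized selection differential at 5%
    selection intensity. Illustrative only; inputs are not from the study."""
    ga = k * h2 * sigma_p          # expected genetic advance in trait units
    return 100.0 * ga / mean       # expressed as percent of the trait mean
```

For instance, with h2 = 0.86 (close to the bread-wheat ZS estimate), a hypothetical phenotypic standard deviation of 5 mL, and a 50 mL mean, GAM comes out near 18%, of the same order as the 28.1% reported for ZS.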
Procedia PDF Downloads 253
644 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards
Authors: Hanna Schübel, Ivo Wallimann-Helmer
Abstract:
In order to reach the goal of the Paris Agreement not to overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. As with all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in order to meet net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated rather than targeted by NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. 
At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one’s fair share do not demand individual mitigation efforts. The same holds for individuals with regard to the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions; this is mainly a responsibility of societies as a whole.
Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility
Procedia PDF Downloads 118
643 The Implementation of a Nurse-Driven Palliative Care Trigger Tool
Authors: Sawyer Spurry
Abstract:
Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric requiring that 80% of patients who die with expected outcomes receive a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consults, given the presence of triggers, were collected via electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. 
Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care include decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).
Keywords: palliative care, nursing, quality improvement, trigger tool
Procedia PDF Downloads 194
642 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychology Capital
Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon
Abstract:
Work-Family Conflict (WFC) has been an active research area for the past decades. WFC harms individuals and organizations and is ultimately expected to bring the cost of losses to the company in the long run. Research on WFC has mainly focused on its effects on organizational effectiveness and job attitudes, through variables such as Job Satisfaction, Organizational Commitment, and Turnover Intention. This study differs from previous research in its choice of consequence variable: we selected the positive job attitude 'Work Engagement' as a consequence of WFC. The primary purpose of this research is to identify the negative effects of WFC, starting from the recognition that research on the direct influence of WFC on Work Engagement is lacking. Based on COR (Conservation of Resources theory) and JD-R (the Job Demands-Resources model), an empirical model examining the negative effects of WFC, with Emotional Exhaustion as the link between WFC and Work Engagement, was suggested and validated. It was also analyzed how much Positive Psychological Capital may buffer the negative effects arising from WFC within this relationship, and a mediated moderation model was verified in which Positive Psychological Capital controls the indirect effect of WFC on Work Engagement mediated by Emotional Exhaustion. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education services, and other sectors, of which 389 were used in the statistical analysis. The data were analyzed with SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0; hierarchical regression analysis and the bootstrapping method were used for hypothesis testing. Results showed that all hypotheses were supported. First, WFC showed a negative effect on Work Engagement. 
Specifically, across the verification of all hypotheses, WIF (work interference with family) showed more negative effects than FIW (family interference with work). Second, Emotional Exhaustion was found to mediate the relationship between WFC and Work Engagement. Third, Positive Psychological Capital was shown to moderate the relationship between WFC and Emotional Exhaustion. Fourth, in the integrated mediated moderation test, Positive Psychological Capital was demonstrated to buffer the relationships among WFC, Emotional Exhaustion, and Work Engagement. Finally, we discuss the theoretical and practical implications for research and management of WFC, and propose limitations and future research directions.
Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict
Procedia PDF Downloads 222
641 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland
Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka
Abstract:
Meat and meat products are considered the food products with the most unfavorable effect on the environment, which calls for rational management of these products and of the waste originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit food losses and waste across the whole meat sector. Basic production includes obtaining raw meat, i.e., animal breeding, husbandry, and transport of animals to the slaughterhouse. Food is any substance or product intended to be consumed by humans. For the needs of the present studies, it was determined when the raw material is considered food: the moment when the animals are prepared for loading with the aim of being transported to a slaughterhouse and used for food purposes. The aim of the studies was to determine the reasons for loss generation in basic production in the meat sector in Poland during the years 2017-2018. The studies on food losses and waste in basic production were carried out in two areas: red meat, i.e., pork and beef, and poultry meat. The studies of basic production were conducted in the period March-May 2019 across the whole country on a representative sample of 278 farms: 102 producing pork, 55 beef, and 121 poultry. The surveys were carried out using questionnaires by the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. The results indicate that no losses were recorded during the preparation, loading, and transport of animals to the slaughterhouse in 33% of the visited farms. In the farms where losses were indicated, crushing and suffocation occurring during the production of pigs, beef cattle, and poultry were the main causes of these losses. They constituted ca. 
40% of the reported causes. The stress generated by loading and transport caused 16-17% of the losses, depending on the season of the year. In the case of poultry production, in 2017 an additional 10.7% of losses were caused by inappropriate conditions of loading and transportation, and in 2018, 11.8%. Diseases were among the causes of losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize losses throughout the whole production chain. Appropriate measures can be introduced, connected mainly with proper conditions and methods of animal loading and transport.
Keywords: food losses, food waste, livestock production, meat sector
Procedia PDF Downloads 144
640 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. 
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, demonstrating the value of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
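The square-root correlation aggregation prescribed by the standard formula is what makes the market SCR non-linear in its sub-modules. A minimal sketch of that aggregation step follows; the correlation entries and sub-module values used in the example are placeholders for illustration, not the regulatory values (which are prescribed by the regulation and depend, for some pairs, on the interest-rate scenario).

```python
import math

def aggregate_scr(sub_scrs, corr):
    """Standard-formula-style aggregation of sub-module SCRs:
    SCR = sqrt(sum_ij corr[i][j] * s_i * s_j).
    `corr` must be a symmetric matrix ordered like `sub_scrs`."""
    total = 0.0
    for i, s_i in enumerate(sub_scrs):
        for j, s_j in enumerate(sub_scrs):
            total += corr[i][j] * s_i * s_j
    return math.sqrt(total)

# Two sub-modules (say, interest rate and equity) with a placeholder
# correlation of 0.5 -- illustrative numbers, not regulatory ones.
scr = aggregate_scr([30.0, 40.0], [[1.0, 0.5], [0.5, 1.0]])
```

With these inputs the aggregate SCR is about 60.8, below the 70 a simple sum would give: the correlation structure grants a diversification credit, and the outer square root is precisely what makes the criterion non-convex when the sub-modules are themselves non-smooth functions of the allocation.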
Procedia PDF Downloads 117
639 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today
Authors: Sangeet Anand
Abstract:
Body art and tattooing have long been practiced as forms of self-expression, and this study analyzes the pertinence of tattoo culture in our everyday lives and our ancient past. Individuals of different cultures represent ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes and can range from something as simple as the makeup you put on every day to something more permanent, such as a tattoo. In the long run, individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our respective cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us a plausible identity in an ever-changing world. We have lived through and seen a rise in hippie culture today, and the bodily decoration displayed by this fad has made it seem as though body art is a visual language that is relatively new. Quite to the contrary, it is not. Through cultural symbolic exploration, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter, art and symbolism, and culminates it in a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, encompassing a subject and interviewer as well as source material. The major findings of this study include a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and their surrounding environments. 
Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. With this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more cross-cultural analysis.
Keywords: natural, symbolism, tattoo, terrestrial
Procedia PDF Downloads 107
638 Hepatoprotective Action of Emblica officinalis Linn. against Radiation and Lead Induced Changes in Swiss Albino Mice
Authors: R. K. Purohit
Abstract:
Ionizing radiation induces cellular damage through direct ionization of DNA and other cellular targets and indirectly via reactive oxygen species, which may include effects from epigenetic changes. The need of the hour, therefore, is to search for an ideal radioprotector that could minimize the deleterious and damaging effects caused by ionizing radiation. Radioprotectors are agents that reduce radiation effects on cells when applied prior to exposure. The aim of this study was to assess the efficacy of Emblica officinalis in reducing radiation- and lead-induced changes in the mouse liver. For the present experiment, healthy male Swiss albino mice (6-8 weeks) were selected and maintained under standard conditions of temperature and light. Fruit extract of Emblica was fed orally at a dose of 0.01 ml/animal/day. The animals were divided into seven groups according to treatment: lead acetate solution as drinking water (group II), exposure to 3.5 or 7.0 Gy gamma radiation (group III), or combined treatment with radiation and lead acetate (group IV). The animals of the experimental groups were administered Emblica extract for seven days prior to radiation or lead acetate treatment (groups V, VI, and VII, respectively). Animals from all groups were sacrificed by cervical dislocation at post-treatment intervals of 1, 2, 4, 7, 14, and 28 days. After sacrifice, pieces of liver were taken out, and some were kept at -20°C for different biochemical parameters. The histopathological changes included cytoplasmic degranulation, vacuolation, hyperaemia, and pycnotic and crenated nuclei. The changes observed in the control groups were compared with the respective experimental groups. An increase in total proteins, glycogen, acid phosphatase and alkaline phosphatase activity, and RNA was observed up to day 14 in the non-drug-treated groups and day 7 in the Emblica-treated groups; thereafter, the values declined up to day 28 without reaching normal. 
The values of cholesterol and DNA showed a decreasing trend up to day 14 in the non-drug-treated groups and day 7 in the drug-treated groups; thereafter, the values rose up to day 28. The biochemical changes, observed as increases or decreases in the values, were dose dependent. After combined treatment with radiation and lead acetate, synergistic effects were observed. The liver of Emblica-treated animals exhibited less severe damage than that of non-drug-treated animals at all corresponding intervals. An earlier and faster recovery was also noticed in Emblica-pretreated animals. Thus, it appears that Emblica is potent enough to check lead- and radiation-induced hepatic lesions in Swiss albino mice.
Keywords: radiation, lead, Emblica, mice, liver
Procedia PDF Downloads 321
637 Personality Composition in Senior Management Teams: The Importance of Homogeneity in Dynamic Managerial Capabilities
Authors: Shelley Harrington
Abstract:
As a result of increasingly dynamic business environments, the creation and fostering of dynamic capabilities [those capabilities that enable sustained competitive success despite dynamism, through the awareness and reconfiguration of internal and external competencies], supported by organisational learning [itself a dynamic capability], has gained prevalent momentum in the research arena. Presenting findings funded by the Economic and Social Research Council, this paper investigates the extent to which Senior Management Team (SMT) personality (at the trait and facet level) is associated with the creation of dynamic managerial capabilities at the team level and with effective organisational learning/knowledge sharing within the firm. In doing so, this research highlights the importance of micro-foundations in organisational psychology and specifically in dynamic capabilities, a field which to date has largely ignored the importance of psychology in understanding these important and necessary capabilities. Using a direct measure of personality (NEO PI-3) at the trait and facet level across 32 high-technology and finance firms in the UK, their CEOs (N=32) and their complete SMTs [N=212], a new measure of dynamic managerial capabilities at the team level was created and statistically validated for use within the work. A quantitative methodology was employed, with regression and gap analysis used to establish the empirical grounds for positioning personality as a micro-foundation of dynamic capabilities. The results of this study found that personality homogeneity within the SMT was required to strengthen the dynamic managerial capabilities of sensing, seizing, and transforming, which in turn was required for strong organisational learning at middle-management level [N=533]. 
In particular, it was found that the greater the difference (t-score gap) between the personality profile of a Chief Executive Officer (CEO) and that of their complete, collective SMT, the lower the resulting self-reported level of dynamic managerial capabilities. For example, the larger the difference between a CEO's level of dutifulness (a facet contributing to the definition of conscientiousness) and their SMT's level of dutifulness, the lower the reported level of transforming, a capability fundamental to strategic change in a dynamic business environment. This in turn directly questions recent trends, particularly in upper echelons research, highlighting the need for heterogeneity within teams. In doing so, it successfully positions personality as a micro-foundation of dynamic capabilities, thus contributing to recent discussions within the strategic management field calling for the need to empirically explore dynamic capabilities at such a level.
Keywords: dynamic managerial capabilities, senior management teams, personality, dynamism
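The CEO-to-SMT gap analysis described in the abstract can be sketched in a few lines of code: for each facet, the gap is the absolute difference between the CEO's t-score and the SMT mean t-score. This is an illustrative toy, not the study's scoring procedure; the facet names and scores below are invented.

```python
# Hypothetical sketch of a CEO-to-SMT personality gap analysis.
# Facet names and t-scores are invented for illustration only.

def profile_gaps(ceo, smt_members):
    """Absolute t-score gap between the CEO and the SMT mean, per facet."""
    gaps = {}
    for facet in ceo:
        smt_mean = sum(m[facet] for m in smt_members) / len(smt_members)
        gaps[facet] = abs(ceo[facet] - smt_mean)
    return gaps

ceo = {"dutifulness": 62.0, "achievement_striving": 55.0}
smt = [
    {"dutifulness": 48.0, "achievement_striving": 57.0},
    {"dutifulness": 52.0, "achievement_striving": 53.0},
]
gaps = profile_gaps(ceo, smt)  # {'dutifulness': 12.0, 'achievement_striving': 0.0}
```

In the study's terms, a larger gap on a facet such as dutifulness would predict a lower self-reported level of the corresponding dynamic managerial capability.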
Procedia PDF Downloads 269
636 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity
Authors: Justus Enninga
Abstract:
Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of that literature already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth.
Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.
Keywords: degrowth, green political theory, polycentricity, institutional robustness
Procedia PDF Downloads 183
635 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model
Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge
Abstract:
Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process that directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact in the assessment of scour depth which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure.
Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model
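The coupling idea described in the abstract, in which each sediment particle feels gravity, fluid drag from the CFD field, and contact forces from neighbouring particles, can be sketched as a single explicit DEM time step. This is a minimal illustration under assumed linear drag and linear-spring contacts, not the solver used in the study; all parameter values are invented.

```python
import numpy as np

def dem_step(pos, vel, fluid_vel, dt=1e-4, m=1e-3, c_drag=0.05, k=1e4, radius=0.01):
    """One explicit DEM step: gravity + linear fluid drag + spring contacts."""
    force = np.zeros_like(pos)
    force[:, 1] -= m * 9.81                  # gravity (y is vertical)
    force += c_drag * (fluid_vel - vel)      # drag toward the local fluid velocity
    n = len(pos)
    for i in range(n):                       # pairwise linear-spring contacts
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = float(np.linalg.norm(d))
            overlap = 2 * radius - dist
            if overlap > 0:
                fn = k * overlap * d / dist  # repulsive normal force
                force[i] -= fn
                force[j] += fn
    vel = vel + dt * force / m
    pos = pos + dt * vel
    return pos, vel
```

In a full CFD-DEM coupling, `fluid_vel` would be interpolated from the RANS solution at each particle position, and the drag reaction on the particles would be fed back into the fluid momentum equation.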
Procedia PDF Downloads 131
634 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads
Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li
Abstract:
Material mixing processes and related dynamic issues at extreme compression conditions have gained more and more attention in the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, there is still a lack of models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution under conditions of the high energy density regime. In terms of macro hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES) and the Reynolds-averaged Navier-Stokes equations (RANS), have achieved relatively acceptable consensus under conditions of the low energy density regime. Under conditions of the high energy density regime, however, they cannot be applied directly due to the occurrence of dissociation, ionization, dramatic changes in the equation of state, thermodynamic properties, etc., which may make the governing equations invalid in some coupled situations. In view of the micro/meso scale regime, methods based on Molecular Dynamics (MD) as well as Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov Instability of a metal Lithium and gas Hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) The classical MD method based on predefined potential functions has limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process due to its 'ab initio' feature; 2) Due to computational cost, the eFF-MD results are also influenced by simulation domain dimensions, boundary conditions, relaxation time choices, etc.
A series of tests has been conducted to determine the optimized parameters. 3) Ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micromechanism of the RMI under conditions of the high energy density regime.
Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability
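As an illustration of what "classical MD based on predefined potential functions" means, the sketch below integrates a single Lennard-Jones pair with velocity Verlet. The fixed functional form of such a potential is exactly what prevents a classical model from capturing ionization, which is the eFF-MD method's advantage. Units are reduced and all values are purely illustrative.

```python
import numpy as np

def lj_force(r_vec, epsilon=1.0, sigma=1.0):
    """Lennard-Jones force on atom 1 from atom 2, in reduced units."""
    r = float(np.linalg.norm(r_vec))
    mag = 24.0 * epsilon * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def velocity_verlet(r, v, dt=1e-3, mass=1.0, steps=1000):
    """Integrate the relative coordinate of one pair with velocity Verlet."""
    f = lj_force(r)
    for _ in range(steps):
        r = r + v * dt + 0.5 * f / mass * dt ** 2
        f_new = lj_force(r)
        v = v + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return r, v
```

At the potential minimum, separation 2^(1/6) sigma, the force vanishes and the pair stays put; no parameter choice, however, lets this functional form describe electrons leaving an atom under shock compression.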
Procedia PDF Downloads 225
633 The Emergence of Memory at the Nanoscale
Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann
Abstract:
Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields in physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential of pointing out optimal driving configurations.
The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Buttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
Keywords: memories, memdevices, memristors, nonequilibrium states
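A minimal toy model in the spirit of the description above: the conductance g relaxes toward a drive-dependent steady state with a single relaxation time, so the current depends on the driving history. The functional form and all parameter values are invented for illustration; they are not the paper's figures of merit.

```python
def memristive_current(voltages, dt=0.01, tau=0.1, g0=1.0, alpha=0.5):
    """Toy memristive response: dg/dt = (g_inf(V) - g) / tau, I = g * V."""
    g = g0
    currents = []
    for v in voltages:
        g_inf = g0 * (1.0 + alpha * abs(v))  # steady-state conductance for drive v
        g += dt / tau * (g_inf - g)          # first-order relaxation (explicit Euler)
        currents.append(g * v)               # history-dependent current
    return currents
```

Driving this model with a periodic voltage produces the pinched hysteresis loop characteristic of memristive systems: the current at a given voltage differs between the rising and falling parts of the cycle whenever the drive period is comparable to tau.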
Procedia PDF Downloads 97
632 Exploring Coping Strategies among Caregivers of Children Who Have Survived Cancer
Authors: Noor Ismael, Somaya Malkawi, Sherin Al Awady, Taleb Ismael
Abstract:
Background/Significance: Cancer is a serious health condition that affects individuals' quality of life during and after the course of the condition. Children who have survived cancer and their caregivers may deal with residual physical, cognitive or social disabilities. There is little research on caregivers' health and wellbeing after cancer. To the authors' best knowledge, there is no specific research on how caregivers cope with everyday stressors after cancer. Therefore, this study aimed to explore the coping strategies that caregivers of children who have survived cancer utilize to overcome everyday stressors. Methods: This study utilized a descriptive survey design. The sample consisted of 103 caregivers who visited the health and wellness clinic at a national cancer center (additional demographics are presented in the results). The sample included caregivers of children who had been off cancer treatments for at least two years at the beginning of data collection. The institution's internal review board approved this study. Caregivers who agreed to participate completed the survey, which collected caregiver-reported demographic information and the Brief COPE, a measure of caregivers' frequency of engaging in certain coping strategies. The Brief COPE consists of 14 coping sub-scales: self-distraction, active coping, denial, substance use, use of emotional support, use of instrumental support, behavioral disengagement, venting, positive reframing, planning, humor, acceptance, religion, and self-blame. Data analyses included calculating sub-scale scores for the fourteen coping strategies and analysis of frequencies of demographics and coping strategies. Results: Of the 103 caregivers who participated in this study, 62% were mothers, 80% were married, 45% had finished high school, 50% did not work outside the house, and 60% had low family income.
Results showed that religious coping (66%) and acceptance (60%) were the most utilized coping strategies, followed by positive reframing (45%), active coping (44%) and planning (43%). The least utilized coping strategies in our sample were humor (5%), behavioral disengagement (8%), and substance use (10%). Conclusions: Caregivers of children who have survived cancer mostly utilize religious coping and acceptance in dealing with everyday stressors. Because these coping strategies do not directly solve stressors, as active coping and planning do, it is important to support caregivers in choosing and implementing effective coping strategies. Since our results show that some caregivers may utilize substance use as a coping strategy, which has negative health effects on caregivers and their children, there must be direct interventions that target these caregivers and their families.
Keywords: caregivers, cancer, stress, coping
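Sub-scale scoring of the kind described in the methods can be sketched as follows: each coping strategy is measured by two items, each rated 1-4, so sub-scale scores run from 2 to 8. The item-to-sub-scale mapping below is an illustrative placeholder, not the instrument's published scoring key.

```python
# Placeholder mapping for a Brief COPE-style instrument: two items per
# strategy. Item numbers are illustrative, not the published key.
SUBSCALE_ITEMS = {
    "self_distraction": (1, 19),
    "active_coping": (2, 7),
    "religion": (22, 27),
    # ... the remaining 11 sub-scales would be listed the same way
}

def score_subscales(responses):
    """responses maps item number -> rating (1-4); returns sub-scale sums."""
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALE_ITEMS.items()}

answers = {1: 3, 19: 4, 2: 2, 7: 2, 22: 4, 27: 4}
scores = score_subscales(answers)  # {'self_distraction': 7, 'active_coping': 4, 'religion': 8}
```

Frequencies like the 66% reported for religious coping would then come from counting how many caregivers score highest, or above a threshold, on each sub-scale.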
Procedia PDF Downloads 169
631 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions
Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe
Abstract:
Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many different points of view, consumers wonder how the nutritional composition of fish raised in culture conditions compares with that of fish caught from the wild. Therefore, the aim of this study was to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. The fish were caught from the wild or obtained from commercial cage-culture companies. Eight fish were obtained for each group; the average weights of the samples were 245.8±13.5 g for cultured and 149.4±13.3 g for wild samples. All samples were stored in a freezer (-18 °C), and analyses were carried out in triplicate using homogenized boneless fish fillets. Proximate compositions (protein, ash, moisture and lipid) were determined. The fatty acid composition was analyzed with a Clarus 500 GC with auto sampler (Perkin-Elmer, USA). Statistically significant differences were found between the cage-cultured and wild samples of sharpsnout seabream in terms of proximate composition. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA) and PUFA amounts of cultured and wild sharpsnout seabream were also significantly different, and the ω3/ω6 ratio was higher in the cultured group. In particular, the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One reason for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors and whether they are farmed or wild.
The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils. In our study, the cultured sharpsnout seabream appears to be more nutritious in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey under grant no FBA-2016-5780.
Keywords: Diplodus puntazzo, cage cultured, PUFA, fatty acid
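The ω3/ω6 index discussed above is simply the ratio of summed n-3 to summed n-6 fatty acid fractions in the profile. A sketch, using an invented fillet profile (percent of total fatty acids) rather than the study's measured values:

```python
# Common n-3 and n-6 fatty acids in shorthand notation. Profile values
# below are invented for illustration, not measured data from this study.
OMEGA3 = {"C18:3n-3", "C20:5n-3", "C22:6n-3"}   # ALA, EPA, DHA
OMEGA6 = {"C18:2n-6", "C20:4n-6"}               # LA, ARA

def n3_n6_ratio(profile):
    """profile maps fatty acid name -> percent of total fatty acids."""
    n3 = sum(v for k, v in profile.items() if k in OMEGA3)
    n6 = sum(v for k, v in profile.items() if k in OMEGA6)
    return n3 / n6

fillet = {"C18:2n-6": 8.0, "C20:4n-6": 2.0, "C20:5n-3": 6.0, "C22:6n-3": 14.0}
ratio = n3_n6_ratio(fillet)  # (6 + 14) / (8 + 2) = 2.0
```

A higher ratio, as reported here for the cultured group, indicates a larger share of the n-3 fatty acids valued in the human diet.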
Procedia PDF Downloads 266
630 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of DEMs may depend on several factors, such as the data source, the capture method, the processing used to derive them, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through point-based sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, so that the sampling is superficial rather than point-based. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks required to obtain the point cloud used as a reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch resulting from the registration of 4 different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the position of the scanner was inverted so that the characteristic shadow circle does not appear when the scanner is in the direct position.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positional accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The kind of DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
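The cloud-to-cloud comparison mentioned at the end can be sketched as a nearest-neighbour distance summary: for each PCpro point, find the closest PCref point and average the distances. This brute-force version is for illustration only; at 14 points per square meter over 50 x 50 m patches, a spatial index such as a k-d tree would be needed in practice.

```python
import math

def cloud_to_cloud(pc_pro, pc_ref):
    """Mean nearest-neighbour distance from each product point to the reference cloud."""
    nearest = [min(math.dist(p, q) for q in pc_ref) for p in pc_pro]
    return sum(nearest) / len(nearest)

# Toy clouds: two reference points on the ground, two product points
# 0.1 m and 0.2 m above them (coordinates invented, in meters).
pc_ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
pc_pro = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.2)]
mean_d = cloud_to_cloud(pc_pro, pc_ref)  # ~0.15 m
```

A DEM-to-DEM comparison would instead rasterize both clouds to a common grid and difference the elevation values cell by cell.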
Procedia PDF Downloads 100