Search results for: case report
846 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft, whose forerunners saw themselves manifested in their work and took pride in the creativity of their designs, into what is today a largely anonymous, mass-produced collective output. Although the industry is very conservative, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies employing artificial intelligence and rule-based generative design. This tardiness has cost it the potential to enhance its capability to produce sustainable, flexible, and mass-customizable ‘right first-time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. The paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of the alternative methods for its use has the potential to significantly reduce the estimated 80% of environmental impact that is determined at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product employing the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a revived pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.
Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 141
845 Social Mobility and Urbanization: Case Study of Well-Educated Urban Migrant's Life Experience in the Era of China's New Urbanization Project
Authors: Xu Heng
Abstract:
Since the financial crisis of 2008 and the resulting Great Recession, the number of China’s unemployed college graduates reached over 500 thousand in 2011. Given this severe graduate employment situation, there has been growing public concern about college graduates, especially those from less-privileged backgrounds, and their working and living conditions in the metropolises. Previous studies indicate that well-educated urban migrants from less-privileged backgrounds tend to obtain temporary occupations with lower income and lower social status. Those vulnerable young migrants are described as ‘Ant Tribe’ by some scholars. However, since the implementation of the new urbanization project, together with the relaxed Hukou system and the acceleration of socio-economic development in middle/small cities, some researchers have described well-educated urban migrants’ situation and prospects of upward social mobility in urban areas in an overly optimistic light. In order to shed more light on the underlying tensions encountered by China’s well-educated urban migrants in their pursuit of upward social mobility, this research focuses on 10 well-educated urban migrants’ life trajectories between their university-to-work transition and their current situation. All selected participants are young adults with rural backgrounds who have received higher education qualifications from first-tier universities of Wuhan City (capital of Hubei Province). Drawing on in-depth interviews with the 10 participants and inspired by Lahire’s theory of the plural actor, this study yields the following preliminary findings: 1) For those migrants who move to super-mega cities (e.g., Beijing, Shenzhen, Guangzhou) or stay in Wuhan after college graduation, their inadequate economic and social capital is the structural factor which negatively influences their living conditions and further shapes their plans for career development. The incompatibility between the sub-fields of urban life and the dispositions generated by their early socialization is the main cause of their marginalized position in the metropolises. 2) For those migrants who move back to middle/small cities located in their hometown regions, the inconsistency between the dispositions generated by college life and the organizational habitus of the workplace is the main cause of their sense of being a ‘fish out of water’, even though they have obtained stable occupations in local government or state-owned enterprises. On the whole, this research illuminates how underlying structural forces shape well-educated urban migrants’ life trajectories and hinder their upward social mobility under the context of the new urbanization project.
Keywords: life trajectory, social mobility, urbanization, well-educated urban migrant
Procedia PDF Downloads 215
844 Rational Approach to Analysis and Construction of Curved Composite Box Girders in Bridges
Authors: Dongming Feng, Fangyin Zhang, Liling Cao
Abstract:
Horizontally curved steel-concrete composite box girders are extensively used in highway bridges. They consist of a reinforced concrete deck on top of a prefabricated steel box-section beam, which exhibits high torsional rigidity to resist the torsional effects induced by the curved structural geometry. This type of structural system is often constructed in two stages. In the composite section, tension is taken mainly by the steel box and compression by the concrete deck. The steel girders are delivered in large pre-fabricated U-shaped sections that are designed for ease of construction. They are then erected on site and overlaid by a cast-in-place reinforced concrete deck. The functionality of the composite section is not achieved until the closed section is formed by fully cured concrete. Since this kind of composite section is built in two stages, the erection of the open steel box presents some challenges to contractors. When the reinforced concrete slab is cast in place, special care should be taken with the bracing that prevents the open U-shaped steel box from global and local buckling. In the case of multiple steel boxes, the design detailing should pay enough attention to the installation requirements of the bracings connecting adjacent steel boxes to prevent global buckling. The slope in the transverse direction and the grade in the longitudinal direction will result in some local deformation of the steel boxes that affects the connection of the bracings. During the design phase, it is common for engineers to model the curved composite box girder using one-dimensional beam elements. This is adequate to analyze the global behavior; however, it is unable to capture the local deformation which affects the installation of the field bracing connections. This local deformation may become a critical component in controlling the construction tolerance, and overlooking it will produce inadequate structural details that eventually cause misalignment in the field and erection failure. This paper briefly describes the construction issues encountered in real structures, investigates the differences between beam element modeling and shell/solid element modeling, and their impact on the different construction stages. The P-delta effect due to the slope and curvature of the composite box girder is analyzed, and the secondary deformation is compared to the first-order response and evaluated for its impact on the installation of lateral bracings. The paper discusses a rational approach to preparing construction documents, and recommendations are made on the communication between engineers, erectors, and fabricators to smooth out the construction process.
Keywords: buckling, curved composite box girder, stage construction, structural detailing
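For the P-delta discussion above, a back-of-the-envelope second-order moment amplification is sketched below; the member properties, axial force and first-order moment are hypothetical placeholder values, not data from the girders studied.

```python
import math

def pdelta_amplified_moment(M1: float, P: float, E: float, I: float, L_eff: float) -> float:
    """Approximate second-order moment M2 from the first-order moment M1
    using the classic amplification factor 1 / (1 - P / Pcr)."""
    Pcr = math.pi**2 * E * I / L_eff**2   # Euler buckling load of the segment
    return M1 / (1.0 - P / Pcr)

# Hypothetical girder segment between brace points (units: kN, m, kPa)
M1 = 850.0      # first-order bending moment, kN*m
P = 2200.0      # axial force arising from grade/curvature effects, kN
E = 200e6       # steel modulus of elasticity, kPa
I = 0.012       # second moment of area, m^4
L_eff = 12.0    # effective length between brace points, m

M2 = pdelta_amplified_moment(M1, P, E, I, L_eff)
print(f"Amplified moment: {M2:.1f} kN*m ({100 * (M2 / M1 - 1):.1f}% above first order)")
```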
Procedia PDF Downloads 122
843 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case
Authors: Henry Acurio, Alvaro Corral, Juan Fonseca
Abstract:
In Ecuador, subsidies on fossil fuels for the road transportation sector have long been part of the economy, mainly because of demagogy and populism from political leaders. It is clear that the government can no longer maintain the subsidies, given its trade balance and general state budget; subsidies are also a key barrier to implementing the use of cleaner technologies. During the last few months, however, the elimination of subsidies has been carried out gradually with the purpose of reaching international prices. It is expected that with this measure the population will opt for other means of transportation and that it will promote the use of electric vehicles, both private and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how the elimination of subsidies will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10 000 electric vehicles by 2025; 3) the introduction of 100 000 electric vehicles by 2030; 4) the introduction of 750 000 electric vehicles by 2040 (for all the scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, as it is suitable for determining the cost to the government of importing fossil fuel derivatives and the cost of the electricity needed to power the electric fleet that replaces them. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. It will change the energy matrix and provide energy security for the country, and it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, given its mitigation potential, which will improve inhabitants’ quality of life by improving air quality and therefore reducing respiratory diseases associated with exhaust emissions, consequently contributing to sustainability, the Sustainable Development Goals (SDGs), and the commitments established in the Paris Agreement at COP 21 in 2015. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by the central government, which need to be accompanied by a National Urban Mobility Policy (NUMP) and a greater vision of developing holistic, sustainable transport systems at the local government level.
Keywords: electro mobility, energy, policy, sustainable transportation
Procedia PDF Downloads 82
842 The Affordances and Challenges of Online Learning and Teaching for Secondary School Students
Authors: Hahido Samaras
Abstract:
In many cases, especially with the pandemic playing a major role in fast-tracking the growth of the digital industry, online learning has become a necessity or even a standard educational model nowadays, reliably overcoming barriers such as location, time and cost, and it is frequently combined with a face-to-face format (e.g., in blended learning). This being the case, it is evident that students in many parts of the world, as well as their parents, will increasingly need to become aware of the pros and cons of online versus traditional courses. This fast-growing mode of learning, accelerated during the years of the pandemic, presents an abundance of exciting options especially suited to the many secondary school students in remote places of the world where access to stimulating educational settings and varied learning alternatives is scarce, adding advantages such as flexibility, affordability, engagement, flow and personalization of the learning experience. However, online learning can also present several challenges, such as a lack of student motivation and of social interaction in natural settings, gaps in digital literacy, and technical issues, to name a few. Therefore, educational researchers will need to conduct further studies focusing on the benefits and weaknesses of online versus traditional learning, while instructional designers propose ways of enhancing student motivation and engagement in virtual environments. Similarly, teachers will be required to become more and more technology-capable, at the same time developing their knowledge about their students’ particular characteristics and needs so as to match them with the affordances the technology offers. And, of course, schools, education programs, and policymakers will have to invest in powerful tools and advanced courses for online instruction. By developing digital courses that incorporate intentional opportunities for community-building and interaction in the learning environment, as well as taking care to include built-in design principles and strategies that align learning outcomes with learning assignments, activities, and assessment practices, rewarding academic experiences can result for all students. This paper raises various issues regarding the effectiveness of online learning for students by reviewing a large number of research studies related to the usefulness and impact of online learning following the COVID-19-induced digital education shift. It also discusses what students, teachers, decision-makers, and parents have reported about this mode of learning to date. Best practices are proposed for parties involved in the development of online learning materials, particularly for secondary school students, as there is a need for educators and developers to be increasingly concerned about the impact of virtual learning environments on student learning and wellbeing.
Keywords: blended learning, online learning, secondary schools, virtual environments
Procedia PDF Downloads 100
841 Correlation between Body Mass Dynamics and Weaning in Eurasian Lynx (Lynx lynx L, 1758)
Authors: A. S. Fetisova, M. N. Erofeeva, G. S. Alekseeva, K. A. Volobueva, M. D. Kim, S. V. Naidenko
Abstract:
Weaning is characterized by the transition from milk to solid food. In some species this change in diet is fast, while in others it is gradual. The reasons why weaning starts are well understood: changes in milk composition and a decrease in maternal behavior push cubs to search for additional sources of nutrients. In nature, females have many opportunities to wean offspring in case of a lack of resources. In contrast, under controlled conditions the possibility of delayed weaning exists, and a delay of weaning can lead to overspending of maternal resources. In addition, the main causes of the end of weaning are not so obvious. Near the end of weaning, the behavior of offspring depends on many factors: the intensity of maternal behavior, the reduction of milk abundance, brood size, physiological status, and body mass. During the pre-weaning period the dynamics of body mass are strongly connected with milk intake. Based on that fact, could body mass be one of the signals for the end of milk feeding? It is known that some animals usually wean their offspring when the juveniles achieve a body mass in some proportion to the adult weight. We therefore put forward the hypothesis that a decrease in growth rate causes a delay of weaning in Eurasian lynxes (Lynx lynx). To explore the hypothesis, we compared the dynamics of body mass with the duration of milk suckling. Firstly, to obtain information about the duration of suckling, we visually observed 8 lynx broods from 30 to 120 days postpartum. During each 4-hour observation we registered the start and the end of suckling acts and then calculated the total duration of this behavior. To track the dynamics of body mass, kittens were weighed once a week. The duration of suckling varied from 3076.19 ± 1408.60 to 422.54 ± 285.38 seconds, while body mass gain changed from 247.35 ± 26.49 to 289.41 ± 122.35 grams. Results of the Kendall tau correlation test (N = 96; p < 0.05) showed a negative correlation (τ = -0.36) between the duration of suckling and the body mass of lynx kittens. In general, the duration of suckling increases in response to a decrease in body mass gain, with a slight delay. In early weaning, from 30 to 58 days, the duration of suckling decreases gradually, as does body mass gain. During the weaning period the negative correlation between suckling time and body mass becomes tighter. Although throughout weaning the consumption of solid food begins to prevail over milk intake, the correlation persists until the end of weaning (90-105 days) and after it. In that way, weaning in Eurasian lynxes is not a part of ontogenesis controlled only by maternal behavior. It seems to be a flexible process influenced by various factors, including changes in growth rates. It is necessary to continue investigations to determine the critical value of body mass which marks the safe moment to stop milk feeding. Understanding such details of ontogenesis is very important for organizing procedures aimed at the reproduction of mammals ex situ and the conservation of endangered species.
Keywords: body mass, lynx, milk feeding, weaning
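As a small illustration of the correlation analysis described above, the following sketch computes a Kendall tau between suckling duration and body mass gain; the arrays hold invented placeholder values, not the authors' measurements.

```python
from scipy.stats import kendalltau

# Placeholder observations (invented values): total suckling duration per
# 4-hour observation (seconds) and weekly body mass gain (grams)
suckling_s = [3050, 2800, 2400, 1900, 1500, 1100, 800, 600, 450]
mass_gain_g = [250, 255, 262, 270, 268, 275, 282, 286, 290]

tau, p_value = kendalltau(suckling_s, mass_gain_g)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")  # a negative tau mirrors the reported trend
```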
Procedia PDF Downloads 20
840 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa
Authors: Abubakar Dikko
Abstract:
The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in the macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden to the citizens of those economies. Namibia, Nigeria and South Africa are African nations battling with high unemployment rates; in 2013, the countries recorded unemployment rates of 16.9%, 23.9% and 24.9%, respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, whereas in Nigeria and Namibia the share is even lower. Unemployment in Africa has wide implications for households, has led to extensive poverty and inequality, and has created rampant criminality. Recently, South Africa saw xenophobic attacks carried out by citizens of the country as a result of unemployment: the high unemployment rate led citizens to chase away foreigners, claiming that they had taken away their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that a low level of capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and satisfactory levels of economic growth and development, capital accumulation needs to take place. The countries in the study were selected after critical research and investigation, based on the following criteria: African economies with high unemployment rates above 15% and with about 40% of their workforce unemployed, a level of unemployment regarded as critical in Africa by the International Labour Organization (ILO), and with a low level of capital accumulation. Adequate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment rates in their economies, which are a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in their further research and studies on how to tackle unemployment in developing and emerging economies.
Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics
Procedia PDF Downloads 263
839 Fostering Diversity, Equity, and Inclusion: Case of Higher Education Institutions in Kazakhstan
Authors: Gainiya Tazhina
Abstract:
Higher education systems in many countries have increased diversity and ensured equal rights and opportunities for inclusive students over the last decades. Issues of diversity-equity-inclusion (DEI) in Kazakhstani higher education began to be addressed in legislation in 2021-2023. The adoption of the Ministry of Education and Science's Road Map for universities' inclusivity indicated strategies for change. The paper traces how this government initiative is being implemented in universities across the country. Content analysis of legislative documents and media publications, surveys of students and staff, and interviews with leaders have demonstrated the inconsistency of these strategic decisions. Thus, the Road Map required that by 2023 conditions for promoting and ensuring inclusive education and barrier-free environments be created in 60%-100% of Kazakhstani universities, including spaces inside academic buildings and dormitories, within a short period of time (March 2023-August 2025). Educational programs and curricula have not been adapted to the needs of students with special education needs (SEN); teachers do not have the skills and methods to work with students with SEN, students from minority groups, and international students. 60% of universities have not created a barrier-free environment on campuses due to the high cost of elevators, tactile tiles and assistive devices. Only 1% of school graduates with disabilities enter universities, due to the unwillingness of universities to educate people with disabilities. At the same time, universities do not adapt their educational programs and services to the needs of inclusive students; their needs are not identified, and they study under the same conditions as regular students. Accordingly, teaching staff do not have the knowledge and skills to teach inclusive students, and university lecturers misunderstand or oversimplify the social phenomena of ‘inclusion’ and ‘diversity’. The situation is more acute with the creation of a barrier-free architectural environment on university campuses. Recent reports indicate that these reforms have not been implemented to date and have proven controversial in practice due to the inconsistency of national research on inclusion in higher education. Widely announced reforms have not produced the expected results, leading to distortions at the local level. Inconsistent policies and contradictory legislative acts, adopted without an assessment of needs, specific implementation criteria, trained specialists, or indicators for achieving the reforms, are doomed to failure and to the mistrust of society. Based on the results of this research, recommendations have been developed: (1) to overcome inconsistencies in legislation regarding DEI in higher education; (2) to encourage initiatives in universities' inclusive environments; (3) to develop projects that will promote public awareness of DEI.
Keywords: diversity-equity-inclusion, Kazakhstani universities, reforms, legislation, accessibility
Procedia PDF Downloads 13
838 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India
Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar
Abstract:
Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women. 5.4% of infections are from mother-to-child transmission (MTCT), and 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. Obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes and intrahepatic cholestasis of pregnancy. Neonatal outcomes analysed were birth weight, Apgar score, NICU admission and perinatal transmission. Out of the 212 HIV-positive women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 women received duovir and 80 women received triple drug therapy, depending upon the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years, and 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section and 18 (8.5%) delivered vaginally. 178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breast feeding. When compared to the low-risk HIV-negative women (n = 238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.60 ± 499 g) when compared to HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Out of these, 7 neonates were found to have HIV-positive status. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth
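The odds ratios quoted above follow the standard 2x2 contingency-table definition; the sketch below shows the arithmetic with a hypothetical split of preterm births in the control group (only the 20-of-212 preterm count among HIV-positive women comes from the abstract; the control-group split is assumed for illustration).

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio for a 2x2 table: (a/b) / (c/d) = a*d / (b*c)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Preterm birth example: HIV-positive group split is from the abstract (20 of 212),
# HIV-negative split (18 of 238) is a hypothetical value chosen for illustration
a, b = 20, 192    # HIV-positive: preterm, not preterm
c, d = 18, 220    # HIV-negative: preterm, not preterm (assumed)
print(f"OR = {odds_ratio(a, b, c, d):.2f}")
```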
Procedia PDF Downloads 261
837 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
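A minimal sketch of how AHP weights and a weighted overlay of reclassified factor rasters might be combined for such a hazard map; the pairwise comparison matrix, factor selection and raster values are assumptions for illustration, not those used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four factors
# (rainfall, slope, drainage density, land use), on Saaty's 1-9 scale
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 1.0],
    [1/4, 1/2, 1.0, 1.0],
])

# AHP weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print("factor weights:", np.round(weights, 3))

# Consistency ratio check (random index RI = 0.9 for n = 4)
lam_max = np.max(np.real(eigvals))
CI = (lam_max - len(A)) / (len(A) - 1)
print("consistency ratio:", round(CI / 0.9, 3))

# Weighted overlay on tiny hypothetical rasters reclassified to 1-4 hazard scores
rasters = np.random.randint(1, 5, size=(4, 5, 5))   # 4 factors, 5x5 cells
hazard = np.tensordot(weights, rasters, axes=1)      # weighted sum per cell
print(hazard.round(2))
```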
Procedia PDF Downloads 37
836 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project
Authors: Juliana C. Salvadori, Jose Carlos Felix
Abstract:
This paper aims at discussing the role literary translation has played in the Brazilian Concrete Poetry Movement – an aesthetic, critical and pedagogical project which conceived translation as poiesis, i.e., as both creative and critical work in which the potency (dynamic) of the literary work is unfolded in the interpretive and critical act (energeia) that the translating practice demands. We argue that translation, for the concrete poets, is conceived within the framework provided by the reinterpretation – or deglutition – of Oswald de Andrade’s anthropophagy – a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we propose to approach and analyze two of Augusto de Campos’s long-term translation projects: the translation of Emily Dickinson’s and E. E. Cummings’s works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of poetry translated from English-speaking poets, in which he has explored creative translation processes – transcreation, as the concrete poets have named it. Campos’s translation project regarding E. E. Cummings’s poetry spans forty years: it begins in 1956 with 10 poems and unfolds in 4 works – 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson’s poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems. Both projects feature bilingual editions: contrary to common sense, Campos’s translations aim at being read as such; the target readers, to fully enjoy the experience, must be proficient readers of English and also acquainted with the poets in translation – Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has made in order to privilege aesthetic information (verse lines, word games, etc.). For readers not proficient in English, his translations play a pedagogical role of educating and preparing them to read both the translated poets’ works and concrete poetry works – the detailed essays and prefaces in which the translator emphasizes the selection of works translated and the strategies adopted enlighten his project as translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. In this work, we analyze Campos’s contribution to the domestic canons of both poets in the Brazilian literary system.
Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson
Procedia PDF Downloads 295
835 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium
Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas
Abstract:
Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. Soils and water bodies are frequently contaminated by mixtures of these xenobiotics. For this reason, in this work, the operational stability of an aerobic biofilm reactor to changes in the composition of the supplied medium was studied. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron and 2,4-D, and whose members harbor the genes encoding the main catabolic enzymes atzABCD, tfdACD and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg•L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, 92-96% for TOC and 61-83% for dehalogenation were reached. In the microbial community, the genes encoding the catabolic enzymes of the different herbicides, tfdACD, puhB and, occasionally, atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation. Volumetric loading rates of the 2,4-D and diuron mixture were continuously supplied to the reactor (1.9-21.5 mg herbicides •L-1 •h-1). Along the bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD and 90-100% for TOC, with dehalogenation values of 63-100%. It was also observed that the genes encoding the enzymes involved in the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply. Different volumetric loads of this mixture were continuously fed to the reactor (2.9 to 12.6 mg herbicides •L-1 •h-1). During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD and 66-89% for TOC, and dehalogenation values of 73-94%, were observed. In this last case, the genes tfdACD, puhB and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric hydrolase enzyme, could not be detected, though it was determined that there was partial degradation of cyanuric acid. In general, the community in the biofilm reactor showed some catabolic stability, adapting to changes in the loading rates and composition of the mixture of herbicides and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the response time needed to recover degradation of the herbicides.
Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides
Procedia PDF Downloads 435
834 The Problems of Women over 65 with Incontinence Diagnosis: A Case Study in Turkey
Authors: Birsel Canan Demirbag, Kıymet Yesilcicek Calik, Hacer Kobya Bulut
Abstract:
Objective: This study was conducted to evaluate the problems of women over 65 with an incontinence diagnosis. Methods: This descriptive study was conducted with women over 65 with an incontinence diagnosis in four Family Health Centers in a city in the Eastern Black Sea region between November 1 and December 20, 2015. A total of 203, 107, 178 and 180 women over 65 were registered in these centers; 262 had received an incontinence diagnosis at least once and had an ongoing complaint, and 177 women volunteered for the study. During home visits, using a face-to-face survey methodology, participants were given a socio-demographic characteristics survey, the Sandvik severity scale, the Incontinence Quality of Life Scale, the Urogenital Distress Inventory and a questionnaire on the challenges experienced due to incontinence developed by the researchers. Data were analyzed with the SPSS program using percentages, numbers, Chi-square, Mann-Whitney U and t tests, with a 95% confidence interval and a significance level of p < 0.05. Findings: The mean age was 67 ± 1.4 years, mean parity 2.05 ± 0.04 and mean menopause age 44.5 ± 2.12 years; 66.3% were primary school graduates, 45.7% had a deceased spouse, 44.4% lived in a large family, 67.2% had their own room, 77.8% had an income, and 89.2% could meet their own self-care. 73.2% had a diagnosis of mixed incontinence, 87.5% had suffered for 6-20 years, 78.2% took diuretics, antidepressants or heart medicines, 20.5% had both urinary and fecal incontinence, 80.5% had received bladder training at least once, 90.1% did not have bladder diary calendar/control training programs, 31.1% had undergone hysterectomy for prolapse, 97.1% had been treated for lower urinary tract infection at least once, 66.3% had seen a doctor to get drugs in the last three months, 76.2% could not go out alone, 99.2% had at least one chronic disease, 87.6% had constipation complaints, 2.9% had chronic cough, and 45.1% had fallen due to rising suddenly to go to the toilet. The Incontinence Quality of Life (QOL) average score was 54.3 ± 21.1, the Sandvik score 12.1 ± 2.5, and the Urogenital Distress Inventory score 47.7 ± 9.2. Difficulties experienced due to incontinence were: feeling of unhappiness (99.5%), constant feeling of urine smell due to failing to change briefs frequently (67.1%), withdrawal from social life (87.2%), inability to use pads (89.7%), feeling of disturbing household members/other individuals (99.2%), dizziness/falls due to sudden rising (87.5%), feeling that others do not perceive the situation (87.4%), insomnia (94.3%), lack of assistance (78.2%), and inability to afford urine protection briefs (84.7%). Results: This study found many unresolved issues at the individual and community levels affecting the quality of life of women with incontinence. Given how common this problem is among women, regular home care training programs at the institutional level in our country would clearly be effective in facilitating daily life.
Keywords: health problems, incontinence, incontinence quality of life questionnaire, old age, urogenital distress inventory, Sandvik severity, women
Procedia PDF Downloads 321
833 Development of Method for Detecting Low Concentration of Organophosphate Pesticides in Vegetables Using near Infrared Spectroscopy
Authors: Atchara Sankom, Warapa Mahakarnchanakul, Ronnarit Rittiron, Tanaboon Sajjaanantakul, Thammasak Thongket
Abstract:
Vegetables are frequently contaminated with pesticide residues, which makes them one of the main food safety concerns among agricultural products. The objective of this work was to develop a method to detect organophosphate (OP) pesticide residues in vegetables using the near infrared (NIR) spectroscopy technique. Low concentrations (ppm) of OP pesticides in vegetables were investigated. The experiment was divided into 2 sections. In the first section, Chinese kale spiked with different concentrations of chlorpyrifos residues (0.5-100 ppm) was chosen as the sample model to demonstrate the appropriate conditions of sample preparation, both for solution and solid samples. The spiked samples were extracted with acetone. The sample extracts were applied as solution samples, while the solid samples were prepared by the dry-extract system for infrared (DESIR) technique. The DESIR technique was performed by embedding the solution sample on filter paper (GF/A) and then drying. The NIR spectra were measured in transflectance mode over the wavenumber region of 12,500-4,000 cm⁻¹. The QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS) was performed as the standard method. The results from the first section showed that the DESIR technique with NIR spectroscopy gave an accurate calibration, with an R² of 0.93 and an RMSEP of 8.23 ppm. However, in the case of solution samples, the prediction from the NIR-PLSR (partial least squares regression) equation showed poor performance (R² = 0.16 and RMSEP = 23.70 ppm). In the second section, the DESIR technique coupled with NIR spectroscopy was applied to the detection of OP pesticides in vegetables. Vegetables (Chinese kale, cabbage and hot chili) were spiked with OP pesticides (chlorpyrifos, ethion and profenofos) at different concentrations ranging from 0.5 to 100 ppm. Solid samples were prepared (based on the DESIR technique) and then scanned with an NIR spectrophotometer at ambient temperature (25 ± 2°C). The NIR spectra were measured as in the first section. NIR-PLSR gave the best calibration equation for detecting low concentrations of chlorpyrifos residues in the vegetables (Chinese kale, cabbage and hot chili), with prediction-set R² and RMSEP of 0.85-0.93 and 8.23-11.20 ppm, respectively. For ethion residues, the best NIR-PLSR calibration equation showed R² and RMSEP of 0.88-0.94 and 7.68-11.20 ppm, respectively. Likewise, for profenofos, NIR-PLSR gave a good calibration equation for detecting the residues in vegetables, with R² and RMSEP of 0.88-0.97 and 5.25-11.00 ppm, respectively. Moreover, the calibration equations developed in this work could rapidly predict the concentrations of OP pesticide residues (0.5-100 ppm) in vegetables, and there was no significant difference between NIR-predicted values and actual values (data from GC-MS) at a confidence interval of 95%. The proposed method using NIR spectroscopy with the DESIR technique has thus proved to be an efficient method for the screening detection of OP pesticide residues at low concentrations, increasing the food safety potential of vegetables for domestic and export markets.
Keywords: NIR spectroscopy, organophosphate pesticide, vegetable, food safety
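To illustrate the PLSR calibration workflow described above, the sketch below fits a partial least squares model to synthetic spectra; the data, wavenumber grid and number of latent variables are placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra: 120 samples x 500 wavenumber points,
# with absorbance loosely tied to a spiked concentration of 0.5-100 ppm
conc = rng.uniform(0.5, 100, 120)
spectra = rng.normal(0, 0.02, (120, 500)) + np.outer(conc, np.linspace(0.001, 0.01, 500))

X_train, X_test, y_train, y_test = train_test_split(spectra, conc, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)   # number of latent variables is an assumption
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
r2 = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"R2 = {r2:.2f}, RMSEP = {rmsep:.2f} ppm")
```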
Procedia PDF Downloads 150
832 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy, and cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the generalization of absorption technology, limiting its benefits and its contribution to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work a promising new technology is under study, consisting in the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is first subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration was developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained; for this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, a solution channel length lower than 3 cm is recommended. The maximum values of R obtained in this work are higher than the ones found in optimized horizontal falling film absorbers using the same solution. The results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained using flat plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
Procedia PDF Downloads 286
831 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution
Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit
Abstract:
Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous complex multi-component systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models, and it is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the interactions in such complex media, especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, given that mannitol and sorbitol have the same constitutive chemical groups. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics
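As a minimal sketch of the hydrogen-bond autocorrelation analysis mentioned above, the function below computes the intermittent autocorrelation C(t) from a binary bond-existence matrix; the 'trajectory' is randomly generated, not an actual MD output, and the integrated-lifetime estimate is a crude illustration only.

```python
import numpy as np

def hbond_autocorr(h, max_lag):
    """Intermittent hydrogen-bond autocorrelation C(t) = <h(0) h(t)> / <h>,
    averaged over all time origins and bonds.
    h: (n_frames, n_bonds) binary matrix, 1 if the bond exists in that frame."""
    h = np.asarray(h, dtype=float)
    n_frames = h.shape[0]
    norm = h.mean()
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(h[: n_frames - lag] * h[lag:]) / norm
    return c

# Toy 'trajectory': 2000 frames, 50 donor-acceptor pairs with random on/off bonds
rng = np.random.default_rng(1)
h = (rng.random((2000, 50)) < 0.3).astype(int)

C = hbond_autocorr(h, max_lag=200)
lifetime = C.sum()   # crude integrated lifetime, in frame units
print(f"C(0) = {C[0]:.2f}, integrated lifetime ~ {lifetime:.1f} frames")
```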
Procedia PDF Downloads 44
830 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars
Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez
Abstract:
In order to achieve the transition from a fossil-based to a bio-based economy, biomass needs to replace fossil resources in meeting the world’s energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are in turn converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and into their bottlenecks and key cost drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. Conceptual process designs were developed using Aspen Plus based on currently available data from laboratory experiments. Then, operating and capital costs were estimated and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were done to take into account possible variations. Results indicate that both processes perform similarly from an energy intensity point of view, ranging between 34 and 50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a yield higher by a factor of 1.6-3.6 compared to butadiene. For butadiene production, with the economic parameters used in this study, both cases gave a negative NPV (-642 and -647 M€), indicating economic infeasibility. For caprolactam production, one of the cases also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing the capacity (higher C6 sugars intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the price of the C6 sugars feedstock and of the butadiene product. However, even at 100% variation of these two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential in comparison to that of butadiene. The results are useful in guiding experimental research and providing direction for the further development of bio-based chemicals.
Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling
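The NPV indicator used above follows the standard discounted cash flow definition; the sketch below applies it to a hypothetical capital cost, annual cash flow, discount rate and plant lifetime, not to figures from the study.

```python
def npv(capex, annual_cash_flows, discount_rate):
    """Net Present Value: -CAPEX + sum of CF_t / (1 + r)^t over the plant lifetime."""
    return -capex + sum(cf / (1 + discount_rate) ** t
                        for t, cf in enumerate(annual_cash_flows, start=1))

# Hypothetical bio-based route: 300 M euro investment, 20-year life, 10% discount rate
capex = 300.0                 # M euro
cash_flows = [40.0] * 20      # net annual cash flow, M euro/yr
print(f"NPV = {npv(capex, cash_flows, 0.10):.1f} M euro")
```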
Procedia PDF Downloads 152
829 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench and temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in the simulation because it is assumed that the temperature is homogenized after some holding time. Therefore, to stay close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. We thus present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, representing the real temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique allowing its prediction remains challenging. With this in mind, we propose here, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. In the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared with literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
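The abstract's own analytical DRX-initiation model is not reproduced here; as a generic illustration only, the sketch below uses a common power-law criterion linking the critical strain to the Zener-Hollomon parameter, with placeholder material constants (activation energy, prefactor and exponent are assumptions, not values from the paper).

```python
import math

R_GAS = 8.314  # J/(mol K)

def zener_hollomon(strain_rate, T_kelvin, Q_activation):
    """Z = strain_rate * exp(Q / (R T))."""
    return strain_rate * math.exp(Q_activation / (R_GAS * T_kelvin))

def critical_strain(strain_rate, T_kelvin, Q_activation=350e3, a=6.0e-4, m=0.15):
    """Generic power-law criterion eps_c = a * Z^m (placeholder constants)."""
    return a * zener_hollomon(strain_rate, T_kelvin, Q_activation) ** m

# Hypothetical hot-forging condition: strain rate 0.1 /s at 1100 C (1373 K)
eps_c = critical_strain(strain_rate=0.1, T_kelvin=1373.0)
print(f"critical strain for DRX initiation ~ {eps_c:.3f}")
```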
Procedia PDF Downloads 80828 Excess Body Fat as a Store Toxin Affecting the Glomerular Filtration and Excretory Function of the Liver in Patients after Renal Transplantation
Authors: Magdalena B. Kaziuk, Waldemar Kosiba, Marek J. Kuzniewski
Abstract:
Introduction: Adipose tissue is a typical place for storing water-insoluble toxins in the body. It is a connective tissue whose intercellular substance consists of fat, the level of which in people with low physical activity should be 18-25% for women and 13-18% for men. Based on the fat distribution in the body, we distinguish two types of obesity: android (visceral, abdominal) and gynoidal (gluteal-femoral, peripheral). Abdominal obesity increases the risk of complications of cardiovascular diseases and of impaired renal and liver function and, through its influence on metabolic disorders, lipid metabolism, diabetes and hypertension, leads to the emergence of the metabolic syndrome. Obesity will therefore especially overload kidney function in patients after transplantation. Aim: An attempt was made to estimate the impact of the amount of fat tissue on transplanted kidney function and on the excretory function of the liver in patients after Ktx. Material and Methods: The study included 108 patients (50 female, 58 male; age 46.5 +/- 12.9 years) with a functioning kidney transplant more than 3 months after transplantation. Body composition was analyzed using electrical bioimpedance (BIA) and anthropometric measurements. Basal metabolic rate (BMR), muscle mass, total body water content and the amount of body fat were estimated. Information about physical activity was obtained during clinical examination. Nutritional status and type of obesity were determined using the Waist-to-Height Ratio (WHtR) and Waist-to-Hip Ratio (WHR) indicators. The excretory function of the transplanted kidney was assessed by calculating the estimated glomerular filtration rate (eGFR) using the MDRD formula. Liver function was assessed by serum total bilirubin and alanine aminotransferase (ALT) concentrations. Haemolytic uremic syndrome (HUS) was excluded in our patients. Results: 19.44% of patients were underweight, 22.37% had normal weight, 11.11% were overweight, and the rest (49.08%) were obese. People with an android build had a lower eGFR compared with those with a gynoidal build (p = 0.004). All obese patients had an amount of body fat elevated by a few to several percent. The higher the body fat percentage, the lower the patients' eGFR (p < 0.001). Elevated ALT levels correlated significantly with a high fat content (p < 0.02). Conclusion: An increased amount of body fat, particularly in the case of android obesity, can be a predictor of kidney and liver damage. Obese patients should therefore have more frequent diagnostic monitoring of these organs, together with intensive dietary and pharmacological management and regular physical activity adapted to their current physical condition after transplantation.Keywords: obesity, body fat, kidney transplantation, glomerular filtration rate, liver function
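For reference, the abbreviated (4-variable) MDRD equation used to estimate eGFR can be sketched as below. The IDMS-traceable constant 175 is assumed, since the abstract does not state which variant of the formula was applied, and the example inputs are hypothetical.

```python
# Sketch of the abbreviated (4-variable) MDRD equation for estimating GFR from serum
# creatinine. The constant 175 (IDMS-traceable version) is an assumption; the example
# patient values are hypothetical.

def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool = False) -> float:
    """Estimated GFR in mL/min/1.73 m^2 (abbreviated MDRD)."""
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 47-year-old female recipient with serum creatinine 1.4 mg/dL (hypothetical).
print(f"eGFR = {egfr_mdrd(1.4, 47, female=True):.1f} mL/min/1.73 m^2")
```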
Procedia PDF Downloads 461827 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact related to ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements in machinery led to higher exploitation of this natural resource and to greater international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high proportion of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with specific data on stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data on energy, materials and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then processed with appropriate software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the compilation of accurate Life Cycle Inventory data aims at making ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector available to researchers and stone experts.Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact
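The parameterized life cycle model described above can be illustrated with a minimal cradle-to-gate sketch such as the one below. The stage names, per-stage consumptions, emission factors and the cutting-yield parameter are hypothetical placeholders and are not taken from the collected datasets.

```python
# A minimal sketch of a parameterized cradle-to-gate inventory, in the spirit of the
# free-parameter life cycle model described above. All stages, consumptions and
# emission factors are hypothetical placeholders, not values from the study.

# Hypothetical per-stage inventory per m^3 of stone processed.
stages = {
    # stage:            (diesel [L/m^3], electricity [kWh/m^3], water [m^3/m^3])
    "bench extraction":  (4.0,  12.0, 0.8),
    "block cutting":     (0.5,  35.0, 1.5),
    "surface finishing": (0.0,  20.0, 0.6),
}

# Hypothetical emission factors (kg CO2-eq per unit), standing in for database entries.
ef_diesel_l, ef_electricity_kwh = 3.2, 0.35

def cradle_to_gate_gwp(volume_m3: float, yield_slab_per_block: float = 0.85) -> float:
    """Global warming potential (kg CO2-eq) for a given finished volume, with a cutting-yield parameter."""
    gwp = 0.0
    for stage, (diesel, electricity, _water) in stages.items():
        # cutting losses mean more raw volume is processed upstream of finishing
        scale = volume_m3 / yield_slab_per_block if stage != "surface finishing" else volume_m3
        gwp += scale * (diesel * ef_diesel_l + electricity * ef_electricity_kwh)
    return gwp

print(f"{cradle_to_gate_gwp(10.0):.0f} kg CO2-eq for 10 m^3 of finished product")
```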
Procedia PDF Downloads 233826 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements
Authors: Dragan Ribarić
Abstract:
We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters, and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first is modelled with one displacement and two rotation degrees of freedom in each of the four element nodes, and the second element has five additional internal degrees of freedom, which give polynomial completeness of the cubic form and can be statically condensed within the element. Both elements pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the rigid body motions are zero. There are no additional spurious zero modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to other known elements that also use quadratic or cubic linked interpolation, by testing them on several benchmark examples. A simple functional upgrade from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement over the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms that complete the full cubic linked interpolation form are incorporated in the displacement and rotation fields is a qualitative improvement achieved, as in the Q4-U3R5 element. Nevertheless, the locking problem exists even for the two presented elements, as in all pure displacement-based elements applied to very thin plates modelled by coarse meshes. Still, good and even slightly better performance can be noticed for the Q4-U3R5 element when compared with elements from the literature, provided the meshes are moderately dense and the plate is not extremely thin. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modelling very skew plates, models with singularities in the stress fields, and circular plates with distorted meshes.Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements
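The rank check referred to above, counting the near-zero eigenvalues of the element stiffness matrix to confirm that only rigid-body modes carry no strain energy, can be illustrated on a generic single-field bilinear Mindlin quadrilateral. The sketch below is not the linked-interpolation Q4-U3/Q4-U3R5 formulation; the material data, thickness and element geometry are arbitrary assumptions, and reduced one-point integration is included only to show how spurious zero-energy modes appear.

```python
# Rank check of a bilinear (4-node) Mindlin plate element by counting near-zero
# eigenvalues of its stiffness matrix. A correct-rank element has exactly three
# zero eigenvalues (rigid-body modes); extra zeros indicate spurious modes.
import numpy as np

E, nu, t, ks = 210e9, 0.3, 0.01, 5.0 / 6.0     # arbitrary material and thickness
Db = E * t**3 / (12 * (1 - nu**2)) * np.array([[1, nu, 0], [nu, 1, 0], [0, 0, (1 - nu) / 2]])
Ds = ks * E / (2 * (1 + nu)) * t * np.eye(2)
xy = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # unit square element

def stiffness(n_gauss):
    g = {1: [0.0], 2: [-1 / np.sqrt(3), 1 / np.sqrt(3)]}[n_gauss]
    wts = {1: [2.0], 2: [1.0, 1.0]}[n_gauss]
    K = np.zeros((12, 12))
    s = np.array([-1, 1, 1, -1]); r = np.array([-1, -1, 1, 1])   # nodal natural coords
    for xi, wx in zip(g, wts):
        for eta, wy in zip(g, wts):
            N = 0.25 * (1 + s * xi) * (1 + r * eta)
            dNdxi = 0.25 * s * (1 + r * eta)
            dNdeta = 0.25 * r * (1 + s * xi)
            J = np.array([dNdxi, dNdeta]) @ xy                   # 2x2 Jacobian
            dNdx, dNdy = np.linalg.inv(J) @ np.array([dNdxi, dNdeta])
            Bb = np.zeros((3, 12)); Bs = np.zeros((2, 12))
            for i in range(4):
                wdof, tx, ty = 3 * i, 3 * i + 1, 3 * i + 2       # per-node DOFs (w, tx, ty)
                Bb[0, tx] = dNdx[i]; Bb[1, ty] = dNdy[i]
                Bb[2, tx] = dNdy[i]; Bb[2, ty] = dNdx[i]
                Bs[0, wdof] = dNdx[i]; Bs[0, tx] = -N[i]
                Bs[1, wdof] = dNdy[i]; Bs[1, ty] = -N[i]
            K += (Bb.T @ Db @ Bb + Bs.T @ Ds @ Bs) * np.linalg.det(J) * wx * wy
    return K

for n in (2, 1):
    lam = np.linalg.eigvalsh(stiffness(n))
    zero = int(np.sum(lam < 1e-8 * lam.max()))
    print(f"{n}x{n} Gauss integration: {zero} zero eigenvalues (3 = rigid-body modes only)")
```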
Procedia PDF Downloads 312825 Therapeutic Challenges in Treatment of Adults Bacterial Meningitis Cases
Authors: Sadie Namani, Lindita Ajazaj, Arjeta Zogaj, Vera Berisha, Bahrije Halili, Luljeta Hasani, Ajete Aliu
Abstract:
Background: The outcome of bacterial meningitis is strongly related to the resistance of bacterial pathogens to the initial antimicrobial therapy. The objective of the study was to analyze the initial antimicrobial therapy, the resistance of meningeal pathogens and the outcome of adult bacterial meningitis cases. Materials/methods: This prospective study enrolled 46 adults older than 16 years of age, treated for bacterial meningitis during the years 2009 and 2010 at the infectious diseases clinic in Prishtinë. Patients were categorized into specific age groups: > 16-26 years of age (10 patients), > 26-60 years of age (25 patients) and > 60 years of age (11 patients). All p-values < 0.05 were considered statistically significant. Data were analyzed using Stata 7.1 and SPSS 13. Results: During the two-year study period, 46 patients (28 males) were treated for bacterial meningitis. 33 patients (72%) had a confirmed bacterial etiology; 13 meningococci, 11 pneumococci, 7 gram-negative bacilli (Ps. aeruginosa 2, Proteus sp. 2, Acinetobacter sp. 2 and Klebsiella sp. 1 case) and 2 staphylococci isolates were found. Neurological complications developed in 17 patients (37%), and the overall mortality rate was 13% (6 deaths). The neurological complications observed were: cerebral abscess (7/46; 15.2%), cerebral edema (4/46; 8.7%), haemiparesis (3/46; 6.5%), recurrent seizures (2/46; 4.3%), and single cases of cavernous sinus thrombosis, facial nerve palsy and decerebration (1/46; 2.1%). The most common meningeal pathogens were meningococcus in the youngest age group, gram-negative bacilli in the second age group and pneumococcus in the oldest age group. Initial single-agent antibiotic therapy (ceftriaxone) was used in 17 patients (37%): in 60% of patients in the youngest age group and in 44% of cases in the second age group. 29 patients (63%) were treated with initial dual-agent antibiotic therapy: ceftriaxone in combination with vancomycin or ampicillin. Ceftriaxone and ampicillin were the most commonly used antibiotics for the initial empirical therapy in adults > 50 years of age. All adults > 60 years of age were treated with initial dual-agent antibiotic therapy, as this age group recorded the highest mortality rate (27%) and rate of adverse outcome (64%). Resistance of pathogens to antimicrobials was recorded in cases caused by gram-negative bacilli and was associated with a greater risk of developing neurological complications (p = 0.09). None of the gram-negative bacilli were resistant to carbapenems; all were resistant to ampicillin, while 5/7 isolates were resistant to cephalosporins. Resistance of meningococci and pneumococci to beta-lactams was not recorded. There were no statistically significant differences in the occurrence of neurological complications (p > 0.05), the resistance of meningeal pathogens to antimicrobials (p > 0.05) or the initial antimicrobial therapy (one vs. two antibiotics) across the adult age groups. Conclusions: Initial antibiotic therapy with ceftriaxone alone or in combination with vancomycin or ampicillin did not cover cases caused by gram-negative bacilli.Keywords: adults, bacterial meningitis, outcomes, therapy
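The kind of 2x2 group comparison reported above (for example, antimicrobial resistance versus the development of neurological complications) can be illustrated with Fisher's exact test; the counts in the sketch below are hypothetical placeholders, not the study data.

```python
# A minimal sketch of a 2x2 comparison (resistant vs. susceptible pathogen against
# development of neurological complications) using Fisher's exact test.
# The counts are hypothetical, not the study data.
from scipy.stats import fisher_exact

#                      complications   no complications
table = [[5, 2],     # resistant pathogen
         [12, 27]]   # susceptible pathogen

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```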
Procedia PDF Downloads 173824 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not particularly complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards the solution of a problem is the primary objective in the initial stages. The optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would have been in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know better the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, acquires prodigious knowledge, and begins to transfer it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations were surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions were noted from the analysis of the responses, and a metric for measuring logic was developed. A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic is attained but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutively plotted ‘resources’ reduces and, as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin): with each resource investment, ideally, the difference keeps decreasing and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is made towards a proportionally higher logic for a larger problem; that is, ideally, the slope of the graph increases with each plotted point.Keywords: decision-making, leadership, logic, strategic management
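The idealized curve described above, with quantified logic on the Y-axis and cumulative dedicated resources (a proxy for time) on the X-axis, can be sketched as follows. The functional form of the shrinking resource increments is an assumed illustration, not the metric developed from the survey responses.

```python
# Sketch of the idealized logic-vs-resource-outflow plot: each successive unit of logic
# is assumed to cost fewer resources, so the slope of the curve increases over time.
# The decay of the resource increments is an arbitrary illustrative assumption.
import numpy as np
import matplotlib.pyplot as plt

logic = np.arange(0, 11)                         # equal increments of attained logic
resource_increment = 10.0 / (1.0 + 0.4 * logic)  # later logic costs fewer resources (assumption)
resources = np.concatenate(([0.0], np.cumsum(resource_increment[1:])))

plt.plot(resources, logic, marker="o")
plt.xlabel("Cumulative dedicated resources (over time)")
plt.ylabel("Quantified logic attained")
plt.title("Idealized logic vs. resource outflow (slope increases over time)")
plt.tight_layout()
plt.savefig("logic_vs_resources.png")
```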
Procedia PDF Downloads 108823 Stress and Overload in Mothers and Fathers of Hospitalized Children: A Comparative Study
Authors: Alessandra Turini Bolsoni Silva, Nilson Rogério Da Silva
Abstract:
Hospitalization for long periods and the experience of invasive and painful clinical procedures can trigger a set of stressors in children, family members and professionals, leading to stress. Mothers are, in general, the main caregivers and therefore experience a high degree of sadness and stress, with an impact on their mental health. The father, in the face of the mother's absence, needs to assume other responsibilities, such as domestic activities and the care of the healthy children, in addition to work activities. He also has to deal with changes in family and work relationships during the child's hospitalization, with disagreements and changes in the relationship with the partner, changes in the relationship with the children, and difficulty reconciling the new caregiving tasks with work. A consequence of the hospitalization process is the interruption of the routine activities of both the child and the family members responsible for the care, who can go through stressful moments due to the consequences of family breakdown, attention focused only on the child, and sleepless nights. In this sense, both the mother and the father can have their health affected by their child's hospitalization. The present study aims to compare the prevalence of stress and overload in mothers and fathers of hospitalized children, as well as possible associations with care-related activities. The participants were 10 fathers and 10 mothers of children hospitalized in a hospital located in a medium-sized city in the interior of São Paulo. Three instruments were used for data collection: 1) a script to characterize the participants; 2) the Lipp Stress Symptom Inventory (ISSL, 2000); 3) the Zarit Burden Interview Protocol – ZBT. Contact was made with the management of the hospital in order to present the objectives of the project, and authorization was then requested for the participation of the parents; after agreement, the interviews were carried out at a time and place convenient for each participant. Participants signed the Free and Informed Consent Form. Data were analyzed according to the instrument application manuals and organized in figures and tables. The results revealed that fathers and mothers have their family and professional routines affected by the hospitalization of their children, with the consequent presence of stress and overload indicators. However, the study points to a greater presence of stress and overload in mothers due to their role as the main caregiver, often interrupting their professional life to provide care. In the case of the father, the routine is changed by taking on household chores and caring for the other children, with professional life being less affected. It is hoped that the data can guide future interventions that promote and develop strategies that favor care and, at the same time, preserve the health of caregivers, including both mothers and fathers, considering that both are affected, albeit in different ways.Keywords: stress, overload, caregivers, parents
Procedia PDF Downloads 66822 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study
Authors: Jayashree Aanand, Gajjam
Abstract:
The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious value, denying the attribution of the same religious merit to the incorrect use of Sanskrit words (asādhu śabda) or to the vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though they may help in communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in the comprehension of correct and incorrect word forms in the Sanskrit and Marathi languages in India. Based on a total of 19 experiments (both web-based and classroom-controlled) with approximately 900 Indian readers, it is found that while the incorrect forms in Sanskrit are comprehended with lower accuracy than the correct word forms, no such difference can be seen for the Marathi language. It is interpreted that incorrect word forms in the native language or in a language which is spoken daily (such as Marathi) pose a smaller cognitive load than in a language that is not spoken on a daily basis but is only used for reading (such as Sanskrit). The theoretical base for the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that the corrupt word forms do have their own expressive power since they convey meaning, whereas the Mimāṁsakas (the Exegetes) and the Naiyāyikas (the Logicians) believe that the corrupt forms can only convey the meaning indirectly, by recalling their association and similarity with the correct forms. The grammarians regarded the vernaculars born of a speaker's inability to speak proper Sanskrit as degenerate versions or fallen forms of the ‘divine’ Sanskrit language, while speakers who could use proper Sanskrit or the standard language were considered Śiṣṭa (‘elite’). For the last few years, sociolinguists have agreed that no variety of language is inherently better than any other; they are all the same as long as they serve the needs of the people who use them. Although the standard form of a language may offer speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If the incorrect word forms trigger the recall of the correct word forms in the reader, as the theory suggests, this would add one extra step in the process of sentential cognition, leading to a higher cognitive load and lower accuracy. This has not been the case for the Marathi language. Although speaking and listening to the vernaculars is the common practice while reading the vernacular is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, in contrast to the Sanskrit readers. The primary reason is that Sanskrit is spoken and read in the standard form only, and vernacular forms of Sanskrit are not found in conversational data.Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit
Procedia PDF Downloads 126821 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage
Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara
Abstract:
Diabetes mellitus is one of the most common chronic diseases in all countries and continues to increase significantly in numbers. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance for preventing progression towards imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and lack of networking between physicians, diabetologists and ophthalmologists. Some diabetic patients visit a healthcare facility for a general checkup, but their eye condition remains largely undetected until they become symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images to detect the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance due to the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in the case of noisy images captured by non-mydriatic cameras. The ground truth of microaneurysms was created by expert ophthalmologists for the MESSIDOR database as well as a private database collected from Indian patients. The network is trained from scratch using the fundus images of the MESSIDOR database. The proposed method is evaluated on DIARETDB1 and the private database. The method successfully detects microaneurysms in both dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used to develop an affordable and accessible AI-based system providing services at grassroots-level primary healthcare units spread across the country, catering to the needs of rural people unaware of the severe impact of DR.Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy
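One common way to handle the kind of severe foreground/background imbalance described here is a focal-style binary loss that down-weights easy background pixels. The PyTorch sketch below is a generic example, not the specific loss function developed by the authors; the alpha and gamma values are assumed hyperparameters.

```python
# Generic binary focal loss for pixel-wise lesion masks, a common remedy for severe
# class imbalance. This is an illustrative sketch, not the authors' loss function.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.75, gamma: float = 2.0) -> torch.Tensor:
    """logits, targets: tensors of shape (N, 1, H, W); targets are 0/1 masks."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)           # probability of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Example with random tensors standing in for network output and ground truth.
logits = torch.randn(2, 1, 128, 128)
masks = (torch.rand(2, 1, 128, 128) < 0.001).float()      # ~0.1% positive pixels
print(focal_loss(logits, masks).item())
```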
Procedia PDF Downloads 142820 Energy Strategies for Long-Term Development in Kenya
Authors: Joseph Ndegwa
Abstract:
Changes are required if energy systems are to foster long-term growth. The main problems are increasing access to an inexpensive, dependable, and sufficient energy supply while addressing environmental implications at all levels. Policies can help to promote sustainable development by providing adequate and inexpensive energy sources to underserved regions, such as liquid and gaseous fuels for cooking and electricity for household and commercial usage; by promoting energy efficiency; by increasing the utilization of new renewables; and by spreading and implementing additional innovative energy technologies. Markets can achieve many of these goals with the correct policies, pricing, and regulations. However, where markets do not work or fail to preserve key public benefits, tailored government policies, programs, and regulations can achieve policy goals. The main strategies for promoting sustainable energy systems are simple, but they require broader recognition of the difficulties we confront, as well as a firmer commitment to specific measures. These measures include making markets operate better by minimizing price distortions, boosting competition, and removing obstacles to energy efficiency; complementing the reform of the energy industry with policies that promote sustainable energy; increasing investments in renewable energy; increasing the rate of technical innovation at each level of the energy innovation chain; fostering technical leadership in underdeveloped nations by transferring technology and enhancing institutional and human capabilities; and promoting more international collaboration. Governments, international organizations, multilateral financial institutions, and civil society, including local communities, business and industry, non-governmental organizations (NGOs), and consumers, all have critical enabling roles to play in the challenge of sustainable energy. Partnerships based on integrated and cooperative approaches and drawing on real-world experience will be necessary. Setting the required framework conditions and ensuring that public institutions collaborate effectively and efficiently with the rest of society are common themes across all industries and geographical areas in achieving sustainable development. Energy is a powerful tool for sustainable development, but significant policy adjustments within the larger enabling framework will be necessary to refocus its influence in order to achieve that aim. If such changes do not take place during the next several decades and are not started soon enough, many of the options currently accessible will be lost, or the price of their ultimate realization (where viable) will grow significantly. In either case, failure to act would seriously impair the capacity of future generations to satisfy their needs.Keywords: sustainable development, reliable, price, policy
Procedia PDF Downloads 65819 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations
Authors: Mustafa Malki, Ewan R. Pearson
Abstract:
Tools utilized by healthcare practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which has the potential to markedly alter the severity of these interactions. To the best of our knowledge, there have been few published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of most commonly used drug combinations in the UK. The UKBB database was utilized to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We identified 37 common and unique interacting genes across all of our drug combinations. Out of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variants with a minor allele frequency ≥ 5%, independence, and passing the HWE test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model. As the data are cross-sectional, drug intolerance was identified from the genotype distribution, as reflected in a lower percentage of patients carrying the risk allele and on the drug combination compared to those free of these risk factors, and vice versa for drug tolerance. In the GoDARTs database, the same list of common drug combinations identified in the UKBB was utilized with the same list of candidate genetic variants, with the addition of 14 new SNPs, giving a total of 114 variants that met the selection criteria in GoDARTs. From the list of the top 200 drug combinations, we selected 28 combinations in which the two drugs are known to be used chronically. For each of our 28 combinations, three drug response phenotypes were identified (drug stop/switch, dose decrease, or dose increase of either of the two drugs during their interaction). The association between each of the three phenotypes belonging to each of our 28 drug combinations was tested against our 114 candidate genetic variants. The results show replication of four findings between both databases: (1) Omeprazole + Amitriptyline + rs2246709 (A > G) variant in the CYP3A4 gene (p-values and ORs with the UKBB and GoDARTs, respectively: 0.048, 0.037, 0.92, and 0.52 (dose increase phenotype)); (2) Simvastatin + Ranitidine + rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75 (drug stop/switch phenotype)); (3) Atorvastatin + Doxazosin + rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14 (drug stop/switch phenotype)); (4) Simvastatin + Nifedipine + rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30 (drug stop/switch phenotype)). In addition, some other non-replicated but interesting significant findings were detected. Our work also provides a valuable source of information for researchers interested in DD, DG, or DDG interaction studies, as it highlights the top common drug combinations in the UK and identifies 114 genetic variants related to drug pharmacokinetics.Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics
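The variant-selection step described above (minor allele frequency ≥ 5% and passing the Hardy-Weinberg equilibrium test) can be sketched as follows; the genotype counts and the HWE p-value cut-off in the example are hypothetical.

```python
# Sketch of variant filtering by minor allele frequency (MAF) and a 1-degree-of-freedom
# chi-square test for Hardy-Weinberg equilibrium (HWE) from genotype counts.
# Counts and the HWE cut-off are hypothetical placeholders.
from scipy.stats import chi2

def maf_and_hwe(n_aa: int, n_ab: int, n_bb: int):
    """Return (minor allele frequency, HWE chi-square p-value) from genotype counts."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    maf = min(p, 1 - p)
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    chi_sq = sum((obs - exp) ** 2 / exp
                 for obs, exp in zip((n_aa, n_ab, n_bb), expected))
    return maf, chi2.sf(chi_sq, df=1)

maf, p_hwe = maf_and_hwe(n_aa=3500, n_ab=1400, n_bb=100)
keep = maf >= 0.05 and p_hwe > 1e-6          # 1e-6 is an assumed HWE cut-off
print(f"MAF = {maf:.3f}, HWE p = {p_hwe:.3g}, keep = {keep}")
```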
Procedia PDF Downloads 163818 The Effects of Irregular Immigration Originating from Syria on Turkey's Security Issues
Authors: Muzaffer Topgul, Hasan Atac
Abstract:
After the September 11 attacks, the fight against terrorism rose to a higher level in countries' security concepts. The ensuing reactions of some nation states have led to the formation of unstable areas in different parts of the world. In Iraq and Syria in particular, the influence of radical groups has risen with the weakening of the central governments. Turkey, given its geographical proximity to the current crisis, has become a stop on the movement of people displaced by terrorism. In the process, the policies of the Syrian regime resulted in a civil war that has been going on since 2011 and remains an unresolved crisis. As the problem has extended, the foreign policies of the world powers have changed; moreover, the ongoing effects of the unrest, the conflicting interests of foreign powers, and the conflicts in the region caused by the activities of radical groups have increased instability within the country. This situation continues to affect the security of Turkey, particularly through illegal immigration. The number of Syrians who have taken refuge in Turkey due to the civil war has exceeded two million; alongside the continuing uncertainty about the legal status of asylum seekers and the security problems of the asylum seekers themselves, there are also problems in education, health and communication (language). In this study, we evaluate the term 'immigration' through the lens of national and international law, place disorganized and illegal immigration within the security sphere, and define the elements/components of irregular migration within the changing security concept. Ultimately, this article assesses the effects of the Syrian refugees on Turkey’s short-term, mid-term, and long-term security in the light of national and international data flows, and presents solutions to the ongoing problem. In explaining the security problems, data obtained from national and international bodies are examined through human security dimensions such as the living conditions of the immigrants, gender ratios and birth rates, the educational circumstances of immigrant children, and the effects of illegal crossings on public order. In addition, the demographic change caused by the immigrants, the changing economic conditions in the areas where the immigrants are mostly concentrated, and their participation in public life are analyzed, and the economic obstacles arising from irregular immigration are clarified. Drawing on all the data gathered across the educational, cultural, social, economic and demographic dimensions, the regional factors affecting migration and the role of irregular migration in Turkey’s future security are revealed with reference to current knowledge sources.Keywords: displaced people, human security, irregular migration, refugees
Procedia PDF Downloads 308817 Synthesis of Functionalized-2-Aryl-2, 3-Dihydroquinoline-4(1H)-Ones via Fries Rearrangement of Azetidin-2-Ones
Authors: Parvesh Singh, Vipan Kumar, Vishu Mehra
Abstract:
Quinoline-4-ones represent an important class of heterocyclic scaffolds that have attracted significant interest due to their various biological and pharmacological activities. This heterocyclic unit also constitutes an integral component of drugs used for the treatment of neurodegenerative diseases and sleep disorders, and of antibiotics, viz. norfloxacin and ciprofloxacin. The synthetic accessibility and the possibility of functionalization at varied positions of quinoline-4-ones provide an elegant platform for the design of combinatorial libraries of functionally enriched scaffolds with a range of pharmacological profiles. They are also considered attractive precursors for the synthesis of medicinally important molecules such as non-steroidal androgen receptor antagonists, the antimalarial drug chloroquine, and martinellines with antibacterial activity. 2-Aryl-2,3-dihydroquinolin-4(1H)-ones are present in many natural and non-natural compounds and are considered the aza-analogs of flavanones. The β-lactam class of antibiotics is generally recognized to be a cornerstone of human health care due to the unparalleled clinical efficacy and safety of this type of antibacterial compound. In addition to their biological relevance as potential antibiotics, β-lactams have also acquired a prominent place in organic chemistry as synthons, providing highly efficient routes to a variety of targets such as non-protein amino acids, oligopeptides, peptidomimetics and nitrogen heterocycles, as well as biologically active natural and unnatural products of medicinal interest such as indolizidine alkaloids, paclitaxel, docetaxel, taxoids, cryptophycins, lankacidins, etc. A straightforward route towards the synthesis of quinoline-4-ones via the triflic acid-assisted Fries rearrangement of N-aryl-β-lactams has been reported by Tepe and co-workers. The ring expansion observed in this case was attributed solely to the inherent ring strain of the β-lactam ring, because the -lactam failed to undergo rearrangement under the reaction conditions. The above-mentioned protocol has recently been extended by our group for the synthesis of benzo[b]-azocinon-6-ones via a tandem Michael addition–Fries rearrangement of sorbyl anilides, as well as for the single-pot synthesis of 2-aryl-quinolin-4(3H)-ones through the Fries rearrangement of 3-dienyl-β-lactams. In continuation of our synthetic endeavours with the β-lactam ring, and in view of the lack of convenient approaches for the synthesis of C-3 functionalized quinolin-4(1H)-ones, the present work describes the single-pot synthesis of C-3 functionalized quinolin-4(1H)-ones via the triflic acid-promoted Fries rearrangement of C-3 vinyl/isopropenyl-substituted β-lactams. In addition, DFT calculations and MD simulations were performed to investigate the stability profiles of the synthetic compounds.Keywords: dihydroquinoline, fries rearrangement, azetidin-2-ones, quinoline-4-ones
Procedia PDF Downloads 250