Search results for: order logit model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27145


715 Computational Fluid Dynamics (CFD) Calculations of the Wind Turbine with an Adjustable Working Surface

Authors: Zdzislaw Kaminski, Zbigniew Czyz, Krzysztof Skiba

Abstract:

This paper discusses the CFD simulation of the flow around the rotor of a Vertical Axis Wind Turbine. Unlike experiments, numerical simulation enables us to validate design assumptions and avoid the costly preparation of a model or prototype for a bench test. CFD simulation enables us to compare the characteristics of the aerodynamic forces acting on the rotor working surfaces and to define operational parameters like the torque or power generated by a turbine assembly. This research focused on a rotor with blades capable of modifying their working surfaces, i.e. the surfaces absorbing wind kinetic energy. The operation of this rotor is based on adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis. If this angular aperture α increases, the working surface which absorbs wind kinetic energy also increases. The operation of the turbine is characterized by parameters like the angular aperture of the blades, power, torque, and speed for a given wind speed. These parameters have an impact on the efficiency of the assembly. The distribution of forces acting on the working surfaces in our turbine changes according to the angular velocity of the rotor. Moreover, the resultant of the forces acting on the advancing and retreating blades should be as high as possible. This paper is part of research to improve the efficiency of the rotor assembly. Therefore, using simulation, the courses of the above parameters were studied over three full rotations, individually for each of the blades, for three angular apertures of the blade working surfaces, i.e. 30°, 60°, and 90°, at three wind speeds, i.e. 4 m/s, 6 m/s, and 8 m/s, and rotor speeds ranging from 100 to 500 rpm. Finally, characteristics of the torque coefficient and power as functions of time were created for each blade separately and for the entire rotor. Accordingly, the correlation between turbine rotor power and wind speed was determined for varied values of rotor rotational speed. By processing these data, the correlation between the power of the turbine rotor and its rotational speed was specified for each angular aperture of the working surfaces. Finally, the optimal values, i.e. those of the highest output power for given wind speeds, were read. The research results in the basic characteristics of turbine rotor power as a function of wind speed for the three angular apertures of the blades. Given the nature of rotor operation, growth in turbine output can be expected if the angular aperture of the blades increases. The controlled adjustment of angle α enables a smooth adjustment of the power generated by the turbine rotor. If the wind speed is significant, this type of adjustment enables the output power to remain at the same level (by reducing angle α) with no risk of damaging the structure. This work has been financed by the Polish Ministry of Science and Higher Education.
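
As a hedged illustration of the post-processing step described above, the sketch below derives torque and power coefficients from mean rotor torque over a sweep of rotational speeds and picks the speed of highest output power. It is a minimal sketch, not the authors' CFD workflow: the density, geometry, and torque values are invented placeholders.

```python
import numpy as np

# Illustrative post-processing of CFD torque output for one blade aperture.
# All numbers are placeholders, not results from the paper.
rho = 1.225          # air density [kg/m^3]
v_wind = 6.0         # wind speed [m/s]
R = 0.5              # rotor radius [m] (assumed)
H = 1.0              # rotor height [m] (assumed); VAWT swept area A = 2*R*H
A = 2 * R * H

rpm = np.array([100, 200, 300, 400, 500])
omega = rpm * 2 * np.pi / 60.0                     # angular velocity [rad/s]
torque = np.array([0.9, 1.4, 1.6, 1.3, 0.7])      # mean torque [N*m], invented

power = torque * omega                             # rotor power [W]
c_t = torque / (0.5 * rho * A * R * v_wind**2)     # torque coefficient
c_p = power / (0.5 * rho * A * v_wind**3)          # power coefficient

best = int(np.argmax(power))
print(f"optimum: {rpm[best]} rpm, P = {power[best]:.1f} W, Cp = {c_p[best]:.3f}")
```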

Keywords: computational fluid dynamics, numerical analysis, renewable energy, wind turbine

Procedia PDF Downloads 213
714 A Constructionist View of Projects, Social Media and Tacit Knowledge in a College Classroom: An Exploratory Study

Authors: John Zanetich

Abstract:

Designing an educational activity that encourages inquiry and collaboration is key to engaging students in meaningful learning. Educational Information and Communications Technology (EICT) plays an important role in facilitating cooperative and collaborative learning in the classroom. EICT also facilitates students’ learning and development of the critical thinking skills needed to solve real-world problems. Projects and activities based on constructivism encourage students to embrace complexity as well as find relevance and joy in their learning. They also enhance students’ capacity for creative and responsible real-world problem solving. Classroom activities based on constructivism offer students an opportunity to develop the higher-order thinking skills of defining problems and identifying solutions. Participating in a classroom project is an activity both for acquiring experiential knowledge and for applying new knowledge to practical situations. It also provides an opportunity for students to integrate new knowledge into a skill set through reflection. Classroom projects can be developed around a variety of learning objects, including social media, knowledge management and learning communities. The construction of meaning through project-based learning is an approach that encourages interaction and problem-solving activities. Projects require active participation, collaboration and interaction to reach the agreed-upon outcomes. Projects also serve to externalize the invisible cognitive and social processes taking place in the activity itself and in the student experience. This paper describes a classroom project designed to elicit interactions by helping students to unfreeze existing knowledge, create new learning experiences, and then refreeze the new knowledge. Since constructivists believe that students construct their own meaning through active engagement and participation as well as interactions with others, knowledge management can be used to guide the exchange of both tacit and explicit knowledge in interpersonal interactions between students and to guide the construction of meaning. This paper uses an action research approach to the development of a classroom project and describes the use of technology, social media and the active use of tacit knowledge in the college classroom. In this project, a closed-group Facebook page becomes the virtual classroom where interaction is captured and measured using engagement analytics. In the virtual learning community, the principles of knowledge management are used to identify the process and components of the infrastructure of the learning process. The project identifies class member interests and measures student engagement in a learning community by analyzing regular posting on the Facebook page. These posts are used to foster and encourage interactions, reflect a student’s interest, and serve as reaction points from which viewers of the post convert the explicit information in the post to implicit knowledge. The data were collected over an academic year and were provided, in part, by the Google analytics reports on Facebook and self-reports of posts by members. The results support the use of active tacit knowledge activities, knowledge management and social media to enhance the student learning experience and help create the knowledge that students will use to construct meaning.

Keywords: constructivism, knowledge management, tacit knowledge, social media

Procedia PDF Downloads 212
713 Expression of Selected miRNAs in the Placenta of Intrauterine Growth Restricted Fetuses in Cattle

Authors: Karolina Rutkowska, Hubert Pausch, Jolanta Oprzadek, Krzysztof Flisikowski

Abstract:

The placenta is one of the most important organs, playing a crucial role in fetal growth and development. Placental dysfunction is one of the primary causes of intrauterine growth restriction (IUGR). Cattle have a cotyledonary placenta, which consists of two anatomical parts: fetal and maternal. In cattle, during the first months of pregnancy it is very easy to separate the maternal caruncle from the fetal cotyledon tissue, easier in fact than removing an ordinary glove from one's hand, which makes it easier to conduct tissue-specific molecular studies. Typically, animal models for the study of IUGR are created using surgical methods and malnutrition of the pregnant mother or, in the case of mice, by genetic modification. The cattle model with the MIMT1Del/WT deletion proposed here is unique, however, because it was created without any surgical methods, which significantly distinguishes it from other animal models. The primary objective of the study was to identify differential expression of selected miRNAs in the placenta from normal and intrauterine growth restricted fetuses. The expression of miRNA was examined in the fetal and maternal parts of the placenta from 24 fetuses (12 samples from the fetal part of the placenta and 12 samples from the maternal part). In the study, miRNA sequencing was performed in the placenta of MIMT1Del/WT fetuses and MIMT1WT/WT fetuses. Then, miRNAs involved in fetal growth and development were selected. Analysis of miRNA expression was conducted on an ABI7500 machine. miRNA expression was analyzed by reverse-transcription polymerase chain reaction (RT-PCR), with SNORD47 used as the reference gene. The results were expressed as 2^ΔΔCt, where ΔΔCt = (Ct_ij − Ct_SNORD47,j) − (Ct_i1 − Ct_SNORD47,1); Ct_ij and Ct_SNORD47,j are the Ct values for gene i and for SNORD47 in a sample (named j), and Ct_i1 and Ct_SNORD47,1 are the corresponding Ct values in sample 1. Differences between groups were evaluated by analysis of variance using one-way ANOVA, and Bonferroni’s tests were used for interpretation of the data. All normalised miRNA expression values are expressed on a natural-logarithm scale. The data are expressed as least squares means with standard errors. Significance was declared when P < 0.05. The study shows that miRNA expression depends on the part of the placenta of origin (fetal or maternal) and on the genotype of the animal. miRNAs offer a particularly promising new approach to the study of IUGR. Corresponding tissue samples were collected according to standard veterinary protocols, in accordance with the European Union Normative for Care and Use of Experimental Animals. All animal experiments were approved by the Animal Ethics Committee of the State Provincial Office of Southern Finland (ESAVI-2010-08583/YM-23).
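
The relative-expression formula quoted above can be reproduced directly. The sketch below implements the 2^ΔΔCt calculation with SNORD47 as the reference gene, using invented Ct values; it follows the abstract's sign convention rather than the more common 2^−ΔΔCt form.

```python
import numpy as np

# 2^(ddCt) relative expression with SNORD47 as reference, as in the abstract.
# Ct values are invented; sample 1 serves as the calibrator.
ct_mirna = np.array([24.1, 23.5, 25.0, 24.4])      # gene i, samples j = 1..4
ct_snord47 = np.array([20.0, 19.8, 20.3, 20.1])    # reference, same samples

dct = ct_mirna - ct_snord47        # Ct_ij - Ct_SNORD47,j
ddct = dct - dct[0]                # minus (Ct_i1 - Ct_SNORD47,1)
rel_expr = 2.0 ** ddct             # abstract's 2^(ddCt) convention
log_expr = np.log(rel_expr)        # natural-log scale used for the ANOVA

for j, (d, r) in enumerate(zip(ddct, rel_expr), start=1):
    print(f"sample {j}: ddCt = {d:+.2f}, 2^ddCt = {r:.3f}")
```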

Keywords: placenta, intrauterine growth restriction, miRNA, cattle

Procedia PDF Downloads 311
712 Global Winners versus Local Losers: Globalization Identity and Tradition in Spanish Club Football

Authors: Jim O'Brien

Abstract:

Contemporary global representation and consumption of La Liga across a plethora of media platforms has had significant implications for the historical, political and cultural developments which shaped the development of Spanish club football. This has established and reinforced a hierarchy of a small number of teams belonging, or aspiring to belong, to a cluster of global elite clubs seeking to imitate the blueprint of the English Premier League in respect of corporate branding and marketing, in order to secure a global fan base through success and exposure in La Liga itself and in the Champions League. The synthesis between globalization, global sport and the status of high-profile clubs has created radical change within the folkloric iconography of Spanish football. The main focus of this paper is to critically evaluate the consequences of globalization on the rich tapestry at the core of the game’s distinctive history in Spain. The seminal debate underpinning the study considers whether the divergent aspects of globalization have acted as a malevolent force, eroding tradition, causing financial meltdown and reducing much of the fabric of club football to the status of bystanders, or have promoted a renaissance of these traditions, securing their legacies through new fans and audiences. The study draws on extensive sources on the history, politics and culture of Spanish football, in both English and Spanish. It also uses primary and archive material derived from interviews and fieldwork undertaken with scholars, media professionals and club representatives in Spain. The paper has four main themes. Firstly, it contextualizes the key historical, political and cultural forces which shaped the landscape of Spanish football from the late nineteenth century; the seminal notions of region, locality and cultural divergence are pivotal to this discourse. The study then considers the relationship between football, ethnicity and identity as a barometer of continuity and change, suggesting that tradition is being reinvented and re-framed to reflect the shifting demographic and societal patterns within the Spanish state. Following on from this, consideration is given to the paradoxical function of ‘El Clasico’ and the dominant duopoly of the FC Barcelona-Real Madrid axis in both eroding tradition in the global nexus of football’s commodification and in protecting historic political rivalries. To most global consumers of La Liga, the mega-spectacle and hyperbole of ‘El Clasico’ is the essence of Spanish football, with cultural misrepresentation and distortion catapulting the event to the global media audience. Finally, the paper examines La Liga as a sporting phenomenon in which elite clubs, cult managers and galacticos serve as commodities on the altar of mass consumption in football’s global entertainment matrix. These processes accentuate a homogenous mosaic of cultural conformity which obscures local, regional and national identities and paradoxically fuses the global with the local to maintain the distinctive hue of La Liga, as witnessed by the extraordinary successes of Atlético Madrid and Eibar in recent seasons.

Keywords: Spanish football, globalization, cultural identity, tradition, folklore

Procedia PDF Downloads 299
711 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focusing on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden to the citizens of those economies. Namibia, Nigeria and South Africa are African nations battling high unemployment rates. In 2013, these countries recorded high unemployment rates of 16.9%, 23.9% and 24.9%, respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, whereas in Nigeria and Namibia the share is even lower. Unemployment in Africa has wide implications for households, leading to extensive poverty and inequality and creating rampant criminality. Recently in South Africa there have been xenophobic attacks attributed to unemployment: the high unemployment rate led citizens to chase away foreigners, claiming that they had taken away their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that insufficient capital accumulation is responsible for the high unemployment rates in these countries. For the economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place. The countries in the study were selected after critical research and investigation, based on the following criteria: African economies with unemployment rates above 15% and with about 40% of their workforce unemployed, and African countries with a low level of capital accumulation. This level of unemployment is the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO). Adequate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment rates, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in further research on how to tackle unemployment in developing and emerging economies.
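
The abstract does not report the exact specification, so the following is only a loose sketch of a time-series regression of unemployment on a capital-accumulation proxy, with simulated data, to make the claimed negative relationship concrete.

```python
import numpy as np
import statsmodels.api as sm

# Simulated sketch: unemployment rate regressed on a capital-accumulation
# proxy (e.g. growth of gross fixed capital formation). Not the paper's data.
rng = np.random.default_rng(0)
T = 40
capital_acc = rng.normal(3.0, 1.5, T)                       # % per year
unemployment = 25.0 - 1.2 * capital_acc + rng.normal(0, 1.0, T)

X = sm.add_constant(capital_acc)          # intercept + regressor
fit = sm.OLS(unemployment, X).fit()
print(fit.params)     # negative slope: more accumulation, less unemployment
print(fit.pvalues)
```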

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 257
710 A Bayesian Approach for Health Workforce Planning in Portugal

Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro

Abstract:

Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning process for the health workforce is particularly important, as it ensures a proper balance between the supply of and demand for these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, the planning of the health workforce in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses). In 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, associated with an increasing emigration of young health professionals and a change in citizens’ illness profiles and expectations, must be considered when planning resources in healthcare. The prospect of sudden retirement of large groups of professionals in a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively). Within the scope of the HEALTH 2040 project – which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’ – the present study intends to take a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium, until 2040; (ii) identifying the training needs of physicians and nurses in the medium and long term, until 2040; and (iii) estimating the number of students that must be admitted into medicine and nursing training systems each year, considering the different categories of specialties. The development of such an approach is significantly more critical in a context of limited budget resources and changing healthcare needs. In this context, this study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of the users of health systems), and it proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
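
The abstract names the Bayesian combination of data and expert opinion without detailing the model. A minimal sketch of that idea follows, using conjugate normal-normal updating to blend an expert prior on annual physician retirements with observed counts; all numbers are invented for illustration.

```python
import numpy as np

# Conjugate normal-normal updating: expert prior on annual physician
# retirements combined with observed counts. All numbers are invented.
mu0, tau0 = 900.0, 150.0          # prior mean and sd (expert opinion)
data = np.array([980.0, 1010.0, 1150.0, 1230.0, 1190.0])  # observed counts
sigma = 100.0                     # assumed observation sd

n = len(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)
print(f"posterior: mean {post_mean:.0f}, sd {post_var**0.5:.0f} retirements/yr")
# The posterior then feeds the projection of required training intake.
```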

Keywords: bayesian estimation, health economics, health workforce planning, human health resources planning

Procedia PDF Downloads 249
709 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China

Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

Educational think tanks are widely regarded as a significant part of a nation’s soft power, promoting the scientific and democratic level of educational policy making and playing a critical role in bridging educational research in higher institutions and educational policy making. This study explores the concept, functions and significance of educational think tanks in China and conceptualizes a three-dimensional framework for analyzing approaches to transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education of P.R. China has been promoting a strategy of developing a new type of educational think tank in higher institutions, and this strategy was put into the agenda of the 13th Five-Year Plan for National Education Development released in 2017. In this context, a growing number of scholars have conducted studies putting forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy-making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks; (b) what are the functional objectives of the new educational think tanks; (c) what are the organizational patterns and mechanisms of the new educational think tanks; and (d) through what approaches can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy making process. The authors adopted a case study approach on five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives and organizational patterns, as well as the academic pathways through which researchers contribute to the development of think tanks serving the education policy making process. Data were mainly collected through interviews with center administrators, leading researchers and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks mainly serve multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift towards issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive features in their ways of constructing think tanks, yet they also exposed obstacles and challenges, such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weakness in empirical research methods, and lack of experience in trans-disciplinary collaboration.
The authors finally put forth implications for think tank construction in China and abroad.

Keywords: education policy-making, educational research, educational think tank, higher institution

Procedia PDF Downloads 156
708 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City

Authors: Berhanu Keno Terfa

Abstract:

Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques by considering various factors that contribute to flash flood resilience and developing effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a land suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the central areas that have been previously developed. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
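
To make the AHP step concrete, the sketch below computes criterion weights from a pairwise comparison matrix via the principal eigenvector, checks the consistency ratio, and applies a weighted overlay to one raster cell. The judgments, the four-criterion subset, and the cell scores are illustrative, not the study's actual inputs.

```python
import numpy as np

# AHP weighting sketch: pairwise judgments (Saaty 1-9 scale) for a subset
# of the study's criteria. The matrix entries are illustrative only.
criteria = ["slope", "drainage_density", "elevation", "rainfall"]
A = np.array([
    [1,   2,   3,   1/2],
    [1/2, 1,   2,   1/3],
    [1/3, 1/2, 1,   1/4],
    [2,   3,   4,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

# Consistency ratio CR = CI / RI, with Saaty's random index RI = 0.90 for n = 4.
lam_max = eigvals.real[k]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.90
print(dict(zip(criteria, weights.round(3))), f"CR = {cr:.3f}")

# Weighted overlay for one raster cell with criterion scores on a 1-5 scale.
cell_scores = np.array([4, 5, 2, 3])
print("hazard index:", float(weights @ cell_scores))
```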

Keywords: remote sensing, flash flood hazards, Bishoftu, GIS

Procedia PDF Downloads 29
707 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints

Authors: Rosa Nadine Danenberg

Abstract:

In recent years, bottom-up urban development has started to grow as an alternative to conventional top-down planning. In large numbers, citizens and communities initiate small-scale interventions, which suddenly seem to form a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible; yet how it can be optimally realized remains rather underexamined. The seemingly growing worldwide phenomenon of ‘municipal bottom-up urban development’ therefore necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge on how municipalities can enable citizen-led urban initiatives, from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, derived from interviews with the founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into the possibilities and constraints of enabling practices. On the one hand, diverging outcomes emphasize the strongly contrasting features of the two cases (Stockholm and Istanbul). Firstly, the two cities’ characteristics are drastically different. Secondly, the ideologies and motives behind the emergence of the initiatives vary widely. Thirdly, the major constraints on citizen-led urban initiatives relating to the municipality are considerably different. Two types of municipal organizational structure produce different underlying mechanisms which account for the constraints. The first municipal organizational structure is steered by bureaucracy (Stockholm). It produces an administrative division that brings up constraints such as a lack of responsibility, transparency and continuity among municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, a lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structure, the initiatives have adjusted their organization to the municipality’s underlying structures. On the other hand, this paper has in fact also come to a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship point to converging new urban governance arrangements. This could imply that for the two varying types of municipal organizational structure there is a fitting governance structure: namely, the combination of a neighborhood council with a municipal guide, allowing the initiatives to adopt a politicizing attitude. It is especially this combination that appears key to redeeming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles, supported by co-production methods, while balancing out municipal politics. Next, a neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives.
What is crucial is that it should cater for a more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and a limited role for the prevailing constraints pointed out in this research.

Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm

Procedia PDF Downloads 215
706 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting

Authors: J. Bastos, P. Marques, S. Batterman, F. Freire

Abstract:

Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance, and focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly the car). Although it is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, use and user transportation (commuting with private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life cycle considering the statistical modal mix for commuting in the residence location. Then, a comparative analysis of the different available transportation modes addresses the influence that mode-choice variability has on the results. The results highlight the large contribution of transportation to the overall LC results in all categories. NRE and GHG show a strong correlation, as the three LC phases contribute similar shares to both of them: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for the other impact categories there is a large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for household commuting: car, bus, motorcycle, and company/school collective transport. The car shows the largest results in all impact categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG and OLD (the categories in which transportation accounted for the largest share of the LC), 24% in EUT and 16% in ACID. NRE and GHG show a strong correlation because all modes have internal combustion engines. The second largest results for NRE, GHG and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode performs better than bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifting impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.
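
The contribution and mode-choice figures above lend themselves to a simple worked example. The sketch below reproduces the style of the analysis with illustrative numbers loosely echoing the reported ranges; it is not the study's inventory data.

```python
# Phase contributions per impact category (arbitrary units, loosely echoing
# the reported ranges) and the variability introduced by commuting mode.
impacts = {
    #       (construction, use, commuting)
    "NRE":  (7.0, 44.0, 48.0),
    "GHG":  (8.0, 45.0, 48.0),
    "EUT":  (7.0, 55.0, 38.0),
    "ACID": (5.0, 64.0, 31.0),
    "OLD":  (6.0, 34.0, 60.0),
}
for cat, (con, use, com) in impacts.items():
    total = con + use + com
    print(f"{cat}: construction {100*con/total:.0f}%, use {100*use/total:.0f}%, "
          f"transport {100*com/total:.0f}%")

# Overall LC with the commuting term swapped across modes, relative to car.
fixed = 55.0                                   # construction + use
commute = {"car": 48.0, "bus": 25.0, "motorcycle": 36.0}
base = fixed + commute["car"]
for mode, c in commute.items():
    print(f"{mode}: {100 * (fixed + c) / base:.0f}% of the car-based total")
```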

Keywords: environmental impacts, LCA, Lisbon, transport

Procedia PDF Downloads 361
705 The Potential of On-Demand Shuttle Services to Reduce Private Car Use

Authors: B. Mack, K. Tampe-Mai, E. Diesch

Abstract:

Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of having to cope with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems like bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and to reducing peak-time crowding of classical public transport systems. An emerging new type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries may be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance for each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After the completion of the discrete choice items, the study participants are asked to rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and the opportunity to engage in activities like reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N = 1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver’s license, and live in the city or the surrounding region of Stuttgart. In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
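
A discrete choice experiment of this kind is typically analyzed with a conditional (multinomial) logit model. The sketch below computes logit choice probabilities for the three modes from assumed cost and travel-time coefficients; the attribute levels and taste parameters are invented, since the abstract reports no estimates.

```python
import numpy as np

# Conditional-logit sketch for the three alternatives in the abstract.
# Coefficients and attribute levels are invented for illustration; in a real
# analysis, beta would be estimated by maximum likelihood from the choices.
beta_cost, beta_time = -0.08, -0.05       # assumed taste parameters
alternatives = {
    #            cost [EUR]  time [min]
    "car":       (6.5,       35),
    "public":    (3.0,       50),
    "shuttle":   (4.0,       40),
}

v = {a: beta_cost * c + beta_time * t for a, (c, t) in alternatives.items()}
expv = {a: np.exp(u) for a, u in v.items()}
denom = sum(expv.values())
probs = {a: e / denom for a, e in expv.items()}
print({a: round(p, 3) for a, p in probs.items()})
```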

Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport

Procedia PDF Downloads 181
704 Roads and Agriculture: Impacts of Connectivity in Peru

Authors: Julio Aguirre, Yohnny Campana, Elmer Guerrero, Daniel De La Torre Ugarte

Abstract:

A well-developed transportation network is a necessary condition for a country to derive full benefits from good trade and macroeconomic policies. Road infrastructure plays a key role in the economic development of rural areas of developing countries, where agriculture is the main economic activity. The ability to move agricultural production from the place of production to the market, and then to the place of consumption, greatly influences the economic value of farming activities and of the resources involved in the production process, i.e., labor and land. Consequently, investment in transportation networks contributes to enhancing or overcoming the natural advantages or disadvantages that topography and location have imposed on the agricultural sector. This is of particular importance when dealing with countries, like Peru, with great topographic diversity. The objective of this research is to estimate the impacts of road infrastructure on the performance of the agricultural sector. Specific variables of interest are changes in travel time, shifts from production for self-consumption to production for the market, changes in farmers' income, and impacts on the diversification of the agricultural sector. In the study, a cross-section model with instrumental variables is the central methodological instrument. The data are obtained from agricultural and transport geo-referenced databases, and the instrumental variable specification utilized is based on the Kruskal algorithm. The results show that the expansion of road connectivity reduced farmers' travel time by an average of 3.1 hours and that the proportion of output sold in the market increases by up to 40 percentage points. The increase in connectivity also produces an unexpected increase in the districts' index of diversification of agricultural production. The results are robust to the inclusion of year and region fixed effects, and to controls for geography (i.e., slope and altitude), population variables, and mining activity. Other results are also very telling. For example, a clear positive impact can be seen on access to local markets, but this does not necessarily correlate with an increase in the production of the sector. This can be explained by the fact that agricultural development requires not only the provision of roads but also additional complementary infrastructure and investments intended to provide the necessary conditions so that producers can offer quality products (improved management practices, timely maintenance of irrigation infrastructure, transparent management of water rights, among other factors). Therefore, complementary public goods are needed to enhance the effects of roads on the welfare of the population, beyond enabling increased access to markets.
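
To illustrate the identification strategy, the sketch below hand-rolls two-stage least squares on simulated data, instrumenting road connectivity with a variable z (standing in for the Kruskal-algorithm-based instrument) and comparing the IV slope with the confounded OLS slope. It is a sketch of the method, not the paper's estimation.

```python
import numpy as np

# Hand-rolled 2SLS: z shifts connectivity but affects the outcome only
# through it, while the unobserved confounder u biases naive OLS.
rng = np.random.default_rng(42)
n = 500
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
connectivity = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
share_sold = 10 + 4.0 * connectivity + 3.0 * u + rng.normal(size=n)

# Stage 1: regress connectivity on the instrument, keep fitted values.
Z = np.column_stack([np.ones(n), z])
fitted = Z @ np.linalg.lstsq(Z, connectivity, rcond=None)[0]

# Stage 2: regress the outcome on fitted connectivity.
X2 = np.column_stack([np.ones(n), fitted])
beta_iv = np.linalg.lstsq(X2, share_sold, rcond=None)[0]

# Naive OLS for comparison (biased upward by the confounder u).
X = np.column_stack([np.ones(n), connectivity])
beta_ols = np.linalg.lstsq(X, share_sold, rcond=None)[0]
print(f"IV slope:  {beta_iv[1]:.2f} (true 4.0)")
print(f"OLS slope: {beta_ols[1]:.2f} (biased)")
```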

Keywords: agricultural development, market access, road connectivity, regional development

Procedia PDF Downloads 201
703 Development of Method for Detecting Low Concentration of Organophosphate Pesticides in Vegetables Using Near Infrared Spectroscopy

Authors: Atchara Sankom, Warapa Mahakarnchanakul, Ronnarit Rittiron, Tanaboon Sajjaanantakul, Thammasak Thongket

Abstract:

Vegetables are frequently contaminated with pesticide residues, making them the leading food safety concern among agricultural products. The objective of this work was to develop a method to detect organophosphate (OP) pesticide residues in vegetables using the Near Infrared (NIR) spectroscopy technique. Low concentrations (ppm) of OP pesticides in vegetables were investigated. The experiment was divided into two sections. In the first section, Chinese kale spiked with different concentrations of chlorpyrifos residues (0.5-100 ppm) was chosen as the sample model to establish the appropriate conditions of sample preparation, for both solution and solid samples. The spiked samples were extracted with acetone. The sample extracts were applied as solution samples, while the solid samples were prepared by the dry-extract system for infrared (DESIR) technique. The DESIR technique was performed by embedding the solution sample on filter paper (GF/A) and then drying. The NIR spectra were measured in transflectance mode over the wavenumber region of 12,500-4,000 cm⁻¹. The QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS) was performed as the standard method. The results from the first section showed that the DESIR technique with NIR spectroscopy gave an accurate calibration result, with an R² of 0.93 and an RMSEP of 8.23 ppm. In the case of solution samples, however, the prediction based on the NIR-PLSR (partial least squares regression) equation showed poor performance (R² = 0.16 and RMSEP = 23.70 ppm). In the second section, the DESIR technique coupled with NIR spectroscopy was applied to the detection of OP pesticides in vegetables. Vegetables (Chinese kale, cabbage and hot chili) were spiked with OP pesticides (chlorpyrifos, ethion and profenofos) at different concentrations ranging from 0.5 to 100 ppm. Solid samples were prepared (based on the DESIR technique), and then the samples were scanned by an NIR spectrophotometer at ambient temperature (25 ± 2 °C). The NIR spectra were measured as in the first section. NIR-PLSR gave the best calibration equation for detecting low concentrations of chlorpyrifos residues in vegetables (Chinese kale, cabbage and hot chili), with a prediction-set R² of 0.85-0.93 and an RMSEP of 8.23-11.20 ppm. For ethion residues, the best NIR-PLSR calibration equation showed an R² of 0.88-0.94 and an RMSEP of 7.68-11.20 ppm. Likewise, for profenofos, NIR-PLSR gave the best calibration equation for detecting the residues in vegetables, with an R² of 0.88-0.97 and an RMSEP of 5.25-11.00 ppm. Moreover, the calibration equations developed in this work could rapidly predict the concentrations of OP pesticide residues (0.5-100 ppm) in vegetables, and there was no significant difference between NIR-predicted values and actual values (data from GC-MS) at a confidence interval of 95%. In this work, the proposed method using NIR spectroscopy with the DESIR technique proved to be an efficient method for screening for OP pesticide residues at low concentrations, and thus increases the food safety potential of vegetables for domestic and export markets.
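
A minimal sketch of the NIR-PLSR calibration workflow follows, using scikit-learn on simulated spectra whose peak intensity scales with concentration; the spectral simulation and component count are assumptions, not the study's measured data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# PLSR calibration between (simulated) NIR spectra and pesticide
# concentration; real work would use the measured 12,500-4,000 cm-1 spectra.
rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 120, 600
conc = rng.uniform(0.5, 100.0, n_samples)              # ppm, as in the study
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 300) / 40.0) ** 2)
X = conc[:, None] * peak[None, :] + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

X_tr, X_te, y_tr, y_te = train_test_split(X, conc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

y_hat = pls.predict(X_te).ravel()
rmsep = float(np.sqrt(np.mean((y_te - y_hat) ** 2)))
r2 = pls.score(X_te, y_te)
print(f"R2 = {r2:.2f}, RMSEP = {rmsep:.2f} ppm")
```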

Keywords: NIR spectroscopy, organophosphate pesticide, vegetable, food safety

Procedia PDF Downloads 146
702 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalized use of absorption technology, limiting its benefits in contributing to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from one another. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration was developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained; for this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
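
As a rough illustration of the optimization metric, the sketch below evaluates R = cooling power / absorber volume for a few channel lengths, using an exponential decay of the absorption flux as a stand-in for the EES heat- and mass-transfer model. All constants are invented; only the qualitative trend (shorter channels give higher R) mirrors the abstract.

```python
import math

# R = cooling power / absorber volume vs. channel length. The exponential
# decay of the absorption flux stands in for the EES model; all constants
# are invented for illustration.
h_fg = 2.4e6      # latent heat of absorbed water vapour [J/kg] (approx.)
width = 0.05      # channel width [m] (assumed)
height = 1.5e-3   # absorber thickness per channel pair [m] (assumed)
j0 = 2.0e-3       # inlet absorption mass flux [kg/(m^2 s)] (assumed)
k = 30.0          # decay rate of the driving potential [1/m] (assumed)

for L in (0.01, 0.03, 0.06):                          # channel length [m]
    m_abs = j0 * width * (1 - math.exp(-k * L)) / k   # absorbed vapour [kg/s]
    R = m_abs * h_fg / (width * height * L)           # [W/m^3]
    print(f"L = {L*100:.0f} cm: R = {R:,.0f} W/m^3")
# R falls with channel length, echoing the < 3 cm recommendation above.
```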

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 279
701 The Biomechanical Assessment of Balance and Gait for Stroke Patients and the Implications in the Diagnosis and Rehabilitation

Authors: A. Alzahrani, G. Arnold, W. Wang

Abstract:

Background: Stroke commonly occurs in middle-aged and elderly populations, and the early diagnosis of stroke is still difficult. Patients who have suffered a stroke have balance and gait patterns different from those of healthy people. Advanced motion analysis techniques have been routinely used in the clinical assessment of cerebral palsy. However, so far, little research has been done on the direct diagnosis of early stroke patients using motion analysis. Objectives: The aim of this study was to investigate whether patients with stroke have different balance and gait from healthy people, and which biomechanical parameters could be used to predict and diagnose patients who are at risk of stroke. Methods: Thirteen patients with stroke were recruited as subjects, and their gait and balance were analysed. Twenty age-matched healthy subjects participated in this study as a control group. All subjects' gait and balance data were collected using Vicon Nexus® to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test. Then, they were asked to walk along a 10 m walkway at their comfortable speed. Participants performed 6 trials of single-leg balance for each side and 10 trials of walking. From the recorded trials, three good ones were analysed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Results: The temporal-spatial variables of the stroke subjects were compared with those of the healthy subjects, and a significant difference (p < 0.05) was found between the groups. Step length, speed, and cadence were lower in stroke subjects than in the healthy group. The stroke patient group showed significantly decreased gait speed (mean ± SD: 0.85 ± 0.33 m/s), cadence (96.71 ± 16.14 steps/min), and step length (0.509 ± 0.17 m) compared to the healthy group, whose gait speed was 1.2 ± 0.11 m/s, cadence 112 ± 8.33 steps/min, and step length 0.648 ± 0.43 m. Moreover, it was observed that patients with stroke show significant differences in ankle, hip, and knee joint kinematics in the sagittal and coronal planes. The results also showed a significant difference between the groups in the single-leg balance test: single-leg stance time was shorter in the stroke patients (5.97 ± 6.36 s) than in the healthy group (14.36 ± 10.20 s). Conclusion: Our results showed significant differences between stroke patients and healthy subjects in various aspects of the gait analysis and balance test. As a consequence of these findings, biomechanical parameters such as joint kinematics, gait parameters, and the single-leg stance balance test could be used in clinical practice to predict and diagnose patients who are at a high risk of further stroke.
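
The abstract does not state which statistical test produced the p-values, but the reported group summaries (n = 13 stroke, n = 20 healthy) are enough to sanity-check the comparisons; the sketch below runs Welch t-tests from those summary statistics as one plausible choice.

```python
from scipy.stats import ttest_ind_from_stats

# Welch t-tests from the summary statistics reported in the abstract
# (n = 13 stroke, n = 20 healthy); the abstract's exact test is not stated.
tests = {
    #                       (stroke mean, SD),  (healthy mean, SD)
    "gait speed [m/s]":      ((0.85, 0.33),   (1.20, 0.11)),
    "cadence [steps/min]":   ((96.71, 16.14), (112.0, 8.33)),
    "single-leg stance [s]": ((5.97, 6.36),   (14.36, 10.20)),
}
for name, ((m1, s1), (m2, s2)) in tests.items():
    t, p = ttest_ind_from_stats(m1, s1, 13, m2, s2, 20, equal_var=False)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```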

Keywords: gait analysis, kinetics, kinematics, single-leg stance, Stroke

Procedia PDF Downloads 138
700 The Valuable Triad of Adipokine Indices to Differentiate Pediatric Obesity from Metabolic Syndrome: Chemerin, Progranulin, Vaspin

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is associated with cardiovascular disease risk factors and metabolic syndrome (MetS). In this study, associations between adipokines, adipokine indices, and obesity indices were evaluated. Plasma adipokine levels may exhibit variations according to body adipose tissue mass. Besides, when obesity is considered an inflammatory disease, adipokines may play some role in this process. The ratios of proinflammatory adipokines to adiponectin may act as highly sensitive indicators of body adipokine status. The aim of the study is to present some adipokine indices thought to be helpful for the evaluation of childhood obesity and to determine the best discriminators in the diagnosis of MetS. The 80 prepubertal children (aged 6-9.5 years) included in the study were divided into three groups: 30 children with normal weight (NW), 25 morbidly obese (MO) children, and 25 MO children with MetS. Physical examinations were performed. Written informed consent forms were obtained from the parents. The study protocol was approved by the Ethics Committee of Namik Kemal University Medical Faculty. Anthropometric measurements, such as weight, height, waist circumference (C), hip C, head C, and neck C, were recorded. Values for body mass index (BMI), the diagnostic obesity notation model assessment Index-II (D2 index), as well as waist-to-hip and head-to-neck ratios, were calculated. Adiponectin, resistin, leptin, chemerin, vaspin, and progranulin assays were performed by ELISA, and adipokine-to-adiponectin ratios were obtained. SPSS Version 20 was used for the evaluation of the data; p values ≤ 0.05 were accepted as statistically significant. Values of BMI, the D2 index, and waist-to-hip and head-to-neck ratios did not differ between the MO and MetS groups (p ≥ 0.05). Except for progranulin (p ≤ 0.01), similar patterns were observed for the plasma levels of each adipokine. There was no difference in vaspin or resistin levels between the NW and MO groups. Significantly increased leptin-to-adiponectin, chemerin-to-adiponectin and vaspin-to-adiponectin values were noted in MO children in comparison with NW children. The most valuable adipokine index was progranulin-to-adiponectin (p ≤ 0.01). This index was strongly correlated with the vaspin-to-adiponectin ratio in all groups (p ≤ 0.05). There was no correlation between vaspin-to-adiponectin and chemerin-to-adiponectin in the NW group; however, a correlation existed in the MO group (r = 0.486; p ≤ 0.05), and a much stronger correlation (r = 0.609; p ≤ 0.01) was observed in the MetS group between these two adipokine indices. No correlations were detected between vaspin and progranulin or between vaspin and chemerin levels. Correlation analyses showed a unique profile confined to MetS children. Adiponectin was found to be correlated with the waist-to-hip (r = -0.435; p ≤ 0.05) and head-to-neck (r = 0.541; p ≤ 0.05) ratios only in MetS children. This study investigated whether adipokine indices have priority over adipokine levels. In conclusion, vaspin-to-adiponectin, progranulin-to-adiponectin and chemerin-to-adiponectin, along with waist-to-hip and head-to-neck ratios, were the optimal combinations. Adiponectin and the waist-to-hip, head-to-neck, vaspin-to-adiponectin and chemerin-to-adiponectin ratios had appropriate discriminatory capability for MetS children.
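
The index construction is simple to reproduce. The sketch below forms adipokine-to-adiponectin ratios and correlates two of them within one group, in the manner of the reported r values; the simulated concentrations and units are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Adipokine-to-adiponectin indices and their within-group correlation.
# Concentrations are simulated placeholders, not the study's measurements.
rng = np.random.default_rng(7)
n = 25                                    # e.g. the MetS group size
adiponectin = rng.uniform(5, 15, n)
vaspin = rng.uniform(0.2, 1.0, n)
progranulin = 30 * vaspin + rng.normal(0, 5, n)   # built to correlate

vaspin_idx = vaspin / adiponectin
progranulin_idx = progranulin / adiponectin
r, p = pearsonr(vaspin_idx, progranulin_idx)
print(f"progranulin/adiponectin vs vaspin/adiponectin: r = {r:.2f}, p = {p:.4f}")
```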

Keywords: adipokine indices, metabolic syndrome, obesity indices, pediatric obesity

Procedia PDF Downloads 202
699 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries, and perhaps the biggest threat to the environment. We at ITC Limited believe that a company’s performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development, and adopting a low-carbon growth path with a cleaner-environment approach. The Agri Business Division - ILTD operates in the tobacco-growing regions of the Andhra Pradesh and Karnataka states of India. The agri value chain of the company comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include the marketing and processing of the agricultural produce. This research covers the greenhouse gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energy, the use of fertilizers, and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory of the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major sources of emission identified are emissions due to nitrogenous fertilizer application during seedling production and in the main field; emissions due to diesel usage by farm machinery; and emissions due to fuel consumption and the burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at the farm level. The identified methodologies are: reducing chemical fertilizer consumption on the farm through site-specific nutrient recommendations; using a sharp shovel for land preparation to reduce diesel consumption; implementing energy conservation technologies to reduce fuel requirements; and avoiding the burning of crop residues by incorporating them into the main field. These methodologies were implemented at the farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO2 sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at a rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emission in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry program has made the farm value chain environment-friendly.
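
Step 1, the GHG inventory, follows the activity-data times emission-factor form of the IPCC guidelines. The sketch below shows that arithmetic for a hypothetical hectare; the emission factors are rough illustrative values, not ITC's audited figures.

```python
# Farm-level GHG inventory in activity-data x emission-factor form (IPCC
# style). Factors are rough illustrative values, not ITC's audited figures.
activities = {
    # source:               (amount, unit,     kg CO2e per unit)
    "nitrogen fertilizer":  (180.0, "kg N/ha", 5.5),
    "diesel for machinery": (60.0,  "L/ha",    2.7),
    "crop residue burning": (900.0, "kg/ha",   1.6),
}
total = 0.0
for source, (amount, unit, ef) in activities.items():
    e = amount * ef
    total += e
    print(f"{source}: {amount} {unit} x {ef} = {e:,.0f} kg CO2e/ha")
print(f"baseline footprint: {total/1000:.1f} t CO2e/ha")
```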

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC Limited

Procedia PDF Downloads 292
698 Using True Life Situations in a Systems Theory Perspective as Sources of Creativity: A Case Study of How to Use Everyday Happenings to Produce Creative Outcomes in Novel and Screenplay Writing

Authors: Rune Bjerke

Abstract:

Psychologists tend to see creativity as a mental and psychological process. However, creativity is also the result of cultural and social interactions. Creativity is therefore not a product of individuals in isolation, but of social systems. Creative people get ideas from the influence of others and the immediate cultural environment – a space of knowledge, situations, and practices. In this study we therefore apply systems theory in practice to activate creativity processes in the production of our novel and screenplay writing. We, as storytellers, actively seek to get into situations in our everyday lives, our systems, to generate ideas. Within our personal systems, we have the potential to induce situations to realise ideas for our texts, which may be accepted by our gate-keepers and can become socially validated. This is our method of writing – get into situations, get ideas into texts, and test them with family and friends in our social systems. An example of novel text as an outcome of our method is as follows: “Is it a matter of obviousness or had I read it somewhere, that the one who increases his knowledge increases his pain? And also, the other way around, with increased pain, knowledge increases, I thought. Perhaps such a chain of effects explains why the rebel August Strindberg wrote seven plays in ten months after the divorce with Siri von Essen. Shortly after, he tried painting. Neither the seven theatre plays were shown, nor the paintings were exhibited. I was standing in front of Munch's painting Women in Three Stages with chaotic mental images of myself crumpled in a church and a laughing x-girlfriend watching my suffering. My stomach was turning at unpredictable intervals and the subsequent vomiting almost suffocated me. Love grief at the worst. Was it this pain Strindberg felt? Despite the failure of his first plays, the pain must have triggered a form of creative energy that turned pain into ideas. Suffering, thoughts, feelings, words, text, and then, the reader experience. Maybe this negative force can be transformed into something positive, I asked myself. The question eased my pain. At that moment, I forgot the damp, humid air in the Munch Museum. Is it the similar type of Strindberg-pain that could explain the recurring, depressive themes in Munch's paintings? Illness, death, love and jealousy. As a beginning art student at the master's level, I had decided to find the answer. Was it the same with Munch's pain, as with Strindberg - a woman behind? There had to be women in the case of Munch - therefore, the painting “Women in Three Stages”? Who are they, what personality types are they – the women in red, black and white dresses from left to the right?” We, the writers, use persons, situations and elements in our systems, in a systems theory perspective, to prompt creative ideas. A conceptual model is provided to advance creativity theory.

Keywords: creativity theory, systems theory, novel writing, screenplay writing, sources of creativity in social systems

Procedia PDF Downloads 114
697 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models, and is seeking to develop computational methodologies such as molecular dynamics simulations to gain insight into the interactions in such complex media, especially hydrogen-bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, at 25°C the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water. Predicting liquid-solid equilibrium properties in this case therefore requires sophisticated solution models that cannot be based solely on chemical group contributions, given that mannitol and sorbitol have the same chemical constitutive groups. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
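
As an illustration of the hydrogen-bond lifetime analysis described above, the sketch below computes the intermittent H-bond autocorrelation function C(t) = ⟨h(0)h(t)⟩/⟨h⟩ from a boolean bond-existence matrix (bonds × frames), which would be produced beforehand with a geometric H-bond criterion in any MD analysis package; the trajectory data here are random stand-ins, not the study's simulations.

```python
import numpy as np

def hbond_autocorrelation(h: np.ndarray, max_lag: int) -> np.ndarray:
    """Intermittent H-bond autocorrelation C(t) = <h(0)h(t)> / <h>.

    h: boolean array of shape (n_bonds, n_frames); h[i, t] is True when
       hydrogen bond i exists in frame t (from a geometric criterion).
    """
    h = h.astype(float)
    n_bonds, n_frames = h.shape
    norm = h.mean()  # <h>
    c = np.empty(max_lag)
    for lag in range(max_lag):
        # average of h(t0) * h(t0 + lag) over all bonds and time origins
        c[lag] = (h[:, : n_frames - lag] * h[:, lag:]).mean() / norm
    return c

# Stand-in data: 200 candidate bonds over 1000 frames (placeholder for
# real MD output from a sorbitol or mannitol trajectory).
rng = np.random.default_rng(0)
h = rng.random((200, 1000)) < 0.3
c = hbond_autocorrelation(h, max_lag=100)
# The decay time of c(t) estimates the H-bond lifetime; sorbitol and
# mannitol trajectories would show different decay rates.
print(c[:5])
```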

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 38
696 Commissioning, Test and Characterization of Low-Tar Biomass Gasifier for Rural Applications and Small-Scale Plant

Authors: M. Mashiur Rahman, Ulrik Birk Henriksen, Jesper Ahrenfeldt, Maria Puig Arnavat

Abstract:

Using biomass gasification to produce producer gas is one of the promising sustainable energy options available for small-scale plants and rural applications for power and electricity. The tar content of the producer gas is the main problem when it is used directly as a fuel. A low-tar biomass (LTB) gasifier of approximately 30 kW capacity has been developed to solve this. A moving-bed gasifier with internal recirculation of pyrolysis gas is the basic principle of the LTB gasifier. The gasifier is built around the concept of mixing the pyrolysis gases with gasifying air and burning the mixture in a separate combustion chamber. Five tests were carried out with wood pellets and wood chips separately, with moisture contents of 9-34%. The LTB gasifier offers excellent opportunities for achieving extremely low tar in the producer gas. The gasifier's producer gas had an extremely low average tar content of 21.2 mg/Nm³ and an average lower heating value (LHV) of 4.69 MJ/Nm³; the tar content in the different tests ranged from 10.6 to 29.8 mg/Nm³. This low tar content makes the producer gas suitable for direct use in an internal combustion engine. Using mass and energy balances, the average gasifier capacity and cold gas efficiency (CGE) were found to be 23.1 kW and 82.7% for wood chips, and 33.1 kW and 60.5% for wood pellets, respectively (the CGE arithmetic is sketched below). The average heat loss in terms of higher heating value (HHV) was 3.2% of the thermal input for wood chips and 1% for wood pellets, while the heat loss in terms of enthalpy was 1% of the thermal input. Thus, the LTB gasifier performs better than typical gasifiers in terms of heat loss. An equivalence ratio (ER) in the range of 0.29 to 0.41 gives better performance in terms of heating value and CGE. The specific gas production yields in this ER range were 2.1-3.2 Nm³/kg. Heating value and CGE change proportionally with the producer gas yield. The average gas composition obtained for wood chips (H₂ 19%, CO 19%, CO₂ 10%, CH₄ 0.7% and N₂ 51%) is higher than typical producer gas compositions. Moreover, the temperature profile of the LTB gasifier showed relatively low temperatures compared to a typical moving-bed gasifier: an average partial oxidation zone temperature of 970°C was observed for wood chips, and the use of a separate combustor for the partial oxidation zone substantially lowers the bed temperature to 750°C. During the test, the engine was started and operated completely on the producer gas. The engine operated well on the produced gas, and no deposits were observed in the engine afterwards. Part of the producer gas flow was used for engine operation, and the corresponding electrical power was found to be 1.5 kW continuously, with a maximum power of 2.5 kW also observed, while the maximum generator capacity is 3 kW. A thermodynamic equilibrium model is in good agreement with the experimental results and correctly predicts the equilibrium bed temperature, gas composition, LHV of the producer gas and ER, when a heat loss of 4% of the energy input is considered.
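
A worked sketch of the cold gas efficiency arithmetic reported above, using the measured gas LHV (4.69 MJ/Nm³) and a gas yield within the stated 2.1-3.2 Nm³/kg range. The fuel LHV assumed below for wood chips is an illustrative value, not a figure taken from the paper.

```python
# Cold gas efficiency (CGE): fraction of the fuel's chemical energy
# recovered as chemical energy in the cold producer gas.

def cold_gas_efficiency(gas_lhv_mj_nm3: float,
                        gas_yield_nm3_per_kg: float,
                        fuel_lhv_mj_per_kg: float) -> float:
    """CGE = (gas LHV * specific gas yield) / fuel LHV."""
    return gas_lhv_mj_nm3 * gas_yield_nm3_per_kg / fuel_lhv_mj_per_kg

gas_lhv = 4.69          # MJ/Nm3, average value reported in the abstract
gas_yield = 2.8         # Nm3 of gas per kg of fuel (within the reported range)
fuel_lhv_chips = 16.0   # MJ/kg, assumed LHV for moist wood chips

cge = cold_gas_efficiency(gas_lhv, gas_yield, fuel_lhv_chips)
print(f"CGE ~ {cge:.0%}")  # ~82% at these assumed values, close to the reported 82.7%
```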

Keywords: biomass gasification, low-tar biomass gasifier, tar elimination, engine, deposits, condensate

Procedia PDF Downloads 112
695 Acrylamide Concentration in Cakes with Different Caloric Sweeteners

Authors: L. García, N. Cobas, M. López

Abstract:

Acrylamide, a probable carcinogen, is formed in foods processed at high temperature (>120 ºC) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. The repeated heating of cane juices during brown sugar production would potentially form acrylamide. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake confections with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining the ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweeteners (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer's yeast (2.3%). The preparations were baked for 45 min at 180 ºC. Moisture was estimated following AOAC methods. Protein was determined by the Kjeldahl method. Ash percentage was calculated by weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in hydrolyzed raw ingredients and final confections. Carbohydrates were determined by difference, and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23. One-way analysis of variance determined differences in acrylamide content and compositional analysis, with the caloric sweetener as a fixed effect; significance levels were determined by applying Duncan's test (p<0.05). A sketch of this kind of analysis follows the abstract. P cakes showed a lower energy value than the other baked products; their sugar content was similar to BS and CS, with 6.1% mean crude protein. Acrylamide content in the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on both reducing sugar and asparagine concentration and availability. Beet sugar samples did not present acrylamide concentrations above the detection and quantification limits; however, the highest acrylamide content was measured in the BS cakes. This may be due to the higher concentration of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Cakes made with panela showed lower acrylamide formation than products elaborated with beet or cane sugar.
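
A sketch of the statistical treatment described (one-way ANOVA with the caloric sweetener as fixed effect, followed by a post-hoc comparison). Note it substitutes Tukey's HSD for Duncan's test, since Duncan's multiple range test is not available in the common Python statistics libraries, and the acrylamide values are invented placeholders rather than the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder acrylamide concentrations (ug/kg) per sweetener group --
# invented numbers for illustration only.
groups = {
    "BS": [38.1, 41.5, 39.9],   # beet sugar cakes
    "CS": [24.0, 26.2, 25.1],   # cane sugar cakes
    "P":  [22.8, 23.9, 24.4],   # panela cakes
    "PC": [23.5, 25.0, 24.1],   # panela + chocolate cakes
}

# One-way ANOVA with sweetener as the fixed effect
f_stat, p_value = f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparison (Tukey HSD standing in for Duncan's test)
values = np.concatenate([np.asarray(v) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```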

Keywords: beet sugar, cane sugar, panela, yogurt cake

Procedia PDF Downloads 65
694 SLAPP Suits: An Encroachment on Human Rights of a Global Proportion and What Can Be Done about It

Authors: Laura Lee Prather

Abstract:

A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, the rule of law and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. The impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people being sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been and will be the targets of multi-million-dollar lawsuits for the purpose of silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers and businessmen against environmentalists, consumers, women's rights advocates and more. SLAPPs are now seen in every region of the world as a means to intimidate people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits and, while commonplace in the United States, are only recently being considered in the EU and the UK. This researcher studied more than thirty years of anti-SLAPP legislative policy in the US; the call for evidence and the resultant EU Commission Anti-SLAPP Directive and Member State recommendations; the call for evidence by the UK Ministry of Justice, the response, and the model anti-SLAPP law presented to the UK Parliament; and conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for ravaging the rule of law.

Keywords: anti-SLAPP laws and policy, comparative media law and policy, EU Anti-SLAPP Directive and Member State recommendations, international human rights of freedom of expression

Procedia PDF Downloads 67
693 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients

Authors: Lise Paesen, Marielle Leijten

Abstract:

People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT (dementia of the Alzheimer type) patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus avoiding a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a single larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, age of acquisition (AoA), colour, ...). The resulting data set contained 50 images belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture-naming tasks, hence the same reaction times ought to be triggered in the typed picture-naming task. Once the picture set was validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥6 years) and low frequency were the criteria for the complex task, which uses 6 images (2 animals, 1 human, 2 objects and 1 vehicle). The data were collected with the keystroke logging programme Inputlog. Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length; a sketch of the pause analysis follows the abstract. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images lead to similar or even faster reaction times compared to the original images, so all the images were used in the main study. The produced texts of the description tasks were significantly longer than in previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when personal typing abilities (obtained via the typing task) were taken into account.
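
A minimal sketch of the kind of pause and interkey-interval analysis described, run on a toy log of timestamped keystrokes. The 2000 ms pause threshold is a common convention in writing-process research, and the data layout is a simplification for illustration, not Inputlog's actual export format.

```python
# Toy keystroke log: (timestamp in ms, key). A real Inputlog export is
# much richer; this layout is a simplification for illustration.
log = [(0, "T"), (180, "h"), (350, "e"), (520, " "),
       (3100, "c"), (3290, "a"), (3460, "t"), (3640, " "),
       (6400, "s"), (6590, "a"), (6780, "t"), (6950, ".")]

PAUSE_THRESHOLD_MS = 2000  # common convention in writing-process research

# Interkey transition times between consecutive keystrokes
transitions = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]

# Pauses: transitions above the threshold; bursts: keystroke runs between pauses
pauses = [t for t in transitions if t >= PAUSE_THRESHOLD_MS]
bursts, current = [], 1
for t in transitions:
    if t >= PAUSE_THRESHOLD_MS:
        bursts.append(current)
        current = 1
    else:
        current += 1
bursts.append(current)

print(f"mean interkey time: {sum(transitions) / len(transitions):.0f} ms")
print(f"pauses >= 2 s: {len(pauses)}, mean burst length: {sum(bursts) / len(bursts):.1f} keys")
```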

Keywords: Alzheimer's disease, experimental design, language decline, writing process

Procedia PDF Downloads 272
692 Developing a Roadmap by Integrating Environmental Indicators with the Nitrogen Footprint in an Agricultural Region, Hualien, Taiwan

Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata

Abstract:

The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen-related compounds, such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and the nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated that the N-footprint is dominated by food, followed by the housing, transportation, and goods and services sectors. To address the impacts from agricultural land, nitrogen cycle research is one of the key solutions. The study site is located in Hualien County, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted there for years, and an environmental indicator system has been established by previous authors based on the concepts of the resilience capacity index (RCI) and the environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. It is therefore very important to develop a roadmap of the nitrogen footprint and to integrate it with environmental indicators. The key focus of the study thus addresses (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, covering both crops and energy consumption in the area. All data were adapted from government statistics databases and cross-checked for consistency before modeling. The activities involved in agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as for the impacts on humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%. The dominant meat production is pork (52%) and poultry (40%); fish and seafood were at similar levels to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita⁻¹ d⁻¹, primarily from rice (430.58 kcal), meats (184.93 kcal) and wheat (ca. 356.44 kcal). The average protein uptake is 87.34 g capita⁻¹ d⁻¹, and 51% is mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan; the arithmetic behind such a figure is sketched below. These results provide a better understanding of the nitrogen demand and loss in the environment, and the roadmap can furthermore support the establishment of nitrogen policy and strategy. Additionally, the results serve to develop a roadmap of the nitrogen cycle for an environmentally friendly farming area, thus illuminating the nitrogen demand and loss of such areas.
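
The food-consumption side of an N-footprint reduces to simple arithmetic: protein intake is converted to nitrogen (protein is roughly 16% N, hence the conventional divisor of 6.25), and production losses are added via per-food "virtual nitrogen factors". The sketch below illustrates this with the abstract's 87.34 g protein per capita per day; the virtual N factor is a placeholder assumption in the spirit of the N-Calculator approach, not a value from the study.

```python
# Per-capita food N-footprint arithmetic (illustrative, following the
# N-Calculator logic: consumption N + production N via virtual N factors).

PROTEIN_G_PER_DAY = 87.34      # average protein uptake reported in the abstract
N_FRACTION_OF_PROTEIN = 0.16   # protein is ~16% nitrogen (the 6.25 divisor)

# Nitrogen actually consumed in food, kg N per capita per year
consumed_n = PROTEIN_G_PER_DAY * N_FRACTION_OF_PROTEIN * 365 / 1000
print(f"N consumed in food: {consumed_n:.1f} kg N/capita/yr")   # ~5.1

# Virtual N factor: kg of N lost to the environment during production
# per kg of N in the food consumed (placeholder value, assumed here).
VIRTUAL_N_FACTOR = 5.7
production_n = consumed_n * VIRTUAL_N_FACTOR

total_footprint = consumed_n + production_n
print(f"Food N-footprint: {total_footprint:.0f} kg N/capita/yr")
# With this assumed factor the total lands near the ~34 kg N/capita/yr
# that the abstract reports, which is the point of the illustration.
```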

Keywords: agricultural production, energy consumption, environmental indicators, nitrogen footprint

Procedia PDF Downloads 299
691 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations

Authors: Mustafa Malki, Ewan R. Pearson

Abstract:

Tools utilized by health care practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which has the potential to markedly alter the severity of these interactions. To the best of our knowledge, there have been limited published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of the most commonly used drug combinations in the UK. The UK Biobank (UKBB) database was utilized to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We recognised 37 common and unique interacting genes across all of our drug combinations. Out of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variant with minor allele frequency ≥5%, independence, and passing the Hardy-Weinberg equilibrium (HWE) test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model (sketched below). As the data are cross-sectional, drug intolerance was identified from the genotype distribution, as indicated by a lower percentage of patients carrying the risk allele among those on the drug combination compared to those free of these risk factors, and vice versa for drug tolerance. In the GoDARTS database, the same list of common drug combinations identified in the UKBB was utilized with the same list of candidate genetic variants, with the addition of 14 new SNPs, giving a total of 114 variants meeting the selection criteria in GoDARTS. From the list of the top 200 drug combinations, we selected 28 combinations in which the two drugs are known to be used chronically. For each of these 28 combinations, three drug response phenotypes were identified (drug stop/switch, dose decrease, or dose increase of either of the two drugs during their interaction). The association between each of the three phenotypes for each of our 28 drug combinations was tested against our 114 candidate genetic variants. The results show replication of four findings between both databases: (1) omeprazole + amitriptyline + rs2246709 (A > G) variant in the CYP3A4 gene (p-values and ORs with the UKBB and GoDARTS respectively = 0.048, 0.037, 0.92, and 0.52; dose increase phenotype); (2) simvastatin + ranitidine + rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75; drug stop/switch phenotype); (3) atorvastatin + doxazosin + rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14; drug stop/switch phenotype); and (4) simvastatin + nifedipine + rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30; drug stop/switch phenotype). In addition, some other non-replicated but interesting significant findings were detected. Our work also provides a rich source of information for researchers interested in DD, DG, or DDG interaction studies, as it highlights the top common drug combinations in the UK and recognizes 114 significant genetic variants related to the drugs' pharmacokinetics.
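
A sketch of the log-additive case-control test described above: each variant is coded as an allele dosage (0, 1, 2 copies of the minor allele) and regressed on case status (here, use of the drug combination) with logistic regression. The data below are simulated placeholders, and in practice covariates such as age, sex, and genetic principal components would be added to the model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# Simulated allele dosage (0/1/2 copies of the minor allele, MAF ~ 0.2)
dosage = rng.binomial(2, 0.2, size=n)

# Simulated phenotype: 1 = on the drug combination (case), 0 = control,
# with a small true log-additive effect of the dosage built in.
logit = -1.0 + 0.15 * dosage
prob = 1 / (1 + np.exp(-logit))
case = rng.binomial(1, prob)

# Log-additive model: one slope per copy of the risk allele
X = sm.add_constant(dosage.astype(float))
fit = sm.Logit(case, X).fit(disp=False)

or_per_allele = np.exp(fit.params[1])
print(f"OR per allele = {or_per_allele:.2f}, p = {fit.pvalues[1]:.3g}")
```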

Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics

Procedia PDF Downloads 160
690 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull

Authors: Elsher Lawson-Boyd

Abstract:

In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs relating to gene expression are being challenged by advances in the postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century (in which genes were considered to have an almost god-like power to create life), epigenetics insists on the role of environmental ‘signals’ or ‘exposures’ in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author's knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both within a human lifetime and across generations. Using qualitative interviews and non-participant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children's Research Institute (MCRI). Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to be chiming in, engaging with, and de-black-boxing ideations emerging from the postgenomic sciences, as they may have significant effects for vulnerable populations not only in Australia but also in developing countries across the Global South.

Keywords: genetics, mental illness, neuroepigenetics, trauma

Procedia PDF Downloads 123
689 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom

Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer

Abstract:

Nutrient pollution and its impact on water quality is a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in the wider catchment to reduce the nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach - catchment nutrient balancing (CNB) - in a rural catchment in Cumbria (the River Petteril), in collaboration with the regulator and others, to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also significant agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem, hence, in order to address the issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be delivered through catchment interventions (the load budget is sketched below). Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the three years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial respectively achieved a 69% and 63% reduction in the phosphorus level in the catchment, against an initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on the overall load reduction; the wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, the trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, are likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
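
The permitting arithmetic behind catchment nutrient balancing is essentially a load budget: a total phosphorus reduction target is apportioned between treatment at the works and catchment interventions, and delivery is tracked against each share. A minimal sketch using the abstract's 150 kg target and 13 kg catchment share; the per-intervention figures are invented for illustration.

```python
# Catchment nutrient balancing as a load budget (figures from the abstract
# except the per-intervention estimates, which are invented).

TARGET_KG_P = 150.0          # total phosphorus load reduction target
CATCHMENT_SHARE_KG_P = 13.0  # share to be delivered by catchment interventions
TREATMENT_SHARE_KG_P = TARGET_KG_P - CATCHMENT_SHARE_KG_P  # delivered at the works

# Hypothetical per-intervention reductions achieved on partner farms (kg P/yr)
interventions = {
    "fencing livestock out of watercourses": 4.2,
    "clean/dirty water separation in farmyards": 5.6,
    "riparian buffer strips": 3.9,
    "soil nutrient management plans": 7.1,
}

delivered_catchment = sum(interventions.values())
print(f"Catchment delivery: {delivered_catchment:.1f} kg P "
      f"vs {CATCHMENT_SHARE_KG_P:.0f} kg target "
      f"({delivered_catchment / CATCHMENT_SHARE_KG_P:.0%} of target)")
# A result well above 100% mirrors the trial's outcome, where the wider
# catchment impact was several times the initial target.
```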

Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater

Procedia PDF Downloads 61
688 Forum Shopping in Biotechnology Law: Understanding Conflict of Laws in Protecting GMO-Based Inventions as Part of a Patent Portfolio in the Greater China Region

Authors: Eugene C. Lim

Abstract:

This paper seeks to examine the extent to which ‘forum shopping’ is available to patent filers seeking protection of GMO (genetically modified organism)-based inventions in Hong Kong. Under Hong Kong's current re-registration system for standard patents, an inventor must first seek patent protection from one of three Designated Patent Offices (DPOs) – those of the People's Republic of China (PRC), the European Union (EU) (designating the UK), or the United Kingdom (UK). The ‘designated patent’ can then be re-registered by the successful patentee in Hong Kong. Interestingly, however, the EU and the PRC do not adopt a harmonized approach toward the patenting of GMOs, and there are discrepancies in their interpretation of the phrase ‘animal or plant variety’. In view of these divergences, the ability to effectively manage ‘conflict of laws’ issues is an important priority for multinational biotechnology firms with a patent portfolio in the Greater China region. Generally speaking, both the EU and the PRC exclude ‘animal and plant varieties’ from the scope of patentable subject matter. However, in the EU, Article 4(2) of the Biotechnology Directive allows a genetically modified plant or animal to be patented if its ‘technical feasibility is not limited to a specific variety’. This principle has allowed certain ‘transgenic’ mammals, such as the ‘Harvard Oncomouse’, to be the subject of a successful patent grant in the EU. There is no corresponding provision on ‘technical feasibility’ in the patent legislation of the PRC. Although the PRC has a sui generis system for protecting plant varieties, its patent legislation allows the patenting of non-biological methods for producing transgenic organisms, not the organisms themselves. This might lead to a situation where an inventor can obtain patent protection in Hong Kong over transgenic life forms through the re-registration of a patent from a more ‘biotech-friendly’ DPO, even though the subject matter in question might not be patentable per se in the PRC. Through a comparative doctrinal analysis of legislative provisions, cases and court interpretations, this paper argues that differences in the protection afforded to GMOs do not generally prejudice the ability of global MNCs to obtain patent protection in Hong Kong. Corporations which are able to first obtain patents for GMO-based inventions in Europe can generally use their European patent as the basis for re-registration in Hong Kong, even if such protection might not be available in the PRC itself. However, the more restrictive approach to GMO-based patents adopted in the PRC would be more acutely felt by enterprises and inventors based in mainland China. The broader scope of protection offered to GMO-based patents in Europe might not be available in Hong Kong to mainland Chinese patentees under the current re-registration model for standard patents, unless they have the resources to apply for patent protection from another (European) DPO as the basis for re-registration.

Keywords: biotechnology, forum shopping, genetically modified organisms (GMOs), greater China region, patent portfolio

Procedia PDF Downloads 325
687 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices, such as guide-vanes in bridge design, to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by such small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit accounting for the free-space boundary conditions typical of this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in regions of interest. In this paper different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as the accuracy of the proposed extensions are presented, along with their relevant applications.
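
To make the O(Np²) cost and the sub-stepping idea concrete, the sketch below evaluates the 2-D Biot-Savart interaction between all particle pairs directly and convects a flagged subset of particles with several smaller time steps per global step. It is a bare-bones illustration of these two ingredients under a regularized kernel, not the authors' implementation (no diffusion, remeshing, or boundary treatment).

```python
import numpy as np

def biot_savart_2d(pos, gamma, delta=0.05):
    """Direct O(Np^2) velocity evaluation for 2-D vortex particles.

    pos: (N, 2) particle positions; gamma: (N,) circulations;
    delta: smoothing core radius of the regularized kernel.
    """
    dx = pos[:, None, :] - pos[None, :, :]        # dx[i, j] = pos_i - pos_j
    r2 = (dx ** 2).sum(-1) + delta ** 2           # regularized distance^2
    k = gamma[None, :] / (2.0 * np.pi * r2)       # kernel strength of vortex j at i
    # point-vortex field: velocity is the separation rotated by 90 degrees
    u = -(k * dx[:, :, 1]).sum(axis=1)
    v = (k * dx[:, :, 0]).sum(axis=1)
    return np.stack([u, v], axis=1)

def convect(pos, gamma, dt, fine, n_sub=4):
    """One global step; particles flagged 'fine' take n_sub substeps."""
    coarse = ~fine
    vel = biot_savart_2d(pos, gamma)
    pos[coarse] += dt * vel[coarse]               # single coarse step
    for _ in range(n_sub):                        # substeps near small details
        vel = biot_savart_2d(pos, gamma)
        pos[fine] += (dt / n_sub) * vel[fine]
    return pos

# Toy setup: 300 particles, substepping only those near the origin
rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, (300, 2))
gamma = rng.normal(0.0, 1e-2, 300)
fine = np.linalg.norm(pos, axis=1) < 0.3          # region of interest
pos = convect(pos, gamma, dt=0.01, fine=fine)
```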

Keywords: adaptation, fluid dynamics, remeshing, substepping, vortex particle method

Procedia PDF Downloads 257
686 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions

Authors: Peter Carter

Abstract:

Comprehensive climate change indicators and trends inform the state of the climate system with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO₂ emissions, atmospheric GHG concentrations (including CO₂ equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO₂ equivalent, and ocean heat content are more reliable. Global warming does not capture future commitment, whereas atmospheric CO₂ equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. Indicators used in assessing safe planetary boundaries are also included, and the indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. The indicators are relevant to climate change policy. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, which has the objective of “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”
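
One way the point about CO₂-equivalent concentration can be made concrete: the total radiative forcing from the well-mixed GHGs is converted to the CO₂ concentration that would produce the same forcing, using the simplified logarithmic expression ΔF = 5.35 ln(C/C₀) (Myhre et al., 1998). The sketch below uses round, illustrative present-day values; the exact concentrations and the non-CO₂ forcing term are placeholders, not the paper's figures.

```python
import math

# Simplified CO2 forcing expression (Myhre et al., 1998): dF = 5.35 * ln(C/C0)
ALPHA = 5.35   # W m-2 per unit of ln(C/C0)
C0 = 278.0     # pre-industrial CO2 concentration, ppm

# Illustrative present-day values (placeholders in the right ballpark):
co2_ppm = 420.0
forcing_co2 = ALPHA * math.log(co2_ppm / C0)

# Forcing from the other well-mixed GHGs (CH4, N2O, halocarbons), assumed:
forcing_non_co2 = 1.1  # W m-2, illustrative

# CO2-equivalent: the CO2 concentration giving the same total forcing
total_forcing = forcing_co2 + forcing_non_co2
co2e_ppm = C0 * math.exp(total_forcing / ALPHA)

print(f"CO2 forcing: {forcing_co2:.2f} W/m2, total: {total_forcing:.2f} W/m2")
print(f"CO2-equivalent concentration: {co2e_ppm:.0f} ppm")
```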

Keywords: climate change, climate change indicators, climate change trends, climate system change interactions

Procedia PDF Downloads 100