Search results for: sound proof panel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2015

185 Investigating the English Speech Processing System of EFL Japanese Older Children

Authors: Hiromi Kawai

Abstract:

This study investigates the nature of EFL older children’s L2 perceptive and productive abilities using classroom data, in order to find a pedagogical solution to the teaching of L2 sounds at an early stage of learning in a formal school setting. It remains inconclusive whether older children who receive only formal EFL school instruction at the initial stage of L2 learning can attain native-like perception and production in English, given the very limited exposure to the target language available. Because EFL Japanese children’s acquisition of English segments has been little studied, the researcher adopts a model of L1 speech processing originally developed, within a psycholinguistic framework, for investigating L1 English children’s speech and literacy difficulties. The model comprises an input channel, an output channel, and lexical representations, and it examines how a child receives information from spoken or written language, remembers and stores it within the lexical representations, and selects and produces spoken or written words. With respect to language universality and language specificity in the acquisition process, the model’s aim of identifying sound errors in L1 English children fits the author’s intention of assessing English sound abilities in older Japanese children at the novice level of English in an EFL setting. 104 students in Grade 5 (aged 10 to 11) at an elementary school in Tokyo participated in this study. Four tests measuring perceptive ability and three oral repetition tests measuring productive ability were conducted, with and without reference to lexical representation. All test items were analysed to calculate item facility (IF) indices, and correlational analyses and Structural Equation Modeling (SEM) were conducted to examine the relationship between receptive ability and productive ability.
IF analysis showed that (1) the participants were better at perceiving segments than producing them, (2) they had difficulty in the auditory discrimination of paired consonants when one member of the pair does not exist in the Japanese inventory, (3) they had difficulty in both perceiving and producing English vowels, and (4) their L1 loanword knowledge influenced their ability to perceive and produce L2 sounds. Multiple regression modeling showed that the two production tests predicted the participants’ auditory ability with real English words. The SEM result supported the hypothesis that perceptive ability affects productive ability. Based on these findings, the author discusses possible explicit methods of teaching English segments to EFL older children in a formal school setting.
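The item facility (IF) index used in the analysis above is simply the proportion of test-takers who answer an item correctly. A minimal sketch, with an invented response matrix rather than the study’s actual data:

```python
import numpy as np

# Hypothetical response matrix: rows = test-takers, columns = test items,
# 1 = correct, 0 = incorrect. Values are invented for illustration only.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 1],
])

# Item facility is the proportion of correct answers per item:
# values near 1 indicate easy items, values near 0 difficult ones.
item_facility = responses.mean(axis=0)

print(item_facility)  # one IF value per item
```

An item with IF near 0 (here, the third column) would flag a segment the participants find hard to perceive or produce.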

Keywords: EFL older children, English segments, perception, production, speech processing system

Procedia PDF Downloads 232
184 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications include the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible: when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some form of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply the technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to a specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J with respect to u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities that limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. To improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add to J an additional term (regularizer), proportional to a high-order Laplacian of u0, which damps the gradients of u0. We show that suitable values for the segment size and the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
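The core idea of the AGM — replacing an unstable backward integration by repeated stable forward integrations along the gradient of J — can be illustrated on a toy 1-D diffusion problem. This is only a sketch of the general technique, not the paper’s NSE solver; the grid size, step counts, and learning rate below are illustrative assumptions:

```python
import numpy as np

# Toy 1-D periodic diffusion: u_t = nu * u_xx, explicit scheme.
# Integrating diffusion backwards in time is unstable, so instead we
# recover u0 by gradient descent on J(u0) = 0.5 * ||F(u0) - v1||^2,
# where F is the forward map.
nx, nt, nu_dt = 64, 50, 0.2        # grid points, time steps, nu*dt/dx^2

def step(u):
    # one explicit diffusion step with periodic boundaries
    return u + nu_dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def forward(u0):
    u = u0.copy()
    for _ in range(nt):
        u = step(u)
    return u

x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
true_u0 = np.sin(x) + 0.5 * np.sin(3 * x)
v1 = forward(true_u0)              # target final field

# Gradient of J w.r.t. u0 is F^T(F(u0) - v1); since each linear step here
# is symmetric, the adjoint of F equals F itself, so every gradient
# evaluation is a *stable* forward-in-time integration, as in the AGM.
u0 = np.zeros(nx)
lr = 1.0
for _ in range(500):
    residual = forward(u0) - v1
    u0 -= lr * forward(residual)

print(np.max(np.abs(forward(u0) - v1)))  # final-field mismatch
```

For the full NSE the adjoint equations must be derived and integrated separately, and (as the abstract notes) turbulence introduces local minima that this linear toy problem does not exhibit.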

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two-dimensional turbulence

Procedia PDF Downloads 211
183 A Numerical Study for Improving the Performance of a Vertical Axis Wind Turbine by a Wind Power Tower

Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam

Abstract:

Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates the output power and makes it difficult to predict the turbine's performance correctly. In order to improve the performance of a VAWT, wind power towers can be applied. Usually, a wind power tower is constructed as a multi-story building to increase the frontal area presented to the wind stream. Hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the wind power tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction so as to create better working conditions for the VAWT. Hence, important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the wind power tower, and the direction of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum values of the design variables using computational fluid dynamics (CFD), which is a more accurate prediction method than the stream-tube methods. To obtain accurate CFD results, transient analysis and full three-dimensional (3-D) computation are needed.
However, full 3-D CFD is hard to use as a practical tool because it requires huge computation times. Therefore, a reduced computational domain was applied as a practical alternative. In this study, computations were conducted in the reduced computational domain and compared with experimental results in the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method could be effective in a design methodology using an optimization algorithm. After validation of the numerical method, CFD of the wind power tower was conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without the wind power tower. In addition, the increase in output power depended greatly on the dimensions of the guide wall.

Keywords: CFD, performance, VAWT, wind power tower

Procedia PDF Downloads 374
182 From Distance to Contestation: New Dimensions of Women’s Attitudes in Poland Towards Religion and the Church

Authors: Remi Szauer

Abstract:

Introduction, Background, and Importance of the Study: For many years, religiosity in Poland remained at a stable level of religious practice. When symptoms of secularization and privatization processes appeared in Poland, they were not clearly felt; rather, they related to the decline in compulsory practices carried out in public, the growing distance of respondents from Catholic ethics, and the lack of acceptance of the Church's intervention in legislation and policy. The basic indicators observed over the years maintained the familiar picture: more religious women, less religious men. In carrying out the author's own research on religious and moral attitudes in 2019-2021, it was noticed that a reversal of this long-preserved trend could be observed. The data showed that women under 40 differ radically in their responses from older women, especially those over 50, in terms of practices, ties with the Church, and many more specific aspects. This became the basis for a careful examination of the responses of women in the under-40 age cohorts. This study is significant because it shows completely new perspectives on women's perception of religiosity and clearly reveals aspects of social change mapped in the minds of the surveyed women. Research Methodology: The original survey was carried out using a quantitative method among 2,346 respondents in northern Poland, 1,349 of whom were women. The findings led to a deeper investigation of the beliefs of women under 40 compared with other age cohorts of women. Hence, studies were carried out on the general population of women in Poland, which constituted a comparative sample. These were panel studies. The sample of women was selected randomly, respecting age quotas, so that the two statistical groups could be compared.
The designated research parameters included declarations of religious faith, declarations of religious practice, bond with the Church, acceptance of Mariological dogmas, attitude towards the image of women in the Church, and acceptance of selected issues in Catholic ethics. Main Research Findings: Among women under 40, the decline in declarations concerns not only compulsory public practices but also private practices and declarations of religious faith, and it is more pronounced. Not only is the range of indifferent religious attitudes increasing, but so are attitudes directly declaring religious disbelief, for which there are important justifications. Women under 40 strongly distance themselves from the institutions of the Church and from accepting Mariological dogmas. Moreover, they note that the image of women is marked by stereotyping, which favours the intensification of violence against women and disregards their potential and agency. Concluding Statement: Analysing the answers of the female respondents and the data obtained in the research, a re-evaluation of women's beliefs can be observed, which opens a perspective for analysing the role of religion and the Church in Poland as well as religious socialization.

Keywords: religiosity, morality, gender, feminism, social change

Procedia PDF Downloads 93
181 Learning from TikTok Food Pranks to Promote Food Saving Among Adolescents

Authors: Xuan (Iris) Li, Jenny Zhengye Hou, Greg Hearn

Abstract:

Food waste is a global issue, with an estimated 30% to 50% of the food produced never being consumed. It is therefore vital to reduce food waste and convert wasted food into recyclable outputs. TikTok provides a simple way of creating and duetting videos in just a few steps, using templates with the same sound/vision/caption effects to produce personalized content (a 'duet'), which makes it revealing to study whether TikTok encourages wasting food or saving it. The research examines food-related content on TikTok, with particular attention to two distinct themes, food-waste pranks and food-saving practices, to understand the potential impacts of these themes on adolescents and their attitudes toward sustainable food consumption practices. Specifically, the analysis explores how TikTok content related to food waste and/or food saving may contribute to the normalization and promotion of either positive or negative food behaviours among young viewers. The research employed content analysis and semi-structured interviews to understand what factors contribute to the difference in popularity between food-prank and food-saving videos, and how insights from the former can be applied to the latter to increase their communication effectiveness. The first category of food content under examination pertains to food waste, including videos featuring pranks and mukbang. These forms of content have the potential to normalize or even encourage food-waste behaviours among adolescents, exacerbating an already significant problem. The second category relates to food saving, for example, videos teaching viewers how to maximize the use of food to reduce waste. This type of content can empower adolescents to act against food waste and foster positive, sustainable food practices in their communities.
The initial findings suggest that prank-related TikTok content is more popular among viewers than content focused on teaching people how to save food, and that prank videos gain followers at a faster rate than content promoting more sustainable food practices. However, we argue there is great potential for social media platforms like TikTok to play an educative role in promoting positive behaviour change among young people by sharing engaging content suited to target audiences. This research is the first to investigate the potential utility of TikTok in food waste reduction and underscores the important role social media platforms can play in promoting sustainable food practices. The findings will help governments, organizations, and communities design tailored and effective interventions to reduce food waste and help achieve the United Nations' Sustainable Development Goal of halving food waste by 2030.

Keywords: food waste reduction, behaviour, social media, TikTok, adolescents

Procedia PDF Downloads 68
180 A Delphi Study to Build Consensus for a Tuberculosis Control Guideline to Achieve the WHO End TB 2035 Strategy

Authors: Pui Hong Chung, Cyrus Leung, Jun Li, Kin On Kwok, Ek Yeoh

Abstract:

Introduction: Studies of TB control in intermediate tuberculosis-burden countries (IBCs) comprise a relatively small proportion of the TB control literature compared with the effort devoted to their high- and low-burden counterparts. There is currently a lack of consensus on the optimal tools and strategies for combating TB in IBCs; guidelines for TB control are inadequate and thus pose a great obstacle to eliminating TB in these countries. To fill in this research and services gap, we need to summarize the findings of work in this area and to seek consensus in policy making for TB control; we have devised a series of scoping and Delphi studies for these purposes. Method: The scoping and Delphi studies are conducted in parallel, each feeding information to the other. Before the Delphi iterations, we invited three local experts in TB control in Hong Kong to participate in the pre-assessment round of the Delphi study, commenting on the validity, relevance, and clarity of the Delphi questionnaire. Result: Two scoping studies have been conducted, one on LTBI control among health care workers in IBCs and one on TB control in the elderly of IBCs. The results of these two studies formed the foundation for developing the Delphi questionnaire, which taps seven areas: characteristics of IBCs; adequacy of research and services in LTBI control in IBCs; importance and feasibility of interventions for TB control and prevention in hospitals; screening and treatment of LTBI in the community; reasons for refusal of, or default from, LTBI treatment; medical adherence to LTBI treatment; and importance and feasibility of interventions for TB control and prevention in the elderly in IBCs.
The local experts also commented on the two scoping studies, thus acting as the sixth phase of expert consultation in the Arksey and O'Malley framework for scoping studies, either to enrich the scope and strategies used in these studies or to supply ideas for further scoping or systematic review studies. In the subsequent stage, an international expert panel of 15 to 20 experts from IBCs in the Western Pacific Region will be recruited to join two rounds of anonymous Delphi iterations. Four categories of TB control experts will be targeted: clinicians, policy makers, microbiologists/laboratory personnel, and public health clinicians. A consensus level of 80% is used to determine whether consensus has been achieved on a particular issue. Key messages: 1. Scoping reviews and the Delphi method are useful for identifying gaps and then achieving consensus in research. 2. Many resources are currently devoted to high-burden countries; however, the usually neglected intermediate-burden countries are an indispensable part of achieving the ambitious WHO End TB 2035 target.
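The 80% consensus rule described above reduces to a simple per-item agreement check. A minimal sketch — the statements and expert ratings below are invented for illustration, not taken from the study:

```python
# Sketch of the 80% consensus rule used in Delphi iterations.
# Each expert rates a statement; here ratings are binary
# (1 = agree, 0 = disagree) and the panel data are hypothetical.
panel = {
    "Statement A (hypothetical)": [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    "Statement B (hypothetical)": [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
}

CONSENSUS_LEVEL = 0.80  # threshold stated in the abstract

def has_consensus(ratings, threshold=CONSENSUS_LEVEL):
    # consensus is reached when the share of agreeing experts
    # meets or exceeds the threshold
    return sum(ratings) / len(ratings) >= threshold

for statement, ratings in panel.items():
    print(statement, has_consensus(ratings))
```

Items failing the check would be revised and re-rated in the next Delphi round.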

Keywords: Delphi questionnaire, tuberculosis, WHO, latent TB infection

Procedia PDF Downloads 284
179 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises is: how could language be taught for the first time by a non-speaker, i.e., by someone who did not have the opportunity to master it as a child? Yet the paradox appears less intractable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique used to develop learners’ mimetic skills. Its communicative and expressive properties could have been discovered and exploited later, upon the learners’ reaching adulthood. The importance of mimesis in children’s development is universally recognised. Its most common forms are onomatopoeia and mime, which consist in reproducing the sounds and imitating the shapes/movements of externally observed objects. However, in some cases neither of these exercises is adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproducible shape, while its movements may depend on the specific way we interact with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue, which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject’s relatively slow approach to the object (or vice versa), while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing.
From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue, but also as a mnemonic sequence containing encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small the community might be at first), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is beyond the scope of this paper.) At its very beginning, this word has two major applications. It can be used for interhuman communication, because it allows us to invoke the presence of a currently absent object. It can also be used for designating, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental (‘spiritual’) dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism, arguably the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 65
178 Climate Change, Women's Labour Markets and Domestic Work in Mexico

Authors: Luis Enrique Escalante Ochoa

Abstract:

This paper assesses the impacts of climate change (CC) on inequalities in the labour market. CC will have the most serious effects on certain vulnerable economic sectors, such as agriculture, livestock, and tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market, and particularly on Mexican women. Influential documents such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014 revived the global effort to counteract the effects of CC, called for analysis of the impacts on vulnerable socio-economic groups and on economic activities, and called for decision-making tools that support policy and other decisions reflecting the complexity of the world in relation to climate change, taking socio-economic attributes into account. We follow up this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes (gender and workers' level of qualification). Most studies have focused on the effects of CC on the agricultural sector, as it is considered highly vulnerable to climate variability. This research seeks to contribute to the existing literature by taking into account, in addition to the agricultural sector, other sectors such as tourism, water availability, and energy that are of vital importance to the Mexican economy. Likewise, the analysis of the effects of climate change is extended to the labour market, and specifically to women, who have in some cases been left out. Studies are often sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which is too often omitted from analyses. This work contributes to the literature by integrating domestic work, which in Mexico is performed far more by women than by men (80.9% vs.
19.1%), according to the 2009 time-use survey. This study is relevant because it allows us to analyse the impacts of climate change not only on the labour market of the formal economy but also in the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy, as Mexico is a country with a high degree of gender inequality in the labour market. The OECD economic survey for Mexico (2017) highlights the low labour-force participation of Mexican women. Although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared with the OECD average, where around 70% of women participate in the labour market. According to Mexico's 2009 time-use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them, is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.

Keywords: climate change, labour market, domestic work, rural sector

Procedia PDF Downloads 121
177 Trade in Value Added: The Case of the Central and Eastern European Countries

Authors: Łukasz Ambroziak

Abstract:

Although the impact of production fragmentation on trade flows has been examined many times since the 1990s, the research was not comprehensive because of limitations in traditional trade statistics. In the early 2010s, complex databases containing world input-output tables (or indicators calculated on their basis) became available, increasing the possibilities for examining production sharing in the world. Trade statistics in value-added terms enable us to better estimate trade changes resulting from internationalisation and globalisation, as well as the benefits countries derive from international trade. There are many research studies on this topic in the literature. Unfortunately, the trade in value added of the Central and Eastern European Countries (CEECs) has so far been insufficiently studied. Thus, the aim of this paper is to present changes in the value-added trade of the CEECs (Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia) over the period 1995-2011. The concept of 'trade in value added' or 'value added trade' is defined as the value added of a country which is directly and indirectly embodied in the final consumption of another country. The typical question is: 'How much value added is created in a country due to final consumption in other countries?' The data are downloaded from the World Input-Output Database (WIOD). The structure of the paper is as follows. First, theoretical and methodological aspects of applying input-output tables to trade analysis are studied. Second, a brief survey of the empirical literature on the topic is presented. Third, changes in the value-added exports and imports of the CEECs are analysed. Special attention is paid to the differences in bilateral trade balances when using traditional trade statistics (in gross terms) on the one hand and value-added statistics on the other.
Next, in order to identify factors influencing the value-added exports and imports of the CEECs, a generalised gravity model based on panel data is used. The dependent variables are value-added exports and imports. The independent variables include the GDP of the trading partners, their GDP per capita, the differences in GDP per capita, the inward FDI stock, geographical distance, the existence (or not) of a common border, and membership (or not) in preferential trade agreements or in the EU. For comparison, an estimation is also made based on exports and imports in gross terms. The initial results show that the gravity model explains the determinants of trade in value added better than those of gross trade (R² is higher in the former). The independent variables had the same direction of impact on both value-added exports/imports and gross exports/imports; only the values of the coefficients differ. The largest difference concerned geographical distance, which had a smaller impact on trade in value added than on gross trade.
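The log-linearized gravity estimation underlying this kind of analysis can be sketched with synthetic data. This is a pooled OLS toy example, not the paper's panel specification, and the coefficient values and variable set are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic country pairs

# Synthetic gravity data: log exports depend positively on both partners'
# GDP and negatively on distance, as in the standard gravity specification.
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(7, 0.5, n)
noise = rng.normal(0, 0.1, n)

# "True" elasticities used to generate the data (illustrative only)
log_exports = 1.0 * log_gdp_i + 0.8 * log_gdp_j - 1.2 * log_dist + noise

# OLS on the log-linearized gravity equation:
# log X_ij = b0 + b1 log GDP_i + b2 log GDP_j + b3 log dist_ij
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_exports, rcond=None)
print(beta)  # intercept and the three estimated elasticities
```

The paper's comparison — running the same specification once with value-added trade and once with gross trade as the dependent variable — amounts to re-estimating `beta` on a different left-hand side and comparing the coefficients and R².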

Keywords: central and eastern European countries, gravity model, input-output tables, trade in value added

Procedia PDF Downloads 231
176 The Impact of the Virtual Learning Environment on Teacher's Pedagogy and Student's Learning in a Primary School Setting

Authors: Noor Ashikin Omar

Abstract:

The rapid growth and advancement of information and communication technology (ICT) on the global scene has greatly influenced and revolutionised interaction within society. The use of ICT has become second nature in managing everyday life, particularly in the educational environment. Traditional learning methods using blackboards and chalk have been largely superseded by ICT devices such as interactive whiteboards and computers in schools. This paper explores the impacts of virtual learning environments (VLEs) on teachers' pedagogy and students' learning in primary school settings. The research was conducted in two phases. Phase one comprised a short interview with the schools' senior assistants to examine issues and challenges faced during the planning and implementation of FrogVLE in their respective schools. Phase two involved a survey, with questionnaires directed at three major stakeholder groups: teachers, students, and parents. The survey explored teachers' and students' perspectives and attitudes towards the use of the VLE as a teaching and learning medium and as a learning experience as a whole. In addition, the survey of parents provided insights into how they feel about the use of the VLE for their child's learning. Collectively, the two phases enabled an improved understanding of, and provided observations on, the factors that affected the implementation of the VLE in primary schools. This study offers the voices of students, which are frequently omitted when addressing innovations, as well as those of teachers, who may not always be heard. It is also significant in addressing the importance of teachers' pedagogy for students' learning and its effects, to enable more effective ICT integration with a student-centred approach. Finally, parental perceptions of the implementation of the VLE in supporting their children's learning have been implicated as having a bearing on educational achievement.
The results indicate that all three stakeholder groups were positive about, and highly supportive of, the use of the VLE in schools. They understood the benefits of moving towards modern methods of teaching using ICT and accepted the change in the education system. However, factors such as the condition of ICT facilities at school and at home, as well as inadequate professional development for teachers in both ICT skills and management skills, hindered exploitation of the VLE system and full utilisation of its benefits. Social influences within different communities and cultures, and the costs of using the technology, also had a significant impact. The findings of this study are important to the Malaysian Ministry of Education because they inform policy makers about the impact of the virtual learning environment (VLE) on the pedagogy of teachers and the learning of Malaysian primary school children. The information provided allows policy makers to make sound judgements and enables informed decision-making.

Keywords: attitudes towards virtual learning environment (VLE), parental perception, student's learning, teacher's pedagogy

Procedia PDF Downloads 198
175 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created in the course of routine medical care. The healing of these injuries is typically monitored by visual inspection. The maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are negatively influenced by various factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing conditions is burdensome for both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from greater penetration depths. Therefore, the molecular and structural mechanisms of tissue repair are readily observable with PAI, both in the superficial layer and in deep tissue regions. Blood vessel formation and growth is an essential component of tissue repair. These vessels supply nutrition and oxygen to cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair. The quality of tissue healing depends directly on angiogenesis. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capability.
It is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, the early angiogenesis detection afforded by PAI supports monitoring of the medical treatment of the wound. Along with functional properties, mechanical properties also play a key role in tissue regeneration. The wound heals through a dynamic series of physiological events such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. The resulting changes in tissue elasticity can therefore be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters of tissue healing, and both can be characterized within a single imaging modality (PAI).

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 91
174 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta Del Ebro, Spain

Authors: Alba Ballester Ciuró, Marc Pares Franzi

Abstract:

While the coming decades are likely to see higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response, new approaches such as capacity building and public participation have recently been incorporated into natural hazards mitigation policy (i.e. the Sendai Framework for Action, Intergovernmental Panel on Climate Change reports and the EU Floods Directive). Integrating capacity building and public participation, we present research concerning the promotion of participatory social capacity building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at the individual and collective level that can be used to anticipate, respond to, cope with, recover from and adapt to external stressors. Social capacity building is understood as a process of identifying communities' social capacities and of applying collaborative strategies to improve them. This paper presents a proposed systematization of the participatory social capacity building process for flood risk mitigation, and its implementation in an area at high risk of flooding in the Ebro river basin: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation and finance. Implementing the tool has allowed us to assess social capacities in the area. Based on the results of the assessment, we developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation.
Based on the results of this process, and focusing on the weaker social capacities, we developed a set of participatory actions in the area oriented to the general public and stakeholders: informative sessions on the flood risk management plan and flood insurance, interpretative river descents on flood risk management (with journalists, teachers, and the general public), an interpretative visit to the floodplain, a workshop on agricultural insurance, a deliberative workshop on project funding, and deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, together with action research through co-decision processes and pilot participatory activities, shows the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of the main factors that intervene in this process.

Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment

Procedia PDF Downloads 199
173 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in South Central Part of Okinawa Pref

Authors: Kazuki Kohama, Hiroko Ono

Abstract:

Rainfall in Japan is gradually increasing every year according to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report. This means that the difference in rainfall between the rainy season and the rest of the year is increasing. In addition, a clear increasing trend in short-duration heavy rain has appeared. In recent years, natural disasters have caused enormous human injuries in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural and non-structural emergency disaster prevention measures under a vision of water disaster prevention awareness and social reconstruction. Okinawa prefecture, in a subtropical region, experiences torrential rain and water disasters such as river floods several times a year, caused by specific rivers among its 97 rivers in total. Rivers in Okinawa are also characterized by limited capacity and narrow width, which easily leads to river flooding in heavy rain. This study focuses on the Kohatu River, which is one of these specific rivers. In fact, the water level rises well above the river levee almost once a year, but without damage to the surrounding buildings. In some cases, however, the water level has reached the ground floor height of houses; this has happened nine times to date. The purpose of this research is to clarify the relationship between precipitation, surface outflow and the total treated water quantity of the Kohatu River. Because full hydrological analysis is complicated and requires detailed data, the method mainly uses Geographic Information System (GIS) software and an outflow analysis system. First, we extracted the watershed and divided it into 23 catchment areas to understand how much surface outflow reaches the runoff point in each 10-minute interval. Second, we created a unit hydrograph indicating the surface outflow as a function of flow area and time.
This index shows that the maximum surface outflow occurs at 2400 to 3000 seconds. Lastly, we compared the values estimated from the unit hydrograph to measured values. We found that the measured value is usually lower than the estimated value because of evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. Based on these, we could clarify the flood timing and the amount of surface outflow.
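The unit-hydrograph step described above is, at its core, a discrete convolution of the effective-rainfall series with the unit-hydrograph ordinates (the runoff response to one unit of rainfall per time step). A minimal sketch in Python, using hypothetical ordinates and rainfall values rather than data from the Kohatu River study:

```python
def convolve_runoff(rainfall, unit_hydrograph):
    """Discrete convolution of effective rainfall with unit-hydrograph
    ordinates: outflow[t] = sum over k of rainfall[k] * uh[t - k]."""
    n = len(rainfall) + len(unit_hydrograph) - 1
    outflow = [0.0] * n
    for k, r in enumerate(rainfall):
        for j, u in enumerate(unit_hydrograph):
            outflow[k + j] += r * u
    return outflow

# Hypothetical 10-minute unit hydrograph (response per mm of rain);
# ordinates sum to 1.0 so total runoff volume is conserved.
uh = [0.1, 0.4, 0.3, 0.15, 0.05]
rain = [2.0, 5.0, 1.0]   # hypothetical rainfall per 10-minute step (mm)
q = convolve_runoff(rain, uh)
print(q)
```

Because the ordinates sum to one, the total of the outflow series equals the total rainfall; losses such as the evaporation and transpiration noted above would make a measured hydrograph fall below this estimate.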

Keywords: disaster prevention, water disaster, river flood, GIS software

Procedia PDF Downloads 129
172 Semantic Differential Technique as a Kansei Engineering Tool to Enquire Public Space Design Requirements: The Case of Parks in Tehran

Authors: Nasser Koleini Mamaghani, Sara Mostowfi

Abstract:

The complexity of public space design makes it difficult for designers to simultaneously consider all issues for thorough decision-making. Among public spaces, the public space around people's homes is the most prominent space that affects and impacts people's daily life. Considering recreational public spaces in cities, their main purpose is to provide experiences that enable a deep feeling of peace and a moment away from hectic daily life. Respecting human emotions and restoring natural environments, although difficult and to some extent out of reach, are key issues in designing such spaces. In this paper we propose to analyse the structure of recreational public spaces and the related emotional impressions, and we suggest investigating how these structures influence people's choice of public spaces by using semantic differentials. According to Kansei methodology, in order to evaluate a situation appropriately, the assessment variables must be adapted to the user's mental scheme. This means that the first step has to be the identification of a space's conceptual scheme. In our case study, 32 Kansei words and 4 different locations, each offering a different sensory experience, were selected. The 4 locations were all parks in the city of Tehran (Iran), each with a unique combination and structure of environmental and artificial elements such as fountains, lighting, sculptures, and music (sound). The first was park No. 1, a park with a natural environment; the selected space was a fountain with motion lighting and a sculpture. The second was park No. 2, which features construction styles from different countries; the selected space was traditional Iranian architecture with a fountain and trees.
The third was park No. 3, a park with a modern environment and spaces; it included a fountain that moved according to music and lighting. The fourth was park No. 4, a park combining the four elements of water, fire, earth and wind; the selected space was fountains squirting water from the ground up. 80 participants (55 males and 25 females) aged 20-60 years took part in this experiment. Each person filled in the questionnaire in the park he or she was in. A five-point semantic differential scale was used to determine the relation between space details and adjectives (Kansei words). The data received were analyzed by a multivariate statistical technique (factor analysis using SPSS Statistics). Finally, the results of this analysis provide criteria that can serve as inspiration in future space design for creating pleasant feelings in users.
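The raw material for a semantic differential analysis of this kind is a matrix of five-point ratings (respondents by Kansei words), from which correlations between adjectives are computed before factors are extracted. A minimal sketch in Python with invented ratings, not the study's data; the factor extraction itself, done in SPSS in the study, is omitted:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two rating vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical 5-point ratings from six respondents for two Kansei
# words at one park (invented for illustration only).
calm    = [4, 5, 3, 4, 5, 4]
natural = [4, 4, 3, 5, 5, 4]
r = pearson(calm, natural)
print(round(r, 2))
```

Adjective pairs that correlate strongly across respondents would load on the same factor in the subsequent factor analysis; the full study does this over all 32 Kansei words and 4 locations.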

Keywords: environmental design, differential semantics, Kansei engineering, subjective preferences, space

Procedia PDF Downloads 394
171 Redesigning Clinical and Nursing Informatics Capstones

Authors: Sue S. Feldman

Abstract:

As clinical and nursing informatics mature, an area that has received much attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: a single course over a single semester, multiple courses over multiple semesters, as a targeted demonstration of skills, as a synthesis of prior knowledge and skills, mentored by a single person or by various people, submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and because they serve as a mechanism for the application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing Clinical and Nursing Informatics discipline. While the program had had a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing Informatics field. Since the program is online, all capstones were always in the online environment. While this modality did not change, other aspects of instruction did. Pre-2016, the instruction modality was self-guided: students checked in with a single instructor, who monitored progress across all capstones toward a PowerPoint and written paper deliverable. At the time, enrollment was low, and the field's maturity had not yet pushed hard enough.
By 2017, doubling enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a 3-course sequence (meaning it lasted about 10 months instead of 14 weeks), deliverables were broken into many chunks, and each faculty member advised a cadre of about 5 students through the capstone process. Literature suggests that chunking, i.e., breaking complex projects (the capstone in one summer) into smaller, more manageable pieces (chunks of the capstone across 3 semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across faculty, with each semester's course taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do. It not only shared the load but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.

Keywords: capstones, clinical informatics, health informatics, informatics

Procedia PDF Downloads 119
170 Educational Leadership Preparation Program Review of Employer Satisfaction

Authors: Glenn Koonce

Abstract:

There is a need to address the improvement of university educational leadership preparation programs through the processes of accreditation and continuous improvement. The program faculty at a university in the eastern part of the United States has incorporated an employer satisfaction focus group to address its national accreditation standard, which requires that employers be satisfied with completers' preparation for the position of principal or assistant principal. Using the proficiencies required by the Council for the Accreditation of Educator Preparation (CAEP), the following research questions are investigated: 1) on which proficiencies do completers perform most strongly? 2) which proficiencies need to be strengthened? 3) what strengths beyond the required proficiencies do completers demonstrate? 4) what other areas of responsibility beyond the required proficiencies do completers take on? and 5) how can the program improve in preparing candidates for their positions? This study focuses on employers in one public school district that employs a large number of educational leadership completers as principals and assistant principals. Central office directors who evaluate principals and principals who evaluate assistant principals are the focus group participants. The focus group questions were constructed from the recommendations of an accreditation regulatory specialist, reviewed by an expert panel, and piloted by an experienced focus group leader. The focus group session was audio recorded, transcribed, and analyzed using the NVivo Version 14 software. After constructing folders in NVivo, the focus group transcript was loaded and skimmed by identifying significant statements and assessing core ideas for developing primary themes. These themes were aligned to address the research questions. From the transcript, codes were assigned to the themes, and NVivo provided a coding hierarchy chart, a graphical illustration for framing the coding.
A final report of the coding process was designed using the primary themes and the pertinent codes, supported by excerpts from the transcript. The outcome of this study is to identify themes that provide evidence that the educational leadership program is meeting its mission to improve PreK-12 student achievement through well-prepared completers who have attained the position of principal or assistant principal. The findings will be used to derive a composite profile of employers' satisfaction with program completers' capacity to serve, influence, and thrive as educational leaders. Analysis of the identified themes will surface issues that may challenge university educational leadership programs to improve. Results, conclusions, and recommendations are used for continuous improvement, which is another national accreditation standard required of the program.

Keywords: educational leadership preparation, CAEP accreditation, principal & assistant principal evaluations, continuous improvement

Procedia PDF Downloads 10
169 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites

Authors: A. Kavita Murugkar, B. Anurag Kashyap

Abstract:

With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible to people with disabilities. However, recent access audit surveys done under the Accessible India Campaign by the Ministry of Culture indicate that very few accessibility measures are provided at heritage sites for people with disabilities. Though there are some measures for the mobility impaired, the surveys revealed that there are almost no provisions for people with vision impairment (PwVI) at heritage sites, depriving them of the reasonable physical and intellectual access that facilitates an enjoyable experience and an enriching interpretation of the site. There is a growing need to develop multisensory interpretative tools that can help PwVI perceive heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI at a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as its size, scale, materials and details, and the information it should carry, such as braille, audio output and tactile text. In the second phase, a working prototype of an audio-tactile model was designed and developed for a heritage site, based on the findings of the first phase. A nationally listed heritage site in the author's city was selected for making the model. Lastly, the model was tested by visually impaired users for final refinements and validation.
The prototype developed empowers people with vision impairment to navigate independently in heritage sites. If installed in every heritage site, such a model can serve as a technological guide for persons with vision impairment, giving information on the architecture, details, planning and scale of the buildings, the entrances, and the location of important features, lifts, staircases and available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise "incomplete" perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience at heritage sites.

Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience

Procedia PDF Downloads 97
168 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Polyurethane (PU) foam is typically sold as acoustic foam that is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without seeping out of the test room floor. The lack of mass loss measurement means that gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes data comparison with any other material or test method difficult. Additionally, the heat release measurements are not representative of the actual fire because much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct, and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different configurations with the aim of altering the ventilation condition of the tests. Test one was conducted using one SBI test rig, aiming for well-ventilated flaming. Test two was conducted using two SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming.
The two configurations were successful in achieving both well-ventilated and under-ventilated flaming, as shown by the measured equivalence ratios (measured using a phi meter designed and built for these experiments). The findings show that doubling the fuel loading successfully forces under-ventilated flaming conditions. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other produced a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of simple gases such as oxygen depletion, CO and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions at large scale.
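The equivalence ratio reported by a phi meter compares the actual fuel-to-air ratio with the stoichiometric one; values below 1 indicate well-ventilated (fuel-lean) flaming and values above 1 under-ventilated (fuel-rich) flaming. A minimal sketch of the calculation, with hypothetical flow rates and an assumed stoichiometric ratio for a generic PU foam (not values from this study):

```python
def equivalence_ratio(fuel_rate, air_rate, stoich_fuel_air):
    """phi = (fuel/air actual) / (fuel/air stoichiometric).
    phi < 1: well-ventilated (fuel-lean); phi > 1: under-ventilated."""
    return (fuel_rate / air_rate) / stoich_fuel_air

# Hypothetical mass flow rates (g/s); the stoichiometric fuel/air mass
# ratio for a generic PU foam is assumed here for illustration only.
stoich = 0.105
print(equivalence_ratio(2.0, 40.0, stoich))   # single rig: fuel-lean
print(equivalence_ratio(4.0, 25.0, stoich))   # doubled fuel load: fuel-rich
```

Doubling the fuel supply while the room's air inflow stays roughly fixed pushes phi above 1, which is the mechanism by which the two-rig configuration forces under-ventilated flaming.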

Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity

Procedia PDF Downloads 63
167 A Measurement Instrument to Determine Curricula Competency of Licensure Track Graduate Psychotherapy Programs in the United States

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

We developed a novel measurement instrument to assess Knowledge of Educational Programs in Professional Psychotherapy Programs (KEP-PPP or KEP-Triple P) within the United States. The instrument was designed by a Panel of Experts (PoE) consisting of licensed psychotherapists and medical care providers. Licensure track psychotherapy programs are listed in the databases of the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE), the American Psychological Association (APA), the Council on Social Work Education (CSWE), and the Council for Accreditation of Counseling & Related Educational Programs (CACREP). A complete list of psychotherapy programs can be obtained from these professional databases by selecting the search fields (All Programs) in (All States). Each program has a Web link that connects directly to the institutional program, which can be researched using the KEP-Triple P. The 29-item KEP-Triple P was designed to consist of six categorical fields: Institutional Type, Degree, Educational Delivery, Accreditation, Coursework Competency, and Special Program Considerations. The KEP-Triple P was designed to determine whether a specific course(s) is offered in licensure track psychotherapy programs, and it can be modified to assess any part or all of the curriculum of licensure graduate programs. We utilized the KEP-Triple P instrument to study whether a graduate course in Addictions was offered in Marriage and Family Therapy (MFT) programs. Marriage and Family Therapists are likely to encounter patients with addictions due to their broad treatment scope, providing psychotherapy services to individuals, couples and families of all age groups. Our study of 124 MFT programs, which concluded at the end of 2016, found that we were able to assess 61 % of programs (N = 76), since 27 % (N = 34) of programs were inaccessible due to broken Web links.
Of all MFT programs, 11 % (N = 14) did not have a published curriculum on their institutional Web site. From the study sample, we found that 66 % (N = 50) of curricula did not offer a course in Addiction Treatment and that 34 % (N = 26) required a mandatory course in Addiction Treatment. We also determined that 15 % (N = 11) of MFT doctorate programs did not require an Addiction Treatment course and that 1 % (N = 1) did require such a course. We found that 99 % of our study sample offered a campus-based program and 1 % offered a hybrid program with both online and residential components. From the total sample studied, we determined that 84 % of programs would be able to obtain reaccreditation within a five-year period. We recommend that MFT programs initiate procedures to revise curricula to include a required course in Addiction Treatment prior to their next accreditation cycle, to help address the escalating addiction crisis in the United States. This disparity in MFT curricula raises serious ethical and legal considerations for national and federal stakeholders as well as for patients seeking a competently trained psychotherapist.

Keywords: addiction, competency, curriculum, psychotherapy

Procedia PDF Downloads 141
166 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is the better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
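The contrast drawn above can be made concrete: a conventional ground-motion model fits a pre-defined functional form, e.g. ln PGA = c0 + c1*M + c2*ln R, by least squares, whereas a Random Forest would learn the magnitude and distance dependence directly from the data. A sketch of the conventional fit on synthetic records; the form and coefficients are illustrative, not those of any published model:

```python
import math, random

def fit_gmpe(records):
    """Ordinary least squares for the pre-defined form
    ln(PGA) = c0 + c1*M + c2*ln(R), via the normal equations.
    A machine-learning model (e.g., Random Forest) would instead learn
    the shape of the M/R dependence from the data itself."""
    # Accumulate X^T X and X^T y for rows [1, M, ln R]
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for m, r, pga in records:
        row = [1.0, m, math.log(r)]
        y = math.log(pga)
        for i in range(3):
            xty[i] += row[i] * y
            for j in range(3):
                xtx[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gauss-Jordan elimination
    a = [xtx[i] + [xty[i]] for i in range(3)]
    for i in range(3):
        p = a[i][i]
        a[i] = [v / p for v in a[i]]
        for k in range(3):
            if k != i:
                f = a[k][i]
                a[k] = [vk - f * vi for vk, vi in zip(a[k], a[i])]
    return [a[i][3] for i in range(3)]

# Synthetic records (magnitude, distance in km, PGA in g) generated
# from hypothetical "true" coefficients (-4.0, 0.9, -1.1) plus noise.
random.seed(0)
data = [(m, r, math.exp(-4.0 + 0.9 * m - 1.1 * math.log(r)
                        + random.gauss(0, 0.05)))
        for m in (4.5, 5.5, 6.5, 7.5) for r in (10, 30, 60, 100)]
c0, c1, c2 = fit_gmpe(data)
print(round(c0, 2), round(c1, 2), round(c2, 2))
```

The fit can only recover behavior expressible in the assumed equation; saturation at large magnitudes or near-field distance effects outside that form are exactly what the tree-based and neural alternatives discussed above can capture without a prescribed equation.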

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 93
165 MiRNA Expression Profile is Different in Human Amniotic Mesenchymal Stem Cells Isolated from Obese with Respect to Normal Weight Women

Authors: Carmela Nardelli, Laura Iaffaldano, Valentina Capobianco, Antonietta Tafuto, Maddalena Ferrigno, Angela Capone, Giuseppe Maria Maruotti, Maddalena Raia, Rosa Di Noto, Luigi Del Vecchio, Pasquale Martinelli, Lucio Pastore, Lucia Sacchetti

Abstract:

Maternal obesity and nutrient excess in utero increase the risk of future metabolic diseases in adult life. The mechanisms underlying this process are probably based on genetic and epigenetic alterations and changes in foetal nutrient supply. In mammals, the placenta is the main interface between foetus and mother; it regulates intrauterine development, modulates adaptive responses to suboptimal in utero conditions, and is also an important source of human amniotic mesenchymal stem cells (hA-MSCs). We previously highlighted a specific microRNA (miRNA) profile in amnion from obese (Ob) pregnant women; here we compared the miRNA expression profile of hA-MSCs isolated from Ob and control (Co) women, aiming to search for any alterations in metabolic pathways that could predispose the newborn to the obese phenotype. Methods: We isolated, at delivery, hA-MSCs from the amnion of 16 Ob- and 7 Co-women with pre-pregnancy body mass index (mean/SEM) of 40.3/1.8 and 22.4/1.0 kg/m2, respectively. hA-MSCs were phenotyped by flow cytometry. Globally, 384 miRNAs were evaluated by the TaqMan Array Human MicroRNA Panel v 1.0 (Applied Biosystems). Using the TargetScan program, we selected the target genes of the miRNAs differentially expressed in Ob- vs Co-hA-MSCs; further, using the KEGG database, we selected the statistically significant biological pathways. Results: The immunophenotype characterization confirmed the mesenchymal origin of the isolated hA-MSCs. A large percentage of the tested miRNAs, about 61.4% (232/378), was expressed in hA-MSCs, whereas 38.6% (146/378) was not. Most of the expressed miRNAs (89.2%, 207/232) did not differ between Ob- and Co-hA-MSCs and were not further investigated. Conversely, 4.8% of miRNAs (11/232) were higher and 6.0% (14/232) were lower in Ob- vs Co-hA-MSCs. Interestingly, 7/232 miRNAs were obesity-specific, being expressed only in hA-MSCs isolated from obese women.
Bioinformatics showed that these miRNAs significantly regulated (P<0.001) genes belonging to several metabolic pathways, i.e. MAPK signalling, actin cytoskeleton, focal adhesion, axon guidance, insulin signaling, etc. Conclusions: Our preliminary data highlight an altered miRNA profile in Ob- vs Co-hA-MSCs and suggest that an epigenetic miRNA-based mechanism of gene regulation could affect pathways involved in placental growth and function, thereby potentially increasing the newborn’s risk of metabolic diseases in the adult life.

Keywords: hA-MSCs, obesity, miRNA, biosystem

Procedia PDF Downloads 517
164 Pond Site Diagnosis: Monoclonal Antibody-Based Farmer Level Tests to Detect the Acute Hepatopancreatic Necrosis Disease in Shrimp

Authors: B. T. Naveen Kumar, Anuj Tyagi, Niraj Kumar Singh, Visanu Boonyawiwat, A. H. Shanthanagouda, Orawan Boodde, K. M. Shankar, Prakash Patil, Shubhkaramjeet Kaur

Abstract:

Early mortality syndrome (EMS)/acute hepatopancreatic necrosis disease (AHPND) has emerged as a major obstacle for shrimp farming around the world. It is caused by a strain of Vibrio parahaemolyticus. The most feasible preventive and control measure is early, rapid detection of the pathogen in broodstock and post-larvae, and monitoring of the shrimp during the culture period. Polymerase chain reaction (PCR) based early detection methods are good, but they are costly and time-consuming, and require a sophisticated laboratory. The present study was conducted to develop a simple, sensitive, and rapid farmer-level diagnostic kit for the reliable detection of AHPND in shrimp. A panel of monoclonal antibodies (MAbs) was raised against the recombinant PirB protein (rPirB). First, an immunodot was developed using MAbs G3B8 and G3H2, which showed specific reactivity to purified rPirB protein with no cross-reactivity to other shrimp pathogens (AHPND-free Vibrio parahaemolyticus (Indian strains), V. anguillarum, WSSV, Aeromonas hydrophila, and Aphanomyces invadans). The immunodot developed using MAb G3B8 was more sensitive than that with MAb G3H2. However, the immunodot takes almost 2.5 hours to complete, with several hands-on steps. Therefore, a flow-through assay (FTA) was developed using a plastic cassette containing a nitrocellulose membrane with absorbent pads below. The sample was dotted in the test zone on the nitrocellulose membrane, followed by the continuous addition of five solutions in the order of i) blocking buffer (BSA), ii) primary antibody (MAb), iii) washing solution, iv) secondary antibody, and v) chromogen substrate (TMB). Clear purple dots against a white background were considered positive reactions. The FTA developed using MAb G3B8 was more sensitive than that with MAb G3H2. In the FTA, the two MAbs showed specific reactivity to purified rPirB protein and not to other shrimp bacterial pathogens.
The FTA is simple enough for farmer/field-level use, and is sensitive and rapid, requiring only 8-10 min for completion. The tests can be developed into kits, which would be ideal for use in biosecurity, for first-line screening (at the port or pond site), and during monitoring and surveillance programmes, supporting the good management practices that reduce the risk of the disease.

Keywords: acute hepatopancreatic necrosis disease, AHPND, flow-through assay, FTA, farmer level, immunodot, pond site, shrimp

Procedia PDF Downloads 163
163 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography

Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou

Abstract:

Modbus is a protocol that enables communication among devices connected to the same network. This protocol is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition, or SCADA, systems. These systems monitor critical infrastructures, such as factories, power generation stations, and nuclear power reactors, in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate because of their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography, and on developing a server/client application to demonstrate the proof of concept. For the implementation we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/).
The first library provides a C implementation of the Modbus/TCP protocol, while the second one offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to give a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checking, and encryption/decryption of data. The key generation and key exchange protocols were implemented with the use of Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained during the whole communication session, and used for encrypting and decrypting exchanged messages as well as for authenticating entities and verifying the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app. The client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer. Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling a gap in the relevant scientific literature.
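The key exchange step described above can be sketched in miniature. The following is an illustrative, stdlib-only Python sketch of Elliptic Curve Diffie-Hellman over the secp256k1 curve, not the authors' C implementation (which used libmodbus and ecc-lib); the choice of curve and the SHA-256 session-key derivation are assumptions made for illustration only.

```python
import hashlib
import secrets

# secp256k1 domain parameters (illustrative; ecc-lib supports other curves).
P = 2**256 - 2**32 - 977
A = 0
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ec_add(p1, p2):
    """Add two points on the curve (None represents the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Each endpoint (e.g., supervisor app and monitoring computer) generates a
# key pair and sends only its public point over the Modbus/TCP channel.
server_priv = secrets.randbelow(N - 1) + 1
client_priv = secrets.randbelow(N - 1) + 1
server_pub = ec_mul(server_priv, (Gx, Gy))
client_pub = ec_mul(client_priv, (Gx, Gy))

# Both sides arrive at the same shared point; hashing its x-coordinate
# yields a symmetric session key for encrypting Modbus payloads.
shared_s = ec_mul(server_priv, client_pub)
shared_c = ec_mul(client_priv, server_pub)
session_key = hashlib.sha256(shared_s[0].to_bytes(32, "big")).digest()
```

Private keys never cross the network; this is the property that lets the session key be retained for the whole communication session, as described above.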

Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol

Procedia PDF Downloads 248
162 Generating a Multiplex Sensing Platform for the Accurate Diagnosis of Sepsis

Authors: N. Demertzis, J. L. Bowen

Abstract:

Sepsis is a complex and rapidly evolving condition, resulting from uncontrolled, prolonged activation of the host immune system due to pathogenic insult. The aim of this study is the development of a multiplex electrochemical sensing platform capable of detecting both pathogen-associated and host immune markers, to enable the rapid and definitive diagnosis of sepsis. A combination of aptamer and molecular imprinting approaches has been employed to generate sensing systems for lipopolysaccharide (LPS), C-reactive protein (CRP) and procalcitonin (PCT). Gold working electrodes were mechanically polished and electrochemically cleaned with 0.1 M sulphuric acid using cyclic voltammetry (CV). Following activation, a self-assembled monolayer (SAM) was generated by incubating the electrodes with a thiolated anti-LPS aptamer / dithiodibutyric acid (DTBA) mixture (1:20). 3-aminophenylboronic acid (3-APBA) in combination with the anti-LPS aptamer was used for the development of the hybrid molecularly imprinted sensor (apta-MIP). Aptasensors targeting PCT and CRP were also fabricated, following the same approach as for LPS, with mercaptohexanol (MCH) replacing DTBA. In the case of the CRP aptasensor, the SAM was formed following incubation of a 1:1 aptamer:MCH mixture. However, in the case of PCT, the SAM was formed with the aptamer itself, with subsequent backfilling with 1 μM MCH. The binding performance of all systems has been evaluated using electrochemical impedance spectroscopy. The apta-MIP's polymer thickness is controlled by varying the number of electropolymerisation cycles. At the ideal number of polymerisation cycles, the polymer covers the electrode surface and creates a binding pocket around LPS and its aptamer binding site. Fewer polymerisation cycles create a hybrid system that resembles an aptasensor, while more cycles cover the complex and produce bulk polymer-like behaviour.
Both the aptasensor and the apta-MIP were challenged with LPS and compared to conventional imprinted polymers (aptamer absent from the binding site, polymer formed in the presence of LPS) and non-imprinted polymers (NIPs, LPS absent whilst the hybrid polymer is formed). A stable LPS aptasensor, capable of detecting down to 5 pg/ml of LPS, was generated. The apparent Kd of the system was estimated at 17 pM, with a Bmax of approximately 50 pM. The aptasensor demonstrated high specificity to LPS. The apta-MIP demonstrated superior recognition properties, with a limit of detection of 1 fg/ml and a Bmax of 100 pg/ml. The CRP and PCT aptasensors were both able to detect down to 5 pg/ml. Whilst the full binding performance is still being evaluated, none of the sensors demonstrates cross-reactivity towards LPS, CRP or PCT. In conclusion, stable aptasensors capable of detecting LPS, PCT and CRP at low concentrations have been generated. The realisation of a multiplex panel such as described herein will effectively contribute to the rapid, personalised diagnosis of sepsis.
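For context on the Kd and Bmax figures above: apparent values of this kind are conventionally read off a one-site (Langmuir) binding isotherm fitted to the response-vs-concentration curve. The sketch below is a hedged illustration of that model using the reported aptasensor values (Kd ≈ 17 pM, Bmax ≈ 50 pM), not the authors' fitting code.

```python
def one_site_binding(conc_pm, kd_pm=17.0, bmax_pm=50.0):
    """One-site (Langmuir) binding isotherm: the response rises
    hyperbolically with analyte concentration and saturates at Bmax;
    at C = Kd the response is exactly half of Bmax."""
    return bmax_pm * conc_pm / (kd_pm + conc_pm)

# At the apparent Kd the occupancy is half-maximal ...
half_max = one_site_binding(17.0)
# ... and at concentrations far above Kd the response approaches Bmax.
near_saturation = one_site_binding(17000.0)
```

This half-maximal property is what makes the apparent Kd directly readable from an impedance titration curve.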

Keywords: aptamer, electrochemical impedance spectroscopy, molecularly imprinted polymers, sepsis

Procedia PDF Downloads 118
161 Application of Acoustic Emissions Related to Drought Can Elicit Antioxidant Responses and Capsaicinoids Content in Chili Pepper Plants

Authors: Laura Helena Caicedo Lopez, Luis Miguel Contreras Medina, Ramon Gerardo Guevara Gonzales, Juan E. Andrade

Abstract:

In this study, we evaluated the effect of three different hydric stress conditions, low (LHS), medium (MHS), and high (HHS), on capsaicinoid content and enzyme regulation in C. annuum plants. Five main peaks were detected using a 2 Hz resolution laser vibrometer (Polytec-B&K). These peaks, or 'characteristic frequencies', were used as acoustic emission (AEs) treatments, transforming these signals into audible sound with the frequency (Hz) content of each hydric stress. Capsaicinoids (CAPs) are the main secondary metabolites of chili pepper plants and are known to increase during hydric stress conditions or short drought periods. The AEs treatments were applied at two plant stages. The first was the pre-anthesis stage, to evaluate the genes that encode enzymes responsible for diverse metabolic activities of C. annuum plants: for example, the antioxidant responses such as peroxidase (POD) and superoxide dismutase (Mn-SOD); phenylalanine ammonia-lyase (PAL), involved in the biosynthesis of phenylpropanoid compounds; chalcone synthase (CHS), related to natural defense mechanisms; and the species-specific aquaporin (CAPIP-1), which regulates the flow of water into and out of cells. The second stage was at 40 days after flowering (DAF), to evaluate the biochemical effect of AEs related to hydric stress on capsaicinoid production. These two experiments were conducted to identify the molecular responses of C. annuum plants to AEs, and to determine whether AEs could elicit any increase in capsaicinoid content after one week of exposure to AEs treatments. The results show that all AEs treatment signals (LHS, MHS, and HHS) produced responses significantly different from the non-acoustic-emission control (NAE). The AEs also induced up-regulation of POD (~2.8, 2.9, and 3.6, respectively). The gene expression of the other antioxidant responses was strongly treatment-dependent: the HHS induced an overexpression of Mn-SOD (~0.23) and PAL (~0.33).
Likewise, only the MHS induced an up-regulation of the CHS gene (~0.63). On the other hand, the CAPIP-1 gene was down-regulated by all AEs treatments, LHS, MHS, and HHS ~ (-2.4, -0.43, and -6.4, respectively). The down-regulation also showed particularities depending on the treatment: LHS and MHS induced down-regulation of the SOD gene ~ (-1.26 and -1.20, respectively) and of PAL (-4.36 and 2.05, respectively). Correspondingly, the LHS and HHS showed the same tendency in the CHS gene ~ (-1.12 and -1.02, respectively). Regarding the elicitation effect of AEs on capsaicinoid content, additional treatment controls were included: a white noise treatment (WN), to prove the frequency-selectiveness of the signals, and a hydric-stressed group (HS), to compare the CAPs content. Our findings suggest that WN and NAE did not differ statistically. Conversely, HS and all AEs treatments induced a significant increase of capsaicin (Cap) and dihydrocapsaicin (Dcap) after one week of treatment. Specifically, the HS plants showed an increase of 8.33 times compared to the NAE and WN treatments, and 1.4 times higher than the MHS, which was the AEs treatment with the largest induction of capsaicinoids among treatments (5.88 times compared to the controls).

Keywords: acoustic emission, capsaicinoids, elicitors, hydric stress, plant signaling

Procedia PDF Downloads 161
160 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts

Authors: Xinyue Jiao, Yu-Ren Lin

Abstract:

Argumentation is an essential aspect of scientific thinking that has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influences of two variables, termed 'argumentation strategy' and 'kind of science concept', on students' web-based argumentation. The first variable was divided into either monological (referring to an individual's internal discourse and inner chain reasoning) or dialectical (referring to dialogue interaction between/among people). The other was divided into either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts that are abstract and cannot be tested directly in nature). The present study applied a quasi-experimental design in which 138 7th-grade students were invited and then assigned randomly to either a monological group (N=70) or a dialectical group (N=68). An argumentation learning program called 'the PWAL' was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and based on scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and a self-reflecting process. In the collaborative version, on the other hand, students were allowed to construct arguments through communication with peers. The PWAL involved three descriptive science concept-based topics (units 1, 3 and 5) and three theoretical concept-based topics (units 2, 4 and 6). Three kinds of scaffolding were embedded into the PWAL: a) an argument template, used for constructing evidence-based arguments; b) a model of Toulmin's TAP, which shows the structure and elements of a sound argument; and c) a discussion block, which enabled the students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed.
An analytical framework for coding the students' arguments proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in theoretical topics (F(1, 136)=48.2, p < .001, η2=2.62). The post-hoc analysis showed that the students in the collaborative group performed significantly better than the students in the individual group (mean difference=2.27). However, there was no significant difference between the two groups regarding their argumentation in descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enabled us to conclude that the PWAL was effective for students' argumentation, and that peer interaction was essential for students to argue scientifically, especially on the theoretical topics. The follow-up qualitative analysis showed that students tended to generate arguments through critical dialogue interactions in the theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other's arguments. Further explanations of the students' web-based argumentation and suggestions for the development of web-based science learning are offered in our discussions.
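The F(1, 136) statistic reported above arises from a one-way comparison of the two strategy groups (70 + 68 students, hence 136 within-group degrees of freedom). As a minimal sketch of how such a statistic is computed, with small made-up scores rather than the study's data:

```python
def one_way_f(group_a, group_b):
    """One-way ANOVA F statistic for two groups: between-group mean
    square over within-group mean square (df1 = 1, df2 = n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    return (ss_between / 1) / (ss_within / (n_a + n_b - 2))

# Toy example with hypothetical argumentation scores for the two groups:
f_stat = one_way_f([1, 2, 3], [2, 3, 4])
```

With two groups, this F equals the square of the corresponding two-sample t statistic, which is why a post-hoc mean-difference comparison follows naturally.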

Keywords: argumentation, collaborative learning, scientific concepts, web-based learning

Procedia PDF Downloads 96
159 Sensory Characteristics of White Chocolate Enriched with Encapsulated Raspberry Juice

Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Danica Zaric, Vesna Tumbas Saponjac, Aleksandar Fistes

Abstract:

Chocolate is a food that activates pleasure centers in the human brain. In comparison to dark and milk chocolate, white chocolate does not contain fat-free cocoa solids and thus lacks bioactive components. The aim of this study was to examine the sensory characteristics of white chocolate enriched with the addition of 10% raspberry juice encapsulated in maltodextrins (denoted as encapsulate). Chocolate is primarily intended for enjoyment, and therefore the sensory expectation is a critical factor for consumers when selecting a new type of chocolate. Consumer acceptance of chocolate depends primarily on the appearance and taste, but also very much on the mouthfeel, which mainly depends on the particle size of the chocolate. Chocolate samples were evaluated by a panel of 8 trained panelists, food technologists trained according to ISO 8586 (2012). The panelists developed the list of attributes used in this study: intensity of red color (light to dark); gloss on the surface (matte to shiny); texture on snap (appearance of visible cavities or holes on the snap surface: even to gritty); hardness (hardness felt during the first bite of the chocolate sample in half by the incisors: soft to hard); melting (the time needed to convert solid chocolate into a liquid state: slow to quick); smoothness (perception of evenness of the chocolate during melting: very even to very granular); fruitiness (impression of fruity taste: light fruity notes to distinct fruity notes); and sweetness (the organoleptic characteristic of a pure substance or mixture giving a sweet taste: lightly sweet to very sweet). The chocolate evaluation was carried out 24 h after sample preparation in a sensory laboratory, in partitioned booths illuminated with fluorescent lights (ISO 8589, 2007). Samples were served on white plastic plates labeled with three-digit codes from a random number table.
Panelists scored the perceived intensity of each attribute using a 7-point scale (1 = the least intense and 7 = the most intense) (ISO 4121, 2002). The addition of 10% encapsulate had a large influence on the chocolate color, giving the enriched chocolate an attractive reddish color. At the same time, the enriched chocolate sample had a lower intensity of gloss on the surface. The panelists noticed that the addition of encapsulate reduced the time needed to convert the solid chocolate into a liquid state, while increasing its hardness. The addition of encapsulate had a significant impact on the chocolate flavor: it reduced the sweetness of the white chocolate and contributed a fruity raspberry flavor.

Keywords: white chocolate, encapsulated raspberry juice, color, sensory characteristics

Procedia PDF Downloads 155
158 The Effect of Social Media Influencer on Boycott Participation through Attitude toward the Offending Country in a Situational Animosity Context

Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong

Abstract:

The use of surrogate boycotts as a coercive tactic to force an offending party into changing its approach has become increasingly significant over the last several decades, and is expected to increase in the future. Research shows that surrogate boycotts are often triggered by controversial international events, and particular foreign countries serve as the offending party in the international marketplace. In other words, multinational corporations are likely to become surrogate boycott targets in overseas markets because of the animosity between their home and host countries. Focusing on a surrogate boycott triggered by severe situational animosity, this research aims to examine how social media influencers (SMIs), serving as electronic key opinion leaders (EKOLs) in an international crisis, facilitate and organize a boycott and persuade consumers to participate in it. This research suggests that SMIs can be a particularly important information source in a surrogate boycott sparked by situational animosity. Under such a context, SMIs become a critical information source for individuals to enhance and update their understanding of the event because, unlike traditional media, social media serve as a platform for instant, 24-hour, non-stop information access and dissemination. The Xinjiang cotton event was adopted as the research context; it was viewed as an ongoing inter-country conflict, reflecting a crisis that provokes animosity against the West. Through online panel services, both studies recruited Mainland Chinese nationals as respondents to the surveys. The findings show that: 1. Social media influencer messages are positively related to a negative attitude toward the offending country. 2. Attitude toward the offending country is positively related to boycott participation.
To address the unexplored question of the effect of social media influencers on consumer participation in boycotts, this research presents a finer-grained examination of boycott motivation, with a special focus on a situational animosity context. This research is split into two interrelated parts. In the first part, this research shows that attitudes toward the offending country can be socially constructed by the influence of social media influencers in a situational animosity context. The study results show that consumers perceive different strengths of social pressure related to various levels of influencer messages, and thus exhibit different levels of attitude toward the offending country. In the second part, this research further investigates the effect of attitude toward the offending country on boycott participation. The study findings show that such an attitude exacerbated the effect of social media influencer messages on boycott participation in a situation of animosity.

Keywords: animosity, social media marketing, boycott, attitude toward the offending country

Procedia PDF Downloads 91
157 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work concerns design to facilitate the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, users' nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition all contribute to this process, and the approach can also be applied to other tasks, such as walking. It treats prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier for the user to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data are noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time. The model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering users a possibility to maximize their quality of life, social interaction, and gesture communication.
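The collect-train-control loop described above can be reduced to its simplest form: fit a model on recorded (sensory observation, control input) pairs, then query it in real time. The sketch below is purely illustrative; the abstract names no specific learning algorithm, and the linear plant, the dataset, and the least-squares model here are all made-up assumptions standing in for the real pipeline.

```python
import random

random.seed(0)

# Step 1 (collect): a hypothetical recorded dataset of pairs
# (sensor reading, desired actuator command), generated here from an
# assumed noisy linear plant.
xs = [i / 20 for i in range(21)]
data = [(x, 0.8 * x + 0.1 + random.gauss(0, 0.01)) for x in xs]

# Step 2 (train): closed-form simple linear regression (least squares)
# predicting the control input from the sensory observation.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Step 3 (control): map each new sensory observation to a control signal.
def control_signal(sensor_value):
    """Real-time step: generate the actuator command for a new reading."""
    return slope * sensor_value + intercept
```

In a real prosthetic system the scalar reading would be a feature vector and the linear fit would be replaced by a richer model, but the three-step structure is the same.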

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 91
156 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process

Authors: Ed de Jong

Abstract:

Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable, high-value products from renewable materials, and it actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations in the marketplace. In addition, Avantium helps companies to accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits, by providing proprietary systems and services to this end. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from first-generation carbohydrates, but the potential for competition with the human food chain leads brand owners to look for strategies to transition from first- to second-generation feedstocks. The use of non-edible lignocellulosic feedstock is an equally attractive route to produce chemical intermediates and an important part of the solution to these global issues (the Paris targets). Avantium's Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches, or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation. However, such a material might preferably be deployed in higher-added-value applications. Bitumen, which is fossil based, is mostly used for paving applications. Traditional hot-mix asphalt emits large quantities of the greenhouse gases CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient at breaking down higher-chain hydrocarbons into lower-chain hydrocarbons with higher added value than bitumen. This has a negative effect on the availability of bitumen.
The asphalt market, as well as governments, is looking for alternatives with higher sustainability in terms of GHG emissions. The use of alternative sustainable binders, which can (partly) replace the bitumen, contributes to reduced GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure resembles that of bitumen, and it could therefore be used as an alternative to bitumen in applications like roofing or asphalt. Applications such as the use of lignin in asphalt require both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects as well as mixing are key criteria. From a practical point of view, behavior under real road conditions is key (how easily can the asphalt be prepared, how easily can it be applied on the road, what is its durability, etc.). The paper will discuss the fundamentals of the use of lignin as a bitumen replacement as well as the status of the different demonstration projects in Europe using lignin as a partial bitumen replacement in asphalt, and will especially present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.

Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability

Procedia PDF Downloads 146