Search results for: foreign real estate investment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7711

241 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
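
As an illustration of the kind of unsupervised matching the framework relies on, the following minimal Python sketch computes a classic dynamic-programming DTW distance between an extracted event segment and two hypothetical appliance templates; the segment values, template names and the nearest-template rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW distance between two 1-D power profiles."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy example at 1/60 Hz (one sample per minute): match an extracted event
# segment against two hypothetical appliance templates and keep the closest.
segment   = np.array([0, 120, 125, 118, 0], dtype=float)          # watts
templates = {"fridge": np.array([0, 110, 115, 0], dtype=float),
             "lamp":   np.array([0, 40, 42, 41, 0], dtype=float)}
best = min(templates, key=lambda name: dtw_distance(segment, templates[name]))
print("closest appliance model:", best)
```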

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 78
240 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance

Authors: Aleksandra Czubek

Abstract:

As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.

Keywords: ip, technology, copyright, data, infringement, comparative analysis

Procedia PDF Downloads 18
239 Innovation Ecosystems in Construction Industry

Authors: Cansu Gülser, Tuğce Ercan

Abstract:

The construction sector is a key driver of the global economy, contributing significantly to growth and employment through a diverse array of sub-sectors. However, it faces challenges due to its project-based nature, which often hampers long-term collaboration and broader incentives beyond individual projects. These limitations are frequently discussed in the scientific literature as obstacles to innovation and industry-wide change, and traditional practices and unwritten rules further hinder the adoption of new processes within the construction industry. The disadvantages of the industry's project-based structure in fostering innovation and long-term relationships include limited continuity, fragmented collaborations, and a focus on short-term goals, which collectively hinder the development of sustained partnerships, inhibit the sharing of knowledge and best practices, and reduce incentives for investing in innovative processes and technologies. The temporal complexities inherent in project-based sectors like construction also make it difficult to address societal challenges through collaborative efforts, and traditional management approaches are inadequate for scaling up innovations and adapting to significant changes. For systemic transformation in the construction sector, there is a need for more collaborative relationships and activities beyond traditional supply chains. This study delves into the concept of an innovation ecosystem within the construction sector, highlighting various research findings. It aims to explore key questions about the components that enhance innovation capacity, the relationship between a robust innovation ecosystem and this capacity, and the reasons why innovation is less prevalent and less widely implemented in this sector compared to others. Additionally, it examines the main factors hindering innovation within companies and identifies strategies to improve these efforts, particularly in developing countries. The innovation ecosystem in the construction sector generates various outputs through interactions between business resources and external components. These outputs include innovative value creation, sustainable practices, robust collaborations, knowledge sharing, competitiveness, and advanced project management, all of which contribute significantly to company market performance and competitive advantage. This article offers insights and strategic recommendations for industry professionals, policymakers, and researchers interested in developing and sustaining innovation ecosystems in the construction sector. Future research should focus on broader samples for generalization, comparative sector analyses, and application-focused studies addressing real industry challenges. Studying the long-term impacts of innovation ecosystems, integrating advanced technologies like AI and machine learning into project management, and developing future application strategies and policies are also important.

Keywords: construction industry, innovation ecosystem, innovation ecosystem components, project management

Procedia PDF Downloads 35
238 Barbie in India: A Study of Effects of Barbie in Psychological and Social Health

Authors: Suhrita Saha

Abstract:

Barbie is a fashion doll manufactured by the American toy company Mattel Inc., and it made its debut at the American International Toy Fair in New York on 9 March 1959. From being a fashion doll to a symbol of fetishistic commodification, Barbie has come a long way. A Barbie doll is sold every three seconds across the world, which makes the billion-dollar brand the world's most popular doll for girls. The 11.5-inch moulded plastic doll corresponds to a height of 5 feet 9 inches at 1/6 scale. Her vital statistics have been estimated at 36 inches (chest), 18 inches (waist) and 33 inches (hips). Her weight is permanently set at 110 pounds, which would be 35 pounds underweight. Ruth Handler, the creator of Barbie, wanted a doll that represented adulthood and allowed children to imagine themselves as teenagers or adults. While Barbie might have been intended to be independent, imaginative and innovative, her physical uniqueness does not confine the doll to the status of a plaything. She is a cultural icon, but one with far-reaching critical implications. The doll is a commodity bearing more social value than practical use value. The way Barbie is produced represents the industrialization and commodification of the process of symbolic production. And this symbolic production and consumption is a standardized, planned one that produces a stereotypical 'pseudo-individuality' and suppresses cultural alternatives. Children are subjected to, and also arise as subjects in, this consumer context. A very gendered, physiologically dissected, sexually charged symbolism is imposed upon children (both male and female), childhood, their social worlds, identities, and relationship formation. Barbie is also very popular among Indian children. While the doll is essentially an imaginative representation of the West, it is internalized by Indian sensibilities. Through observation and questionnaire-based interviews within a sample population of adolescent children (primarily female, a few male) and parents (primarily mothers) in Kolkata, an Indian metropolis, the paper puts forth findings of sociological relevance. 1. Barbie creates, recreates, and accentuates already existing divides between binaries such as male-female, fat-thin, sexy-nonsexy, beauty-brain and more. 2. The Indian girl child, in her associative process with Barbie, wants to be like her and commodifies her own self. The male child also readily accepts this standardized commodification. The definition of beauty is thus based on prejudice and stereotype. 3. Not being able to become Barbie creates health issues, both psychological and physiological, varying from anorexia to obesity as well as personality disorders. 4. From being a plaything, Barbie becomes the game maker. Barbie, along with many other forms of simulation, further creates a consumer culture and a market for all kinds of fitness-related hyper-enchantment and subsequent disillusionment. The construct becomes the reality, and the real gets lost in the play world. The paper thus argues that Barbie transforms from an innocuous doll into a social construct with long-term and irreversible adverse impact.

Keywords: barbie, commodification, personality disorder, stereotype

Procedia PDF Downloads 362
237 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves building a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, and then reverse-time propagating the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to handling, at present day, full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with the individual partial images.
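
The abstract does not spell out the imaging condition; a common choice in RTM is the zero-lag cross-correlation of the forward-propagated source wavefield with the back-propagated receiver wavefield. The minimal Python sketch below assumes both wavefields have already been extrapolated (e.g., by PSPI) onto the same grid; the array shapes, random test data and two-shot stacking loop are illustrative assumptions, not the authors' code.

```python
import numpy as np

def imaging_condition(source_wavefield, receiver_wavefield):
    """Zero-lag cross-correlation imaging condition.

    Both wavefields have shape (nt, nz, nx): the forward-propagated source
    wavefield and the back-propagated (time-reversed) receiver wavefield on
    the same grid. Returns the partial image for one shot (one transmitter).
    """
    return np.sum(source_wavefield * receiver_wavefield, axis=0)

# Toy demonstration with random wavefields for a two-shot acquisition;
# stacking the partial images is what suppresses aperture-limited noise.
rng = np.random.default_rng(0)
shots = [(rng.standard_normal((50, 32, 32)), rng.standard_normal((50, 32, 32)))
         for _ in range(2)]
image = sum(imaging_condition(s, r) for s, r in shots)
print(image.shape)  # (32, 32)
```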

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 74
236 Identifying the Faces of Colonialism: An Analysis of Gender Inequalities in Economic Participation in Pakistan through a Postcolonial Feminist Lens

Authors: Umbreen Salim, Anila Noor

Abstract:

This paper analyses the influences and faces of colonialism in women's participation in economic activity in postcolonial Pakistan through a postcolonial feminist economic lens. It is an attempt to probe the shifts in gender inequalities across three stages: pre-colonial, colonial, and postcolonial times in the Indo-Pak subcontinent. It delves into an inquiry of the pre-colonial period, as it is imperative to understand the situation and context before colonisation in order to assess the deviations associated with its onset. Hence, in order to trace gender inequalities, this paper starts its analysis with the Mughal Era (1526-1757) that existed before British colonisation; it then examines the gender inequalities that existed during British colonisation (1857-1947) and the associated dynamics and changes in women's vulnerabilities to participate in the economy. This is followed by the postcolonial (1947 onwards) scenario of discriminations and oppressions faced by women. As part of the research methodology, primary and secondary data analysis was carried out. Analysis of secondary data, including literary works and photographs, was followed by primary data collection using ethnographic approaches and participatory tools to understand the presence of coloniality and gender inequalities embedded in the social structure through participants' real-life stories. The data is analysed using feminist postcolonial analysis. Intersectionality has been a key tool of analysis, as the paper delves into gender inequalities through the class and caste lens, briefly touching on religion. It is imperative to mention the significance of the study and, very importantly, the practical challenges, as historical analysis of the 18th and 19th centuries is involved. Most of the available historical work is produced by (a) men and (b) foreigners, mostly white authors. Since the historical analysis is mostly by men, the gender analysis presented misses many aspects of women's issues, and since the authors have been mostly white Europeans, it carries what Mohanty calls an 'under Western eyes' perspective. The edge of this paper, by contrast, is the authors' deep attachment, belongingness as lived reality, and work with women in Pakistan as postcolonial subjects, a better position from which to relate to the social reality and understand the phenomenon. The study yielded some key results: gender inequalities existed before colonisation, when women were the hidden wheel of a stable economy and remained completely invisible. During British colonisation, the vulnerabilities of women only increased, and their inferior status compared to men was further strengthened. Today, the postcolonial woman lives with the deep-rooted effects of coloniality, where she is divided by class and position within the class, and she has to face gender inequalities within the household and in the market for economic participation. Gender inequalities have existed in pre-colonial, colonial and postcolonial times in Pakistan with varying dynamics, degrees and intensities for women, whereby social class, caste and religion have been key factors defining the extent of discrimination and oppression. Colonialism may have physically ended, but coloniality remains and has deep, broad and wide effects in increasing gender inequalities in women's participation in the economy in Pakistan.

Keywords: colonialism, economic participation, gender inequalities, women

Procedia PDF Downloads 208
235 Fighting the Crisis with 4.0 Competences: Higher Education Projects in the Times of Pandemic

Authors: Jadwiga Fila, Mateusz Jezowski, Pawel Poszytek

Abstract:

The outbreak of the global COVID-19 pandemic started a time of crisis full of uncertainty, especially in the field of transnational cooperation projects based on the international mobility of their participants. This is notably the case of the Erasmus+ Program for higher education, which is the flagship European initiative boosting cooperation between educational institutions, businesses, and other actors, enabling student and staff mobility as well as strategic partnerships between different parties. The aim of this abstract is to study whether competences 4.0 are able to empower Erasmus+ project leaders in sustaining their international cooperation in times of global crisis, widespread online learning, and common project disruption or cancellation. The concept of competences 4.0 emerged from the notion of Industry 4.0, and it relates to skills that are fundamental for the current labor market. For the aim of the study presented in this abstract, four main 4.0 competences were distinguished: digital, managerial, social, and cognitive competence. The hypothesis for the study stipulated that the above-mentioned highly developed competences may act as a protective shield against pandemic challenges in terms of projects' sustainability and continuation. The objective of the research was to assess to what extent individual competences are useful in managing projects in times of crisis. For this purpose, a study was conducted involving, among others, 141 Polish higher education project leaders who were running their cooperation projects during the peak of the COVID-19 pandemic (March-November 2020). The research explored the self-perception of the above-mentioned competences among Erasmus+ project leaders and the contextual data regarding the sustainability of the projects. The quantitative character of the data permitted validation of the scales (Cronbach's alpha measure), and the use of factor analysis made it possible to create a distinctive variable for each competence and its dimensions. Finally, logistic regression was used to examine the association of competences and other factors with project status. The study shows that the project leaders' competence profile attributed the highest score to digital competence (4.36 on a 1-5 scale). Slightly lower values were obtained for cognitive competence (3.96) and managerial competence (3.82). The lowest score was accorded to one specific dimension of social competence: adaptability and ability to manage stress (1.74), which proves that the pandemic was a real challenge that had to be faced by project coordinators. Among higher education projects, 10% were suspended or prolonged because of the COVID-19 pandemic, whereas 90% were undisrupted (continued or already successfully finished). The quantitative analysis showed a positive relationship between the leaders' levels of competences and project status. For all competences, the scores were higher for project leaders who finished their projects successfully than for leaders who suspended or prolonged their projects. The research demonstrated that, in the demanding times of the COVID-19 pandemic, competences 4.0 do, to a certain extent, play a significant role in the successful management of Erasmus+ projects. The implementation and sustainability of international educational projects, despite mobility and sanitary obstacles, depended, among other factors, on the level of leaders' competences.
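
As a reference for the scale-validation step mentioned above, the following minimal Python sketch computes Cronbach's alpha for one competence scale; the respondent answers are hypothetical values, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: 2-D array (respondents x items), e.g. Likert answers (1-5)
    for the items of one competence dimension.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 self-assessment answers of five leaders on a 3-item scale.
answers = [[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(answers), 2))
```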

Keywords: Competences 4.0, COVID-19 pandemic, Erasmus+ Program, international education, project sustainability

Procedia PDF Downloads 94
234 Iron-Metal-Organic Frameworks: Potential Application as Theranostics for Inhalable Therapy of Tuberculosis

Authors: Gabriela Wyszogrodzka, Przemyslaw Dorozynski, Barbara Gil, Maciej Strzempek, Bartosz Marszalek, Piotr Kulinowski, Wladyslaw Piotr Weglarz, Elzbieta Menaszek

Abstract:

MOFs (Metal-Organic Frameworks) belong to a new group of porous materials with a hybrid organic-inorganic construction. Their structure is a network consisting of metal cations or clusters (acting as metallic centers, or nodes) and organic linkers between the nodes. The interest in MOFs is primarily associated with their well-developed surface area and large pores. The possibility of building MOFs from biocompatible components allows them to be used as potential drug carriers. Furthermore, forming MOF structures from cations possessing paramagnetic properties (e.g., iron cations) allows them to be used as MRI (Magnetic Resonance Imaging) contrast agents. The concept of particles that combine the ability to carry an active substance with imaging properties has been called theranostics (from the combination of the words therapy and diagnostics). By building the MOF structure from iron cations, it is possible to use such particles as theranostic agents and to monitor the distribution of the active substance in real time after administration. In this study, the iron-MOF Fe-MIL-101-NH2 was chosen, consisting of iron clusters at the nodes of the structure and amino-terephthalic acid as a linker. The aim of the study was to investigate the possibility of applying Fe-MIL-101-NH2 as an inhalable theranostic particulate system for the first-line anti-tuberculosis antibiotic isoniazid. The drug content incorporated into Fe-MIL-101-NH2 was evaluated by a dissolution study using a spectrophotometric method. The results showed an isoniazid encapsulation efficiency of ca. 12.5% wt. The possibility of applying Fe-MIL-101-NH2 as an MRI contrast agent was demonstrated by magnetic resonance tomography. Fe-MIL-101-NH2 effectively shortened T1 and T2 relaxation times (increasing the R1 and R2 relaxation rates) linearly with the concentration of suspended material. Images obtained using a multi-echo magnetic resonance imaging sequence revealed the possibility of using Fe-MIL-101-NH2 as a positive or negative contrast agent depending on the applied repetition time. MOF micronization via ultrasound was evaluated by XRD, nitrogen adsorption, FTIR and SEM imaging and did not influence crystal shape and size. Ultrasonication broke up the aggregates and yielded very homogeneous-looking SEM images. MOF cytotoxicity was evaluated in an in vitro test with a highly sensitive resazurin-based reagent, PrestoBlue™, on the L929 fibroblast cell line. After 24 h, no inhibition of cell proliferation was observed. All results demonstrate the potential applicability of iron-MOFs as an isoniazid carrier and as an MRI contrast agent in the inhalation treatment of tuberculosis. Acknowledgments: The authors gratefully acknowledge the National Science Center Poland for providing financial support, grant no. 2014/15/B/ST5/04498.
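
The reported linear dependence of the relaxation rates on concentration is what allows a single relaxivity figure to characterize a contrast agent; the short Python sketch below fits that linear relation with entirely hypothetical R1 and concentration values, purely to illustrate the calculation rather than to reproduce the study's measurements.

```python
import numpy as np

# Hypothetical data: relaxation rates R1 = 1/T1 (s^-1) measured for
# suspensions of the MOF at increasing concentrations (mg/mL). The slope of
# the linear fit is the relaxivity, quantifying contrast efficiency.
concentration = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # mg/mL (assumed values)
r1_rates = np.array([0.35, 0.58, 0.80, 1.27, 2.21])   # s^-1 (assumed values)

relaxivity, intercept = np.polyfit(concentration, r1_rates, 1)
print(f"r1 relaxivity ~ {relaxivity:.2f} s^-1 per mg/mL, R1(0) ~ {intercept:.2f} s^-1")
```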

Keywords: imaging agents, metal-organic frameworks, theranostics, tuberculosis

Procedia PDF Downloads 251
233 Fake News Domination and Threats on Democratic Systems

Authors: Laura Irimies, Cosmin Irimies

Abstract:

The public space all over the world is currently confronted with an aggressive assault of fake news that has lately impacted public agenda setting, collective decisions and social attitudes. Top leaders constantly call out mainstream news as "fake news", and public opinion grows more confused. "Fake news" is generally defined as false, often sensational, information disseminated under the guise of news reporting; it was declared word of the year 2017 by Collins Dictionary and has been one of the most debated socio-political topics of recent years. Websites which, deliberately or not, publish misleading information are often shared on social media, where they essentially increase their reach and influence. According to international reports, exposure to fake news is an undeniable reality all over the world: exposure to completely invented information reaches 31 percent in the US, and it is even higher in Eastern European countries such as Hungary (42%) and Romania (38%) or in Mediterranean countries such as Greece (44%) or Turkey (49%), and lower in Northern and Western European countries – Germany (9%), Denmark (9%) or Holland (10%). While the study of fake news (its mechanisms and effects) is still in its infancy, it has become truly relevant as the phenomenon seems to have a growing impact on democratic systems. Studies conducted by the European Commission show that 83% of 26,576 respondents consider the existence of news that misrepresents reality a threat to democracy. Studies recently conducted at Arizona State University show that people with higher education can more easily spot fake headlines, but over 30 percent of them can still be trapped by fake information. To refer only to some of the most recent situations in Romania, fake news issues and hidden-agenda suspicions related to the massive and extremely violent public demonstrations held on August 10th, 2018, with strong participation of the Romanian diaspora, were widely reflected by the international media and generated serious debates within the European Commission. Considering the above framework, the study raises four main research questions: 1. Is fake news a problem or just a natural consequence of mainstream media decline and the abundance of sources of information? 2. What are the implications for democracy? 3. Can fake news be controlled without restricting fundamental human rights? 4. How could the public be properly educated to detect fake news? The research uses mostly qualitative but also quantitative methods: content analysis of studies, websites and media content, official reports and interviews. The study will demonstrate the real threat fake news represents, as well as the need for proper media literacy education, and will draw basic guidelines for developing a new and essential skill: that of detecting fake news in a society overwhelmed by sources of information that constantly churn out massive amounts of content, increasing the risk of misinformation and leading to inadequate public decisions that could affect democratic stability.

Keywords: agenda setting, democracy, fake news, journalism, media literacy

Procedia PDF Downloads 130
232 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind and the resulting load management for a specific manufacturing plant in Ireland are assessed. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, special connections to the national grid, and have other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low-power-density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy, instantaneously and locally consumed; therefore, transmission and distribution losses can be reduced. Storage is not required due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are not often recorded and available to third parties, since manufacturing companies usually keep track only of the overall energy expenditure. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at one-hour steps at a height of 10 m. Since the hub of a typical wind turbine reaches a higher altitude, complementary data for a different location at 50 m have been compared, and a model for the estimation of wind speed at the required height and location is defined. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through Net Present Value (NPV), and the influence the main technical and economic parameters have on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit of integrating wind, solar and a cogeneration technology into the energy system is evaluated and discussed.
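
To make the wind-resource step concrete, the following minimal Python sketch extrapolates 10 m wind speeds to an assumed hub height with a power law, fits a Weibull distribution, and derives a mean wind power density; the synthetic speeds, the 80 m hub height and the 0.14 shear exponent are illustrative assumptions, not values from the study.

```python
import math
import numpy as np
from scipy import stats

# Hypothetical hourly wind speeds at the 10 m measurement height (m/s);
# synthetic values, not the data analysed in the study.
rng = np.random.default_rng(1)
v10 = rng.weibull(2.0, 8760) * 6.0

# Power-law extrapolation to an assumed 80 m hub height; the shear exponent
# of 0.14 is a common open-terrain assumption, not a value from the paper.
z_ref, z_hub, shear = 10.0, 80.0, 0.14
v_hub = v10 * (z_hub / z_ref) ** shear

# Fit a Weibull distribution (location fixed at 0) to characterise the site.
k, loc, c = stats.weibull_min.fit(v_hub, floc=0)

# Mean wind power density (W/m^2): 0.5 * rho * E[v^3], with E[v^3] taken
# from the fitted Weibull parameters.
rho = 1.225  # air density, kg/m^3
power_density = 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)
print(f"Weibull k={k:.2f}, c={c:.2f} m/s, mean power density ~ {power_density:.0f} W/m^2")
```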

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 129
231 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the participation of Conventionally Fuelled Vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help tackle the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method involves a literature review on two main topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed positive incentive programs were the following: TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB); the Norm Activation Model (NAM); Social Learning Theory (SLT); the Theory of Interpersonal Behaviour (TIB); Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). After reviewing the theoretical constructs of each theory and their influence on car use, it can be concluded that positive incentive schemes affect behaviour change in the following ways: changing individuals' attitudes through informational incentives; increasing feelings of moral obligation to reduce the use of CFVs; increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media; increasing the perceived control of behaviour through informational and training incentives; increasing personal norms with reinforcing information; providing tools for self-monitoring and self-evaluation; providing real experiences in alternative modes to the car; making the observation of others' car use reduction possible; informing about the consequences of behaviour and emphasizing the individual's responsibility towards society and the environment; increasing the perception of the consequences of car use for an individual's valued objects; increasing the perceived ability to reduce threats to the environment; helping establish goals to reduce car use; giving personalized feedback on the goal; increasing feelings of commitment to the goal; and reducing the perceived complexity of the use of alternatives to the car. It is notable that the emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs show how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 339
230 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus Using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by finite-difference time-domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method was compared against DI and LFE using the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% obtained by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes in shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
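
For clarity on the evaluation metric, the Python sketch below implements the relative error defined above (RMSE divided by the averaged true shear modulus); the toy 2x2 maps contain illustrative values only, not data from the study.

```python
import numpy as np

def relative_error(true_map, est_map):
    """RMSE between the estimated and true shear modulus maps, divided by
    the mean of the true map (the comparison metric used on the test set)."""
    true_map = np.asarray(true_map, dtype=float)
    est_map = np.asarray(est_map, dtype=float)
    rmse = np.sqrt(np.mean((est_map - true_map) ** 2))
    return rmse / np.mean(true_map)

# Toy 2x2 shear-modulus maps in kPa (illustrative values only).
truth = np.array([[3.0, 3.0], [8.0, 8.0]])
estimate = np.array([[3.2, 2.9], [7.6, 8.3]])
print(f"relative error: {100 * relative_error(truth, estimate):.1f}%")
```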

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 68
229 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications for the food economy. In recent years, consumers have become more conscious about their food choices, leading to a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, e.g., the origin of the food, how it is produced, and who produces it. In this frame, consumption choices and the interests of the citizen become part of one another, and the figure of the 'Citizen Consumer' is born: a responsible individual, ethically motivated to change his or her lifestyle in pursuit of sustainable consumption. At the same time, branding, which was previously a guarantee of product quality, is now being questioned. In order to meet these needs, agri-food companies are developing specific product lines that follow two main philosophies: 'Back to basics' and 'Less is more'. However, the issue of ethical behavior does not yet seem to find an adequate response in the market offer, most likely due to a lack of attention to the communication strategy used, very often based on market logic and rarely on ethical logic. The label in its classic concept of 'clean labeling' can no longer be the only instrument through which product information is conveyed, and its evolution towards a concept of 'clear label' is necessary to embrace ethical and transparent concepts and advance the process of democratization of the food system. The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the agri-food supply chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed towards agri-food traceability, no longer intended as a tool used to respond to the legislator, but rather as a promotional tool useful for presenting the company in a transparent manner and thereby reaching the market segment of food citizens. The use of mobile technology can also facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one which aims to overcome the pipeline communication model and to offer the listener a new way of telling the story of the food product, based on real data collected through traceability processes. The Citizen Consumer is therefore placed at the center of the new communication model, in which he or she has the opportunity to choose what to know and how. The new label creates a virtual access point capable of describing the product from different points of view, following personal interests and offering several content modalities to support different situations and usage contexts.

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

Procedia PDF Downloads 158
228 Evaluation of Natural Frequency of Single and Grouped Helical Piles

Authors: Maryam Shahbazi, Amy B. Cerato

Abstract:

The importance of a system's natural frequency (fn) emerges when the frequency of the vibration force is equivalent to the foundation's fn, which causes response amplification (resonance) that may lead to irreversible damage to the structure. Several factors such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and the physical structure affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of a soil-pile system, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests, which will allow engineers to predict more realistic dynamic responses under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m x 3.0 m in plan and 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box, with corresponding lengths of 3.66 m (except one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, all four piles with similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of the time history responses measured with the installed strain gages and accelerometers was used to evaluate fn. Time-history records from either accelerometers or strain gages were found to be acceptable for calculating fn. In this study, the existence of a pile reduced the fn of the soil slightly. Greater fn occurred for single piles with larger l/d ratios (higher slenderness ratio). Also, regardless of the connection type, the more slender pile group, which is obviously surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to more passive soil resistance around it. Relatively speaking, within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even greater reduction occurs in soil with a lower density. Moreover, the fn of the dense sand under the white noise signal was found to be 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to a type of pile and structure to attempt to avoid resonance. The quantitative results in this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquakes and applied machine loading forces.
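
As an illustration of the FFT-based identification described above, the minimal Python sketch below picks the dominant spectral peak of a measured response as its natural frequency; the synthetic 5 Hz decaying record and 200 Hz sampling rate are illustrative assumptions, not test data from the study.

```python
import numpy as np

def natural_frequency(signal, fs):
    """Estimate the dominant (natural) frequency of a measured response,
    e.g. an accelerometer or strain-gage time history, as the location of
    the largest peak in its amplitude spectrum."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# Synthetic check: a 5 Hz decaying oscillation sampled at 200 Hz with noise.
fs = 200.0
t = np.arange(0.0, 20.0, 1.0 / fs)
rng = np.random.default_rng(2)
record = np.exp(-0.2 * t) * np.sin(2 * np.pi * 5.0 * t) + 0.05 * rng.standard_normal(t.size)
print(f"estimated fn ~ {natural_frequency(record, fs):.2f} Hz")
```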

Keywords: helical pile, natural frequency, pile group, shake table, stiffness

Procedia PDF Downloads 133
227 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 82
226 A Vision Making Exercise for Twente Region: Development and Assessment

Authors: Gelareh Ghaderi

Abstract:

The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) until 2040 and to assess the impacts of those two alternative plans. The region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market and retain a skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and an economy-oriented vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center, driven by green technologies and an environmentally friendly economy. The market-oriented vision is based on attracting and developing different economic activities in the region, building on the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region on a national and international scale. On the basis of the two developed visions and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on a SWOT analysis, criteria were formulated and employed in modeling the two contrasting land use visions for the year 2040. Land use modeling consists of the determination of future land use demand, the assessment of land suitability (suitability analysis), and the allocation of land uses to suitable land. Suitability analysis aims to determine the available supply of land for future development as well as to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was carried out using CommunityViz, a Planning Support System application for spatially explicit land suitability and allocation. The Netwerkstad Twente has a highly developed transportation infrastructure, consisting of a highway network, national road network, regional road network, street network, local road network, railway network and bike-path network. Based on assumed speed limits for the different road types, the infrastructure accessibility level of the predicted land use parcels by four different transport modes is investigated. For the evaluation of the two development scenarios, the Multi-Criteria Evaluation (MCE) method is used. The first step was to determine the criteria used for the evaluation of each vision; all factors were categorized as economic, ecological or social. The results of the Multi-Criteria Evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores for the economic and ecological factors. This is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project would define the eastern part of the Netherlands and create a truly competitive local economy with innovation and an attractive environment as its backbone.
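
To illustrate how such a multi-criteria evaluation can be aggregated, the following minimal Python sketch computes weighted overall scores for the two scenarios; the criterion weights and normalised scores are purely hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical normalised scores (0-1) of the two scenarios on economic,
# ecological and social criteria, with illustrative weights; none of these
# numbers come from the study.
criteria = ["economic", "ecological", "social"]
weights = np.array([0.4, 0.4, 0.2])
scores = {
    "environment-oriented": np.array([0.70, 0.85, 0.65]),
    "market-oriented":      np.array([0.80, 0.55, 0.60]),
}

# Weighted linear combination, the simplest form of multi-criteria evaluation.
for scenario, s in scores.items():
    print(f"{scenario}: overall score = {np.dot(weights, s):.2f}")
```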

Keywords: economical oriented vision, environmental oriented vision, infrastructure, land use, multi criteria assessment, vision

Procedia PDF Downloads 227
225 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths, etc.) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence, the model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, as well as their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 326
224 The Radicalization of Islam in the Syrian Conflict: A Systematic Review from the Interreligious Dialogue Perspective

Authors: Cosette Maiky

Abstract:

Seven years have passed since the crisis erupted, and the list of challenges to peacebuilding and interreligious dialogue is still growing ever more discouraging: violence, displacement, sectarianism, discrimination, radicalisation, fragmentation, and the collapse of various social and economic infrastructure have notoriously plagued the war-torn country. As the situation in Syria and neighbouring countries still creates real concern about the future of social cohesion and coexistence in the region, the author, in her function as Field Expert on Arab Countries at the King Abdullah bin Abdelaziz Centre for Interreligious and Intercultural Dialogue, presents a systematic review paper that focuses on the radicalization of Islam in Syria. The exercise was based on a series of research questions that guided both the review of the literature and the interviews. Their relative meaningfulness was assessed and trade-offs were discussed in each case to ensure that key questions were addressed and to avoid unnecessary effort. There was an element of flexibility, as the assessment progressed, to introduce additional generic questions. The main sources of information were documents and literature with a direct bearing on the issues of relevance, collected in all available formats, and information collected through key informant interviews. The latter was particularly helpful for understanding some of the capacity constraints, as well as the gaps, enablers and barriers. Respondents were selected among those who are engaged in IRD activities clearly linked to peacebuilding (i.e. religious leaders, leaders in religious communities, peace actors, religious actors, conflict parties, minority groups, women's initiatives, youth initiatives, civil society organizations, academia, etc.), with relevant professional qualifications and work experience. During the research process, the Consultant carefully took account of sensitivities around terminology as well as a highly insecure and dynamic context. The Consultant (an Arabic native speaker) therefore adapted terminology while conducting interviews according to the area and respondent. Findings revealed the deep ideological polarization and lack of trust dividing communities and preventing meaningful dialogue opportunities; the challenge of prioritizing IRD and peacebuilding work in the context of such a severe humanitarian crisis facing the country; the need to engage religious leaders and institutions in peacebuilding processes and initiatives; the need for institutions with a specific IRD mandate, which can have a sustainable influence on peace through various levels of intervention (from the grassroots level to policy and research); and, lastly, the need to address stigma in media representations of Muslims and Islam. While religion and religious agendas have been massively used for political issues and power play in the Middle East and elsewhere, more extensive policy and research efforts are needed to highlight the positive role of religion and religious actors in dialogue and peacebuilding processes.

Keywords: radicalisation, Islam, Syria, conflict

Procedia PDF Downloads 173
223 Implementation of Project-Based Learning with Peer Assessment in Large Classes under Consideration of Faculty’s Scare Resources

Authors: Margit Kastner

Abstract:

To overcome the negative consequences associated with large class sizes and to support students in developing the necessary competences (e.g., critical thinking, problem-solving, or teamwork skills), a marketing course has been redesigned by implementing project-based learning with peer assessment (PBL&PA). This means that students can voluntarily take advantage of this supplementary offer and, in addition to attending the lecture where clicker questions are asked, explore a real-world problem, find a solution, and assess the results of peers while working in small collaborative groups. In order to handle this with little further effort, the process is technically supported by the university's e-learning system in such a way that students upload their solution in the form of an assignment, which is then automatically distributed to peer groups who have to assess the work of three other groups. Finally, students' work is graded automatically, considering both students' contribution to the project and the conformity of the peer assessment. The purpose of this study is to evaluate students' perception of PBL&PA using an online questionnaire to collect the data. More specifically, it aims to discover students' motivations for (not) working on a project and the benefits and problems students encounter. In addition to the survey, students' performance was analyzed by comparing the final grades of those who participated in PBL&PA with those who did not. Among the 260 students who filled out the questionnaire, 47% participated in PBL&PA. Besides extrinsic motivations (bonus credits), students' participation was often motivated by learning and social benefits. Reasons for not working on a project were connected to students' organization and management of their studies (e.g., time constraints, no or wrong information) and teamwork concerns (e.g., missing engagement of peers, prior negative experiences). In addition, high workload and insufficient extrinsic motivation (bonus credits) were mentioned. With regard to the benefits and problems students encountered during the project, students provided more positive than negative comments. The positive aspects most often stated were learning and social benefits, while the negative ones were mainly related to the technical implementation. Interestingly, bonus credits were hardly named as a positive aspect, meaning that intrinsic motivations became more important when working on the project. Team aspects generated mixed feelings. In addition, students who voluntarily participated in PBL&PA were, in general, more active and utilized further course offerings such as clicker questions. Examining students' performance in the final exam revealed that students who did not participate in any of the offered active-learning tasks performed worst, while students who used all activities performed best. In conclusion, the goals of the implementation were met in terms of students' perceived benefits and the positive impact on students' exam performance. Since the comparison of the automatic grading with faculty grading showed valid results, it is possible to rely only on automatic grading in the future. That way, the additional workload for faculty will remain within limits. Thus, the implementation of project-based learning with peer assessment can be recommended for large classes.
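
A minimal sketch, assuming a simple weighted combination, of how an automatic grade of the kind described above could be computed from the peer scores a group receives and the conformity of the scores it gives; the weights and the conformity measure are illustrative assumptions, not the course's actual grading rules.

```python
def final_project_grade(received_scores, given_scores, consensus_scores,
                        w_project=0.8, w_conformity=0.2, scale=100.0):
    """Combine the peer-assessed project score with an assessment-conformity score.

    received_scores  : scores (0..scale) this group received from its assessors
    given_scores     : scores this group gave to the three groups it assessed
    consensus_scores : mean scores those three groups received from all assessors
    """
    project = sum(received_scores) / len(received_scores)
    deviation = sum(abs(g - c) for g, c in zip(given_scores, consensus_scores)) / len(given_scores)
    conformity = max(0.0, 1.0 - deviation / scale) * scale   # close to consensus -> high score
    return w_project * project + w_conformity * conformity

# Example: a group rated 82/78/85 by peers, whose own assessments were close to consensus
print(final_project_grade([82, 78, 85], [70, 88, 64], [72, 85, 60]))
```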

Keywords: automated grading, large classes, peer assessment, project-based learning

Procedia PDF Downloads 165
222 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is a Port wine production region. In the NE of Portugal, the strong incision of the Douro valley has produced very steep slopes organized into agricultural terraces, which have undergone an intense and deep transformation in order to mechanize the work. The old terrace system, based on vertical stone retaining walls, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instabilities. We used the shallow landslide stability model (SHALSTAB) based on physical parameters such as cohesion (c′), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ρ), slope angle (α) and contributing areas computed by the Multiple Flow Direction method (MFD). A terraced area can only be analysed by these models if we have very detailed information representative of the terrain morphology, on which the slope angle and the contributing areas depend. We achieve that purpose using digital elevation models (DEM) of very high resolution (40 cm pixel), produced from a set of photographs taken from a flight at 100 m height with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is also based on the DEM, supported by the statement that the interflow, although not coincident with the superficial flow, has an important similarity to it. Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the basis for modelling the real internal flow. Thus, we assumed that the contributing area at 1 m resolution modelled by MFD is representative of the internal flow of the area. In order to solve this problem we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken from a flight at 5 km height. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PVC) of 0.0004 and a TPR/FPR ratio of 2.06.
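
A minimal per-cell sketch, assuming the commonly used infinite-slope formulation of SHALSTAB with cohesion (critical steady-state recharge over transmissivity, q/T); the default unit weights and the function are illustrative assumptions, not the exact parameterisation calibrated in the study.

```python
import numpy as np

def shalstab_log_qt(slope_rad, contrib_area, cell_width,
                    cohesion, phi_rad, soil_depth,
                    rho_s=1600.0, rho_w=1000.0, g=9.81):
    """Per-cell log10(q/T) threshold of the SHALSTAB criterion.

    slope_rad    : slope angle alpha (radians), e.g. from the 40 cm DEM
    contrib_area : MFD contributing area a (m^2), e.g. from the 1 m DEM
    cell_width   : contour length b (m), here taken as the cell size
    cohesion     : effective cohesion c' (Pa)
    phi_rad      : friction angle (radians)
    soil_depth   : vertical soil depth z (m)
    Lower values flag cells more prone to instability.
    """
    a_over_b = contrib_area / cell_width
    # relative saturation h/z needed to trigger failure (infinite-slope analysis)
    hz = (cohesion / (rho_w * g * soil_depth * np.cos(slope_rad) ** 2 * np.tan(phi_rad))
          + (rho_s / rho_w) * (1.0 - np.tan(slope_rad) / np.tan(phi_rad)))
    hz = np.clip(hz, 0.0, 1.0)                      # unconditionally unstable/stable bounds
    q_over_t = np.sin(slope_rad) * hz / a_over_b    # critical q/T (1/m)
    return np.log10(q_over_t)
```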

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 177
221 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico

Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos

Abstract:

Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions for assessing the capacity, with the inconvenience of high computation time. In this study, a non-linear static analysis ('pushover analysis') was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: the end spans are 22 m long and the central span is 32 m long. The deck is 14 m wide and the concrete slab is 18 cm thick. The bridge is supported by frames of five piers with hollow box-shaped sections; each pier is 7.05 m high and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato, and the displacements experienced by the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers due to a critical earthquake in this zone is almost imperceptible compared with their displacement capacity, which is excessive owing to the geometry and reinforcement demanded by current design standards. According to the analysis, the frames built with five piers increase the rigidity of the bridge in the transverse direction. Hence it is proposed to reduce these frames from five piers to three piers, maintaining the same geometrical characteristics and the same reinforcement in each pier, as well as the mechanical properties of the materials (concrete and reinforcing steel). Once a pushover analysis was performed considering this configuration, it was concluded that the bridge would continue to exhibit adequate seismic behavior, at least for the 19 accelerograms considered in this study. In this way, costs in material, construction, time and labor would be reduced for this case study.
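
A minimal sketch of how a displacement-controlled capacity curve can be built for a single cantilever pier once a moment-curvature analysis has supplied its yield moment and a plastic-hinge rotation capacity; the elastic-perfectly-plastic hinge and the parameter names are illustrative assumptions, not the bridge model used in the study.

```python
import numpy as np

def pier_pushover_curve(h, m_y, phi_y, theta_p_max, n=50):
    """Base shear vs. top displacement for a cantilever pier with an
    elastic-perfectly-plastic plastic hinge at its base.

    h           : pier height (m)
    m_y         : yield moment of the base section (kN*m), from moment-curvature
    phi_y       : yield curvature of the base section (1/m)
    theta_p_max : plastic rotation capacity of the hinge (rad)
    Returns (displacements, base shears) defining the capacity curve.
    """
    v_y = m_y / h                       # lateral force at hinge yielding
    d_y = phi_y * h ** 2 / 3.0          # elastic tip displacement at yield
    theta_p = np.linspace(0.0, theta_p_max, n)
    d = d_y + theta_p * h               # plastic rotation adds rigid-body drift
    v = np.full_like(d, v_y)            # perfectly plastic plateau
    return np.concatenate(([0.0], d)), np.concatenate(([0.0], v))

# Example with illustrative pier values (7.05 m high pier)
disp, shear = pier_pushover_curve(h=7.05, m_y=9000.0, phi_y=0.003, theta_p_max=0.02)
```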

Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis

Procedia PDF Downloads 151
220 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low, therefore highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid Mass Spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted with toluene using Soxhlet extraction and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the performance, in terms of sensitivity, required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions. MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen because of its unique potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTrap 3200 working in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1 pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
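
A small illustrative calculation of the m/z expected for the characteristic [M-Cl+O]- fragment described above, taking 2,3,7,8-TCDD (C12H4Cl4O2) as an example; the monoisotopic masses are standard values, and this sketch is not the MRM method itself.

```python
# Monoisotopic masses (u) of the relevant isotopes
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "Cl": 34.968853}
ELECTRON = 0.000549

def monoisotopic(formula):
    """Sum monoisotopic masses for a composition given as {'C': 12, ...}."""
    return sum(MASS[el] * n for el, n in formula.items())

# 2,3,7,8-TCDD: C12 H4 Cl4 O2
tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}
m = monoisotopic(tcdd)

# [M - Cl + O]- : lose one chlorine, gain one oxygen, gain one electron
m_frag = m - MASS["Cl"] + MASS["O"] + ELECTRON
print(f"TCDD M = {m:.4f} u, [M-Cl+O]- m/z = {m_frag:.4f}")
```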

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 208
219 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images where the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, an approach to Non-Cooperative Target Identification based on the application of Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are used in the identification process. In the case of F2 the angle is weighted, since the top singular vectors contribute most to the formation of a target signal, whereas F1 simply uses the unweighted angle. In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so, to assess the feasibility of the approach, noise was added before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improves when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
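
A minimal sketch, assuming a standard subspace formulation, of the SVD-based identification idea described above: each aircraft's training profiles define a signal subspace, and a test profile is assigned to the class whose subspace forms the smallest angle with it (an unweighted, F1-style criterion); the energy threshold and function names are illustrative.

```python
import numpy as np

def signal_subspace(profiles, energy=0.95):
    """Left singular vectors spanning the signal subspace of a training matrix
    whose columns are range profiles of one aircraft (energy threshold is illustrative)."""
    u, s, _ = np.linalg.svd(profiles, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return u[:, :k]          # noise subspace (remaining vectors) is discarded

def identify(test_profile, subspaces):
    """Assign the test profile to the class whose signal subspace forms the
    smallest angle with it."""
    x = test_profile / np.linalg.norm(test_profile)
    angles = {name: np.arccos(np.clip(np.linalg.norm(u.T @ x), 0.0, 1.0))
              for name, u in subspaces.items()}
    return min(angles, key=angles.get)

# Usage sketch: subspaces = {"A320": signal_subspace(train_a320), ...}; identify(profile, subspaces)
```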

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 354
218 Exploring Fluoroquinolone-Resistance Dynamics Using a Distinct in Vitro Fermentation Chicken Caeca Model

Authors: Bello Gonzalez T. D. J., Setten Van M., Essen Van A., Brouwer M., Veldman K. T.

Abstract:

Resistance to fluoroquinolones (FQ) has increased over the years, posing a significant challenge for the treatment of human infections, particularly gastrointestinal tract infections caused by zoonotic bacteria transmitted through the food chain and environment. In broiler chickens, a relatively high proportion of FQ resistance has been observed in Escherichia coli indicator, Salmonella and Campylobacter isolates. We hypothesize that flumequine (Flu), used as a secondary choice for the treatment of poultry infections, could be associated with a high proportion of FQ resistance. To evaluate this hypothesis, we used an in vitro fermentation chicken caeca model. Two continuous single-stage fermenters were used to simulate in real time the physiological conditions of the chicken caecal microbial content (temperature, pH, caecal content mixing, and anoxic environment). A pool of chicken caecal content containing FQ-resistant E. coli obtained from chickens at slaughter age was used as inoculum, along with a spiked FQ-susceptible Campylobacter jejuni strain isolated from broilers. Flu was added to one of the fermenters (Flu-fermenter) every 24 hours for two days to evaluate the selection and maintenance of FQ resistance over time, while the other served as a control (C-fermenter). The experiment duration was 5 days. Samples were collected at three different time points: before, during and after Flu administration. Serial dilutions were plated on Butzler culture media with and without Flu (8 mg/L) and enrofloxacin (4 mg/L) and on MacConkey culture media with and without Flu (4 mg/L) and enrofloxacin (1 mg/L) to determine the proportion of resistant strains over time. Positive cultures were identified by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. A subset of the obtained isolates was used for whole genome sequencing analysis. Over time, E. coli exhibited positive growth in both fermenters, while C. jejuni growth was detected up to day 3. The proportion of Flu-resistant E. coli strains recovered remained consistent over time after antibiotic selective pressure, while in the C-fermenter a decrease was observed at day 5; a similar pattern was observed for the enrofloxacin-resistant E. coli strains. This suggests that Flu might play a role in the selection and persistence of enrofloxacin resistance compared with the C-fermenter, where enrofloxacin-resistant E. coli strains appear at a later time. Furthermore, positive growth was detected in both fermenters only on Butzler plates without antibiotics. A subset of C. jejuni strains from the Flu-fermenter revealed that those strains were susceptible to ciprofloxacin (MIC < 0.12 μg/mL). A selection of E. coli strains from both fermenters revealed the presence of plasmid-mediated quinolone resistance (PMQR) (qnr-B19) in only one strain from the C-fermenter, belonging to sequence type (ST) 48, and in all strains from the Flu-fermenter, which belonged to ST189. Our results showed a selective impact of Flu on PMQR-positive E. coli strains, while no effect was observed on C. jejuni. Maintenance of Flu resistance was correlated with antibiotic selective pressure. Further studies on antibiotic resistance gene transfer among commensal and zoonotic bacteria in the chicken caecal content may help to elucidate the resistance spread mechanisms.
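
A small arithmetic sketch of the plate-count calculations behind the resistant-proportion estimates described above (CFU/mL from a serial dilution, and the fraction growing on antibiotic-supplemented plates); the plated volume and colony counts are illustrative assumptions.

```python
def cfu_per_ml(colony_count, dilution_exponent, plated_volume_ml=0.1):
    """Colony-forming units per mL of the original caecal suspension.

    colony_count      : colonies counted on one plate
    dilution_exponent : e.g. 5 for a 10^-5 serial dilution
    plated_volume_ml  : volume spread on the plate (mL); 0.1 mL is an assumption
    """
    return colony_count / plated_volume_ml * 10 ** dilution_exponent

def resistant_fraction(cfu_with_antibiotic, cfu_without):
    """Proportion of the population recovered on antibiotic-supplemented plates."""
    return cfu_with_antibiotic / cfu_without

# Example: 47 colonies on the 10^-5 plate with flumequine, 380 colonies without
with_flu = cfu_per_ml(47, 5)
without_flu = cfu_per_ml(380, 5)
print(f"{with_flu:.2e} CFU/mL, resistant fraction {resistant_fraction(with_flu, without_flu):.2f}")
```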

Keywords: fluoroquinolone-resistance, escherichia coli, campylobacter jejuni, in vitro model

Procedia PDF Downloads 62
217 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMU) for their ability to correct estimates using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail because of dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability, and visual obstruction of the motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset from the origin of the center of the data points when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside structures containing ferrous material, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting data while rotating at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that it is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
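
A minimal sketch of the conventional static-rotation calibration mentioned above: the hard-iron offset is estimated as the center of a sphere fitted by linear least squares to readings collected while rotating the sensor in place; soft-iron scaling is not handled here, and the function is illustrative rather than the paper's estimator.

```python
import numpy as np

def hard_iron_offset(samples):
    """Estimate the hard-iron offset from raw magnetometer readings.

    samples : (N, 3) array of readings collected while rotating the sensor
    Returns (offset, radius); subtracting the offset removes the hard-iron bias.
    """
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    # Sphere fit: x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + (r^2 - |c|^2)
    a = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Usage sketch: center, _ = hard_iron_offset(raw_readings); corrected = raw_readings - center
```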

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 84
216 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form, even though robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The design based on the link function achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs are highlighted and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations of the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
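
A minimal sketch of the biased-coin step in a DBCD-type allocation, using the Hu-Zhang allocation function as one standard choice; in the CARA survival setting the target proportion would be recomputed from the sequentially estimated Cox regression coefficients, which is not shown here, and the tuning value is illustrative.

```python
def dbcd_prob(n_a, n_total, target_rho, gamma=2.0):
    """Probability of assigning the next patient to arm A under a
    doubly-adaptive biased coin design.

    n_a        : patients allocated to arm A so far
    n_total    : patients allocated so far
    target_rho : current estimate of the target allocation proportion for arm A
    gamma      : tuning parameter; larger values push allocation closer to target
    """
    if n_total == 0:
        return 0.5                                   # start with equal randomization
    x = min(max(n_a / n_total, 1e-6), 1 - 1e-6)      # guard against division by zero
    num = target_rho * (target_rho / x) ** gamma
    den = num + (1 - target_rho) * ((1 - target_rho) / (1 - x)) ** gamma
    return num / den

# Example: 12 of 30 patients currently on arm A, interim target proportion 0.6
print(dbcd_prob(12, 30, 0.6))
```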

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 165
215 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools because of their inherent high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change of the charge transfer resistance (Rct) as a result of the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10^1 to 10^4 CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor's ability to recognize various strains of staphylococci was also successfully demonstrated with clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. A marked change of the Rct was observed only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus. In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria such as Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and no significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus, from 10^8 to 10^0 CFU/mL, at a ratio of 1:10 in nasal swab matrices collected from healthy donors, and three different sensors were used to measure the various bacterial concentrations. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared with time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research focuses on assessing its analytical performance in different biological samples and on discovering new phage bioreceptors.
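
A minimal sketch of how a calibration over the reported linear range could be fitted and then inverted to estimate the bacterial load of an unknown sample from its Rct change; the numerical values are placeholders, not the study's measured data.

```python
import numpy as np

def fit_calibration(log_cfu, delta_rct):
    """Least-squares fit of delta_Rct against log10(CFU/mL) over the linear range."""
    slope, intercept = np.polyfit(log_cfu, delta_rct, 1)
    return slope, intercept

def estimate_cfu(delta_rct, slope, intercept):
    """Invert the calibration to estimate the bacterial load of an unknown sample."""
    return 10 ** ((delta_rct - intercept) / slope)

# Hypothetical calibration points across the 10^1-10^4 CFU/mL linear range
log_cfu = np.array([1.0, 2.0, 3.0, 4.0])
delta_rct = np.array([120.0, 250.0, 380.0, 505.0])   # placeholder ohm values
s, b = fit_calibration(log_cfu, delta_rct)
print(f"~{estimate_cfu(300.0, s, b):.0f} CFU/mL for a 300-ohm Rct change")
```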

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 66
214 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability

Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris

Abstract:

Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty of measuring groups of compounds rather than individual compounds. Without the available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements to allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL's hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals; an overview of these capabilities is given in this paper. As part of the EMPIR hydrogen co-normative project 'Metrology for sustainable hydrogen energy applications', NPL is developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a 'total sulphur' measurement). This is achieved by producing a suite of stable, gravimetrically prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study), used in conjunction with novel dynamic dilution facilities to enable the generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required, as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
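
A small arithmetic sketch of the dynamic dilution principle referred to above, assuming ideal mixing of a gravimetric parent standard with purified hydrogen; the flows and amount fractions are illustrative, not the facility's actual settings.

```python
def diluted_amount_fraction(x_parent, flow_parent, flow_diluent):
    """Amount fraction after one dynamic dilution stage (ideal mixing assumed).

    x_parent     : sulphur amount fraction in the parent standard (e.g. nmol/mol)
    flow_parent  : parent gas flow (e.g. mL/min)
    flow_diluent : diluent hydrogen flow (same units)
    """
    return x_parent * flow_parent / (flow_parent + flow_diluent)

# Example: a 100 nmol/mol parent standard diluted 5 mL/min into 495 mL/min hydrogen,
# then a second identical stage to reach the pmol/mol range
x1 = diluted_amount_fraction(100.0, 5.0, 495.0)   # 1 nmol/mol
x2 = diluted_amount_fraction(x1, 5.0, 495.0)      # 0.01 nmol/mol = 10 pmol/mol
print(x1, x2)
```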

Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector

Procedia PDF Downloads 225
213 Community Participation and Place Identity as Mediators on the Impact of Resident Social Capital on Support Intention for Festival Tourism

Authors: Nien-Te Kuo, Yi-Sung Cheng, Kuo-Chien Chang

Abstract:

Cultural festival tourism is now seen by many as an opportunity to facilitate community development because it has significant influences on the economic, social, cultural, and political aspects of local communities. The potential of tourist attractions has been recognized by governments as a useful tool to strengthen local economies. However, most community festivals in Taiwan are short-lived, often lasting only a few years or not making it past a one-off event. Researchers have suggested that most governments and other stakeholders do not recognize the importance of building a partnership with residents when developing community tourism. Thus, sustainable community tourism development remains a key issue in the existing literature. The success of community tourism is related to the attitudes and lifestyles of local residents, and in order to maintain sustainable tourism, residents need to be seen as development partners. Residents' support intention for tourism development not only helps to increase awareness of local culture, history, the natural environment, and infrastructure, but also improves the interactive relationship between the host community and tourists. Furthermore, researchers have identified social capital theory as the core of sustainable community tourism development. The social capital of residents has been seen as a good way to solve issues of tourism governance, to forecast participation behavior and to improve residents' support intention. In addition, previous studies have pointed out the role of community participation and place identity in increasing residents' support intention for tourism development. Community participation refers to how much residents participate during tourism development and is mainly influenced by individual interest, while a lack of place identity is one of the main reasons that community tourism becomes a mere formality and is not sustainable. Scholars believe that the place identity of residents is the soul of community festivals: it shows the community spirit to visitors and has significant impacts on tourism benefits and on residents' support intention in community tourism development. Although the importance of community participation and place identity has been confirmed by both governmental and non-governmental organizations, real-life execution still needs to be improved. This study aimed to use social capital theory to investigate the social structure among community residents, participation levels in festival tourism, degrees of place identity, and residents' support intention for future community tourism development, and the causal relationships that these factors have with cultural festival tourism. A quantitative research approach was employed to examine the proposed model, and structural equation modeling was used to test and verify the proposed hypotheses. This was a case study of the Kaohsiung Zuoying Wannian Folklore Festival, held in the Zuoying District of Kaohsiung City, Taiwan; the target population was residents who attended the festival. The results reveal significant correlations among social capital, community participation, place identity and support intention. The results also confirm that the impacts of social capital on support intention are significantly mediated by community participation and place identity. Practical suggestions are provided for tourism operators and policy makers.
This work was supported by the Ministry of Science and Technology of Taiwan, Republic of China, under the grant MOST-105-2410-H-328-013.
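
A minimal sketch, outside any particular SEM package, of the kind of mediation check reported above: the indirect effect of social capital on support intention through a mediator (community participation or place identity) is estimated as a product of regression coefficients and bootstrapped for a confidence interval; the variable names and this simplified approach are illustrative, not the study's exact estimation.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients (a*b) estimate of the effect of x on y through m."""
    a = np.polyfit(x, m, 1)[0]                        # x -> mediator
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # m -> y, controlling for x
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap 95% confidence interval for the indirect effect;
    x, m, y are numpy arrays of respondent scores (e.g. social capital,
    community participation, support intention)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample respondents
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(estimates, [2.5, 97.5])
```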

Keywords: community participation, place identity, social capital, support intention

Procedia PDF Downloads 326
212 Challenges for Persons with Disabilities During COVID-19 Pandemic in Thailand

Authors: Tavee Cheausuwantavee

Abstract:

The COVID-19 pandemic has significantly impacted everyone's life. Persons with disabilities (PWDs) in Thailand have also been affected by the COVID-19 situation in many aspects of their lives, while appropriate services from the government and providers have been lacking. Research projects have focused only on health precaution and protection, and rapid needs assessments of populations and vulnerable groups were limited and conducted via social media and online surveys. However, little is known about the real problems and needs of Thai PWDs during the COVID-19 pandemic that would allow an effective plan and integrated services for those PWDs. Therefore, this study aims to explore the diverse problems and needs of Thai PWDs in the COVID-19 pandemic. Results from the study can be used by the government and other stakeholders for further effective services. Methods: This study used a mixed-method design consisting of both quantitative and qualitative measures. For the quantitative approach, 744 PWDs and caregivers of all types of PWDs were selected by proportional multistage stratified random sampling according to their disability classification and geographic location. Questionnaires with 59 items regarding participant characteristics, problems, and needs in health, education, employment, and other social inclusion were distributed to all participants; some caregivers completed the questionnaires when PWDs were not able to do so due to limited communication and/or literacy skills. Completed questionnaires were analyzed with descriptive statistics. For the qualitative design, 62 key informants who were PWDs or caregivers were selected by purposive sampling. Ten focus groups, each consisting of 5-6 participants, and 7 in-depth interviews drawn from all the groups identified above were conducted by researchers across five regions. The focus group and in-depth interview guidelines contained 6 items regarding problems and needs in health, education, employment, other social inclusion, and coping during the COVID-19 pandemic. Data were analyzed using a modification of thematic content analysis. Results: Both the quantitative and qualitative studies showed that PWDs and their caregivers had significant problems and needs in all aspects of their lives, including income and employment opportunities, daily living and social inclusion, health, and education, respectively. These problems and needs were related to each other, forming a vicious cycle. Participants also drew positive lessons from the pandemic, including improved health protection, financial planning, family cohesion, and virtual technology literacy and innovation. Conclusion and implications: There have been challenges facing all life aspects of PWDs in Thailand during the COVID-19 pandemic, particularly incomes and daily living. These challenges form a vicious and complicated cycle, yet participants also learned positive lessons from the pandemic. Recommendations for the government and stakeholders in the COVID-19 pandemic for PWDs are the following. First, the health protection strategy and policy for PWDs should be promoted together with other quality-of-life development, including income generation, education and social inclusion. Second, virtual technology and alternative innovations should be enhanced for proactive service providers. Third, accessible information for all PWDs during the pandemic must be ensured. Fourth, lessons learned from the pandemic should be shared and disseminated for crisis preparedness and a positive mindset in a disruptive world.

Keywords: challenge, COVID-19, disability, Thailand

Procedia PDF Downloads 77