Search results for: ASEAN member states
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3646

76 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or a lack of trust in the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether it was supported or not, and the year the applicant entered Denmark. In order to extract those features, as well as the final decision of the RAB, from the text summaries, we applied natural language processing and regular expressions adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever those were made in the case. The appearance (or not) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant’s country of origin, religion, ethnicity, and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. These features thus appear to carry potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is underway.
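
The classification step described above can be illustrated with a short, self-contained sketch. This is not the authors' code: the file name, column names, and preprocessing choices are hypothetical, and the accuracies obtained on real RAB data would differ; it only shows how categorical applicant features can feed a random forest or a decision tree in scikit-learn.

```python
# Minimal sketch: predict asylum decisions from categorical applicant features.
# Data, file name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

cases = pd.read_csv("rab_case_features.csv")          # hypothetical file
features = ["country_of_origin", "religion", "ethnicity", "year_of_entry"]
X, y = cases[features], cases["decision"]             # decision: granted / rejected

encode = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), features)]
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("decision tree", DecisionTreeClassifier(random_state=0))]:
    model = Pipeline([("encode", encode), ("clf", clf)])
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")
```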

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 95
75 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies and the business of rating organisations on their environmental, social and governance (ESG) performance has grown tremendously in its public interest. ESG data is now commonly integrated into traditional investment analysis and an important factor in investment decisions. Creating these metrics, however, is inherently challenging as environmental and social impacts are hard to measure and uniform requirements on ESG reporting are lacking. ESG metrics are often incomplete and inconsistent as they lack fully accepted reporting standards and are often of qualitative nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, the study adopts the Y02 patent classification, a fully cross-sectional tagging scheme that is fully incorporated in the Cooperative Patent Classification (CPC), to identify Climate Change Adaptation Technologies. The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides means to examine technologies in the field of mitigation and adaptation to climate change across relevant technologies. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents. Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, as well as the backward citations and the size of the focal patent family. Third, we conduct our analysis on a firm level by sector for a sample of companies from different industries and compare the derived sustainability performance metrics with the firms’ environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is the development of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and putting them into perspective to conventional financial performance metrics. We further provide insights into the environmental performance of companies on a sector level. This study has implications of both academic and practical nature. Academically, it contributes to the research on eco-innovation and the literature on innovation and intellectual property (IP). Practically, the study has implications for policymakers by deriving meaningful insights into the environmental performance from an innovation and IP perspective. Such metrics are further relevant for investors and potentially complement existing ESG data.
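
For readers who want a concrete picture of the three-step approach described above, the following is a minimal, hypothetical sketch: filter a patent table to Y02 (CCMAT) patents, score them with standard patent-analytics metrics, and aggregate to a firm-level indicator. The column names and the simple equal weighting are illustrative assumptions, not the authors' method.

```python
# Step 1: identify CCMAT patents (Y02 CPC tag); Step 2: score technological
# strength from citations and family size; Step 3: aggregate per firm.
import pandas as pd

patents = pd.read_csv("firm_patents.csv")   # hypothetical: one row per patent

# Step 1: keep patents carrying a Y02 CPC class
ccmat = patents[patents["cpc_classes"].str.contains("Y02", na=False)].copy()

# Step 2: simple min-max normalised strength score (illustrative only)
for col in ["forward_citations", "backward_citations", "family_size"]:
    rng = ccmat[col].max() - ccmat[col].min()
    ccmat[f"{col}_norm"] = (ccmat[col] - ccmat[col].min()) / rng if rng else 0.0
ccmat["strength"] = ccmat[["forward_citations_norm",
                           "backward_citations_norm",
                           "family_size_norm"]].mean(axis=1)

# Step 3: firm-level indicator, ready to be set against emissions and revenue data
firm_level = ccmat.groupby("firm_id").agg(
    ccmat_patents=("strength", "size"),
    mean_strength=("strength", "mean"),
)
print(firm_level.head())
```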

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 83
74 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields an unparalleled therapeutic value within patient-physician interactions. Medical research is inundated with evidence to support that a physician’s ability to empathise with patients leads to a greater willingness to report symptoms, an improvement in diagnostic accuracy and safety, and better adherence and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, which is one of the six main goals of a 21st century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines amongst students and physicians over time. The three most impactful contributors to this decline are: (1) disagreements over the definitions of empathy, making it difficult to implement in practice; (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether; and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience, limiting some physicians from using ‘too much’ empathy for fear of losing clinical objectivity. These issues were investigated by conducting a fully inductive thematic narrative analysis of patient narratives in the UK to evaluate the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study included uncovering the factors that affect the experience of empathy within provider-patient interactions and analysing their effects on patient care. This research contributes uniquely to this discourse by examining the phenomenon of empathy directly from patients’ experiences, which were systematically extracted from a repository of online patient narratives of care titled ‘CareOpinion UK’. Narrative analysis was specifically chosen as the methodology to examine narratives from a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribed meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and the threat to life. The following six themes were found to be the most impactful in influencing the experience of empathy: dismissive behaviours, judgmental attitudes, undermining patients’ pain or concerns, holistic care, and failures and successes of communication or language. For each theme there were overarching themes relating to either a failure to understand the patient’s perspective or a success in taking a person-centred approach. An in-depth analysis revealed that a lack of empathy was greatly associated with an emotive-cognitive imbalance, which disengaged physicians from their patients’ emotions. This study therefore concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy clues within interactions to engage with patients’ situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication according to the patient’s individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 137
73 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), which is a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution), are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted in the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes in contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy metal nitrate complexes on the other hand. The decomposition of the red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphate. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in liquid phase. However, little is known about the thermodynamic properties, like standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂ UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate the thermodynamic properties with Quantum Methods (QM). Thus, in the first part of our project, we focused on the mono, di, and tri-butyl complexes. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and TBP in gas and liquid phases. In the gas phase, the optimal structures of all species were optimized using the B3LYP density functional. Triple-ζ def2-TZVP basis sets were used for all atoms. All geometries were optimized in the gas-phase, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 Mpa. Accurate single point energies were calculated using the efficient localized LCCSD(T) method to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H2MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we have investigated the fundamental properties of three organic species that mostly contribute to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
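
As a purely illustrative aside (not the authors' workflow), the final step of such a study, turning computed reaction enthalpies into a standard enthalpy of formation, is simple Hess's-law bookkeeping once reference species are chosen. The sketch below uses made-up numbers; the working reaction, the reference values, and the roughly 15 kJ mol⁻¹ uncertainty quoted above would come from the actual LCCSD(T)/CBS, B3LYP thermal-correction, and COSMO-RS calculations.

```python
# Hess's-law bookkeeping: given a computed reaction enthalpy dH_rxn and known
# formation enthalpies for every species except the target, solve for the
# target's standard enthalpy of formation. All numbers are placeholders.

def formation_enthalpy(dH_rxn, products, reactants, target):
    """products/reactants: dict name -> (stoichiometry, dHf or None for target)."""
    known = sum(nu * dHf for nu, dHf in products.values() if dHf is not None) \
          - sum(nu * dHf for nu, dHf in reactants.values() if dHf is not None)
    nu_target, _ = products[target] if target in products else reactants[target]
    sign = 1.0 if target in products else -1.0
    # dH_rxn = sum(products) - sum(reactants): isolate the unknown term
    return (dH_rxn - known) * sign / nu_target

# Hypothetical working reaction producing the target species "TBP(g)":
dH_rxn = -120.0  # kJ/mol, would come from LCCSD(T)/CBS electronic energies
                 # plus harmonic thermal corrections at 298.15 K
reactants = {"A(g)": (1, -250.0), "B(g)": (3, -80.0)}   # made-up references
products  = {"TBP(g)": (1, None), "C(g)": (2, -40.0)}

print(f"dHf(target) = {formation_enthalpy(dH_rxn, products, reactants, 'TBP(g)'):.1f} kJ/mol")
```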

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 188
72 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition – i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated - something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way imply the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers of reality, i.e., the Imaginary - Symbolic - Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication - repetition, not in the sense of "copying" and "repetition" of reality or the original, but of repeated repetitions of the simulacrum. Referential works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience) - repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space, time, and experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as a "quality of quantity." The video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present - to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between" - points of connection and separation, of continuity and stopping - by reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 173
71 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary alternative given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems by using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. A DNN with one hidden layer was initialized, and the number of hidden layers was increased during training up to five. A refinement, consisting of updates to the weight matrices and bias terms, and Stochastic Gradient Descent (SGD) training were also performed. The objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
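
As a rough illustration of two pieces of the pipeline above (scoring hypotheses by Word Error Rate and filtering automatic transcriptions by a confidence threshold), the sketch below is a minimal stand-in, not the authors' system; the confidence values and the threshold are placeholders for the lattice-based scores described in the abstract.

```python
# Minimal WER (word-level edit distance) and confidence-based selection of
# pseudo-labelled utterances for retraining. Purely illustrative.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Utterances decoded by the seed model: (automatic transcription, confidence).
# In the study the confidence would come from lattice graph/acoustic costs.
decoded = [("hola buenos dias", 0.91), ("no se entiende nada", 0.42)]
THRESHOLD = 0.8   # placeholder value
selected = [(text, c) for text, c in decoded if c >= THRESHOLD]
print("kept for retraining:", selected)
print("example WER:", wer("hola buenos días", "hola buenos dias"))
```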

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 340
70 Basic Education Curriculum in South- South Nigeria: Challenges and Opportunities of Quality Contents in the Second Language Learning

Authors: Catherine Alex Agbor

Abstract:

The modern Nigerian society is dynamic, divided in zones based on economic, political and educational resources often shared across the zones. The Six Geopolitical Zones in Nigeria is a major division in modern Nigeria, created during the regime of president Ibrahim Badamasi Babangida. They are North Central, North East, North West, South East, South South and South West. However, the zone used in this study is known as former South-Eastern State of Akwa-Ibom State and Cross-River State; former Rivers State of Bayelsa State and Rivers State; and former Mid-Western Region, Nigeria of Delta State and Edo State. Many reforms have taken place overtime, particularly in the education sector. Education is constantly presenting new ideas and innovative approaches which act to facilitate the rapid exchange of knowledge and provide quality basic education for learners. The Federal Government of Nigeria in accordance with its National Council on Education directed the Nigerian Educational Research and Development Council to restructure its basic education curriculum with the hope to enable the nation meet national and global developmental goals. One of the goals of the 9-year Basic Education Programme is developing in the entire citizenry a strong consciousness for education and a strong commitment to its vigorous promotion. Another is ensuring the acquisition of appropriate levels of literacy, numeracy, manipulative, communicative and life-skills as well as the ethical, moral and civic values for laying a solid foundation for lifelong learning. Therefore, this article at the introductory stage is aimed to describe some key issues in Nigeria’s experience in the basic education curriculum. In this study, particular attention is paid to this very recent educational policy of the Nigerian government known as Universal Basic Education, its challenges and what can be done to make the policy achieve its desired objectives. It progresses to analyze modern requirements for second language teaching; and presents the challenges of second language teaching in Nigeria. Finally, it reports a study which investigated special efforts for appropriate achievement of quality education in language classroom in the south-south zone of Nigeria. One fundamental research question was posed on what educational practices can contribute to current understanding of the structure of language curriculum. More explicitly, the study was designed to analyze the extent to which quality content contributes to current understanding of the structure of school curriculum in the zone. Otherwise stated, it investigated how student-centred educational practices impact on their learning of French language. One hundred and eighty (180) participants (teachers) were purposefully sampled for the study. Qualitative technique was used to elicit information from participants. The qualitative method used was Focus Group Discussion (FGD). Participants were divided into six groups comprising of 30 teachers from each zone. Group discussions were based mainly on curriculum contents and practices. Information from participants revealed that the curriculum content, among others is inadequate and should be re-examined. Recommendations were proffered as a panacea to concrete implementation of the basic education in Nigeria.

Keywords: basic education, quality contents, second language, south-south states

Procedia PDF Downloads 241
69 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovation in the twentieth century music was the heterophony–hypostasis of the vertical syntax entered into the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of its growth, which means a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily take into account the chronology: there are highly complex primary stages and advanced stages of returning to the simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework, proper to the parlando-rubato rhythmics. Within the general framework of the multivocal organization, the heterophonic syntax in elaborate (academic) version has imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple, if we consider the causal relationship between the sound vocabulary elements – in this case, the modalism – and the typologies of vertical organization appropriate for it. Therefore, adding up the ‘classic’ pathway of the writing typologies (monody – polyphony – homophony), heterophony - applied equally to the structures of modal, serial or synthesis vocabulary – reclaims necessarily an own macrotemporal form, in the sense of the analogies enshrined by the evolution of the musical styles and languages: polyphony→fugue, homophony→sonata. Concerned about the prospect of edifying a new musical ontology, the composer Ştefan Niculescu experienced – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolation of this phenomenon in macrostructural plan, reaching this way to the unique form of ‘synchrony’. Founded on coincidentia oppositorum principle (involving the ‘one-multiple’ binom), the sound architecture imagined by Ştefan Niculescu consists in one (temporal) model / algorithm of articulation of two sound states: 1. monovocality state (principle of identity) and 2. multivocality state (principle of difference). In this context, the heterophony becomes an (auto)generative mechanism, with macrotemporal amplitude, strategy that will be grown by the composer, practically throughout his creation (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Simphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the complexity level achieved by this type of vertical syntax in the twentieth century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 345
68 A Geospatial Approach to Coastal Vulnerability Using Satellite Imagery and Coastal Vulnerability Index: A Case Study Mauritius

Authors: Manta Nowbuth, Marie Anais Kimberley Therese

Abstract:

The vulnerability of coastal areas to storm surges stands as a critical global concern. The increasing frequency and intensity of extreme weather events have increased the risks faced by communities living along coastlines worldwide. Small island developing states (SIDS) stand out as being exceptionally vulnerable: their coastal regions, where human habitation meets natural forces, are on the frontlines of climate-induced challenges, and the intensification of storm surges underscores the urgent need for a comprehensive understanding of coastal vulnerability. With limited landmass, low-lying terrain, and a heavy reliance on coastal resources, SIDS face amplified vulnerability to the consequences of storm surges, and the delicate balance between human activities and environmental dynamics in these island nations increases the urgency of tailored strategies for assessing and mitigating coastal vulnerability. This research applies a geospatial approach to evaluate the vulnerability of coastal communities in Mauritius. The satellite imagery analysis makes use of Sentinel satellite imagery, the modified normalised difference water index, classification techniques, and the DSAS add-on to quantify the extent of shoreline erosion or accretion, providing a spatial perspective on coastal vulnerability. The Coastal Vulnerability Index (CVI) is applied following the Gornitz et al. formula; this index considers factors such as coastal slope, sea level rise, mean significant wave height, and tidal range. Weighted assessments identify regions with varying levels of vulnerability, ranging from low to high. The study was carried out in a village located in the south of Mauritius, namely Rivière des Galets, with a population of about 500 people over an area of 60,000 m². Because Rivière des Galets lies in the south, and the southern coast of Mauritius is exposed to the open Indian Ocean, the village is vulnerable to swells. The swells generated by the southeast trade winds can lead to large waves and rough sea conditions along the southern coastline, which affects coastal activities, including fishing, tourism, and coastal infrastructure. On the one hand, the results highlighted that, over a 123 km stretch of coastline, the linear regression rate for the 5-year span varies from -24.1 m/yr to 8.2 m/yr; the maximum rate of change in terms of eroded land is -24 m/yr and the maximum rate of accretion is 8.2 m/yr. On the other hand, the coastal vulnerability index varies from 9.1 to 45.6 and was categorised into low, moderate, high, and very high risk zones. It was observed that regions which lack protective barriers and consist of sandy beaches are categorised as high-risk zones; it is therefore imperative to prioritise high-risk regions for immediate attention and intervention, as they will most likely be exposed to coastal hazards and impacts from climate change, which demands proactive measures for enhanced resilience and sustainable adaptation strategies.
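
For context, a Gornitz-style CVI combines ranked risk variables as the square root of their product divided by the number of variables. The sketch below is a generic illustration with made-up ranks (1 = very low risk to 5 = very high risk); the actual variables, ranking thresholds, and weights used for Rivière des Galets are those described in the study, not these.

```python
# Generic Gornitz-style Coastal Vulnerability Index:
#   CVI = sqrt( (r1 * r2 * ... * rn) / n )
# where each r is a risk rank (e.g. 1-5). Values below are illustrative only.
import math

def cvi(ranks):
    return math.sqrt(math.prod(ranks) / len(ranks))

# Hypothetical shoreline segment ranked on four of the factors named above
segment = {
    "coastal_slope": 4,                # gentle slope -> higher risk rank
    "sea_level_rise": 3,
    "mean_significant_wave_height": 5,
    "tidal_range": 2,
}
print(f"CVI = {cvi(list(segment.values())):.1f}")   # sqrt(120/4) ~= 5.5
```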

Keywords: climate change, coastal vulnerability, disaster management, remote sensing, satellite imagery, storm surge

Procedia PDF Downloads 9
67 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development

Authors: Walter E. Allen, Robert D. Murray

Abstract:

Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries or transit agencies can benefit from the implementation of smart card AFC technology with the promise of increased passenger convenience, added passenger satisfaction and improved agency efficiency. For transit agencies, it reduces revenue loss, improves passenger flow and bus stop data. For countries, further implementation into security, distribution of social services or currency transactions can provide greater benefits. However, small countries or transit agencies cannot afford expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare product online with prepaid debit or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day Pilot Operation conducted, the transit agency gathered the data from the bus AcuFare 200 Card Reader, loads (copies) the data to a USB Thumb Drive and uploads the data to the Acumen Host Processing Center for consolidation of the data into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders. It was proven that open standards and interoperability design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid the implementation of the Payment Card Industry (PCI) Data Security Standards (DSS) which is expensive to develop and costly to operate on a continuing basis. Due to the substantial additional complexities of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results for a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States. This grows by 10 fold when considering small countries or rural locations throughout Latin American and the world. Acumen is evaluating additional countries, sites or transit agency that can benefit from the smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The Project demonstrated the ability to purchase fare value, rides and passes with credit cards on the internet at a reasonable cost without highly complex security requirements.

Keywords: automatic fare collection, near field communication, small transit agencies, smart cards

Procedia PDF Downloads 283
66 The Shared Breath Project: Inhabiting Each Other’s Words and Being

Authors: Beverly Redman

Abstract:

With the Theatre Season of 2020-2021 cancelled due to COVID-19 at Purdue University, Fort Wayne, IN, USA, faculty directors found themselves scrambling to create theatre production opportunities for their students in the Department of Theatre. Redman, Chair of the Department, found her community to be suffering from anxieties brought on by a confluence of issues: the global-scale COVID-19 pandemic, the United States’ Black Lives Matter protests erupting in cities all across the country, and the coming Presidential election, arguably the most important and most contentious in the country’s history. Redman wanted to give her students the opportunity to speak not only on these issues but also to record who they were at this time in their personal lives, as well as in this broad socio-political context. She also wanted to invite them into an experience of feeling empathy at a time when empathy in this world seems to be sorely lacking. Returning to a mode of Devising Theatre she had used with community groups in the past, in which storytelling and re-enactment of participants’ life events combined with oral history documentation practices, Redman planned The Shared Breath Project. The process involved three months of workshops, in which participants alternated between theatre exercises and oral history collection and documentation activities as a way of generating original material for a theatre production. The goal of the first half of the project was for each participant to produce a solo piece in the form of a monologue after many generations of potential material born out of games, improvisations, interviews, and the like. Along the way, many film and audio clips recorded the process of each person’s written documentation—documentation prepared by the subject him or herself but also by others in the group assigned to listen, watch, and record. Then, in the second half of the project, once each participant had taken their own contributions from raw improvisatory self-presentations through the stages of composition and performative polish, participants exchanged their pieces. The second half of the project involved taking on each other’s words, mannerisms, gestures, melodic and rhythmic speech patterns and inhabiting them through the rehearsal process as their own, thus the title, The Shared Breath Project. Here, in stage two, the acting challenges evolved to be those of capturing the other and becoming the other through accurate mimicry that embraces Denis Diderot’s concept of the Paradox of Acting, in that the actor is both seeming and being simultaneously. This paper shares the carefully documented process of making the live-streamed theatre production that resulted from these workshops, writing processes, and rehearsals, forming The Shared Breath Project, which ultimately took the students’ realist, life-based pieces and edited them into a single unified theatre production. The paper also utilizes research on the Paradox of Acting, putting a Post-Structuralist spin on Diderot’s theory. Here, the paper suggests the limitations of inhabiting the other by allowing that the other is always already a thing impenetrable but nevertheless worthy of unceasing empathetic striving and delving, in an epoch in which slow, careful attention to our fellows is in short supply.

Keywords: otherness, paradox of acting, oral history theatre, devised theatre, political theatre, community-based theatre, peoples’ theatre

Procedia PDF Downloads 184
65 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
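
To make the general setup concrete, the sketch below shows how a pre-trained bidirectional encoder can be wrapped for sequence classification with the Hugging Face transformers API. It is not the authors' model: the checkpoint identifier, the k-mer size, the label set, and the preprocessing are placeholders, and the assumption that the tokenizer expects DNABERT-style space-separated k-mers is ours, not taken from the paper.

```python
# Minimal sketch: a bidirectional encoder used for gene-expression
# classification. Checkpoint name, k-mer size and labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "path/or/hub-id-of-a-dna-bert-checkpoint"   # placeholder, not a real ID

def to_kmers(seq: str, k: int = 6) -> str:
    """Overlapping k-mers separated by spaces (DNABERT-style input, assumed)."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

sequence = "ATGCGTACGTTAGCATGCCGTA"          # toy circuit fragment
inputs = tokenizer(to_kmers(sequence), return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits          # attention over the whole context
probs = torch.softmax(logits, dim=-1)
print("class probabilities:", probs.squeeze().tolist())
```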

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 59
64 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens array have numerous engineering and industrial applications for collimation of laser diodes, imaging devices for sensor system (CCD/CMOS, document copier machines etc.), for making beam homogeneous for high power lasers, a critical component in Shack-Hartmann sensor, fiber optic coupling and optical switching in communication technology. Also micro-optical components have become an alternative for applications where miniaturization, reduction of alignment and packaging cost are necessary. The compliance with high-quality standards in the manufacturing of micro-optical components is a precondition to be compatible on worldwide markets. Therefore, high demands are put on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reason, technique should be fast, simple (for production reason), and robust with high resolution. The technique should provide non contact, non-invasive and full field information about the shape of micro- optical component under test. The interferometric techniques are noncontact type and non invasive and provide full field information about the shape of the optical components. The conventional interferometric technique such as holographic interferometry or Mach-Zehnder interferometry is available for characterization of micro-lenses. However, these techniques need more experimental efforts and are also time consuming. Digital holography (DH) overcomes the above described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through the transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. Also one can reconstruct the complex object wavefront at different depths due to numerical reconstruction. Digital holography provides axial resolution in nanometer range while lateral resolution is limited by diffraction and the size of the sensor. In this paper, Mach-Zehnder based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that the distortions due to aberrations in the optical system are avoided by the interferometric comparison of reconstructed phase with and without the object (microlens array). In the experiment, first a digital hologram is recorded in the absence of sample (microlens array) as a reference hologram. Second hologram is recorded in the presence of microlens array. The presence of transparent microlens array will induce a phase change in the transmitted laser light. Complex amplitude of object wavefront in presence and absence of microlens array is reconstructed by using Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of object wave in presence and absence of microlens array. Phase difference between the two states of object wave will provide the information about the optical path length change due to the shape of the microlens. By the knowledge of the value of the refractive index of microlens array material and air, the surface profile of microlens array is evaluated. The Sag of microlens and radius of curvature of microlens are evaluated and reported. The sag of microlens agrees well within the experimental limit as provided in the specification by the manufacturer.
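
A rough numerical illustration of the reconstruction step described above (not the authors' processing chain): once the phase maps with and without the microlens array are available, the phase difference gives the optical path difference and, through the refractive-index contrast, the surface sag. The wavelength, refractive index, lenslet geometry, and synthetic phase data below are assumptions made only for the sketch; real phase maps would come from Fresnel reconstruction of the recorded holograms and would need phase unwrapping.

```python
# From two phase maps (with / without microlens array) to a sag profile:
#   t(x, y) = dphi * lambda / (2 * pi * (n_lens - n_air))
# Synthetic data only, standing in for reconstructed holographic phase maps.
import numpy as np

wavelength = 632.8e-9      # He-Ne laser, assumed
n_lens, n_air = 1.46, 1.0  # fused-silica-like lens material, assumed

x = np.linspace(-50e-6, 50e-6, 256)
X, Y = np.meshgrid(x, x)
sag_true = np.clip(5e-6 - (X**2 + Y**2) / (2 * 250e-6), 0, None)  # toy lenslet
phase_with = 2 * np.pi * (n_lens - n_air) * sag_true / wavelength
phase_without = np.zeros_like(phase_with)

dphi = phase_with - phase_without
sag = dphi * wavelength / (2 * np.pi * (n_lens - n_air))

print(f"recovered sag: {sag.max()*1e6:.2f} um (ground truth {sag_true.max()*1e6:.2f} um)")
# radius of curvature of a plano-convex lenslet: R ~ r^2 / (2 * sag) for small sag
r = 50e-6
print(f"approx. radius of curvature: {r**2 / (2 * sag.max()) * 1e6:.0f} um")
```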

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 499
63 Urban Sprawl: A Case Study of Suryapet Town in Nalgonda District of Telangana State, a Geoinformatic Approach

Authors: Ashok Kumar Lonavath, V. Sathish Kumar

Abstract:

Urban sprawl is the uncontrolled and uncoordinated outgrowth of towns and cities. The process of urban sprawl can be described by changes in pattern over time, such as the increase in built-up surface relative to population, leading to rapid urban spatial expansion. Significant economic and livelihood opportunities in urban areas result in a lack of basic amenities due to unplanned growth. The patterns, processes, dynamic causes, and consequences of sprawl can be explored and designed with the help of a spatial planning support system. In the Indian context, an urban area is defined by a population of more than 5,000, a density of more than 400 persons per sq. km, and 75% of the population being involved in non-agricultural occupations. India’s urban population is increasing at the rate of 2.35% per annum. The Class I towns’ population of India, according to the 2011 census, is 18.8%, which accounts for 60.4% of the total urban population. Similarly, in erstwhile Andhra Pradesh it is 22.9%, which accounts for 68.8% of the total urban population. Suryapet town has historical recognition as the ‘Gateway of Telangana’ in the Indian state of Andhra Pradesh. The municipality was constituted in 1952 as Grade-III, later upgraded to Grade-II in 1984 and to Grade-I in 1998. The area is 35 sq. km. Three major tanks are located in three different directions, and the Musi River flows at a distance of 8 km. The average groundwater table is about 50 m below ground. It is a fast-growing town with a population of 1,06,805 and 25,448 households. The density is 3,051 persons per sq. km, and it is a Class I city as per the population census. It secured the ISO 14001:2004 certificate for establishing and maintaining an environment-friendly system for solid waste disposal. It is the first municipality in the country to receive such a certificate. It won a HUDCO award for environment management, an award of appreciation and cash from the Ministry of Housing and Urban Poverty Alleviation of the Government of India and from undivided Andhra Pradesh under the UN Human Settlement Programme, a Greentech Excellence award, and the Supreme Court’s appreciation for solid waste management. Foreign delegates from different countries, and also from various other states of India, visited Suryapet municipality for study tours and training programmes as part of their official visits. Suryapet is located at 17°5’ North latitude and 79°37’ East longitude. The average elevation is 266 m, the annual mean temperature is 36°C, and the average rainfall is 821.0 mm. The people of this town are engaged in commercial and agricultural activities; hence the town has become a centre for marketing and stocking agricultural produce. It is also an educational centre in this region. The present paper on urban sprawl provides a theoretical framework to analyze the interaction of planning and governance on the extent of outgrowth and the level of services. GIS techniques, SOI toposheets, satellite imagery, and image analysis techniques are extensively used to explore the sprawl and measure urban land use. This paper concludes by outlining the challenges in addressing urban sprawl while ensuring the adequate level of services that planning and governance have to deliver towards achieving sustainable urbanization.

Keywords: remote sensing, GIS, urban sprawl, urbanization

Procedia PDF Downloads 229
62 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, offers, firstly, a scientifically sound way to conduct an assessment that maximises the effectiveness of work at every step and, secondly, the use of quantitative methods for evaluation, assessment of expert opinion, and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues which arise with these methods are the selection of experts, management of the assessment procedure, processing of the results, and remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: realizing, within one platform, the independent activities of different workgroups (e.g., expert officers, managers); establishing different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; forming the required output documents for each workgroup; configuring information gathering for each workgroup (forms of assessment, tests, inventories); creating and operating personal databases of remote users; and setting up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that the experts may submit not only their personal data, place of work, and scientific degree, but also keywords describing their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters), and an overall review (1,500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts with an invitation to assess the project. An expert supervisor monitors the experts writing the reports so that all formalities are in place (time frame, propriety, correspondence). If the difference in assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on the expert opinion, the system shows a contract marked ‘new’, the managers process the contract, and the expert receives an e-mail stating that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for the work. The specifics of the interaction of the examination officer with other experts will be presented in the report.
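
As a toy illustration of two rules described above (keyword-based expert selection and the third-evaluation trigger when two scores differ by more than four points), the sketch below is hypothetical; the field names, scoring scale, and matching logic are placeholders, not the HSE system's implementation.

```python
# Toy sketch: pick the two experts whose keywords best overlap an application's
# keywords, then flag the case for a third evaluation if their scores differ
# by more than four points. All data and rules here are illustrative.
experts = {
    "expert_a": {"keywords": {"machine learning", "text mining"}},
    "expert_b": {"keywords": {"sociology", "survey methods"}},
    "expert_c": {"keywords": {"machine learning", "nlp", "statistics"}},
}

def select_experts(app_keywords, pool, n=2):
    ranked = sorted(pool, key=lambda e: len(pool[e]["keywords"] & app_keywords),
                    reverse=True)
    return ranked[:n]

application = {"keywords": {"machine learning", "statistics"}}
chosen = select_experts(application["keywords"], experts)
print("assigned:", chosen)

scores = {"expert_c": 9, "expert_a": 4}   # hypothetical assessments
if abs(scores[chosen[0]] - scores[chosen[1]]) > 4:
    print("difference exceeds four points: appoint a third evaluation")
```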

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 208
61 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of the indices across an area of interest. Vulnerability studies have several advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So, there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach, which is compared with other approaches of bio-physical or sectoral vulnerability studies and is relatively developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first one is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces a set of interrelated variables to a smaller number of less correlated components, which are also added to form a composite index. We test these two approaches to constructing indices on the area of New York City, as well as two different metrics of the variables used as input, and compare the outcomes for the five boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in the current adaptation policy. We infer from this that the current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the high-density areas of Manhattan, central Brooklyn, central Queens, and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage should hazards occur in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in the current adaptation practice in New York City.
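
The two index-construction approaches contrasted above can be sketched in a few lines. This is a generic illustration, not the study's code: the indicator names, weights, number of retained components, and random data are placeholders standing in for standardised census-unit variables.

```python
# Two composite-index constructions over the same standardised indicators:
# (1) additive weighted sum, (2) PCA-based reduction followed by summation.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
tracts = pd.DataFrame({
    "pct_poverty": rng.random(100),      # hypothetical indicators per tract
    "pct_over_65": rng.random(100),
    "pct_minority": rng.random(100),
    "pct_no_vehicle": rng.random(100),
})

Z = StandardScaler().fit_transform(tracts)            # z-scores per indicator

# (1) additive approach: expert-chosen weights summing to 1
weights = np.array([0.4, 0.2, 0.2, 0.2])
additive_index = Z @ weights

# (2) variable reduction: keep the components explaining most variance,
# then add them to form the composite index
components = PCA(n_components=2).fit_transform(Z)
pca_index = components.sum(axis=1)

tracts["SVI_additive"] = additive_index
tracts["SVI_pca"] = pca_index
print(tracts[["SVI_additive", "SVI_pca"]].describe().round(2))
```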

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 395
60 Impact of School Environment on Socio-Affective Development: A Quasi-Experimental Longitudinal Study of Urban and Suburban Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Richard B. Speaker, Pat Austin

Abstract:

This study used two psychological scales to examine the level of social and emotional intelligence and moral judgment of over 500 gifted and talented high school students in various academic and creative arts programs in a large metropolitan area in the southeastern United States. For decades, numerous models and programs purporting to encourage socio-affective characteristics of adolescent development have been explored in curriculum theory and design. The socio-affective domain merges the social, emotional, and moral domains. It encompasses interpersonal relations and social behaviors; development and regulation of emotions; personal and gender identity construction; empathy development; and moral development, thinking, and judgment. Examining development in these socio-affective domains can provide insight into why some gifted and talented adolescents are not successful in adulthood despite advanced IQ scores, and in particular into whether non-intellectual characteristics of gifted and talented individuals, such as emotional, social and moral capabilities, are as advanced as their intellectual abilities and how these are related to each other. Unique characteristics distinguish gifted and talented individuals; these may appear as strengths, but there is the potential for problems to accompany them. Although many thrive in their school environments, some gifted students struggle rather than flourish. In the socio-affective domain, these adolescents face special intrapersonal, interpersonal, and environmental problems. Gifted individuals’ cognitive, psychological, and emotional development occurs asynchronously, in multidimensional layers at different rates and unevenly across ability levels. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of gifted and talented adolescents. This quasi-experimental longitudinal study examined students in several gifted and talented education programs (creative arts school, urban charter schools, and suburban public schools) for (1) socio-affective development level and (2) whether a particular gifted and talented program encourages developmental growth. The following research questions guided the study: (1) How do academically and artistically talented gifted 10th and 11th grade students perform on psychometric scales of social and emotional intelligence and moral judgment? Do they differ from their age or grade normative sample? Are there gender differences among gifted students? (2) Does school environment impact 10th and 11th grade gifted and talented students’ socio-affective development? Do gifted adolescents who participate in a particular school gifted program differ in their developmental profiles of social and emotional intelligence and moral judgment? Students’ performances on psychometric instruments were compared over time and by type of program. Participants took pre-, mid-, and post-tests over the course of an academic school year, with the Defining Issues Test (DIT-2) assessing moral judgment and the BarOn EQ-i: YV assessing social and emotional intelligence. Based on these assessments, quantitative differences in growth on the psychological scales (individual and school) were examined. Change scores between schools were also compared. If a school showed change, artifacts (culture, curricula, instructional methodology) provided insight as to the environmental qualities that produced this difference.
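
A minimal sketch of the kind of change-score comparison described above, using made-up scores and a one-way ANOVA across school types as a simplified stand-in for the study's analysis (school labels and values are illustrative only):

```python
# Sketch: pre-to-post change scores on a moral-judgment measure, compared
# across three program types with a one-way ANOVA.
import pandas as pd
from scipy import stats

data = pd.DataFrame({
    "school": ["arts"] * 4 + ["urban_charter"] * 4 + ["suburban_public"] * 4,
    "pre":    [32, 35, 30, 38, 29, 31, 33, 27, 36, 34, 39, 31],
    "post":   [41, 40, 36, 45, 31, 35, 36, 30, 42, 39, 44, 37],
})
data["change"] = data["post"] - data["pre"]   # growth per student

groups = [g["change"].values for _, g in data.groupby("school")]
f_stat, p_val = stats.f_oneway(*groups)       # do mean change scores differ by school?
print(data.groupby("school")["change"].mean())
print(f"ANOVA on change scores: F = {f_stat:.2f}, p = {p_val:.3f}")
```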

Keywords: gifted and talented education, moral development, socio-affective development, socio-affective education

Procedia PDF Downloads 162
59 Start with the Art: Early Results from a Study of Arts-Integrated Instruction for Young Children

Authors: Juliane Toce, Steven Holochwost

Abstract:

A substantial and growing literature has demonstrated that arts education benefits young children’s socioemotional and cognitive development. Less is known about the capacity of arts-integrated instruction to yield benefits in similar domains, particularly among demographically and socioeconomically diverse groups of young children. However, the small literature on this topic suggests that arts-integrated instruction may foster young children’s socioemotional and cognitive development by presenting opportunities to 1) engage in instructional content in diverse ways, 2) experience and regulate strong emotions, 3) experience growth-oriented feedback, and 4) engage in collaborative work with peers. Start with the Art is a new program of arts-integrated instruction currently being implemented in four schools in a school district that serves students from a diverse range of backgrounds. The program employs a co-teaching model in which teaching artists and classroom teachers engage in collaborative lesson planning and instruction over the course of the academic year. It is currently the focus of an impact study featuring a randomized-control design, as well as an implementation study, both funded through an Educational Innovation and Research grant from the United States Department of Education. The paper will present the early results from the Start with the Art implementation study. These results will provide an overview of the extent to which the program was implemented in accordance with its design, with a particular emphasis on the degree to which the four opportunities enumerated above (e.g., opportunities to engage in instructional content in diverse ways) were presented to students. Key factors that may influence the fidelity of implementation will also be reviewed, including classroom teachers’ reception of the program and the extent to which extant conditions in the classroom (e.g., the overall level of classroom organization) may have impacted implementation fidelity. With the explicit purpose of creating a program that values and meets the needs of teachers and students, Start with the Art incorporates feedback from individuals participating in the intervention. Tracing its trajectory from inception to ongoing development and examining the adaptive changes made in response to teachers' transformative experiences in the post-pandemic classroom, Start with the Art continues to solicit input from experts in integrating artistic content into core curricula within educational settings catering to students from backgrounds under-represented in the arts. Leveraging the input from this rich consortium of experts has allowed for a comprehensive evaluation of the program’s implementation. The early findings derived from the implementation study emphasize the potential of arts-integrated instruction to incorporate restorative practices. Such practices serve as a crucial support system for both students and educators, providing avenues for children to express themselves, heal emotionally, and foster social development, while empowering teachers to create more empathetic, inclusive, and supportive learning environments. This comprehensive analysis spotlights Start with the Art’s adaptability to any learning environment through the program’s effectiveness, resilience, and its capacity to transform, through art, the classroom experience within the ever-evolving landscape of education.

Keywords: arts-integration, social emotional learning, diverse learners, co-teaching, teaching artists, post-pandemic teaching

Procedia PDF Downloads 62
58 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on detection of long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used with a time interval of about five years for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing coupled effects of human impact and precipitation variation on the waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics were a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected on the basis of their spectral differentiability by a traditional image classification. Three SPOT images for the years 1992, 2008, and 2010 were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan scale to watersheds and sub-watersheds in response to human impacts at different scales. The study also found that increased precipitation in the region over the past decades swelled larger wetlands in particular, while smaller wetlands generally decreased, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.
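
A minimal sketch of a rule-based classifier in the spirit of the one described above, on hypothetical raster layers; the specific indices, ancillary layers, and thresholds are assumptions for illustration, not the rules used in the study:

```python
# Sketch: flag "hidden" wetland pixels that a purely spectral classification
# would miss, by combining a water index, a vegetation index, and terrain rules.
import numpy as np

def rule_based_wetscape(ndwi, ndvi, elevation, flow_accum):
    """Label pixels: 1 = open water, 2 = hidden wetland, 0 = other.

    ndwi, ndvi : normalized difference water / vegetation indices
    elevation  : metres above local drainage (hypothetical ancillary layer)
    flow_accum : upstream flow accumulation (hypothetical ancillary layer)
    Thresholds below are illustrative only.
    """
    labels = np.zeros(ndwi.shape, dtype=np.uint8)
    open_water = ndwi > 0.3
    # Hidden wetlands: vegetated, weak water signal, but low-lying with high
    # flow accumulation, so spectral rules alone would miss them.
    hidden = (~open_water) & (ndvi > 0.2) & (ndwi > -0.1) & \
             (elevation < 2.0) & (flow_accum > 500)
    labels[open_water] = 1
    labels[hidden] = 2
    return labels

# Tiny synthetic 2x2 scene as a usage example
ndwi = np.array([[0.5, -0.05], [0.1, -0.3]])
ndvi = np.array([[0.1, 0.4], [0.5, 0.6]])
elev = np.array([[0.5, 1.0], [3.0, 0.8]])
facc = np.array([[900, 800], [100, 200]])
print(rule_based_wetscape(ndwi, ndvi, elev, facc))  # -> [[1 2] [0 0]]
```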

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 308
57 The Impact of Spirituality on the Voluntary Simplicity Lifestyle Tendency: An Explanatory Study on Turkish Consumers

Authors: Esna B. Buğday, Niray Tunçel

Abstract:

Spirituality has a motivational influence on consumers' psychological states, lifestyles, and behavioral intentions. Spirituality refers to the feeling that there is a divine power greater than ourselves and a connection among oneself, others, nature, and the sacred. In addition, spirituality concerns the human soul and spirit as opposed to the material and physical world, and consists of three dimensions: self-discovery, relationships, and belief in a higher power. Of these, self-discovery refers to exploring the meaning and purpose of life. Relationships refer to the awareness of the connection between human beings and nature as well as respect for them. Higher power represents the transcendent aspect of spirituality, which means believing in a holy power that created all the systems in the universe. Furthermore, a voluntary simplicity lifestyle is (1) to adopt a simple lifestyle by minimizing the attachment to and the consumption of material things and possessions, (2) to have an ecological awareness respecting all living creatures, and (3) to express the desire for exploring and developing the inner life. Voluntary simplicity is a multi-dimensional construct that consists of a desire for a voluntarily simple life (e.g., avoiding excessive consumption), cautious attitudes in shopping (e.g., not buying unnecessary products), acceptance of self-sufficiency (e.g., being a self-sufficient individual), and rejection of highly developed functions of products (e.g., preference for products with simple functions). One of the main reasons for living simply is to sustain a spiritual life, as voluntary simplicity provides the space for achieving psychological and spiritual growth and cultivating self-reliance, since voluntary simplifiers free themselves from overwhelming externals and take control of their daily lives. From this point of view, it is expected that people with a strong sense of spirituality will be likely to adopt a simple lifestyle. In this respect, the study aims to examine the impact of spirituality on consumers' voluntary simple lifestyle tendencies. As consumers' consumption attitudes and behaviors depend on their lifestyles, exploring the factors that lead them to embrace voluntary simplicity helps to predict their purchase behavior. To this end, this study presents empirical research based on a data set collected from 478 Turkish consumers through an online survey. First, exploratory factor analysis is applied to the data to reveal the dimensions of the spirituality and voluntary simplicity scales. Second, confirmatory factor analysis is conducted to assess the measurement model. Last, the hypotheses are analyzed using partial least squares structural equation modeling (PLS-SEM). The results confirm that spirituality's self-discovery and relationships dimensions positively impact both the cautious attitudes in shopping and the acceptance of self-sufficiency dimensions of voluntary simplicity. In contrast, belief in a higher power does not significantly influence consumers' voluntary simplicity tendencies. Even though there has been theoretical support for a positive relationship between spirituality and voluntary simplicity, to the best of the authors' knowledge, this has not been empirically tested in the literature before. Hence, this study contributes to current knowledge by analyzing the direct influence of spirituality on consumers' voluntary simplicity tendencies. Additionally, analyzing this impact on the consumers of an emerging market is another contribution to the literature.
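
A minimal sketch of the first analysis step described above (exploratory factor analysis), using the factor_analyzer package on simulated responses; item names, the number of items, and the data are assumptions for illustration, and the PLS-SEM stage is not shown:

```python
# Sketch: EFA of spirituality items to recover the three expected dimensions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
items = [f"spir_{i}" for i in range(1, 10)]
# Simulated 1-5 Likert responses from 478 respondents (placeholder data)
df = pd.DataFrame(rng.integers(1, 6, size=(478, 9)), columns=items)

fa = FactorAnalyzer(n_factors=3, rotation="varimax")  # 3 hypothesised dimensions
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=items,
                        columns=["self_discovery", "relationships", "higher_power"])
print(loadings.round(2))
print("Proportional variance per factor:", fa.get_factor_variance()[1].round(2))
```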

Keywords: spirituality, voluntary simplicity, self-sufficiency, conscious shopping, Turkish consumers

Procedia PDF Downloads 153
56 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education

Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant

Abstract:

In 21st century higher education (HE), diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, rather than their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focusses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs Program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills for five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology). Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focussing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable information for predicting students’ skills and abilities at the beginning and the end of their studies as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on these research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities for students can be created to support the learning development of students, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.

Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs

Procedia PDF Downloads 140
55 Online Factorial Experimental Study Testing the Effectiveness of Pictorial Waterpipe-specific Health Warning Labels Compared with Text-only Labels in the United States of America

Authors: Taghrid Asfar, Olusanya J. Oluwole, Michael Schmidt, Alejandra Casas, Zoran Bursac, Wasim Maziak

Abstract:

Waterpipe (WP) smoking (a.k.a. hookah) has increased dramatically in the US, mainly due to the misperception that it is safer than cigarette smoking. Mounting evidence shows that WP smoking is addictive and harmful. Health warning labels (HWLs) are effective in communicating smoking-related risks. Currently, the FDA requires that WP tobacco packages carry a textual HWL about nicotine. While this represents a good step, it is inadequate given the established harm of WP smoking beyond addiction and the superior performance of pictorial HWLs over text-only ones. We developed 24 WP pictorial HWLs in a Delphi study among an international expert panel. HWLs were grouped into 6 themes: addiction, harm compared to cigarettes, harm to others, health effects, quitting, and specific harms. This study aims to compare 1) the effect of the pictorial HWLs with that of the FDA text HWL, and 2) the effects of the pictorial HWLs across the 6 themes. A 2x7 between/within-subject online factorial experimental study was conducted among a national convenience sample of 300 (50% current WP smokers; 50% nonsmokers) US adults (females 71.1%; mean age of 31.1±3.41 years) in March 2022. The first factor varied WP smoking status (smokers, nonsmokers). The second factor varied the HWL theme and type (text, pictorial). Participants were randomized to view and rate 7 HWLs: 1 FDA text HWL (control) and 6 pictorial HWLs, one from each of the 6 themes, all presented in random order. HWLs were rated based on the message impact framework in five categories: attention, reaction (believability, relevance, fear), perceived effectiveness, intentions to quit WP among current smokers, and intention to not initiate WP among nonsmokers. Measures were assessed on a 5-point Likert scale (1=not at all to 5=very much) for attention and reaction and on a 7-point Likert scale (1=not at all to 7=very much) for perceived effectiveness and intentions to quit or not initiate WP smoking. Means and SDs of outcome measures for each HWL type and theme were calculated. Planned comparisons using the Friedman test, followed by pairwise Wilcoxon signed-rank tests for multiple comparisons, were used to examine distributional differences of outcomes between the HWL type and themes. Approximately 74.4% of participants were non-Hispanic Whites, 68.4% had college degrees, and 41.5% were under the poverty level. Participants reported starting WP smoking on average at 20.3±8.19 years of age. Compared with the FDA text HWL, pictorial HWLs elicited higher attention (p<0.0001), fear (p<0.0001), harm perception (p<0.0003), perceived effectiveness (p<0.0001), and intentions to quit (p=0.0014) and not initiate WP smoking (p<0.0003). HWLs in theme 3 (harm to others) achieved the highest rating in attention (4.14±1), believability (4.15±0.95), overall perceived effectiveness (7.60±2.35), harm perception (7.53±2.43), and intentions to quit (7.35±2.57). HWLs in theme 2 (WP harm compared to cigarettes) achieved the highest rating in discouraging WP smoking initiation (7.32±2.54). Pictorial HWLs were superior to the FDA text-only HWL for several communication outcomes. Pictorial HWLs related to WP harm to others and WP harm compared to cigarettes are promising. These findings provide strong evidence for the potential implementation of WP-specific pictorial HWLs.
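
A minimal sketch of the planned-comparison procedure described above (Friedman test followed by pairwise Wilcoxon signed-rank tests), run on simulated ratings; the effect sizes and data are invented for illustration:

```python
# Sketch: Friedman test across seven HWLs rated by the same participants,
# then pairwise Wilcoxon signed-rank tests of each pictorial theme vs the FDA text label.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
labels = ["FDA_text", "addiction", "harm_vs_cig", "harm_to_others",
          "health_effects", "quitting", "specific_harms"]
# Simulated 1-5 attention ratings; pictorial labels shifted slightly upward
ratings = {lab: np.clip(rng.normal(3.0 + (0 if lab == "FDA_text" else 0.6), 1.0, n), 1, 5)
           for lab in labels}

chi2, p = stats.friedmanchisquare(*ratings.values())
print(f"Friedman test: chi2 = {chi2:.2f}, p = {p:.4f}")

for lab in labels[1:]:
    stat, p_pair = stats.wilcoxon(ratings[lab], ratings["FDA_text"])
    print(f"{lab} vs FDA_text: p = {p_pair:.4f}")  # apply a multiplicity correction in practice
```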

Keywords: health communication, waterpipe smoking, factorial experiment, reaction, harm perception, tobacco regulations

Procedia PDF Downloads 115
54 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck over long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
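
A minimal sketch, using Hugging Face transformers, of sequence-level classification of k-mer-tokenised DNA with a bidirectional encoder; "bert-base-uncased" is only a runnable stand-in checkpoint, and the toy sequences and labels are invented (the study's approach would instead load a DNABERT-style checkpoint pre-trained on genomic k-mers):

```python
# Sketch: classify DNA sequences with a bidirectional encoder (BERT-style) head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # stand-in; substitute a DNABERT-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def kmerize(seq, k=6):
    """Split a DNA string into overlapping k-mers (DNABERT-style tokenisation)."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sequences = ["ATGCGTACGTTAGCCTAG", "GGCATTACGATCGATTAC"]  # toy circuit fragments
inputs = tokenizer([kmerize(s) for s in sequences],
                   padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # self-attention lets every k-mer attend
probs = torch.softmax(logits, dim=-1)     # to the full sequence context
print(probs)  # untrained classification head: probabilities are meaningless until fine-tuned
```

In practice the classification head would be fine-tuned on labelled circuit-expression data before the probabilities are interpretable.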

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 61
53 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms are heuristic, their outputs have only contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, together with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. More accurate diagnoses are needed, along with real-time treatments, obtained by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence on the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, in a direct or indirect manner, as well as technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even though the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. Instead, the approach deciphers the biological meaning of input data down to metabolic and physiologic mechanisms, based on a compiler with grammars derived from bio-algebra-inspired mathematics. It iteratively translates input data into bio-semantic units with the help of contextual information until bio-logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. The output takes the form of a multiple-condition diagnosis, comprising main diseases along with unhealthy biological states, a format highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
52 Evaluation of Genetic Potentials of Onion (Allium Cepa L.) Cultivars of North Western Nigeria

Authors: L. Abubakar, B. M. Sokoto, I. U. Mohammed, M. S. Na’allah, A. Mohammad, A. N. Garba, T. S. Bubuche

Abstract:

Onion (Allium cepa var. cepa L.) is the most important species of the Allium group, belonging to the family Alliaceae and genus Allium. It can be regarded as the single most important vegetable species in the world after tomato. Despite the similarities which bring the species together, the genus is a strikingly diverse one, with more than five hundred species, which are perennial and mostly bulbous plants. Of these, only seven species are in cultivation, and five are the most important species of cultivated Allium. However, Allium cepa (onion) and Allium sativum (garlic) are the two major cultivated species grown all over the world, of which onion is the most important. North Western Nigeria (Sokoto, Kebbi and Zamfara States) constitutes the major onion-producing zone in Nigeria, where production takes place primarily during the dry season. However, onion production in the zone is seriously affected by two main factors, i.e. diseases and storage losses, in addition to other constraints that limit cultivation of the crop during the rainy season, including the lack of a sufficiently long rainy season to allow proper maturation of the crop. The major onion disease in this zone is purple blotch, caused by the fungus Alternaria porri, and efforts are currently underway to develop onion hybrids resistant to the disease. Genetic diversity plays an important role in plant breeding, either to exploit heterosis or to generate productive recombinants. Assessment of a large number of genotypes for genetic diversity is the first step in this direction. The objective of this research, therefore, was to evaluate the genetic potential of the onion cultivars of North Western Nigeria as a basis for breeding, with a view to developing new cultivars that address the major challenges to onion cultivation in the region. Thirteen onion cultivars were collected during a 2013 expedition covering North Western Nigeria and the southern part of the Niger Republic, areas noted for onion production. The cultivars were evaluated under irrigation at two locations, Sokoto in Sokoto State and Jega in Kebbi State, both in Nigeria, during the 2013/14 onion (dry) season. Combined analysis of the results revealed highly significant variation between the cultivars across the locations with respect to plant height, number of leaves/plant, bolting %, bulb height, bulb weight, mean bulb yield and cured bulb weight, with significant variation in terms of bulb diameter. Tasa from Warra Local Government Area of Kebbi State (V4) recorded the greatest mean fresh bulb yield, with Jar Albasa (V8) from Illela Local Government Area of Sokoto State recording the least. Similarly, Marsa (V5) from Silame Local Government Area recorded the greatest mean cured bulb yield (marketable bulbs), with Kiba (V11) from Goronyo Local Government Area of Sokoto State recording the least. Significant variation was recorded between the locations with respect to all characters, with Sokoto being better in terms of plant height, number of leaves/plant, bolting % and bulb diameter. Jega was better in terms of bulb height, bulb yield and cured bulb weight. Significant variation was therefore observed between the cultivars.
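
A minimal sketch of a combined cultivar-by-location analysis of variance of the kind summarised above, using statsmodels on made-up plot data (cultivar names are taken from the abstract, but the yield values and replicate structure are invented):

```python
# Sketch: two-way ANOVA (cultivar x location) on bulb yield across the two sites.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "cultivar": ["Tasa", "Tasa", "Marsa", "Marsa", "Kiba", "Kiba"] * 2,
    "location": ["Sokoto"] * 6 + ["Jega"] * 6,
    "bulb_yield": [28.5, 27.9, 24.1, 23.6, 18.2, 17.5,
                   30.2, 29.4, 25.8, 26.1, 17.9, 18.4],  # t/ha, hypothetical
})

model = smf.ols("bulb_yield ~ C(cultivar) * C(location)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # tests cultivar, location, and their interaction
```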

Keywords: evaluation, genetic, onions, North Western Nigeria

Procedia PDF Downloads 410
51 A Review on Cyberchondria Based on Bibliometric Analysis

Authors: Xiaoqing Peng, Aijing Luo, Yang Chen

Abstract:

Background: Cyberchondria, an "emerging risk" of the information era, is an abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots and trends of Cyberchondria. Methods: Based on a total of 77 articles on "Cyberchondria" extracted from the Web of Science from its beginning until October 2019, the literature trends, countries, institutions, and hotspots were analyzed by bibliometric analysis, and the concept definition of Cyberchondria, instruments, relevant factors, treatment and intervention are discussed as well. Results: Since "Cyberchondria" was put forward for the first time in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; in particular, the number of publications roughly quadrupled to 62 during 2014-2019, compared with only 15 before 2014, which shows that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of Cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, although the latter was later shown to be unnecessary for the construct. Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, while the CSS was optimized to a simplified version (CSS-12) in 2019, all of which are worthy of further verification. The hotspots of Cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley Index, and problematic internet use, trying to clarify the role played by "associated factors" and "anxiety-amplifying factors" in the development of Cyberchondria, in order to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety and online health-related searches. Although the treatment and intervention of Cyberchondria are still at an initial, exploratory stage, there have been meaningful attempts to seek effective strategies from different angles, such as online psychological treatment, network technology management, health information literacy improvement and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments would be the main research directions in the near future.
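
A minimal sketch of the descriptive bibliometric tallies reported above, computed with pandas from a Web of Science-style export; the rows below are invented placeholders, not the 77 retrieved records:

```python
# Sketch: publication counts by country, institution, and period from a record table.
import pandas as pd

records = pd.DataFrame({
    "year":        [2001, 2009, 2014, 2015, 2017, 2018, 2019, 2019],
    "country":     ["USA", "England", "USA", "Australia", "USA", "England", "Australia", "USA"],
    "institution": ["Baylor Univ", "Univ Manchester", "Baylor Univ", "Univ Sydney",
                    "Florida State Univ", "Univ Manchester", "Univ Sydney", "Baylor Univ"],
})

print(records.groupby("country").size().sort_values(ascending=False))
print(records.groupby("institution").size().sort_values(ascending=False))
# Publication trend: before 2014 vs 2014-2019
period = records["year"].lt(2014).map({True: "before 2014", False: "2014-2019"})
print(period.value_counts())
```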

Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches

Procedia PDF Downloads 122
50 In Vitro Intestine Tissue Model to Study the Impact of Plastic Particles

Authors: Ashleigh Williams

Abstract:

Micro- and nanoplastics’ (MNLPs) omnipresence and ecological accumulation are evident when surveying recent environmental impact studies. For example, in 2014 it was estimated that at least 52.3 trillion plastic microparticles are floating at sea, and scientists have even found plastics present in remote Arctic ice and snow (5,6). Plastics have even found their way into precipitation, with more than 1000 tons of microplastics precipitating onto the Western United States in 2020. Even more recent studies evaluating the chemical safety of reusable plastic bottles found that hundreds of chemicals leached into the control liquid in the bottle (ddH2O, pH = 7) during a 24-hour period. A consequence of the increased abundance of plastic waste in the air, land, and water every year is the bioaccumulation of MNLPs in ecosystems and trophic niches of the animal food chain, which could potentially cause increased direct and indirect exposure of humans to MNLPs via inhalation, ingestion, and dermal contact. Though the detrimental, toxic effects of MNLPs have been established in marine biota, much less is known about the potentially hazardous health effects of chronic MNLP ingestion in humans. Recent data indicate that long-term exposure to MNLPs could cause inflammatory and dysbiotic effects; however, toxicity seems to be largely dose- as well as size-dependent. In addition, the transcytotic uptake of MNLPs through the intestinal epithelia in humans remains relatively unknown. To this point, the goal of the current study was to investigate the mechanisms of micro- and nanoplastic uptake and transcytosis of polystyrene (PS) in human stem-cell-derived, physiologically relevant in vitro intestinal model systems, and to compare the relative effect of particle size (30 nm, 100 nm, 500 nm and 1 µm) and concentration (0 µg/mL, 250 µg/mL, 500 µg/mL, 1000 µg/mL) on polystyrene MNLP uptake, transcytosis and intestinal epithelial model integrity. Observational and quantitative data obtained from confocal microscopy, immunostaining, transepithelial electrical resistance (TEER) measurements, cryosectioning, and ELISA assays of the proinflammatory cytokines Interleukin-6 and Interleukin-8 were used to evaluate the localization and transcytosis of polystyrene MNLPs and their impact on epithelial integrity in human-derived intestinal in vitro model systems. The effect of Microfold (M) cell induction on polystyrene MNLP uptake, transcytosis, and potential inflammation was also assessed and compared to samples grown under standard conditions. Microfold (M) cells link the human intestinal system to the immune system and are the primary cells in the epithelium responsible for sampling and transporting foreign matter of interest from the lumen of the gut to underlying immune cells. Given the capability of Microfold cells to interact both specifically and nonspecifically with abiotic and biotic materials, it was expected that M-cell-induced in vitro samples would show increased binding, localization, and potentially transcytosis of polystyrene MNLPs across the epithelial barrier. The experimental results of this study would not only help in the evaluation of plastic toxicity but also allow for more detailed modeling of gut inflammation and the intestinal immune system.

Keywords: nanoplastics, enteroids, intestinal barrier, tissue engineering, microfold (M) cells

Procedia PDF Downloads 85
49 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis

Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan

Abstract:

Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder that is characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For phage discovery, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to Next-Generation Sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and the ability of the cocktail to completely lyse liquid bacterial culture under different growth conditions (e.g., temperature, bacterial stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus genomes (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages from different taxonomic groups, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS confirmed the absence of toxic elements in the phages’ genomes. The host range of the individual phages, as measured by the efficiency of plating (EOP), ranged from 41% (48/118) to 79% (93/118). Host range studies in liquid culture revealed that a subset of the phages can infect a broad range of S. aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages resulted in a broad-host-range cocktail that infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
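
A minimal sketch of the host-range arithmetic described above, computing EOP per phage-strain pair and the cocktail's combined coverage under the EOP > 0.1 criterion; all titres and strain/phage names are invented for illustration (the study screened 118 strains, not 6):

```python
# Sketch: efficiency of plating (EOP) and combined cocktail host range.
import pandas as pd

strains = [f"SA_{i:03d}" for i in range(1, 7)]   # 6 toy strains
phages = ["phage_A", "phage_B", "phage_C"]

# Plaque titres (PFU/mL) on each test strain and on the reference host
titres = pd.DataFrame(
    [[2e8, 5e6, 1e3],
     [1e7, 8e7, 4e8],
     [3e5, 2e8, 9e7],
     [1e2, 4e3, 6e8],
     [5e8, 1e2, 2e2],
     [8e3, 3e3, 1e4]],
    index=strains, columns=phages)
reference_titre = pd.Series({"phage_A": 5e8, "phage_B": 2e8, "phage_C": 6e8})

eop = titres / reference_titre                      # EOP per phage-strain pair
susceptible = eop > 0.1                             # host-range criterion
print("Per-phage host range (%):", (100 * susceptible.mean()).round(1).to_dict())
cocktail_coverage = susceptible.any(axis=1).mean()  # strain hit by at least one phage
print(f"Cocktail host range: {100 * cocktail_coverage:.1f}%")
```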

Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus

Procedia PDF Downloads 153
48 Femicide in the News: Jewish and Arab Victims and Culprits in the Israeli Hebrew Media

Authors: Ina Filkobski, Eran Shor

Abstract:

This article explores how newspapers cover the murder of women by family members and intimate partners. Three major Israeli newspapers were compared in order to analyse the coverage of Jewish and Arab victims and culprits and to examine whether and in what ways the media contribute to the construction of symbolic boundaries between minority and dominant social groups. A sample of 459 articles published between 2013 and 2015 was studied using systematic qualitative content analysis. Our findings suggest that the treatment of murder cases by the media varies according to the ethnicity of both victims and culprits. The murder of Jews by family members or intimate partners was framed as a shocking and unusual event, a result of the individual personality or pathology of the culprit. Conversely, when Arabs were the killers, murders were often explained by focusing on the culture of the ethnic group, described as traditional, violent, and patriarchal. In two-thirds of the cases in which Arabs were involved, so-called ‘honor killing’ or other cultural explanations were proposed as the motive for the murder. This was often the case even before a suspect was identified, while the police investigation was at its very early stages, and often despite forceful denials from victims’ families. In the case of Jewish culprits, more than half of the articles in our sample suggested a mental disorder to explain the acts, and cultural explanations were almost entirely absent. Beyond the emphasis on psychological vs. cultural explanations, newspaper articles also tend to provide much more detail about Jewish culprits than about Arabs. Such detailed examinations convey a desire to make sense of the event by understanding the supposedly unique and unorthodox nature of the killer. These detailed accounts were usually absent from the reports on Arab killers. Thus, even if reports do not explicitly offer cultural motivations for the murder, the fact that they often remain laconic leaves readers to draw their own conclusions, which are then likely based on existing cognitive scripts and previous reports on family murders among Arabs. Such treatment contributes to the notion that Arab and Muslim cultures, religions, and nationalities are essentially misogynistic and adhere to norms of honor and shame that are radically different from those of modern societies, such as the Jewish-Israeli one. Murder within the family is one of the most dramatic occurrences in the social world, and in societies that see themselves as modern it is a taboo, an ultimate signifier of danger. We suggest that representations of murder provide a valuable prism for examining the construction of group boundaries. Our analysis, therefore, contributes to the scholarly effort to understand the creation and reinforcement of symbolic boundaries between ‘society’ and its ‘others’ by systematically tracing the media constructions of ‘otherness’. While our analysis focuses on Israel, studies on the United States, Canada, and various European countries with ethnically and racially heterogeneous populations make it clear that the stigmatisation and exclusion of visible, religious, and language minorities are not unique to the Israeli case.

Keywords: comparative study of media coverage of minority and majority groups, construction of symbolic group boundaries, murder of women by family members and intimate partners, Israel, Jews, Arabs

Procedia PDF Downloads 185
47 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite empirical research carried out at the country level that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have not assessed these diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism’s diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which indicate the model’s predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from high-concentration areas of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
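
A minimal sketch of the mesh-building step that precedes the model fit described above: discretising the globe into Delaunay triangles, using the fact that the convex hull of points on the unit sphere yields their spherical Delaunay triangulation. The point locations here are random placeholders, and the subsequent Bayesian fit via INLA (typically run with R-INLA) is not shown:

```python
# Sketch: spherical Delaunay triangulation of candidate mesh nodes via the convex hull.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
lon = rng.uniform(-180, 180, 500)                      # stand-in node longitudes
lat = np.degrees(np.arcsin(rng.uniform(-1, 1, 500)))   # uniform latitudes on the sphere

# Convert to unit-sphere Cartesian coordinates
phi, theta = np.radians(lon), np.radians(90 - lat)     # theta = colatitude
xyz = np.column_stack([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])

hull = ConvexHull(xyz)
triangles = hull.simplices   # each row: point indices of one Delaunay triangle
print(f"{len(triangles)} spherical Delaunay triangles over {len(xyz)} points")
```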

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 351