407 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago
Authors: Miguel Angel Calvo Salve
Abstract:
This qualitative and quantitative study will identify the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and propose an approach to a sustainable touristic model for these Cultural Routes. Since 1993, the Spanish Section of the Pilgrim Route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) began its work of studying, preserving, and promoting cultural routes and their significance as a whole. The ICOMOS Charter on Cultural Routes (2008) pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide physical confirmation of the existence of these cultural routes, while the intangible elements give them sense and meaning as a whole. Intangible assets of a Cultural Route are key to understanding the route's significance and its associated heritage values. Like many pilgrim routes, the Route to Santiago, the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, together with tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. Economic benefits from tourism are commonly fundamental to the micro-economies of the communities along the route, substituting for traditional productive activities, which in turn modifies the surrounding environment and the route itself.
Consumption of heritage is one of the major issues for the sustainable preservation promoted with the intention of revitalizing these sites and places. The adaptation of local communities to new conditions aimed at preserving and protecting existing heritage has had a significant impact on immaterial inheritance. Based on questionnaires administered to pilgrims, tourists and local communities along El Camino during the peak season of the year, and using official statistics from the Galician Pilgrim’s Office, this study will identify the risks and threats to El Camino de Santiago as a Cultural Route. The threats visible nowadays due to the impact of mass tourism include transformations of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and pilgrimage transformed into a tourism ‘product’, among others. The study will also propose measures and solutions to mitigate those impacts and better preserve this type of cultural heritage. This study will therefore help route service providers and policymakers to better preserve the Cultural Route as a whole and ultimately improve the pilgrim experience.
Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage
Procedia PDF Downloads 85
406 The Impact of the Global Financial Crisis on the Performance of Czech Industrial Enterprises
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
The global financial crisis that erupted in 2008 is associated mainly with the debt crisis. It quickly spread globally through financial markets, international banks and trade links, and affected many economic sectors. Measured by the index of the year-on-year change in GDP and industrial production, the consequences of the global financial crisis manifested themselves with some delay also in the Czech economy. This can be considered a result of the overwhelming export orientation of Czech industrial enterprises. These events offer an important opportunity to study how financial and macroeconomic instability affects corporate performance. Corporate performance factors have long been given considerable attention. It is therefore reasonable to ask whether the findings published in the past are also valid in the times of economic instability and subsequent recession. The decisive factor in effective corporate performance measurement is the existence of an appropriate system of indicators that are able to assess progress in achieving corporate goals. Performance measures may be based on non-financial as well as on financial information. In this paper, financial indicators are used in combination with other characteristics, such as the firm size and ownership structure. Financial performance is evaluated based on traditional performance indicators, namely, return on equity and return on assets, supplemented with indebtedness and current liquidity indices. As investments are a very important factor in corporate performance, their trends and importance were also investigated by looking at the ratio of investments to previous year’s sales and the rate of reinvested earnings. In addition to traditional financial performance indicators, the Economic Value Added was also used. 
Data used in the research were obtained from a questionnaire survey administered in industrial enterprises in the Czech Republic and from the AMADEUS database (Analyse Major Databases from European Sources), which provided company accounting data. Respondents were members of the companies’ senior management. Research results unequivocally confirmed that corporate performance dropped significantly in the 2010-2012 period, which can be considered a result of the global financial crisis and the subsequent economic recession. This was reflected mainly in the decreasing values of profitability indicators and Economic Value Added. Although total year-on-year indebtedness declined, intercompany indebtedness increased. This can be considered a result of impeded access of companies to bank loans due to the credit crunch. Comparison of the results with the conclusions of previous research on a similar topic showed that the assumption that firms under foreign control achieved higher performance during the period investigated was not confirmed.
Keywords: corporate performance, foreign control, intercompany indebtedness, ratio of investment
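The traditional and value-based indicators named in the abstract can be computed directly; the following is a minimal illustrative sketch using the standard textbook formulas (all figures are hypothetical, not taken from the study):

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA = net operating profit after taxes minus the capital charge
    (weighted average cost of capital times invested capital)."""
    return nopat - wacc * invested_capital

def return_on_equity(net_income, equity):
    """Traditional profitability indicator used alongside EVA."""
    return net_income / equity

# Illustrative, hypothetical figures (e.g., millions CZK):
eva = economic_value_added(nopat=120.0, invested_capital=1000.0, wacc=0.08)
roe = return_on_equity(net_income=50.0, equity=500.0)
```

A positive EVA indicates that the firm earned more than its cost of capital; a fall in EVA during 2010-2012 is what the study reports.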
Procedia PDF Downloads 334
405 Distributional and Developmental Analysis of PM2.5 in Beijing, China
Authors: Alexander K. Guo
Abstract:
PM2.5 poses a large threat to people’s health and the environment and is an issue of large concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends, to better inform air pollution control policy. In total, 66,650 hours and 2,687 days provided valid data. Lognormal, gamma, and Weibull distributions were fit to the data through estimation of parameters, and the chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data; specific periods that received large amounts of media attention were also analyzed to better understand the causes of air pollution. The data show a clear indication that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2,687 days well, but each of the three distribution types above was optimal in at least one of the yearly data sets, with the lognormal distribution found to fit recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period across all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but it still needs improvement. The analysis will be updated when new data become available.
Keywords: Beijing, distribution, patterns, pm2.5, trends
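The fitting-and-testing procedure described above can be sketched as follows. This is illustrative Python using synthetic lognormal data in place of the embassy measurements (which are not reproduced here), with equiprobable binning as one plausible choice for the chi-squared comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the hourly PM2.5 series (ug/m3); the study
# itself used 66,650 valid hours of U.S. Embassy data (2008-2016).
data = rng.lognormal(mean=4.0, sigma=0.5, size=5000)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Ten equiprobable bins for the chi-squared goodness-of-fit comparison.
edges = np.quantile(data, np.linspace(0.0, 1.0, 11))
observed, _ = np.histogram(data, bins=edges)

fits, chi2_stats = {}, {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)   # estimate parameters, location fixed at 0
    expected = len(data) * np.diff(dist.cdf(edges, *params))
    chi2_stats[name] = float(((observed - expected) ** 2 / expected).sum())
    fits[name] = params

best = min(chi2_stats, key=chi2_stats.get)  # smallest statistic = closest fit
```

Repeating this per calendar year, as the study does, shows which family is optimal for each yearly subset.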
Procedia PDF Downloads 247
404 Social Inclusion in Higher Institutions: The Plights of Students with Disabilities in Kaduna Polytechnic, Nigeria
Authors: Mairo H. Ipadeola, Catherine James Atteng
Abstract:
The term social inclusion refers to a process by which those disadvantaged in society can gain access to full participation in education like others. Students with special needs are expected to learn alongside their peers within the same educational institutions, which should provide adequate access for all. Therefore, the study sought to understand the typical ways in which students with disabilities (SWD) were prevented from fully participating as students in Kaduna Polytechnic. In doing this, two (2) objectives and research questions were raised: firstly, to explore the attitudes of others towards students with disabilities in the institution, and secondly, to ascertain the extent of social participation and physical accessibility for students with disabilities (SWD) while in the institution. Based on these objectives, the paper postulated the research questions: what are the attitudes of management, teachers, and students towards students with special needs in Kaduna Polytechnic, and to what extent did students with disabilities experience social participation and physical accessibility within the Kaduna Polytechnic school environment? The study area was Kaduna Polytechnic. The study used interviews for data collection; the data were transcribed and analyzed by thematic coding, and the findings were categorized under themes, sub-themes, and codes. The findings revealed that the perception, behavior, and association experiences of students with disabilities within Kaduna Polytechnic were not encouraging. Their experiences were characterized by negative attitudes and feelings of rejection, neglect, and bullying. Data generated on social participation indicated that 71% of the respondents rated learning, school activities, recreation, and student politics between SWD and the other students as low / very low.
All the respondents, particularly students with blindness and physical challenges, faced difficulty with environmental and physical access, above all within the school environment, classrooms, walkways and ramps. Directional signage was nonexistent in most departments, and physical access to classrooms, toilets, cafeterias, and school shops was absent or very low (71% and 29% of the respondents). The conclusion was that physical barriers limited the possibilities of social participation of SWD. The paper made some recommendations, such as mass public enlightenment on radio and television to change the perception of society about people with disability; that the federal, state, and local governments enact building codes for new construction and adopt measures and time frames for making existing public buildings accessible to people with disabilities; and that all stakeholders ensure that the five (5) percent budget set aside by the State Universal Basic Education Board (SUBEB) and/or the Tertiary Education Trust Fund (TETFUND) for the provision of specialized equipment and facilities for students with special needs is spent prudently and monitored by the board.
Keywords: social inclusion, students with disability, social participation, environmental/physical access
Procedia PDF Downloads 54
403 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of a person’s latent trait on the dimension statement A measures plus statement A’s utility, versus the sum of the person’s latent trait on the dimension statement B measures plus statement B’s utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate.
In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that among the four methods, the last performed best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
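A minimal sketch of the dichotomous RIM choice probability and the first (exposure-freezing) control method described above. The item representation and the scalar information p(1-p) are simplifying assumptions for illustration, not the authors' implementation:

```python
import math

def rim_prob_prefer_a(theta_a, theta_b, util_a, util_b):
    """Dichotomous Rasch ipsative model: the log-odds of preferring
    statement A to statement B is the difference between the two
    'competing' sums, (theta_a + util_a) - (theta_b + util_b)."""
    logit = (theta_a + util_a) - (theta_b + util_b)
    return 1.0 / (1.0 + math.exp(-logit))

def select_item(theta, items, exposure, cap=3):
    """First WPSE control method: freeze an item once either of its
    statements has been shown `cap` times; otherwise pick the item with
    the largest binary-response information p(1-p) (a simplification of
    the multidimensional Fisher-information criterion)."""
    best, best_info = None, -1.0
    for item in items:
        stmt_a, stmt_b = item["stmts"]
        if exposure.get(stmt_a, 0) >= cap or exposure.get(stmt_b, 0) >= cap:
            continue  # frozen: a statement has hit its exposure cap
        dim_a, dim_b = item["dims"]
        p = rim_prob_prefer_a(theta[dim_a], theta[dim_b], *item["u"])
        if p * (1 - p) > best_info:
            best, best_info = item, p * (1 - p)
    return best
```

The second control method would add a random perturbation to the information criterion instead of a hard freeze.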
Procedia PDF Downloads 247
402 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning
Authors: Ioanna Taouki, Marie Lallier, David Soto
Abstract:
Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: lexical decision task, visual attention span task, and one non-linguistic: emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities using Spearman's and Bayesian correlation analysis. Second, we assessed whether or not young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability in this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing in this early stage of reading acquisition. 
Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students’ metacognitive ability as a tool for long-term learning. More research is crucial to understand whether these programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition
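The study estimates metacognitive ability under a hierarchical Bayesian Signal Detection Theory model; as a much simpler stand-in, the non-parametric type-2 AUROC below illustrates the underlying idea of confidence tracking accuracy (this is not the model used in the study):

```python
def type2_auroc(correct, confidence):
    """Non-parametric type-2 AUROC: the probability that a randomly
    chosen correct trial carries higher confidence than a randomly
    chosen incorrect one (ties count half).
    0.5 = no metacognitive sensitivity, 1.0 = perfect sensitivity."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    if not hits or not misses:
        raise ValueError("need both correct and incorrect trials")
    score = 0.0
    for h in hits:
        for m in misses:
            score += 1.0 if h > m else 0.5 if h == m else 0.0
    return score / (len(hits) * len(misses))
```

Computed per task (lexical decision, visual attention span, emotion recognition), such an index could then be correlated across domains, as in the abstract's second aim.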
Procedia PDF Downloads 151
401 A Comparative Laboratory Evaluation of Efficacy of Two Fungi: Beauveria bassiana and Acremonium perscinum, on Dichomeris eridantis Meyrick (Lepidoptera: Gelechiidae) Larvae, an Important Pest of Dalbergia sissoo
Authors: Gunjan Srivastava, Shamila Kalia
Abstract:
Dalbergia sissoo Roxb. (Family: Leguminosae; Subfamily: Papilionoideae) is an economically and ecologically important tree species with medicinal value. Of its rich complex of insect fauna, ten species have been recognized as potential pests of nurseries and plantations. The present study was conducted to explore an effective, ecofriendly control of Dichomeris eridantis Meyrick, an important defoliator pest of D. sissoo. Health and environmental concerns demand a bio-intensive pest management strategy employing ecofriendly measures. In the present laboratory bioassay, two entomopathogenic fungi, Acremonium perscinum and Beauveria bassiana, were tested and compared by evaluating the efficacy of seven different concentrations of each (besides control) against the 3rd, 4th and 5th instar larvae of D. eridantis, on the basis of mean percent mortality data recorded and tabulated for seven days after treatment application. Analysis showed that the two treatments differed significantly among themselves. Variations among instars and durations with respect to mortality were also highly significant (p < .001), as were all their interactions. B. bassiana at a concentration of 0.25×10⁷ spores/ml caused the maximum mean percent mortality (62.38%), followed by its 0.25×10⁶ spores/ml concentration (56.67%). Mean percent mortalities at the maximum A. perscinum spore concentration (0.054×10⁷ spores/ml) and its next highest concentration (0.054×10⁶ spores/ml) were far lower (45.40% and 31.29%, respectively). At 168 hours, the mean percent mortality of larval instars under both fungal treatments reached its maximum (52.99%), whereas at 24 hours it remained lowest (5.70%). In both cases, treatments were most effective against 3rd instar larvae and least effective against 5th instar larvae. A comparative account of the efficacy of B. bassiana and A. perscinum on the 3rd, 4th and 5th instar larvae of D. eridantis on the 5th, 6th and 7th post-treatment observation days, on the basis of their median lethal concentrations (LC50), proved B. bassiana to be the more potent microbial pathogen of the two for all three instars on all three days. Percent mortality of D. eridantis increased in a dose-dependent manner. Koch’s postulates tested positive, confirming the pathogenicity of B. bassiana against the larval instars of D. eridantis. LC90 concentrations of 0.280×10¹¹ spores/ml, 0.301×10⁸ spores/ml and 0.262×10⁸ spores/ml of B. bassiana were established, which can effectively cause mortality of all the larval instars of D. eridantis in the field after the 5th, 6th and 7th day of application, respectively. These concentrations can therefore be safely used in nurseries as well as plantations of D. sissoo for effective control of D. eridantis larvae.
Keywords: Acremonium perscinum, Beauveria bassiana, Dalbergia sissoo, Dichomeris eridantis
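LC50 values like those above come from standard probit analysis of dose-mortality data. The sketch below fits a probit regression on log10(dose) by maximum likelihood to hypothetical counts (not the study's data), and reads off the LC50 as the dose at which the linear predictor crosses zero:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_lc50(doses, n_exposed, n_dead):
    """Probit regression of mortality on log10(dose); the LC50 is the
    dose giving 50% mortality, i.e. where a + b*log10(dose) = 0."""
    x = np.log10(doses)

    def neg_log_lik(params):
        a, b = params
        p = np.clip(norm.cdf(a + b * x), 1e-9, 1 - 1e-9)
        return -np.sum(n_dead * np.log(p) + (n_exposed - n_dead) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
    a, b = res.x
    return 10 ** (-a / b)  # LC50 on the original dose scale

# Hypothetical bioassay: 100 larvae per dose, mortality rising with dose.
doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
dead = np.array([2, 15, 50, 83, 98])
lc50 = fit_lc50(doses, np.full(5, 100), dead)
```

Substituting 0.90 for 0.50 in the inversion (i.e., solving a + b·log10(dose) = Φ⁻¹(0.9)) yields the LC90 analogously.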
Procedia PDF Downloads 225
400 Explaining Irregularity in Music by Entropy and Information Content
Authors: Lorena Mihelac, Janez Povh
Abstract:
In 2017, we conducted a research study using data consisting of 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we were interested in unigrams (individual chords in the harmonic progression) and bigrams (the connection of two adjacent chords). In this study, it was found that 53 musical excerpts out of 160 were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression, which impact the listener's feeling of complexity and acceptability. We evaluated the same data again with new participants in 2018 and with the same participants for a third time in 2019. These three evaluations showed that the same 53 musical excerpts found to be difficult and complex in the 2017 study again elicited a strong feeling of complexity. It was proposed that the content of these musical excerpts, defined as “irregular,” does not meet the listener's expectancy and the basic perceptual principles, creating a higher feeling of difficulty and complexity. As the “irregularities” in these 53 musical excerpts seem to be perceived by the participants without their being aware of it, affecting pleasantness and the feeling of complexity, they were defined as “subliminal irregularities” and the 53 musical excerpts as “irregular.” In our recent study (2019) of the same data (used in previous research), we proposed a new measure of the complexity of harmony, “regularity,” based on the irregularities in the harmonic progression and other plausible particularities in the musical structure found in previous studies. In that study, we also proposed a list of 10 different particularities that we assumed impact the participant’s perception of complexity in harmony.
These ten particularities are tested in this paper by extending the analysis of our 53 irregular musical excerpts from harmony to melody. In examining the melody, we used the computational model Information Dynamics of Music (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. To describe the features of the melody in these musical examples, we used four different viewpoints: pitch, interval, duration, and scale degree. The results showed that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., huge interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts impact the participant’s perception of complexity. High information content values were found in compound melodies, in which implied harmonies seem to have suggested additional harmonies, affecting the participant’s perception of the chord progression in the harmony by creating a sense of an ambiguous musical structure.
Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM
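Unigram and bigram entropy of a harmonic progression, as used in the 2017 study, can be computed as follows (a minimal sketch; the chord labels are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of the empirical distribution of symbols."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def harmony_entropies(chords):
    """Unigram entropy (individual chords) and bigram entropy
    (pairs of adjacent chords) of a harmonic progression."""
    bigrams = list(zip(chords, chords[1:]))
    return shannon_entropy(chords), shannon_entropy(bigrams)

# A simple I-V alternation: two equiprobable chords, two bigram types.
u, b = harmony_entropies(["C", "G", "C", "G", "C", "G"])
```

Note that a strictly alternating progression has 1 bit of unigram entropy yet is perfectly predictable given the previous chord, which is exactly the kind of gap between computed entropy and perceived complexity the study discusses.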
Procedia PDF Downloads 133
399 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning
Authors: Elizabeth M. Seabrook, Nikki S. Rickard
Abstract:
Examining time-dependent measures of emotion, such as variability, instability, and inertia, provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly with focus placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and collected the Status Updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The number of positive and negative words in posts was extracted and automatically collated by MoodPrism, and the relative proportion of positive and negative words out of the total words written in posts was then calculated. Preliminary analyses have been conducted with the data of 9 participants.
While these analyses are underpowered due to sample size, they have revealed trends that greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). Full data analysis utilizing time-series techniques to explore the Facebook data set will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
Keywords: emotion, experience sampling methods, mental health, social media
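The three emotion-dynamics measures named above have common operationalizations, sketched below (standard deviation for variability, mean squared successive difference for instability, lag-1 autocorrelation for inertia); the study's exact computations may differ:

```python
import numpy as np

def emotion_dynamics(series):
    """Common operationalizations of three time-dependent emotion
    measures: variability (SD), instability (mean squared successive
    difference, MSSD), and inertia (lag-1 autocorrelation)."""
    x = np.asarray(series, dtype=float)
    variability = x.std(ddof=1)
    instability = np.mean(np.diff(x) ** 2)  # MSSD
    d = x - x.mean()
    inertia = np.sum(d[:-1] * d[1:]) / np.sum(d ** 2)
    return variability, instability, inertia

# e.g., a daily valence-proportion series from one participant's posts
v, i, rho = emotion_dynamics([1, 2, 1, 2])
```

Applied to each participant's daily valence proportions, these per-person indices are what would be correlated with depression and well-being scores.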
Procedia PDF Downloads 251
398 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa
Authors: Abubakar Dikko
Abstract:
The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and a burden on their citizens. Namibia, Nigeria and South Africa are major African economies battling high unemployment rates: in 2013, they recorded rates of 16.9%, 23.9% and 24.9%, respectively, with youth making up most of the unemployed. Roughly 40% of working-age South Africans have jobs, and the share in Nigeria and Namibia is lower still. Unemployment in Africa has wide implications for households; it has led to extensive poverty and inequality and fueled rampant criminality. Recently, South Africa has seen xenophobic attacks driven by unemployment, with citizens chasing away foreigners whom they claimed had taken their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that insufficient capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place.
The countries in the study were selected after careful research and investigation, based on the following criteria: African economies with unemployment rates above 15% and about 40% of their workforce unemployed, which is the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO), and African countries with low levels of capital accumulation. Adequate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment rates in their economies, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in their further research and studies on how to tackle unemployment in developing and emerging economies.
Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics
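The time-series analysis is not specified in detail in the abstract; as a hedged illustration of the claimed relationship, the sketch below regresses a synthetic unemployment series on the lagged investment share of GDP, a common proxy for capital accumulation (all data and coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual series: investment share of GDP (capital
# accumulation proxy) and the unemployment rate it is assumed to drive
# with a one-year lag.
T = 40
invest = 0.15 + 0.05 * rng.random(T)
noise = 0.002 * rng.standard_normal(T - 1)
# Data generated so that higher lagged investment lowers unemployment.
unemp = 0.30 - 0.8 * invest[:-1] + noise

# OLS of unemployment on lagged investment share (with intercept).
X = np.column_stack([np.ones(T - 1), invest[:-1]])
beta, *_ = np.linalg.lstsq(X, unemp, rcond=None)
intercept, slope = beta
```

A significantly negative slope in such a regression is the pattern consistent with the post-Keynesian claim; the study's actual estimation on real data would require proper time-series treatment (stationarity tests, lags, country effects).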
Procedia PDF Downloads 265
397 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. The hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains; this was made possible by advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows an S-shaped curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the ideas go viral, and a final phase when the ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes final shape, the idea is sent as a finished product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies.
The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once the ideas are stored in the occipital lobe. The human brain is incapable of further evaluating ideas once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
396 Analysis of Resistance and Virulence Genes of Gram-Positive Bacteria Detected in Calf Colostrums
Authors: C. Miranda, S. Cunha, R. Soares, M. Maia, G. Igrejas, F. Silva, P. Poeta
Abstract:
The worldwide inappropriate use of antibiotics has increased the emergence of antimicrobial-resistant microorganisms isolated from animals, humans, food, and the environment. To combat this complex and multifaceted problem, it is essential to know the prevalence of such microorganisms in livestock animals and the possible routes of transmission among animals and between animals and humans. Enterococci species, in particular E. faecalis and E. faecium, are among the most common nosocomial bacteria, causing infections in animals and humans. Thus, the aim of this study was to characterize resistance and virulence factor genes in two enterococci species isolated from calf colostrums in Portuguese dairy farms. The 55 enterococci isolates (44 E. faecalis and 11 E. faecium) were tested for the presence of resistance genes for the following antibiotics: erythromycin (ermA, ermB, and ermC), tetracycline (tetL, tetM, tetK, and tetO), quinupristin/dalfopristin (vatD and vatE), and vancomycin (vanB). Of these, 25 isolates (15 E. faecalis and 10 E. faecium) have so far been tested for 8 virulence factor genes (esp, ace, gelE, agg, cpd, cylA, cylB, and cylLL). The resistance and virulence genes were detected by PCR, using specific primers and conditions; negative and positive controls were used in all PCR assays. All enterococci isolates showed resistance to erythromycin and tetracycline through the presence of the genes ermB (n=29, 53%), ermC (n=10, 18%), tetL (n=49, 89%), tetM (n=39, 71%), and tetK (n=33, 60%). Only two (4%) E. faecalis isolates showed the presence of the tetO gene. No vancomycin resistance genes were found. The virulence genes detected in both species were cpd (n=17, 68%), agg (n=16, 64%), ace (n=15, 60%), esp (n=13, 52%), gelE (n=13, 52%), and cylLL (n=8, 32%). In general, each isolate showed at least three virulence genes. No virulence genes were found in three E. faecalis isolates, and only E. faecalis isolates carried the virulence genes cylA (n=4, 16%) and cylB (n=6, 24%).
In conclusion, these colostrum samples, which were consumed by calves, demonstrated the presence of antibiotic-resistant enterococci harboring virulence genes. This genotypic characterization is crucial for controlling antibiotic-resistant bacteria through the implementation of restrictive measures to safeguard public health. Acknowledgements: This work was funded by the R&D Project CAREBIO2 (Comparative assessment of antimicrobial resistance in environmental biofilms through proteomics - towards innovative theragnostic biomarkers), with reference NORTE-01-0145-FEDER-030101 and PTDC/SAU-INF/30101/2017, financed by the European Regional Development Fund (ERDF) through the Northern Regional Operational Program (NORTE 2020) and the Foundation for Science and Technology (FCT). This work was also supported by the Associate Laboratory for Green Chemistry - LAQV, which is financed by national funds from FCT/MCTES (UIDB/50006/2020 and UIDP/50006/2020).
Keywords: antimicrobial resistance, calf, colostrums, enterococci
Procedia PDF Downloads 200
395 A Descriptive Study on Comparison of Maternal and Perinatal Outcome of Twin Pregnancies Conceived Spontaneously and by Assisted Conception Methods
Authors: Aishvarya Gupta, Keerthana Anand, Sasirekha Rengaraj, Latha Chathurvedula
Abstract:
Introduction: Advances in assisted reproductive technology and the increase in the proportion of infertile couples have both contributed to the steep increase in the incidence of twin pregnancies in past decades. Maternal and perinatal complications are higher in twin than in singleton pregnancies. Studies comparing the maternal and perinatal outcomes of ART twin pregnancies versus spontaneously conceived twin pregnancies report heterogeneous results, making it unclear whether the complications are due to twin gestation per se or to the assisted reproductive techniques. The present study aims to compare maternal and perinatal outcomes between twin pregnancies conceived spontaneously and those conceived by assisted conception methods, so that targeted steps can be undertaken to improve the maternal and perinatal outcomes of twins. Objectives: To study perinatal and maternal outcomes in twin pregnancies conceived spontaneously as well as with assisted methods and to compare the outcomes between the two groups. Setting: Women delivering at JIPMER (a tertiary care institute), Pondicherry. Population: 380 women with twin pregnancies who delivered in JIPMER between June 2015 and March 2017 were included in the study. Methods: The study population was divided into two cohorts, one conceived spontaneously and the other by assisted reproductive methods. The association of various maternal and perinatal outcomes with the method of conception was assessed using the chi-square test or Student's t-test as appropriate. Multiple logistic regression analysis was done to assess the independent association of assisted conception with maternal outcomes after adjusting for age, parity, and BMI, and with perinatal outcomes after adjusting for age, parity, BMI, chorionicity, gestational age at delivery, and the presence of hypertension or gestational diabetes in the mother.
A p value of < 0.05 was considered significant. Result: There was an increased proportion of women with GDM (21% vs. 4.29%) and premature rupture of membranes (35% vs. 22.85%) in the assisted conception group, and more anemic women in the spontaneous group (71.27% vs. 55.1%). Assisted conception per se increased the incidence of GDM among twin gestations (OR 3.39, 95% CI 1.34 – 8.61) but did not influence any of the other maternal outcomes. Among the perinatal outcomes, assisted conception per se increased the risk of having very preterm (<32 weeks) neonates (OR 3.013, 95% CI 1.432 – 6.337). The mean birth weight did not differ significantly between the two groups (p = 0.429). Though there was a higher proportion of babies admitted to the NICU in the assisted conception group (48.48% vs. 36.43%), assisted conception per se did not increase the risk of admission to the NICU (OR 1.23, 95% CI 0.76 – 1.98). There was no significant difference in perinatal mortality rates between the two groups (p = 0.829). Conclusion: Assisted conception per se increases the risk of developing GDM in women with twin gestation and increases the risk of delivering very preterm babies. Hence, measures should be taken to ensure appropriate screening for GDM and suitable neonatal care in such pregnancies.
Keywords: assisted conception, maternal outcomes, perinatal outcomes, twin gestation
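An unadjusted odds ratio of the kind reported above can be computed directly from a 2x2 table with a Wald confidence interval. The counts below are hypothetical illustrations (the study's ORs were additionally adjusted by multiple logistic regression, which this sketch does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts for GDM in assisted vs. spontaneous twin gestations
or_, lower, upper = odds_ratio_ci(a=21, b=79, c=15, d=265)
print(round(or_, 2), round(lower, 2), round(upper, 2))
```

A confidence interval that excludes 1 (as for the GDM and very-preterm ORs above) indicates a statistically significant association.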
Procedia PDF Downloads 212
394 Acrylamide Concentration in Cakes with Different Caloric Sweeteners
Authors: L. García, N. Cobas, M. López
Abstract:
Acrylamide, a probable carcinogen, is formed in foods processed at high temperature (>120 ºC) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. The repeated heating of cane juice during brown sugar production could potentially form acrylamide. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake confections with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweetener (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS, and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer's yeast (2.3%). The preparations were baked for 45 min at 180 ºC. Moisture was estimated according to AOAC methods. Protein was determined by the Kjeldahl method. Ash percentage was calculated by weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in hydrolyzed raw ingredients and final confections. Carbohydrates were determined by difference and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23. One-way analysis of variance determined differences in acrylamide content and compositional analysis, with the caloric sweetener as a fixed effect. Significance levels were determined by applying Duncan's t-test (p<0.05).
P cakes showed a lower energy value than the other baked products; their sugar content was similar to that of the BS and CS cakes, with a mean crude protein of 6.1%. Acrylamide content in the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on the concentration and availability of both reducing sugars and asparagine. Beet sugar itself did not present acrylamide concentrations above the detection and quantification limits; however, the highest acrylamide content was measured in the BS cakes. This may be due to higher concentrations of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Overall, cakes made with panela showed lower acrylamide formation than products elaborated with beet sugar.
Keywords: beet sugar, cane sugar, panela, yogurt cake
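The carbohydrate-by-difference step of the compositional analysis described above is a simple closed-form calculation, and an energy value can then be estimated with the general Atwater factors. The composition values below are illustrative placeholders, not the paper's measurements:

```python
# Carbohydrate-by-difference and an Atwater energy estimate for a
# hypothetical cake sample (all composition values in g per 100 g).

def carbs_by_difference(moisture, protein, fat, ash):
    """Carbohydrates = 100 - (moisture + protein + fat + ash)."""
    return 100.0 - (moisture + protein + fat + ash)

def energy_kcal(protein, fat, carbs):
    """General Atwater factors: 4, 9 and 4 kcal/g."""
    return 4 * protein + 9 * fat + 4 * carbs

moisture, protein, fat, ash = 25.0, 6.1, 12.0, 1.2
carbs = carbs_by_difference(moisture, protein, fat, ash)
print(round(carbs, 1), round(energy_kcal(protein, fat, carbs), 1))
```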
Procedia PDF Downloads 66
393 The Confluence between Autism Spectrum Disorder and the Schizoid Personality
Authors: Murray David Schane
Abstract:
Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critically different treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of the desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Many autists engage in repetitive behaviors as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia.
Through presentations of clinical examples, the treatment of autists of the Asperger type will be shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy will be shown to be teachable; as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples will attest, the treatment strategies have significant impact.
Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions
Procedia PDF Downloads 114
392 Impact of Emotional Intelligence and Cognitive Intelligence on Radio Presenter's Performance in All India Radio, Kolkata, India
Authors: Soumya Dutta
Abstract:
This research paper aims at investigating the impact of emotional intelligence and cognitive intelligence on radio presenters' performance at All India Radio, Kolkata (India's public service broadcaster). The ancient concept of productivity is the ratio of what is produced to what is required to produce it, but the father of modern management, Peter F. Drucker (1909-2005), defined the productivity of knowledge work and knowledge workers in a new form. On the other hand, the concept of Emotional Intelligence (EI) originated back in the 1920s, when Thorndike (1920) first proposed dividing intelligence into three dimensions: abstract intelligence, mechanical intelligence, and social intelligence. The contribution of Salovey and Mayer (1990) is substantive, as they proposed a model for emotional intelligence by defining EI as part of social intelligence, which measures the ability of an individual to regulate his/her personal and others' emotions and feelings. Cognitive intelligence illustrates the specialization of general intelligence in the domain of cognition in ways that reflect experience and learning about cognitive processes such as memory. The outcomes of past research on emotional intelligence show that emotional intelligence has a positive effect on the social-mental factors of human resources; that emotional intelligence has positive effects on leaders and followers in terms of performance, results, work, and satisfaction; and that emotional intelligence has a positive and significant relationship with teachers' job performance. In this paper, we build a conceptual framework based on the theories of emotional intelligence proposed by Salovey and Mayer (1989-1990) and the compensatory model of emotional intelligence, cognitive intelligence, and job performance proposed by Stephen Cote and Christopher T. H. Miners (2006).
To investigate the impact of emotional intelligence and cognitive intelligence on radio presenters' performance, the sample consists of 59 radio presenters (considering gender, academic qualification, instructional mood, age group, etc.) from the All India Radio, Kolkata station. Questionnaires were prepared based on cognitive intelligence (henceforth called C-based and represented by C1, C2, ..., C5) as well as emotional intelligence (henceforth called E-based and represented by E1, E2, ..., E20). These were sent to the 59 respondents (presenters) for their responses. Performance scores were collected from the report of the programme executive of All India Radio, Kolkata. Linear regression has been carried out using all the E-based and C-based variables as predictor variables. The possible problem of autocorrelation has been tested with the Durbin-Watson (DW) statistic; values of this statistic, almost all within the range of 1.80-2.20, indicate the absence of any significant autocorrelation problem. The possible problem of multicollinearity has been tested with the Variance Inflation Factor (VIF); values of this statistic, around 2, indicate the absence of any significant multicollinearity problem. It is inferred that the performance scores can be statistically regressed linearly on the E-based and C-based scores, which can explain 74.50% of the variations in performance.
Keywords: cognitive intelligence, emotional intelligence, performance, productivity
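The two regression diagnostics used above can be computed directly from the fitted model's residuals and the predictor scores. This is a minimal sketch with made-up residuals and predictors (not the study's data), and the VIF helper handles only the simple two-predictor case:

```python
import math

def durbin_watson(residuals):
    """DW = sum of squared successive residual differences divided by the
    sum of squared residuals; values near 2 suggest no first-order
    autocorrelation."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

def vif_two_predictors(x1, x2):
    """For exactly two predictors, VIF = 1 / (1 - r^2), where r is their
    Pearson correlation; values near 1-2 suggest little multicollinearity."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in x1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in x2))
    r = cov / (s1 * s2)
    return 1.0 / (1.0 - r ** 2)

# Hypothetical regression residuals, ordered as in the data
residuals = [0.5, -0.3, 0.2, -0.4, 0.1, -0.2]
print(round(durbin_watson(residuals), 2))
```

With many predictors, the VIF for each predictor is instead computed from the R-squared of regressing that predictor on all the others.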
Procedia PDF Downloads 165
391 Assessment of the Environmental Compliance at the Jurassic Production Facilities towards HSE MS Procedures and Kuwait Environment Public Authority Regulations
Authors: Fatemah Al-Baroud, Sudharani Shreenivas Kshatriya
Abstract:
Kuwait Oil Company (KOC) is one of the companies engaged in gas and oil production in Kuwait. The oil and gas industry is truly global, with operations conducted in every corner of the globe, and the global community will rely heavily on oil and gas supplies. KOC has made many commitments to protect the environment affected by its operations and operational releases. As per KOC's strategy, the substantial increase in production activities will bring many challenges in managing various environmental hazards and stresses in the company. In order to handle those environmental challenges, it is essential to implement the health, safety, and environmental management system (HSEMS) effectively. By implementing the HSEMS properly, the environmental aspects of activities, products, and services are identified, evaluated, and controlled in order to (i) comply with local regulatory and other obligatory requirements; (ii) comply with company policy and business requirements; and (iii) reduce adverse environmental impacts, including adverse impacts on the company's reputation. Assessments of the Jurassic Production Facilities are being carried out as part of the KOC HSEMS procedural requirements, monitoring the implementation of the relevant HSEMS procedures in the facilities. The assessments have been done by conducting a series of theme audits using KOC's audit protocol at the JPFs. The objectives of the audits are to evaluate the compliance of the facilities with the implementation of environmental procedures and the status of the KEPA requirements at all JPFs. The facilities covered during the theme audit program are the following: (1) Jurassic Production Facility (JPF) – Sabriya; (2) Jurassic Production Facility (JPF) – East Raudhatian; (3) Jurassic Production Facility (JPF) – West Raudhatian; (4) Early Production Facility (EPF 50).
The auditing process focuses comprehensively on the application of KOC HSE MS procedures at the JPFs and their ability to reduce the resultant negative impacts of operations on the environment. A number of findings and observations were noted and highlighted in the audit reports and sent to all concerned controlling teams. The results of these audits indicated that the facilities, in general, were in line with KOC HSE procedures, and there was a commitment to documenting all HSE issues in the appropriate records and plans. Further, several control measures that minimized or reduced the environmental impact were implemented at the JPFs; for example, sulphur recovery units (SRUs) were installed. A follow-up monitoring audit will be carried out after a sufficient period of time, in conjunction with the controlling teams, in order to verify the current status of the recommendations and evaluate the contractors' performance on the actions required to preserve the environment.
Keywords: assessment of the environmental compliance, environmental and social impact assessment, kuwait environment public authority regulations, health, safety and environment management procedures, jurassic production facilities
Procedia PDF Downloads 187
390 Effect of Two Types of Shoe Insole on the Dynamics of Lower Extremities Joints in Individuals with Leg Length Discrepancy during Stance Phase of Walking
Authors: Mansour Eslami, Fereshte Habibi
Abstract:
Limb length discrepancy (LLD), or anisomelia, is defined as a condition in which paired limbs are noticeably unequal. During walking, individuals with LLD use compensatory mechanisms to dynamically lengthen the short limb and shorten the long limb to minimize the displacement of the body's center of mass and consequently reduce energy expenditure. Due to the compensatory movements created, an LLD greater than 1 cm increases the odds of lumbar problems and hip and knee osteoarthritis. Insoles are non-surgical therapies recommended to improve the walking pattern, reduce pain, and create greater symmetry between the two lower limbs. However, it is not yet clear what effect insoles have during walking on the variables related to injuries. The aim of the present study was to evaluate the effect of internal and external heel lift insoles on pelvic kinematics in the sagittal and frontal planes and on lower extremity joint moments in individuals with mild leg length discrepancy during the stance phase of walking. Biomechanical data of twenty-eight men with a structural leg length discrepancy of 10-25 mm were collected while they walked under three conditions: shoes without insoles (SH), with internal heel lift insoles (IHLI) in the shoes, and with external heel lift insoles (EHLI). The tests were performed for both the short and long legs. Pelvic kinematics and joint moments were measured with a motion capture system and a force plate. Five walking trials were performed for each condition, and the average value of five successful trials was used for further statistical analysis. Repeated measures ANCOVA with Bonferroni post hoc tests was used for between-group comparisons (p ≤ 0.05). In both the internal and external heel lift insole (IHLI, EHLI) conditions, there was a significant decrease in the peak values of lateral and anterior pelvic tilt of the long leg, hip and knee moments of the long leg, and ankle moment of the short leg (p ≤ 0.05).
Furthermore, significant increases in the peak values of lateral and anterior pelvic tilt of the short leg in the IHLI and EHLI conditions were observed as compared to the shoe-only (SH) condition (p ≤ 0.01). In addition, a significant difference was observed between the IHLI and EHLI conditions in the peak anterior pelvic tilt of the long leg and the plantar flexor moment of the short leg (p=0.04; p=0.04, respectively). Our findings indicate that both the IHLI and EHLI can play an important role in controlling excessive pelvic movements in the sagittal and frontal planes in individuals with mild LLD during walking. Furthermore, the EHLI may be more effective than the IHLI in preventing musculoskeletal injuries.
Keywords: kinematic, leg length discrepancy, shoe insole, walking
Procedia PDF Downloads 119
389 Impact of Collieries on Groundwater in Damodar River Basin
Authors: Rajkumar Ghosh
Abstract:
The industrialization of coal mining and related activities has a significant impact on groundwater in the surrounding areas of the Damodar River. The Damodar River basin, located in eastern India, is known as the "Ruhr of India" due to its abundant coal reserves and extensive coal mining and industrial operations. One of the major consequences of collieries on groundwater is the contamination of water sources. Coal mining activities often involve the excavation and extraction of coal through underground or open-pit mining methods. These processes can release various pollutants and chemicals into the groundwater, including heavy metals, acid mine drainage, and other toxic substances. As a result, the quality of groundwater in the Damodar River region has deteriorated, making it unsuitable for drinking, irrigation, and other purposes. The high concentration of heavy metals, such as arsenic, lead, and mercury, in the groundwater has posed severe health risks to the local population. Prolonged exposure to contaminated water can lead to various health problems, including skin diseases, respiratory issues, and even long-term ailments like cancer. The contamination has also affected the aquatic ecosystem, harming fish populations and other organisms dependent on the river's water. Moreover, the excessive extraction of groundwater for industrial processes, including coal washing and cooling systems, has resulted in a decline in the water table and depletion of aquifers. This has led to water scarcity and reduced availability of water for agricultural activities, impacting the livelihoods of farmers in the region. Efforts have been made to mitigate these issues through the implementation of regulations and improved industrial practices. However, the historical legacy of coal industrialization continues to impact the groundwater in the Damodar River area. 
Remediation measures, such as the installation of water treatment plants and the promotion of sustainable mining practices, are essential to restore the quality of groundwater and ensure the well-being of the affected communities. In conclusion, coal industrialization in the Damodar River surroundings has had a detrimental impact on groundwater. This research focuses on soil subsidence induced by the over-exploitation of groundwater for dewatering open-pit coal mines. Soil degradation happens in arid and semi-arid regions as a result of land subsidence in coal mining regions, which reduces soil fertility. Depletion of aquifers, contamination, and water scarcity are some of the key challenges resulting from these activities. It is crucial to prioritize sustainable mining practices, environmental conservation, and the provision of clean drinking water to mitigate the long-lasting effects of collieries on the groundwater resources of the region.
Keywords: coal mining, groundwater, soil subsidence, water table, damodar river
Procedia PDF Downloads 82
388 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method; moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method), which has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method), and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters.
After filtering, the following HRV parameters were computed from the PPG using each of the five methods and also from the ECG using the gold standard method: time-domain parameters (SDNN, pNN50, and RMSSD) and frequency-domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF), and total power (TP)). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (the correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV, and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
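The time-domain parameters listed above follow directly from an interval series, whether extracted from ECG R-peaks or from PPG fiducial points. This is a minimal sketch with a short hypothetical RR series (real recordings would be several minutes long):

```python
import math

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD and pNN50 from a series of RR intervals in milliseconds.

    SDNN: standard deviation of all intervals.
    RMSSD: root mean square of successive interval differences.
    pNN50: percentage of successive differences exceeding 50 ms.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return sdnn, rmssd, pnn50

# Hypothetical RR interval series (ms)
rr = [812, 780, 845, 790, 860, 805, 830]
sdnn, rmssd, pnn50 = hrv_time_domain(rr)
print(round(sdnn, 1), round(rmssd, 1), round(pnn50, 1))
```

The same function applied to PPG-derived pulse intervals yields the pulse-based estimates that the five methods above compare against the ECG gold standard.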
Procedia PDF Downloads 324
387 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, developed by the Centre for Medical Radiation Physics at the University of Wollongong, which measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10.
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in the axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin, in mV/cGy, was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent within the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
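The conversion from a MOSkin reading to dose, and the profile-integral idea behind the accumulated dose, can be sketched as below. The sensitivity values come from the abstract; the function names and the collimation normalization (a CTDI-style assumption) are illustrative, not the authors' procedure:

```python
import numpy as np

# Sensitivities reported in the study, in mV per cGy
SENSITIVITY_MV_PER_CGY = {"RQT 8": 9.208, "RQT 9": 7.691, "RQT 10": 6.723}

def moskin_dose_cgy(delta_v_mv, beam_quality):
    """Convert a MOSkin threshold-voltage shift (mV) to dose (cGy)."""
    return delta_v_mv / SENSITIVITY_MV_PER_CGY[beam_quality]

def accumulated_dose(z_mm, profile_cgy_per_mm, collimation_mm):
    """Integral of a translated point-detector dose profile, normalized
    by the nominal beam collimation (a CTDI-style assumption)."""
    z = np.asarray(z_mm, dtype=float)
    p = np.asarray(profile_cgy_per_mm, dtype=float)
    integral = np.sum((p[1:] + p[:-1]) / 2.0 * np.diff(z))  # trapezoid rule
    return integral / collimation_mm
```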
Procedia PDF Downloads 311
386 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation
Authors: Helen J. Spray, Karina J. Linnell
Abstract:
Local-global bias in adulthood is strongly dependent on environmental factors, and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific; whereas urbanisation promoted a global bias in urbanised Namibian women but not men, education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions which shape perceptual bias, and to date there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from across three populations: traditional Namibian, urban Namibian, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from as early as age 4 in the British sample and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias indistinguishable from that of British children; however, it did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender-by-age interaction which was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias.
Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with the aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood, and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world, it will be important to understand the processes that underpin its effects and their implications.
Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization
Procedia PDF Downloads 141
385 Impact of an Educational Intervention on Knowledge, Attitude and Practices of Community Members on Schistosomiasis in Nelson Mandela Bay
Authors: Prince S. Campbell, Janine B. Adams, Melusi Thwala, Opeoluwa Oyedele, Paula E. Melariri
Abstract:
Schistosomiasis, often known as bilharzia, is a parasitic water-borne disease caused by trematode flatworms of the genus Schistosoma. Schistosomiasis infection and prevention have been found to be influenced by a range of socio-cultural risk factors, including human characteristics (e.g., gender, age, education, knowledge, attitude, and practices), as well as environmental and economic elements. Lack of awareness of the disease may also contribute to an individual's tendency to participate in behaviours or activities that heighten their susceptibility to infection. The current study assessed community knowledge, attitude and practices (KAP) on schistosomiasis and implemented an educational intervention following pre-test interviews. A cross-sectional quasi-experimental research design was used in this quantitative study. Pre- and post-intervention interview-format surveys were conducted using a structured questionnaire, targeting individuals aged 18–65 years residing within 5 km of selected water bodies. The questionnaire contained 54 close-ended questions about schistosomiasis causes, transmission, and clinical symptoms, and the participants were interviewed face-to-face in their homes. Data were captured on QuestionPro and analyzed using Microsoft Office Excel 365 (2019) and R (version 4.3.1) software. Overall, 380 individuals completed the pre- and post-intervention assessments; 194 (51.1%) were male and 185 (48.7%) were female. A notable 91.3% of participants did not know about schistosomiasis in the pre-intervention phase; however, the mean post-intervention test score for knowledge (9.4 ± 1.4) was higher than the pre-intervention test score (2.2 ± 2.1), indicating good and improved knowledge of schistosomiasis among the participants. Furthermore, the paired-samples t-test results demonstrated that the increase in knowledge levels was statistically significant (p<0.001).
The post-intervention improvement in both practice (p<0.001) and attitude (p<0.001) levels was also statistically significant. A positive correlation (r=0.23, p<0.001) was found between knowledge and attitude in the pre-intervention stage: knowledgeable participants had a more positive attitude towards obtaining medical assistance and disease prevention. Moreover, attitudes and practices correlated negatively (r=-0.13, p=0.013) post-intervention; hence, those with positive attitudes did not engage in risky water-related practices, which was the desired outcome. The educational intervention had a favourable impact on the KAP of the study population, as the majority were able to recall the disease aetiology, symptoms, transmission pattern, and preventative measures three months post-intervention. Nevertheless, previous research has suggested that participants were unable to recall information about the disease following the intervention. Consequently, research should prioritize behavioural modification strategies that may result in a more persistent outcome in terms of the participants' knowledge, which could ultimately contribute to the development of long-term positive attitudes and practices.
Keywords: educational intervention, knowledge, attitudes and practices, schistosomiasis
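The paired-samples t statistic used above reduces to a short calculation on the score differences; a minimal sketch with made-up pre/post scores (not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic (post minus pre) and its degrees of
    freedom; the p-value would then come from the t distribution with
    n-1 degrees of freedom."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1
```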
Procedia PDF Downloads 22
384 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters
Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya
Abstract:
Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. In addressing the limitations imposed on OoC by conventional, time-consuming analytical techniques, Lab-on-Chip (LoC) devices emerge as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring complete culture media. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC.
In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration. Acknowledgments: This work was financially supported by the Catalan Government through the funding grant ACCIÓ-Eurecat (Project Traça-IMPULSENS).
Keywords: organ on chip, lab on chip, real time monitoring, biosensors
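Reading a printed sensor typically means mapping its raw output onto the validated range through a calibration curve; a minimal linear-calibration sketch (the signal values and the linearity assumption are illustrative, not taken from the study):

```python
import numpy as np

def fit_linear_calibration(raw_signal, reference_values):
    """Least-squares line mapping raw sensor output to reference values
    measured on calibration standards: reference = slope * raw + intercept."""
    slope, intercept = np.polyfit(raw_signal, reference_values, 1)
    return slope, intercept

def to_value(raw, slope, intercept):
    """Apply the calibration to convert a raw reading to a physical value."""
    return slope * np.asarray(raw, dtype=float) + intercept
```

For a glucose sensor, for example, the calibration standards would span the 0 mM to 11 mM range reported above.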
Procedia PDF Downloads 24
383 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis of open channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north over 1,550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river bed samples were retrieved, as well as soil perforations along the banks. Hence, based on a DEM from the digital ground mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in a non-structured mesh environment. The calibration process was carried out by comparison against available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and the limitations related to its pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings; notably, a significant helical flow was observed through the meander. Furthermore, the rapidly changing channel cross section, as a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
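For a 1D river model of the kind discussed above, the mean velocity in a reach is commonly estimated with Manning's equation; a sketch using the river's reported average slope, where the roughness coefficient and hydraulic radius below are illustrative guesses, not field values from the study:

```python
import math

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's equation (SI units):
    V = (1/n) * Rh**(2/3) * S**(1/2)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(slope)

# Reported average slope 0.0024; n and Rh are hypothetical inputs
v = manning_velocity(n=0.03, hydraulic_radius_m=3.0, slope=0.0024)
```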
Procedia PDF Downloads 319
382 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors
Authors: Louise Ödlund, Danica Djuric Ilic
Abstract:
Sustainable development includes many challenges related to energy use, such as (1) developing flexibility on the demand side of the electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized district heating (DH) technology as being of essential importance for reaching sustainability. Flexibility in the fuel mix, together with the possibilities of industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration, are only some of the benefits which characterize DH technology. The aim of this study is to provide an overview of the possible business strategies which would enable DH to have an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systematic approach in which DH is seen as part of an integrated system that also comprises the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs. In this way, DH can also promote the development of biofuel production technologies that are not yet developed.
Converting the energy used for running industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH) and delivering excess heat from industrial processes to the local DH systems would make industry less dependent on fossil fuels and fossil fuel-based electricity, as well as increase the energy efficiency of the industrial sector and reduce production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while at the same time increasing CHP production in the local DH systems would (1) replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and (2) reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to variations in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid.
Keywords: energy system, district heating, sustainable business strategies, sustainable development
Procedia PDF Downloads 170
381 Support for Refugee Entrepreneurs Through International Aid
Authors: Julien Benomar
Abstract:
The World Bank report published in April 2023, "Migrants, Refugees and Society", allows us first to distinguish migrants in search of economic opportunities from refugees who flee a situation of danger and choose their destination based on their immediate need for safety. Within those two categories, the report distinguishes people having professional skills adapted to the labor market of the host country from those who do not. Out of these four categories, we choose to focus our research on refugees who do not have professional skills adapted to the labor market of the host country. Given that refugees generally have no recourse to public assistance schemes and cannot count on the support of their entourage or support network, we propose to examine the extent to which external assistance, such as international humanitarian action, is likely to accompany refugees' transition to financial empowerment through entrepreneurship. To this end, we propose to carry out a case study structured in three stages: (i) an exchange with a Non-Governmental Organisation (NGO) active in supporting refugee populations from Congo and Burundi in Rwanda, enabling us to (i.a) jointly define a financial empowerment income and (i.b) learn about the content of the support measures taken for the beneficiaries of the humanitarian project; (ii) monitoring of the population of 118 beneficiaries, including 73 refugees and 45 Rwandans (reference population); (iii) a participatory analysis to identify the level of performance of the project and areas for improvement. The case study thus involved the staff of an international NGO active in helping refugees in Rwanda since 2015 and the staff of a Luxembourg NGO that has been funding this economic aid project through entrepreneurship since 2021.
The case study took place over a 48-day period between April and May 2023. The main results are of two types: (i) the need to associate indicators for monitoring the impact of the project on its indirect beneficiaries (the refugee community), and (ii) the identification of success factors making it possible to provide concrete and relevant responses to the constraints encountered. The first result made it possible to identify the following indicators: an indicator of community potential (jobs, training, or mentoring promoted by the entrepreneur's activity), an indicator of social contribution (tax paid by the entrepreneur), an indicator of resilience (savings and loan capacity generated), and, finally, an indicator of impact on social cohesion. The second result showed that, among the 7 success factors tested, the sector of activity chosen and the level of experience in the sector of the future activity stand out most clearly.
Keywords: entrepreneurship, refugees, financial empowerment, international aid
Procedia PDF Downloads 81
380 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research to acquire essential information about the flood hazard, including its scope and susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine vulnerability to flood damage. Ranking the elements according to their significance in predicting flood damage risk was done using the Analytic Hierarchy Process (AHP) and geospatial approaches. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m2, 83,331.14 m2, 476,987.15 m2, 24,247.29 m2, and 15.83 m2 of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the ROC curve demonstrates that the categorized vulnerability map achieves an accuracy (AUC) of 91.0% based on 117 sample points.
By putting into practice strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
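The AHP weights and the consistency ratio reported above (CR = 0.000866 < 0.1) follow from the principal eigenvector of the pairwise-comparison matrix; a sketch using Saaty's standard random-index table, with a small illustrative matrix that is not the study's:

```python
import numpy as np

# Saaty's random consistency index by matrix order
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
                8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights_and_cr(pairwise):
    """Criterion weights (normalized principal eigenvector) and the
    consistency ratio CR = CI / RI, where CI = (lambda_max - n)/(n - 1)."""
    M = np.asarray(pairwise, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = int(np.argmax(eigvals.real))       # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalize weights to sum to 1
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci / RANDOM_INDEX[n]
```

A perfectly consistent matrix yields lambda_max = n and hence CR = 0; values below 0.1, as in the study, are conventionally accepted.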
Procedia PDF Downloads 71
379 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods for ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring additional computationally intensive response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
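The comparison between a linear baseline and a tree ensemble described above can be sketched on synthetic data; everything here (the GMPE-like functional form, its coefficients, and the sample sizes) is an illustrative assumption, not the study's ground-motion dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 1000
mag = rng.uniform(4.0, 8.0, n)      # moment magnitude
dist = rng.uniform(1.0, 200.0, n)   # source-to-site distance, km
# Hypothetical target: ln(PGA) with near-source distance saturation,
# a nonlinearity that a plain linear model in (M, R) cannot capture
y = (1.1 * mag - 1.6 * np.log(dist + 0.2 * np.exp(0.5 * mag))
     + rng.normal(0.0, 0.3, n))

X = np.column_stack([mag, dist])
lin = LinearRegression().fit(X[:800], y[:800])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:800], y[:800])
lin_r2 = lin.score(X[800:], y[800:])   # held-out R^2, linear baseline
rf_r2 = rf.score(X[800:], y[800:])     # held-out R^2, random forest
```

On data with this kind of nonlinearity the forest typically scores higher out of sample, mirroring the study's observation that Random Forest outperforms the linear baseline when data are plentiful.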
Procedia PDF Downloads 106
378 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when torrential rain, combined with high tide and sea level rise, temporarily pushes water levels above a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour, or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than three inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effects Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI), and locally by the South Florida Water Management District (SFWMD), track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion raises awareness among local municipalities of their flood risk and gives insight into flood management actions and watershed development.
Keywords: flood risk, nuisance flooding, urban flooding, FMEA
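In FMEA terms, the risk combination described above reduces to a product of ordinal scores; a minimal sketch in which the 1-10 scale and the band thresholds are illustrative assumptions, not the study's calibrated criteria:

```python
def nuisance_flood_risk(conf_score, ponf_score):
    """Risk as the product of Consequence of Nuisance Flooding (CoNF)
    and Probability of Nuisance Flooding (PoNF) scores, each rated on
    an ordinal 1-10 FMEA-style scale."""
    if not (1 <= conf_score <= 10 and 1 <= ponf_score <= 10):
        raise ValueError("scores must lie on the 1-10 scale")
    return conf_score * ponf_score

def risk_band(score):
    """Map a risk score to a qualitative band (thresholds hypothetical)."""
    if score >= 50:
        return "high"
    if score >= 20:
        return "medium"
    return "low"
```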
Procedia PDF Downloads 100