Search results for: quasi-fermi levels

470 Interethnic Communication in Multicultural Areas: A Case Study of Intercultural Sensitivity Between Baloch and Persians in Iran

Authors: Mehraveh Taghizadeh

Abstract:

Iran is home to a diverse range of ethnic groups such as Baloch, Kurds, Persians, Lors, Arabs, and Turks. The Persian ethnicity is the largest group, while the Baloch people are considered a minority residing on the southeastern border of the country with a different language and religion. As a consequence, political discussions have often prioritized national identity and national security over Baloch ethnic identity. However, to improve intercultural understanding and reduce cultural schemas, it is crucial to decrease ethnocentrism and increase intercultural communication. Meanwhile, Kerman, a multicultural province that borders Sistan and Baluchistan, has become a destination for Baloch immigrants. By recognizing the current status of intercultural competence, we can develop effective policies for expanding intercultural communication and creating a more inclusive and peaceful society. As a result, this research aims to study the domain of intercultural sensitivity between Persians and Baloch in Kerman. Therefore, the question is: how do the Persian and Baloch ethnicities perceive each other? This study represents the first exploration of communication dynamics between Persian and Baloch individuals. Utilizing a qualitative approach, this study employs thematic analysis in conjunction with Bennett's intercultural sensitivity model. The model comprises two components: ethnocentrism, which spans from denial and defense to minimization, and ethno-relativism, which ranges from acceptance and adaptation to integration. To attain this objective, 30 individuals from the Persian and Baloch ethnicities were interviewed using a semi-structured format. The analysis suggests that the Baloch and Persians exhibit a range of intercultural sensitivities characterized by defensive and minimizing attitudes in the ethnocentrism domain, and accepting attitudes in the ethno-relativism domain. The concept of minimization involves recognizing the shared humanity and positive schemas of both groups. Furthermore, in the adaptation domain, Persians' efforts to assimilate into Baloch culture at an acceptance level are primarily focused on the civilizational dimension, including using traditional Balochi clothing designs on their clothes. The Persians hold intercultural schemas about the Baloch people, including notions of religious fanaticism, tribalism, poverty, smuggling, and a nomadic way of life. Conversely, the Baloch people hold intercultural schemas about Persians, including religious fanaticism, disdain towards the Baloch, and ethnocentrism. Both groups tend to tie ethnicity to religion and judge each other accordingly. These schemas originate in media representations and in encounters without interaction between the two ethnic groups. These findings indicate that the two groups have not reached the adaptation and integration levels of ethno-relativism. Furthermore, the results indicate that developing personal communication in multicultural environments reduces intercultural sensitivity and increases positive interactions and civilizational dialogues. People can understand each other better and perform better in their daily lives.

Keywords: intercultural communication, intercultural sensitivity, interethnic communication, Iran, Baloch, Persians

Procedia PDF Downloads 52
469 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and to improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
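
A minimal sketch of the two-stage approach described above is given below, assuming hourly data; the variable names, window lengths, network sizes and synthetic data are illustrative placeholders, not the authors' actual configuration or the Vransko/Montpellier datasets.

```python
# Two-stage LSTM forecasting sketch: (1) forecast outdoor temperature from
# local weather, (2) forecast thermal demand from past demand plus the
# predicted temperature. All sizes and data are placeholder assumptions.
import numpy as np
import tensorflow as tf

def make_windows(series, n_in=48, n_out=24):
    """Slice a (time, features) array into supervised (input, target) windows."""
    X, y = [], []
    for t in range(len(series) - n_in - n_out):
        X.append(series[t:t + n_in, :])                   # past 48 h of features
        y.append(series[t + n_in:t + n_in + n_out, 0])    # next 24 h of the target (column 0)
    return np.asarray(X), np.asarray(y)

def build_lstm(n_in, n_features, n_out):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_in, n_features)),
        tf.keras.layers.LSTM(64),          # captures non-linear temporal dependencies
        tf.keras.layers.Dense(n_out),      # 24-hour-ahead forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Synthetic stand-ins for the weather and demand series.
rng = np.random.default_rng(0)
weather = rng.normal(size=(5000, 3))                       # e.g. temperature, humidity, precipitation
demand = np.column_stack([rng.normal(size=5000), weather[:, 0]])  # demand + (predicted) temperature

# Stage 1: outdoor temperature model.
X_w, y_w = make_windows(weather)
temp_model = build_lstm(48, weather.shape[1], 24)
temp_model.fit(X_w, y_w, epochs=2, batch_size=64, verbose=0)

# Stage 2: thermal demand model, fed with past demand and predicted temperature.
X_d, y_d = make_windows(demand)
demand_model = build_lstm(48, demand.shape[1], 24)
demand_model.fit(X_d, y_d, epochs=2, batch_size=64, verbose=0)
```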

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 143
468 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities

Authors: Munqeth Othman Agha

Abstract:

The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and a sudden transformation from a pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was stratified accordingly as overlapping layers of division and inequality that were built on top of each other, creating complex horizontal and vertical divisions on economic, social, political, and ethno-sectarian bases. This was further exacerbated during the neoliberal era, which transformed the city into a sort of dual city inhabited by heterogeneous and often antagonistic social groups. Economic deprivation combined with a growing sense of marginalization and inequality across the city planted the seeds of the political instability that broke out in 2011. Unlike other popular uprisings that occupied central squares, as in Egypt and Tunisia, the Syrian uprising in 2011 took place mainly within inner streets and neighborhood squares, mobilizing more or less along the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which requires us to understand the way the city was stratified and to place this at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of neighborhoods in three cities (Homs, Dara’a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city has been stratified created various social groups within the city who have enjoyed different levels of access to life chances, material resources and social status. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state. The advent of a political opportunity is perceived differently across the city’s different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, the built environment, and the state response, determine the ability of social actors to translate the repertoire of contention into collective action, or the transformation from social actors into political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheaval in urban areas while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression and then descend into military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the Explaining-Outcome Process-Tracing method to depict the causal mechanisms that led to including or excluding different neighborhoods from each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).

Keywords: urban stratification, Syrian conflict, social movement, process tracing, divided city

Procedia PDF Downloads 73
467 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department

Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin

Abstract:

Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons 24 hours a day; it is an easily accessible place thanks to the self-sacrificing people who work in emergency departments. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible and effective test for diagnosis. As a result, laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all of the technological advances in imaging methods and the availability of computerized tomography (CT), the chest X-ray, the older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Advances in imaging methods are very convenient, but physicians should consider radiation dose, cost, and effectiveness, and imaging methods should be carefully selected and used. The aim of the study was to investigate the effectiveness of the chest X-ray in immediate diagnosis against the advancing technology by comparing the chest X-ray and chest CT scan results of patients in the emergency department. Methods: Patients who presented to Bulent Ecevit University Faculty of Medicine’s emergency department between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the information management system in which patients’ files are saved electronically in the clinic, and were retrospectively reviewed. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were recorded on the study form. CT findings were obtained from data already reported by the radiology department in the clinic. The chest X-ray was evaluated with seven questions in terms of technique and dose adequacy. Patients’ age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnosis, chest X-ray findings and chest CT findings were evaluated. Data were recorded and statistical analyses were performed using SPSS 19.0 for Windows, and p < 0.05 was accepted as statistically significant. Results: 199 patients were included in the study. The most common diagnosis was pneumonia, found in 38.2% (n=76) of all patients. The chest X-ray imaging technique was appropriate in 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in terms of determining the rates of displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air levels in the abdomen (in sections including the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis and bone fractures. Conclusions: When imaging findings of cases that needed to be diagnosed quickly were investigated, chest X-ray and chest CT findings matched at a high rate in patients with an appropriate imaging technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.

Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department

Procedia PDF Downloads 193
466 The Impact of Regulation of Energy Prices on Public Trust in Europe during Energy Crisis: A Cross-Sectional Study in the Aftermath of the Russia-Ukraine Conflict

Authors: Sempiga Olivier, Dominika Latusek-Jurczak

Abstract:

The conflict in Ukraine has had far-reaching economic consequences, not only for the countries directly involved in it but also for their trading partners and allies, and for the global economy in general. Different European Union (EU) countries, being some of Ukraine's and Russia's major trading partners, have also felt the impact of the conflict on their economies. The energy sector in particular has suffered the most, due to the fact that Russia is a huge exporter of gas and other energy sources on which European countries rely. Energy is a locomotive of the economy, and once energy prices skyrocket there are spill-over effects in other areas, causing the prices of different commodities to rise and thereby affecting people's socio-economic lifestyles. To minimise the socio-political and economic consequences of the energy crisis, the EU and individual countries have tightened their regulatory mechanisms to stop some energy firms from exploiting the crisis at the expense of the vulnerable masses. The key question is to what extent these regulatory instruments, put in place during the energy crisis, have an effect on citizen trust in the governing institutions. The question is of paramount importance after years of declining trust in the EU and in most countries in Europe. Earlier research has analysed how wars or global political risks relate to citizen trust in government and organizations, but very little empirical research has examined the relationship between regulatory instruments during times of crisis and citizen trust in government and institutions. Using data from INSEE (the French National Institute of Statistics and Economic Studies) and the European Social Survey (ESS), this study carries out a multilinear regression analysis and investigates the impact of regulation of energy prices, both by the EU and by individual countries, on citizen trust. To understand the dynamics between regulatory actions during crises and citizen trust, this study draws on the theoretical framework of institutional trust and regulatory legitimacy. Institutional trust theory posits that citizens’ trust in government and institutions is influenced by perceptions of fairness, transparency, and efficacy in governance. Regulatory legitimacy, a related concept, suggests that regulatory measures, especially in response to crises, are more effective when perceived as just, necessary, and in the public interest. Results of this cross-sectional study show that regulatory frameworks strongly affect the levels of trust, with the association varying from strong to moderate depending on country and period. This study contributes to the understanding of the vital relationship between regulatory measures implemented during crises and citizen trust in government institutions. By identifying the conditions under which trust is fostered or eroded, the findings provide policymakers with valuable insights into effective strategies for enhancing public confidence, ultimately guiding interventions that can mitigate the socio-political impacts of future energy crises.
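
The following is an illustrative sketch of the kind of regression described above; the column names, covariates and generated data are placeholders, not the actual INSEE or European Social Survey variables.

```python
# Placeholder multilinear (multiple linear) regression of institutional trust
# on an energy-price-regulation index, with country dummies as fixed effects.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "price_regulation_index": rng.uniform(0, 1, n),    # assumed strength of price regulation
    "energy_price_change": rng.normal(0.2, 0.1, n),     # relative price increase during the crisis
    "perceived_fairness": rng.uniform(0, 10, n),        # control suggested by legitimacy theory
    "country": rng.integers(0, 5, n),                   # country identifier
})
# Synthetic outcome: trust rises with regulation and fairness, falls with price shocks.
df["trust_in_institutions"] = (
    2.0 + 3.5 * df["price_regulation_index"]
    - 4.0 * df["energy_price_change"]
    + 0.3 * df["perceived_fairness"]
    + rng.normal(0, 1, n)
)

X = pd.get_dummies(
    df[["price_regulation_index", "energy_price_change", "perceived_fairness", "country"]],
    columns=["country"], drop_first=True, dtype=float,
)
X = sm.add_constant(X)
model = sm.OLS(df["trust_in_institutions"], X).fit()
print(model.summary())
```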

Keywords: energy crisis, price, regulation, Russia-Ukraine conflict, trust

Procedia PDF Downloads 11
465 Associations Between Pornography Use Motivations and Sexual Satisfaction in Gender Diverse and Cisgender Individuals in the 43-Country International Sex Survey

Authors: Aurélie Michaud, Émilie Gaudet, Mónika Koós, Léna Nagy, Zsolt Demetrovics, Shane W. Kraus, Marc N. Potenza, Beáta Bőthe

Abstract:

Pornography use is prevalent among adults worldwide. Prior studies have assessed the associations between pornography use frequency and sexual satisfaction in cisgender and heterosexual individuals, with mixed results. However, measuring pornography use solely by pornography use frequency is problematic, as it can lead to disregarding important contextual factors that may be related to pornography use’s potential effects. Pornography use motivations (PUMs) represent key predictors of sexual behaviors, yet their associations with different indicators of sexual wellbeing have yet to be extensively studied. This cross-cultural study examined the links between the eight PUMs most often reported in the general population (i.e., sexual pleasure, sexual curiosity, emotional distraction or suppression, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration) and sexual satisfaction in gender diverse and cisgender individuals. Given the lack of scientific data on associations between individuals’ PUMs and sexual satisfaction, these links were examined in an exploratory manner. A total of 43 countries from five continents were included in the International Sex Survey (ISS). A secure online platform was used to collect self-report, anonymous data from 82,243 participants (39.6% men, 57% women, 3.4% gender diverse individuals; M = 32.4 years, SD = 12.5). Gender-based differences in levels of the sexual pleasure, sexual curiosity, emotional distraction, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration PUMs were examined using one-way ANOVAs. Then, for each gender group, the associations between each PUM and sexual satisfaction were examined using multiple linear regression, controlling for frequency of masturbation. One-way ANOVAs indicated significant differences between men, women, and gender diverse individuals on all PUMs. For the sexual pleasure, sexual curiosity, fantasy, boredom avoidance, lack of sexual satisfaction, emotional distraction, and stress reduction PUMs, men showed the highest scores, followed by gender-diverse individuals and women. However, for self-exploration, gender-diverse individuals had higher average scores than men. For all PUMs, women's average scores were the lowest. After controlling for frequency of masturbation, for all genders, sexual pleasure, sexual curiosity and boredom avoidance were significant positive predictors of sexual satisfaction, while the lack of sexual satisfaction PUM was a significant negative predictor. The stress reduction and self-exploration PUMs were significant positive predictors of sexual satisfaction, and fantasy was a significant negative predictor, but only for women. Findings highlight important gender differences with regard to the main motivations underlying pornography use and their relations to sexual satisfaction. While men and gender diverse individuals show similar motivation profiles, women report a particularly unique experience, with fantasy, stress reduction and self-exploration being associated with their sexual satisfaction. This work outlines the importance of considering the role of pornography use motivations when studying the links between pornography viewing and sexual well-being, and may provide a basis for gender-based considerations when working with individuals seeking help for their pornography use or sexual satisfaction.
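
Below is a rough sketch of the analysis pipeline described above (one-way ANOVA across gender groups, then per-gender regression of sexual satisfaction on a motivation while controlling for masturbation frequency); the data and column names are illustrative only and cover a single PUM.

```python
# Placeholder pipeline: ANOVA by gender on one PUM, then per-gender OLS of
# sexual satisfaction on that PUM controlling for masturbation frequency.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 900
df = pd.DataFrame({
    "gender": rng.choice(["men", "women", "gender_diverse"], n),
    "pum_sexual_pleasure": rng.uniform(1, 7, n),   # one motivation scale (placeholder)
    "masturbation_freq": rng.integers(0, 30, n),   # control variable
})
df["sexual_satisfaction"] = 3 + 0.4 * df["pum_sexual_pleasure"] + rng.normal(0, 1, n)

# One-way ANOVA: does the motivation score differ across the three gender groups?
groups = [g["pum_sexual_pleasure"].values for _, g in df.groupby("gender")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Per-gender regression, controlling for frequency of masturbation.
for gender, sub in df.groupby("gender"):
    X = sm.add_constant(sub[["pum_sexual_pleasure", "masturbation_freq"]])
    res = sm.OLS(sub["sexual_satisfaction"], X).fit()
    print(gender, round(float(res.params["pum_sexual_pleasure"]), 3))
```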

Keywords: pornography, sexual satisfaction, cross-cultural, gender diversity

Procedia PDF Downloads 107
464 Green Space and Their Possibilities of Enhancing Urban Life in Dhaka City, Bangladesh

Authors: Ummeh Saika, Toshio Kikuchi

Abstract:

Population growth and urbanization are global phenomena. With the rapid progress of technology, many cities in the international community are facing serious problems of urbanization. There is no doubt that urbanization will proceed to have a significant impact on the ecology, economy and society at local, regional, and global levels. The inhabitants of Dhaka city suffer from a lack of proper urban facilities. Green spaces are needed for the different functional and leisure activities of urban dwellers. With growing densification, a number of green spaces in Dhaka city are being transformed into open space. As a result, the greenery of the city decreases gradually. Moreover, the existing green space is frequently threatened by encroachment. The role of green space, both at the community and city level, is important for improving the natural environment and social ties for future generations. Therefore, it seems that green space needs to be more effective for public interaction. The main objective of this study is to address the effectiveness of urban green space (urban parks) in Dhaka City. Two approaches are selected to fulfill the study: firstly, analyzing the long-term spatial changes of urban green space using GIS and, secondly, investigating the relationship of the urban park network with the physical and social environment. The case study site covers eight urban parks of the Dhaka metropolitan area of Bangladesh. Two aspects (physical and social) are applied in this study. For the physical aspect, satellite images and aerial photos of different years are used to find out the changes of urban parks. For the social aspect, the methods used are questionnaire surveys, interviews, observation, photographs, sketches and previous information about the parks, to analyze the social environment of the parks. After all data were processed with descriptive statistics, the results are shown on maps using GIS. According to physical size, the parks of Dhaka city are classified into four types: small, medium, large and extra large parks. The observed results showed that the physical and social environment of urban parks varies with their size. In small parks the physical environment is moderate, owing to new tree plantation and area expansion. However, in medium parks the physical environment is poor; for example, trees decrease and exposed soil increases. On the other hand, the physical environment of large and extra large parks is in good condition because of plenty of vegetation and good management. Based on the social environment, in small parks people mainly come from the surrounding area and the parks are used mainly as waiting places. In medium parks, people come from different places to attend various occasions. In large and extra large parks, people come from every part of the city area for tourism purposes. Urban parks are an important source of green space. They influence both the physical and social environment of the urban area. Nowadays, green space gradually decreases and is transformed into open space. The findings of this research reveal that changes in urban parks influence both the physical and social environment and also have an impact on urban life.

Keywords: physical environment, social environment, urban life, urban parks

Procedia PDF Downloads 430
463 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study was aimed at clarifying the relationship between general intellectual abilities and efficiency in a free recall task and a rhymed word generation task after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in a cognitive task and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words. Each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair. Thus, the words' semantics was processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or not rhymed, but this phonemic characteristic of the stimuli (rhymed vs. non-rhymed words) was processed incidentally. Then participants were asked to produce as many rhymes as they could to new words. The stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. In the end, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall compared to the peripheral stimuli. In addition, all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli compared to focal ones. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. Thus, the information that was processed incidentally had a supplemental influence on the efficiency of stimuli processing in the free recall as well as in the word generation task. Different patterns of correlations between intellectual abilities and efficiency in processing the different stimuli were revealed in both tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli, but it was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. As for the rhymed word generation task, verbal intelligence correlated negatively with generation of focal stimuli and correlated positively with generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks. Thus, incidental information processing appeared to be crucial for subsequent cognitive performance. Secondly, it was demonstrated that incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the "Grant of the President of RF for young PhD scientists" (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 231
462 A Comparison of Two and Three Dimensional Motion Capture Methodologies in the Analysis of Underwater Fly Kicking Kinematics

Authors: Isobel M. Thompson, Dorian Audot, Dominic Hudson, Martin Warner, Joseph Banks

Abstract:

The underwater fly kick is an essential skill in swimming, which can have a considerable impact on overall race performance in competition, especially in sprint events. Reduced wave drag acting upon the body under the surface means that the underwater fly kick will potentially be the fastest the swimmer travels throughout the race. It is therefore critical to understand fly kicking techniques and to determine the biomechanical factors involved in performance. Most previous studies assessing fly kick kinematics have focused on two-dimensional analysis; therefore, the three-dimensional elements of underwater fly kick techniques are not well understood. Those studies that have investigated fly kicking techniques using three-dimensional methodologies have not reported full three-dimensional kinematics for the techniques observed, choosing to focus on one or two joints. No direct comparison has been completed of the results obtained using two-dimensional and three-dimensional analysis, and of how these different approaches might affect the interpretation of subsequent results. The aim of this research is to quantify the differences in kinematics observed in underwater fly kicks obtained from both two- and three-dimensional analyses of the same test conditions. In order to achieve this, a six-camera underwater Qualisys system was used to develop an experimental methodology suitable for assessing the kinematics of swimmers' starts and turns. The cameras, capturing at a frequency of 100 Hz, were arranged along the side of the pool, spaced equally up to 20 m, creating a capture volume of 7 m x 2 m x 1.5 m. Within the measurement volume, error levels were estimated at 0.8%. Prior to the pool trials, participants completed a landside calibration in order to define joint center locations, as certain markers became occluded once the swimmer assumed the underwater fly kick position in the pool. Thirty-four reflective markers were placed on key anatomical landmarks, 9 of which were then removed for the pool-based trials. The fly-kick swimming conditions included in the analysis are as follows: maximum effort prone, 100 m pace prone, 200 m pace prone, 400 m pace prone, and maximum pace supine. All trials were completed from a push start to 15 m to ensure consistent kick cycles were captured. Both two-dimensional and three-dimensional kinematics are calculated from joint locations, and the results are compared. Key variables reported include kick frequency and kick amplitude, as well as full angular kinematics of the lower body. Key differences in these variables obtained from two-dimensional and three-dimensional analysis are identified. Internal rotation (up to 15º) and external rotation (up to -28º) were observed using three-dimensional methods. Abduction (5º) and adduction (15º) were also reported. These motions are not observed in the two-dimensional analysis. The results also give an indication of the different techniques adopted by swimmers at various paces and orientations. The results of this research provide evidence of the strengths of both two-dimensional and three-dimensional motion capture methods in underwater fly kick, highlighting limitations which could affect the interpretation of results from both methods.
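
As an illustration of how two of the reported variables could be extracted, the sketch below estimates kick frequency and amplitude from a single marker trajectory, assuming 100 Hz capture; the ankle signal is synthetic, not the Qualisys data.

```python
# Kick frequency from peak detection on a vertical marker trajectory, and
# kick amplitude as peak-to-peak displacement. Signal and numbers are assumed.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                    # capture frequency (Hz)
t = np.arange(0, 5, 1 / fs)
ankle_z = (0.15 * np.sin(2 * np.pi * 2.2 * t)
           + 0.01 * np.random.default_rng(3).normal(size=t.size))

peaks, _ = find_peaks(ankle_z, distance=int(0.25 * fs))        # one peak per kick cycle
kick_freq = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])    # cycles per second
kick_amplitude = ankle_z.max() - ankle_z.min()                 # peak-to-peak displacement (m)
print(f"kick frequency ~ {kick_freq:.2f} Hz, amplitude ~ {kick_amplitude:.3f} m")
```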

Keywords: swimming, underwater fly kick, performance, motion capture

Procedia PDF Downloads 136
461 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States

Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss

Abstract:

Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and were verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories: • Diagnosis: costs of diagnosis using DISCERN™ and the CDP. • False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication. • True Positive (TP) diagnosis: AD medication costs; cost from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis, and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis. • False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD. A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces the utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
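
The sketch below illustrates the bookkeeping structure of such a budget-impact comparison; every number and rate is a placeholder assumption (the true-positive timing category is omitted for brevity), not a value reported by the study.

```python
# Simplified budget-impact structure: diagnosis cost plus costs driven by
# false negatives and false positives, compared between two pathways.
# All parameters below are hypothetical placeholders.
tests_per_year = 25_000              # assumed tested members in a 1M-member plan
cost = {
    "discern_test": 750.0,           # assumed cost per DISCERN-type test
    "cdp_workup": 550.0,             # assumed cost of the current diagnostic pathway
    "false_negative_care": 6_000.0,  # assumed avoidable annual cost of a missed diagnosis
    "ad_medication": 1_200.0,        # assumed annual medication cost
}
rate = {"fn_cdp": 0.25, "fn_new": 0.10, "fp_cdp": 0.12, "fp_new": 0.05}  # assumed error rates

def pathway_cost(test_cost, fn_rate, fp_rate):
    """Annual cost of one diagnostic pathway for the tested cohort."""
    return tests_per_year * (
        test_cost
        + fn_rate * cost["false_negative_care"]   # care costs driven by missed diagnoses
        + fp_rate * cost["ad_medication"]          # medication given to patients without AD
    )

budget_impact = (pathway_cost(cost["discern_test"], rate["fn_new"], rate["fp_new"])
                 - pathway_cost(cost["cdp_workup"], rate["fn_cdp"], rate["fp_cdp"]))
print(f"annual budget impact (negative = savings): ${budget_impact:,.0f}")
```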

Keywords: Alzheimer’s disease, budget, dementia, diagnosis

Procedia PDF Downloads 139
460 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.
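
As an illustration of the invariant-based approach described above, a representative strain-energy split for a compressible, transversely isotropic material could take the following form; the decomposition below is a generic sketch, not the authors' specific model.

```latex
% Generic invariant-based strain energy for a compressible, transversely
% isotropic hyperelastic material (illustrative form only).
\begin{align}
  W(\mathbf{C}, \mathbf{a}_0)
    &= W_{\mathrm{vol}}(J) + W_{\mathrm{iso}}(\bar{I}_1) + W_{\mathrm{aniso}}(\bar{I}_4), \\
  \bar{I}_1 &= J^{-2/3}\,\operatorname{tr}\mathbf{C}, \qquad
  \bar{I}_4 = J^{-2/3}\,\mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0, \qquad
  J = \sqrt{\det \mathbf{C}},
\end{align}
```

Here $\mathbf{C}$ is the right Cauchy-Green tensor and $\mathbf{a}_0$ the preferred (orientation) direction; in a semi-numerical formulation of this kind, $W_{\mathrm{iso}}$ and $W_{\mathrm{aniso}}$ would be identified from a uniaxial tensile test (isotropy) or from tests in the principal directions (transverse isotropy or orthotropy), consistent with the reduced-invariant idea described above.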

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 618
459 Inputs and Outputs of Innovation Processes in the Colombian Services Sector

Authors: Álvaro Turriago-Hoyos

Abstract:

Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee) whilst the industrial sector has performed relatively poorly. Such developments highlight the potential of the innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between inputs, which at the same time are internal sources of innovation (such as R&D activities), and external sources that are improved by technology acquisition. The outputs are basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run Probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spill-overs derived from interactions of the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. A short summary of the results obtained leads to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors for the description of the innovative process at the firm level. The model’s outcomes show a positive impact of both R&D and technology acquisition investment on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. It is possible to establish that Health Services, Education, Computer, Wholesale Trade, and Financial Intermediation are the ISIC sectors that report the highest frequencies among the considered set of firms. Those five sectors, of the sixteen considered, in all cases explained more than half of the total of all kinds of innovations. Product innovation, followed by marketing innovation, gets the highest results. Distinguishing the same set of firms by size and by membership in the high- and low-tech services sectors shows that the larger the firm, the larger the number of innovations, and also that high-tech firms always show a better innovation performance.
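
The sketch below illustrates a knowledge-production-function style Probit of the kind described above, relating innovation inputs to the probability of introducing an innovation; the variables and data are placeholders, not the Colombian National Innovation Survey fields.

```python
# Placeholder Probit: probability of introducing a product innovation as a
# function of R&D, technology acquisition, cooperation, and firm size.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1200
df = pd.DataFrame({
    "rd_investment": rng.exponential(1.0, n),       # internal R&D (input)
    "tech_acquisition": rng.exponential(0.5, n),    # external technology acquisition (input)
    "coop_customers": rng.integers(0, 2, n),        # cooperation agreement dummy (spill-overs)
    "firm_size": rng.normal(0, 1, n),               # e.g. log employees
})
latent = (-0.5 + 0.8 * df["rd_investment"] + 0.6 * df["tech_acquisition"]
          + 0.4 * df["coop_customers"] + 0.3 * df["firm_size"] + rng.normal(0, 1, n))
df["product_innovation"] = (latent > 0).astype(int)  # output: any product innovation

X = sm.add_constant(df[["rd_investment", "tech_acquisition", "coop_customers", "firm_size"]])
probit = sm.Probit(df["product_innovation"], X).fit(disp=False)
print(probit.get_margeff().summary())                # marginal effects on innovation probability
```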

Keywords: Colombia, determinants of innovation, innovation, services sector

Procedia PDF Downloads 268
458 The South African Polycentric Water Resource Governance-Management Nexus: Parlaying an Institutional Agent and Structured Social Engagement

Authors: J. H. Boonzaaier, A. C. Brent

Abstract:

South Africa, a water-scarce country, experiences the phenomenon that its life-supporting natural water resources are seriously threatened by the users that are totally dependent on them. South Africa is globally applauded for having some of the best and most progressive water laws and policies. There are, however, growing concerns regarding natural water resource quality deterioration and a critical void in the management of natural resources and compliance with policies, due to increasing institutional uncertainties and failures. These are in accordance with the concerns of many South African researchers and practitioners who call for a change in paradigm from talk to practice and a more constructive, practical approach to governance challenges in the management of water resources. A qualitative theory-building case study through longitudinal action research was conducted from 2014 to 2017. The research assessed whether a strategically positioned institutional agent can be parlayed to facilitate and execute water resource management (WRM) on the catchment level by engaging multiple stakeholders in a polycentric setting. Through a critical realist approach, a distinction was made between ex ante self-deterministic human behaviour in the realist realm, and ex post governance-management in the constructivist realm. A congruence analysis, including Toulmin’s method of argumentation analysis, was utilised. The study evaluated the unique case of a self-steering local water management institution, the Impala Water Users Association (WUA), in the Pongola River catchment in the northern part of the KwaZulu-Natal Province of South Africa. Exploiting prevailing water resource threats, it expanded its ancillary functions from 20,000 to 300,000 ha. Embarking on WRM activities, it addressed natural water system quality assessments, social awareness, knowledge support, and threats such as soil erosion, waste and effluent into water systems, coal mining, and water security dimensions, through structured engagement with 21 different catchment stakeholders. By implementing a proposed polycentric governance-management model on a catchment scale, the WUA managed to fill the void. It developed a foundation and capacity to protect the resilience of the natural environment that is critical for freshwater resources, to ensure long-term water security of the Pongola River basin. Further work is recommended on appropriate statutory delegations, mechanisms of sustainable funding, sufficient penetration of knowledge to local levels to catalyse behaviour change, incentivised support from professionals, back-to-back expansion of WUAs to alleviate scale and cost burdens, and the creation of catchment data monitoring and compilation centres.

Keywords: institutional agent, water governance, polycentric water resource management, water resource management

Procedia PDF Downloads 139
457 Wildlife Communities in the Service of Extensively Managed Fishpond Systems – Advantages of a Symbiotic Relationship

Authors: Peter Palasti, Eva Kerepeczki

Abstract:

Extensive fish farming is one of the most traditional forms of aquaculture in Europe, usually practiced in large pond systems with earthen beds, where the growth of fish is based on natural feed and supplementary foraging. These farms have semi-natural environmental conditions, sustaining diverse wildlife communities that have complex effects on fish production and also provide a livelihood for many wetland-related taxa. Based on their characteristics, these communities could be sources of various ecosystem services (ESs) that could also enhance the value and enable the multifunctional use of these artificially constructed and maintained production zones. To identify and estimate the whole range of the wildlife's contribution, we conducted an integrated assessment in an extensively managed pond system in Biharugra, Hungary, where we studied 14 previously revealed ESs: fish and reed production, water storage, water and air quality regulation, CO2 absorption, groundwater recharge, aesthetics, recreational activities, inspiration, education, scientific research, and the presence of semi-natural habitats and useful/protected species. ESs were collected through structured interviews with the local experts of all major stakeholder groups, in which we also gathered information about the known forms, levels (none, low, high) and orientations (positive, negative) of the contributions of the wildlife community. After that, a quantitative analysis was carried out: we calculated the total mean value of the services being used between 2014-16 and then estimated the value and percentage of the contributions. For the quantification, we mainly used biophysical indicators with the available data and the empirical knowledge of the local experts. During the interviews, 12 of the previously listed services (85%) were mentioned to be related to the wildlife community, consisting of 5 fully dependent ESs (e.g., recreation, reed production) and seven partially dependent ESs (e.g., inspiration, CO2 absorption) from our list. The orientation of the contributions was said to be positive almost every time; however, in the case of fish production, the feeding habit of some wild species (Phalacrocorax carbo, Lutra lutra) caused significant losses in fish stocks in the study period. During the biophysical assessment, we calculated the total mean value of the services and quantified the aid of the wildlife community to the following services: fish and reed production, recreation, CO2 absorption, and the presence of semi-natural habitats and wild species. The combined results of our interviews and biophysical evaluations showed that the presence of the wildlife community not only greatly increased the productivity of the fish farms in Biharugra (with ~53% of the natural yield generated by planktonic and benthic communities) but also enhanced the multifunctionality of the system by expanding the quality and number of its services. With these abilities, extensively managed fishponds could play an important role in the future as refugia for wetland-related services and species threatened by the effects of global warming.

Keywords: ecosystem services, fishpond systems, integrated assessment, wildlife community

Procedia PDF Downloads 118
456 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease the transport losses even more, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network by a fifth-generation one and to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to the global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
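
A back-of-the-envelope sketch of the pump-control difficulty mentioned above: for the same delivered heat, a smaller network temperature difference requires a larger mass flow rate, and for a fixed pipe network pumping power grows roughly with the cube of the flow. All numbers below are assumptions, not campus data.

```python
# Illustrative comparison of mass flow and pump power for a third- versus
# fifth-generation network at equal heat demand. All parameters are assumed.
cp = 4186.0            # J/(kg*K), specific heat of water
Q_demand = 2.0e6       # W, heat delivered to the buildings (assumed)

def mass_flow(Q, dT):
    """Mass flow rate from Q = m_dot * cp * dT."""
    return Q / (cp * dT)

def pump_power(m_dot, m_ref, P_ref):
    # For a fixed pipe network, pressure drop scales ~ flow^2, so power ~ flow^3.
    return P_ref * (m_dot / m_ref) ** 3

m3 = mass_flow(Q_demand, dT=30.0)     # third-generation network, e.g. 90/60 degC
m5 = mass_flow(Q_demand, dT=10.0)     # fifth-generation network, e.g. 25/15 degC
P3 = 15_000.0                         # W, reference pump power for the 3rd-gen case (assumed)
P5 = pump_power(m5, m3, P3)
print(f"mass flow: {m3:.1f} -> {m5:.1f} kg/s, pump power: {P3/1e3:.0f} -> {P5/1e3:.0f} kW")
```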

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 85
455 Preparation, Characterization and Photocatalytic Activity of a New Noble Metal Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts

Authors: Ewelina Grabowska, Martyna Marchelek

Abstract:

Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsic wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization, since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate reduces the photoelectric conversion efficiency of TiO2. In order to overcome the above drawbacks and modify the electronic structure of TiO2, some semiconductors (e.g. CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been proved that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the photocatalyst because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2. Conversely, the photogenerated holes transfer from the valence band of TiO2 to that of SrTiO3. The photogenerated charge carriers can then be efficiently separated by these processes, resulting in the enhancement of the photocatalytic properties of the photocatalyst. Additionally, one of the methods for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag and Pd) deposited on the semiconductor surface. The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of the surface electron and interfacial electron transfer; (2) some energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing the electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Rt, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. Fluorescence of an irradiated coumarin solution was used as a method of ˙OH radical detection. Coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection, and not for determining the concentration of hydroxyl radicals.

Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation

Procedia PDF Downloads 222
454 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and the frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both the Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), with both measures increasing in the winter and decreasing in the summer. On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in whom triglycerides did not show rhythmicity. The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in an inverse way: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by climate parameter changes. Thus, new analyses are underway to better elucidate these mechanisms, as well as the variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
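
For reference, a minimal cosinor-style fit of the kind mentioned above regresses the series on sine and cosine terms of a known period; the series here is synthetic, not the Campinas lipid data.

```python
# Single-component cosinor fit: y = MESOR + A*cos(2*pi*t/T + phi), estimated
# by linear regression on cos/sin terms. Data below are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
t = np.arange(0, 8 * 365)                         # days over an 8-year window
period = 365.25                                    # assumed annual rhythm
ldl = 120 + 8 * np.cos(2 * np.pi * (t - 200) / period) + rng.normal(0, 5, t.size)

X = sm.add_constant(np.column_stack([
    np.cos(2 * np.pi * t / period),
    np.sin(2 * np.pi * t / period),
]))
fit = sm.OLS(ldl, X).fit()
mesor = fit.params[0]                              # rhythm-adjusted mean
amplitude = np.hypot(fit.params[1], fit.params[2])
acrophase = np.arctan2(-fit.params[2], fit.params[1])  # radians, timing of the peak
print(f"MESOR={mesor:.1f}, amplitude={amplitude:.1f}, acrophase={acrophase:.2f} rad")
```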

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 118
453 Bioleaching of Precious Metals from an Oil-fired Ash Using Organic Acids Produced by Aspergillus niger in Shake Flasks and a Bioreactor

Authors: Payam Rasoulnia, Seyyed Mohammad Mousavi

Abstract:

Heavy fuel oil firing power plants produce huge amounts of ash as solid waste, which seriously needs to be managed and processed. Recycling the precious metals V and Ni from these oil-fired ashes, which are considered secondary sources for metals recovery, not only has great economic importance for use in industry, but is also noteworthy from an environmental point of view. Vanadium is an important metal that is mainly used in the steel industry because of its physical properties of hardness, tensile strength, and fatigue resistance. It is also utilized in oxidation catalysts, titanium–aluminum alloys and vanadium redox batteries. In the present study, bioleaching of vanadium and nickel from an oil-fired ash sample was conducted using the Aspergillus niger fungus. The experiments were carried out using the spent-medium bioleaching method in both Erlenmeyer flasks and a bubble column bioreactor, in order to compare them. In spent-medium bioleaching the solid waste is not in direct contact with the fungus; consequently, fungal growth is not retarded and maximum organic acids are produced. In this method the metals are leached by the biogenically produced organic acids present in the medium. In the shake flask experiments the fungus was cultured for 15 days, where the maximum production of organic acids was observed, while in the bubble column bioreactor experiments a 7-day fermentation period was applied. The amounts of produced organic acids were measured using high performance liquid chromatography (HPLC), and the results showed that, depending on the fermentation period and the scale of the experiments, the fungus has different major lixiviants. In the flask tests, citric acid was the main organic acid produced by the fungus, and the other organic acids, including gluconic, oxalic, and malic, were excreted in much lower concentrations, while in the bioreactor oxalic acid was the main lixiviant and was produced considerably. In the Erlenmeyer flasks, during 15 days of fermentation of Aspergillus niger, 8080 ppm citric acid and 1170 ppm oxalic acid were produced, while in the bubble column bioreactor, over 7 days of fungal growth, 17185 ppm oxalic acid and 1040 ppm citric acid were secreted. The leaching tests using the spent media obtained from both fermentation experiments were performed under the same conditions: a leaching duration of 7 days, a leaching temperature of 60 °C and a pulp density of up to 3% (w/v). The results revealed that in the Erlenmeyer flask experiments 97% of V and 50% of Ni were extracted, while using the spent medium produced in the bubble column bioreactor, V and Ni recoveries of 100% and 33%, respectively, were achieved. These recovery yields indicate that at both scales almost all the vanadium can be recovered, while nickel recovery was lower. With the bioreactor spent medium the nickel recovery yield was lower than that obtained from the flask experiments, which could be due to precipitation of some of the Ni in the presence of the high levels of oxalic acid present in that spent medium.
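As an illustration of how recovery yields of this kind are typically computed, the sketch below relates a measured leachate concentration to the metal initially fed at a given pulp density; the ash grade and leachate concentration used are hypothetical placeholders, not values from the paper.

```python
# Hedged illustration of a leaching recovery-yield calculation.

def recovery_percent(leachate_mg_per_l: float,
                     pulp_density_g_per_l: float,
                     metal_in_ash_mg_per_g: float) -> float:
    """Fraction (%) of the metal originally present in the ash that ended up in solution."""
    metal_fed = pulp_density_g_per_l * metal_in_ash_mg_per_g   # mg of metal per litre of medium
    return 100.0 * leachate_mg_per_l / metal_fed

# Example at the 3% (w/v) pulp density used in the study (30 g ash per litre),
# assuming a hypothetical ash grade of 50 mg V per g and 1455 mg/L V measured in the leachate:
print(f"V recovery ~ {recovery_percent(1455, 30, 50):.0f}%")   # ~ 97%
```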

Keywords: Aspergillus niger, bubble column bioreactor, oil-fired ash, spent-medium bioleaching

Procedia PDF Downloads 229
452 Relationship between Glycated Hemoglobin in Adolescents with Type 1 Diabetes Mellitus and Parental Anxiety and Depression

Authors: Evija Silina, Maris Taube, Maksims Zolovs

Abstract:

Background: Type 1 diabetes mellitus (T1D) is the most common chronic endocrine pathology in children. The management of type 1 diabetes requires a strict diet, physical activity, lifelong insulin therapy, and proper self-monitoring of blood glucose; it is usually complicated and, therefore, may result in a variety of psychosocial problems for children, adolescents, and their families. Metabolic control of the disease is determined by glycated haemoglobin (HbA1c), the main criterion for diabetes compensation. A correlation was observed between anxiety and depression levels and glycaemic control in many previous studies. It is assumed that anxiety and depression symptoms negatively affect glycaemic control. Parental psychological distress was associated with higher child self-report of stress and depressive symptoms, and it had negative effects on diabetes management. Objective: The main objective of this paper is to evaluate the relationship between parental mental health conditions (depression and anxiety) and metabolic control of their adolescents with T1D. Methods: This cross-sectional study recruited adolescents with T1D (N=251) and their parents (N=251). The respondents completed questionnaires. The 7-item Generalized Anxiety Disorder (GAD-7) scale measured anxiety level; the Patient Health Questionnaire – 9 (PHQ-9) measured depressive symptoms. Glycaemic control of patients was assessed using the last glycated haemoglobin (HbA1c) values. GLM mediation analysis was performed to determine the potential mediating effect of the parents' mental health conditions (depression and anxiety) on the relationship between the mental health conditions (depression and anxiety) of the child and the level of glycated haemoglobin (HbA1c). To test the significance of the mediated effect (ME) for non-normally distributed data, bootstrapping procedures (10,000 bootstrapped samples) were used. Results: 502 respondents were eligible for screening to detect anxiety and depression symptoms. Mediation analysis was performed to assess the mediating role of parent GAD-7 in the linkage between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9). The results revealed that the total effect of child GAD-7 (B = 0.479, z = 4.30, p < 0.001) on HbA1c was significant, but the total effect of child PHQ-9 (B = 0.166, z = 1.49, p = 0.135) was not significant. With the inclusion of the mediating variable (parent GAD-7), the impact of child GAD-7 on HbA1c was found to be insignificant (B = 0.113, z = 0.98, p = 0.326), and the impact of child PHQ-9 on HbA1c was also found to be insignificant (B = 0.068, z = 0.74, p = 0.458). The indirect effect of child GAD-7 on HbA1c through parent GAD-7 was found to be significant (B = 0.366, z = 4.31, p < 0.001), and the indirect effect of child PHQ-9 on HbA1c through parent GAD-7 was also found to be significant (B = 0.098, z = 2.56, p = 0.010). This indicates that the relationship between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9) is fully mediated by parent GAD-7. Conclusion: The main result suggests that glycated haemoglobin in adolescents with type 1 diabetes is related to adolescents' mental health via parents' anxiety. This means that parents' anxiety plays a more significant role in the level of glycated haemoglobin in adolescents than the adolescents' own depression and anxiety.
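The bootstrap test of an indirect effect described above can be sketched as follows; the data are simulated, and the simple two-regression estimator is only a schematic stand-in for the GLM mediation analysis actually used in the study.

```python
# Sketch of a percentile-bootstrap test of an indirect (mediated) effect.
import numpy as np

rng = np.random.default_rng(0)
n = 251
child_gad = rng.normal(8, 4, n)                         # hypothetical child GAD-7 scores
parent_gad = 0.6 * child_gad + rng.normal(0, 3, n)      # mediator (parent GAD-7)
hba1c = 7 + 0.35 * parent_gad + rng.normal(0, 1, n)     # outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                          # x -> mediator path
    X = np.column_stack([np.ones_like(x), x, m])        # y ~ x + m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]         # mediator's partial coefficient
    return a * b

boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(child_gad[idx], parent_gad[idx], hba1c[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(child_gad, parent_gad, hba1c):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```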

Keywords: type 1 diabetes, adolescents, parental diabetes-specific mental health conditions, glycated haemoglobin, anxiety, depression

Procedia PDF Downloads 78
451 Molecular Dynamics Simulation Study of Sulfonated Polybenzimidazole Polymers as Promising Forward Osmosis Membranes

Authors: Seyedeh Pardis Hosseini

Abstract:

With the worsening scarcity of clean and affordable water in many countries, wastewater treatment has been chosen as a viable method to produce freshwater for various uses. Even though reverse osmosis dominates the wastewater treatment market, forward osmosis (FO) processes have significant advantages, such as potentially using a renewable and low-grade energy source and improving water quality. FO is an osmotically driven membrane process that uses a highly concentrated draw solution and a relatively dilute feed solution across a semi-permeable membrane. Among the many novel FO membranes that have been introduced over the past decades, polybenzimidazole (PBI) membranes, a class of aromatic heterocyclic-based polymers, have shown high thermal and chemical stability because of their unique chemical structure. However, the studies reviewed indicate that the hydrophilicity of PBI membranes is comparatively low. Hence, there is an urgent need to develop novel FO membranes with modified PBI polymers to promote hydrophilicity. A few studies have been undertaken to improve PBI hydrophilicity by fabricating mixed-matrix polymeric membranes and by surface modification. Therefore, in this study, two different sulfonated polybenzimidazole (SPBI) polymers with the same backbone but different functional groups, namely arylsulfonate PBI (PBI-AS) and propylsulfonate PBI (PBI-PS), are introduced as FO membranes and studied via the molecular dynamics (MD) simulation method. The FO simulation box consists of three distinct regions: a saltwater region, a membrane region, and a pure-water region. The pure-water region is situated at the upper part of the simulation box, while the saltwater region, which contains an aqueous salt solution of Na+ and Cl− ions along with water molecules, occupies the lower part of the simulation box. Specifically, the saltwater region includes 710 water molecules and 24 Na+ and 24 Cl− ions, resulting in a combined concentration of 10 weight percent (wt%). The pure-water region comprises 788 water molecules. Both the saltwater and pure-water regions have a density of 1.0 g/cm³. The membrane region, positioned between the saltwater and pure-water regions, is constructed from three types of polymers: PBI, PBI-AS, and PBI-PS, each consisting of three polymer chains with 30 monomers per chain. The structural and thermophysical properties of the polymers, water molecules, and Na+ and Cl− ions were analyzed using the COMPASS forcefield. All simulations were conducted using the BIOVIA Materials Studio 2020 software. By monitoring the variation in the number of water molecules over the simulation time within the saltwater region, the water permeability of the polymer membranes was calculated and subsequently compared. The results indicated that the SPBI polymers exhibited higher water permeability compared to the PBI polymer. This enhanced permeability can be attributed to the structural and compositional differences between SPBI and PBI polymers, which likely facilitate more efficient water transport through the membrane. Consequently, the adoption of SPBI polymers in the FO process is anticipated to result in significantly improved performance. This improvement could lead to higher water flux rates, better salt rejection, and overall more efficient use of resources in desalination and water purification applications.
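A minimal sketch of the post-processing step described above, estimating a relative water-transport rate from the count of water molecules in the saltwater region over time, is given below; the trajectory counts, sampling times and membrane cross-section are made-up placeholders, not simulation output.

```python
# Illustrative post-processing: net water gain in the saltwater (draw) region vs. time.
import numpy as np

time_ps = np.array([0, 200, 400, 600, 800, 1000])                 # sampling times (ps)
n_water_salt_region = np.array([710, 716, 723, 729, 737, 744])    # hypothetical counts

slope, intercept = np.polyfit(time_ps, n_water_salt_region, 1)    # molecules per ps
membrane_area_nm2 = 4.0                                           # hypothetical xy cross-section
flux = slope / membrane_area_nm2                                  # molecules / (ps * nm^2)
print(f"net water gain ~ {slope:.3f} molecules/ps -> flux ~ {flux:.4f} molecules/(ps*nm^2)")
# Comparing this slope across PBI, PBI-AS and PBI-PS membranes gives a relative permeability ranking.
```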

Keywords: forward osmosis, molecular dynamics simulation, sulfonated polybenzimidazole, water permeability

Procedia PDF Downloads 29
450 Feasibility and Acceptability of Mindfulness-Based Cognitive Therapy in People with Depression and Cardiovascular Disorders: A Feasibility Randomised Controlled Trial

Authors: Modi Alsubaie, Chris Dickens, Barnaby Dunn, Andy Gibson, Obioha Ukoumunned, Alison Evans, Rachael Vicary, Manish Gandhi, Willem Kuyken

Abstract:

Background: Depression co-occurs in 20% of people with cardiovascular disorders, can persist for years, and predicts worse physical health outcomes. While psychosocial treatments have been shown to effectively treat acute depression in those with comorbid cardiovascular disorders, to date there has been no evaluation of approaches aiming to prevent relapse and treat residual depression symptoms in this group. Therefore, the current study aimed to examine the feasibility and acceptability of a randomised controlled trial design evaluating an adapted version of mindfulness-based cognitive therapy (MBCT) designed specifically for people with co-morbid depression and cardiovascular disorders. Methods: A 3-arm feasibility randomised controlled trial was conducted, comparing MBCT adapted for people with cardiovascular disorders plus treatment as usual (TAU), mindfulness-based stress reduction (MBSR) plus TAU, and TAU alone. Participants completed a set of self-report measures of depression severity, anxiety, quality of life, illness perceptions, mindfulness, self-compassion and affect, and had their blood pressure taken immediately before, immediately after, and three months following the intervention. Those in the adapted-MBCT arm additionally underwent a qualitative interview to gather their views about the adapted intervention. Results: 3400 potentially eligible participants were approached when attending an outpatient appointment at a cardiology clinic or via a GP letter following a case note search. 242 (7.1%) were interested in taking part, 59 (1.7%) were screened as being suitable, and 33 (<1%) were eventually randomised to the three groups. The sample was heterogeneous in terms of whether participants reported current depression or a history of depression and the time since the onset of cardiovascular disease (one to 25 years). Of the 11 participants randomised to adapted MBCT, seven completed the full course, levels of home mindfulness practice were high, and positive qualitative feedback about the intervention was given. Twenty-nine of the 33 randomised participants completed all the assessment measures at all three time points. With regard to the primary outcome (depression), five of the seven people who completed the adapted MBCT and three of the five under MBSR showed significant clinical change, while in TAU no one showed any clinical change at the three-month follow-up. Conclusions: The adapted MBCT intervention was feasible and acceptable to participants. However, aspects of the trial design were not feasible. In particular, low recruitment rates were achieved, and there was a high withdrawal rate between screening and randomisation. Moreover, the heterogeneity of the sample was high, meaning the adapted intervention was unlikely to be well tailored to all participants' needs. This suggests that if the decision is made to move to a definitive trial, study recruitment procedures will need to be revised to more successfully recruit a target sample that optimally matches the adapted intervention.

Keywords: mindfulness-based cognitive therapy (MBCT), depression, cardiovascular disorders, feasibility, acceptability

Procedia PDF Downloads 219
449 Voices of the Students From a Fully Inclusive Classroom

Authors: Ashwini Tiwari

Abstract:

Introduction: Inclusive education for all is a multifaceted approach that requires systems thinking and the promotion of a "Culture of Inclusion." This can only be achieved through the collaboration of multiple stakeholders at the community, regional, state, national, and international levels. Researchers have found that effective practices used in inclusive general classrooms are beneficial to all students, including students with disabilities, those who experience challenges academically and socially, and students without disabilities. Moreover, to date, no statistically significant negative effects on the academic performance of students without disabilities educated alongside students with disabilities have been found. Therefore, opposition to inclusive education practices based solely on beliefs about detrimental effects on students without disabilities appears to rest on unfounded perceptions. This qualitative case study examines students' perspectives and beliefs about inclusive education in a middle school in South Texas. More specifically, this study examined students' understanding of how inclusive education practices intersect with the classroom community. The data were collected from students attending fully inclusive classrooms through interviews and focus groups. The findings suggest that peer integration and friendships built during classes are an essential part of schooling for both disabled and non-disabled students. Research Methodology: This qualitative case study used observations and focus group interviews with 12 middle school students attending an inclusive classroom at a public school located in South Texas. The participants of this study include eight females and five males. All the study participants attend a fully inclusive middle school alongside peers with special needs. Five of the students had disabilities. The focus groups and interviews were conducted over an entire academic year, with an average of one focus group and one observation each month. The data were analyzed using the constant comparative method. The data from the focus groups and observations were continuously compared for emerging codes during the data collection process. Codes were further refined and merged. Themes emerged as a result of the interpretation at the end of the data analysis process. Findings and discussion: This study was conducted to examine disabled and non-disabled students' perspectives on the inclusion of disabled students. The study revealed that non-disabled students generally have positive attitudes toward their disabled peers. The students in the study did not perceive inclusion as a special provision; rather, they perceived inclusion as a way of instructional practice. Most of the participants in the study spoke about the multiple benefits of inclusion. They emphasized that peer integration and friendships built during classes are an essential part of their schooling. Students believed that it was part of their responsibility to assist their peers in whatever ways possible. This finding is in line with the literature showing that the personality of children with disabilities is not determined by their disability but rather by their social environment and its interaction with the child. Interactions with peers are one of the most important socio-cultural conditions for the development of children with disabilities.

Keywords: inclusion, special education, k-12 education, student voices

Procedia PDF Downloads 81
448 Lake of Neuchatel: Effect of Increasing Storm Events on Littoral Transport and Coastal Structures

Authors: Charlotte Dreger, Erik Bollaert

Abstract:

This paper presents two environmentally-friendly coastal structures realized on the Lake of Neuchâtel. Both structures reflect current environmental issues of concern on the lake and have been strongly affected by extreme meteorological conditions between their design period and their actual operational period. The Lake of Neuchâtel is one of the biggest Swiss lakes and measures around 38 km in length and 8.2 km in width, for a maximum water depth of 152 m. Its particular topographical alignment, situated between the Swiss Plateau and the Jura mountains, combines strong winds and large fetch values, resulting in significant wave heights during storm events at both the north-east and south-west lake extremities. In addition, due to flooding concerns, lake levels have historically been lowered by several meters during the Jura correction works in the 19th and 20th centuries. Hence, during storm events, continuous erosion of the vulnerable molasse shorelines and sand banks generates frequent and abundant littoral transport from the center of the lake to its extremities. This phenomenon not only causes disturbances of the ecosystem, but also generates numerous problems at natural or man-made infrastructure located along the shorelines, such as reed plants, harbor entrances, canals, etc. A first example is provided at the southwestern extremity, near the city of Yverdon, where an ensemble of 11 small islands, the Iles des Vernes, has been artificially created with a view to enhancing biological conditions and food availability for bird species during their migration, replacing at the same time two larger islands that were affected by a lack of morphodynamics and general vegetalization of their surfaces. The article will present the concept and dimensioning of these islands based on 2D numerical modelling, as well as the realization and follow-up campaigns. In particular, the influence of several major storm events that occurred immediately after the works will be pointed out. Second, a sediment retention dike is discussed at the northeastern extremity, at the entrance of the Canal de la Broye into the lake. This canal is heavily used for navigation and suffers from frequent and significant sedimentation at its outlet. The new coastal structure has been designed to minimize sediment deposits around the outlet of the canal into the lake by retaining the littoral transport during storm events. The article will describe the basic assumptions used to design the dike, as well as the construction works and follow-up campaigns. It will particularly point out the strong influence of changing meteorological conditions on the littoral transport of the Lake of Neuchâtel since the project was designed ten years ago. Not only are the intensity and frequency of storm events increasing, but the prevailing wind directions are also shifting, thereby affecting the efficiency of the coastal structure in retaining the sediments.
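To illustrate why the lake's large fetch matters for wave loading on such structures, the sketch below evaluates a simplified deep-water, fetch-limited wave-height relation of the Shore Protection Manual type; the wind speed, fetch and the formula itself are illustrative assumptions, not part of the study's numerical modelling.

```python
import math

def fetch_limited_hs(wind_speed_ms: float, fetch_m: float, g: float = 9.81) -> float:
    """Significant wave height from a simplified deep-water, fetch-limited relation
    (SPM-type: g*Hs/U^2 = 0.0016 * sqrt(g*F/U^2)); purely illustrative."""
    dimensionless_fetch = g * fetch_m / wind_speed_ms**2
    return 0.0016 * math.sqrt(dimensionless_fetch) * wind_speed_ms**2 / g

# Hypothetical storm: 15 m/s wind blowing along ~30 km of open water towards a lake extremity.
print(f"Hs ~ {fetch_limited_hs(15, 30_000):.2f} m")   # roughly 1.3 m
```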

Keywords: meteorological evolution, sediment transport, lake of Neuchatel, numerical modelling, environmental measures

Procedia PDF Downloads 86
447 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders

Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva

Abstract:

One of the leading indicators of education quality is the level of students' educational achievements. The processes of modernization of the Kazakhstani education system have predetermined the need to improve the national system for assessing the quality of education. The results of assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students' educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in 2022 by the National Testing Centre. The measurement does not have legal consequences either for students or for educational organizations. Students' achievements were measured in three subject areas: reading, mathematics and science literacy. MEAS was held for the first time in April of this year; 105 thousand students from 1436 schools of Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, the goal being to identify which contextual factors affect learning outcomes. The testing was carried out in a computer format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks. The average score was 21 points out of 30, which means 70% of tasks were successfully completed. The total number of test tasks for 9th grade students was 75 questions. The results of ninth graders are comparatively lower, with a success rate of 63%. MEAS did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school. The trend of reducing the gap in these indicators is also noted in recent international studies conducted across the country, in particular PISA for schools in Kazakhstan. However, there is a regional gap in MEAS performance. The difference between the highest- and lowest-scoring regions was 11 percentage points of task success in the 4th grade and 14 percentage points in the 9th grade. The results of the 4th grade students in reading, mathematics, and science literacy are 71.5%, 70%, and 66.9%, respectively. The results of ninth-graders in reading, mathematics, and science literacy are 69.6%, 54%, and 60.8%, respectively. From the surveys, it was revealed that the educational achievements of students are considerably influenced by such factors as the subject competences of teachers, as well as the school climate and the motivation of students. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination of improving the content of curricula and textbooks, internal and external assessment of the educational achievements of students, educational programs of pedagogical specialties, and advanced training courses is required.

Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan

Procedia PDF Downloads 108
446 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps the first of its kind) has been made to develop a risk governance index, which could be used as a proxy for the quality of risk governance structures. The index (normative framework) is based on eleven variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees, and existence/non-existence of a whistle blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the chairperson and CEO duality, which are scored on a dichotomous scale with a score of 3 or 5. Where there is a legal/statutory requirement in respect of the above-mentioned variables and there is non-compliance with that requirement, a score of one is assigned. Although there was no legal requirement, for the larger part of the study period, in the context of the CRO, the risk management committee and the whistle blower policy, a score of 1 was still assigned in the event of their non-existence; recognizing the importance of these variables to the risk governance structure, and the fact that the study focuses on risk governance, their absence was equated to non-compliance with a legal/statutory requirement. Therefore, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the non-financial companies (429) that constitute the S&P CNX 500 index. The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it suggested that fixed effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) witnessed significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in terms of its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances a company is beset with could be the biggest determinants of its risk governance structures. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
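A sketch of how the normative index described above could be assembled from the eleven variables is given below; the variable names and the example firm's scores are illustrative, not the authors' coding scheme or data.

```python
# Illustrative assembly of the risk governance index: nine five-point variables
# plus two dichotomous variables scored 3 or 5, so the index ranges from 15 to 55.

FIVE_POINT = ["board_size", "board_gender_diversity", "proportion_executive_directors",
              "proportion_independent_directors", "chief_risk_officer",
              "risk_management_committee", "mandatory_committees",
              "voluntary_committees", "whistle_blower_policy"]
DICHOTOMOUS = ["chairperson_status", "ceo_duality"]        # scored 3 or 5 only

def risk_governance_index(scores):
    """Sum the eleven variable scores after basic range checks."""
    for v in FIVE_POINT:
        assert 1 <= scores[v] <= 5, f"{v}: five-point variables take scores 1-5"
    for v in DICHOTOMOUS:
        assert scores[v] in (3, 5), f"{v}: dichotomous variables take 3 or 5"
    return sum(scores[v] for v in FIVE_POINT + DICHOTOMOUS)

example_firm = {v: 4 for v in FIVE_POINT}                  # illustrative scores only
example_firm.update({"chairperson_status": 5, "ceo_duality": 3})
print(risk_governance_index(example_firm))                 # 44 for this hypothetical firm
```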

Keywords: corporate governance, ERM, risk governance, risk management

Procedia PDF Downloads 253
445 Raman Spectroscopic Detection of the Diminishing Toxic Effect of Renal Waste Creatinine by Its in vitro Reaction with Drugs N-Acetylcysteine and Taurine

Authors: Debraj Gangopadhyay, Moumita Das, Ranjan K. Singh, Poonam Tandon

Abstract:

Creatinine is a toxic chemical waste generated from muscle metabolism. Abnormally high levels of creatinine in the body fluids indicate possible malfunction or failure of the kidneys. This leads to a condition termed creatinine-induced nephrotoxicity. N-acetylcysteine is an antioxidant drug which is capable of preventing creatinine-induced nephrotoxicity and is helpful in treating renal failure in its early stages. Taurine is another antioxidant drug which serves a similar purpose. The kidneys have a natural defense: whenever reactive oxygen species radicals increase in the human body, the kidneys make an antioxidant shell so that these radicals cannot harm kidney function. Taurine plays a vital role in increasing the power of that shell such that the glomerular filtration rate can remain at its normal level. Thus taurine protects the kidneys against several diseases. However, taurine also has some negative effects on the body, as its chloramine derivative is a weak oxidant by nature. N-acetylcysteine is capable of inhibiting the residual oxidative property of taurine chloramine. Therefore, N-acetylcysteine is given to a patient along with taurine, and this combination is capable of suppressing the negative effect of taurine. Both N-acetylcysteine and taurine being affordable, safe, and widely available medicines, knowledge of the mechanism of their combined effect on creatinine, the favored route of administration, and the proper dose may be highly useful in their use for treating renal patients. Raman spectroscopy is a precise technique for observing the minor structural changes that take place when two or more molecules interact. The possibility of formation of a complex between a drug molecule and an analyte molecule in solution can be explored by analyzing the changes in the Raman spectra. The formation of a stable complex of creatinine with N-acetylcysteine in vitro in aqueous solution has been observed with the help of the Raman spectroscopic technique. From the Raman spectra of mixtures of aqueous solutions of creatinine and N-acetylcysteine in different molar ratios, it is observed that the most stable complex is formed at a 1:1 ratio of creatinine and N-acetylcysteine. Upon drying, the complex obtained is gel-like in appearance and reddish yellow in color. The complex is hygroscopic and has much better water solubility compared to creatinine. This highlights that N-acetylcysteine plays an effective role in reducing the toxic effect of creatinine by forming this water-soluble complex, which can be removed through urine. Since the drug taurine is also known to be useful in reducing nephrotoxicity caused by creatinine, aqueous solutions of taurine, creatinine and N-acetylcysteine were mixed in different molar ratios and were investigated by the Raman spectroscopic technique. It is understood that taurine itself does not undergo complexation with creatinine, as no additional changes are observed in the Raman spectra of creatinine when it is mixed with taurine. However, when creatinine, N-acetylcysteine and taurine are mixed in aqueous solution in a molar ratio of 1:1:3, several changes occurring in the Raman spectra of creatinine suggest the diminishing toxic effect of creatinine in the presence of the antioxidant drugs N-acetylcysteine and taurine.
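As a small worked example of the 1:1:3 molar ratio mentioned above, the sketch below computes the masses needed to prepare such a mixture; the molar masses are standard values, while the chosen amount and implied volume are hypothetical and not taken from the paper.

```python
# Standard molar masses (g/mol); the chosen amount of creatinine is a hypothetical example.
MOLAR_MASS = {"creatinine": 113.12, "N-acetylcysteine": 163.19, "taurine": 125.15}
RATIO = {"creatinine": 1, "N-acetylcysteine": 1, "taurine": 3}   # the 1:1:3 ratio studied

def masses_for_mixture(creatinine_mmol):
    """Mass (mg) of each component for the given amount of creatinine at a 1:1:3 ratio."""
    return {name: creatinine_mmol * RATIO[name] * MOLAR_MASS[name] for name in MOLAR_MASS}

for name, mg in masses_for_mixture(1.0).items():     # e.g. ~10 mM creatinine in 100 mL of water
    print(f"{name}: {mg:.1f} mg")
# roughly: creatinine ~113 mg, N-acetylcysteine ~163 mg, taurine ~375 mg
```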

Keywords: creatinine, creatinine induced nephrotoxicity, N-acetylcysteine, taurine

Procedia PDF Downloads 151
444 Validation of an Educative Manual for Patients with Breast Cancer Submitted to Radiation Therapy

Authors: Flavia Oliveira de A. M. Cruz, Edison Tostes Faria, Paula Elaine D. Reis

Abstract:

When the breast is submitted to radiation therapy (RT), the most common effects are pain, skin changes, mobility restrictions, local sensory alteration, and fatigue. These effects, if not managed properly, may reduce the quality of life of cancer patients and may lead to treatment discontinuation. Therefore, promoting knowledge and guidelines for symptom management remains a high priority for patients and a challenge for health professionals, due to the need to handle side effects in a population with a life-threatening disease. Printed materials are important strategies for supporting educative activities since they help the individual to assimilate and understand the amount of information transmitted. Nurses' behavior can be systematized through the use of an educative manual, which may be effective in providing information regarding the treatment, self-care, and how to control the effects of RT at home. In view of the importance of guaranteeing the validity of the material before its use, the objective of this research was to validate the content and appearance of an educative manual for breast cancer patients undergoing RT. The Theory of Psychometrics was used for the validation process in this descriptive methodological research. A minimum agreement rate (AR) of 80% was required to guarantee the validity of the material. The data were collected from October to December 2017 by means of two assessment tools, constructed in the form of a Likert scale with five levels of understanding. These instruments addressed different aspects of the evaluation, in view of the two different groups of participants: 17 experts in the theme area of the educative manual, and 12 women who had previously received RT to treat breast cancer. The manual was titled 'Orientation Manual: radiation therapy in breast' and was focused on breast cancer patients attended at the Department of Oncology of the Brasília University Hospital (UNACON/HUB). The research project was submitted to the Research Ethics Committee at the School of Health Sciences of the University of Brasília (CAAE: 24592213.1.0000.0030). Only two items of the assessment tool for the experts, one related to the manual's ability to promote behavioral and attitude changes and the other related to the extent of its use for other health services, obtained AR < 80% and were reformulated based on the participants' suggestions and on the literature. All other items were considered appropriate and/or completely appropriate in the three blocks proposed for the experts: objectives - 89%, structure and form - 93%, and relevance - 93%; and good and/or very good in the five blocks of analysis proposed for patients: objectives - 100%, organization - 100%, writing style - 100%, appearance - 100%, and motivation. The appearance and content of the proposed educative manual were thus validated. The educative manual was considered relevant and pertinent and may contribute to the understanding of the therapeutic process by breast cancer patients during RT, as well as support clinical practice through the nursing consultation.
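A minimal sketch of the agreement-rate (AR) check used for validation is given below; the expert ratings are invented, and treating scores of 4 and 5 as "agreement" is an assumption about the cut-off rather than the authors' exact rule.

```python
def agreement_rate(ratings, positive=(4, 5)):
    """Percentage of judges whose rating falls in the 'agreement' categories."""
    return 100.0 * sum(r in positive for r in ratings) / len(ratings)

expert_ratings_item = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 4, 4, 5, 4, 2, 5, 4]  # 17 invented ratings
ar = agreement_rate(expert_ratings_item)
print(f"AR = {ar:.0f}% -> {'validated' if ar >= 80 else 'needs reformulation'}")   # AR = 88% here
```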

Keywords: oncology nursing, nursing care, validation studies, educational technology

Procedia PDF Downloads 128
443 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emissions derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables by the energy production sector, carbon capture and storage (CCS) remains a frontline option to reduce CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which should increase the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further reductions in solvent flow rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters for CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, as well as engineering costs and project and process contingencies. The annual capital cost and operating and maintenance costs were then obtained. A complete energy balance was performed to determine the net power produced in each case. The baseline case consists of a supercritical 500 MWe coal-fired power plant using anthracite as fuel without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40%v/v and 75%v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) was used as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also nearly halved the investment costs required for the ASU in comparison with the oxy-combustion case. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions using new developments in both CO₂ and O₂ separation processes.
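The two cost metrics named above are conventionally defined from the change in LCOE and the change in specific emissions; the sketch below applies those standard definitions to hypothetical plant numbers, which are placeholders rather than the study's results.

```python
# Standard CCS cost metrics: cost of CO2 avoided and cost of CO2 captured.

def co2_avoided_cost(lcoe_capture, lcoe_ref, emitted_capture, emitted_ref):
    """Cost per tonne of CO2 avoided = delta-LCOE / reduction in specific emissions (t/MWh)."""
    return (lcoe_capture - lcoe_ref) / (emitted_ref - emitted_capture)

def co2_captured_cost(lcoe_capture, lcoe_ref, captured_per_mwh):
    """Cost per tonne of CO2 captured = delta-LCOE / specific amount captured (t/MWh)."""
    return (lcoe_capture - lcoe_ref) / captured_per_mwh

# Hypothetical example: reference plant at 60 EUR/MWh emitting 0.80 t/MWh;
# capture plant at 95 EUR/MWh emitting 0.10 t/MWh and capturing 0.85 t/MWh.
print(f"avoided:  {co2_avoided_cost(95, 60, 0.10, 0.80):.1f} EUR/t")   # ~ 50.0
print(f"captured: {co2_captured_cost(95, 60, 0.85):.1f} EUR/t")        # ~ 41.2
```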

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 149
442 Visuospatial Perspective Taking and Theory of Mind in a Clinical Approach: Development of a Task for Adults

Authors: Britt Erni, Aldara Vazquez Fernandez, Roland Maurer

Abstract:

Visuospatial perspective taking (VSPT) is a process that allows us to integrate spatial information from different points of view and to transform the mental images we have of the environment in order to properly orient our movements and anticipate the location of landmarks during navigation. VSPT is also related to egocentric perspective transformations (imagined rotations or translations of one's point of view) and to inferring the visuospatial experiences of another person (e.g., whether and how another person sees objects). This process is deeply related to a wide-ranging capacity called theory of mind (ToM), an essential cognitive function that allows us to regulate our social behaviour by attributing mental representations to individuals in order to make behavioural predictions. VSPT is often considered in the literature as the starting point of the development of theory of mind. VSPT and ToM include several levels of knowledge that have to be assessed by specific tasks. Unfortunately, the lack of tasks assessing these functions in clinical neuropsychology leads to underestimating, in brain-damaged patients, deficits of these functions, which are essential in everyday life to regulate our social behaviour (ToM) and to navigate in known and unknown environments (VSPT). Therefore, this study aims to create and standardize a VSPT task in order to explore the cognitive requirements of VSPT and ToM, and to specify their relationship in healthy adults and thereafter in brain-damaged patients. Two versions of a computerized VSPT task were administered to healthy participants (M = 28.18, SD = 4.8 years). In both versions the environment was a 3D representation of 10 different geometric shapes placed on a circular base. Two sets of eight pictures were generated from this: pictures of the environment with an avatar somewhere on its periphery (locations), and pictures of what the avatar sees from that place (views). Two types of questions were asked: a) identify the location from the view, and b) identify the view from the location. Twenty participants completed version 1 of the task and 20 completed the second version, in which the views were offset by ±15° (i.e., clockwise or counterclockwise) and participants were asked to choose the closest location or the closest view. The preliminary findings revealed that version 1 is significantly easier than version 2 in terms of accuracy (with ceiling scores for version 1). In version 2, participants responded significantly more slowly when they had to infer the avatar's view from the latter's location, probably because they spent more time visually exploring the different views (responses). Furthermore, men performed significantly better than women in version 1 but not in version 2. Most importantly, a sensitive task (version 2) has been created in which participants do not seem to easily and automatically compute what someone is looking at, yet which does not rely heavily on other cognitive functions. The study is being extended with analyses of non-clinical participants with low and high degrees of schizotypy and different socio-educational statuses, as well as a range of older adults, to examine age-related and other differences in VSPT processing.

Keywords: mental transformation, spatial cognition, theory of mind, visuospatial perspective taking

Procedia PDF Downloads 205
441 Atypical Intoxication Due to Fluoxetine Abuse with Symptoms of Amnesia

Authors: Ayse Gul Bilen

Abstract:

Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed antidepressants that are used clinically for the treatment of anxiety disorders, obsessive-compulsive disorder (OCD), panic disorders and eating disorders. The first SSRI, fluoxetine (sold under the brand names Prozac and Sarafem, among others), had an adverse effect profile better than any other available antidepressant when it was introduced because of its selectivity for serotonin receptors. SSRIs have been considered almost free of side effects and have become widely prescribed; however, questions about their safety and tolerability have emerged with continued use. Most SSRI side effects are dose-related and can be attributed to serotonergic effects such as nausea. Continuous use might trigger adverse effects such as hyponatremia, tremor, nausea, weight gain, sleep disturbance and sexual dysfunction. Moderate toxicity can be safely observed in the hospital for 24 hours, and mild cases can be safely discharged from the emergency department if asymptomatic after 6 to 8 hours of observation and, in cases of intentional overdose, once cleared by Psychiatry. Although fluoxetine is relatively safe in terms of overdose, it might still be cardiotoxic and inhibit platelet secretion, aggregation, and plug formation. There have been reported clinical cases of seizures, cardiac conduction abnormalities, and even fatalities associated with fluoxetine ingestions. While the medical literature strongly suggests that most fluoxetine overdoses are benign, emergency physicians need to remain cognizant that intentional, high-dose fluoxetine ingestions may induce seizures and can even be fatal due to cardiac arrhythmia. Our case is a 35-year-old female patient who was brought to the ER with symptoms of confusion, amnesia and loss of orientation to time and place after being found by police wandering the streets in a disoriented state; the police informed 112. On laboratory examination, no pathological findings were noted except sinus tachycardia on the EKG and elevated levels of aspartate transaminase (AST) and alanine transaminase (ALT). Diffusion MRI and computed tomography (CT) of the brain were normal. On physical and sexual examination, no signs of abuse or trauma were found. Test results for narcotics, stimulants and alcohol were negative as well. The presence of dysrhythmia required admission to the intensive care unit (ICU). The patient regained consciousness after 24 hours. It was discovered from her history afterward that she had been taking fluoxetine for post-traumatic stress disorder (PTSD) for 6 months and that she had attempted suicide by taking 3 boxes of fluoxetine following the loss of a parent. She was then transferred to the psychiatric clinic. Our study aims to highlight the need to consider toxicologic drug use, in particular the abuse of selective serotonin reuptake inhibitors (SSRIs), which have been widely prescribed due to presumed safety and tolerability, when diagnosing patients presenting to the emergency room (ER).

Keywords: abuse, amnesia, fluoxetine, intoxication, SSRI

Procedia PDF Downloads 201