Search results for: working ability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7197

447 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal. A complex interplay of the senses is necessary to form a unified percept; to achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes presented to the visual and auditory modalities lead, in the case of successful integration, to the perception of a new phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest (e.g., aberrant within- or between-network functional connectivity) may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, and a Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner, and a seed-to-voxel analysis was realized using the CONN toolbox.
Results: Susceptibility to the McGurk illusion was significantly lower in ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%; U = 160.5, p = 0.022, r = -0.34). When ADHD patients did integrate the phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms; U = 41.0, p < .001, r = -0.56). In functional connectivity, the middle temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI appears to be deficient in ADHD patients for stimuli that require top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
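The between-group comparison above can be sketched in code. Below is a minimal, self-contained Mann-Whitney U implementation with the normal-approximation z and the effect size r = z/√N reported in the abstract; the input data are illustrative, not the study's, and the sketch omits the tie correction a statistics package would apply:

```python
import math

def ranks_with_ties(values):
    """Average (mid) ranks, 1-based, for possibly tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Return (U1, z, r) for two independent samples.

    r = z / sqrt(N) is the rank-biserial-style effect size quoted
    in the abstract. Normal approximation, no tie correction.
    """
    n1, n2 = len(x), len(y)
    r = ranks_with_ties(list(x) + list(y))
    r1 = sum(r[:n1])                     # rank sum of group 1
    u1 = r1 - n1 * (n1 + 1) / 2          # U statistic for group 1
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return u1, z, z / math.sqrt(n1 + n2)
```

In practice one would use a statistics package (e.g. `scipy.stats.mannwhitneyu`) rather than this hand-rolled version; the sketch only makes the effect-size arithmetic explicit.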

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 121
446 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective

Authors: Julia A. Ermakova

Abstract:

In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). 
Therefore, it is imperative to avoid useless arguments and disputes, for they poison our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial for engaging in productive dialogue. The rules of argument emphasize the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and showing respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, by engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or a secular perspective, these values are essential for creating harmonious societies.

Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective

Procedia PDF Downloads 86
445 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net-zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies, and the business of rating organisations on their environmental, social, and governance (ESG) performance has attracted tremendous public interest. ESG data is now commonly integrated into traditional investment analysis and is an important factor in investment decisions. Creating these metrics, however, is inherently challenging, as environmental and social impacts are hard to measure and uniform requirements on ESG reporting are lacking. ESG metrics are often incomplete and inconsistent: they lack fully accepted reporting standards and are often qualitative in nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, it adopts the Y02 patent classification, a cross-sectional tagging scheme fully incorporated in the Cooperative Patent Classification (CPC). The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides a means to examine technologies in the field of climate change mitigation and adaptation across relevant technologies. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents.
Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, its backward citations, and the size of the focal patent family. Third, we conduct our analysis at the firm level, by sector, for a sample of companies from different industries, and compare the derived sustainability performance metrics with the firms' environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is a set of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and relating it to conventional financial performance metrics. We further provide insights into the environmental performance of companies at the sector level. This study has both academic and practical implications. Academically, it contributes to research on eco-innovation and to the literature on innovation and intellectual property (IP). Practically, it has implications for policymakers by deriving meaningful insights into environmental performance from an innovation and IP perspective. Such metrics are also relevant for investors and potentially complement existing ESG data.
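The first two steps of the approach above can be sketched as follows. The record field names (`cpc`, `forward_citations`, `family_size`) and the simple additive strength score are illustrative assumptions, not the paper's actual weighting:

```python
def ccmat_portfolio_metrics(patents):
    """Sketch of a firm-level green-patent metric.

    `patents`: list of dicts, each with a list of CPC codes under
    'cpc' plus 'forward_citations' and 'family_size' counts.
    Step 1: flag patents carrying any Y02 CPC code.
    Step 2: score the flagged sub-portfolio with traditional
    patent-analytics counts (here just summed, for illustration).
    """
    ccmat = [p for p in patents
             if any(code.startswith("Y02") for code in p["cpc"])]
    share = len(ccmat) / len(patents) if patents else 0.0
    strength = sum(p["forward_citations"] + p["family_size"] for p in ccmat)
    return {"ccmat_share": share, "ccmat_strength": strength}
```

Step 3 of the paper, comparing these scores with carbon-emissions and revenue data per sector, would happen downstream of a function like this.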

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 81
444 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis

Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino

Abstract:

A growing body of empirical research from different areas of inquiry suggests that brief contact with a natural environment restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people recover cognitive resources. It assumes that contact with nature allows people to free (and thus recover) voluntary attention resources and so recover from cognitive fatigue. However, it has been suggested that some people may derive more cognitive benefit from exposure to an urban environment. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. This meta-analysis estimates how much more restorative natural environments (forests, parks, boulevards) are perceived to be than urban ones (i.e., the magnitude of the difference in perceived restorativeness). Moreover, given the methodological differences between studies, it examines the potential role of moderator variables such as the participants (students or others), the instrument used (Perceived Restorativeness Scale or other), and the procedure (in the laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). The reference sections of the papers obtained were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison of one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis.
To estimate the average effect size, a random-effects model (restricted maximum-likelihood estimator) was used, because the studies included in the meta-analysis were conducted independently and used different methods in different populations, so no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed-effects model) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming empirically what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared to those commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and the variability across studies was very high (I² [C.I.] = 96.97% [94.61 - 98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the variability between studies. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain this variability. Alternatively, the variability across studies may be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude). Further moderator analyses are in progress.
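The pooled effect and heterogeneity statistics reported above (d, Q, I²) can be illustrated with a small random-effects sketch. The study used an REML estimator; the closed-form DerSimonian-Laird estimator is shown here instead as a simpler stand-in, with made-up inputs:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled effect via the DerSimonian-Laird tau^2.

    Returns (pooled, tau2, Q, I2_percent). A simplified stand-in for
    the REML estimator the paper actually used -- DL has a closed
    form, which keeps the arithmetic visible.
    """
    w = [1.0 / v for v in variances]              # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, q, i2
```

With perfectly homogeneous inputs the estimator collapses to the fixed-effect mean with tau² = 0 and I² = 0, which is a useful sanity check.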

Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments

Procedia PDF Downloads 168
443 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material

Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe

Abstract:

For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products such as hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution, we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA using real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The system's pretreatment steps include a septic tank (2 m2) and vertical down-flow LECA filters (3 m2 each), followed by horizontal subsurface HOSA filters (effective volume 8 m3 each). The overall organic and hydraulic loading rates of both systems are the same; however, the first system is operated under a stable hydraulic loading regime and the second under a variable loading regime that imitates the wastewater production of an average household. Piezometers for water sampling and perforated sample containers for filter material sampling were incorporated inside the filter beds to allow continuous in-situ monitoring. During the first 18 months of operation, the median removal efficiencies (inflow to outflow) of both systems were over 99% for TP, 93% for COD, and 57% for TN.
However, we observed significant differences in the samples collected in different points inside the filter systems. In both systems, we observed development of preferred flow paths and zones with high and low loadings. The filters show formation and a gradual advance of a “dead” zone along the flow path (zone with saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under variable loading regime. The formation of the “dead” zone is accompanied by the growth of organic substances on the filter material particles that evidently inhibit the P removal. Phase analysis of used filter materials using X-ray diffraction method reveals formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate substituted hydroxyapatite phase.
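The inflow-to-outflow removal efficiencies reported above follow a simple calculation over paired concentration samples; a minimal sketch with illustrative numbers (not the Nõo measurements):

```python
def median_removal_efficiency(inflow, outflow):
    """Median of per-sample removal efficiencies, in percent.

    `inflow` and `outflow` are paired concentration series
    (e.g. TP in mg/L) taken at the same sampling events; the
    per-sample efficiency is 100 * (in - out) / in.
    """
    eff = sorted(100 * (i - o) / i for i, o in zip(inflow, outflow))
    n = len(eff)
    mid = n // 2
    return eff[mid] if n % 2 else (eff[mid - 1] + eff[mid]) / 2
```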

Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus

Procedia PDF Downloads 272
442 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas which go beyond words themselves. In this sense it could potentially be used to identify if the hierarchical structures present within the empirical body of literature match those which have been identified in ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names and more general names as classes, which will be the focus of this paper. The next step in the research is focusing on a larger corpus in which specific words can be analysed and then compared with existing ontological structures looking at the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of the current methods in knowledge representation for biological entities, and also shed light on the way that scientific nomenclature is used within the literature.
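The collocated-word-frequency step described above can be sketched as a windowed collocate count. Real lexicographic pipelines would add lemmatisation, POS tagging, and association measures (e.g. log-likelihood), so this is only a minimal illustration:

```python
from collections import Counter

def collocates(tokens, node, window=2):
    """Count words co-occurring with `node` within +/- `window` tokens.

    `tokens` is a tokenised corpus, `node` a lower-cased target word
    (here a scientific name such as 'melpomene'). Every occurrence
    of the node contributes its neighbours to the count, so the
    ambiguous fern/spider contexts are all retained, as the
    approach above requires.
    """
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j].lower()] += 1
    return counts
```

Ranking these counts per node word is what surfaces the separate fern and spider usage clusters behind a single nomenclatural label.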

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 132
441 Autonomic Nervous System and CTRA Gene Expression among Healthy Young Adults in Japan

Authors: Yoshino Murakami, Takeshi Hashimoto, Steve Cole

Abstract:

The autonomic nervous system (ANS), particularly its sympathetic (SNS) and parasympathetic (PNS) branches, plays a vital role in modulating immune function and physiological homeostasis. In recent years, the Conserved Transcriptional Response to Adversity (CTRA) has emerged as a key marker of the body's response to chronic stress. This gene expression profile is characterized by SNS-mediated upregulation of pro-inflammatory genes (such as IL1B and TNF) and downregulation of antiviral response genes (e.g., the IFI and MX families). The CTRA has been observed in individuals exposed to prolonged stressors such as loneliness, social isolation, and bereavement. Some research suggests that PNS activity, as indicated by heart rate variability (HRV), may help counteract the CTRA. However, previous PNS-CTRA studies have focused on Western populations, raising questions about the generalizability of these findings across different cultural and ethnic backgrounds. This study aimed to examine the relationship between HRV and CTRA gene expression in young, healthy adults in Japan. We hypothesized that HRV would be inversely related to CTRA gene expression, similar to the patterns observed in previous Western studies. A total of 49 participants aged 20 to 39 were recruited, and after data exclusions, 26 participants' HRV and CTRA data were analyzed. HRV was measured using an electrocardiogram (ECG), and two time-domain indices were utilized: the root mean square of successive differences (RMSSD) and the standard deviation of NN intervals (SDNN). Blood samples were collected for gene expression analysis, focusing on a standard set of 47 CTRA indicator gene transcripts. Our findings revealed a significant inverse relationship between HRV and CTRA gene expression, with higher HRV correlating with reduced pro-inflammatory gene activity and an increased antiviral response.
These results are consistent with findings from Western populations and demonstrate that the relationship between ANS function and immune response generalizes to an East Asian population. The study highlights the importance of HRV as a biomarker for psychophysiological health, reflecting the body's ability to buffer stress and maintain immune balance. These findings have implications for understanding how physiological systems interact across different cultures and ethnicities. Given the influence of chronic stress in promoting inflammation and disease risk, interventions aimed at improving HRV, such as mindfulness-based practices or physical exercise, could provide significant health benefits. Future research should focus on larger sample sizes and experimental interventions to better understand the causal pathways linking HRV to CTRA gene expression, and determine whether improving HRV may help mitigate the harmful effects of stress on health by reducing inflammation.
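The two time-domain HRV indices named above, RMSSD and SDNN, have standard definitions over a series of NN intervals (in ms); a minimal sketch with illustrative values:

```python
import math

def rmssd(nn):
    """Root mean square of successive NN-interval differences (ms)."""
    diffs = [b - a for a, b in zip(nn, nn[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn(nn):
    """Standard deviation of NN intervals (ms), population form."""
    mean = sum(nn) / len(nn)
    return math.sqrt(sum((x - mean) ** 2 for x in nn) / len(nn))
```

RMSSD tracks beat-to-beat (largely parasympathetic) variability, while SDNN reflects overall variability over the recording, which is why the study reports both.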

Keywords: autonomic nervous activity, neuroendocrine system, inflammation, Japan

Procedia PDF Downloads 12
440 Effects of Culture Conditions on the Adhesion of Yeast Candida spp. and Pichia spp. to Stainless Steel with Different Polishing and Their Control

Authors: Ružica Tomičić, Zorica Tomičić, Peter Raspor

Abstract:

Abundant growth of unwanted yeasts in food processing plants can lead to quality and safety problems with significant financial losses. Candida and Pichia are the genera mainly involved in the spoilage of products in the food and beverage industry. These contaminating microorganisms can form biofilms on food contact surfaces that are difficult to eradicate, increasing the probability of microbial survival and further dissemination during food processing. It is well known that biofilms are more resistant to antimicrobial agents than planktonic cells, which makes them difficult to eliminate. Among the strategies used to overcome resistance to antifungal drugs and preservatives, the use of natural substances such as plant extracts has shown particular promise, and many natural substances have been found to exhibit antifungal properties. This study aimed to investigate the impact of the growth medium (Malt Extract broth (MEB) or Yeast Peptone Dextrose (YPD) broth) and temperature (7°C, 37°C, and 43°C for Candida strains; 7°C, 27°C, and 32°C for Pichia strains) on the adhesion of Candida spp. and Pichia spp. to stainless steel (AISI 304) discs with different degrees of surface roughness (Ra = 25.20 - 961.9 nm), a material commonly used in the food industry. We also evaluated the antifungal and anti-adhesion activity of plant extracts of Humulus lupulus, Alpinia katsumadai, and Evodia rutaecarpa against C. albicans, C. glabrata, and P. membranifaciens and investigated whether these plant extracts can interfere with biofilm formation. Adhesion was assessed by the crystal violet staining method, while the broth microdilution method CLSI M27-A3 was used to determine the minimum inhibitory concentration (MIC) of the plant extracts. Our results indicated that the nutrient content of the medium significantly influenced the amount of adhered cells of the tested yeasts. The growth medium that resulted in higher adhesion of C. albicans and C. glabrata was MEB, while for C.
parapsilosis and C. krusei it was YPD. In the case of P. pijperi and P. membranifaciens, YPD broth was more effective in promoting adhesion than MEB. Regarding the effect of temperature, the C. albicans strain adhered to stainless steel surfaces at a significantly higher level at 43°C, whereas C. glabrata, C. parapsilosis, and C. krusei showed a different behavior, with significantly higher adhesion at 37°C than at 7°C or 43°C. Further, the adherence ability of the Pichia strains was highest at 27°C. All plant extracts exerted significant antifungal effects, with MICs ranging from 100 to 400 μg/mL. Biofilms of C. glabrata were more resistant to the plant extracts than those of C. albicans. However, extracts of A. katsumadai and E. rutaecarpa promoted the growth and development of preformed biofilms of P. membranifaciens. Thus, knowledge of how these microorganisms adhere and which factors affect this phenomenon is of great importance in order to avoid their colonization of food contact surfaces.

Keywords: adhesion, Candida spp., Pichia spp., plant extracts

Procedia PDF Downloads 188
439 Consensus, Federalism and Inter-State Water Disputes in India

Authors: Amrisha Pandey

Abstract:

The Indian Constitution distributes the powers to govern and legislate between the centre and the state governments based on the lists of subject matter provided in the Seventh Schedule. Under that schedule, the states are authorized to regulate the water resources within their territory, while the centre/union government is authorized to regulate inter-state water disputes. The powers entrusted to the union government mainly deal with the sharing of river water that flows through the territory of two or more states. For that purpose, Article 262 of the Constitution of India empowers Parliament to resolve any such inter-state river water dispute. Parliament has accordingly enacted the Inter-State River Water Disputes Act, which allows the central/union government to constitute tribunals for the adjudication of such disputes and expressly bars the jurisdiction of the judiciary in these matters. This arrangement was intended to resolve disputes by political or diplomatic means, without deliberately interfering with the sovereign power of the states to govern the water resource. The situation in the present context is complicated and sensitive due to changing climatic conditions, increasing demand for the limited resource, and an advanced understanding of the freshwater cycle that is missing from the existing legal regime. The obsolete legal and political tools, the existing legislative mechanism, and the institutional units do not seem to accommodate the rising challenge of regulating the resource, resulting in the growing politicization of inter-state water disputes. Against this background, this paper will investigate inter-state river water disputes in India and critically analyze the ability of the existing constitutional and institutional units involved in the task.
Moreover, the competence of the tribunal as the adjudicating body in the present context will be analyzed using a long-running inter-state water dispute in India, the Cauvery Water Dispute, as the case study. The paper adopts a doctrinal research methodology. The disputes will also be investigated through the lens of sovereignty, which is accorded to the states by the theory of 'separation of powers' and the grant of internal sovereignty to the federal units of governance. The issue of sovereignty is discussed in two ways: 1) as the responsibility of the state to govern the resource; and 2) as the obligation of the state to govern the resource, arising from the state's sovereign power. Furthermore, a duality of sovereign power coexists in this analysis: the overall sovereign authority of the nation-state, and the internal sovereignty of the states as its federal units of governance. As a result, this investigation will propose institutional, legislative, and judicial reforms. Additionally, it will suggest certain amendments to the existing constitutional provisions in order to avoid contradictions in their scope and meaning in light of advanced hydrological understanding.

Keywords: constitution of India, federalism, inter-state river water dispute tribunal of India, sovereignty

Procedia PDF Downloads 148
438 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome; it is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data on all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for the conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data on 100 adult trauma patients were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data on 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
Overall, the prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas conventional coagulation tests give highly specific results.
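The cut-off derivation described in the methodology (receiver operating characteristic analysis on conventional coagulation assays) can be sketched numerically. The following is a minimal illustration on synthetic INR values, not the study's data; the distributions, sample sizes and the use of Youden's J to pick the threshold are all assumptions:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical INR values: coagulopathic patients tend toward higher INR
inr = np.concatenate([rng.normal(1.1, 0.1, 200),   # non-coagulopathic
                      rng.normal(1.4, 0.2, 100)])  # coagulopathic
label = np.concatenate([np.zeros(200), np.ones(100)])

fpr, tpr, thresholds = roc_curve(label, inr)
j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(j)]
print(f"Optimal INR cut-off: {cutoff:.2f}")
```

The same procedure would be repeated per assay (PT, aPTT) to obtain the kind of definition reported above.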

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 174
437 Therapeutic Potential of GSTM2-2 C-Terminal Domain and Its Mutants, F157A and Y160A on the Treatment of Cardiac Arrhythmias: Effect on Ca2+ Transients in Neonatal Ventricular Cardiomyocytes

Authors: R. P. Hewawasam, A. F. Dulhunty

Abstract:

The ryanodine receptor (RyR) is an intracellular ion channel that releases Ca2+ from the sarcoplasmic reticulum and is essential for excitation-contraction coupling and contraction in striated muscle. Human muscle-specific glutathione transferase M2-2 (GSTM2-2) is a highly specific inhibitor of cardiac ryanodine receptor (RyR2) activity. Single channel lipid bilayer studies and Ca2+ release assays performed using the C-terminal half of GSTM2-2 and its mutants F157A and Y160A confirmed the ability of the C-terminal domain of GSTM2-2 to specifically inhibit cardiac ryanodine receptor activity. The objective of the present study is to determine the effect of the C-terminal domain of GSTM2-2 (GSTM2-2C) and the mutants F157A and Y160A on the Ca2+ transients of neonatal ventricular cardiomyocytes. Primary cardiomyocytes were cultured from neonatal rats, treated with GSTM2-2C or one of the two mutants at 15 µM, and incubated for 2 hours. The cells were then loaded with Fluo-4 AM, a fluorescent Ca2+ indicator, and the field-stimulated (1 Hz, 3 V, 2 ms) cells were excited using the 488 nm argon laser. Contractility of the cells was measured, and the Ca2+ transients in the stained cells were imaged using a Leica SP5 confocal microscope. The peak amplitude of the Ca2+ transient, the rise time and the decay time from the peak were measured for each transient. In contrast to GSTM2-2C, which significantly reduced the % shortening (42.8%) in the field-stimulated cells, F157A and Y160A failed to reduce the % shortening. Analysis revealed that the average amplitude of the Ca2+ transient was significantly reduced (P<0.001) in cells treated with the wild-type GSTM2-2C compared to that of untreated cells, whereas cells treated with the mutants F157A and Y160A showed no significant change in the Ca2+ transient compared to the control. 
A significant increase in the rise time (P<0.001) and a significant reduction in the decay time (P<0.001) were observed in cardiomyocytes treated with GSTM2-2C compared to the control, but not with F157A and Y160A. These results are consistent with the observation that GSTM2-2C significantly reduced the Ca2+ release from the cardiac SR, whereas the mutants F157A and Y160A showed no effect compared to the control. GSTM2-2C has an isoform-specific effect on cardiac ryanodine receptor activity and inhibits RyR2 channel activity only during diastole. Selective inhibition of RyR2 by GSTM2-2C has significant clinical potential in the treatment of cardiac arrhythmias and heart failure. Since the GSTM2-2 C-terminal construct has no GST enzyme activity, its introduction into the cardiomyocyte would not be expected to exert unwanted enzymatic side effects. The present study further confirms that GSTM2-2C is capable of decreasing the Ca2+ release from the cardiac SR during diastole. These results raise the future possibility of using GSTM2-2C as a template for therapeutics that can depress RyR2 function when the channel is hyperactive in cardiac arrhythmias and heart failure.
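The transient metrics measured above (peak amplitude, rise time, decay time) can be extracted programmatically from a fluorescence trace. The sketch below uses a synthetic transient and illustrative definitions (10-90% rise, peak-to-1/e decay); the time constants and the study's actual analysis pipeline are assumptions:

```python
import numpy as np

# Synthetic Ca2+ transient: fast exponential rise, slower exponential decay
# (time constants and units are illustrative, not the study's data)
t = np.linspace(0.0, 1.0, 1000)                      # time, s
f = (1 - np.exp(-t / 0.02)) * np.exp(-t / 0.2)       # normalized fluorescence

peak_idx = int(np.argmax(f))
amplitude = f[peak_idx]

# Rise time: 10% -> 90% of peak on the (monotonic) rising phase
rising = f[:peak_idx]
rise = t[np.searchsorted(rising, 0.9 * amplitude)] - \
       t[np.searchsorted(rising, 0.1 * amplitude)]

# Decay time: peak to 1/e of peak
decay_idx = peak_idx + int(np.argmax(f[peak_idx:] <= amplitude / np.e))
decay = t[decay_idx] - t[peak_idx]
print(f"amplitude={amplitude:.2f}, rise={rise*1e3:.0f} ms, decay={decay*1e3:.0f} ms")
```

A slowed rise and shortened decay of this kind are the signatures GSTM2-2C produced in the treated cells.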

Keywords: arrhythmia, cardiac muscle, cardiac ryanodine receptor, GSTM2-2

Procedia PDF Downloads 278
436 Temperature Distribution Inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposition Angles

Authors: Slawomir Wnuk

Abstract:

Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the installed capacity of on-grid and off-grid solar photovoltaic systems reached 760 GWDC, an increase of 139 GWDC over the previous year. However, the photovoltaic solar cells used for primary conversion of solar energy into electrical energy exhibit significant drawbacks. The fundamental downside is unstable and low conversion efficiency, negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have proposed varied ideas. One promising technological solution is the PV-MTEG multilayer hybrid system, which combines the advantages of both photovoltaic cells and thermoelectric generators. A series of experiments was performed at the Glasgow Caledonian University laboratory to investigate such a system in operation. In the experiments, the Sol3A series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were utilised for measurements. The proposed two-layer hybrid system simulation model was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with a concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the temperature and generated voltage measured and recorded for each exposure angle and load combination. It was found that increasing the exposure angle of the PV-MTEG structure to the irradiation source decreases the temperature gradient ΔT between the system layers and reduces overall system heating. The temperature gradient's reduction negatively influences the voltage generation process. 
The experiments showed that for exposure angles in the range from 0° to 45°, the 'generated voltage - exposure angle' dependence closely follows a linear characteristic. It was also found that the voltage generated by MTEG structures working with the determined optimal load drops by approximately 0.82% per 1° of exposure angle increase. This voltage drop also occurs at higher applied loads, becoming steeper as the load increases beyond the optimal value, although the difference is not significant. Despite the linear character of the voltage-angle dependence generated by the MTEG, the temperature reduction between the system structure layers and at the tested points on its surface was not linear. In conclusion, the PV-MTEG exposure angle appears to be an important parameter affecting the efficiency of energy generation by the thermo-electrical generators incorporated inside these hybrid structures. The research revealed great potential in the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results appear to provide a valuable contribution to the development and technological design process for large energy conversion systems utilising similar structural solutions.
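The reported linear 'voltage versus exposure angle' trend can be written as a one-line model. Only the approximately 0.82% per degree slope and the 0°-45° validity range come from the abstract; the normal-incidence voltage v0 below is a hypothetical placeholder:

```python
# Linear sketch of the reported trend. v0 is a hypothetical voltage at
# normal incidence (0°); only the ~0.82%/degree slope is from the study.
def mteg_voltage(angle_deg, v0=1.0, drop_per_deg=0.0082):
    """Generated voltage at a given exposure angle (valid roughly 0°-45°)."""
    return v0 * (1.0 - drop_per_deg * angle_deg)

for angle in (0, 15, 30, 45):
    print(f"{angle:2d}°  ->  {mteg_voltage(angle):.3f} * v0")
```

At 45° the model predicts roughly 63% of the normal-incidence voltage, matching the magnitude of the drop described above.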

Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy

Procedia PDF Downloads 86
435 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models, which relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. 
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important when a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, rupture distance (Rrup) and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
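The ranking mechanism described above can be sketched with scikit-learn's LASSO path: predictors that enter the model (acquire a non-zero coefficient) at stronger penalties rank higher. The data below are synthetic stand-ins for the seismological predictors, not the NGA recordings, and the coefficients are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
# Synthetic predictors standing in for e.g. magnitude, Rrup, Vs30;
# the first is made most informative, the last least informative.
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * X[:, 2] \
    + rng.normal(scale=0.5, size=n)

Xs = StandardScaler().fit_transform(X)
alphas, coefs, _ = lasso_path(Xs, y)       # path from strong to weak penalty
# Index along the path at which each coefficient first becomes non-zero
entry_order = np.argmax(coefs != 0, axis=1)
ranking = np.argsort(entry_order)          # earliest entry = most important
print("predictor ranking (most to least important):", ranking)
```

Models of increasing complexity can then be built from the top one, two, ... predictors in this ranking, as the abstract describes.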

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 326
434 Retrospective Assessment of the Safety and Efficacy of Percutaneous Microwave Ablation in the Management of Hepatic Lesions

Authors: Suang K. Lau, Ismail Goolam, Rafid Al-Asady

Abstract:

Background: The majority of patients with hepatocellular carcinoma (HCC) are not suitable for curative treatment, in the form of surgical resection or transplantation, due to tumour extent and underlying liver dysfunction. In these non-resectable cases, a variety of non-surgical therapies are available, including microwave ablation (MWA), which has gained increasing popularity due to its low morbidity, low reported complication rate, and the ability to perform multiple ablations simultaneously. Objective: The aim of this study was to evaluate the validity of MWA as a viable treatment option in the management of HCC and hepatic metastatic disease by assessing its efficacy and complication rate at a tertiary hospital situated in Westmead (Australia). Methods: A retrospective observational study was performed on patients who underwent MWA between 1/1/2017 and 31/12/2018 at Westmead Hospital, NSW, Australia. Outcome measures, including residual disease, recurrence rates, and major and minor complication rates, were retrospectively analysed over a 12-month period following MWA treatment. Excluded were patients whose lesions were treated for residual or recurrent disease from treatment occurring prior to the study window (11 patients) and patients lost to follow-up (2 patients). Results: Following treatment of 106 new hepatic lesions, the complete response (CR) rate was 86% (91/106) at 12 months of follow-up. Ten patients had residual disease on post-treatment follow-up imaging, corresponding to an incomplete response (ICR) rate of 9.4% (10/106). The local recurrence rate (LRR) was 4.6% (5/106) over a follow-up period of up to 12 months. The minor complication rate was 9.4% (10/106), including asymptomatic pneumothorax (n=2), asymptomatic pleural effusions (n=2), right lower lobe pneumonia (n=3), pain requiring admission (n=1), hypotension (n=1), cellulitis (n=1) and intraparenchymal hematoma (n=1). 
One major complication was reported: a pleuro-peritoneal fistula causing recurrent large pleural effusion and necessitating repeated thoracocentesis (n=1). There was no statistically significant association between tumour size, location or ablation factors and the risk of recurrence or residual disease. A subset analysis identified 6 segment VIII lesions that were treated via a trans-pleural approach. This cohort demonstrated an overall complication rate of 33% (2/6), including 1 minor complication of asymptomatic pneumothorax and 1 major complication of pleuro-peritoneal fistula. Conclusions: Microwave ablation therapy is an effective and safe treatment option in cases of non-resectable hepatocellular carcinoma and liver metastases, with good local tumour control and low complication rates. A trans-pleural approach for high segment VIII lesions is associated with a higher complication rate and warrants greater caution.

Keywords: hepatocellular carcinoma, liver metastases, microwave ablation, trans-pleural approach

Procedia PDF Downloads 132
433 Engineered Control of Bacterial Cell-to-Cell Signaling Using Cyclodextrin

Authors: Yuriko Takayama, Norihiro Kato

Abstract:

Quorum sensing (QS) is a cell-to-cell communication system that bacteria use to regulate the expression of target genes. In gram-negative bacteria, QS activation is controlled by an increase in the concentration of N-acylhomoserine lactone (AHL), which can diffuse in and out of the cell. Effective control of QS is expected to prevent virulence factor production in infectious pathogens, biofilm formation, and antibiotic production, because various cell functions in gram-negative bacteria are controlled by AHL-mediated QS. In this research, we applied cyclodextrins (CDs) as artificial hosts for the AHL signal to reduce the AHL concentration in the culture broth below its threshold for QS activation. The AHL-receptor complex induced at high AHL concentration activates transcription of the QS-target genes; accordingly, artificial reduction of the AHL concentration is an effective strategy to inhibit QS. The hydrophobic cavity of the CD can interact with the acyl chain of the AHL through hydrophobic interaction in aqueous media. We studied N-hexanoylhomoserine lactone (C6HSL)-mediated QS in Serratia marcescens, in which accumulation of C6HSL is responsible for regulating expression of the pig cluster. Inhibitory effects of added CDs on QS were demonstrated by determining the amount of prodigiosin inside cells after reaching stationary phase, because prodigiosin production depends on C6HSL-mediated QS. By adding approximately 6 wt% hydroxypropyl-β-CD (HP-β-CD) to Luria-Bertani (LB) medium prior to inoculation of S. marcescens AS-1, the intracellularly accumulated prodigiosin was drastically reduced to 7-10%, as determined after extraction of prodigiosin in acidified ethanol. The AHL retention ability of HP-β-CD was also demonstrated by a Chromobacterium violaceum CV026 bioassay. The CV026 strain is an AHL-synthase-defective mutant that activates QS solely upon addition of AHLs from outside the cells. 
A purple pigment, violacein, is induced by activation of the AHL-mediated QS. We demonstrated that violacein production was effectively suppressed when the C6HSL standard solution was spotted on an LB agar plate dispersing CV026 cells and HP-β-CD. Physico-chemical analysis was performed to study the affinity between the immobilized CD and added C6HSL using a quartz crystal microbalance (QCM) sensor. A COOH-terminated self-assembled monolayer was prepared on the gold electrode of a 27-MHz AT-cut quartz crystal, and mono(6-deoxy-6-N,N-diethylamino)-β-CD was immobilized on the electrode using water-soluble carbodiimide. The C6HSL interaction with the β-CD cavity was studied by injecting the C6HSL solution into a cup-type sensor cell filled with buffer solution. A decrement of the resonant frequency (ΔFs) clearly showed effective C6HSL complexation with the immobilized β-CD, and the stability constant of the complex was on the order of 10² M⁻¹. The CD has high potential for engineered control of QS because it is safe for human use.
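Extracting a 1:1 stability constant from QCM-type titration data is commonly done by fitting a Langmuir binding isotherm to the frequency shift versus guest concentration. The sketch below uses fabricated frequency-shift data (the concentrations, shifts and the assumed K = 300 M⁻¹ are all invented) and illustrates only the fitting step, not the authors' actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, dF_max, K):
    """Langmuir isotherm: frequency shift = dF_max * K*c / (1 + K*c)."""
    return dF_max * K * c / (1.0 + K * c)

# Hypothetical titration data: guest concentration (M) vs shift (Hz),
# generated from K = 300 M^-1 with a little added noise
c = np.array([1e-4, 5e-4, 1e-3, 5e-3, 1e-2])
dF = langmuir(c, 30.0, 300.0) + np.array([0.2, -0.3, 0.1, 0.3, -0.2])

popt, _ = curve_fit(langmuir, c, dF, p0=[20.0, 100.0])
print(f"fitted K = {popt[1]:.0f} M^-1")   # order of 10^2 M^-1
```

A constant on the order of 10² M⁻¹, as reported above, corresponds to fairly weak but measurable host-guest binding.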

Keywords: acylhomoserine lactone, cyclodextrin, intracellular signaling, quorum sensing

Procedia PDF Downloads 229
432 Differentiating Third Instar Larvae of Three Species of Flies (Family: Sarcophagidae) of Potential Forensic Importance in Jamaica, Using Morphological Characteristics

Authors: Rochelle Daley, Eric Garraway, Catherine Murphy

Abstract:

Crime, including a high number of unsolved violent crimes, is a major problem in Jamaica. The introduction of forensic entomology in criminal investigations has the potential to reduce the number of unsolved violent crimes through the estimation of the post-mortem interval (PMI), or time since death. Though it has great potential, forensic entomology requires data from insects specific to a geographical location to be credibly applied in legal investigations. It is a relatively new area of study in the Caribbean, with multiple pioneer research opportunities. Of critical importance in forensic entomology is the ability to identify the species of interest. Larvae are commonly collected at crime scenes, and a rapid means of identification is crucial; moreover, a low-cost method is critical in countries with a limited budget for crime fighting. Sarcophagids are among the most important colonisers of a carcass; however, they are difficult to distinguish morphologically due to their similarities, and there is a lack of research on the larvae of this family. This research contributes to that, having identified the larvae of three species from the family Sarcophagidae: Peckia nicasia, Peckia chrysostoma and Blaesoxipha plinthopyga, important agents in flesh decomposition. Adults of Sarcophagidae are also difficult to differentiate, often requiring study of the genitalia; the use of larvae in species identification is important in such cases. Adult sarcophagids were attracted using bottle traps baited with pig liver. These adults larviposited, the larvae were collected, and colonies (generations 2 and 3) were reared at room temperature for morphological work (n=50). The posterior ends of the larvae from segments 9 or 10 were removed and mounted posterior end upwards for study using a light microscope at magnification X200 (posterior cavity and intersegmental spine bands) and X640 (anterior and posterior spiracles). 
The remaining sections of the larvae were cleared in 10% KOH, and the cephalopharyngeal skeleton was dissected out and measured at different points. The cephalopharyngeal skeletons show observable differences in the shapes and sizes of the mouth hooks as well as in the length of the ventral cornua. The most notable differences between species are in the general shape of the anal segments and the shape of the posterior spiracles. Intersegmental spine bands of these larvae become less pigmented and less visible as the larvae change instars; spine bands, along with the anterior spiracle, are therefore not recommended as features for species distinction. Larvae can potentially be used to distinguish sarcophagids to the species level, given the observable differences in the anal segments and the cephalopharyngeal skeletons. However, this method of identification should be tested by comparing these morphological features with those of other Jamaican sarcophagids to further support this conclusion.

Keywords: 3rd instar larval morphology, forensic entomology, Jamaica, Sarcophagidae

Procedia PDF Downloads 142
431 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent

Authors: Mohammad Anvari, Helen S. Joyner (Melito)

Abstract:

Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize camelina byproduct (defatted meal) after oil extraction would be a pivotal factor for promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in the polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential application in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely underexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on the material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal at different conditions to obtain insight on the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined under different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear thinning (pseudoplastic) behavior, which is typical of polymer systems. 
The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends: temperature had little effect on the viscoelastic moduli, but the moduli were strongly affected by concentration, with samples exhibiting concentrated-solution behavior at low concentrations (1-2% w/v) and weak-gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for the design and modeling of liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation, and can alter the solubility, stability, and ultimately the functionality of materials as their environment changes. In this study, the zeta potential value significantly decreased from 0.0 to -62.5 mV as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study show that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
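Shear-thinning behaviour of the kind reported here is often summarized with the Ostwald-de Waele power-law model, η = K·γ̇^(n−1), fitted in log-log space; n < 1 indicates pseudoplastic flow. The data below are synthetic, with an assumed flow index n = 0.4 and consistency K = 2 Pa·sⁿ, not the camelina measurements:

```python
import numpy as np

# Synthetic shear-thinning data following the Ostwald-de Waele model
# eta = K * gamma_dot**(n - 1); the chosen K and n are illustrative.
gamma_dot = np.logspace(-1, 2, 20)            # shear rate, 1/s
eta = 2.0 * gamma_dot ** (0.4 - 1.0)          # apparent viscosity, Pa.s

# The model is linear in log-log space: ln(eta) = ln(K) + (n-1)*ln(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n, K = slope + 1.0, np.exp(intercept)
print(f"n = {n:.2f} (n < 1 -> shear thinning), K = {K:.2f} Pa.s^n")
```

Fitting each concentration separately would show how the flow index and consistency evolve from concentrated-solution to weak-gel behaviour.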

Keywords: Camelina meal, polysaccharide, rheology, zeta potential

Procedia PDF Downloads 242
430 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs

Authors: Regina A. Tayong, Reza Barati

Abstract:

A stable, fast and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration (on the yield stress of the filter cake and the broken-gel viscosity), varying polymer concentration/yield stress along the fracture face, fracture conductivity, fracture length, capillary pressure changes and formation damage on fracturing fluid cleanup in tight gas reservoirs. The model has been validated against field data reported in the literature for the same reservoir. A 2-D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the clean-up model by distributing 200 bbls of water around the fracture. A 2-D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been adequately reported to permit easy replication of the results. The effect of increasing capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5-15 gal/Mgal affected the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid and resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant. 
Several correlations have been developed relating the pressure distribution and polymer concentration to distance along the fracture face, and the average polymer concentration to injection time. The gradient in the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration, and the rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations, and (ii) simulate increasing breaker activity on yield stress and broken-gel viscosity and the effect of (i) and (ii) on cumulative gas production within reasonable computational time.
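The material-balance idea behind the polymer concentration calculation (initial concentration, injected volume, and leak-off volume) can be sketched in a few lines. The formula and every number below are illustrative assumptions, not the authors' exact equation: as fluid leaks off to the formation, the retained polymer mass occupies a smaller fluid volume, so the average concentration in the fracture rises.

```python
# Sketch of a polymer mass balance after leak-off (illustrative only):
# polymer mass c0 * V_injected stays behind while V_lost of carrier fluid
# leaks off, so the average concentration rises.
def polymer_concentration(c0, v_injected, v_lost):
    """Average polymer concentration in the fracture after leak-off."""
    return c0 * v_injected / (v_injected - v_lost)

c0 = 200.0  # lb/Mgal guar, initial concentration (value from the abstract)
for v_lost in (0.0, 50.0, 100.0):
    c = polymer_concentration(c0, 200.0, v_lost)
    print(f"fluid lost {v_lost:5.1f} bbl -> {c:6.1f} lb/Mgal")
```

Feeding such a concentration profile into a yield-stress correlation is what allows the yield stress to vary along the fracture face instead of being held constant.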

Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation

Procedia PDF Downloads 128
429 The Importance of Value Added Services Provided by Science and Technology Parks to Boost Entrepreneurship Ecosystem in Turkey

Authors: Faruk Inaltekin, Imran Gurakan

Abstract:

This paper discusses the importance of value-added services provided by Science and Technology Parks for entrepreneurship development in Turkey. Entrepreneurship is a vital subject for all countries: it not only fosters economic development but also promotes innovation at local and international levels. To foster a high-tech entrepreneurship ecosystem, the technopark (Science and Technology Park, STP) concept was initiated with the establishment of Silicon Valley in the 1950s. The success and rise of Silicon Valley led to the spread of technopark activities, and developed economies have been setting up projects to plan and build STPs since the 1960s and 1970s. In Turkey, to promote the establishment of STPs, the necessary legislation, the Technology Development Zones Law (No. 4691), was enacted by the Ministry of Science, Industry, and Technology in 2001 and revised in 2016 to provide more support. STPs' basic aim is to provide customers with high-quality office space and various 'value-added services' such as business development, network connections, cooperation programs, investor/customer meetings and internationalization services. To this end, STPs should help startups deal with difficulties in the early stages and support mature companies' export activities in foreign markets. STPs should support the production, commercialization and, more significantly, internationalization of technology-intensive business and foster the growth of companies. Within these value-added services, internationalization is currently a very popular subject worldwide, and most STPs design clusters or accelerator programs in order to support their companies in foreign market penetration. If startups are not ready for international competition, STPs should help them get ready with training and mentoring sessions. These sessions should take a goal-based approach to working with companies, as each company has different needs and goals. 
Therefore, the definition of 'success' varies for each company, and it is very important to create customized value-added services that meet the needs of startups. Beyond local support, STPs should also be able to support their startups in foreign markets; organizing a well-defined international accelerator program plays an important role in this mission. Turkey is strategically placed between key markets in Europe, Russia, Central Asia and the Middle East, and its population is young and well educated, so both government agencies and the private sector endeavor to foster and encourage the entrepreneurship ecosystem with many supports. In sum, the task of technoparks offering these and similar value-added services is very important for developing an entrepreneurship ecosystem. The priority of all value-added services is to identify the commercialization and growth obstacles faced by entrepreneurs and to remove them through one-to-one customized services. Also, in order to have a healthy startup ecosystem and create sustainable entrepreneurship, stakeholders (technoparks, incubators, accelerators, investors, universities, governmental organizations, etc.) should fulfill their roles and duties and collaborate with each other. STPs play an important role as a bridge between these stakeholders and entrepreneurs, and should continuously benchmark and renew the services they offer to help start-ups survive, develop their business and benefit from these stakeholders.

Keywords: accelerator, cluster, entrepreneurship, startup, technopark, value added services

Procedia PDF Downloads 141
428 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labor is defined as the onset of regular uterine contractions, dilation and cervical effacement between 23 and 36 weeks of gestation. To the authors' best knowledge, the factors that determine the onset of birth are not yet completely defined; in particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model that predicts the factors affecting premature delivery, based on potential risk factors including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed on 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries: the Chi-square test was applied to qualitative variables and the t-test to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested in search of a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package. 
Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89 and precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, the depth of sleep or the lighting of the room. One of the predicting rules obtained by C5.0 is: IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques; the method maximizing performance is the one selected. The model shows the influence of variables related to sleep habits on preterm prediction.
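The single C5.0 rule quoted above can be sketched as a plain predicate. This is a hedged illustration: the threshold and variable meanings come from the abstract, while the function name, argument names and the unit for dilation are assumptions.

```python
# Hypothetical sketch of the single C5.0 rule quoted in the abstract:
# IF dilation <= 2.95 AND electronic device use before sleep on weekdays
# AND a change in sleeping habits, THEN predict preterm birth.
def rule_predicts_preterm(dilation: float,
                          devices_before_sleep_weekdays: bool,
                          changed_sleep_habits: bool) -> bool:
    """Return True when this one rule fires; a full C5.0 model
    combines many such rules rather than relying on a single one."""
    return (dilation <= 2.95
            and devices_before_sleep_weekdays
            and changed_sleep_habits)

print(rule_predicts_preterm(2.5, True, True))   # True: all conditions met
print(rule_predicts_preterm(3.2, True, True))   # False: dilation above threshold
```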

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 143
427 The Effect of Teachers' Personal Values on the Perceptions of the Effective Principal and Student in School

Authors: Alexander Zibenberg, Rima’a Da’As

Abstract:

Individuals are naturally inclined to classify people as leaders and followers, utilizing cognitive structures or prototypes that specify the traits and abilities characterizing the effective leader (implicit leadership theories) and the effective follower in an organization (implicit followership theories). The present study offers insights into how teachers' personal values (self-enhancement and self-transcendence) explain their preferences regarding styles of the effective leader (i.e., principal) and their assumptions about the traits and behaviors that characterize effective followers (i.e., students). Beyond the direct effect on perceptions of effective types of leader and follower, the present study argues that values may also interact with organizational and personal contexts in influencing perceptions; thus, the authors suggest that a teacher's managerial position may moderate the relationships between personal values and perceptions of the effective leader and follower. Specifically, two key questions are addressed: (1) Is there a relationship between personal values and perceptions of the effective leader and effective follower? and (2) Are these relationships stable, or could they change across different contexts? Two hundred fifty-five Israeli teachers participated in this study, completing questionnaires about the effective student and the effective principal. Results of structural equation modeling (SEM) with maximum likelihood estimation showed, first, that the model fit the data well. Second, a positive relationship was found between self-enhancement and the anti-prototypes of both the effective principal and the effective student. The relationships between the self-transcendence value and both perceptions were also significant: self-transcendence was positively related to the way the teacher perceives the prototypes of the effective principal and the effective student. 
In addition, the authors found that teachers' managerial position moderates these relationships. The article contributes to the literature on both perceptions and personal values. Although several earlier studies explored implicit leadership theories and implicit followership theories, personality characteristics (values) have garnered less attention in this matter. This study shows that personal values, the deeply rooted, abstract motivations that guide, justify or explain attitudes, norms, opinions and actions, explain differences in perceptions of the effective leader and follower. The results advance the theoretical understanding of the relationship between personal values and individuals' perceptions in organizations. An additional contribution of this study is the use of the teacher's managerial position to explain a potential boundary condition on the translation of personal values into outcomes. The findings suggest that through the management process in the organization, teachers acquire knowledge and skills that augment their ability (beyond their personal values) to predict perceptions of the ideal types of principal and student. The study elucidates the unique role of personal values in understanding organizational thinking. It seems that personal values might explain differences in individual preferences for organizational paradigms (mechanistic vs. organic).

Keywords: implicit leadership theories, implicit followership theories, organizational paradigms, personal values

Procedia PDF Downloads 153
426 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition

Authors: Tamir Michal

Abstract:

The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment, and from a legal point of view it has offered a launchpad for creative judicial decisions. For example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jewish citizen appealing the refusal of the State to recognize his entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouins to establish permanent urban settlements, an interest which justified giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the construction of a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouins to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State's decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the theory of recognition. The article analyses the issue with the tools of recognition theory, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. 
By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitudes of others. Consequently, even if the Court's decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation whereas Bedouin villages were not is a setback in the struggle to make the Bedouins citizens with equal rights in Israeli society. As the Court held, 'Beyond their protective function, the Migunit [protective structures] may make a moral and psychological contribution that should not be undervalued'. This is a contribution that the Bedouins did not receive in the Abu Afash verdict. The basic thesis is that the verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel for cognitive respect, thereby advancing the Bedouins' ability to perceive themselves as equal human beings in Israeli society.

Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development

Procedia PDF Downloads 348
425 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages

Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour

Abstract:

The economy of any country is dependent on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system, so better operation of heavy vehicles on highways and arterials could improve the network's efficiency and reliability. In many cases, roundabouts can respond better than at-grade intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance of an at-grade intersection, a conventional two-lane roundabout, and a basic turbo-roundabout for freight movements. To analyze and evaluate the performance of the signalized intersections and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this study were designed to evaluate changes in the performance of vehicle movements under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of various geometric designs on vehicle movements. Overall traffic efficiency depends on the geometric layout of the intersections and on the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length at each intersection for varying truck percentages. 
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when traffic demand is low, even with high truck volume. More specifically, two-lane roundabouts show shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario with the highest volume and maximum truck and left-turn movements, the signalized intersection has queue lengths 3 times longer, and the turbo-roundabout 5 times longer, than the two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have queue lengths 11 times longer than two-lane roundabouts for the same scenario. Across all the developed scenarios, as traffic demand decreases, the queue lengths at turbo-roundabouts shorten, showing that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which each type of intersection performs better than the others.

Keywords: At-grade intersection, simulation, turbo-roundabout, two-lane roundabout

Procedia PDF Downloads 144
424 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands

Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen

Abstract:

Halal is a dietary law observed by people following the Islamic faith. It is a type of credence food quality that cannot easily be assured by consumers, even upon or after consumption. Halal certification therefore serves as a practical tool for consumers to make an informed choice, particularly in a country without a Muslim majority, including the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities; furthermore, willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected, i.e., the slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of the willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consume Halal beef completed the survey: Dutch consumers (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of the respondents (95%) were between 18 and 45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). Most respondents (76%) had a disposable monthly income of less than €2,500, while the rest earned more. The respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, for which 62% considered themselves not knowledgeable. The findings indicate that the slaughter method was valued as the most important attribute, followed by Halal certification, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for the slaughter method. Both the Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute. 
However, non-Dutch respondents valued it with higher importance (0.20) than their Dutch counterparts (0.16), and for non-Dutch respondents the price was more important than Halal certification. The ideal product, i.e., the product serving the highest utilities for consumers, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at €2.75 per 500 g. In general, an official Halal certifier was most preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTP values of -€0.73, -€0.93, and -€1.03 for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with price. WTP estimates differed across sociodemographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information might cause respondents to perceive it as the least important attribute. In the context of Halal-certified meat, adding traceability information to meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example, the use of pre-slaughter stunning; and second, it helps assure food safety. Therefore, integrating traceability information into meat packaging can help consumers make an informed decision on both Halal status and food safety.
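The negative WTP values reported above follow from the standard conditional-logit relation WTP = -(β_attribute / β_price). A minimal sketch of this calculation, using illustrative placeholder coefficients rather than the study's estimates:

```python
# Standard marginal willingness-to-pay in a discrete choice experiment,
# derived from conditional-logit coefficients: WTP = -(beta_attr / beta_price).
# The coefficients below are illustrative placeholders, NOT estimates
# from this study.
def wtp(beta_attribute: float, beta_price: float) -> float:
    if beta_price == 0:
        raise ValueError("price coefficient must be non-zero")
    return -beta_attribute / beta_price

# A positive attribute utility against a negative price coefficient gives
# a positive WTP; a negative attribute utility gives a negative WTP, as
# reported for the certifier levels in the abstract.
print(wtp(0.40, -0.80))   # 0.5
print(wtp(-0.32, -0.80))  # ≈ -0.4 (disutility -> negative WTP)
```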

Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay

Procedia PDF Downloads 124
423 The Development of an Innovator Teacher Training Curriculum to Create Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement in Private Schools in Phayao Province

Authors: Palita Sooksamran, Katcharin Mahawong

Abstract:

This research aims to develop an innovator teacher training curriculum for creating instructional innovation according to the active learning approach in order to enhance learning achievement. The research and development process was carried out in three steps. Step 1, a study of the needs for developing a training curriculum: a survey was conducted using a questionnaire among a sample of 176 teachers in private schools in Phayao province providing basic education, with the sample defined using a table of random numbers and stratified sampling, with the school as the stratum. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in the data analysis were the mean and standard deviation (S.D.). Step 3, a study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied, with a sample of 35 teachers from private schools in Phayao province who volunteered to attend. The results of the research showed that: 1. The essential needs, in descending order of the needs index, were selecting and creating multimedia, videos and applications for learning management; developing multimedia, videos and applications for learning management; and selecting innovative learning management techniques and methods for solving learning problems, respectively. 2. The components of the training curriculum include principles, aims, scope of content, training activities, learning materials and resources, and supervision evaluation. 
The scope of the curriculum consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The evaluation of the quality of the draft training curriculum was at the highest level; the experts suggested that the course objectives should be worded to convey outcomes. 3. Regarding the effectiveness of the training curriculum: 1) the cognitive outcomes of the teachers in creating innovative learning management showed a high relative gain score; 2) the assessment of learning management ability according to the active learning approach, as evaluated by two education supervisors, was very high overall; 3) regarding the quality of the innovative learning management based on the active learning approach, 7 instructional innovations were evaluated as outstanding works and 26 instructional innovations passed the standard; 4) the overall learning achievement of students who learned from the 35 sample teachers showed a high relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.
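The "relative gain score" reported above is commonly computed as a normalized gain: the share of the possible improvement actually achieved between pretest and posttest. The exact formula the study used is not stated, so the sketch below is an assumption for illustration only.

```python
# Assumed definition of a relative (normalized) gain score: the fraction
# of the possible improvement actually achieved. This is an illustration,
# not necessarily the exact formula used in the study.
def relative_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    if max_score == pre:
        raise ValueError("pre-test score is already at the maximum")
    return (post - pre) / (max_score - pre)

# A teacher moving from 40 to 70 out of 100 realizes half of the
# possible gain:
print(relative_gain(40, 70))  # 0.5
```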

Keywords: training curriculum, innovator teachers, active learning approach, learning achievement

Procedia PDF Downloads 46
422 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students' exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children's computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational, framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which gives children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of second grade. 
The interventions were designed with respect to the thematic units of the physical science curriculum, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
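The object-oriented concepts the abstract names (classes, objects, attributes) can be illustrated with a minimal sketch. The class and attribute names below are invented for illustration and are not taken from PhysGramming itself:

```python
# A minimal illustration of the OOP concepts mentioned in the abstract:
# a class (the blueprint), objects (instances), and attributes (state).
# These names are invented examples, not PhysGramming code.
class PuzzlePiece:
    """A class: the blueprint for every puzzle piece in a child's game."""
    def __init__(self, name: str, image: str, position: tuple):
        self.name = name          # attribute: which piece this is
        self.image = image        # attribute: the picture shown on screen
        self.position = position  # attribute: where the piece sits

    def move_to(self, new_position: tuple) -> None:
        """Objects change their own state through methods."""
        self.position = new_position

# Two distinct objects created from the same class:
sun = PuzzlePiece("sun", "sun.png", (0, 0))
cloud = PuzzlePiece("cloud", "cloud.png", (1, 0))
sun.move_to((2, 3))
print(sun.position)   # (2, 3)
print(cloud.name)     # cloud
```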

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 116
421 Enhanced Physiological Response of Blood Pressure and Improved Performance in Successive Divided Attention Test Seen with Classical Instrumental Background Music Compared to Controls

Authors: Shantala Herlekar

Abstract:

Introduction: The entrainment effect of music on cardiovascular parameters is well established. Music is used in the background by medical students while studying; however, does it really help them relax faster and concentrate better? Objectives: This study was done to compare the effects of classical instrumental background music versus no music on the blood pressure response over time and on a successively performed divided attention test in Indian and Malaysian first-year medical students. Method: 60 Indian and 60 Malaysian first-year medical students, with equal numbers of girls and boys, were randomized into two groups, i.e., a music group and a control group, thus creating four subgroups. Three different forms of the Symbol Digit Modality Test (to test concentration ability) were used as pre-test, during the music/control session, and post-test, assessed using total, correct and error scores. Simultaneously, multiple blood pressure recordings were taken: pre-test, at 1, 5, 15 and 25 minutes during the music/control (+SDMT) session, and post-test. The music group performed the test with classical instrumental background music, while the control group performed it in silence. Results were analyzed using Student's paired t-test; p < 0.05 was taken as statistically significant. A drop in blood pressure was indicative of a relaxed state, and a rise in blood pressure with task performance was indicative of increased arousal. Results: In the Symbol Digit Modality Test (SDMT), the music group showed significantly better results for correct (p = 0.02) and total (p = 0.029) scores during the post-test, while errors were reduced (p = 0.002). The Indian music group showed a decline in post-test error scores (p = 0.002). The Malaysian music group performed significantly better in all categories. 
The blood pressure response was similar in the music and control groups, with the following variations: a drop in blood pressure at 5 minutes, significant only in the music group (p < 0.001); a steep rise in values until 15 minutes (corresponding to the SDMT test), also significant only in the music group (p < 0.001); and systolic blood pressure readings in the controls during the post-test at lower levels than in the music group. On comparing the subgroups, little difference was noticed in the recordings of the Indian students' subgroups, while all the paired t-test values in the Malaysian music group were significant. Conclusion: These recordings indicate an increased relaxed state with classical instrumental music and increased arousal while performing a concentration task. The music used in our study was beneficial to students irrespective of their nationality and preference of music type. It can act as an 'active coping' strategy and alleviate stress within a very short period, in our study within a span of 5 minutes. When used in the background during task performance, it can increase arousal, which helps the students perform better. Implications: Music can be used between lectures for a short time to relax students and help them concentrate better in subsequent classes, especially in late-afternoon sessions.
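The paired t-test used above compares each subject's reading against their own baseline. A minimal stdlib-only sketch of the statistic; the blood pressure readings below are invented placeholders, not the study's data:

```python
# Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), where d are the
# within-subject differences. The readings below are invented
# placeholders, not data from this study.
import math
from statistics import mean, stdev

def paired_t(before: list, after: list) -> float:
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# e.g. systolic BP at baseline vs. during the divided attention task:
baseline    = [118, 122, 115, 130, 125, 119]
during_task = [124, 129, 118, 138, 131, 127]
t = paired_t(baseline, during_task)
print(round(t, 2))  # positive t -> consistent rise during the task
```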

Keywords: blood pressure, classical instrumental background music, ethnicity, symbol digit modality test

Procedia PDF Downloads 139
420 Using Human-Centred Service Design and Partnerships as a Model to Promote Cross-Sector Social Responsibility in Disaster Resilience: An Australian Case Study

Authors: Keith Diamond, Tracy Collier, Ciara Sterling, Ben Kraal

Abstract:

The increased frequency and intensity of disaster events in the Asia-Pacific region is likely to require organisations to better understand how their initiatives, and the support they provide to their customers, intersect with other organisations aiming to support communities in achieving disaster resilience. While there is a growing awareness that disaster response and recovery rebuild programmes need to adapt to more integrated, community-led approaches, there is often a discrepancy between how programmes intend to work and how they are collectively experienced in the community, creating undesired effects on community resilience. Following Australia’s North Queensland Monsoon Disaster of 2019, this research set out to understand and evaluate how the service and support ecosystem impacted on the local community’s experience and influenced their ability to respond and recover. The purpose of this initiative was to identify actionable, cross-sector, people-centered improvements that support communities to recover and thrive when faced with disaster. The challenge arose as a group of organisations, including utility providers, banks, insurers, and community organisations, acknowledged that improving their own services would have limited impact on community wellbeing unless the other services people need are also improved and aligned. The research applied human-centred service design methods, typically applied to single products or services, to design a new way to understand a whole-of-community journey. Phase 1 of the research conducted deep contextual interviews with residents and small business owners impacted by the North Queensland Monsoon and qualitative data was analysed to produce community journey maps that detailed how individuals navigated essential services, such as accommodation, finance, health, and community. 
Phase 2 conducted interviews and focus groups with frontline workers representing the industries that provided essential services to assist the community. Data from Phase 1 and Phase 2 of the research were analysed and combined to generate a systems map visualising the positive and negative impacts that occurred across the disaster response and recovery service ecosystem. Insights gained from the research have catalysed collective action to address future Australian disaster events. The case study outlines a transformative way for sectors and industries to rethink their corporate social responsibility activities, moving towards a cross-sector partnership model that shares responsibility and approaches disaster response and recovery as a single service that can be designed to meet the needs of communities.

Keywords: corporate social responsibility, cross sector partnerships, disaster resilience, human-centred design, service design, systems change

Procedia PDF Downloads 149
419 Monitoring the Responses to Nociceptive Stimuli During General Anesthesia Based on Electroencephalographic Signals in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway (LMA)

Authors: Ofelia Loani Elvir Lazo, Roya Yumul, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

Background: Monitoring the anti-nociceptive drug effect is useful because a sudden and strong nociceptive stimulus may result in untoward autonomic responses and muscular reflex movements. Monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists information regarding a patient's level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. Methods: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student's t-distribution, prediction probability (PK), and ANOVA were used to statistically compare the relative ability to detect nociceptive stimuli for each index. Twenty patients were included for the preliminary analysis. 
Results: A comparison of overall intraoperative BIS, qCON, and qNOX indices demonstrated no significant difference between the three measures (N=62, p > 0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p = 0.0408). Notably, certain hemodynamic measurements demonstrated an increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p = 0.032], and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p = 0.078], respectively). Conclusion: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation to an imminent response to stimulation relative to all other indices.

Keywords: antinociception, bispectral index (BIS), general anesthesia, laryngeal mask airway, qCON/qNOX

Procedia PDF Downloads 90
418 Diagnostic Yield of CTPA and Value of Pre-Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, radiation exposure, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not CTPA was overused in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. Overall, the Wells score was low (<2) in 28%, moderate (>2 to <6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients.
PE was confirmed in 12% of patients (n=12; 8 male); 4 had bilateral PEs. In the high-risk group (Wells >6, n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells >2 to <6, n=47), there were 6, and in the low-risk group (Wells <2, n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
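The risk stratification and per-group yields reported above reduce to simple arithmetic. The sketch below assumes the abstract's cut-offs (low <2, moderate >2 to <6, high >6); how boundary scores of exactly 2 or 6 were classified is not stated in the abstract, so the handling here is an assumption, and the function names are illustrative.

```python
def risk_group(wells_score):
    """Classify a Wells score using the abstract's cut-offs.
    Boundary handling (scores of exactly 2 or 6) is assumed."""
    if wells_score > 6:
        return "high"
    if wells_score > 2:
        return "moderate"
    return "low"

def diagnostic_yield(confirmed, scanned):
    """Percentage of CTPA scans positive for PE."""
    return 100 * confirmed / scanned

# Figures taken from the abstract: (confirmed PEs, patients scanned)
cohort = {"high": (5, 15), "moderate": (6, 47), "low": (1, 28)}
for group, (pe, n) in cohort.items():
    print(f"{group}: {diagnostic_yield(pe, n):.1f}%")
```

Run against the cohort figures, the yield falls steeply from the high-risk to the low-risk group, which is the pattern the conclusion relies on when arguing that low-score patients were scanned unnecessarily.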

Keywords: CTPA, D-dimer, pulmonary embolism, Wells score

Procedia PDF Downloads 229