Search results for: evaluation capacity building
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13507

487 Narratives of Self-Renewal: Looking for A Middle Earth In-Between Psychoanalysis and the Search for Consciousness

Authors: Marilena Fatigante

Abstract:

Contemporary psychoanalysis is increasingly acknowledging the existential demands of clients in psychotherapy. A significant aspect of the personal crises that patients face today is often rooted in the difficulty of finding meaning in their own existence, even after working through or resolving traumatic memories and experiences. As far back as the correspondence between Freud and Romain Rolland (1927), psychoanalysis could not ignore that investigation of the psyche also encompasses the encounter with deep, psycho-sensory experiences, which involve a sense of "being one with the external world as a whole", the well-known “oceanic feeling”, as Rolland termed it. Despite the recognition of Non-ordinary States of Consciousness (NSC) as catalysts for transformation in clinical practice, highlighted by neuroscience and by results from psychedelic-assisted therapies, there is little research on how psychoanalytic knowledge can integrate with other treatment traditions. These traditions, commonly rooted in non-Western, unconventional, and non-formal psychological knowledge, emphasize the individual’s innate tendency toward existential integrity and transcendence of self-boundaries. Inspired by an autobiographical account, this paper examines the narratives of 12 individuals who engaged in psychoanalytic therapy and also underwent treatment involving a non-formal helping relationship with an expert guide in consciousness, which included experiences of this nature. The guide draws on 35 years of experience in psychological and multidisciplinary studies in the human sciences and art, and demonstrates knowledge of many wisdom traditions, ranging from Eastern to Western philosophy, including psychoanalysis and its development in cultural perspective (e.g., ethnopsychiatry). Analyses focused primarily on two dimensions that research has identified as central in assessing the degree of treatment “success” in patients’ narrative accounts of their therapies: agency and coherence, defined respectively as the increase, expressed in language, of the client’s perceived ability to manage his/her own challenges, and the capacity, inherent in “narrative” itself as a resource for meaning making (Bruner, 1990), to provide the subject with a sense of unity, endowing his/her life experience with temporal and logical sequentiality. The present study reports that, in all the participants' narratives, agency and coherence are described differently than in “common” psychotherapy narratives. Although the participants consistently identified themselves as responsible, agentic subjects, the sense of agency derived from the non-conventional guidance pathway is never reduced to a personal, individual accomplishment. Rather, the more a new, fuller sense of “Life” (more than “Self”) develops out of the guidance pathway they engage in with the expert guide, the more they “surrender” their own sense of autonomy and self-containment. Safran (2016) identified something similar when discussing the sense of surrender and “grace” in psychoanalytic sessions. Secondly, the narratives of individuals engaging with the expert guide describe coherence not as repairing or enforcing continuity but as enhancing their ability to navigate dramatic discontinuities, falls, abrupt leaps and passages marked by feelings of loss and bereavement. The paper ultimately explores whether valid criteria can be established to analyze experiences of non-conventional paths of self-evolution. These paths are not opposed or alternative to conventional ones, and should not be simplistically dismissed as exotic or magical.

Keywords: oceanic feeling, non-conventional guidance, consciousness, narratives, treatment outcomes

Procedia PDF Downloads 38
486 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant that operated in Sydney for nearly a century. The contaminants comprised coal tar residues that were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and were subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; immobilizing contaminants in the STPs using solidification and stabilization was therefore identified as a primary source-control remediation option to mitigate continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-source-evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2,000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAH loads discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an ongoing reduction in PAH concentrations in harbour sediments. The flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not migration of PAH-laden sediments from the STPs during a large-scale remediation project.
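
To make the flux comparison above concrete, here is a minimal sketch of the mass-flux arithmetic: an annual PAH load estimated as concentration times water discharge, integrated over a year. The concentration and flow values are illustrative placeholders, not monitoring data from the study.

```python
# A minimal sketch of annual contaminant mass flux (load = concentration x discharge).
# The example concentration and discharge are assumed values, not study data.
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_pah_flux_kg(concentration_ug_per_l, discharge_m3_per_s):
    """PAH mass flux in kg/year from concentration (ug/L) and water discharge (m3/s)."""
    # 1 m3 = 1000 L; 1 ug = 1e-9 kg
    kg_per_s = concentration_ug_per_l * 1000.0 * 1e-9 * discharge_m3_per_s
    return kg_per_s * SECONDS_PER_YEAR

# Example: a modest outflow carrying a low PAH concentration during remediation.
flux = annual_pah_flux_kg(concentration_ug_per_l=5.0, discharge_m3_per_s=0.5)
print(f"estimated PAH flux: {flux:.0f} kg/year")  # ~79 kg/year, within the 17-97 kg/year range reported above
```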

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 239
485 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from the auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This shift is shaped by key forces such as climate change, which has induced a change in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports – are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform-ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and driven their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts.
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 181
484 Creation and Evaluation of an Academic Blog of Tools for the Self-Correction of Written Production in English

Authors: Imelda Katherine Brady, Iria Da Cunha Fanego

Abstract:

Today's university students are considered digital natives, and the use of Information Technologies (ITs) forms a large part of their study and learning. In the context of language studies, applications that help with revision of grammar or vocabulary are particularly useful, especially if they are open access. Studies have shown the effectiveness of this type of application in the learning of English as a foreign language and that using IT can help learners become more autonomous in foreign language acquisition, given that these applications can enhance awareness of the learning process; this means that learners are less dependent on the teacher for corrective feedback. We also propose that the exploitation of these technologies enhances the work of the language instructor wishing to incorporate IT into his/her practice. In this context, the aim of this paper is to present the creation of a repository of tools that provide support in the writing and correction of texts in English, and the assessment of their usefulness by university students enrolled in the English Studies degree. The project seeks to encourage the development of autonomous learning through the acquisition of skills linked to the self-correction of written work in English. To this end, our methodology follows five phases. First, a selection is made of the main open-access online applications available for the correction of written texts in English: AutoCrit, Hemingway, Grammarly, LanguageTool, OutWrite, PaperRater, ProWritingAid, Reverso, Slick Write, Spell Check Plus and Virtual Writing Tutor. Second, the functionalities of each of these tools (spelling, grammar, style correction, etc.) are analyzed. Third, explanatory materials (texts and video tutorials) are prepared for each tool. Fourth, these materials are uploaded to a repository of our university in the form of an institutional blog, which is made available to students and the general public. Finally, a survey was designed to collect students’ feedback. The survey aimed to analyse the usefulness of the blog and the quality of the explanatory materials, as well as the degree of usefulness that students assigned to each of the tools offered. In this paper, we present the results of the analysis of data received from 33 students in the first semester of the 2021-22 academic year. One result we highlight is that the students rated this resource very highly, in addition to offering very valuable information on the perceived usefulness of the applications provided for them to review. Our work, carried out within the framework of a teaching innovation project funded by our university, emphasizes that teachers need to design methodological strategies that help their students improve the quality of their written production in English and, by extension, their linguistic competence.

Keywords: academic blog, open access tools, online self-correction, written production in English, university learning

Procedia PDF Downloads 101
483 Balanced Scorecard as a Tool to Improve NAAC Accreditation – A Case Study in Indian Higher Education

Authors: CA Kishore S. Peshori

Abstract:

Introduction: India, a country of vast diversity and huge population, is set to have the world's largest young population by 2020. Higher education has always been, and will remain, a basic requirement for turning a developing nation into a developed one. To improve any system, it needs to be benchmarked, and various tools exist for benchmarking systems. Education in India is delivered by universities, which are mainly funded by government. These universities, in turn, set up colleges for delivering education, which are again funded mainly by government. Recently, however, autonomy has also been given to universities and colleges. Moreover, foreign universities are waiting to enter Indian boundaries. With a large number of universities and colleges, it has become more and more necessary to measure these institutes for benchmarking, and various tools exist for doing so. In India, college assessment has been made compulsory by the UGC, and NAAC has been officially recognised as the accreditation authority. NAAC accreditation is based on seven criteria: 1. Curricular assessments, 2. Teaching, learning and evaluation, 3. Research, consultancy and extension, 4. Infrastructure and learning resources, 5. Student support and progression, 6. Governance, leadership and management, and 7. Innovation and best practices. NAAC tries to benchmark the institution for identification, sustainability, dissemination and adaptation of best practices. It grades the institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to achieve the best grades, but they have not come across a systematic tool to achieve these results. The Balanced Scorecard (BSC) developed by Kaplan has been a successful tool for corporates to develop best practices so as to increase their financial performance and also to retain and grow their customer base, taking the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper develops a prototype for a college based on secondary data. Once the prototype is developed, the researcher will, using a questionnaire, test this tool for successful implementation. The success of this research will depend on the implementation of the BSC at an institute and the improvement of its grading due to this implementation. Limitation of time is a major constraint in this research, as the NAAC cycle takes a minimum of four years for accreditation and reaccreditation; the methodology will therefore limit itself to secondary data and a questionnaire circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception. The BSC only has to be realigned to suit the NAAC criteria. Once this prototype is developed, its success can be tested only through implementation, but this research paper is the first step towards developing the tool; it also initiates that path by developing a questionnaire and evaluating the responses in order to move to the next level of actual implementation.

Keywords: balanced scorecard, benchmarking, NAAC, UGC

Procedia PDF Downloads 272
482 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering

Authors: Youssef I. Hafez

Abstract:

Since ancient times, rivers have been favored places for people and civilizations to live and settle along their banks. However, due to floods and droughts, and especially the severe conditions brought by global warming and climate change, river channels are continually evolving and moving in the lateral direction, changing their planform either through the straightening of curved reaches (meander cut-off) or through increasing meander curvature. The lateral shift or shrinkage of a river channel severely affects the river banks and the flood plain, with tremendous impact on the surrounding environment. Therefore, understanding the formation and the continual processes of river channel meandering is of paramount importance. So far, in spite of the huge number of publications about river meandering, there has not been a satisfactory theory or approach that provides a clear explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are often needed to describe meander geometry. The first is a scale parameter, such as the meander arc length. The second is a shape parameter, such as the maximum angle a meander path makes with the channel's mean down-path direction. These two parameters, if known, can determine the meander path and geometry, for example when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to illustrate the origin and mechanics of formation of river meandering. This theory advocates that the longitudinal imbalance between the valley and channel slopes (with the former being greater than the latter) leads to the formation of a curved meander channel in order to reduce the excess energy through its expenditure as transverse energy loss. Two relations are developed based on this theory: one for the determination of the river channel radius of curvature at the bend apex (shape parameter) and the other for the determination of river channel sinuosity. The sinuosity equation performed very well when applied to existing available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. The meander wavelength was then determined from the equations for the arc length and the sinuosity. The developed equation compared well with available field data. Effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced in order to explain the changes in river channel planform due to changes in flow discharges and sediment loads induced by global warming and climate change.
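
As a rough illustration of the geometry discussed above, the sketch below traces a sine-generated meander path and estimates its sinuosity numerically for several maximum deflection angles. The angles and arc length are illustrative inputs, not results of the paper's uniform excess energy relations.

```python
# A minimal sketch of the sine-generated curve and a numerical sinuosity estimate.
# The maximum deflection angles and arc length below are illustrative assumptions.
import numpy as np

def sine_generated_path(omega_deg, arc_length=100.0, n=2000):
    """Trace a meander path whose direction angle varies as a sine of distance along the channel."""
    omega = np.radians(omega_deg)            # maximum angle with the mean down-valley direction
    s = np.linspace(0.0, arc_length, n)      # distance along the channel (one meander wavelength)
    theta = omega * np.sin(2.0 * np.pi * s / arc_length)
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(theta)) * ds        # down-valley coordinate
    y = np.cumsum(np.sin(theta)) * ds        # cross-valley coordinate
    return s, x, y

def sinuosity(omega_deg, arc_length=100.0):
    """Sinuosity = channel arc length / straight-line (valley) length over one wavelength."""
    s, x, y = sine_generated_path(omega_deg, arc_length)
    valley_length = np.hypot(x[-1] - x[0], y[-1] - y[0])
    return arc_length / valley_length

for omega in (30, 60, 90, 110):
    print(f"max angle {omega:3d} deg -> sinuosity {sinuosity(omega):.2f}")
```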

Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming

Procedia PDF Downloads 223
481 An Integrated HCV Testing Model as a Method to Improve Identification and Linkage to Care in a Network of Community Health Centers in Philadelphia, PA

Authors: Catelyn Coyle, Helena Kwakwa

Abstract:

Objective: As novel and better tolerated therapies become available, effective HCV testing and care models become increasingly necessary, not only to identify individuals with active infection but also to link them to HCV providers for medical evaluation and treatment. Our aim is to describe an effective HCV testing and linkage-to-care model piloted in a network of five community health centers located in Philadelphia, PA. Methods: In October 2012, the National Nursing Centers Consortium piloted a routine opt-out HCV testing model in a network of community health centers, one of which treats HCV, HIV, and co-infected patients. Key aspects of the model were medical-assistant-initiated testing, the use of laboratory-based reflex test technology, and electronic medical record modifications to prompt, track, report and facilitate payment of test costs. Universal testing of all adult patients was implemented at health centers serving patients at high risk for HCV. The other sites integrated risk-based testing, where patients meeting one or more of the CDC testing recommendation risk factors or with a history of homelessness were eligible for HCV testing. Mid-course adjustments included the integration of dual HCV/HIV testing, the development of a linkage-to-care coordinator position to facilitate the transition of HIV- and/or HCV-positive patients from primary to specialist care, and the transition to universal HCV testing across all testing sites. Results: From October 2012 to June 2015, the health centers performed 7,730 HCV tests and identified 886 (11.5%) patients with a positive HCV-antibody test. Of those with positive HCV-antibody tests, 838 (94.6%) had an HCV-RNA confirmatory test and 590 (70.4%) progressed to current HCV infection (overall prevalence = 7.6%); 524 (88.8%) received their RNA-positive test result; 429 (72.7%) were referred to an HCV care specialist and 271 (45.9%) were seen by the HCV care specialist. The best linkage-to-care results were seen at the test-and-treat site, where, of the 333 patients with current HCV infection, 175 (52.6%) were seen by an HCV care specialist. Of the patients with active HCV infection, 349 (59.2%) were unaware of their HCV-positive status at the time of diagnosis. Since the integration of dual HCV/HIV testing in September 2013, 9,506 HIV tests were performed, 85 (0.9%) patients had positive HIV tests, 81 (95.3%) received their confirmed HIV test result and 77 (90.6%) were linked to HIV care. Dual HCV/HIV testing increased the number of HCV tests performed by 362 between the 9 months preceding dual testing and the first 9 months after its integration, a 23.7% increase. Conclusion: Our HCV testing model shows that integrated routine testing and linkage to care is feasible and improved detection and linkage to care in a primary care setting. We found that the prevalence of current HCV infection was higher than that reported locally in Philadelphia and nationwide. Intensive linkage services can increase the number of patients who successfully navigate the HCV treatment cascade. The linkage-to-care coordinator is an important position that acts as a trusted intermediary for patients being linked to care.
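
The care-cascade percentages quoted above can be reproduced directly from the reported counts; the short sketch below does so, with each step's denominator chosen to match the way the figures are quoted in the abstract.

```python
# A minimal sketch of the HCV care-cascade arithmetic, using the counts reported above.
steps = [
    # (label, numerator, denominator)
    ("HCV antibody positive",          886, 7730),
    ("HCV-RNA confirmatory test done", 838,  886),
    ("current HCV infection",          590,  838),
    ("received RNA-positive result",   524,  590),
    ("referred to HCV specialist",     429,  590),
    ("seen by HCV specialist",         271,  590),
]

for label, num, den in steps:
    print(f"{label:32s} {num:5d}/{den:<5d} = {100.0 * num / den:5.1f}%")

# Overall prevalence of current infection among all tests performed.
print(f"{'overall prevalence':32s} {590:5d}/{7730:<5d} = {100.0 * 590 / 7730:5.1f}%")
```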

Keywords: HCV, routine testing, linkage to care, community health centers

Procedia PDF Downloads 357
480 Establishment and Evaluation of a Nutrition Therapy Guide and 7-Day Menu for Educating Hemodialysis Patients: A Case Study of Douala General Hospital, Cameroon

Authors: Ngwa Lodence Njwe

Abstract:

This study investigated the response of hemodialysis patients to an established nutrition therapy guide accompanied by a 7-day menu plan administered for a month. End Stage Renal Disease (ESRD), also known as End Stage Kidney Disease (ESKD), is a non-communicable disease primarily caused by hypertension and diabetes, posing significant challenges in both developed and developing nations. Hemodialysis is a key treatment for these patients. In this experimental study, 100 hemodialysis patients from Douala General Hospital in Cameroon participated. A questionnaire was used to collect data on sociodemographic and anthropometric characteristics, health status, and dietary intake, while medical records provided biomedical data. The levels of the biochemical parameters (phosphorus, calcium and hemoglobin) were determined before and one month after the distribution of the nutrition education guide and the use of the 7-day menu plan. The phosphorus and calcium levels were measured using an LTCC03 semi-automatic chemistry analyzer: blood was collected from each patient into a test tube, allowed to clot and centrifuged; 50 µL of the serum was aspirated by the analyzer for Ca and P analysis, and the results were read from the display. The hemoglobin level was measured using the URIT-12 hemoglobin meter: the blood sample was collected by finger prick and placed on a strip, and the results were read from the screen. The means of the biochemical parameters were then computed. The most prevalent age group was 40-49 years, with males constituting 70% and females 30% of respondents. Among these patients, 80% were hypertensive, 3% had both hypertension and diabetes, 9% were hypertensive, diabetic, and obese, and 1% suffered from hypertension and heart failure. Analysis of anthropometric parameters revealed a high prevalence of underweight, overweight, and obesity, highlighting the urgent need for targeted nutrition interventions to modify cooking methods, enhance food choices, and increase dietary variety for improved quality of life. Before the nutrition therapy guide was implemented, average calcium levels were 73.05 mg/L for males and 89.44 mg/L for females; post-implementation, these values increased to 77.55 mg/L and 91.44 mg/L, respectively. Conversely, average phosphorus levels decreased from 42.05 mg/L for males and 43.55 mg/L for females to 41.05 mg/L and 39.3 mg/L, respectively, after the intervention. Additionally, average hemoglobin levels increased from 8.35 g/dL for males and 8.5 g/dL for females to 9.2 g/dL and 8.95 g/dL, respectively. The findings confirm that the nutrition therapy guide and the 7-day menu significantly impacted the biomedical parameters of hemodialysis patients, underscoring the need for ongoing nutrition education and counseling for this population.

Keywords: end stage kidney disease, nutrition therapy guide, nutritional status, anthropometric parameters, food frequency, biomedical data

Procedia PDF Downloads 28
479 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential sources of emission. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in the city neighborhood. Furthermore, this study quantifies the percentage of VOCs in the city's air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and for the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as a suitable filter for PM₂.₅ air sampling. Elemental analyses were conducted using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). VOC measurement of the air samples was performed using Gas Chromatography-Flame Ionization Detection (GC-FID) and Gas Chromatography-Mass Spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. In this study, a sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare samples for measurement by the GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed greatly between wildfire and daily samples: 83.0 μg/m³ during the wildfire period versus 23.7 μg/m³ in daily samples. Elevated concentrations of iron, nickel and manganese were found in all samples, and mercury was found in some samples. At high doses, these elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compounds (VOCs)

Procedia PDF Downloads 142
478 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance, addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study's first section starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. In order to overcome these challenges, the project suggests a PCA-driven methodology that isolates important characteristics influencing asset returns by reducing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals such as regulatory compliance or sustainability standards. This model provides a more comprehensive understanding of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market situations. Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that, by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
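
As an illustration of the general approach described above (not the authors' actual model or data), the following sketch reduces a synthetic return matrix with PCA, rebuilds a denoised covariance matrix from the retained factors, and solves a weighted-sum (return-versus-risk) scalarization of the multi-objective allocation problem. The variance threshold, risk-aversion weight, and long-only bounds are illustrative assumptions.

```python
# A minimal sketch of PCA-driven asset allocation on synthetic data; all parameter
# choices below are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_obs, n_assets = 750, 20                      # ~3 years of daily returns, 20 assets
returns = rng.normal(0.0004, 0.01, (n_obs, n_assets))

# 1. Reduce dimensionality: keep the principal components explaining 90% of variance.
pca = PCA(n_components=0.90)
factors = pca.fit_transform(returns)           # factor return series
loadings = pca.components_                     # shape (k, n_assets)

# 2. Rebuild a denoised covariance matrix from the retained factors.
factor_cov = np.cov(factors, rowvar=False)
idio_var = np.var(returns - factors @ loadings, axis=0)
cov = loadings.T @ factor_cov @ loadings + np.diag(idio_var)
mu = returns.mean(axis=0)

# 3. Multi-objective step: maximize return minus a risk penalty (weighted-sum scalarization).
risk_aversion = 5.0
def objective(w):
    return -(w @ mu - risk_aversion * w @ cov @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
bounds = [(0.0, 0.10)] * n_assets                                 # long-only, 10% position cap
w0 = np.full(n_assets, 1.0 / n_assets)
res = minimize(objective, w0, bounds=bounds, constraints=constraints)

print("weights sum:", round(res.x.sum(), 4))
print("expected daily return:", round(float(res.x @ mu), 6))
print("daily volatility:", round(float(np.sqrt(res.x @ cov @ res.x)), 6))
```

Additional objectives (e.g. a sustainability score floor) could be added as further constraints or penalty terms in the same scalarized objective.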

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 47
477 Evaluation of Differential Interaction between Flavanols and Saliva Proteins by Diffusion and Precipitation Assays on Cellulose Membranes

Authors: E. Obreque-Slier, V. Contreras-Cortez, R. López-Solís

Abstract:

Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. This sensation has been closely related to the interaction and precipitation between salivary proteins and polyphenols, specifically flavanols or proanthocyanidins. In addition, the type and concentration of proanthocyanidins significantly influence the intensity of astringency and, consequently, the protein/proanthocyanidin interaction. However, most studies are based on the interaction between saliva and highly complex polyphenols, without considering the effect of the monomeric proanthocyanidins present in different foods. The aim of this study was to evaluate the effect of different monomeric proanthocyanidins on the diffusion and precipitation of salivary proteins. Thus, solutions of catechin, epicatechin, epigallocatechin and gallocatechin (0, 2.0, 4.0, 6.0, 8.0 and 10 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15 µL aliquots of each mix were dotted on a cellulose membrane and allowed to dry spontaneously at room temperature. The membrane was fixed, rinsed and stained for proteins with Coomassie blue. After exhaustive washing in 7% acetic acid, the membrane was rinsed once in distilled water and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of protein-tannin interaction (diffusion test). The rest of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-μL aliquots from each of the supernatants were dotted on a cellulose membrane. The membrane was processed for protein staining as indicated above. The blue-stained area of protein distribution corresponding to each of the extract dilution-saliva mixtures was quantified with ImageJ 1.45 software. Each of the assays was performed at least three times. Salivary proteins initially display a biphasic distribution on cellulose membranes; that is, when aliquots of saliva are placed on absorbing cellulose membranes and free diffusion of saliva is allowed to occur, a non-diffusible protein fraction becomes surrounded by highly diffusible salivary proteins. In effect, once diffusion has ended, a protein-binding dye shows an intense blue-stained, roughly circular area close to the spotting site (non-diffusible fraction, NDF), which becomes surrounded by a weaker blue-stained outer band (diffusible fraction, DF). The diffusion test showed that epicatechin caused the complete disappearance of the DF from saliva at 2 mg/mL. Epigallocatechin and gallocatechin caused a similar effect at 4 mg/mL, while catechin generated the same effect at 8 mg/mL. In the precipitation test, the use of epicatechin and gallocatechin generated evident precipitates at the bottom of the Eppendorf tubes. In summary, the flavanol type differentially affects the diffusion and precipitation of salivary proteins, which would affect the sensation of astringency perceived by consumers.

Keywords: astringency, polyphenols, tannins, tannin-protein interaction

Procedia PDF Downloads 199
476 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies

Authors: Melina Kopke, Jelle Van Dijk

Abstract:

Persons with cognitive disabilities, such as Autism Spectrum Disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people to perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense. Most technologies serve either to prescribe or to monitor users’ actions, which in some sense objectifies them rather than strengthening their agency. This paper proposes that theories of embodied interaction could help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person’s empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice in the design of new, interactive assistive devices. Taking a Research-through-Design approach, we conducted a case study of designing to support independently living people with ASD in structuring daily activities. In three iterations we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that handling daily activities is a multidimensional issue. Not having the ability to self-manage one’s daily life has immense consequences for one’s self-image, and also has major effects on the relationship with professional caregivers. Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as reminders for tasks, but also to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling ‘reflection-in-action’; making information ‘publicly available’ in order to enable co-reflection and social coupling; enabling the implementation of shared reflections into an ‘endurable external feedback loop’ embedded in the person’s familiar ‘lifeworld’; and nudging situated actions with self-created action-affordances. In essence, the framework aims for the self-development of a suitable routine, or ‘situated practice’, by building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, Intellectual Disabilities, Dementia and Acquired Brain Injury, the framework will be applied, evaluated and further refined.

Keywords: assistive technology, design, embodiment, empowerment

Procedia PDF Downloads 278
475 Enhanced Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has the potential for petroleum generation and accumulation. The Amu Darya basin has the largest petroleum potential in the region; sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase recoverable oil reserves in the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF work involved a core flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterwards, a numerical simulation model of core-scale oil recovery by LSWF was built with Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology at field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it. To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible to match the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeability of oil and water in this study was obtained using Corey’s equation. In the Kashkari oil field simulation model, three models were considered to evaluate the LSWF effect on oil recovery: 1. a base model (with no water injection), 2. an FW injection model, and 3. an LSW injection model. Based on the results of the LSWF laboratory experiment and the computer simulation analysis, oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% oil can be observed. About 6.4% of the field's oil is produced by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
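
Since the study states that relative permeabilities were obtained from Corey's equation, the following sketch shows a standard Corey-type parameterization; the endpoint saturations, exponents, and endpoint permeabilities are illustrative placeholders rather than the Kashkari values.

```python
# A minimal sketch of Corey-type relative permeability curves; the endpoint
# saturations, exponents, and endpoint permeabilities are assumed, not from the study.
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.25, krw_max=0.3, kro_max=0.8, nw=2.0, no=2.0):
    """Return (krw, kro) for a water saturation array sw."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)   # normalized water saturation
    krw = krw_max * swn ** nw                                  # water relative permeability
    kro = kro_max * (1.0 - swn) ** no                          # oil relative permeability
    return krw, kro

sw = np.linspace(0.2, 0.75, 12)
krw, kro = corey_relperm(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```

In practice, LSWF effects are often represented in simulators such as CMG-GEM by interpolating between a high-salinity and a low-salinity relative permeability set as a function of brine salinity.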

Keywords: low salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 42
474 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of waste water treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic per year, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant of the Czech Republic. Because sewage sludge contains a large amount of substances that are not beneficial to human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of alternative fuel production from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology covers the entire logistics chain, from sludge extraction, through mechanical moisture reduction to about 40%, transport to the pelletizing line, and drying for pelleting, to the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting. The analysis of the economic efficiency of the pellets is based on a simulation of the cash flows associated with the implementation of the project over its lifetime. For a given required return on invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project is zero over the project lifetime. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value. The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.) and the inputs work with market prices. At the same time, the opportunity cost principle is respected; the use of waste for alternative fuels is credited with the saved costs of waste disposal. The methodology also accounts for the emission allowances saved due to the displacement of coal by the alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce sufficiently high-quality pellets from sludge. A mixture of sludge and paper waste has proved to be a more suitable material for pelleting. At the same time, preliminary results of the analysis of the economic efficiency of this sludge disposal method show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
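
A minimal sketch of the break-even pricing logic described above: search for the pellet price at which the project NPV over the 10-year horizon equals zero for a required return. All inputs (investment, output, operating cost, avoided disposal cost, discount rate) are illustrative placeholders, not the study's figures, and taxes, depreciation, and inflation are omitted for brevity.

```python
# A minimal sketch of break-even pellet pricing via NPV = 0; all inputs are assumed values.
from scipy.optimize import brentq

investment = 2_000_000.0        # pelletizing line capex, EUR (assumed)
annual_output_t = 8_000.0       # pellet production, t/year (assumed)
annual_opex = 450_000.0         # operating costs, EUR/year (assumed)
avoided_disposal = 60.0         # saved sludge disposal cost, EUR/t of pellets (assumed)
discount_rate = 0.08            # required return on invested capital (assumed)
lifetime = 10                   # years, matching the chosen horizon

def npv(price_eur_per_t):
    """Net present value of the project for a given pellet price."""
    cash_flows = [
        annual_output_t * (price_eur_per_t + avoided_disposal) - annual_opex
        for _ in range(lifetime)
    ]
    discounted = sum(cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    return discounted - investment

# Search for the price that makes the NPV exactly zero over the project lifetime.
break_even_price = brentq(npv, 0.0, 500.0)
print(f"break-even pellet price: {break_even_price:.1f} EUR/t")
```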

Keywords: alternative fuel, economic analysis, pelleting, sewage sludge

Procedia PDF Downloads 135
473 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks

Authors: Shaleeza Sohail

Abstract:

Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations provide students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students’ contributions in a group environment based on self and peer evaluations. All these systems lack a progressive aspect to these assessments and feedback, which is the most crucial factor for ongoing improvement and life-long learning. In addition, a number of assignments and projects are designed in a manner in which smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may provide students insight into their performance as a group member for a particular component after its submission. Ideally, it should also create an opportunity to improve for the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual’s contribution score on an ongoing basis. Hence, students see the change in their own contribution scores over the complete project, based on the smaller assessment components. The Self-Assessment Factor is calculated as an indicator of how close a student's self-perception of their own contribution is to that student's contribution as perceived by the other members of the group. The Peer-Assessment Factor is calculated to compare the perceived contribution of one student with the average value for the group. Our system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback is provided for students and teachers to visualize the consistency of members’ contributions as perceived by the group. Teachers can use these factors to judge the individual contributions of the group members to the combined tasks and allocate marks/grades accordingly. This factor is shown to students for all groups undertaking the same assessment, so the group members can comparatively analyze the efficiency of their group relative to other groups. Our system provides flexibility for instructors to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members’ contributions on a scale from significantly higher to significantly lower. Preliminary testing of the prototype system was done with a set of predefined cases to explicitly show the relation of the system feedback factors to the case studies. The results show that such progressive feedback to students can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of group dynamics in order to improve team unity and the fair division of team tasks.
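
To make the three factors more concrete, the sketch below computes them from a small peer-rating matrix. The abstract defines the factors only qualitatively, so the exact formulas used here (self-rating minus the peers' mean, received mean relative to the group average, and a spread-based coherence measure) are illustrative assumptions, not the system's actual implementation.

```python
# A minimal sketch of possible self/peer/coherence factor calculations.
# The formulas and the 1-5 rating scale are illustrative assumptions.
import numpy as np

# ratings[i, j] = score student i gave to student j (diagonal = self-ratings).
ratings = np.array([
    [4, 3, 5, 2],
    [5, 4, 4, 3],
    [4, 3, 4, 2],
    [5, 4, 5, 3],
], dtype=float)

n = ratings.shape[0]
self_scores = np.diag(ratings)
peer_mask = ~np.eye(n, dtype=bool)

# Mean score each student received from the *other* group members.
peer_received = np.array([ratings[:, j][peer_mask[:, j]].mean() for j in range(n)])

# Self-Assessment Factor: how close self-perception is to peers' perception (~0 = well calibrated).
saf = self_scores - peer_received

# Peer-Assessment Factor: each student's received score relative to the group average (>1 = above average).
paf = peer_received / peer_received.mean()

# Group Coherence Factor: how evenly contribution is spread across the group (closer to 1 = more coherent).
gcf = 1.0 - peer_received.std() / peer_received.mean()

for j in range(n):
    print(f"student {j}: SAF={saf[j]:+.2f}  PAF={paf[j]:.2f}")
print(f"group coherence factor: {gcf:.2f}")
```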

Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system

Procedia PDF Downloads 187
472 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management

Authors: M. Shahab Uddin, Pennung Warnitchai

Abstract:

Currently, more than half of the world’s population lives in cities, and the number and sizes of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, the complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to domino effects. Unplanned rapid growth, increased connectivity and interdependency among the infrastructures, resource scarcity, and many other socio-political factors affect the typical state of an urban system and make it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems consistently face external threats from natural and man-made hazards. Cities are not just complex, interdependent systems; they are also hubs of the economy, politics, culture, education, and more. For survival and sustainability, complex urban systems in the current world need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects when some components function improperly. On the other hand, ineffective management in a similar situation of overall disorder caused by a hazard's devastation may make the system more fragile and push it towards ultimate collapse. Following this reasoning, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system’s internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform vulnerabilities into a dynamic damaging force. An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are the keys to winning this battle. The current research proposes a contextual decision support system for an urban critical infrastructure system that integrates three different models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, a dynamic restoration process for individual infrastructures based on facility damage state and overall disruption in the surrounding support environment; and 3) an optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will hold disaster managers and decision makers accountable for checking all alternative decisions before any implementation, and will support them in producing the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of the damage cascade but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways. System understanding, situation awareness, and factual decisions may significantly help an urban system to survive and sustain itself.
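
A minimal sketch of the damage-cascade idea in model (1): initial hazard damage propagates between infrastructures through an interdependency matrix until the damage states stabilize. The matrix values, initial damage, and update rule below are illustrative assumptions, not the paper's calibrated model.

```python
# A minimal sketch of damage propagation over an assumed interdependency matrix.
import numpy as np

nodes = ["power", "water", "telecom", "transport"]
# dep[i, j] = fraction of node i's functionality lost per unit damage to node j (assumed values).
dep = np.array([
    [0.0, 0.1, 0.2, 0.1],
    [0.4, 0.0, 0.1, 0.1],
    [0.5, 0.0, 0.0, 0.1],
    [0.2, 0.1, 0.2, 0.0],
])

damage = np.array([0.6, 0.0, 0.0, 0.0])   # initial hazard damages the power network

for step in range(20):
    propagated = dep @ damage              # damage induced through interdependencies
    new_damage = np.clip(np.maximum(damage, propagated), 0.0, 1.0)
    if np.allclose(new_damage, damage, atol=1e-4):
        break                              # cascade has stabilized
    damage = new_damage

for name, d in zip(nodes, damage):
    print(f"{name:10s} damage state: {d:.2f}")
```

In the integrated system described above, the stabilized damage states would then feed the restoration and resource-optimization models.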

Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system

Procedia PDF Downloads 227
471 Sustainability Framework for Water Management in New Zealand's Canterbury Region

Authors: Bryan Jenkins

Abstract:

Introduction: The expansion of irrigation in the Canterbury region has led to sustainability limits being reached for water availability and for the cumulative effects of land use intensification. The institutional framework under New Zealand’s Resource Management Act was found to be an inadequate basis for managing water at sustainability limits. An alternative paradigm for water management was developed based on collaborative governance and nested adaptive systems. This led to the formulation and implementation of the Canterbury Water Management Strategy. Methods: The nested adaptive systems approach was adopted. Sustainability issues were identified at multiple spatial and time scales and defined potential failure pathways for the water resource system. These included biophysical and socio-economic issues such as water availability, cumulative effects on water quality due to land use intensification, projected changes in climate, public health, institutional arrangements, economic outcomes and externalities, and the social effects of changing technology. This led to the derivation of sustainability strategies to address these failure pathways. The collaborative governance approach involved stakeholder participation and community engagement to decide on a regional strategy; regional and zone committees of community and rūnanga (Māori group) members to develop implementation programmes for the strategy; and farmer collectives for operational management. Findings: The strategy identified that improvements in the efficiency of use of water already allocated were more effective in improving water availability than a reliance on increased storage alone. New forms of storage with fewer adverse impacts were introduced, such as managed aquifer recharge and off-river storage. Reducing nutrient losses from land use intensification by improving management practices has been a priority. Solution packages for addressing the degradation of vulnerable lakes and rivers have been prepared. Biodiversity enhancement projects have been initiated. Greater involvement of Māori has led to the incorporation of kaitiakitanga (resource stewardship) into implementation programmes. Emerging issues are the need for improved integration of surface water and groundwater interactions, increased use of modelling of water and financial outcomes to guide decision making, and equity in allocation among existing users as well as between existing and future users. Conclusions: However, sustainability analysis indicates that the proposed levels of management intervention are not sufficient to achieve community targets for water management. There is a need for more proactive recovery and rehabilitation measures. Managing to environmental limits is not sufficient; rather, managing adaptive cycles is needed. Better measurement and management of water use efficiency is required. The proposed implementation packages are not sufficient to deliver the desired water quality outcomes. Greater attention to targets important to environmental and recreational interests is needed to maintain trust in the collaborative process. Implementation programmes do not adequately address climate change adaptation and greenhouse gas mitigation. Affordability is a constraint on the adaptive capacity of farmers and communities. More funding mechanisms are required to implement proactive measures. The legislative and institutional framework needs to be changed to incorporate water framework legislation, regional sustainability strategies and water infrastructure coordination.

Keywords: collaborative governance, irrigation management, nested adaptive systems, sustainable water management

Procedia PDF Downloads 158
470 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix

Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti

Abstract:

When semiconductor particles are reduced in scale to nanometer dimensions, their optical and electro-optical properties strongly differ from those of bulk crystals of the same composition. Since sampling of cultural heritage artefacts is often not allowed, the potentialities of two non-invasive techniques, Raman and Fiber Optic Reflectance Spectroscopy (FORS), have been investigated, and the results of the analysis of some original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in this study. In order to evaluate the potential of non-invasive techniques for investigating the structure and distribution of nanoparticles dispersed in a glass matrix, Scanning Electron Microscopy (SEM) and energy-dispersive spectroscopy (EDS) mapping, together with Transmission Electron Microscopy (TEM) and Electron Diffraction Tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measurement of the quantum dot composition and size, thanks to the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, though the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix to the nanocrystals should be taken into account when considering the optical-phonon frequency values. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the role of surface phonons, as well as of confinement-induced scattering by phonons with non-zero wavevectors, in the broadening of the Raman peaks has been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe). For the compositional range 0.5≤x≤0.2, the presence of two absorption edges has been related to the contribution of both pure CdS and the CdSxSe1-x solid solution; this particular feature is probably due to the presence of unaltered cubic zinc blende structures of CdS that do not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band edge tailing originating from the disorder due to the formation of weak bonds and characterized by the Urbach edge energy has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter for evaluating the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially for those samples where a layered structure had been inferred from the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about the nanocrystals (NCs) and the heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix has been verified for most of the samples, while for those samples where disorder due to a more complex distribution of NC size and/or composition had been assumed, TEM clearly confirmed most of the assumptions made on the basis of the spectroscopic techniques.
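As a rough illustration of how the alloy band gap quoted above varies with composition, the following minimal sketch interpolates between the two endpoint values reported in the abstract (2.42 eV for CdS, 1.70 eV for CdSe) using a Vegard-type relation with a quadratic bowing term; the bowing parameter value is an assumption for illustration only, not a quantity fitted in this study.

```python
# Sketch: estimate the CdS(x)Se(1-x) optical band gap from composition x.
# Endpoint gaps (2.42 eV for CdS, 1.70 eV for CdSe) are taken from the abstract;
# the quadratic bowing parameter b is a placeholder assumption, not a measured value.

E_CDS = 2.42   # eV, band gap of pure CdS (x = 1)
E_CDSE = 1.70  # eV, band gap of pure CdSe (x = 0)

def band_gap(x: float, b: float = 0.3) -> float:
    """Vegard-type interpolation with bowing: Eg(x) = x*E_CdS + (1-x)*E_CdSe - b*x*(1-x)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must be in [0, 1]")
    return x * E_CDS + (1 - x) * E_CDSE - b * x * (1 - x)

if __name__ == "__main__":
    for x in (0.0, 0.2, 0.5, 0.8, 1.0):
        print(f"x = {x:.1f}  ->  Eg ~ {band_gap(x):.2f} eV")
```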

Keywords: CdSxSe1-x, EDT, glass, spectroscopy, TEM-EDS

Procedia PDF Downloads 299
469 Trauma Scores and Outcome Prediction After Chest Trauma

Authors: Mohamed Abo El Nasr, Mohamed Shoeib, Abdelhamid Abdelkhalik, Amro Serag

Abstract:

Background: Early assessment of the severity of chest trauma, whether blunt or penetrating, is of critical importance in predicting patient outcome. Different trauma scoring systems are widely available and are based on anatomical or physiological parameters to predict patient morbidity or mortality. To date, there is no ideal, universally accepted trauma score that can be applied in all trauma centers and is suitable for assessing the severity of chest trauma. Aim: Our aim was to compare various trauma scoring systems regarding their ability to predict morbidity and mortality in chest trauma patients. Patients and Methods: This was a prospective study including 400 patients with chest trauma who were managed at Tanta University Emergency Hospital, Egypt, over a period of 2 years (March 2014 until March 2016). The patients were divided into 2 groups according to the mode of trauma: blunt or penetrating. The collected data included age, sex, hemodynamic status on admission, intrathoracic injuries, and associated extra-thoracic injuries. Patient outcomes, including mortality, need for thoracotomy, need for ICU admission, need for mechanical ventilation, length of hospital stay, and the development of acute respiratory distress syndrome, were also recorded. The relevant data were used to calculate the following trauma scores: 1. anatomical scores, including the Abbreviated Injury Scale (AIS), Injury Severity Score (ISS), New Injury Severity Score (NISS), and Chest Wall Injury Scale (CWIS); 2. physiological scores, including the Revised Trauma Score (RTS) and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score; 3. a combined score, the Trauma and Injury Severity Score (TRISS); and 4. a chest-specific score, the Thoracic Trauma Severity Score (TTSS). All these scores were analyzed statistically to determine their sensitivity and specificity and were compared regarding their predictive power for mortality and morbidity in blunt and penetrating chest trauma patients. Results: The incidence of mortality was 3.75% (15/400). Eleven patients (11/230) died in the blunt chest trauma group, while four patients (4/170) died in the penetrating trauma group. The mortality rate increased more than threefold, to 13% (13/100), in patients with severe chest trauma (ISS > 16). The physiological scores APACHE II and RTS had the highest predictive value for mortality in both blunt and penetrating chest injuries. The physiological score APACHE II, followed by the combined score TRISS, was more predictive of intensive care admission in penetrating injuries, while RTS was more predictive in blunt trauma. RTS also had a higher predictive value for the need for mechanical ventilation, followed by the combined score TRISS. The APACHE II score was more predictive of the need for thoracotomy in penetrating injuries, and the chest-specific score TTSS was more predictive in blunt injuries. The anatomical score ISS and the TTSS were more predictive of prolonged hospital stay in penetrating and blunt injuries, respectively. Conclusion: Trauma scores that include physiological parameters have a higher predictive power for mortality in both blunt and penetrating chest trauma and are more suitable for assessing injury severity and predicting patient outcomes.
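For readers unfamiliar with how two of the compared scores are computed, the sketch below implements the standard published definitions of the anatomical ISS (sum of squares of the three highest AIS values from different body regions) and the physiological RTS (weighted sum of coded GCS, systolic blood pressure, and respiratory rate); the example patient values are hypothetical and are not drawn from the study data.

```python
# Sketch of two of the scores compared in the study: the anatomical ISS and the
# physiological RTS. Coding bands and coefficients follow the standard published
# definitions; the example inputs below are hypothetical.

def iss(region_ais: dict) -> int:
    """Injury Severity Score: sum of squares of the three highest AIS values
    taken from different ISS body regions; any AIS of 6 sets ISS to 75."""
    worst = sorted(region_ais.values(), reverse=True)[:3]
    if any(a == 6 for a in worst):
        return 75
    return sum(a * a for a in worst)

def _code(value, bands):
    """Map a raw value to its RTS coded value using inclusive bands."""
    for low, high, code in bands:
        if low <= value <= high:
            return code
    raise ValueError("value outside coding bands")

def rts(gcs: int, sbp: float, rr: float) -> float:
    """Revised Trauma Score = 0.9368*GCSc + 0.7326*SBPc + 0.2908*RRc."""
    gcs_c = _code(gcs, [(13, 15, 4), (9, 12, 3), (6, 8, 2), (4, 5, 1), (3, 3, 0)])
    sbp_c = _code(sbp, [(90, 1e9, 4), (76, 89, 3), (50, 75, 2), (1, 49, 1), (0, 0, 0)])
    rr_c = _code(rr, [(10, 29, 4), (30, 1e9, 3), (6, 9, 2), (1, 5, 1), (0, 0, 0)])
    return 0.9368 * gcs_c + 0.7326 * sbp_c + 0.2908 * rr_c

# Hypothetical blunt chest trauma patient: AIS per body region and admission vitals.
print(iss({"head": 2, "chest": 4, "abdomen": 3, "extremities": 2}))  # -> 29
print(round(rts(gcs=14, sbp=85, rr=32), 3))                           # -> 6.817
```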

Keywords: chest trauma, trauma scores, blunt injuries, penetrating injuries

Procedia PDF Downloads 421
468 A New Perspective in Cervical Dystonia: Neurocognitive Impairment

Authors: Yesim Sucullu Karadag, Pinar Kurt, Sule Bilen, Nese Subutay Oztekin, Fikri Ak

Abstract:

Background: Primary cervical dystonia is thought to be a purely motor disorder, but recent studies have revealed that patients with dystonia have additional non-motor features. Sensory and psychiatric disturbances can be included in the non-motor spectrum of dystonia. The basal ganglia receive inputs from all cortical areas and, through the thalamus, project to several cortical areas, thus participating in circuits that have been linked to motor as well as sensory, emotional, and cognitive functions. However, there are limited studies indicating cognitive impairment in patients with cervical dystonia, and more evidence is required regarding neurocognitive functioning in these patients. Objective: This study aimed to investigate the neurocognitive profile of cervical dystonia patients in comparison to healthy controls (HC) by employing a detailed set of neuropsychological tests in addition to self-reported instruments. Methods: In total, 29 (M/F: 7/22) cervical dystonia patients and 30 HC (M/F: 10/20) were included in the study. Exclusion criteria were depression and absence of informed consent. Standard demographic and educational data and clinical reports (disease duration, disability index) were recorded for all patients. After a careful neurological evaluation, all subjects were given a comprehensive battery of neuropsychological tests: self-report of neuropsychological condition (by visual analogue scale, VAS, 0-100), RAVLT, STROOP, PASAT, TMT, SDMT, JLOT, DST, COWAT, ACTT, and FST. Patients and HC were compared regarding demographic and clinical features and neurocognitive tests. The correlations of disease duration and disability index with self-report VAS were also assessed. Results: There was no difference between patients and HC regarding socio-demographic variables such as age, gender, and years of education (p values were 0.36, 0.436, and 0.869, respectively). All patients were assessed at the peak of the botulinum toxin effect and were not taking an anticholinergic agent or a benzodiazepine. Dystonia patients had significantly impaired verbal learning and memory (RAVLT, p<0.001), divided attention and working memory (ACTT, p<0.001), attention speed (TMT-A and B, p=0.008 and 0.050), executive functions (PASAT, p<0.001; SDMT, p=0.001; FST, p<0.001), verbal attention (DST, p=0.001), verbal fluency (COWAT, p<0.001), and visuo-spatial processing (JLOT, p<0.001) in comparison to healthy controls, whereas focused attention (STROOP, spontaneous correction) did not differ between the two groups (p>0.05). No relationship was found between disease duration or disability index and any neurocognitive test. Conclusions: Our study showed that the neurocognitive functions of dystonia patients were worse than those of a control group of similar age, sex, and education, independent of clinical features such as disease duration and disability index. This may be the result of cortical and subcortical changes in dystonia patients, and advanced neuroimaging techniques might help to explain these changes in cervical dystonia patients.
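A minimal sketch of the kind of between-group comparison reported above (patients vs. healthy controls on a single neuropsychological measure); the abstract does not state which statistical test was used and does not publish raw scores, so a rank-based Mann-Whitney U test and made-up data are assumed here purely to illustrate the analysis step.

```python
# Sketch of a patients-vs-controls comparison on one neuropsychological measure.
# The scores below are invented placeholders; the choice of a non-parametric
# rank test is an assumption, not the method stated by the authors.
from scipy.stats import mannwhitneyu

ravlt_patients = [38, 41, 35, 44, 40, 37, 42, 36]   # hypothetical RAVLT totals, patients
ravlt_controls = [52, 49, 55, 47, 51, 53, 50, 48]   # hypothetical RAVLT totals, controls

stat, p = mannwhitneyu(ravlt_patients, ravlt_controls, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # a small p-value would mirror the reported group difference
```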

Keywords: cervical dystonia, neurocognitive impairment, neuropsychological test, dystonia disability index

Procedia PDF Downloads 420
467 Increasing Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has the potential for petroleum generation and accumulation, and the Amu Darya basin has the largest petroleum potential in the region. Sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase recoverable oil reserves in the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF work involved a core flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterward, a numerical simulation model of core-scale oil recovery by LSWF was designed with Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology at field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it. To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible based on the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeabilities of oil and water in this study were obtained using Corey’s equation. In the Kashkari oil field simulation model, three models were considered for the evaluation of the LSWF effect on oil recovery: 1. a base model (with no water injection), 2. an FW injection model, and 3. an LSW injection model. Based on the results of the LSWF laboratory experiment and computer simulation analysis, oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% in oil recovery was observed. About 6.4% of the field's oil is produced by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
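Since the abstract states that oil and water relative permeabilities were obtained with Corey's equation, the sketch below shows the standard Corey-type formulation; the endpoint saturations, endpoint permeabilities, and exponents are illustrative placeholders, not the Kashkari core-flood parameters.

```python
# Sketch of the Corey-type relative permeability model mentioned in the abstract.
# Endpoint saturations (Swc, Sor), endpoint permeabilities, and exponents below
# are illustrative placeholders, not values measured on Kashkari cores.

def corey_krw(sw, swc=0.2, sor=0.25, krw_max=0.4, nw=2.0):
    """Water relative permeability from the normalized water saturation."""
    swn = max(0.0, min(1.0, (sw - swc) / (1.0 - swc - sor)))
    return krw_max * swn ** nw

def corey_kro(sw, swc=0.2, sor=0.25, kro_max=0.9, no=2.0):
    """Oil relative permeability from the normalized oil saturation."""
    son = max(0.0, min(1.0, (1.0 - sw - sor) / (1.0 - swc - sor)))
    return kro_max * son ** no

for sw in (0.20, 0.35, 0.50, 0.65, 0.75):
    print(f"Sw={sw:.2f}  krw={corey_krw(sw):.3f}  kro={corey_kro(sw):.3f}")
```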

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 39
466 Production, Characterization and In vitro Evaluation of [223Ra]RaCl2 Nanomicelles for Targeted Alpha Therapy of Osteosarcoma

Authors: Yang Yang, Luciana Magalhães Rebelo Alencar, Martha Sahylí Ortega Pijeira, Beatriz da Silva Batista, Alefe Roger Silva França, Erick Rafael Dias Rates, Ruana Cardoso Lima, Sara Gemini-Piperni, Ralph Santos-Oliveira

Abstract:

Radium-223 dichloride ([²²³Ra]RaCl₂) is an alpha-particle-emitting radiopharmaceutical currently approved for the treatment of patients with castration-resistant prostate cancer, symptomatic bone metastases, and no known visceral metastatic disease. [²²³Ra]RaCl₂ is a bone-seeking calcium mimetic that binds to newly formed bone stroma, especially osteoblastic or sclerotic metastases, killing tumor cells by inducing DNA breaks in a potent and localized manner. Nonetheless, successful therapy of osteosarcoma as a primary bone tumor is still a challenge. Nanomicelles are colloidal nanosystems widely used in drug development to improve blood circulation time, bioavailability, and specificity of therapeutic agents, among other applications. In addition, the enhanced permeability and retention effect of nanosystems, and the renal excretion of nanomicelles reported in most cases so far, are very attractive for achieving selective and increased accumulation at the tumor site as well as for increasing the safety of [²²³Ra]RaCl₂ in the clinical routine. In the present work, [²²³Ra]RaCl₂ nanomicelles were produced, characterized, evaluated in vitro, and compared with a pure [²²³Ra]RaCl₂ solution using SAOS2 osteosarcoma cells. The [²²³Ra]RaCl₂ nanomicelles were prepared using the amphiphilic copolymer Pluronic F127. Dynamic light scattering analysis of freshly produced [²²³Ra]RaCl₂ nanomicelles demonstrated a mean size of 129.4 nm with a polydispersity index (PDI) of 0.303. After one week of storage in the refrigerator, the mean size of the nanomicelles increased to 169.4 nm with a PDI of 0.381. Atomic force microscopy analysis of the [²²³Ra]RaCl₂ nanomicelles showed spherical structures whose heights reach 1 µm, suggesting the filling of the Pluronic F127 nanomicelles with [²²³Ra]RaCl₂. The viability assay with [²²³Ra]RaCl₂ nanomicelles displayed a dose-dependent response, as was observed with pure [²²³Ra]RaCl₂. However, at the same dose, the [²²³Ra]RaCl₂ nanomicelles were 20% more efficient at killing SAOS2 cells than pure [²²³Ra]RaCl₂. These findings demonstrate the effectiveness of the nanosystem, validating the application of nanotechnology in targeted alpha therapy with [²²³Ra]RaCl₂. In addition, the [²²³Ra]RaCl₂ nanomicelles may be decorated and incorporated with a great variety of agents and compounds (e.g., monoclonal antibodies, aptamers, peptides) to overcome the limited use of [²²³Ra]RaCl₂.
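The dose-dependent viability response described above is commonly summarised by fitting a four-parameter logistic (Hill-type) curve to estimate an EC50; the sketch below shows such a fit on invented placeholder data, since the SAOS2 dose-response values are not reported in the abstract.

```python
# Sketch of a four-parameter logistic fit to a dose-dependent viability response.
# Doses and viability percentages are invented placeholders, not the SAOS2 data
# from the study; the 4PL model itself is a standard choice, assumed here.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic: viability decreases from 'top' to 'bottom' with dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

dose = np.array([0.5, 1, 2, 4, 8, 16])          # hypothetical activity units
viability = np.array([95, 88, 72, 51, 30, 18])  # hypothetical % viable cells

params, _ = curve_fit(four_pl, dose, viability, p0=[10, 100, 4, 1])
bottom, top, ec50, hill = params
print(f"EC50 ~ {ec50:.2f} (dose units), Hill slope ~ {hill:.2f}")
```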

Keywords: nanomicelles, osteosarcoma, radium dichloride, targeted alpha therapy

Procedia PDF Downloads 117
465 Clinical Audit on the Introduction of Apremilast into Ireland

Authors: F. O’Dowd, G. Murphy, M. Roche, E. Shudell, F. Keane, M. O’Kane

Abstract:

Introduction: Apremilast (Otezla®) is an oral phosphodiesterase-4 (PDE4) inhibitor indicated for the treatment of adult patients with moderate to severe plaque psoriasis who have a contraindication to, have failed, or are intolerant of standard systemic therapy and/or phototherapy, and of adult patients with active psoriatic arthritis. Apremilast influences the intracellular regulation of inflammatory mediators. Two randomized, placebo-controlled trials evaluating apremilast in 1426 patients with moderate to severe plaque psoriasis (ESTEEM 1 and 2) demonstrated that the commonest adverse reactions (AEs) leading to discontinuation were nausea (1.6%), diarrhoea (1.0%), and headaches (0.8%). The overall proportion of subjects discontinuing due to adverse reactions was 6.1%. At week 16, these trials demonstrated that significantly more apremilast-treated patients (33.1%) achieved the primary end point, PASI-75, than placebo (5.3%). We began prescribing apremilast in July 2015. Aim: To evaluate the efficacy and tolerability of apremilast in an Irish teaching hospital psoriasis population. Methods: A proforma documenting clinical evaluation parameters, prior treatment experience, and AEs was completed prospectively for all patients commenced on apremilast between July 2015 and July 2017. Data were collected at weeks 0, 6, 12, 24, 36, and 52, with 20/71 patients having passed week 52. Efficacy was assessed using the Psoriasis Area and Severity Index (PASI) and the Dermatology Life Quality Index (DLQI). AEs documented included GI effects, infections, and changes in weight and mood. Retrospective chart review and telephone review were used for missing data. Results: A total of 71 adult subjects (38 male, 33 female; age range 23-57) with moderate to severe psoriasis were evaluated. Prior treatment: 37/71 (52%) were systemic/biologic/phototherapy naïve; 14/71 (20%) had prior phototherapy alone; 20/71 (28%) had previous systemic/biologic exposure; 12/71 (17%) had both psoriasis and psoriatic arthritis. PASI responses: mean baseline PASI was 10.1 and mean baseline DLQI was 15. Week 6: N=71, n=15 (21%) achieved PASI 75. Week 12: N=48, n=6 (13%) achieved PASI 100; n=16 (34.5%) achieved PASI 75. Week 24: N=40, n=10 (25%) achieved PASI 100; n=15 (37.5%) achieved PASI 75. Week 52: N=20, n=4 (20%) achieved PASI 100; n=16 (80%) achieved PASI 75. (N = number of patients having passed the time point indicated; n = number of patients, out of N, achieving PASI or DLQI responses at that time.) DLQI responses: week 24: N=40, n=30 (75%) achieved a DLQI score of 0; n=5 (12.5%) achieved a DLQI score of 1; n=1 (2.5%) had a DLQI score of 10 (due to lack of efficacy). Adverse events: the proportion of patients who discontinued treatment due to AEs was n=7 (9.8%). One patient experienced nausea alleviated by dose reduction; another developed significant dysgeusia for certain foods; both continued therapy. Two patients lost 2-3 kg. Conclusion: Initial Irish patient experience of apremilast appears comparable to that observed in clinical trials, with good efficacy and tolerability.
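For clarity on how the PASI 75 and PASI 100 response categories reported in the audit are derived, the following minimal sketch classifies a follow-up PASI score by its percentage reduction from baseline; the two example patients are hypothetical, as the audit reports only aggregate counts.

```python
# Sketch of PASI response classification: percentage improvement in PASI
# relative to baseline. The example patients are hypothetical.

def pasi_response(baseline: float, follow_up: float) -> str:
    """Classify the percentage reduction in PASI from baseline."""
    if baseline <= 0:
        raise ValueError("baseline PASI must be positive")
    if follow_up == 0:
        return "PASI 100"
    reduction = 100.0 * (baseline - follow_up) / baseline
    if reduction >= 90:
        return "PASI 90"
    if reduction >= 75:
        return "PASI 75"
    if reduction >= 50:
        return "PASI 50"
    return "non-responder"

print(pasi_response(baseline=10.1, follow_up=2.0))  # PASI 75 (about 80% reduction)
print(pasi_response(baseline=10.1, follow_up=0.0))  # PASI 100 (complete clearance)
```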

Keywords: Apremilast, introduction, Ireland, clinical audit

Procedia PDF Downloads 149
464 Chronic Fatigue Syndrome/Myalgic Encephalomyelitis in Younger Children: A Qualitative Analysis of Families’ Experiences of the Condition and Perspective on Treatment

Authors: Amberly Brigden, Ali Heawood, Emma C. Anderson, Richard Morris, Esther Crawley

Abstract:

Background: Paediatric chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME) is characterised by persistent, disabling fatigue. Health services see patients below the age of 12. This age group experiences high levels of disability, with low levels of school attendance and high levels of fatigue, anxiety, functional disability, and pain. CFS/ME interventions have been developed for adolescents, but the developmental needs of younger children suggest treatment should be tailored to this age group. Little is known about how intervention should be delivered to this age group, and further work is needed to explore this. Qualitative research aids the patient-centred design of health interventions. Methods: Five- to 11-year-olds and their parents were recruited from a specialist CFS/ME service. Semi-structured interviews explored the families’ experience of the condition and perspectives on treatment. Interactive and arts-based methods were used. Interviews were audio-recorded, transcribed, and analysed thematically. Qualitative results: 14 parents and 7 children were interviewed. Early analysis of the interviews revealed the importance of the social-ecological setting of the child, which led to themes being developed in the context of systems theory. Theme 1 relates to the level of the child, theme 2 to the family system, theme 3 to the organisational and societal systems, and theme 4 cuts across all levels. Theme 1: The child’s capacity to describe, understand and manage their condition. Younger children struggled to describe their internal experiences, such as physical symptoms. Parents felt younger children did not understand some concepts of CFS/ME and did not have the capabilities to monitor and self-regulate their behaviour, as required by treatment. A spectrum of abilities was described; older children (10-11-year-olds) were more involved in clinical sessions and had more responsibility for self-management. Theme 2: Parents’ responsibility for managing their child’s condition. Parents took responsibility for regulating their child’s behaviour in accordance with the treatment programme. They structured their child’s environment, gave direct instructions to their child, and communicated the needs of their child to others involved in care. Parents wanted their child to experience a 'normal' childhood and took steps to shield their child from medicalisation, including diagnostic labels and clinical discussions. Theme 3: Parental isolation and the role of organisational and societal systems. Parents felt unsupported in their role of managing the condition and felt that negative responses from primary care health services and schools were underpinned by a lack of awareness and knowledge about CFS/ME in younger children. This sometimes led to a protracted time to diagnosis. Parents felt that schools have a potentially important role in managing the child’s condition. Theme 4: Complexity and uncertainty. Many parents valued specialist treatment (which included activity management, physiotherapy, sleep management, dietary advice, medical management, and psychological support) but felt it needed to account for the complexity of the condition in younger children. Some parents expressed uncertainty about the diagnosis and the treatment programme. Conclusions: Interventions for younger children need to consider the 'systems' (family, organisational, and societal) involved in the child’s care. Future research will include interviews with clinicians and schools supporting younger children with CFS/ME.

Keywords: chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME), pediatric, qualitative, treatment

Procedia PDF Downloads 140
463 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon

Authors: Franck Acho

Abstract:

Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were globally 285 million people (approximately 6.4% of the adult population) suffering from this disease, a number projected to rise to 430 million in the absence of better control or cure. An ageing population and obesity are two main reasons for the increase. Diabetes mellitus is a chronic, heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels, or hyperglycemia, which results from abnormalities in insulin secretion, insulin action, or both. Hyperglycemia manifests in various forms with a varied presentation and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality, and hyperglycemia also serves as the primary biomarker for the diagnosis of diabetes. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after onset of the disease; hence, the real global prevalence of diabetes must be considerably higher. This study was conducted in a locality to assess the level of knowledge, awareness, and risk factors associated with people living with diabetes mellitus. A month before the screening was conducted, it was advertised in selected churches, on the local community radio, and in relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, its benefits, and any possible discomforts, after which each attendee's consent was obtained. Participants were evaluated for features suggestive of diabetes by taking an adequate history and performing physical examinations, looking for symptoms such as excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, irritability or other mood changes, blurry vision, slow-healing sores, and frequent infections such as gum, skin, and vaginal infections. Of the 94 participants, 78 were female and 16 were male, and 70.21% of participants with diabetes were between the ages of 60 and 69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed, obesity (21.27%) and unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. The diabetic foot was the most recognized diabetes complication (50.57%), but some participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.

Keywords: diabetes mellitus, non comunicable disease, general health talk, hyperglycemia

Procedia PDF Downloads 56
462 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point of Sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years, including: • Target: the retail giant reported close to 40 million credit card records stolen; • Home Depot: the home products retailer reported a breach of close to 50 million credit records; • Kmart: the US retailer recently announced a breach of 800 thousand credit card details. In 2014 alone, there were reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point of sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device's memory before it is encrypted, and it then sends the stolen details to its parent server. It can record all the critical payment information, such as the card number, security number, and owner, and all of this information is delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: • purchase of raw details and dumps • converting them to plastic cash/cards • shop! shop! shop! The focus of this talk will be on these points and how they form an organized network of cyber-crime. The first step involves buying and selling the stolen details. The key points to emphasize are: • how this raw information is sold in the underground market • the buyer and seller anatomy • building your shopping cart and preferences • the importance of reputation and vouches • customer support and replacements/refunds. But the story does not end here: at this stage the buyer only has the raw card information. How is this raw information converted into plastic cash? This is where the second part of the underground economy comes into play, wherein the raw details are converted into actual cards. Well-organized underground services can help convert these details into plastic cards, and this technique will be discussed in detail. Finally, the last step involves shopping with the stolen cards. Cards generated from the stolen details can easily be used to swipe-and-pay for goods at different retail shops, usually expensive items with good resale value. Apart from using the cards at stores, there are underground services that let buyers deliver online orders to dummy addresses; once the package is received, it is forwarded to the original buyer, with charges based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way and involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including: • a list of underground forums • credit card dumps • IRC chats among these groups • personal chats with big card sellers • an inside view of these forum owners. The talk will conclude by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down and what steps financial institutions can take to build an incident response around them.
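To make the memory-scraping step described above concrete, the sketch below shows the basic, widely documented mechanic of scanning a raw text buffer for card-number-like digit runs and filtering them with the Luhn checksum; the buffer contents are fabricated, and real track-data parsing and exfiltration logic are deliberately out of scope.

```python
# Sketch of the core mechanic behind memory scraping: scan a raw buffer for
# candidate card numbers (PANs) and keep only digit runs that pass the Luhn check.
# The sample buffer is fabricated; real track formats are out of scope here.
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to weed out random digit runs that are not card numbers."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_buffer(buf: str) -> list:
    """Return candidate PANs: 13-19 digit runs in the buffer that pass Luhn."""
    return [m for m in re.findall(r"\d{13,19}", buf) if luhn_valid(m)]

sample = "name=JOHN;pan=4111111111111111;exp=2609;junk=1234567890123"
print(scan_buffer(sample))  # ['4111111111111111'] -- only the test PAN passes Luhn
```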

Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem

Procedia PDF Downloads 439
461 The Affective Motivation of Women Miners in Ghana

Authors: Adesuwa Omorede, Rufai Haruna Kilu

Abstract:

Affective motivation (motivation that is emotionally laden, usually related to affect, passion, emotions, and moods) in the workplace stimulates individuals to reinforce, persist in, and commit to their tasks, which improves individual and organizational performance. It helps individuals reach goals, especially in situations where tasks are highly challenging and hostile. In such situations, individuals are more disposed to be creative and innovative and to see new opportunities in the gaps in their workplace. However, when individuals feel displaced and less important, an adverse reaction may ensue, which may be detrimental to the organization and its performance. One sector where affective motivation is eminently present and relevant is the mining industry, due to its intense work environment, which is mostly dominated by men and masculinity cultures and marked by the deliberate exclusion of women, making the women who work there feel marginalized. In Ghana, the mining industry is mostly seen as a very physical environment, especially underground, and is widely considered 'no place for a woman'. Despite feeling less 'needed' or 'appreciated' in such environments, these women still have to juggle intense work shifts with family life and face violence and other health risks, which puts a strain on their affective motivational reactions. Beyond these challenges, however, several mining companies in Ghana today are working towards providing a fair and equal working situation for both men and women miners by recognizing them as key stakeholders and by including them in the stages of mining projects from the planning and design phase to the evaluation and implementation stage. Drawing from the psychology and gender literature, this study takes a narrative approach to identify and understand the shifting gender dynamics within mine work in Ghana, occasioning a change in the background disposition of miners and leading to more women taking up mining jobs in the country. In doing so, a qualitative study was conducted using semi-structured interviews in Ghana. Several women working within the mining industry in Ghana shared their experiences and how they felt, and still feel, in their workplace. In addition, archival documents were gathered to support the findings. The results suggest a change in enrolment regimes at a mining and technology university in Ghana, making room for more gender-equal enrolment at the university, a renowned institution that trains and feeds mining professionals into the industry. The results further acknowledge gender-equal and diversity-oriented recruitment policies and initiatives among the mining companies of Ghana. This study contributes to the psychology and gender literature by highlighting the hindrances women face in the mining industry as well as several of their affective reactions towards gender inequality. The study also provides several suggestions for decision makers in the mining industry on what can be done in the future to reduce the gender inequality gap within the industry.

Keywords: affective motivation, gender shape shifting, mining industry, women miners

Procedia PDF Downloads 301
460 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops

Authors: Simon Komesker, Achim Wagner, Martin Ruskowski

Abstract:

In times of volatile markets with fluctuating demand and the uncertainty of global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. To this end, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentralized planning and control, evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created in a rule-based manner with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops that return results from the process execution level (resources) and process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks; a two-container principle even across variants). The limited degrees of freedom of line production have produced the principle of progress-figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems with more degrees of freedom, such as modular production according to known approaches, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. An agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
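A minimal sketch of the hybrid planning idea described above, in which a central planner releases orders while decentralized station agents bid for them based on local load; the agent roles, the bid criterion, and the data are simplified assumptions and are not the authors' simulation framework.

```python
# Sketch of hybrid planning and control: central order release, decentralized
# allocation via agent bids. The bid here is simply local queue length, a
# stand-in for the local key figures mentioned in the abstract.
from dataclasses import dataclass, field

@dataclass
class StationAgent:
    name: str
    queue: list = field(default_factory=list)

    def bid(self, order: str) -> float:
        """Lower bid = less loaded station; a placeholder for local key figures."""
        return float(len(self.queue))

    def accept(self, order: str) -> None:
        self.queue.append(order)

def central_planner(orders: list, stations: list) -> None:
    """Central release, decentralized allocation: each order goes to the best bidder."""
    for order in orders:
        winner = min(stations, key=lambda s: s.bid(order))
        winner.accept(order)

stations = [StationAgent("cell_A"), StationAgent("cell_B"), StationAgent("cell_C")]
central_planner([f"order_{i}" for i in range(7)], stations)
for s in stations:
    print(s.name, s.queue)  # orders end up spread across the least-loaded cells
```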

Keywords: holonic manufacturing system, modular production system, planning and control, system structure

Procedia PDF Downloads 169
459 Case Report: A Rare Presentation of Fowler's Syndrome in Pregnancy with Mitrofanoff Procedure

Authors: Humaira Saeed Malik, Salma Saad

Abstract:

Introduction: Fowler's syndrome, first described by Clare Fowler in 1985, is a rare urological condition characterized by difficulty in urination due to the abnormal function of the urethral sphincter. It predominantly affects young women and leads to chronic urinary retention. The main concern in managing this condition is ensuring regular bladder emptying. Clam cystoplasty is a bladder augmentation surgery in which the bladder is clam-shelled open and a segment of the intestine is used to increase the bladder's capacity and reduce bladder pressure. The Mitrofanoff procedure, the surgical creation of a continent urinary diversion, is often performed in patients with Fowler's syndrome who require long-term catheterization. This procedure involves creating a conduit (from the appendix or a segment of the small intestine) between the bladder and the skin, allowing intermittent self-catheterization to manage urinary retention. Study: This case study examines a 39-year-old gravida 3, para 0+2 woman with a BMI of 40, Fowler's syndrome, type I diabetes, and post-traumatic stress disorder (PTSD), presenting at Dumfries and Galloway Royal Infirmary at 8 weeks of gestation. She was diagnosed with Fowler's syndrome at age 23. A sacral nerve stimulator (SNS) device was initially placed but was removed after one year due to malfunction caused by trauma; she subsequently underwent clam cystoplasty and the Mitrofanoff procedure for bladder management. Her pregnancy was complicated by vaginal bleeding at 10 weeks, treated with progesterone pessaries, and a urinary tract infection at 14 weeks, managed with antibiotics. Despite these challenges, she continued self-catheterization through the Mitrofanoff stoma and was placed on prophylactic antibiotics. Her diabetes was well controlled on insulin, and a 20-week fetal anomaly scan was normal. The multidisciplinary team, including an obstetrician and a urologist, planned serial growth scans and the initiation of low molecular weight heparin (LMWH) from 28 weeks, to be continued for six weeks after delivery, due to the intermediate risk of venous thromboembolism (VTE). A planned caesarean delivery at 37 weeks was arranged, with an MRI scan scheduled later in the pregnancy to assist in surgical planning and ensure the preservation of the Mitrofanoff stoma's function. The surgery will take place in an elective setting and include a consultant urologist. Conclusion: Pregnancy in women with Fowler's syndrome who have undergone clam cystoplasty and the Mitrofanoff procedure is rare, and management requires careful planning and a multidisciplinary approach. This case highlights the importance of individualized care plans and close monitoring of both mother and fetus. The patient's risk of recurrent UTIs, coupled with her diabetes and high BMI, necessitated coordinated care across specialties to ensure the best possible outcomes. The Mitrofanoff procedure proved effective in managing her urinary retention, allowing her to maintain self-catheterization during pregnancy. The multidisciplinary team approach, involving obstetrics, urology, and endocrinology, was crucial in addressing her complex medical needs. This case adds valuable information to the limited literature on pregnancy management in patients with Fowler's syndrome who have undergone the Mitrofanoff procedure, highlighting the need for comprehensive, individualized care and the involvement of a multidisciplinary team to achieve the best results.

Keywords: fowler's syndrome, clam cystoplasty, mitrofanoff procedure, pregnancy

Procedia PDF Downloads 32
458 Insights on Nitric Oxide Interaction with Phytohormones in Rice Root System Response to Metal Stress

Authors: Piacentini Diego, Della Rovere Federica, Fattorini Laura, Lanni Francesca, Cittadini Martina, Altamura Maria Maddalena, Falasca Giuseppina

Abstract:

Plants have evolved sophisticated mechanisms to cope with environmental cues. Changes in the intracellular content and distribution of phytohormones, such as the auxin indole-3-acetic acid (IAA), are involved in morphogenic adaptation to environmental stresses. In addition to phytohormones, plants can rely on a plethora of small signal molecules able to promptly sense and transduce stress signals, resulting in morpho-physiological responses thanks also to their capacity to modulate the levels, distribution, and reception of most hormones. Among these signaling molecules, nitrogen monoxide (nitric oxide, NO) is a critical component of several plant acclimation strategies to both biotic and abiotic stresses. Depending on its levels, NO increases plant adaptation by enhancing enzymatic or non-enzymatic antioxidant systems or by acting as a direct scavenger of reactive oxygen/nitrogen species (ROS/RNS) produced during the stress. In addition, exogenous applications of NO-specific donor compounds have shown the involvement of this signal molecule in auxin metabolism, transport, and signaling under both physiological and stress conditions. However, the complex mechanisms underlying the action of NO in interacting with phytohormones, such as auxins, during metal stress responses are still poorly understood and need to be better investigated. Emphasis must be placed on the response of the root system, since it is the first plant organ system to be exposed to metal soil pollution. The monocot Oryza sativa L. (rice) was chosen given its importance as a staple food for some 4 billion people worldwide. In addition, increasing evidence has shown that rice is often grown in contaminated paddy soils with high levels of the heavy metal cadmium (Cd) and the metalloid arsenic (As). The ease with which these metals are taken up by rice roots and transported to the aerial organs, up to the edible caryopses, makes rice one of the most relevant sources of these pollutants for humans. This study aimed to evaluate whether NO has a mitigatory activity against Cd or As toxicity in the roots of rice seedlings and to understand whether this activity requires interactions with auxin. Our results show that exogenous treatments with the NO donor SNP alleviate the stress induced by Cd, but not by As, in in-vitro-grown rice seedlings through increased intracellular root NO levels. The damage induced by the pollutants includes root growth inhibition, root histological alterations, and ROS (H2O2, O2●ˉ) and RNS (ONOOˉ) production. Also, SNP treatments mitigate both the increase in root IAA levels and the alteration in IAA distribution (monitored by the OsDR5::GUS system) caused by toxic metal exposure. Notably, the SNP-induced mitigation of the IAA homeostasis altered by the pollutants does not involve changes in the expression of the OsYUCCA1 and ASA2 IAA-biosynthetic genes. Taken together, the results highlight a pollutant-specific mitigating role of NO in the rice root system that involves the interaction of this signal molecule with both IAA and brassinosteroids at different levels (i.e., transport, content, distribution) and at multiple regulatory levels (i.e., transcriptional/post-translational). The research is supported by Progetti Ateneo Sapienza University of Rome, grant number RG120172B773D1FF.

Keywords: arsenic, auxin, cadmium, nitric oxide, rice, root system

Procedia PDF Downloads 80