Search results for: background models

322 Elucidation of Dynamics of Murine Double Minute 2 Shed Light on the Anti-cancer Drug Development

Authors: Nigar Kantarci Carsibasi

Abstract:

Coarse-grained elastic network models, namely the Gaussian network model (GNM) and the anisotropic network model (ANM), are utilized to investigate the fluctuation dynamics of Murine Double Minute 2 (MDM2), which is the native inhibitor of p53. Conformational dynamics of MDM2 are elucidated in the unbound, p53-bound, and non-peptide small molecule inhibitor-bound forms. The aim is to gain insight into the alterations brought to the global dynamics of MDM2 by the native peptide inhibitor p53 and by two small molecule inhibitors (HDM201 and NVP-CGM097) that are in clinical-stage cancer studies. MDM2 undergoes significant conformational changes upon inhibitor binding, providing evidence of an induced-fit mechanism. The small molecule inhibitors examined in this work exhibit fluctuation dynamics and characteristic mode shapes similar to those of p53 when complexed with MDM2, which would shed light on the design of novel small molecule inhibitors for cancer therapy. The results showed that residues Phe 19, Trp 23, and Leu 26 reside in the minima of the slowest modes of p53, pointing to the accepted three-finger binding model. Pro 27 displays the most significant hinge present in p53 and emerges as another functionally important residue. Three distinct regions are identified in MDM2 for which significant conformational changes are observed upon binding. Regions I (residues 50-77) and III (residues 90-105) correspond to the binding interface of MDM2, including α2, L2, and α4, which are stabilized during complex formation. Region II (residues 77-90) exhibits large-amplitude motion and is highly flexible, both in the absence and presence of p53 or other inhibitors. MDM2 exhibits a scattered profile in the fastest modes of motion, while binding of p53 and inhibitors puts restraints on the MDM2 domains, clearly distinguishing the kinetically hot regions. Mode shape analysis revealed that the α4 domain controls the size of the cleft, keeping the cleft narrow in unbound MDM2 and open in the bound states for proper penetration and binding of p53 and inhibitors, which points to the induced-fit mechanism of p53 binding. p53 interacts with α2 and α4 in a synchronized manner. Collective modes are shifted upon inhibitor binding, i.e., the characteristic motion of the second mode in the MDM2-p53 complex is observed in the first mode of apo MDM2; however, apo and bound MDM2 exhibit similar features in the softest modes, pointing to pre-existing modes that facilitate ligand binding. Although much higher-amplitude motions are attained in the presence of the non-peptide small molecule inhibitors than with p53, the motions demonstrate close similarity. Hence, NVP-CGM097 and HDM201 succeed in mimicking the p53 behavior well. Elucidating how drug candidates alter the global and conformational dynamics of MDM2 would shed light on the rational design of novel anticancer drugs.
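
As an illustration of the coarse-grained approach described above, the following Python sketch builds a GNM Kirchhoff matrix from C-alpha coordinates, extracts the slowest modes, and computes residue fluctuations. It is a generic GNM sketch, not the authors' code: the cutoff distance and the placeholder coordinates are assumptions.

```python
# Illustrative Gaussian network model (GNM) analysis of the kind described above.
# Not the authors' code: residue coordinates, cutoff, and mode indices are assumptions.
import numpy as np

def gnm_modes(ca_coords, cutoff=7.3):
    """Build the GNM Kirchhoff matrix from C-alpha coordinates and diagonalize it."""
    dist = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    kirchhoff = -(dist <= cutoff).astype(float)          # contact map -> off-diagonal -1
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degrees on the diagonal
    eigvals, eigvecs = np.linalg.eigh(kirchhoff)          # eigvals[0] ~ 0 (trivial mode)
    return eigvals, eigvecs

def mean_square_fluctuations(eigvals, eigvecs, n_modes=None):
    """Residue fluctuations from the pseudo-inverse of the Kirchhoff matrix."""
    idx = slice(1, None if n_modes is None else 1 + n_modes)  # skip the zero mode
    return (eigvecs[:, idx] ** 2 / eigvals[idx]).sum(axis=1)

# Usage with random placeholder coordinates (replace with MDM2 C-alpha positions):
coords = np.random.rand(90, 3) * 30.0
vals, vecs = gnm_modes(coords)
slowest_mode = vecs[:, 1]               # minima indicate hinge / binding residues
msf = mean_square_fluctuations(vals, vecs)
```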

Keywords: cancer, drug design, elastic network model, MDM2

Procedia PDF Downloads 104
321 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution

Authors: M. Arun, A. Kannan

Abstract:

Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation, and the use of oxidizing agents are not applicable to all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most of the azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce a high-quality treated effluent and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration, and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant. Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage, and initial dye concentration settings. A full factorial design was the chosen statistical framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models, and the model parameters were estimated, providing further insight into the nature of the adsorption taking place. The key outcomes of this research are the critical data required to design batch adsorption systems for removal of Acid Orange 10 dye and the identification of factors that critically influence the separation efficiency.
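
A minimal sketch of the model fitting step mentioned above: equilibrium data are fitted to the Langmuir isotherm with scipy. The concentration and uptake values are hypothetical placeholders, not the study's measurements.

```python
# Illustrative fit of batch equilibrium data to the Langmuir isotherm, one of the
# model types mentioned above. Data values below are placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: qe = q_max * K_L * Ce / (1 + K_L * Ce)."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium conc. (mg/L), hypothetical
qe = np.array([12.0, 21.0, 38.0, 52.0, 61.0, 66.0])    # dye uptake (mg/g), hypothetical

(q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(70.0, 0.05))
r2 = 1 - np.sum((qe - langmuir(ce, q_max, k_l))**2) / np.sum((qe - qe.mean())**2)
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg, R^2 = {r2:.3f}")
```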

Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design

Procedia PDF Downloads 153
320 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on data quality from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement of the Neno district HMIS over a 4-year period (2018 – 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data in the Neno district captured from 2018 to 2021. The metrics assessed included the completeness and timeliness of 34 reports. Completeness was measured as a percentage of non-missing reports. Timeliness was measured as the span between data inputs and expected outputs meeting needs. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we extracted healthcare workers' perceptions, interventions implemented, and improvement suggestions. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% (before) and 98.1% (after), while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004). The trend for timeliness declined until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or above in completeness, while only 24% did so in timeliness. Thirty-two percent of reports met the national standard. Only 9% improved on both completeness and timeliness, and these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. Factors associated with improvement included improved communication and reminders through internal channels, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: Findings suggest that the district's HMIS data quality has improved following these collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about such interventions and guide initiatives for further improving data quality.
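
A minimal sketch of the before/after t-test comparison described in the methods, assuming hypothetical monthly completeness rates rather than the Neno district data.

```python
# Illustrative before/after comparison of monthly reporting completeness rates,
# mirroring the t-test analysis described above. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

before = np.array([78.0, 81.5, 83.0, 85.2, 88.1, 84.6])   # % completeness, pre-2020 (hypothetical)
after = np.array([96.0, 97.5, 98.2, 99.0, 99.5, 99.9])    # % completeness, 2020-2021 (hypothetical)

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test
pct_change = 100 * (after.mean() - before.mean()) / before.mean()

print(f"mean before = {before.mean():.1f}%, mean after = {after.mean():.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, change = {pct_change:+.1f}%")
```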

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 55
319 Modern Cardiac Surgical Outcomes in Nonagenarians: A Multicentre Retrospective Observational Study

Authors: Laurence Weinberg, Dominic Walpole, Dong-Kyu Lee, Michael D’Silva, Jian W. Chan, Lachlan F. Miles, Bradley Carp, Adam Wells, Tuck S. Ngun, Siven Seevanayagam, George Matalanis, Ziauddin Ansari, Rinaldo Bellomo, Michael Yii

Abstract:

Background: There have been multiple recent advancements in the selection, optimization and management of cardiac surgical patients. However, there is limited data regarding the outcomes of nonagenarians undergoing cardiac surgery, despite this vulnerable cohort increasingly receiving these interventions. This study describes the patient characteristics, management and outcomes of a group of nonagenarians undergoing cardiac surgery in the context of contemporary peri-operative care. Methods: A retrospective observational study was conducted of patients 90 to 99 years of age (i.e., nonagenarians) who had undergone cardiac surgery requiring a classic median sternotomy (i.e., open-heart surgery). All operative indications were included. Patients who underwent minimally invasive surgery, transcatheter aortic valve implantation and thoracic aorta surgery were excluded. Data were collected from four hospitals in Victoria, Australia, over an 8-year period (January 2012 – December 2019). The primary objective was to assess six-month mortality in nonagenarians undergoing open-heart surgery and to evaluate the incidence and severity of postoperative complications using the Clavien-Dindo classification system. The secondary objective was to provide a detailed description of the characteristics and peri-operative management of this group. Results: A total of 12,358 adult patients underwent cardiac surgery at the study centers during the observation period, of whom 18 nonagenarians (0.15%) fulfilled the inclusion criteria. The median (IQR) [min-max] age was 91 years (90.0:91.8) [90-94] and 14 patients (78%) were men. Cardiovascular comorbidities, polypharmacy and frailty, were common. The median (IQR) predicted in-hospital mortality by EuroSCORE II was 6.1% (4.1-14.5). All patients were optimized preoperatively by a multidisciplinary team of surgeons, cardiologists, geriatricians and anesthetists. All index surgeries were performed on cardiopulmonary bypass. Isolated coronary artery bypass grafting (CABG) and CABG with aortic valve replacement were the most common surgeries being performed in four and five patients, respectively. Half the study group underwent surgery involving two or more major procedures (e.g. CABG and valve replacement). Surgery was undertaken emergently in 44% of patients. All patients except one experienced at least one postoperative complication. The most common complications were acute kidney injury (72%), new atrial fibrillation (44%) and delirium (39%). The highest Clavien-Dindo complication grade was IIIb occurring once each in three patients. Clavien-Dindo grade IIIa complications occurred in only one patient. The median (IQR) postoperative length of stay was 11.6 days (9.8:17.6). One patient was discharged home and all others to an inpatient rehabilitation facility. Three patients had an unplanned readmission within 30 days of discharge. All patients had follow-up to at least six months after surgery and mortality over this period was zero. The median (IQR) duration of follow-up was 11.3 months (6.0:26.4) and there were no cases of mortality observed within the available follow-up records. Conclusion: In this group of nonagenarians undergoing cardiac surgery, postoperative six-month mortality was zero. Complications were common but generally of low severity. These findings support carefully selected nonagenarian patients being offered cardiac surgery in the context of contemporary, multidisciplinary perioperative care. 
Further studies are needed to assess longer-term mortality as well as functional and quality of life outcomes in this vulnerable surgical cohort.

Keywords: cardiac surgery, mortality, nonagenarians, postoperative complications

Procedia PDF Downloads 93
318 Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX): Scale Development

Authors: Cristina Costescu, Carmen David, Adrian Roșan

Abstract:

Executive functions (EF) and emotion regulation strategies are processes that allow individuals to function in an adaptive way and to be goal-oriented, which is essential for success in daily living activities, at school, or in social contexts. The Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX) represents an empirically based tool (based on the model of EF developed by Diamond) for evaluating significant dimensions of child and adolescent EF and emotion regulation strategies, mainly in school contexts. The instrument measures the following dimensions: working memory, inhibition, cognitive flexibility, executive attention, planning, emotional control, and emotion regulation strategies. Building the instrument involved not only a top-down process, as we selected the content in accordance with prominent models of EF, but also a bottom-up one, as we were able to identify valid contexts in which EF and emotion regulation are put to use. For the construction of the instrument, we conducted three focus groups with teachers and other professionals, since the aim was to develop an accurate, objective, and ecological instrument. We used the focus group method to address each dimension and to yield a bank of items to be further tested. Each dimension is addressed through a task that the examiner will apply and through several items derived from the main task. For the validation of the instrument, we plan to use item response theory (IRT), also known as latent trait theory, which attempts to explain the relationship between latent traits (unobservable cognitive processes) and their manifestations (i.e., observed outcomes, responses, or performance). REMEX represents an ecological scale that integrates a current scientific understanding of emotion regulation and EF, is directly applicable to school contexts, and can be very useful for developing intervention protocols. We plan to test its convergent validity with the Childhood Executive Functioning Inventory (CHEXI) and the Emotion Dysregulation Inventory (EDI), and its divergent validity between a group of typically developing children and children with neurodevelopmental disorders, aged between 6 and 9 years. In a previous pilot study, we enrolled a sample of 40 children with autism spectrum disorders and attention-deficit/hyperactivity disorder aged 6 to 12 years, and we applied the above-mentioned scales (CHEXI and EDI). Our results showed that deficits in planning, behavior regulation, inhibition, and working memory predict high levels of emotional reactivity, leading to emotional and behavioral problems. Considering these previous results, we expect our findings to provide support for the validity and reliability of REMEX as an ecological instrument for assessing emotion regulation and EF in children and to highlight key features of its use in intervention protocols.
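
To illustrate the IRT idea referenced above (relating a latent trait to observed item responses), the sketch below implements a two-parameter logistic model and a simple maximum-likelihood trait estimate. The item parameters and response pattern are hypothetical, not REMEX data.

```python
# Illustrative two-parameter logistic (2PL) IRT model: a latent trait (theta) maps to
# the probability of an observed item response. Item parameters and the response
# pattern below are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])     # item discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.7, 1.2])    # item difficulties (hypothetical)
responses = np.array([1, 1, 0, 0])     # one child's scored item responses

def p_correct(theta):
    """2PL item characteristic curve: P(X=1 | theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta):
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
print(f"estimated latent trait: theta = {theta_hat:.2f}")
```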

Keywords: executive functions, emotion regulation, children, item response theory, focus group

Procedia PDF Downloads 72
317 Qualitative Narrative Framework as Tool for Reduction of Stigma and Prejudice

Authors: Anastasia Schnitzer, Oliver Rehren

Abstract:

Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the corona pandemic. Along with this, the public has become more and more aware that a lack of enlightenment and proper coping mechanisms may result in a notable risk to develop mental disorders. Yet, there are still many biases against those affected, which are further connected to issues of stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, the Intergroup Contact Theory states engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to the persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so that intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through media. As an analogy to the Intergroup Contact Theory, the Parasocial Contact Hypothesis proposes that repeatedly being exposed to positive media representations of outgroup members can lead to a reduction of negative prejudices and attitudes towards this outgroup. While there is a growing body of research on the merit of this mechanism, measurements often only consist of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of the previous contact with the outgroup); meanwhile, more specific conditions are often neglected. The current study aims to tackle this shortcoming. By scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact -for the sake of reducing prejudice and stigma towards individuals with mental disorders. Thus, a two-factorial between-subject online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups, having to watch episodes of either a series with a narrative framework of high (Quality-TV) or low quality (Continental-TV), with one-week interval in-between the episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a qualitative narrative framework leads to a higher quality of contact experience, which then leads to lower prejudice and stigma, with effects ranging from moderate to high.

Keywords: prejudice, quality of contact, parasocial contact, narrative framework

Procedia PDF Downloads 62
316 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez- Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique for recording downhole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is a common practice to approximate the Earth’s subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations. First, the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs remains unknown. Second, in geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are reused for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available.
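
The offline/online split described above can be illustrated with a simple sketch: a 1D finite element system is assembled and factorized once (offline), and the factorization is reused for many tool positions (online). The tridiagonal system below is a generic placeholder, not the paper's variational formulation.

```python
# Illustrative offline/online split: a 1D finite element system (per Hankel mode) is
# assembled and factorized once (offline), then reused to solve for many tool
# positions (online). The simplified tridiagonal system is a generic placeholder.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 500                                  # number of 1D finite element nodes
h = 1.0 / n
sigma = np.ones(n + 1)                   # conductivity-derived coefficient (placeholder)

# --- offline: assemble and factorize the system matrix once ---
main = (sigma[:-1] + sigma[1:]) / h      # simplified stiffness diagonal
off = -sigma[1:-1] / h
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
lu = splu(A)                             # reused for every logging position

# --- online: cheap solves for each tool (source) position ---
for tool_node in (50, 150, 250, 350, 450):
    rhs = np.zeros(n)
    rhs[tool_node] = 1.0                 # point source at the current tool position
    u = lu.solve(rhs)                    # fast back-substitution per position
    print(f"tool at node {tool_node}: response at receiver = {u[tool_node + 10]:.4e}")
```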

Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 362
315 Harnessing Artificial Intelligence for Early Detection and Management of Infectious Disease Outbreaks

Authors: Amarachukwu B. Isiaka, Vivian N. Anakwenze, Chinyere C. Ezemba, Chiamaka R. Ilodinso, Chikodili G. Anaukwu, Chukwuebuka M. Ezeokoli, Ugonna H. Uzoka

Abstract:

Infectious diseases continue to pose significant threats to global public health, necessitating advanced and timely detection methods for effective outbreak management. This study explores the integration of artificial intelligence (AI) in the early detection and management of infectious disease outbreaks. Leveraging vast datasets from diverse sources, including electronic health records, social media, and environmental monitoring, AI-driven algorithms are employed to analyze patterns and anomalies indicative of potential outbreaks. Machine learning models, trained on historical data and continuously updated with real-time information, contribute to the identification of emerging threats. The implementation of AI extends beyond detection, encompassing predictive analytics for disease spread and severity assessment. Furthermore, the paper discusses the role of AI in predictive modeling, enabling public health officials to anticipate the spread of infectious diseases and allocate resources proactively. Machine learning algorithms can analyze historical data, climatic conditions, and human mobility patterns to predict potential hotspots and optimize intervention strategies. The study evaluates the current landscape of AI applications in infectious disease surveillance and proposes a comprehensive framework for their integration into existing public health infrastructures. The implementation of an AI-driven early detection system requires collaboration between public health agencies, healthcare providers, and technology experts. Ethical considerations, privacy protection, and data security are paramount in developing a framework that balances the benefits of AI with the protection of individual rights. The synergistic collaboration between AI technologies and traditional epidemiological methods is emphasized, highlighting the potential to enhance a nation's ability to detect, respond to, and manage infectious disease outbreaks in a proactive and data-driven manner. The findings of this research underscore the transformative impact of harnessing AI for early detection and management, offering a promising avenue for strengthening the resilience of public health systems in the face of evolving infectious disease challenges. This paper advocates for the integration of artificial intelligence into the existing public health infrastructure for early detection and management of infectious disease outbreaks. The proposed AI-driven system has the potential to revolutionize the way we approach infectious disease surveillance, providing a more proactive and effective response to safeguard public health.
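
As a simplified stand-in for the outbreak detection idea discussed above, the sketch below flags weeks whose case counts exceed a moving historical baseline by a z-score threshold; the counts and threshold are hypothetical.

```python
# Minimal sketch of outbreak aberration detection on weekly case counts: flag weeks
# whose count exceeds a moving historical baseline by more than a z-score threshold.
# A simplified statistical stand-in for the AI-driven detection discussed above;
# the case counts are hypothetical.
import numpy as np

cases = np.array([12, 9, 14, 11, 10, 13, 12, 15, 11, 10, 31, 44])  # weekly counts (hypothetical)
window, threshold = 8, 2.0          # baseline length and alert z-score

for week in range(window, len(cases)):
    baseline = cases[week - window:week]
    z = (cases[week] - baseline.mean()) / (baseline.std(ddof=1) + 1e-9)
    if z > threshold:
        print(f"week {week}: {cases[week]} cases (z = {z:.1f}) -> possible outbreak signal")
```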

Keywords: artificial intelligence, early detection, disease surveillance, infectious diseases, outbreak management

Procedia PDF Downloads 37
312 Unlocking New Room of Production in Brown Field: Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian, and clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation, and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequences in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. Three stratigraphic surface sections (Wadi Sudr, Wadi Matulla, and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used to correlate the Matulla sediments of the Ras Budran field. Cuttings descriptions, petrographic examination, log behavior, and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology, and facies environments, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management, since decisions concerning the development and depletion of hydrocarbon reserves depend on it. It was therefore essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically as possible. All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability, and water saturation models, which are the main parameters that describe the reservoir and provide the information needed to evaluate and develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped define the depositional setting of the Matulla Formation; and 3) 3D reservoir modeling technology as a tool for adequately understanding the spatial distribution of properties and for evaluating the unlocked new reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the undrained oil. Overall, the study added new room for production and additional reserves to the Ras Budran field.
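
As an illustration of the water saturation modeling mentioned above, the sketch below applies Archie's equation to porosity and resistivity values; the Archie parameters and log values are generic assumptions, not calibrated to the Ras Budran field.

```python
# Illustrative water-saturation calculation of the kind used to populate the
# saturation model mentioned above, based on Archie's equation. The Archie
# parameters and log values are generic assumptions, not field-calibrated.
import numpy as np

def archie_sw(phi, rt, rw=0.03, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return np.clip(((a * rw) / (phi**m * rt)) ** (1.0 / n), 0.0, 1.0)

porosity = np.array([0.12, 0.18, 0.22, 0.15])         # fraction, from the porosity model
deep_resistivity = np.array([20.0, 8.0, 4.0, 35.0])   # ohm-m, from resistivity logs

sw = archie_sw(porosity, deep_resistivity)
print("water saturation:", np.round(sw, 2))
print("hydrocarbon saturation:", np.round(1 - sw, 2))
```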

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 79
313 Pulmonary Complication of Chronic Liver Disease and the Challenges Identifying and Managing Three Patients

Authors: Aidan Ryan, Nahima Miah, Sahaj Kaur, Imogen Sutherland, Mohamed Saleh

Abstract:

Pulmonary symptoms are a common presentation to the emergency department. Due to a lack of understanding of the underlying pathophysiology, chronic liver disease is not often considered a cause of dyspnoea. We present three patients who were admitted with significant respiratory distress secondary to hepatopulmonary syndrome, portopulmonary hypertension, and hepatic hydrothorax. The first is a 27-year-old male with a 6-month history of progressive dyspnoea. The patient developed a severe type 1 respiratory failure with a PaO₂ of 6.3kPa and was escalated to critical care, where he was managed with non-invasive ventilation to maintain oxygen saturation. He had an agitated saline contrast echocardiogram, which showed the presence of a possible shunt. A CT angiogram revealed significant liver cirrhosis, portal hypertension, and large paraesophageal varices. Ultrasound of the abdomen showed a coarse liver echo pattern and an enlarged spleen. Along with these imaging findings, his biochemistry demonstrated impaired synthetic liver function with an elevated international normalized ratio (INR) of 1.4 and hypoalbuminaemia of 28g/L. The patient was then transferred to a tertiary center for further management. Further investigations confirmed a shunt of 56%, and liver biopsy confirmed cirrhosis suggestive of alpha-1-antitrypsin deficiency. The findings were consistent with a diagnosis of hepatopulmonary syndrome, and the patient is awaiting a liver transplant. The second patient is a 56-year-old male with a 12-month history of worsening dyspnoea, jaundice, and confusion. His medical history included liver cirrhosis, portal hypertension, and grade 1 oesophageal varices secondary to significant alcohol excess. On admission, he developed a type 1 respiratory failure with a PaO₂ of 6.8kPa, requiring 10L of oxygen. A CT pulmonary angiogram was negative for pulmonary embolism but showed evidence of chronic pulmonary hypertension, liver cirrhosis, and portal hypertension. An echocardiogram revealed a grossly dilated right heart with reduced function, pulmonary and tricuspid regurgitation, and pulmonary artery pressures estimated at 78mmHg. His biochemical markers showed impaired synthetic liver function with an INR of 3.2 and albumin of 29g/L, along with a raised bilirubin of 148mg/dL. During his long admission, he was managed with diuretics with little improvement. After three weeks, he was diagnosed with portopulmonary hypertension and was commenced on terlipressin. This resulted in successful weaning off oxygen, and he was discharged home. The third patient is a 61-year-old male who presented to the local ambulatory care unit for therapeutic paracentesis on a background of decompensated liver cirrhosis. On presentation, he complained of a 2-day history of worsening dyspnoea and a productive cough. Chest x-ray showed a large pleural effusion, increasing in size over the previous eight months, and his abdomen was visibly distended with ascitic fluid. Unfortunately, the patient deteriorated, developing a larger effusion along with an increase in oxygen demand, and passed away. In the absence of underlying cardiorespiratory disease, and with a persistent pleural effusion in the context of decompensated cirrhosis, he was diagnosed with hepatic hydrothorax. While each patient presented with dyspnoea, the cause and underlying pathophysiology differ significantly from case to case. By describing these complications, we hope to improve awareness and aid prompt and accurate diagnosis, which is vital for improving outcomes.

Keywords: dyspnea, hepatic hydrothorax, hepatopulmonary syndrome, portopulmonary syndrome

Procedia PDF Downloads 99
312 A Prospective Neurosurgical Registry Evaluating the Clinical Care of Traumatic Brain Injury Patients Presenting to Mulago National Referral Hospital in Uganda

Authors: Benjamin J. Kuo, Silvia D. Vaca, Joao Ricardo Nickenig Vissoci, Catherine A. Staton, Linda Xu, Michael Muhumuza, Hussein Ssenyonjo, John Mukasa, Joel Kiryabwire, Lydia Nanjula, Christine Muhumuza, Henry E. Rice, Gerald A. Grant, Michael M. Haglund

Abstract:

Background: Traumatic Brain Injury (TBI) is disproportionately concentrated in low- and middle-income countries (LMICs), with the odds of dying from TBI in Uganda more than 4 times higher than in high-income countries (HICs). The disparities in injury incidence and outcome between LMICs and resource-rich settings have led to increased health outcomes research for TBIs and their associated risk factors in LMICs. While there has been an increasing number of TBI studies in LMICs over the last decade, there is still a need for more robust prospective registries. In Uganda, a trauma registry implemented in 2004 at the Mulago National Referral Hospital (MNRH) showed that road traffic injury is the major contributor (60%) to overall mortality in the casualty department. While that registry provides information on injury incidence and burden, it is limited in scope and does not follow patients longitudinally throughout their hospital stay, nor does it focus specifically on TBIs. Although these retrospective analyses are helpful for benchmarking TBI outcomes, they make it hard to identify specific quality improvement initiatives. The relationship among epidemiology, patient risk factors, clinical care, and TBI outcomes is still relatively unknown at MNRH. Objective: The objectives of this study are to describe the processes of care and determine risk factors predictive of poor outcomes for TBI patients presenting to a single tertiary hospital in Uganda. Methods: Prospective data were collected for 563 TBI patients presenting to a tertiary hospital in Kampala from 1 June – 30 November 2016. Research Electronic Data Capture (REDCap) was used to systematically collect variables spanning 8 categories. Univariate and multivariate analyses were conducted to determine significant predictors of mortality. Results: 563 TBI patients were enrolled from 1 June – 30 November 2016. 102 patients (18%) received surgery, 29 patients (5.1%) intended for surgery failed to receive it, and 251 patients (45%) received non-operative management. Overall mortality was 9.6%, which ranged from 4.7% for mild and moderate TBI to 55% for severe TBI patients with GCS 3-5. Within each TBI severity category, mortality differed by management pathway. Variables predictive of mortality were TBI severity, more than one intracranial bleed, failure to receive surgery, high dependency unit admission, ventilator support outside of surgery, and hospital arrival delayed by more than 4 hours. Conclusions: The overall mortality rate of 9.6% in Uganda for TBI is high and likely underestimates the true TBI mortality. Furthermore, the wide-ranging mortality (3-82%), high ICU fatality, and negative impact of care delays suggest shortcomings with the current triaging practices. Lack of surgical intervention when needed was highly predictive of mortality in TBI patients. Further research into the determinants of surgical intervention, the quality of step-up care, and prolonged care delays is needed to better understand the complex interplay of variables that affect patient outcomes. These insights will guide the development of future interventions and resource allocation to improve patient outcomes.
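
A minimal sketch of the multivariable mortality analysis described in the methods, using a logistic regression on a hypothetical registry extract; the variables and values are placeholders, not the MNRH data.

```python
# Illustrative multivariable logistic regression for in-hospital mortality, of the
# kind described above. The data frame columns and values are hypothetical
# placeholders, not the Mulago registry data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical registry extract: one row per patient
df = pd.DataFrame({
    "died":          [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1],
    "severe_tbi":    [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    "surgery":       [1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1],
    "arrival_delay": [0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0],  # >4 h after injury
})

model = smf.logit("died ~ severe_tbi + surgery + arrival_delay", data=df).fit(disp=False)
print("odds ratios:\n", np.exp(model.params).round(2))
print("p-values:\n", model.pvalues.round(3))
```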

Keywords: care continuum, global neurosurgery, Kampala Uganda, LMIC, Mulago, prospective registry, traumatic brain injury

Procedia PDF Downloads 206
311 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk for suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic analysis of various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
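
A minimal sketch of the tradespace exploration idea described above: enumerate design combinations, score each with a weighted multi-attribute utility, and keep the Pareto-optimal cost-utility set. The design options, attribute scores, and weights are hypothetical, not a validated MATE model.

```python
# Minimal sketch of multi-attribute tradespace exploration: enumerate sensor-system
# design combinations, score each with a weighted utility, and keep the Pareto-optimal
# set of (cost, utility) designs. All options, scores, and weights are hypothetical.
from itertools import product

# each option: (name, cost, {attribute: single-attribute utility in [0, 1]})
sensors = [("lidar", 900, {"accuracy": 0.9, "range": 0.8}),
           ("radar", 400, {"accuracy": 0.6, "range": 0.9}),
           ("camera", 150, {"accuracy": 0.7, "range": 0.4})]
cpus = [("edge_gpu", 600, {"latency": 0.9}), ("mcu", 80, {"latency": 0.4})]
weights = {"accuracy": 0.5, "range": 0.3, "latency": 0.2}   # stakeholder weights (assumed)

designs = []
for (s_name, s_cost, s_attr), (c_name, c_cost, c_attr) in product(sensors, cpus):
    attrs = {**s_attr, **c_attr}
    utility = sum(weights[k] * attrs.get(k, 0.0) for k in weights)
    designs.append((f"{s_name}+{c_name}", s_cost + c_cost, utility))

# Pareto filter: drop any design dominated in (lower cost, higher utility)
pareto = [d for d in designs
          if not any(o[1] <= d[1] and o[2] >= d[2] and (o[1] < d[1] or o[2] > d[2])
                     for o in designs)]
for name, cost, u in sorted(pareto, key=lambda d: d[1]):
    print(f"{name:18s} cost={cost:5d} utility={u:.2f}")
```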

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 159
310 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is in the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers’ service experience in a new area rather than depending on traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship, filling a gap in the literature, since these relations had not previously been correlated or explained using such an integrated model; this is the first time such a modified, integrated model has been applied in the telecom field. Practically, this research gives marketers and practitioners insights into improving customer loyalty by evolving the experience quality of broadband customers, which translates into the suggested outcomes of purchase, commitment, repeat purchase and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed by using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation for the results. Furthermore, customers’ priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group and, in addition, be applied in the same industry, especially in developing countries that have similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, for making marketing more reliable, as managers can relate investments in service experience directly to the performance measures closest to income, for instance, repurchasing behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
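
As a small illustration of the net promoter score measurement mentioned above, the sketch below computes NPS from 0-10 likelihood-to-recommend scores; the responses are placeholders, not data from the 412 questionnaires.

```python
# Illustrative net promoter score (NPS) calculation of the kind reported above.
# The 0-10 likelihood-to-recommend scores are placeholders, not survey data.
import numpy as np

scores = np.array([10, 9, 9, 8, 7, 6, 10, 3, 9, 8, 5, 10, 7, 9, 2])  # hypothetical responses

promoters = np.mean(scores >= 9)      # 9-10: promoters
detractors = np.mean(scores <= 6)     # 0-6: detractors
nps = 100 * (promoters - detractors)  # passives (7-8) are excluded from both groups

print(f"promoters = {promoters:.0%}, detractors = {detractors:.0%}, NPS = {nps:.0f}")
```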

Keywords: broadband services, customer experience quality, loyalty, net promoters score

Procedia PDF Downloads 246
309 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning

Authors: John Zanetich

Abstract:

Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs), such as learning communities, community-based projects, research, internships, study abroad and culminating senior experiences, share several traits in common: they demand considerable time and effort, learning occurs outside of the classroom, they require meaningful interactions between faculty and students, they encourage collaboration with diverse others, and they provide frequent and substantive feedback. As a result of the experiential learning in these practices, participation can be life changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and that knowledge which is embedded in the process and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and the demand for a learner to align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is considered generalized and tacit instead of explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source. The development of knowledge-related capabilities such as Aggressive Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, improve decision making, and improve overall performance. ADTK allows the student in HIPs to use their existing knowledge in a way that allows them to evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein Change Theory, the learner will reach for tacit knowledge as a stabilizing mechanism when they are challenged by new information that puts them slightly off balance. As in word association drills, the important concept is the first thought. The reactive response to an experience is the programmed or tacit memory and knowledge of the learner's core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper will explore the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It will focus on knowledge conversion in curriculum development and propose the use of one-time educational experiences, multi-session experiences and sequential program experiences focusing on tacit knowledge in educational programs.

Keywords: tacit knowledge, knowledge management, college programs, experiential learning

Procedia PDF Downloads 240
308 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education

Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg

Abstract:

In Norway, every municipality shall provide an educational psychological service, EPS, to support kindergartens and schools in their work with children and youths with special needs. The EPS focus its work on individuals, aiming to identify special needs and to give advice to teachers and parents when they ask for it. In addition, the service also give priority to prevention and system intervention in kindergartens and schools. To master these big tasks university courses are established to support EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge on how they experience the courses they attend. In this study, EPS counsellors’ reflection logs during a particular course are investigated. The research question is: what are the content and priorities of the reflections that are communicated in the logs produced by the educational psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The altogether 55 students enrolled in the course work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the data material to ascertain, differentiate and understand the meaning of the content in each log. The analysis also involved the search for similarity in content and development of analytical categories that described the focus and primary concerns in each of the written logs. This involved constant 'critical and sustained discussions' for mutual construction of meaning between the co-researchers in the developing categories. The process is inspired by Grounded Theory. This means that the concepts developed during the analysis derived from the data material and not chosen prior to the investigation. The analysis revealed that the concept 'Useful' frequently appeared in the participants’ reflections and, as such, 'Useful' serves as a core category. The core category is described through three major categories: (1) knowledge sharing (concerning direct and indirect work with students with special needs) with colleagues is useful, (2) reflections on models and theoretical concepts (concerning students with special needs) are useful, (3) reflection on the role as EPS counsellor is useful. In all the categories, the notion of useful occurs in the participants’ emphasis on and acknowledgement of the immediate and direct link between the university course content and their daily work practice. Even if each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven. It is the connectedness that gives the core category an overarching explanatory power. The knowledge from this study may be a relevant contribution when it comes to designing new courses that support continuing professional development for EPS counsellors, whether for post-graduate university courses or local courses at the EPS offices or whether in Norway or other countries in the world.

Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development

Procedia PDF Downloads 91
307 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot-Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design with little attention given to the user perspective as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in a common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user perspective related metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical purpose related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points for failure, and a strong enough core component to allow a sock to cover over the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy subject testing is being designed and recruited to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 126
306 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures

Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai

Abstract:

In recent times, South Africa has experienced extensive droughts that have created the need for reliable small water reservoirs. These reservoirs have quick fabrication and installation times compared to market alternatives. An elevated water tank has inherent potential energy, meaning that no additional water pumps are required to sustain water pressure at the outlet point, thus ensuring that a water source is available even without electricity. The initial construction formwork and the complex geometric shapes of concrete towers that require casting can become time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally in rural areas, at schools, or on farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, combined with supporting lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions due to wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank and the current wind loading codes as applicable to South Africa. The research compares the respective design parameters (both theoretical and code-based), whereby FEA analyses are conducted on the various design solutions. The currently available wind loading codes are not sufficient to design slender cantilever latticed steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes are dependent on various assumptions. Factors that require investigation in this study are: the wind loading angle to the face of the structure that will result in the maximum load; the internal structural effects on models with different bracing patterns; the loading influence of the aspect ratio of the tank; and the influence of the clearance height of the tank on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.
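
A minimal sketch of the kind of theoretical wind load estimate compared in this study: dynamic pressure and drag force on a single lattice member. The wind speed, drag coefficient, and member dimensions are assumed values, not code-specified design parameters.

```python
# Minimal sketch of a wind drag-force estimate on a single lattice member, of the
# kind underlying the theoretical wind loading comparison above. The drag
# coefficient, wind speed, and member area are assumed values.
rho_air = 1.2          # air density (kg/m^3)
v_wind = 40.0          # design wind speed (m/s), assumed
cd = 2.0               # drag coefficient for a sharp-edged member, assumed
area = 0.15 * 6.0      # projected area: 150 mm wide member, 6 m long (m^2)

q = 0.5 * rho_air * v_wind**2          # dynamic pressure (Pa)
force = cd * q * area                  # drag force on the member (N)
print(f"dynamic pressure = {q:.0f} Pa, member wind force = {force/1000:.2f} kN")
```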

Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads

Procedia PDF Downloads 124
305 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing Polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted to focus solely on fire problems, and validation of FDS applicability has been achieved in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
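
The one-way coupling described above amounts to exporting thermal results from the fire solver and imposing them as boundary conditions in the structural solver. The following Python sketch illustrates the idea with a simple file-based hand-off; the file formats and function names are placeholders and do not reflect the actual FDS-2-ABAQUS implementation. A two-way coupling would additionally feed the deformed geometry back to the CFD solver at each exchange interval.

# Illustrative sketch of a one-way CFD-to-FEA thermal coupling: surface
# temperatures exported by the fire solver at each output time are rewritten as
# an amplitude table for the structural thermal step. File names, formats and
# node mapping are placeholders, not the FDS-2-ABAQUS API.
import csv

def read_fds_surface_temperatures(path: str) -> dict[float, list[float]]:
    """Read a CSV of (time, T_node_1, T_node_2, ...) rows exported by the CFD solver."""
    history = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            time, *temps = map(float, row)
            history[time] = temps
    return history

def write_fea_amplitude(history: dict[float, list[float]], path: str) -> None:
    """Write a time-temperature amplitude table for the FEA thermal step (placeholder format)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for time in sorted(history):
            writer.writerow([time, *history[time]])

if __name__ == "__main__":
    temps = read_fds_surface_temperatures("fds_surface_temps.csv")  # assumed CFD export
    write_fea_amplitude(temps, "fea_thermal_amplitudes.csv")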

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 65
304 Microencapsulation of Probiotic and Evaluation for Viability, Antimicrobial Property and Cytotoxic Activities of its Postbiotic Metabolites on MCF-7 Breast Cancer Cell Line

Authors: Nkechi V. Enwuru, Bullum Nkeki, Elizabeth A. Adekoya, Olumide A. Adebesin, Rebecca F. Peters, Victoria A. Aikhomu, Mendie E. U.

Abstract:

Background: Probiotics are live microbial feed supplements beneficial to the host. Probiotics and their postbiotic products have been used to prevent or treat various health conditions. However, the products' cell viability is often low due to the harsh conditions encountered during processing, handling, storage, and gastrointestinal transit. These conditions strongly influence probiotics' benefits; thus, viability is essential for probiotics to produce health benefits for the host. Microencapsulation is a promising technique with considerable effects on probiotic survival. This study aimed to formulate a microencapsulated probiotic and evaluate its viability, antimicrobial efficacy, and the cytotoxic activity of its postbiotic on the MCF-7 breast cancer cell line. Method: Human and animal raw milk were sampled for lactic acid bacteria. The isolated bacteria were identified using conventional methods and the VITEK 2 system. The identified lactic acid bacterium was encapsulated using spray-drying and extrusion methods. The free, encapsulated, and chitosan-coated encapsulated probiotics were tested for viability in simulated gastrointestinal (SGI) fluid and under different storage conditions at refrigerated (4°C) and room (25°C) temperatures. The disintegration time and weight uniformity of the spray-dried hard gelatin capsules were tested. The antimicrobial property of free and encapsulated probiotics was tested against enteric pathogenic isolates from antiretroviral therapy (ART) treated HIV-positive patients. The postbiotic of the free cells was extracted, and its cytotoxic effect on the MCF-7 breast cancer cell line was tested through an MTT assay. Result: Lactobacillus plantarum was isolated from animal raw milk. Size-0 hard gelatin L. plantarum capsules with granules within a size range of 0.71–1.00 mm diameter were formulated. The disintegration time ranged from 2.14±0.045 to 2.91±0.293 minutes, while the average weight was 502.1 mg. The simulated gastric solution significantly affected the viability of both free cells and microcapsules. However, the encapsulated cells were more protected and remained viable due to the impermeability of the microcapsules. Furthermore, the viability of free cells stored at 4°C and 25°C was less than 4 log CFU/g and 6 log CFU/g, respectively, after 12 weeks. The microcapsules stored at 4°C achieved the highest viability compared with the free cells and microcapsules stored at 25°C and the free cells stored at 4°C. Encapsulated cells were released in the simulated gastric fluid, remaining viable and effective against the enteric pathogens tested. In particular, chitosan-coated calcium alginate encapsulated probiotics significantly inhibited Shigella flexneri, Candida albicans, and Escherichia coli. The postbiotic metabolites (PM) of L. plantarum produced a cytotoxic effect on the MCF-7 breast cancer cell line. The postbiotic showed significant cytotoxic activity similar to 5-FU, a standard antineoplastic agent. The concentration inhibiting 50% of growth (IC50) of postbiotic metabolite K3 was low and consistent with the IC50 of the positive control (cisplatin). Conclusions: Lactobacillus plantarum postbiotic exhibited a cytotoxic effect on the MCF-7 breast cancer cell line and could be used as a combined adjuvant therapy in breast cancer management. The microencapsulation technique protects the probiotics, improving their viability and delivery to the gastrointestinal tract. Chitosan enhances antibacterial efficacy; thus, chitosan-coated microencapsulated L. plantarum probiotics could be more effective and used as a combined therapy for the management of opportunistic enteric infection in HIV patients.

Keywords: probiotics, encapsulation, gastrointestinal conditions, antimicrobial effect, postbiotic, cytotoxicity effect

Procedia PDF Downloads 60
303 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions

Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes

Abstract:

The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers as it stimulates resource circularity in production and consumption systems. A large body of research has explored the principles of CE, but scant attention has been focused on analysing how CE is evaluated, agreed upon, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis relevant to effective resource management. To address this limitation, the present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, the study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões, and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally grouped under the headings of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; ML is then introduced to identify its place in data science, its relationship to topics such as big data analytics, and the production problems to which AI and ML can be applied. A lifecycle-based approach is then taken to analyse the use of different methods in each phase, in order to identify the most useful technologies and the unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at the contexts of large metropolises, neglecting rural territories; within this project, therefore, a dynamic decision support model coupled with artificial intelligence tools and information platforms is being developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will go beyond the scientific developments carried out to date and will make it possible to overcome limitations related to the availability and reliability of data.
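
The synergy identification mentioned above can be pictured as matching one company's waste streams to another company's resource demands. The following Python sketch shows such a toy matching step; the companies, materials, and tonnages are invented for illustration and do not come from the project data.

# Toy sketch of industrial-symbiosis matching: pair one company's waste stream
# with another company's resource demand when the material matches, reporting
# the transferable quantity. All entries below are invented.

waste_streams = [
    {"company": "A", "material": "wood offcuts", "tonnes_per_year": 120},
    {"company": "B", "material": "whey", "tonnes_per_year": 300},
]
resource_demands = [
    {"company": "C", "material": "wood offcuts", "tonnes_per_year": 80},
    {"company": "D", "material": "whey", "tonnes_per_year": 500},
]

def find_synergies(wastes, demands):
    """Return (supplier, consumer, material, transferable tonnes) candidates."""
    matches = []
    for w in wastes:
        for d in demands:
            if w["material"] == d["material"]:
                qty = min(w["tonnes_per_year"], d["tonnes_per_year"])
                matches.append((w["company"], d["company"], w["material"], qty))
    return matches

print(find_synergies(waste_streams, resource_demands))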

Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning

Procedia PDF Downloads 45
302 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies investigated in our work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and (3) the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in the structures of knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. Findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives. Ultimately, we argue some of the implications of our findings: they do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is impossible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge; rather, they imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
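
To make the third deficiency concrete, the following Python sketch shows one possible way a knowledge item could carry explicit existence and time parameters alongside a truth status that goes beyond two-valued logic; the field names and the three-valued status are illustrative choices, not a proposal from the paper.

# Sketch of a knowledge item carrying the "existence" and "time" parameters the
# text argues are missing from typical two-valued representations. Field names
# and the three-valued truth status are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class TruthStatus(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDETERMINED = "undetermined"   # beyond two-valued logic

@dataclass
class TemporalAssertion:
    subject: str
    predicate: str
    obj: str
    status: TruthStatus
    valid_from: str          # interval over which the assertion holds
    valid_until: str | None  # None = still holding
    existence_context: str   # the framework environment the assertion lives in

fact = TemporalAssertion(
    subject="ice", predicate="state_of", obj="water",
    status=TruthStatus.TRUE,
    valid_from="T < 0 °C", valid_until=None,
    existence_context="physical world",
)
print(fact)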

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 82
301 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances such as those required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of window area significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization of the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, while also making use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning through Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to provide this data structure, extracting the data required for the simulation from BIM models and making it usable for the calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file and includes the object itself as well as the neighboring buildings and possible remote shading. The tool uses a ray tracing method to determine possible glare from solar reflections off neighboring buildings as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account. This optimized per-window daylight assessment makes it possible to estimate the potential power generation of the PV integrated in the venetian blind as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to improve the coordination between the BIM model and the coupled building simulation, bringing the resulting shading and daylighting system together with the artificial lighting system and maximum power generation in a control system. In the research project Powershade, an AI-based control concept for PV-integrated façade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
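
The per-window shading check described above can be reduced, in its simplest form, to casting a ray from the window towards the sun and testing it against the surrounding geometry, then summing the unshaded hours weighted by irradiance from weather data. The following Python sketch illustrates that idea with axis-aligned boxes standing in for neighbouring buildings; it is a simplification, not the HELLA DECART implementation, and all geometry and irradiance values are invented.

# Minimal sketch of a per-window shading test: a ray from the window centre
# toward the sun is tested against axis-aligned boxes standing in for
# neighbouring buildings; unshaded hours are summed weighted by irradiance.
import numpy as np

def ray_hits_box(origin, direction, box_min, box_max) -> bool:
    """Slab test: does the ray from origin along direction intersect the box?"""
    direction = np.where(direction == 0, 1e-12, direction)
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

def annual_direct_irradiation(window_pos, sun_vectors, dni_series, obstacles):
    """Sum direct irradiance over the hours when no obstacle shades the window."""
    total = 0.0
    for sun_dir, dni in zip(sun_vectors, dni_series):
        shaded = any(ray_hits_box(window_pos, sun_dir, lo, hi) for lo, hi in obstacles)
        if not shaded:
            total += dni  # Wh/m^2 per hourly step (assumed weather-file units)
    return total

if __name__ == "__main__":
    window = np.array([0.0, 0.0, 3.0])
    sun_dirs = [np.array([0.3, 0.5, 0.8]), np.array([-0.4, 0.2, 0.9])]
    dni = [600.0, 450.0]                              # invented hourly values
    neighbour = (np.array([5.0, -2.0, 0.0]), np.array([8.0, 2.0, 12.0]))
    print(annual_direct_irradiation(window, sun_dirs, dni, [neighbour]))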

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 83
300 Screens Design and Application for Sustainable Buildings

Authors: Fida Isam Abdulhafiz

Abstract:

Traditional vernacular architecture in the United Arab Emirates consisted mainly of adobe houses with a limited number of openings in their facades. The thick mud and rubble walls and wooden window screens protected their inhabitants from the harsh desert climate, provided them with privacy, and fulfilled their comfort needs to an extent. However, with the rise of the immediate post-petroleum era, reinforced concrete villas built with glass and steel technology replaced traditional vernacular dwellings, and more load was placed on mechanical cooling systems to ensure the satisfaction of today's more demanding dwelling inhabitants. In the early 21st century, professionals started to pay more attention to the carbon footprint caused by built construction, and many studies and innovative approaches are now dedicated to lowering the impact of existing operating buildings on their surrounding environments. UAE government agencies have introduced regulations that aim to revive sustainable and environmental design through local and international building codes and urban design policies such as Estidama and LEED. The focus of this paper is on reducing the emissions resulting from the use of energy in cooling and heating systems through innovative screen designs and façade solutions that provide a green footprint and aesthetic architectural icons. Screens are one of the popular innovative techniques that can be added during the design process or applied to existing buildings as a renovation technique to develop passive green buildings. Preparing future architects to understand the importance of environmental design was attempted through physical modelling of window screens as an educational means of combining theory with a hands-on teaching approach. Designing screens proved to be a popular technique that helped students understand the importance of sustainable design and passive cooling. After creating models of prototype screens, several tests were conducted to calculate the amount of sun, light, and wind that passes through the screens, affecting the heat load and the light entering the building. Theory further explored concepts of green buildings and materials that produce low carbon emissions. This paper highlights the importance of hands-on experience for student architects and how physical modelling helped raise eco-awareness in the design studio. The paper studies different types of façade screens and shading devices developed by architecture students and explains the production of diverse patterns for traditional screens by student architects, based on sustainable design concepts that work properly with the climate requirements of the Middle East region.

Keywords: building’s screens modeling, façade design, sustainable architecture, sustainable dwellings, sustainable education

Procedia PDF Downloads 264
299 An Engaged Approach to Developing Tools for Measuring Caregiver Knowledge and Caregiver Engagement in Juvenile Type 1 Diabetes

Authors: V. Howard, R. Maguire, S. Corrigan

Abstract:

Background: Type 1 Diabetes (T1D) is a chronic autoimmune disease, typically diagnosed in childhood. T1D puts an enormous strain on families; controlling blood glucose in children is difficult, and the consequences of poor control for patient health are significant. Successful illness management and better health outcomes can depend on the quality of caregiving. On diagnosis, parent-caregivers face a steep learning curve, as T1D care requires a significant level of knowledge to inform complex decision making throughout the day. The majority of illness management is carried out in the home setting, independent of clinical health providers. Parent-caregivers vary in their level of knowledge and their level of engagement in applying this knowledge in the practice of illness management. Enabling researchers to quantify these aspects of the caregiver experience is key to identifying targets for psychosocial support interventions, which are desirable for reducing stress and anxiety in this highly burdened cohort and supporting better health outcomes in children. Currently, there are limited tools available that are designed to capture this information; where tools do exist, they are not comprehensive and do not adequately capture the lived experience. Objectives: To develop quantitative tools, informed by lived experience, that enable researchers to gather data on parent-caregiver knowledge and engagement which accurately represent the experience of the cohort and enable exploration of questions that are of real-world value to the cohort themselves. Methods: This research employed an engaged approach to address the problem of quantifying two key aspects of caregiver diabetes management: knowledge and engagement. The research process was multi-staged and iterative. Stage 1: Working from a constructivist standpoint, the literature was reviewed to identify relevant questionnaires, scales, and single-item measures of T1D caregiver knowledge and engagement, and to harvest candidate questionnaire items. Stage 2: Aggregated findings from the review were circulated among a PPI (patient and public involvement) expert panel of caregivers (n=6) for discussion and feedback. Stage 3: In collaboration with the expert panel, data were interpreted through the lens of lived experience to create a long-list of candidate items for the novel questionnaires; items were categorized as either 'knowledge' or 'engagement'. Stage 4: A Delphi-method process (iterative surveys) was used to prioritize question items and generate novel questions that further captured the lived experience. Stage 5: Both questionnaires were piloted to refine the wording, increase accessibility, and limit socially desirable responding. Stage 6: The tools were piloted using an online survey deployed through an online peer-support group for caregivers of juveniles with T1D. Ongoing Research: 123 parent-caregivers completed the survey. Data analysis is ongoing to establish face and content validity, both qualitatively and through exploratory factor analysis. Reliability will be established using an alternative-form method, and Cronbach's alpha will assess internal consistency. Work will be completed by early 2024. Conclusion: These tools will enable researchers to gain deeper insights into caregiving practices among parents of juveniles with T1D. Development was driven by lived experience, illustrating the value of engaged research at all levels of the research process.
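
As an illustration of the planned internal-consistency check, the following Python sketch computes Cronbach's alpha on a small item-response matrix; the responses are invented and do not represent the survey data.

# Sketch of Cronbach's alpha on an item-response matrix (rows = respondents,
# columns = questionnaire items). The toy responses are invented, not study data.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
])
print(round(cronbach_alpha(responses), 2))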

Keywords: caregiving, engaged research, juvenile type 1 diabetes, quantified engagement and knowledge

Procedia PDF Downloads 29
298 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated with a variety of different topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function, covering the material and labor costs of a structure, is subjected to constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), in this way gradually refining the solution space towards the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
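
To give a flavour of how such a problem is posed, the following Pyomo sketch formulates a minimal structural MINLP in which a binary variable selects one of two standard sections for a tension member, the mass is minimised, and the stress and elongation constraints are non-linear in the chosen area. The numbers are invented, the model is far simpler than those described above, and it is not the MIPSYN implementation; solving it would need an MINLP-capable solver (the commented-out MindtPy call assumes suitable NLP and MILP solvers are installed).

# Minimal, illustrative MINLP: pick one of two standard sections (binary) for a
# tension member, minimising mass subject to non-linear stress and elongation
# constraints. All values are invented for illustration.
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeReals,
                           Objective, Constraint, minimize, SolverFactory)

L, N, E, FY, RHO = 3.0, 2.0e5, 2.1e11, 3.55e8, 7850.0   # m, N, Pa, Pa, kg/m^3
SECTIONS = {1: 4.0e-4, 2: 8.0e-4}                        # standard areas, m^2

m = ConcreteModel()
m.y = Var(SECTIONS, within=Binary)                       # section selection
m.A = Var(within=NonNegativeReals,
          bounds=(min(SECTIONS.values()), max(SECTIONS.values())))

m.one_section = Constraint(expr=sum(m.y[i] for i in SECTIONS) == 1)
m.area_link = Constraint(expr=m.A == sum(SECTIONS[i] * m.y[i] for i in SECTIONS))
m.stress = Constraint(expr=N / m.A <= FY)                # non-linear in A
m.elongation = Constraint(expr=N * L / (E * m.A) <= 0.006)  # serviceability limit (m)

m.mass = Objective(expr=RHO * L * m.A, sense=minimize)

# results = SolverFactory("mindtpy").solve(m, mip_solver="glpk", nlp_solver="ipopt")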

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 17
297 Radioprotective Effects of Super-Paramagnetic Iron Oxide Nanoparticles Used as Magnetic Resonance Imaging Contrast Agent for Magnetic Resonance Imaging-Guided Radiotherapy

Authors: Michael R. Shurin, Galina Shurin, Vladimir A. Kirichenko

Abstract:

Background. The visibility of hepatic malignancies is poor on the non-contrast imaging used for daily verification of liver targets prior to radiation therapy on MRI-guided linear accelerators (MR-Linac). Ferumoxytol® (Feraheme, AMAG Pharmaceuticals, Waltham, MA) is a superparamagnetic iron oxide nanoparticle (SPION) agent that is increasingly utilized off-label as a hepatic MRI contrast agent. This agent has the advantage of providing a functional assessment of the liver based upon its uptake by hepatic Kupffer cells in proportion to vascular perfusion, resulting in strong T1, T2 and T2* relaxation effects and enhanced contrast of malignant tumors, which lack Kupffer cells. The latter characteristic has recently been utilized for MRI-guided radiotherapy planning with precision targeting of liver malignancies. However, the potential radiotoxicity of SPIONs has never been addressed with respect to their safe use as an MRI contrast agent during liver radiotherapy on an MR-Linac. This study defines the radiomodulating properties of SPIONs in vitro in human monocyte and macrophage cell lines exposed to 60Co gamma-rays within the clinical radiotherapy dose range. Methods. Human monocyte and macrophage cell lines in culture were loaded with a clinically relevant concentration of Ferumoxytol (30 µg/ml) for 2 and 24 h and irradiated with 3 Gy, 5 Gy and 10 Gy. Cells were washed and cultured for an additional 24 and 48 h prior to assessing their phenotypic activation by flow cytometry and their function, including viability (Annexin V/PI assay), proliferation (MTT assay) and cytokine expression (Luminex assay). Results. Our results revealed that SPIONs affected both human monocytes and macrophages in vitro. Specifically, iron oxide nanoparticles decreased radiation-induced apoptosis and prevented radiation-induced inhibition of human monocyte proliferative activity. Furthermore, Ferumoxytol protected monocytes from radiation-induced modulation of phenotype. For instance, while irradiation decreased polarization of monocytes to the CD11b+CD14+ and CD11bnegCD14neg phenotypes, Ferumoxytol prevented these effects. In macrophages, Ferumoxytol counteracted the ability of radiation to up-regulate cell polarization to the CD11b+CD14+ phenotype and prevented radiation-induced down-regulation of the expression of HLA-DR and CD86 molecules. Finally, Ferumoxytol uptake by human monocytes down-regulated the expression of the pro-inflammatory chemokines MIP-1α (macrophage inflammatory protein 1α), MIP-1β (CCL4) and RANTES (CCL5). In macrophages, Ferumoxytol reversed the expression of IL-1RA, IL-8, IP-10 (CXCL10) and TNF-α, and up-regulated the expression of MCP-1 (CCL2) and MIP-1α in irradiated macrophages. Conclusion. The SPION agent Ferumoxytol increases the resistance of human monocytes to radiation-induced cell death in vitro and supports an anti-inflammatory phenotype of human macrophages under radiation. The effect is radiation dose-dependent and depends on the duration of Feraheme uptake. This study also finds strong evidence that SPIONs reverse the effect of radiation on the expression of pro-inflammatory cytokines involved in the initiation and development of radiation-induced liver damage. Correlative translational work at our institution will directly assess the cytoprotective effects of Ferumoxytol on human Kupffer cells in vitro, together with ex vivo analysis of explanted liver specimens in a subset of patients receiving Feraheme-enhanced MRI-guided radiotherapy to primary liver tumors as a bridge to liver transplant.

Keywords: superparamagnetic iron oxide nanoparticles, radioprotection, magnetic resonance imaging, liver

Procedia PDF Downloads 50
296 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century

Authors: G. Iacoviello, A. Lazzini

Abstract:

This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries. It also focuses on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources are numerous original documents issued between 1890 and 1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process providing students with values, ideas, and models that they will subsequently use to discipline themselves, remaining as close to them as possible. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed to explain how power operates within society and how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions, including government. Schools exercise 'epistemological power' – a power to extract a knowledge of individuals from individuals. Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the 'power-knowledge' interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, the structural changes resulting from economic, social and cultural development affect educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy's education system is coherent with the idea that power and knowledge do not exist independently but instead are coterminous. This research aims to reduce this gap by analysing the role of the state in the development of accounting education in Italy.

Keywords: education system, government, knowledge, power

Procedia PDF Downloads 121
295 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratification of patients' risk of early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data of 100 adult trauma patients were then collected, and the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall prediction of acute coagulopathy of trauma score was 118.7±58.5, and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality rather than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.
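
For illustration, the following Python sketch shows how an ROC-based cut-off for a single assay (here INR) can be chosen with Youden's J statistic; the values are synthetic and are not the study data, in which separate cut-offs were established for INR, prothrombin time and activated partial thromboplastin time.

# Sketch of deriving an ROC-based cut-off (via Youden's J) for one coagulation
# assay. Outcomes and INR values below are invented for illustration.
import numpy as np
from sklearn.metrics import roc_curve

# 1 = developed acute traumatic coagulopathy, 0 = did not (synthetic labels)
outcome = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])
inr     = np.array([1.0, 1.1, 1.05, 1.3, 1.15, 1.25, 1.4, 1.1, 1.2, 1.5, 1.12, 1.22])

fpr, tpr, thresholds = roc_curve(outcome, inr)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
print(f"optimal INR cut-off ~ {thresholds[best]:.2f}")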

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 155
294 MTT Assay-Guided Isolation of a Cytotoxic Lead from Hedyotis umbellata and Its Mechanism of Action against Non-Small Cell Lung Cancer A549 Cells

Authors: Kirti Hira, A. Sajeli Begum, S. Mahibalan, Poorna Chandra Rao

Abstract:

Introduction: Cancer is one of the leading causes of death worldwide. Although existing therapies effectively kill cancer cells, they also affect normally growing cells, leading to many undesirable side effects. Hence there is a need to develop effective as well as safe drug molecules to combat cancer, which is possible through phytochemical research; the currently available plant-derived blockbuster drugs are examples of this. In view of this, an investigation was carried out to identify cytotoxic lead molecules from Hedyotis umbellata (family Rubiaceae), a widely distributed weed in India. Materials and Methods: The methanolic extract of the whole plant of H. umbellata (MHU), prepared through the Soxhlet extraction method, was further fractionated with diethyl ether and n-butanol, successively. MHU, the ether fraction (EMHU), and the butanol fraction (BMHU) were lyophilized and tested for cytotoxic effect using the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay against the non-small cell lung cancer (NSCLC) A549 cell line. The potentially active EMHU was subjected to chromatographic purification using normal-phase silica columns in order to isolate the responsible bioactive compounds. The isolated pure compounds were tested for their cytotoxic effect by MTT assay against A549 cells. Compound-3, which was found to be the most active, was characterized using IR, 1H- and 13C-NMR, and MS analysis. The study was further extended to decipher the mechanism of the cytotoxic action of compound-3 against A549 cells through various in vitro cellular models. Cell cycle analysis was done using flow cytometry following PI (propidium iodide) staining, and protein analysis was done using the Western blot technique. Results: Among MHU, EMHU, and BMHU, the non-polar fraction EMHU demonstrated a significant dose-dependent cytotoxic effect with an IC50 of 67.7 μg/ml. Chromatography of EMHU yielded seven compounds. The MTT assay of the isolated compounds identified compound-3 as the most active one, inhibiting the growth of A549 cells with an IC50 value of 14.2 μM. Further, compound-3 was identified as cedrelopsin, a coumarin derivative with a molecular weight of 260. Results of the in vitro mechanistic studies showed that cedrelopsin induced cell cycle arrest at the G2/M phase and down-regulated the expression of G2/M regulatory proteins such as cyclin B1, cdc2, and cdc25C in a dose-dependent manner. This is the first report that explores the cytotoxic mechanism of cedrelopsin. Conclusion: Thus, a potential small lead molecule, cedrelopsin, isolated from H. umbellata and showing an antiproliferative effect mediated by G2/M arrest in A549 cells, was discovered. Testing of cedrelopsin against other cancer cell lines, followed by in vivo studies, can be performed in the future to develop a new drug candidate.
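
As an illustration of how an IC50 such as the 14.2 μM value reported above is typically estimated from MTT data, the following Python sketch fits a four-parameter logistic dose-response curve; the concentrations and viability values are invented placeholders, not the study measurements.

# Sketch of estimating an IC50 from MTT viability data with a four-parameter
# logistic fit. Concentrations and viability percentages are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])        # µM (synthetic)
viability = np.array([95.0, 82.0, 55.0, 25.0, 8.0])   # % of control (synthetic)

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 15.0, 1.0])
print(f"estimated IC50 ~ {params[2]:.1f} µM")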

Keywords: A549, cedrelopsin, G2/M phase, Hedyotis umbellata

Procedia PDF Downloads 152
293 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices

Authors: Roberta Martino, Viviana Ventre

Abstract:

Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The Discounted Utility Model is the mathematical model of reference for calculating the utility of intertemporal prospects. The discount rate is the main element of the model, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior expected from the predictions of the model and the actual choices made by decision makers. In particular, the term temporal inconsistency indicates those choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate, which, unlike the linear or exponential form assumed by the Discounted Utility Model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease of impatience are two parameters that make it possible to quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that distortions and emotional influences have an inevitable impact on the decision-making process. The degree to which impatience decreases is the focus of the first part of the study. By comparing consistent and inconsistent preferences over time, it was possible to verify that some anomalies in the Discounted Utility Model result from the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences in outcomes rather than their ratio; and the sign effect is analyzed by integrating into the evaluation of prospects with negative outcomes the psychological aspects of loss aversion provided by Prospect Theory. An experiment confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented through the construction of the hyperbolic factor, using questionnaires constructed for each anomaly. This work formalizes the underlying causes of the discrepancy between the Discounted Utility Model and the empirical evidence of preference reversal.
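
The inconsistency discussed above can be illustrated numerically: under exponential discounting, the ranking of a smaller-sooner and a larger-later outcome does not change when both are shifted into the future, whereas under hyperbolic discounting it can reverse. The following Python sketch shows such a reversal with illustrative amounts, delays, and parameter values.

# Sketch contrasting exponential discounting (constant impatience) with
# hyperbolic discounting (decreasing impatience), showing a preference reversal.
# Amounts, delays, and parameter values are illustrative.

def exponential(delay, rate=0.03):
    return (1.0 + rate) ** (-delay)        # constant per-period discount rate

def hyperbolic(delay, k=0.015):
    return 1.0 / (1.0 + k * delay)         # discount rate falls with delay

def preferred(smaller_sooner, larger_later, discount):
    (a_s, t_s), (a_l, t_l) = smaller_sooner, larger_later
    if a_s * discount(t_s) > a_l * discount(t_l):
        return "smaller-sooner"
    return "larger-later"

# 100 now vs 110 in 10 periods, then the same pair shifted 50 periods away
for shift in (0, 50):
    pair = ((100, 0 + shift), (110, 10 + shift))
    print(f"shift {shift:2d}: exponential -> {preferred(*pair, exponential)}, "
          f"hyperbolic -> {preferred(*pair, hyperbolic)}")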

Keywords: decreasing impatience, discount utility model, hyperbolic discount, hyperbolic factor, impatience

Procedia PDF Downloads 81