Search results for: HEMS utility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 674

164 Teaching during the Pandemic Using a Feminist Pedagogy: Classroom Conversations and Practices

Authors: T. Northcut, A. Rai, N. Perkins

Abstract:

Background: The COVID-19 pandemic has had a serious impact on academia in general and social work education in particular, permanently changing the way in which we approach educating students. The new reality of the pandemic, coupled with the much-needed focus on racism across the country, inspired and required educators to get creative with their teaching styles in order to disrupt the power imbalance in the classroom and attend to the multiple layers of needs of diverse students in precarious sociological and economic circumstances. This paper highlights research examining educators with distinctive positionalities and approaches to classroom instruction who use feminist and antiracist pedagogies while adapting to online teaching during the pandemic. As feminist scholars whose ideologies developed during different waves of feminism, our commitment to student-led classrooms, to liberation and equity for all, and to striving for social change unified our feminist teaching pedagogies and provided interpersonal support. Methodology: Following a narrative qualitative inquiry methodology, the five authors of this paper came together to discuss our pedagogical styles and underlying values over Zoom in a series of six conversations. Narrative inquiry is an appropriate method to use when researchers are bound by common stories or personal experiences. The use of feminist pedagogy in the classroom before and during the pandemic guided the discussions. After six sessions, we reached the point of data saturation. All data from the dialogic process were recorded and transcribed. We used in vivo, narrative, and descriptive coding for the data analytic process. 
Results: Analysis of the data revealed several themes, which included (1) the influence of our positionalities as an intersection of race, sexual orientation, gender, and years of teaching experience in the classroom, (2) the meaning and variations between different liberatory pedagogical approaches, (3) the tensions between these approaches and institutional policies and practices, (4) the role of self-reflection in everyday teaching, (5) the distinctions between theory and practice and its utility for students, and (6) the challenges of applying a feminist-centered pedagogical approach during the pandemic while utilizing an online platform. As a collective, we discussed several challenges that limited the use of our feminist pedagogical approaches due to instruction through Zoom.

Keywords: feminist, pedagogy, COVID, zoom

Procedia PDF Downloads 22
163 Investigating Salience Theory’s Implications for Real-Life Decision Making: An Experimental Test of Whether the Allais Paradox Exists under Subjective Uncertainty

Authors: Christoph Ostermair

Abstract:

We deal with the effect of correlation between prospects on human decision making under uncertainty, as proposed by the comparatively new and promising model of “salience theory of choice under risk”. In this regard, we show that the theory entails the prediction that the inconsistency of choices known as the Allais paradox should not be an issue in the context of “real-life decision making”, which typically corresponds to situations of subjective uncertainty. The Allais paradox, probably the best-known anomaly regarding expected utility theory, would then essentially have no practical relevance. If, however, empiricism contradicts this prediction, salience theory might suffer a serious setback. Explanations of the model for variable human choice behavior are mostly the result of a particular mechanism that does not come into play under perfect correlation. Hence, if it turns out that correlation between prospects, as typically found in real-world applications, does not influence human decision making in the expected way, this might to a large extent cost the theory its explanatory power. The empirical literature regarding the Allais paradox under subjective uncertainty is so far rather sparse. Beyond that, the existing results are difficult to rely on, as the presentation formats commonly employed have presumably generated so-called event-splitting effects, thereby distorting subjects’ choice behavior. In our own incentivized experimental study, we control for such effects by means of two different choice settings. We find significant event-splitting effects in both settings, thereby supporting the suspicion that the existing empirical results related to Allais paradoxes under subjective uncertainty may not be able to answer the question at hand. 
Nevertheless, we find that the basic tendency behind the Allais paradox, namely a particular switch of the preference relation due to a modified common consequence shared by two prospects, is still present under both an event-splitting and a coalesced presentation format. Yet, the modal choice pattern is in line with the prediction of salience theory. As a consequence, the effect of correlation proposed by the model might, if anything, only weaken the systematic choice pattern behind the Allais paradox.
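The common-consequence structure behind the paradox can be illustrated with a short sketch. The payoffs are the classic hypothetical Allais amounts (in $M), not the prospects used in our experiment:

```python
def eu(lottery, u):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

A  = [(1.00, 1)]                        # $1M for sure
B  = [(0.10, 5), (0.89, 1), (0.01, 0)]  # shares a 0.89 chance of $1M with A
A2 = [(0.11, 1), (0.89, 0)]             # A with the common consequence set to $0
B2 = [(0.10, 5), (0.90, 0)]             # B with the common consequence set to $0

u = lambda x: x ** 0.5  # any increasing utility function behaves the same way

d1 = eu(A, u) - eu(B, u)    # preference direction in the first choice
d2 = eu(A2, u) - eu(B2, u)  # preference direction in the second choice
# The common consequence contributes 0.89*(u(1) - u(0)) to both options, so the
# two differences are identical: expected utility theory forbids the switch.
print(round(d1, 6), round(d2, 6))
```

The typical empirical pattern (safe option in the first choice, risky option in the second) therefore cannot be rationalized by any utility function under expected utility theory.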

Keywords: Allais paradox, common consequence effect, models of decision making under risk and uncertainty, salience theory

Procedia PDF Downloads 166
162 Histological Study on the Effect of Bone Marrow Transplantation Combined with Curcumin on Pancreatic Regeneration in Streptozotocin Induced Diabetic Rats

Authors: Manal M. Shehata, Kawther M. Abdel-Hamid, Nashwa A. Mohamed, Marwa H. Bakr, Maged S. Mahmoud, Hala M. Elbadre

Abstract:

Introduction: The worldwide rapid increase in diabetes poses a significant challenge to current therapeutic approaches. The therapeutic utility of bone marrow transplantation in diabetes is an attractive approach. However, the oxidative stress generated by hyperglycemia may hinder β-cell regeneration. Curcumin is a dietary spice with antioxidant activity. Aim of work: The present study was undertaken to investigate the therapeutic potential of curcumin, bone marrow transplantation, and their combined effects in the reversal of experimental diabetes. Material and Methods: Fifty healthy adult male albino rats were included in the present study. They were divided into two groups: Group I (control group) included 10 rats. Group II (diabetic group) included 40 rats. Diabetes was induced by a single intraperitoneal injection of streptozotocin (STZ). Group II was further subdivided into four groups (10 rats each): Group II-a (diabetic control). Group II-b: rats received a single intraperitoneal injection of bone marrow suspension (un-fractionated bone marrow cells) prepared from rats of the same family. Group II-c: rats were treated with curcumin orally by gastric intubation for 6 weeks. Group II-d: rats received a combination of a single bone marrow transplantation and curcumin for 6 weeks. After 6 weeks, blood glucose and insulin levels were measured, and the pancreas from all rats was processed for histological, immunohistochemical, and morphometric examination. Results: The diabetic group showed progressive histological changes in the pancreatic islets. Treatment with either curcumin or bone marrow transplantation improved the structure of the islets and reversed streptozotocin-induced hyperglycemia and hypoinsulinemia. The combination of curcumin and bone marrow transplantation elicited a more profound alleviation of streptozotocin-induced changes, including islet regeneration and insulin secretion. 
Conclusion: The use of natural antioxidants combined with bone marrow transplantation to induce pancreatic regeneration is a promising strategy in the management of diabetes.

Keywords: diabetes, pancreatic islets, bone marrow transplantation, curcumin

Procedia PDF Downloads 356
161 Topographic Survey of a Property Located near the Locality of Gircov, Romania

Authors: Carmen Georgeta Dumitrache

Abstract:

Terrestrial measurement science studies the set of field operations and computations carried out to represent the land surface on a plan or map in a specific cartographic projection and at a topographic scale. Surveying methods have evolved alongside society, driven both by utilitarian goals tied to economic activity and by the scientific aim of determining the form and dimensions of the Earth. Field measurement, data processing, and the proper representation of planimetry and landform on drawings and maps rely on topographic and geodetic instruments, computation, and graphical reporting, which require theoretical and practical knowledge from different areas of science and technology. Using topographic and geodetic instruments properly in practice, to measure angles and distances precisely, requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the field measurement results requires calculation methods based on geometry, trigonometry, algebra, mathematical analysis, and computer science. To illustrate these topographic measurements, a survey was carried out for a property located near the locality of Gircov, Romania. We determined the total surface of the plan (T30) and of the parcels/plots, and also traced the coordinates of a parcel in the field. 
The purposes of the planimetric survey were: the exact determination of the bounding surface; the analytical calculation of the surface; comparing the determined surface with the one registered in the property documents; drawing up a location and delineation plan with the adjacencies and contour distances, as well as highlighting the parcels comprising this property; drawing up a location and delineation plan with the adjacencies and contour distances for a parcel from Dave; and tracing in the field the outline points of the plot from the previous step. The ultimate goal of this work was to determine and represent the surface, and also to detach a parcel from the total surface, while respecting the surface condition imposed by the deed of the beneficiary's property.
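The analytical surface calculation mentioned above is commonly done with the shoelace (Gauss) formula over the parcel's boundary coordinates. The sketch below uses a hypothetical parcel, not the Gircov survey coordinates:

```python
def parcel_area(points):
    """Shoelace (Gauss) formula: area of a simple polygon given its
    vertices (x, y) in traversal order; result in square units."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A hypothetical 40 m x 25 m rectangular parcel:
corners = [(0, 0), (40, 0), (40, 25), (0, 25)]
print(parcel_area(corners))  # 1000.0 square metres
```

The same routine supports the comparison step: the computed area can be checked directly against the surface registered in the property documents.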

Keywords: topography, surface, coordinate, modeling

Procedia PDF Downloads 231
160 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split the photon segments along a beam and accurately partition them so that only the true representative water height is extracted for each individual water body. As far as we can establish, there is no automated procedure for making this distinction. Earlier studies have relied on human intervention or river masks. Both are unsatisfactory solutions where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The agreement is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers than previously reported studies. 
Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
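As a rough illustration of the mean WSE step, the sketch below applies a simple iterative 3-sigma outlier filter to synthetic photon heights. The actual ATL13 file handling, the auto-segmentation itself, and the four specific outlier removal methods compared in the study are not reproduced here:

```python
import numpy as np

def mean_wse(heights, n_sigma=3.0):
    """Iteratively drop heights more than n_sigma standard deviations from
    the mean, then return the mean of the surviving photon heights."""
    h = np.asarray(heights, dtype=float)
    while True:
        mu, sd = h.mean(), h.std()
        if sd == 0:          # all remaining heights identical
            return mu
        keep = np.abs(h - mu) <= n_sigma * sd
        if keep.all():       # converged: no more outliers to remove
            return mu
        h = h[keep]

rng = np.random.default_rng(0)
water = rng.normal(12.5, 0.05, 200)     # photons from the water surface
outliers = np.array([14.0, 15.2, 9.8])  # e.g. bank or canopy returns
print(round(mean_wse(np.concatenate([water, outliers])), 2))
```

The contaminating returns are far outside the 3-sigma band of the water-surface photons, so the filtered mean recovers the underlying surface elevation.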

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 35
159 A 3-Dimensional Memory-Based Model for Planning Working Postures for Reaching a Specific Area with Postural Constraints

Authors: Minho Lee, Donghyun Back, Jaemoon Jung, Woojin Park

Abstract:

Current 3-dimensional (3D) posture prediction models commonly provide only a few optimal postures to achieve a specific objective. The problem with such models is that they are incapable of rapidly providing several optimal posture candidates for various situations. To solve this problem, this paper presents a 3D memory-based posture planning (3D MBPP) model, a new digital human model that can analyze the feasible postures in 3D space for reaching tasks that have postural constraints and a specific reaching space. The 3D MBPP model can be applied to work that is done in constrained working postures within a specific reaching space. Examples of such work include driving an excavator, driving automobiles, painting buildings, working at an office, pitching/batting, and boxing. For these types of work, only a limited amount of space is required to store all of the feasible postures, as the hand reach boundary can be determined prior to performing the task. This prevents computation time from increasing exponentially, which has been one of the major drawbacks of memory-based posture planning models in 3D space. This paper validates the utility of the 3D MBPP model using a practical example: analyzing baseball batting posture. In baseball, batters swing with both feet fixed to the ground. This motion is appropriate for the 3D MBPP model since the player must try to hit the ball when it is located inside the strike zone (a limited area) in a constrained posture. The results of the analysis showed that the stored and the optimal postures vary depending on the ball’s flying path, the hitting location, the batter’s body size, and the batting objective. These results can be used to establish optimal postural strategies for achieving the batting objective and performing effective hitting. 
The 3D MBPP model can also be applied to various domains to determine the optimal postural strategies and improve worker comfort.

Keywords: baseball, memory-based, posture prediction, reaching area, 3D digital human models

Procedia PDF Downloads 187
158 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies

Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs

Abstract:

Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow for cost-effective or more precise analyses of relevant sequences, although many such techniques require a known genome, and it can be costly to purchase probes from a company. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts across a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon may be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, and then compared to previously generated phylogenies across this group of gobies. This will provide an assessment of the utility of the technique in cross-species studies and for analyzing low genetic variation within species, as is the case for E. kristinae. This method has potential applications to provide economical ways to expand population genetic and evolutionary biology studies for non-model organisms.

Keywords: coastal lagoons, endangered species, non-model organism, target capture method

Procedia PDF Downloads 165
157 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of MMSE-2:EV scores resulting from applying the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, both in clinical settings and in neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV, and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer’s disease, vascular dementia, mixed dementia, Parkinson’s disease, and conversion of mild cognitive impairment into Alzheimer’s disease. The MMSE-2:EV version was applied one month after the initial assessment, three months after the first reevaluation, and then every six months, alternating the blue and red forms. Adjusted for age and educational level, the raw scores were converted into T scores, and then, using the mean and the standard deviation, the z scores were calculated. The differences in raw scores between the evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach to evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. 
Alternating the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of national test norms. In clinical settings with a large flux of patients, the application of the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can make objective decisions, and for patients it does not take too much time and energy, does not bother them, and does not force them to travel frequently.
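The T-to-z conversion described above follows from T scores being normed to a mean of 50 and a standard deviation of 10; a minimal sketch (the age- and education-corrected norm tables themselves are not reproduced here):

```python
def t_to_z(t_score, mean=50.0, sd=10.0):
    """Convert a T score (normed to mean 50, SD 10) to a z score:
    z = (T - mean) / sd."""
    return (t_score - mean) / sd

# Example: a T score of 35 lies 1.5 standard deviations below the norm mean.
print(t_to_z(35))  # -1.5
```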

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 490
156 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario

Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad

Abstract:

One of the major challenges of today’s era is peak demand, which stresses the transmission lines, raises the cost of energy generation, and ultimately leads to higher electricity bills for end users; it used to be handled through supply-side management. Nowadays, however, that approach has been superseded because of the potential of demand-side management (DSM), with its economic and environmental advantages. DSM of domestic load can play a vital role in reducing the peak load demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) to reduce peak load demands and electricity bills for electric users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results of optimization algorithms. In GAMS software, the MILP (mixed integer linear programming) algorithm is used for optimization. In different cases, different constraints are used for optimization, considering the comfort, needs, and priorities of the end users. Results under real-time pricing and fixed tariff pricing are compared, and the savings in electricity bills are discussed, which demonstrates the potential of demand-side management to reduce electricity bills and peak loads. It is seen that using a real-time pricing tariff instead of a fixed tariff helps reduce electricity bills. Moreover, the simulation results of the proposed energy management system show substantial power savings. 
It is anticipated that the result of this research will prove to be highly effective to the utility companies as well as in the improvement of domestic DR.
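The scheduling idea can be sketched as a toy MILP: choose the start slot of one controllable appliance with a fixed run length so as to minimize cost under a real-time tariff. This sketch uses `scipy.optimize.milp` rather than GAMS, and the tariff and appliance data are hypothetical:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

tariff = [8, 7, 6, 5, 9, 12, 14, 10]  # hypothetical real-time prices per kWh
power = 2.0                           # appliance load in kW
run_len = 2                           # appliance must run 2 consecutive hours
slots = range(len(tariff) - run_len + 1)

# One binary variable per candidate start slot; the cost of starting in
# slot s is the energy price summed over the appliance's run window.
cost = np.array([power * sum(tariff[s:s + run_len]) for s in slots])

res = milp(
    c=cost,
    constraints=LinearConstraint(np.ones((1, len(cost))), lb=1, ub=1),  # one start
    integrality=np.ones(len(cost)),  # integer variables (0/1 via the bounds)
    bounds=Bounds(0, 1),
)
best_start = int(np.argmax(res.x))
print(best_start, res.fun)  # cheapest start slot and the resulting bill
```

A full energy management controller would add one such variable set per appliance, plus comfort constraints (earliest/latest finish times, non-interruptible runs), exactly the kind of user-defined constraints varied across the cases above.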

Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)

Procedia PDF Downloads 282
155 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
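As a rough sketch of the simulation side (the training-sample generation, not the neural estimator), a fOU path can be produced by an Euler scheme driven by exact fractional Gaussian noise. The Cholesky generator below is simple but O(n^3), unlike the fast algorithm described above, and all parameter values are illustrative:

```python
import numpy as np

def fgn(n, hurst, dt, rng):
    """Exact fractional Gaussian noise over n steps of size dt, generated
    by Cholesky factorization of the fGn covariance matrix."""
    k = np.arange(n)
    # Autocovariance of fGn: 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H) * dt^2H
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fou_path(n=500, hurst=0.1, kappa=2.0, mu=0.0, sigma=1.0, dt=0.01, seed=0):
    """Euler scheme for dX = kappa*(mu - X) dt + sigma dB_H, a rough
    (low-Hurst) fractional Ornstein-Uhlenbeck path."""
    rng = np.random.default_rng(seed)
    db = fgn(n, hurst, dt, rng)
    x = np.empty(n + 1)
    x[0] = mu
    for i in range(n):
        x[i + 1] = x[i] + kappa * (mu - x[i]) * dt + sigma * db[i]
    return x

path = fou_path()
print(path.shape)
```

Training a parameter-estimation network then amounts to generating many such paths with randomly drawn (hurst, kappa, sigma) and regressing the network output onto the generating parameters, which is why a fast sampler matters.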

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 83
154 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balanced mode share across travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, causing significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch from their existing, rather convenient modes of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case study of the likely shift to bus transit. The deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. 
Estimates of the shift to bus transit indicate that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. Car users, however, are not expected to shift to the bus transit system.
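The multinomial logit form behind the calibrated utility functions can be sketched as follows; the utility values below are hypothetical placeholders, not the calibrated Surat coefficients:

```python
import math

def logit_probs(utilities):
    """Multinomial logit: P(mode i) = exp(V_i) / sum_j exp(V_j),
    stabilized by subtracting the maximum utility before exponentiating."""
    vmax = max(utilities.values())
    expv = {m: math.exp(v - vmax) for m, v in utilities.items()}
    total = sum(expv.values())
    return {m: e / total for m, e in expv.items()}

# Systematic utilities V = ASC + b_time*time + b_cost*cost (illustrative):
V = {"two_wheeler": -0.8, "car": -1.6, "auto_rickshaw": -1.2, "bus": -1.4}
probs = logit_probs(V)
print({m: round(p, 3) for m, p in probs.items()})
```

A mode shift estimate then follows by recomputing the probabilities with an improved bus utility (e.g. shorter wait time) and comparing the before/after bus share by current mode.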

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 235
153 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, GEO (geosynchronous orbit) SAR is getting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship’s velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the information of the small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since this simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. 
Since the normal ‘stop and go’ model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for detecting moving marine vessels is evaluated.
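Step (3) can be caricatured as computing the slant-range history to the moving ship point by point, since the stop-and-go approximation fails at geosynchronous range. The geometry, wavelength, and tracks below are simplified placeholders, not the paper's signal model:

```python
import numpy as np

WAVELENGTH = 0.24  # radar wavelength in m (illustrative L-band value)
R_GEO = 35_786e3   # geosynchronous altitude, m

t = np.linspace(0.0, 100.0, 1001)  # slow time over a long GEO aperture, s
# Simplified straight-line tracks: a slowly drifting GEO platform and a
# ship moving at (5, 2) m/s on a flat sea surface.
sat = np.stack([50.0 * t, np.zeros_like(t), np.full_like(t, R_GEO)], axis=1)
ship = np.stack([5.0 * t, 2.0 * t, np.zeros_like(t)], axis=1)

r = np.linalg.norm(sat - ship, axis=1)          # slant-range history R(t)
phase = -4.0 * np.pi * (r - r[0]) / WAVELENGTH  # two-way phase, referenced to
signal = np.exp(1j * phase)                     # r[0] for numerical sanity
print(r.shape, round(float(r.max() - r.min()), 3), "m of range migration")
```

Evaluating R(t) sample by sample like this, rather than freezing the geometry per pulse, is the essential change the abstract alludes to when it says the range model must be reconsidered.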

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 143
152 Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China

Authors: Danying Gu, Xiaoyan Li, Yuanlei He

Abstract:

As a knowledge-intensive industry, the nuclear industry highly values safety and quality. The life cycle of an NPP (nuclear power plant) can last 100 years, from initial research and design to decommissioning. How to implement high-quality knowledge management, and thereby contribute to a safer, more advanced, and more economical NPP, is the most important issue and responsibility of knowledge management. As the lead of the nuclear industry, a nuclear research and design institute has the competitive advantages of advanced technology, knowledge, and information, so the DKM (design knowledge management) of nuclear research and design institutes is the core of knowledge management in the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China are introduced. For this digital intelligent NPP, the whole design process is based on a digital design platform that includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation maintenance support platform, and a digital equipment design and manufacture integrated collaborative platform. In order for all the design data and information to transfer across design, construction, commissioning, and operation, the overall architecture of a new-built digital NPP should become a modern knowledge management system. A digital information transfer model across the NPP life cycle is therefore proposed in this paper. The challenges related to design knowledge transfer are also discussed, such as digital information handover, data centers and data sorting, and a unified data coding system. 
On the other hand, effective delivery of design information during the construction and operation phases helps the construction contractor and operating unit comprehensively understand the design intent, components, and systems, substantially increasing safety, quality, and economic benefits over the life cycle. The operation and maintenance records generated during NPP operation are critical for maintaining the operating state of the plant, especially their comprehensiveness, validity, and traceability. Requirements for an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency.

Keywords: design knowledge management, digital nuclear power plant, knowledge transfer, life cycle

Procedia PDF Downloads 248
151 Development of Requirements Analysis Tool for Medical Autonomy in Long-Duration Space Exploration Missions

Authors: Lara Dutil-Fafard, Caroline Rhéaume, Patrick Archambault, Daniel Lafond, Neal W. Pollock

Abstract:

Improving resources for medical autonomy of astronauts in prolonged space missions, such as a Mars mission, requires not only technology development, but also decision-making support systems. The Advanced Crew Medical System - Medical Condition Requirements study, funded by the Canadian Space Agency, aimed to create knowledge content and a scenario-based query capability to support medical autonomy of astronauts. The key objective of this study was to create a prototype tool for identifying medical infrastructure requirements in terms of medical knowledge, skills and materials. A multicriteria decision-making method was used to prioritize the highest risk medical events anticipated in a long-term space mission. Starting with those medical conditions, event sequence diagrams (ESDs) were created in the form of decision trees where the entry point is the diagnosis and the end points are the predicted outcomes (full recovery, partial recovery, or death/severe incapacitation). The ESD formalism was adapted to characterize and compare possible outcomes of medical conditions as a function of available medical knowledge, skills, and supplies in a given mission scenario. An extensive literature review was performed and summarized in a medical condition database. A PostgreSQL relational database was created to allow query-based evaluation of health outcome metrics with different medical infrastructure scenarios. Critical decision points, skill and medical supply requirements, and probable health outcomes were compared across chosen scenarios. The three medical conditions with the highest risk rank were acute coronary syndrome, sepsis, and stroke. Our efforts demonstrate the utility of this approach and provide insight into the effort required to develop appropriate content for the range of medical conditions that may arise.
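The scenario-based, query-driven evaluation described above can be sketched with a toy relational schema. This is illustrative only: the table and column names are hypothetical stand-ins (the actual ACMS-MCR schema is not described in the abstract), and SQLite is used in place of PostgreSQL so the example is self-contained.

```python
import sqlite3

# Hypothetical stand-in schema: each scenario is a medical infrastructure
# configuration; each outcome row holds an endpoint probability estimated
# from the condition's event sequence diagram (ESD).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE scenario (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE outcome (
    scenario_id INTEGER REFERENCES scenario(id),
    condition   TEXT,   -- e.g. 'sepsis', 'acute coronary syndrome', 'stroke'
    endpoint    TEXT,   -- 'full recovery' | 'partial recovery' | 'death/incapacitation'
    probability REAL
);
""")
con.executemany("INSERT INTO scenario VALUES (?, ?)",
                [(1, "full kit"), (2, "reduced kit")])
con.executemany("INSERT INTO outcome VALUES (?, ?, ?, ?)", [
    (1, "sepsis", "full recovery", 0.70),
    (1, "sepsis", "death/incapacitation", 0.10),
    (2, "sepsis", "full recovery", 0.40),
    (2, "sepsis", "death/incapacitation", 0.35),
])

# Compare one health outcome metric (probability of the worst endpoint)
# across infrastructure scenarios.
rows = con.execute("""
    SELECT s.name, o.condition, o.probability
    FROM outcome o JOIN scenario s ON s.id = o.scenario_id
    WHERE o.endpoint = 'death/incapacitation'
    ORDER BY o.probability DESC
""").fetchall()
for name, cond, p in rows:
    print(f"{cond} | {name}: P(death/incapacitation) = {p:.2f}")
```

A real deployment would hold one row set per ESD branch, so critical decision points and supply requirements could be compared the same way.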

Keywords: decision support system, event-sequence diagram, exploration mission, medical autonomy, scenario-based queries, space medicine

Procedia PDF Downloads 102
150 A Case Study of Low Head Hydropower Opportunities at Existing Infrastructure in South Africa

Authors: Ione Loots, Marco van Dijk, Jay Bhagwan

Abstract:

Historically, South Africa had various small-scale hydropower installations in remote areas that were not incorporated in the national electricity grid. Unfortunately, in the 1960s most of these plants were decommissioned when Eskom, the national power utility, rapidly expanded its grid and capability to produce cheap, reliable, coal-fired electricity. This situation persisted until 2008, when rolling power cuts started to affect all citizens. This, together with the rising monetary and environmental cost of coal-based power generation, has sparked new interest in small-scale hydropower development, especially in remote areas or at locations (like wastewater treatment works) that could not afford to be without electricity for long periods at a time. Even though South Africa does not have the same, large-scale, hydropower potential as some other African countries, significant potential for micro- and small-scale hydropower is hidden in various places. As an example, large quantities of raw and potable water are conveyed daily under either pressurized or gravity conditions over large distances and elevations. Due to the relative water scarcity in the country, South Africa also has more than 4900 registered dams of varying capacities. However, institutional capacity and skills have not been maintained in recent years and therefore the identification of hydropower potential, as well as the development of micro- and small-scale hydropower plants has not gained significant momentum. An assessment model and decision support system for low head hydropower development has been developed to assist designers and decision makers with first-order potential analysis. As a result, various potential sites were identified and many of these sites were situated at existing infrastructure like weirs, barrages or pipelines. 
One reason for the specific interest in existing infrastructure is the fact that capital expenditure could be minimized and another is the reduced negative environmental impact compared to greenfield sites. This paper will explore the case study of retrofitting an unconventional and innovative hydropower plant to the outlet of a wastewater treatment works in South Africa.
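The paper's assessment model itself is not reproduced in the abstract, but first-order screening of a low head site of the kind described typically rests on the standard hydropower relation P = ρ·g·Q·H·η. A minimal sketch, with an assumed overall efficiency of 70% (a common planning figure, not a value from the paper):

```python
# First-order hydropower potential: P = rho * g * Q * H * eta.
# This is the standard screening formula such decision-support tools are
# typically built on, not the paper's actual model.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_potential_kw(flow_m3s: float, head_m: float,
                       efficiency: float = 0.7) -> float:
    """Return first-order electrical potential in kW for a given flow and head."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

# e.g. a hypothetical wastewater treatment works outfall:
# 0.5 m^3/s discharged over a 3 m drop
print(f"{hydro_potential_kw(0.5, 3.0):.1f} kW")  # prints "10.3 kW"
```

Even a few kilowatts can matter at a site like a wastewater works, which is why retrofits at existing infrastructure with modest flow and head are worth screening at all.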

Keywords: low head hydropower, retrofitting, small-scale hydropower, wastewater treatment works

Procedia PDF Downloads 218
149 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered; further, it can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach explicitly considers the internal geometry of the reinforcing tows, so their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites.
For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and the other on an embedded semi-analytical approach. The goal is the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
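As a hedged illustration of the homogenization idea underlying the effective-material treatment of tows (this is a textbook calculation, not the authors' model), the classical Voigt and Reuss rule-of-mixtures bounds on a tow's effective axial stiffness can be computed directly. The fiber and matrix moduli below are typical carbon/epoxy values, assumed purely for illustration.

```python
# Voigt (iso-strain) and Reuss (iso-stress) bounds on effective axial
# modulus of a fiber/matrix mixture. Moduli are illustrative assumptions.
E_FIBER = 230.0   # GPa, typical carbon fiber
E_MATRIX = 3.5    # GPa, typical epoxy matrix

def voigt(vf: float) -> float:
    """Upper bound (iso-strain): E = vf*Ef + (1 - vf)*Em."""
    return vf * E_FIBER + (1.0 - vf) * E_MATRIX

def reuss(vf: float) -> float:
    """Lower bound (iso-stress): 1/E = vf/Ef + (1 - vf)/Em."""
    return 1.0 / (vf / E_FIBER + (1.0 - vf) / E_MATRIX)

vf = 0.6  # fiber volume fraction
print(f"Effective modulus bounds at vf={vf}: "
      f"{reuss(vf):.1f} GPa <= E <= {voigt(vf):.1f} GPa")
```

The wide gap between the bounds is itself the argument the abstract makes: a homogenized description alone cannot resolve the local stress concentrations that drive damage, which is why the meso- and micro-scale models are needed.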

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 326
148 Utility of Thromboelastography to Reduce Coagulation-Related Mortality and Blood Component Rate in Neurosurgery ICU

Authors: Renu Saini, Deepak Agrawal

Abstract:

Background: Patients with head and spinal cord injury frequently have deranged coagulation profiles and require blood product transfusion perioperatively. Thromboelastography (TEG) is a bedside global test of coagulation which may have a role in deciding the need for transfusion in such patients. Aim: To assess the usefulness of TEG in a department of neurosurgery in decreasing transfusion rates and coagulation-related mortality in traumatic head and spinal cord injury. Methods: A retrospective comparative study was carried out in the department of neurosurgery, with two groups. The ‘control’ group comprises patients for whom data was collected over the 6 months (1/6/2009-31/12/2009) prior to installation of the TEG machine; the ‘test’ group comprises patients for whom data was collected over the 6 months (1/1/2013-30/6/2013) after TEG installation. The total numbers of platelet, FFP, and cryoprecipitate transfusions were noted in both groups, along with in-hospital mortality and length of stay. Results: The two groups were matched in patient age and sex, number of head and spinal cord injury cases, number of patients with thrombocytopenia, and number of patients who underwent operation. A total of 178 patients (135 head injury and 43 spinal cord injury patients) were admitted to the neurosurgery department from June 2009 to December 2009, i.e., prior to TEG installation; after TEG installation, a total of 243 patients (197 head injury and 46 spinal cord injury patients) were admitted. After the introduction of TEG, platelet transfusion was significantly reduced compared to the control group (67 units to 34 units, p<0.001). The mortality rate was also significantly reduced after installation (77 patients to 57 patients, p<0.001). Length of stay was significantly reduced (range 1-211 days before installation vs. 1-115 days after, p=0.02).
Conclusion: Bedside TEG can dramatically reduce the platelet transfusion requirement in a department of neurosurgery. TEG also led to a marked decrease in mortality rate and length of stay in patients with traumatic head and spinal cord injuries. We recommend its use as a standard of care in these patients.

Keywords: blood component transfusion, mortality, neurosurgery ICU, thromboelastography

Procedia PDF Downloads 304
147 Blood Ketones as a Point of Care Testing in Paediatric Emergencies

Authors: Geetha Jayapathy, Lakshmi Muthukrishnan, Manoj Kumar Reddy Pulim, Radhika Raman

Abstract:

Introduction: Ketones are the end products of fatty acid metabolism and a source of energy for vital organs such as the brain, heart, and skeletal muscles. Ketones are produced in excess when glucose is not available as a source of energy or cannot be utilized, as in diabetic ketoacidosis. Children admitted to the emergency department often have starvation ketosis which is not clinically manifest. Deciding whether to admit children presenting to the emergency room with subtle signs can be difficult at times. Point-of-care blood ketone testing can be done at the bedside, even in a primary-level care setting, to supplement and guide management decisions. Hence, this study was done to explore the utility of this simple bedside parameter as a supplement in assessing pediatric patients presenting to the emergency department. Objectives: To estimate blood ketones of children admitted in the emergency department, and to analyze the significance of blood ketones in various disease conditions. Methods: Blood ketones were measured with a point-of-care testing instrument (Abbott Precision Xceed Pro meter) in patients admitted through the emergency room and in out-patients (through the sample collection centre). Study population: Children aged 1 month to 18 years were included in the study; 250 cases (in-patients) and 250 controls (out-patients) were collected. Study design: Prospective observational study. Data on details of illness and physiological status were documented. Blood ketones were compared between the two groups, and all in-patients were categorized into various system groups and analysed. Results: Blood ketones were higher in in-patients (range 0 to 7.2, mean 1.28) than in out-patients (range 0 to 1.9, mean 0.35). This difference was statistically significant, with a p value < 0.001.
In-patients with shock (mean 4.15) and diarrheal dehydration (mean 1.85) had significantly higher blood ketone values compared to patients with other system involvement. Conclusion: Blood ketones were significantly elevated above the normal range in sick pediatric patients requiring admission. Patients with various forms of shock had very high blood ketone values, as found in diabetic ketoacidosis. Ketone values in diarrheal dehydration were moderately high, correlating with the degree of dehydration.

Keywords: admission, blood ketones, paediatric emergencies, point of care testing

Procedia PDF Downloads 186
146 Insights into the Oversight Functions of the Legislative Power under the Nigerian Constitution

Authors: Olanrewaju O. Adeojo

Abstract:

The constitutional system of government provides for the federating units of the Federal Republic of Nigeria, the States, and the Local Councils under a governing structure of the Executive, the Legislature, and the Judiciary, each with distinct powers and spheres of influence. The legislative powers of the Federal Republic of Nigeria and of a State are vested in the National Assembly and the House of Assembly of the State, respectively. The Local Council exercises legislative powers in clearly defined matters as provided by the Constitution. Though the Executive, as constituted by the President and the Governor, is charged with the powers of execution and administration, the Legislature is empowered to ensure that such powers are duly exercised in accordance with the provisions of the Constitution. These vast areas do not make the oversight functions indefinite; more importantly, the purposes for which the powers may be exercised are circumscribed. They include, among others, any matter with respect to which the Legislature has power to make laws. Indeed, the law provides for the competence of the Legislature to procure evidence, examine all persons as witnesses, summon any person to give evidence, and issue a warrant to compel attendance in matters relevant to the subject matter of its investigation. Yet the exercise of the functions envisaged by the Constitution is to an extent literal, because the Legislature lacks the power to enforce the outcome. Furthermore, the docility of the Legislature is apparent where the agency or authority being called to question belongs to the branch of government that enforces sanctions; the process allows for cover-up and obstruction of justice. Oversight functions are ineffective where the Executive is overbearing. The friction that ensues between the Legislature and the Executive when the former seeks to project the spirit of a constitutional mandate calls for concern.
There is little utility in a power that can so easily be frustrated. To an extent, the arm of government with coercive authority appears to overshadow the laid-down functions of the Legislature. Recourse to adjudication by the Judiciary has not proved to be of serious utility, especially in a clime where, as in Nigeria, the wheels of justice grind slowly owing to the nature of the legal system. Consequently, the law and the Constitution, drawing lessons from other jurisdictions, need to insulate legislative oversight from the vagaries of the Executive. A strong and virile Constitutional Court that determines, within a specific timeline, issues pertaining to the oversight functions of the legislative power is apposite.

Keywords: constitution, legislative, oversight, power

Procedia PDF Downloads 105
145 Use of Thrombolytics for Acute Myocardial Infarctions in Resource-Limited Settings, Globally: A Systematic Literature Review

Authors: Sara Zelman, Courtney Meyer, Hiren Patel, Lisa Philpotts, Sue Lahey, Thomas Burke

Abstract:

Background: As the global burden of disease shifts from infectious diseases to noncommunicable diseases, there is growing urgency to provide treatment for time-sensitive illnesses, such as ST-Elevation Myocardial Infarctions (STEMIs). The standard of care for STEMIs in developed countries is Percutaneous Coronary Intervention (PCI). However, this is inaccessible in resource-limited settings. Before the discovery of PCI, Streptokinase (STK) and other thrombolytic drugs were first-line treatments for STEMIs. STK has been recognized as a cost-effective and safe treatment for STEMIs; however, in settings which lack access to PCI, it has not become the established second-line therapy. A systematic literature review was conducted to geographically map the use of STK for STEMIs in resource-limited settings. Methods: Our literature review group searched the databases Cinhal, Embase, Ovid, Pubmed, Web of Science, and WHO’s Index Medicus. The search terms included ‘thrombolytics’ AND ‘myocardial infarction’ AND ‘resource-limited’ and were restricted to human studies and papers written in English. A considerable number of studies came from Latin America; however, these studies were not written in English and were excluded. The initial search yielded 3,487 articles, which was reduced to 3,196 papers after titles were screened. Three medical professionals then screened abstracts, from which 291 articles were selected for full-text review and 94 papers were chosen for final inclusion. These articles were then analyzed and mapped geographically. Results: This systematic literature review revealed that STK has been used for the treatment of STEMIs in 33 resource-limited countries, with 18 of 94 studies taking place in India. Furthermore, 13 studies occurred in Pakistan, followed by Iran (6), Sri Lanka (5), Brazil (4), China (4), and South Africa (4). 
Conclusion: Our systematic review revealed that STK has been used for the treatment of STEMIs in 33 resource-limited countries, with the highest utilization occurring in India. This demonstrates that even though STK has high utility for STEMI treatment in resource-limited settings, it still has not become the standard of care. Future research should investigate the barriers preventing the establishment of STK use as second-line treatment after PCI.

Keywords: cardiovascular disease, global health, resource-limited setting, ST-Elevation Myocardial Infarction, Streptokinase

Procedia PDF Downloads 123
144 Clostridium thermocellum DBT-IOC-C19, A Potential CBP Isolate for Ethanol Production

Authors: Nisha Singh, Munish Puri, Collin Barrow, Deepak Tuli, Anshu S. Mathur

Abstract:

The biological conversion of lignocellulosic biomass to ethanol is a promising strategy for addressing the depletion of fossil fuels. Existing bioethanol production technologies have cost constraints due to mandatory pretreatment and extensive enzyme production steps. A unique process configuration known as consolidated bioprocessing (CBP) is believed to be a potentially cost-effective process due to its efficient integration of enzyme production, saccharification, and fermentation into one step. For several favorable reasons, such as single-step conversion, no need to add exogenous enzymes, and facilitated product recovery, CBP has gained the attention of researchers worldwide. However, several technical and economic barriers need to be overcome to make consolidated bioprocessing a commercially viable process. Finding a natural candidate CBP organism is critically important, and thermophilic anaerobes are the preferred microorganisms. The thermophilic anaerobes suitable for CBP mainly belong to the genera Clostridium, Caldicellulosiruptor, Thermoanaerobacter, Thermoanaerobacterium, and Geobacillus. Among them, Clostridium thermocellum has received increased attention as a high-utility CBP candidate due to one of the highest growth rates on crystalline cellulose, its highly efficient cellulosome system, and its ability to produce ethanol directly from cellulose. Recently, the availability of genetic and molecular tools for the metabolic engineering of Clostridium thermocellum has further improved the viability of a commercial CBP process. With this view, we have specifically screened cellulolytic and xylanolytic, thermophilic, anaerobic, ethanol-producing bacteria from unexplored hot springs in India. One of the isolates is a potential CBP organism identified as a new strain of Clostridium thermocellum.
This strain has shown superior Avicel and xylan degradation under unoptimized conditions compared to reported wild-type strains of Clostridium thermocellum, and produced more than 50 mM ethanol in 72 hours from 1% Avicel at 60°C. In addition, the strain shows good ethanol tolerance and grows on both hexose and pentose sugars. Hence, with further optimization, this new strain could be developed into a potential CBP microbe.

Keywords: Clostridium thermocellum, consolidated bioprocessing, ethanol, thermophilic anaerobes

Procedia PDF Downloads 378
143 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. 
The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.

Keywords: fire prediction, drone, smoke toxicity, analyser, fire management

Procedia PDF Downloads 59
142 Novel AdoMet Analogs as Tools for Nucleic Acid Labeling

Authors: Milda Nainyte, Viktoras Masevicius

Abstract:

Biological methylation is the transfer of a methyl group from S-adenosyl-L-methionine (AdoMet) onto N-, C-, O- or S-nucleophiles in DNA, RNA, proteins, or small biomolecules. The reaction is catalyzed by enzymes called AdoMet-dependent methyltransferases (MTases), which represent more than 3% of the proteins in the cell. In the general mechanism, the methyl group from AdoMet replaces a hydrogen atom of the nucleophilic center, producing methylated DNA and S-adenosyl-L-homocysteine (AdoHcy). Recently, DNA methyltransferases have been used for the sequence-specific, covalent labeling of biopolymers. Two types of MTase-catalyzed labeling are known, referred to as two-step and one-step. In two-step labeling, an alkylating fragment is transferred onto DNA in a sequence-specific manner, and a reporter group, such as biotin, is then attached for selective visualization using suitable coupling chemistries. This approach is rather laborious, and the chemical coupling does not always proceed to completion, but a variety of reporter groups can be chosen in the second step, which gives the method its flexibility. In one-step labeling, the AdoMet analog is designed with the reporter group already attached to the functional group. The one-step method is thus a more convenient tool for labeling biopolymers, avoiding additional chemical reactions and the selection of reaction conditions, and reducing time costs. However, an effective AdoMet analog suitable for one-step labeling of biopolymers that also contains a cleavable bond, required to reduce interference with PCR, has not yet been reported.
To expand the practical utility of this important enzymatic reaction, cofactors with activated sulfonium-bound side chains have been produced; these can serve as surrogate cofactors for a variety of wild-type and mutant DNA and RNA MTases, enabling covalent attachment of the side chains to their target sites in DNA, RNA, or proteins (an approach named methyltransferase-directed Transfer of Activated Groups, mTAG). Compounds containing a hex-2-yn-1-yl moiety have proved to be efficient alkylating agents for labeling of DNA. Herein we describe synthetic procedures for the preparation of N-biotinoyl-N’-(pent-4-ynoyl)cystamine, starting from the coupling of cystamine with pentynoic acid and finally attaching biotin as the reporter group. This constitutes the synthesis of the first AdoMet-based cofactor containing a cleavable reporter group suitable for one-step labeling.

Keywords: AdoMet analogs, DNA alkylation, cofactor, methyltransferases

Procedia PDF Downloads 172
141 Destigmatising Generalised Anxiety Disorder: The Differential Effects of Causal Explanations on Stigma

Authors: John McDowall, Lucy Lightfoot

Abstract:

Stigma constitutes a significant barrier to the recovery and social integration of individuals affected by mental illness. Although there is some debate in the literature regarding the definition and utility of stigma as a concept, it is widely accepted that it comprises three components: stereotypical beliefs, prejudicial reactions, and discrimination. Stereotypical beliefs form the cognitive, knowledge-based component of stigma, referring to beliefs (often negative) about members of a group that are based on cultural and societal norms (e.g. ‘People with anxiety are just weak’). Prejudice refers to the affective/evaluative component of stigma and describes the endorsement of negative stereotypes and the resulting negative emotional reactions (e.g. ‘People with anxiety are just weak, and they frustrate me’). Discrimination refers to the behavioural component of stigma, which is arguably the most problematic, as it exerts a direct effect on the stigmatized person and may lead people to behave in a hostile or avoidant way towards them (e.g. refusal to hire them). Research exploring anti-stigma initiatives focuses primarily on an educational approach, with the view that accurate information will replace misconceptions and decrease stigma. Many approaches take a biogenetic stance, emphasising brain and biochemical deficits, the idea being that ‘mental illness is an illness like any other.’ While this approach tends to effectively reduce blame, it has also demonstrated negative effects such as increasing prognostic pessimism, the desire for social distance, and the perception of stereotypes. In the present study, 144 participants were split into three groups and read one of three vignettes presenting causal explanations for Generalised Anxiety Disorder (GAD): one explanation emphasised biogenetic factors as being important in the etiology of GAD, another emphasised psychosocial factors (e.g.
aversive life events, poverty, etc.), and a third stressed the adaptive features of the disorder from an evolutionary viewpoint. A variety of measures tapping the various components of stigma were administered following the vignettes. No difference in stigma measures as a function of causal explanation was found. People who had contact with mental illness in the past were significantly less stigmatising across a wide range of measures, but this did not interact with the type of causal explanation.

Keywords: generalised anxiety disorder, discrimination, prejudice, stigma

Procedia PDF Downloads 257
140 Utility of Thromboelastography Derived Maximum Amplitude and R-Time (MA-R) Ratio as a Predictor of Mortality in Trauma Patients

Authors: Arulselvi Subramanian, Albert Venencia, Sanjeev Bhoi

Abstract:

Coagulopathy of trauma is an early endogenous coagulation abnormality that develops shortly after injury and results in high mortality. In emergency trauma situations, viscoelastic tests may be better at identifying the various phenotypes of coagulopathy and can demonstrate the contribution of platelet function to coagulation. We aimed to determine thrombin generation and clot strength by estimating the ratio of maximum amplitude to R-time (MA-R ratio), for identifying trauma coagulopathy and predicting subsequent mortality. Methods: We conducted a prospective cohort analysis of acutely injured adult trauma patients (18-50 years), admitted within 24 hours of injury, over one year at a Level I trauma center. Patients with a history of coagulation abnormalities, liver disease, renal impairment, or drug intake were excluded. Thromboelastography was done, and the ratio was calculated by dividing the MA by the R-time (MA-R). Patients were further stratified into subgroups based on the calculated MA-R quartiles. The first sample was taken within 24 hours of injury, with follow-up on the 3rd and 5th days of injury. Mortality was the primary outcome. Results: 100 acutely injured patients [mean age 36.6±14.3 years; 94% male; injury severity score 12.2 (9-32)] were included in the study. The median (min-max) MA-R ratio on admission was 15.01 (0.4-88.4), which declined to 11.7 (2.2-61.8) on day 3 and rose slightly to 13.1 (0.06-68) on day 5. There were no significant differences between subgroups in age or gender. In the subgroup with the lowest MA-R ratios, MA-R1 (<8.90; n=27), the injury severity score was significantly elevated. MA-R2 (8.91-15.0; n=23), MA-R3 (15.01-19.30; n=24), and MA-R4 (>19.3; n=26) showed no differences in their admission laboratory investigations, although a slight decline in hemoglobin, red blood cell count, and platelet count was observed compared to the other subgroups.
A significantly prolonged R-time and a shortened alpha angle and MA were also seen in MA-R1. An elevated incidence of mortality also correlated significantly with low MA-R ratios on admission (p=0.003). Temporal changes in the MA-R ratio did not correlate with mortality. Conclusion: The MA-R ratio provides a snapshot of early clot function, focusing specifically on thrombin burst and clot strength. In our observation, patients with the lowest MA-R ratios (MA-R1) had significantly increased mortality compared with all other groups (45.5% in MA-R1 compared with <25% in MA-R2 to MA-R3 and 9.1% in MA-R4; p<0.003). The MA-R ratio may prove highly useful for identifying at-risk patients early, when other physiologic indicators are absent.
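The ratio and the quartile stratification used above can be expressed in a few lines. This is only a sketch: it assumes MA in millimetres and R-time in minutes (the usual TEG units, not stated in the abstract) and uses the cohort-specific cut-offs the abstract reports.

```python
from bisect import bisect_right

def ma_r_ratio(ma_mm: float, r_min: float) -> float:
    """MA-R ratio: TEG maximum amplitude (mm) divided by R-time (min)."""
    return ma_mm / r_min

# Quartile cut-offs reported for this study's cohort (MA-R1 .. MA-R4).
CUTOFFS = (8.90, 15.0, 19.30)

def ma_r_subgroup(ratio: float) -> str:
    """Assign a patient to MA-R1..MA-R4 using the cohort cut-offs."""
    return f"MA-R{bisect_right(CUTOFFS, ratio) + 1}"

r = ma_r_ratio(52.0, 12.0)   # e.g. MA 52 mm with a prolonged R-time of 12 min
print(f"{r:.2f} -> {ma_r_subgroup(r)}")
```

A low ratio arises from either a weak clot (low MA) or slow initiation (long R-time), which is why the single number captures both thrombin burst and clot strength.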

Keywords: coagulopathy, trauma, thromboelastography, mortality

Procedia PDF Downloads 137
139 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can benefit greatly from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analysis, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., "social", "reward") across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their "proximity" to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing the words in each text to be represented as vectors (words that frequently appear together lie near each other in the semantic space). Each voxel in a brain mask can then be represented as the normalized vector sum of all the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., "vision", "decision making") or collection of terms (e.g., "theory of mind", "social", "agent"), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word-cloud visualizations of the nearest semantic neighbors of a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations and relies on state-of-the-art "open vocabulary" methods that go beyond mere word counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
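The core voxel-as-vector computation can be sketched in a few lines. This is a toy illustration, not the study's pipeline: the word embedding here is random, whereas the actual method would learn vectors from study texts.

```python
import numpy as np

# Toy semantic space: in the real technique, these vectors would be learned
# from the texts of brain imaging studies rather than drawn at random.
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=8) for w in ["reward", "social", "vision", "motion"]}

def voxel_vector(study_words):
    """Normalized vector sum of all words from studies activating this voxel."""
    v = np.sum([word_vecs[w] for w in study_words], axis=0)
    return v / np.linalg.norm(v)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Proximity of a hypothetical voxel (activated in "reward"-heavy studies)
# to the term "reward":
vox = voxel_vector(["reward", "reward", "social"])
score = cosine(vox, word_vecs["reward"])
```

Repeating this for every voxel against a term vector (or an average of several term vectors) yields the continuous brain-wide proximity maps described above.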

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 424
138 Posterior Cortical Atrophy Phenotype of Alzheimer’s Dementia: A Case Report

Authors: Joana Beyer

Abstract:

Background: Alzheimer’s disease (AD) is the predominant cause of dementia, characterized by progressive cognitive decline. Posterior cortical atrophy (PCA) is a less common variant of AD, primarily affecting younger individuals and presenting with visual, visuospatial, and visuoperceptual deficits, often leading to delayed diagnosis because of its atypical presentation. Case Presentation: We report the case of a 58-year-old woman referred to psychiatric services with a two-year history of progressive visuospatial decline, mild memory difficulties, and language impairments, notably anomia. Despite undergoing cataract and squint surgeries, her visual symptoms persisted, impacting her professional life as a music educator. Neuropsychological evaluation revealed profound visuoperceptual and visuospatial disturbances, with neuroimaging supporting a diagnosis of PCA. Treatment with donepezil led to symptom improvement, highlighting both the challenges of managing this atypical form of AD and the importance of early intervention. Methods: The diagnostic process involved comprehensive physical and neuropsychological assessments and neuroimaging, including MRI and F-18 FDG PET-CT, which demonstrated severe bilateral posterior cortical involvement. The case underscores the utility of these modalities in diagnosing PCA. Results: The initiation of donepezil, an acetylcholinesterase inhibitor, resulted in symptom improvement, emphasizing the potential for AD treatments to benefit PCA patients. However, challenges in management, including treatment side effects and the necessity of multidisciplinary care, are discussed. Conclusion: This case highlights PCA's diagnostic challenges due to its atypical presentation and the broader implications for managing younger patients with early-onset dementia.
It underscores the necessity for early recognition, comprehensive assessment, and tailored management strategies, including both pharmacological and non-pharmacological interventions, to improve patients' quality of life. Additionally, the case illustrates the need for expanding community memory services to accommodate younger patients with atypical forms of dementia, advocating for a more inclusive approach to dementia care.

Keywords: Alzheimer’s disease, posterior cortical atrophy, dementia, diagnosis, management, donepezil, early-onset dementia

Procedia PDF Downloads 26
137 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method for finding the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process for navigating the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase involving meticulous error identification and correction, outlier removal, and garnet-specific feature engineering. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model, whose accuracy rose from 0.32 on the original dataset to 0.88 after refinement.
This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
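The refine-then-retrain workflow described above can be sketched generically. Everything below is illustrative: the data are synthetic, the interquartile-range (IQR) outlier filter stands in for one of the study's refinement steps, and the feature names are hypothetical; the study's actual model and features are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a literature-collected dataset: three hypothetical
# descriptors (e.g. dopant radius, dopant concentration, lattice parameter)
# and a conductivity-like target, with some deliberately corrupt entries.
rng = np.random.default_rng(42)
X = rng.uniform(size=(300, 3))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.05, size=300)
y[:10] += 5  # inject bad entries for the refinement step to remove

def iqr_filter(X, y, k=1.5):
    """Drop rows whose target lies beyond k * IQR of the quartiles."""
    q1, q3 = np.percentile(y, [25, 75])
    mask = (y >= q1 - k * (q3 - q1)) & (y <= q3 + k * (q3 - q1))
    return X[mask], y[mask]

# Refine, then fit a regression model and score it on held-out data.
Xc, yc = iqr_filter(X, y)
Xtr, Xte, ytr, yte = train_test_split(Xc, yc, random_state=0)
model = RandomForestRegressor(random_state=0).fit(Xtr, ytr)
score = r2_score(yte, model.predict(Xte))
```

The point mirrors the abstract's finding: the model's ceiling is set largely by data quality, so cleaning and domain-specific feature engineering can move accuracy far more than model tuning alone.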

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 30
136 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson’s disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is to adjust the daily L-DOPA dose, followed by adjunctive therapies, usually accounting for the L-DOPA equivalent daily dose (LEDD). LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work was to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC has been found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 received L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, LEDD was similar between the two groups.
The mean (± standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group: -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg versus -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite similar LEDD, OPC demonstrated a significantly greater reduction in off time than an additional 100 mg dose of L-DOPA. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.
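To make the "similar LEDD, different effect" point concrete, a minimal sketch of an LEDD calculation follows. The conversion factors here are ASSUMED for illustration only (immediate-release L-DOPA counted at face value; a COMT inhibitor modeled as a fractional multiplier on the concurrent L-DOPA dose); they are not taken from the ADOPTION study, and published LEDD tables should be consulted for any real use.

```python
# Hypothetical LEDD sketch: LEDD = levodopa dose plus a COMT-inhibitor
# contribution modeled as a multiplier on the concurrent levodopa dose.
# The 0.5 multiplier below is an assumption for this example.
def ledd(levodopa_mg_per_day, comt_multiplier=0.0):
    return levodopa_mg_per_day * (1.0 + comt_multiplier)

# Two hypothetical Week-4 regimens arriving at the same LEDD:
ledd_opc = ledd(400, comt_multiplier=0.5)        # 400 mg/day L-DOPA + a COMT inhibitor
ledd_extra = ledd(500) + 100                     # 500 mg/day L-DOPA + an extra 100 mg
```

As the abstract argues, two regimens can be LEDD-equivalent on paper yet differ markedly in off-time reduction, because LEDD ignores dose timing and the onset and duration of effect.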

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 33
135 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAM for predicting power consumption at the office-building and nationwide levels. However, there is limited illustration of its anomaly detection capabilities, prescriptive analytics case studies, and its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on AHU power consumption and building cooling load from Jan 2018 to Aug 2019, collected from an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager.
The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for detecting anomalous power consumption patterns, illustrated with real-world use cases.
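The interval-based flagging rule described above is independent of how the GAM itself is fitted, and can be sketched as follows. The bounds and readings below are hypothetical; in the deployed system they would come from the GAM's forward-predicted ranges and the digital twin's sensor stream.

```python
# Sketch of the anomaly rule: a reading inside the GAM's predicted interval
# scores 0; outside it, the score is the distance beyond the nearer bound.
def anomaly_score(observed, lower, upper):
    if observed < lower:
        return lower - observed
    if observed > upper:
        return observed - upper
    return 0.0

def flag_anomalies(readings, lowers, uppers, threshold=0.0):
    """Return indices of readings whose deviation magnitude exceeds threshold."""
    scores = [anomaly_score(o, l, u) for o, l, u in zip(readings, lowers, uppers)]
    return [i for i, s in enumerate(scores) if s > threshold]

# Hypothetical hourly AHU readings (kW) against forward-predicted bounds:
# the second reading exceeds its upper bound, the third falls below its lower bound.
flagged = flag_anomalies([52.0, 61.5, 48.0],
                         [45.0, 45.0, 50.0],
                         [60.0, 60.0, 62.0])
```

The deviation magnitude, rather than a binary flag, is what then feeds the rule-based conditions that suggest a course of action to the facilities manager.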

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 127