Search results for: utility gains
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1059

219 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among market prices, or between prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a need for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process, from the family of fractional processes, offers a promising path: the rough SCP can be effectively described by certain transformations of the fOU. We employed neural networks to learn the behavior of these processes. To train the network, we developed a fast algorithm for generating valid, suitably large samples from the appropriate process. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, paving the way for further investigation of other processes in financial mathematics. The utility of the SCP extends beyond its immediate application: it also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios.
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
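As an illustration of the sampling step, here is a minimal sketch of generating fOU sample paths: fractional Gaussian noise is drawn via a Cholesky factorization of its covariance, and the fOU SDE dX = -kappa*X dt + sigma dB^H is discretized with an Euler scheme. The function names and parameter values are illustrative, not from the paper, and the O(n^3) Cholesky generator is far slower than the fast algorithm the abstract describes.

```python
import numpy as np

def fgn_cholesky(n, hurst, dt, rng):
    """Sample fractional Gaussian noise (fBm increments) via Cholesky
    factorization of its covariance. O(n^3): slow, but enough to
    illustrate the model the fast generator produces at scale."""
    k = np.arange(n)
    gamma = 0.5 * dt ** (2 * hurst) * (np.abs(k + 1) ** (2 * hurst)
            - 2.0 * np.abs(k) ** (2 * hurst) + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return L @ rng.standard_normal(n)

def fou_path(n, hurst, kappa, sigma, dt=0.01, x0=0.0, rng=None):
    """Euler scheme for the fOU SDE dX = -kappa*X dt + sigma dB^H."""
    rng = np.random.default_rng() if rng is None else rng
    db = fgn_cholesky(n, hurst, dt, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] - kappa * x[k] * dt + sigma * db[k]
    return x

# A rough path: Hurst exponent well below 1/2
path = fou_path(500, hurst=0.1, kappa=2.0, sigma=0.5,
                rng=np.random.default_rng(0))
```

Paths like these, generated over a grid of (hurst, kappa, sigma) values, would form the training set from which a network regresses the parameters.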

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 92
218 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. India's economic growth has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metropolitan cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch over from their existing, rather convenient mode of travel, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. The deterioration of the city's bus-based public transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS.
Estimation of the shift to bus transit indicates that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
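The calibrated mode-specific utility functions feed a multinomial logit choice model. A minimal sketch is below; the coefficients and attribute values are placeholders, not the study's SPSS-calibrated estimates, and bus transit is taken as the base alternative.

```python
import numpy as np

# Placeholder coefficients: purely illustrative, not the study's estimates.
BETA_TIME, BETA_COST = -0.05, -0.01   # disutility per minute, per rupee
ASC = {"two_wheeler": 0.8, "car": 0.3, "auto_rickshaw": 0.5, "bus": 0.0}  # bus = base

def choice_probabilities(time_min, cost_inr):
    """Multinomial logit: P(m) = exp(V_m) / sum_k exp(V_k),
    with V_m = ASC_m + beta_t * time_m + beta_c * cost_m."""
    modes = list(ASC)
    v = np.array([ASC[m] + BETA_TIME * time_min[m] + BETA_COST * cost_inr[m]
                  for m in modes])
    p = np.exp(v - v.max())   # subtract max for numerical stability
    return dict(zip(modes, p / p.sum()))

# Hypothetical trip attributes for one traveler
probs = choice_probabilities(
    time_min={"two_wheeler": 20, "car": 18, "auto_rickshaw": 25, "bus": 35},
    cost_inr={"two_wheeler": 15, "car": 60, "auto_rickshaw": 40, "bus": 10})
```

Re-evaluating the probabilities with improved bus attributes (shorter time, same cost) is how a modal-shift estimate of the kind reported above would be computed.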

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 245
217 Towards a More Inclusive Society: A Study on the Assimilation and Integration of the Migrant Children in Kerala

Authors: Arun Perumbilavil Anand

Abstract:

For the past few years, the state of Kerala has been witnessing a large inflow of migrant workers from other states of the country, which emerged as a result of demographic transition and Gulf emigration. In-migration patterns in Kerala have changed over time, with migrants who have a longer residence history bringing their families to the state, thereby making the process more complicated and divergent in its approach. These developments have led to an increase in the young migrant population, at least in some parts of the state, which has opened up doubts and questions related to their future in the host society. At this juncture, the study examines the factors associated with the assimilation and wellbeing of migrant children in the society of Kerala. As one of its objectives, the study also analyzed the influence and role played by educational institutions (both public and private) in meeting the needs and aspirations of both the children and their parents. The study gains significance as it tries to identify various impediments that hinder the cognitive skill formation and behaviour patterns of migrant children in the host society. Data and Methodology: The study is based on primary data collected through a series of interviews and interactions held with parents, children, and teachers of different educational institutions, both public and private. The primary survey also made use of research techniques such as observation, in-depth interviews, and the case study method. The study was conducted in schools in the Kanjikode area of the Palakkad district in Kerala. The findings are based on a survey of four schools and 40 migrant children. Findings: The study found that the majority of the children have wholly integrated and assimilated into the host society. The influence of the peer group was quite visible in stimulating the assimilation process.
Most of the children do not have any emotional or cultural sentiments attached to their state of origin; they consider Kerala their ‘home state’ and the local language (Malayalam) their ‘mother tongue’. The study also found that the existing education system in the host society fails to meet the needs and aspirations of migrants as well as those of their children. On a comparative scale, private schools have to some extent succeeded in fulfilling the special requirements of migrant children. An interesting finding is that the children of the migrants show better health conditions and wellbeing than the natives, which is usually addressed as an epidemiologic paradox. As a concluding remark, the study recommends incorporating the concept of inclusive education into the education system of the state, giving due emphasis to those at higher risk of being excluded or marginalized, along with fostering increased interaction between diverse groups.

Keywords: assimilation, Kerala, migrant children, well-being

Procedia PDF Downloads 154
216 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been widely used. In view of the low temporal resolution of low-orbit SAR and the need for high temporal resolution SAR data, geosynchronous orbit (GEO) SAR is attracting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so the utility and efficacy of GEO SAR for moving marine vessels are still uncertain. This paper assesses the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator, which focuses on geometric fidelity rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship's rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the primitives (triangles) visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method: the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling.
Since the usual ‘stop-and-go’ assumption is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generating the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are presented, with GEO SAR images simulated for different postures, velocities, satellite orbits, and SAR platforms. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
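A rough back-of-the-envelope sketch of why the 'stop-and-go' assumption fails at GEO altitude: the two-way signal delay is long enough that a moving ship displaces appreciably within a single echo's travel time. The numbers below are nominal, not taken from the paper.

```python
# Nominal constants, not the paper's values.
C = 299_792_458.0        # speed of light, m/s
R_GEO = 35_786_000.0     # nominal GEO altitude, m

tau = 2.0 * R_GEO / C    # two-way signal travel time, ~0.24 s

ship_speed = 10.0        # m/s, a fast-moving vessel
displacement = ship_speed * tau   # metres the ship moves within one echo delay

# At low Earth orbit (~700 km) the same round trip takes only a few
# milliseconds, so the target is effectively frozen between transmit and
# receive; at GEO the ~2.4 m displacement is comparable to a resolution
# cell, so the range model must account for motion during the delay.
```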

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 160
215 Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China

Authors: Danying Gu, Xiaoyan Li, Yuanlei He

Abstract:

As a knowledge-intensive industry, the nuclear industry places a high value on safety and quality. The life cycle of a nuclear power plant (NPP) can last 100 years, from initial research and design to decommissioning. Implementing high-quality knowledge management that contributes to a safer, more advanced, and more economical NPP is the most important issue and responsibility for knowledge management. As the lead of the nuclear industry, with competitive advantages in advanced technology, knowledge, and information, the nuclear research and design institute makes design knowledge management (DKM) the core of knowledge management for the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China are introduced. For this digital, intelligent NPP, the whole design process is based on a digital design platform which includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation maintenance support platform, and a digital equipment design and manufacture integrated collaborative platform. To enable all design data and information to transfer across design, construction, commissioning, and operation, the overall architecture of the new-built digital NPP should constitute a modern knowledge management system, so a digital information transfer model across the NPP life cycle is proposed in this paper. Challenges related to design knowledge transfer are also discussed, such as digital information handover, data centers and data sorting, and a unified data coding system.
On the other hand, effective delivery of design information during the construction and operation phases will contribute to a comprehensive understanding of design ideas, components, and systems for the construction contractor and operation unit, largely increasing safety, quality, and economic benefits over the life cycle. The operation and maintenance records generated during NPP operation have great significance for maintaining the operating state of the NPP, especially the comprehensiveness, validity, and traceability of the records. Requirements for an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency.

Keywords: design knowledge management, digital nuclear power plant, knowledge transfer, life cycle

Procedia PDF Downloads 259
214 Development of Requirements Analysis Tool for Medical Autonomy in Long-Duration Space Exploration Missions

Authors: Lara Dutil-Fafard, Caroline Rhéaume, Patrick Archambault, Daniel Lafond, Neal W. Pollock

Abstract:

Improving resources for medical autonomy of astronauts in prolonged space missions, such as a Mars mission, requires not only technology development, but also decision-making support systems. The Advanced Crew Medical System - Medical Condition Requirements study, funded by the Canadian Space Agency, aimed to create knowledge content and a scenario-based query capability to support medical autonomy of astronauts. The key objective of this study was to create a prototype tool for identifying medical infrastructure requirements in terms of medical knowledge, skills and materials. A multicriteria decision-making method was used to prioritize the highest risk medical events anticipated in a long-term space mission. Starting with those medical conditions, event sequence diagrams (ESDs) were created in the form of decision trees where the entry point is the diagnosis and the end points are the predicted outcomes (full recovery, partial recovery, or death/severe incapacitation). The ESD formalism was adapted to characterize and compare possible outcomes of medical conditions as a function of available medical knowledge, skills, and supplies in a given mission scenario. An extensive literature review was performed and summarized in a medical condition database. A PostgreSQL relational database was created to allow query-based evaluation of health outcome metrics with different medical infrastructure scenarios. Critical decision points, skill and medical supply requirements, and probable health outcomes were compared across chosen scenarios. The three medical conditions with the highest risk rank were acute coronary syndrome, sepsis, and stroke. Our efforts demonstrate the utility of this approach and provide insight into the effort required to develop appropriate content for the range of medical conditions that may arise.
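The scenario-based query capability can be sketched as a small relational schema plus a join. This is a hypothetical, heavily simplified illustration: the actual structure and content of the study's PostgreSQL database are not described in the abstract, and SQLite stands in for PostgreSQL so the sketch is self-contained.

```python
import sqlite3

# Hypothetical schema: one row per medical condition, one row per
# (condition, infrastructure scenario) pair giving the predicted ESD end point.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE condition (
    id        INTEGER PRIMARY KEY,
    name      TEXT,
    risk_rank INTEGER
);
CREATE TABLE outcome (
    condition_id INTEGER REFERENCES condition(id),
    scenario     TEXT,   -- available medical infrastructure scenario
    result       TEXT    -- predicted ESD end point
);
""")
con.executemany("INSERT INTO condition VALUES (?, ?, ?)", [
    (1, "acute coronary syndrome", 1),
    (2, "sepsis", 2),
    (3, "stroke", 3),
])
con.executemany("INSERT INTO outcome VALUES (?, ?, ?)", [
    (1, "full_kit",    "full recovery"),
    (1, "minimal_kit", "death/severe incapacitation"),
])

# Scenario-based query: predicted outcomes per condition, highest risk first.
rows = con.execute("""
    SELECT c.name, o.scenario, o.result
    FROM condition c JOIN outcome o ON o.condition_id = c.id
    ORDER BY c.risk_rank, o.scenario
""").fetchall()
```

Comparing the `result` column across `scenario` values is the kind of query the study uses to weigh medical infrastructure trade-offs.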

Keywords: decision support system, event-sequence diagram, exploration mission, medical autonomy, scenario-based queries, space medicine

Procedia PDF Downloads 111
213 A Case Study of Low Head Hydropower Opportunities at Existing Infrastructure in South Africa

Authors: Ione Loots, Marco van Dijk, Jay Bhagwan

Abstract:

Historically, South Africa had various small-scale hydropower installations in remote areas that were not incorporated in the national electricity grid. Unfortunately, in the 1960s most of these plants were decommissioned when Eskom, the national power utility, rapidly expanded its grid and capability to produce cheap, reliable, coal-fired electricity. This situation persisted until 2008, when rolling power cuts started to affect all citizens. This, together with the rising monetary and environmental cost of coal-based power generation, has sparked new interest in small-scale hydropower development, especially in remote areas or at locations (like wastewater treatment works) that could not afford to be without electricity for long periods at a time. Even though South Africa does not have the same, large-scale, hydropower potential as some other African countries, significant potential for micro- and small-scale hydropower is hidden in various places. As an example, large quantities of raw and potable water are conveyed daily under either pressurized or gravity conditions over large distances and elevations. Due to the relative water scarcity in the country, South Africa also has more than 4900 registered dams of varying capacities. However, institutional capacity and skills have not been maintained in recent years and therefore the identification of hydropower potential, as well as the development of micro- and small-scale hydropower plants has not gained significant momentum. An assessment model and decision support system for low head hydropower development has been developed to assist designers and decision makers with first-order potential analysis. As a result, various potential sites were identified and many of these sites were situated at existing infrastructure like weirs, barrages or pipelines. 
One reason for the specific interest in existing infrastructure is the fact that capital expenditure could be minimized and another is the reduced negative environmental impact compared to greenfield sites. This paper will explore the case study of retrofitting an unconventional and innovative hydropower plant to the outlet of a wastewater treatment works in South Africa.
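A first-order potential analysis of the kind the decision support system performs rests on the standard hydropower relation P = ρ·g·Q·H·η. A minimal sketch follows; the 70% overall efficiency and the example flow and head are generic assumptions, not figures from the study.

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3s, head_m, efficiency=0.7):
    """First-order hydropower potential: P = rho * g * Q * H * eta, in kW.
    The default 70% overall efficiency is a generic small-plant assumption."""
    return RHO_WATER * G * flow_m3s * head_m * efficiency / 1000.0

# e.g. a hypothetical treatment-works outfall: 0.5 m^3/s over a 4 m drop
p_kw = hydro_power_kw(0.5, 4.0)
```

Screening registered dams, weirs, and pipelines with this relation is what makes a rapid first-order assessment of thousands of candidate sites tractable.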

Keywords: low head hydropower, retrofitting, small-scale hydropower, wastewater treatment works

Procedia PDF Downloads 230
212 The Effect of Paper-Based Concept Mapping on Students' Academic Achievement and Attitude in Science Education

Authors: Orhan Akınoğlu, Arif Çömek, Ersin Elmacı, Tuğba Gündoğdu

Abstract:

The concept map is known to be a powerful tool for organizing the ideas and concepts of an individual's mind. This tool is a kind of visual map that illustrates the relationships between the concepts of a certain subject. The effect of concept mapping on cognitive and affective qualities has been a research topic among educational researchers for decades, and educators want to utilize it both as an instructional tool and as an assessment tool in classes. For that reason, this study aimed to determine the effect of concept mapping as a learning strategy in science classes on students' academic achievement and attitude. The research employed a randomized pre-test post-test control group design. Data were collected from 60 sixth-grade students at a randomly selected primary school in Turkey. The school's sixth-grade classes were analyzed according to students' academic achievement, science attitude, gender, mathematics and science course grades, and GPAs before the implementation. Two of the classes were found to be equivalent (t=0.983, p>0.05); one was randomly assigned as the experimental group and the other as the control group. During a five-week period, the experimental group students (N=30) used the paper-based concept mapping method, while the control group students (N=30) were taught with the traditional approach, according to the science and technology education curriculum, for the light and sound subject. Both groups were taught by the same teacher, who was experienced in using concept mapping in science classes. Before the implementation, the teacher spent two hours explaining the theory of concept maps to the experimental group students and showing them how to create paper-based concept maps individually. For the following two hours, she asked them to create concept maps related to their former science subjects and gave them feedback by reviewing their maps, to be sure that they could create them during the implementation.
The data were collected using a science achievement test, a science attitude scale, and a personal information form. The science achievement test and science attitude scale were administered as pre-test and post-test, while the personal information form was administered only once. The reliability coefficient of the achievement test was KR-20=0.76, and the Cronbach's alpha of the attitude scale was 0.89. SPSS statistical software was used to analyze the data. According to the results, there was a statistically significant difference between the experimental and control groups for academic achievement but not for attitude. The experimental group had significantly greater gains on the academic achievement test than the control group (t=0.02, p<0.05). The findings showed that paper-based concept mapping can be used as an effective method for improving students' academic achievement in science classes. The results have implications for further research.
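The group comparison reported here is an independent-samples t-test. A minimal sketch of the pooled-variance statistic is below, run on toy post-test scores that are not the study's data.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance, the form of
    test used for the experimental vs. control comparison."""
    na, nb = len(a), len(b)
    # statistics.variance is the sample (n-1) variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Toy post-test scores, not the study's data.
experimental = [78, 85, 90, 72, 88, 81]
control      = [70, 75, 68, 74, 72, 69]
t_stat = pooled_t(experimental, control)
```

The statistic would then be compared against the t distribution with na + nb - 2 degrees of freedom to obtain the reported p-value.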

Keywords: concept mapping, science education, constructivism, academic achievement, science attitude

Procedia PDF Downloads 390
211 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in aerospace, automotive, and maritime industry. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, and thus, their interaction, and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. 
For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and the other making use of an embedded semi-analytical approach. The goal is the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 332
210 Utility of Thromboelastography to Reduce Coagulation-Related Mortality and Blood Component Rate in Neurosurgery ICU

Authors: Renu Saini, Deepak Agrawal

Abstract:

Background: Patients with head and spinal cord injury frequently have deranged coagulation profiles and require blood product transfusion perioperatively. Thromboelastography (TEG) is a ‘bedside’ global test of coagulation which may have a role in deciding the need for transfusion in such patients. Aim: To assess the usefulness of TEG in the department of neurosurgery in decreasing transfusion rates and coagulation-related mortality in traumatic head and spinal cord injury. Methods: A retrospective comparative study was carried out in the department of neurosurgery over a period of one year. There were two groups in this study. The ‘control’ group comprises patients for whom data were collected over 6 months (1/6/2009-31/12/2009) prior to installation of the TEG machine. The ‘test’ group includes patients for whom data were collected over 6 months (1/1/2013-30/6/2013) after TEG installation. The total numbers of platelet, FFP, and cryoprecipitate transfusions were noted in both groups, along with in-hospital mortality and length of stay. Results: The two groups were matched in patient age and sex, number of head and spinal cord injury cases, number of patients with thrombocytopenia, and number of patients who underwent operation. A total of 178 patients (135 head injury and 43 spinal cord injury patients) were admitted to the neurosurgery department during June 2009 to December 2009, i.e., prior to TEG installation; after TEG installation, a total of 243 patients (197 head injury and 46 spinal cord injury patients) were admitted. After the introduction of TEG, platelet transfusion was significantly reduced compared to the control group (67 units to 34 units, p=0.000). The mortality rate was significantly reduced after installation (77 patients to 57 patients, p=0.000). Length of stay was also significantly reduced (prior to installation 1-211 days, after installation 1-115 days, p=0.02).
Conclusion: Bedside TEG can dramatically reduce platelet transfusion requirements in the department of neurosurgery. TEG also led to a drastic decrease in mortality rate and length of stay in patients with traumatic head and spinal cord injuries. We recommend its use as a standard of care in patients with traumatic head and spinal cord injuries.

Keywords: blood component transfusion, mortality, neurosurgery ICU, thromboelastography

Procedia PDF Downloads 312
209 Power Asymmetry and Major Corporate Social Responsibility Projects in Mhondoro-Ngezi District, Zimbabwe

Authors: A. T. Muruviwa

Abstract:

Empirical studies of the current CSR agenda have been dominated by literature from the North, at the expense of the nations of the South where most TNCs are located. Therefore, owing to the limitations of the current discourse, dominated by Western ideas such as voluntarism, philanthropy, the business case, and economic gains, scholars have been calling for a new CSR agenda that is South-centred and addresses the needs of developing nations. The development theme has dominated the recent literature as scholars concerned with the relationship between business and society have tried to understand its connection with CSR. Despite a plethora of literature on the roles of corporations in local communities and the impact of CSR initiatives, there is a lack of adequate empirical evidence to help us understand the nexus between CSR and development. For all the claims made about the positive and negative consequences of CSR, there is surprisingly little information about the outcomes it delivers. This study is a response to the claims made about the developmental aspect of CSR in developing countries. It offers an empirical basis for assessing the major CSR projects fulfilled by a major mining company, Zimplats, in Mhondoro-Ngezi, Zimbabwe. The neo-liberal idea of capitalism and market domination has empowered TNCs to stamp their authority in developing countries. TNCs have made their mark in developing nations as they assert their global private authority, rivalling or implicitly challenging the state in many functions. This dominance of corporate power raises great concerns over tendencies toward environmental, social, and human rights abuses, as well as over how to make TNCs increasingly accountable. The hegemonic power of TNCs in developing countries has had a tremendous impact on overall CSR practices.
While TNCs are key drivers of globalization, they may act responsibly in their Global Northern home countries, where there is a combination of legal mechanisms and the fear of the civil society activism associated with corporate scandals. Using a triangulated approach combining qualitative and quantitative methods, the study found that most CSR projects in Zimbabwe are dominated and directed by Zimplats because of the power it possesses. Most of the major CSR projects benefit the mining company, as they serve its business plans. What can be deduced from the study is that the infrastructural development initiatives by Zimplats confirm that CSR is a tool to advance business objectives. This shows that although proponents of CSR might claim that business has a mandate for social obligations to society, we must not forget the dominant idea that the primary function of CSR is to enhance the firm's profitability.

Keywords: hegemonic power, projects, reciprocity, stakeholders

Procedia PDF Downloads 234
208 Blood Ketones as a Point of Care Testing in Paediatric Emergencies

Authors: Geetha Jayapathy, Lakshmi Muthukrishnan, Manoj Kumar Reddy Pulim, Radhika Raman

Abstract:

Introduction: Ketones are the end products of fatty acid metabolism and a source of energy for vital organs such as the brain, heart, and skeletal muscles. Ketones are produced in excess when glucose is not available as a source of energy or cannot be utilized, as in diabetic ketoacidosis. Children admitted to the emergency department often have starvation ketosis that is not clinically manifest. Deciding whether to admit children with subtle signs to the emergency room can be difficult at times. Point-of-care blood ketone testing can be done at the bedside, even in a primary-level care setting, to supplement and guide management decisions. Hence this study was done to explore the utility of this simple bedside parameter as a supplement in assessing pediatric patients presenting to the emergency department. Objectives: To estimate the blood ketones of children admitted to the emergency department, and to analyze the significance of blood ketones in various disease conditions. Methods: Blood ketones were measured with a point-of-care testing instrument (Abbott Precision Xceed Pro meters) in patients admitted to the emergency room and in out-patients (through the sample collection centre). Study population: Children aged 1 month to 18 years were included in the study; 250 cases (in-patients) and 250 controls (out-patients) were collected. Study design: Prospective observational study. Data on details of illness and physiological status were documented. Blood ketones were compared between the two groups, and all in-patients were categorized into various system groups and analysed. Results: Blood ketones were higher in in-patients (range 0 to 7.2, mean 1.28) than in out-patients (range 0 to 1.9, mean 0.35). This difference was statistically significant, with a p value < 0.001.
In-patients with shock (mean 4.15) and diarrheal dehydration (mean 1.85) had significantly higher blood ketone values than patients with other system involvement. Conclusion: Blood ketones were significantly elevated (above the normal range) in sick pediatric patients requiring admission. Patients with various forms of shock had very high blood ketone values, as found in diabetic ketoacidosis. Ketone values in diarrheal dehydration were moderately high, correlating with the degree of dehydration.

Keywords: admission, blood ketones, paediatric emergencies, point of care testing

Procedia PDF Downloads 193
207 Insights into the Oversight Functions of the Legislative Power under the Nigerian Constitution

Authors: Olanrewaju O. Adeojo

Abstract:

The constitutional system of government provides for the federating units of the Federal Republic of Nigeria, the States and the Local Councils, under a governing structure of the Executive, the Legislature and the Judiciary, with attendant distinct powers and spheres of influence. The legislative powers of the Federal Republic of Nigeria and of a State are vested in the National Assembly and the House of Assembly of the State, respectively. The Local Council exercises legislative powers in clearly defined matters as provided by the Constitution. Though the executive, as constituted by the President and the Governor, is charged with the powers of execution and administration, the legislature is empowered to ensure that such powers are duly exercised in accordance with the provisions of the Constitution. The vastness of these areas does not make oversight functions indefinite, and, more importantly, the purposes for which the powers may be exercised are circumscribed. They include, among others, any matter with respect to which the legislature has power to make laws. Indeed, the law provides for the competence of the legislature to procure evidence, examine all persons as witnesses, summon any person to give evidence, and issue a warrant to compel attendance in matters relevant to the subject of its investigation. Yet the exercise of the functions envisaged by the Constitution remains, to an extent, nominal, because the legislature lacks the power to enforce the outcome. Furthermore, the docility of the legislature is apparent where the agency or authority called into question belongs to the very branch of government expected to enforce sanctions; the process allows for cover-up and obstruction of justice. Oversight functions cannot operate where the executive is overbearing, and the friction that ensues between the Legislature and the Executive when the former seeks to project the spirit of a constitutional mandate calls for concern.
A power that can easily be frustrated is hardly worth providing for. To an extent, the arm of government with coercive authority appears to overshadow the laid-down functions of the legislature. Recourse to adjudication by the Judiciary has not proved to be of any serious utility, especially in a clime where, as in Nigeria, the wheels of justice grind slowly owing to the nature of the legal system. Consequently, the law and the Constitution, drawing lessons from other jurisdictions, need to insulate legislative oversight from the vagaries of the executive. A strong and virile Constitutional Court that determines, within specific timelines, issues pertaining to the oversight functions of the legislative power is apposite.

Keywords: constitution, legislative, oversight, power

Procedia PDF Downloads 117
206 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior – thermal and electrical – indoor environment, inhabitants’ comfort, occupancy, occupants’ behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings’ features are implemented according to the buildings’ thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
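The step-by-step substitution described above can be sketched in a few lines. The "building model" below is a deliberately simple, hypothetical response surface standing in for the Pleiades simulation, and every input name and value (setpoint, occupancy, domestic hot water draw, the metered consumption) is an illustrative assumption, not data from the study:

```python
# Toy step-by-step calibration: swap standardized inputs for measured values,
# one parameter at a time, in decreasing order of sensitivity.
measured_annual_kwh = 17600.0  # hypothetical metered consumption

standard_inputs = {"setpoint_c": 19.0, "occupancy": 0.7, "dhw_l_day": 100.0}
field_inputs = {"setpoint_c": 21.5, "occupancy": 0.55, "dhw_l_day": 132.0}

def simulate(p):
    # Hypothetical linear response surface standing in for the building model.
    return 9000 + 250 * p["setpoint_c"] + 4000 * p["occupancy"] + 8 * p["dhw_l_day"]

def sensitivity(name):
    # Output change when only this input is swapped for its measured value.
    trial = dict(standard_inputs)
    trial[name] = field_inputs[name]
    return abs(simulate(trial) - simulate(standard_inputs))

order = sorted(standard_inputs, key=sensitivity, reverse=True)

params = dict(standard_inputs)
gap0 = abs(simulate(params) - measured_annual_kwh)
print(f"standardized inputs: performance gap {gap0:.0f} kWh")
for name in order:
    params[name] = field_inputs[name]
    gap = abs(simulate(params) - measured_annual_kwh)
    print(f"after substituting {name}: gap {gap:.0f} kWh")
```

Ranking parameters by a one-at-a-time sensitivity before substituting them mirrors the sensitivity analysis the study performs ahead of calibration.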

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 137
205 Use of Thrombolytics for Acute Myocardial Infarctions in Resource-Limited Settings, Globally: A Systematic Literature Review

Authors: Sara Zelman, Courtney Meyer, Hiren Patel, Lisa Philpotts, Sue Lahey, Thomas Burke

Abstract:

Background: As the global burden of disease shifts from infectious diseases to noncommunicable diseases, there is growing urgency to provide treatment for time-sensitive illnesses, such as ST-Elevation Myocardial Infarctions (STEMIs). The standard of care for STEMIs in developed countries is Percutaneous Coronary Intervention (PCI); however, this is inaccessible in resource-limited settings. Before the advent of PCI, Streptokinase (STK) and other thrombolytic drugs were first-line treatments for STEMIs. STK has been recognized as a cost-effective and safe treatment for STEMIs; however, in settings which lack access to PCI, it has not become the established second-line therapy. A systematic literature review was conducted to geographically map the use of STK for STEMIs in resource-limited settings. Methods: Our literature review group searched the databases CINAHL, Embase, Ovid, PubMed, Web of Science, and the WHO’s Index Medicus. The search terms included ‘thrombolytics’ AND ‘myocardial infarction’ AND ‘resource-limited’ and were restricted to human studies and papers written in English. A considerable number of studies came from Latin America; however, these studies were not written in English and were excluded. The initial search yielded 3,487 articles, reduced to 3,196 papers after titles were screened. Three medical professionals then screened abstracts, from which 291 articles were selected for full-text review and 94 papers were chosen for final inclusion. These articles were then analyzed and mapped geographically. Results: This systematic literature review revealed that STK has been used for the treatment of STEMIs in 33 resource-limited countries, with 18 of the 94 studies taking place in India. Furthermore, 13 studies occurred in Pakistan, followed by Iran (6), Sri Lanka (5), Brazil (4), China (4), and South Africa (4).
Conclusion: Our systematic review revealed that STK has been used for the treatment of STEMIs in 33 resource-limited countries, with the highest utilization occurring in India. This demonstrates that even though STK has high utility for STEMI treatment in resource-limited settings, it still has not become the standard of care. Future research should investigate the barriers preventing the establishment of STK use as second-line treatment after PCI.

Keywords: cardiovascular disease, global health, resource-limited setting, ST-Elevation Myocardial Infarction, Streptokinase

Procedia PDF Downloads 125
204 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications, which are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise, and project activities are often executed in fast-tracking in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account in the calculation of the cost of an engineering change or contract modification, even though several research projects have addressed this subject. The proposed models have not been accepted by companies, nor have they been accepted in court: they require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the productivity loss that occurs when a project change is introduced.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster in 30-job projects than in 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 226
203 Clostridium thermocellum DBT-IOC-C19, A Potential CBP Isolate for Ethanol Production

Authors: Nisha Singh, Munish Puri, Collin Barrow, Deepak Tuli, Anshu S. Mathur

Abstract:

The biological conversion of lignocellulosic biomass to ethanol is a promising strategy for addressing the ongoing depletion of fossil fuels. Existing bioethanol production technologies face cost constraints due to the mandatory pretreatment and extensive enzyme production steps involved. A unique process configuration known as consolidated bioprocessing (CBP) is believed to be a potentially cost-effective process due to its efficient integration of enzyme production, saccharification, and fermentation into one step. Due to several favorable features, such as single-step conversion, no need to add exogenous enzymes, and facilitated product recovery, CBP has gained the attention of researchers worldwide. However, several technical and economic barriers need to be overcome to make consolidated bioprocessing a commercially viable process. Finding a natural candidate CBP organism is critically important, and thermophilic anaerobes are the preferred microorganisms. The thermophilic anaerobes suited to CBP mainly belong to the genera Clostridium, Caldicellulosiruptor, Thermoanaerobacter, Thermoanaerobacterium, and Geobacillus. Amongst them, Clostridium thermocellum has received increased attention as a high-utility CBP candidate due to its high growth rate on crystalline cellulose, its highly efficient cellulosome system, and its ability to produce ethanol directly from cellulose. Recently, the availability of genetic and molecular tools for the metabolic engineering of Clostridium thermocellum has further improved the viability of a commercial CBP process. With this view, we have specifically screened cellulolytic and xylanolytic thermophilic anaerobic ethanol-producing bacteria from unexplored hot springs in India. One of the isolates is a potential CBP organism identified as a new strain of Clostridium thermocellum.
This strain has shown superior avicel and xylan degradation under unoptimized conditions compared to reported wild-type strains of Clostridium thermocellum and produced more than 50 mM ethanol in 72 hours from 1% avicel at 60°C. In addition, the strain shows good ethanol tolerance and growth on both hexose and pentose sugars. Hence, with further optimization, this new strain could be developed into a potential CBP microbe.

Keywords: Clostridium thermocellum, consolidated bioprocessing, ethanol, thermophilic anaerobes

Procedia PDF Downloads 387
202 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. 
The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.

Keywords: fire prediction, drone, smoke toxicity, analyser, fire management

Procedia PDF Downloads 70
201 Novel AdoMet Analogs as Tools for Nucleic Acids Labeling

Authors: Milda Nainyte, Viktoras Masevicius

Abstract:

Biological methylation is the transfer of a methyl group from S-adenosyl-L-methionine (AdoMet) onto N-, C-, O- or S-nucleophiles in DNA, RNA, proteins or small biomolecules. The reaction is catalyzed by enzymes called AdoMet-dependent methyltransferases (MTases), which represent more than 3% of the proteins in the cell. As a general mechanism, the methyl group from AdoMet replaces a hydrogen atom of the nucleophilic center, producing methylated DNA and S-adenosyl-L-homocysteine (AdoHcy). Recently, DNA methyltransferases have been used for the sequence-specific, covalent labeling of biopolymers. Two types of MTase-catalyzed labeling of biopolymers are known, referred to as two-step and one-step. During two-step labeling, an alkylating fragment is transferred onto DNA in a sequence-specific manner, and the reporter group, such as biotin, is then attached for selective visualization using suitable coupling chemistries. This approach to labeling is rather laborious, and the chemical coupling does not always proceed to completion, but the variety of reporter groups that can be selected in the second step gives the method its flexibility. In one-step labeling, the AdoMet analog is designed with the reporter group already attached to the transferable functional group. The one-step method would thus be a more convenient tool for labeling biopolymers, avoiding additional chemical reactions and the selection of reaction conditions, and reducing time costs. However, an effective AdoMet analog appropriate for one-step labeling of biopolymers and containing a cleavable bond, required to reduce interference with PCR, is still not known.
To expand the practical utility of this important enzymatic reaction, cofactors with activated sulfonium-bound side-chains have been produced; these can serve as surrogate cofactors for a variety of wild-type and mutant DNA and RNA MTases, enabling covalent attachment of the side-chains to their target sites in DNA, RNA or proteins (an approach named methyltransferase-directed Transfer of Activated Groups, mTAG). Compounds containing a hex-2-yn-1-yl moiety have proved to be efficient alkylating agents for the labeling of DNA. Herein, we describe synthetic procedures for the preparation of N-biotinoyl-N’-(pent-4-ynoyl)cystamine, starting from the coupling of cystamine with pentynoic acid and finally attaching biotin as the reporter group. This constitutes the synthesis of the first AdoMet-based cofactor that contains a cleavable reporter group and is appropriate for one-step labeling.

Keywords: AdoMet analogs, DNA alkylation, cofactor, methyltransferases

Procedia PDF Downloads 181
200 Destigmatising Generalised Anxiety Disorder: The Differential Effects of Causal Explanations on Stigma

Authors: John McDowall, Lucy Lightfoot

Abstract:

Stigma constitutes a significant barrier to the recovery and social integration of individuals affected by mental illness. Although there is some debate in the literature regarding the definition and utility of stigma as a concept, it is widely accepted that it comprises three components: stereotypical beliefs, prejudicial reactions, and discrimination. Stereotypical beliefs describe the cognitive, knowledge-based component of stigma, referring to beliefs (often negative) about members of a group that are based on cultural and societal norms (e.g. ‘People with anxiety are just weak’). Prejudice refers to the affective/evaluative component of stigma and describes the endorsement of negative stereotypes and the resulting negative emotional reactions (e.g. ‘People with anxiety are just weak, and they frustrate me’). Discrimination refers to the behavioural component of stigma, which is arguably the most problematic, as it exerts a direct effect on the stigmatised person and may lead people to behave in a hostile or avoidant way towards them (i.e. refusal to hire them). Research exploring anti-stigma initiatives focuses primarily on an educational approach, with the view that accurate information will replace misconceptions and decrease stigma. Many approaches take a biogenetic stance, emphasising brain and biochemical deficits - the idea being that ‘mental illness is an illness like any other’. While this approach tends to reduce blame effectively, it has also demonstrated negative effects such as increasing prognostic pessimism, the desire for social distance, and the perception of stereotypes. In the present study, 144 participants were split into three groups and read one of three vignettes presenting causal explanations for Generalised Anxiety Disorder (GAD): one explanation emphasised biogenetic factors as being important in the etiology of GAD, another emphasised psychosocial factors (e.g.
aversive life events, poverty, etc.), and a third stressed the adaptive features of the disorder from an evolutionary viewpoint. A variety of measures tapping the various components of stigma were administered following the vignettes. No difference in stigma measures as a function of causal explanation was found. People who had contact with mental illness in the past were significantly less stigmatising across a wide range of measures, but this did not interact with the type of causal explanation.

Keywords: generalised anxiety disorder, discrimination, prejudice, stigma

Procedia PDF Downloads 271
199 Utility of Thromboelastography Derived Maximum Amplitude and R-Time (MA-R) Ratio as a Predictor of Mortality in Trauma Patients

Authors: Arulselvi Subramanian, Albert Venencia, Sanjeev Bhoi

Abstract:

Coagulopathy of trauma is an early endogenous coagulation abnormality that occurs shortly after injury and results in high mortality. In emergency trauma situations, viscoelastic tests may be better at identifying the various phenotypes of coagulopathy and can demonstrate the contribution of platelet function to coagulation. We aimed to characterize thrombin generation and clot strength by estimating the ratio of maximum amplitude to R-time (MA-R ratio), for identifying trauma coagulopathy and predicting subsequent mortality. Methods: We conducted a prospective cohort analysis of acutely injured adult trauma patients (18-50 years) admitted within 24 hours of injury over one year at a Level I trauma center, with follow-up on the 3rd and 5th days after injury. Patients with a history of coagulation abnormalities, liver disease, renal impairment, or drug intake were excluded. Thromboelastography was performed, and a ratio was calculated by dividing the MA by the R-time (MA-R). Patients were stratified into subgroups based on the calculated MA-R quartiles. The first sample was taken within 24 hours of injury, with follow-up on the 3rd and 5th days. Mortality was the primary outcome. Results: 100 acutely injured patients [average age, 36.6±14.3 years; 94% male; injury severity score 12.2 (9-32)] were included in the study. The median (min-max) MA-R ratio on admission was 15.01 (0.4-88.4); it declined to 11.7 (2.2-61.8) on day three and rose slightly to 13.1 (0.06-68) on day 5. There were no significant differences between subgroups in age or gender. In the subgroup with the lowest MA-R ratios, MA-R1 (<8.90; n = 27), the injury severity score was significantly elevated. MA-R2 (8.91-15.0; n = 23), MA-R3 (15.01-19.30; n = 24) and MA-R4 (>19.3; n = 26) did not differ in their admission laboratory investigations, although a slight decline in hemoglobin, red blood cell count and platelet counts was observed compared with the other subgroups.
Significantly prolonged R-time and reduced alpha angle and MA were also seen in MA-R1. A higher incidence of mortality correlated significantly with low MA-R ratios on admission (p = 0.003). Temporal changes in the MA-R ratio did not correlate with mortality. Conclusion: The MA-R ratio provides a snapshot of early clot function, focusing specifically on thrombin burst and clot strength. In our observation, patients with the lowest MA-R ratio (MA-R1) had significantly increased mortality compared with all other groups (45.5% in MA-R1 compared with <25% in MA-R2 to MA-R3, and 9.1% in MA-R4; p < 0.003). The MA-R ratio may prove highly useful for identifying at-risk patients early, when other physiologic indicators are absent.
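The ratio at the centre of the study is straightforward to compute from two TEG outputs, and patients are then binned by quartile. A minimal sketch with entirely hypothetical numbers (the quartile cut points it derives are not the study's 8.90/15.0/19.3 thresholds):

```python
from statistics import quantiles

def ma_r_ratio(ma_mm, r_time_min):
    # TEG maximum amplitude (clot strength, mm) divided by R-time
    # (time to initial clot formation, minutes).
    if r_time_min <= 0:
        raise ValueError("R-time must be positive")
    return ma_mm / r_time_min

# Hypothetical cohort of ratios (illustrative values, not study data).
ratios = [4.2, 7.8, 9.5, 12.0, 14.8, 15.2, 17.9, 19.1, 22.4, 30.6]
q1, q2, q3 = quantiles(ratios, n=4)  # quartile cut points

def subgroup(r):
    # Stratification mirroring the paper's MA-R1..MA-R4 quartiles.
    if r <= q1:
        return "MA-R1"  # lowest ratios: the high-mortality subgroup
    if r <= q2:
        return "MA-R2"
    if r <= q3:
        return "MA-R3"
    return "MA-R4"

print(subgroup(ma_r_ratio(60.0, 12.0)))  # MA 60 mm, R 12 min -> ratio 5.0
```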

Keywords: coagulopathy, trauma, thromboelastography, mortality

Procedia PDF Downloads 150
198 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
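The voxel-as-word-vector scheme can be sketched compactly: a voxel's vector is the normalized sum of the vectors of the words in studies reporting activation there, and voxel-term proximity is cosine similarity. The two-dimensional "semantic space" below is invented purely for illustration; a real analysis would learn the space from the study corpus:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def voxel_vector(study_texts, word_vectors):
    # A voxel's vector: normalized sum of the vectors of all words in the
    # studies that showed activation at that voxel.
    total = None
    for text in study_texts:
        for word in text.lower().split():
            vec = word_vectors.get(word)
            if vec is None:
                continue  # out-of-vocabulary word
            total = vec if total is None else [a + b for a, b in zip(total, vec)]
    if total is None:
        return None
    norm = math.sqrt(sum(a * a for a in total))
    return [a / norm for a in total]

# Tiny made-up 2-D semantic space.
word_vectors = {
    "reward": [0.9, 0.1],
    "motivation": [0.8, 0.2],
    "vision": [0.1, 0.9],
    "visual": [0.2, 0.8],
}

v = voxel_vector(["Reward and motivation task"], word_vectors)
print(round(cosine(v, word_vectors["reward"]), 3))  # near 1: close to "reward"
print(round(cosine(v, word_vectors["vision"]), 3))  # much lower
```

The same machinery runs "in the opposite direction" by ranking vocabulary words by their cosine similarity to a given voxel's vector.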

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 431
197 Posterior Cortical Atrophy Phenotype of Alzheimer’s Dementia: A Case Report

Authors: Joana Beyer

Abstract:

Background: Alzheimer’s disease (AD) is the predominant cause of dementia, characterized by progressive cognitive decline. Posterior cortical atrophy (PCA) is a less common variant of AD, primarily affecting younger individuals and presenting with visual, visuospatial, and visuoperceptual deficits, often leading to delayed diagnosis due to its atypical presentation. Case Presentation: We report the case of a 58-year-old woman referred to psychiatric services with a two-year history of progressive visuospatial decline, mild memory difficulties, and language impairments, notably anomia. Despite undergoing cataract and squint surgeries, her visual symptoms persisted, impacting her professional life as a music educator. The neuropsychological evaluation revealed profound visuoperceptual and visuospatial disturbances, with neuroimaging supporting a diagnosis of PCA. Treatment with Donepezil showed symptom improvement, highlighting the challenges and importance of early intervention in managing this atypical form of AD. Methods: The diagnostic process involved comprehensive physical and neuropsychological assessments and neuroimaging, including MRI and 18F-FDG PET-CT, which demonstrated severe bilateral posterior cortical involvement. The case underscores the utility of these modalities in diagnosing PCA. Results: The initiation of Donepezil, an acetylcholinesterase inhibitor, resulted in symptom improvement, emphasizing the potential for AD treatments to benefit PCA patients. However, challenges in management, including treatment side effects and the necessity of multidisciplinary care, are discussed. Conclusion: This case highlights PCA's diagnostic challenges due to its atypical presentation and the broader implications for managing younger patients with early-onset dementia.
It underscores the necessity for early recognition, comprehensive assessment, and tailored management strategies, including both pharmacological and non-pharmacological interventions, to improve patients' quality of life. Additionally, the case illustrates the need for expanding community memory services to accommodate younger patients with atypical forms of dementia, advocating for a more inclusive approach to dementia care.

Keywords: Alzheimer’s disease, posterior cortical atrophy, dementia, diagnosis, management, donepezil, early-onset dementia

Procedia PDF Downloads 41
196 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. 
This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
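The outlier-removal step mentioned in the abstract can be sketched as a simple interquartile-range screen. The conductivity values and the Tukey fence factor below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def iqr_mask(values, k=1.5):
    """Keep entries inside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    fence = k * (q3 - q1)
    return (values >= q1 - fence) & (values <= q3 + fence)

# Hypothetical log10 ionic conductivities (S/cm) with one obvious entry error.
log_sigma = np.array([-3.2, -3.5, -4.0, -3.8, -3.1, -9.9, -3.6])
clean = log_sigma[iqr_mask(log_sigma)]  # drops the -9.9 outlier
```

In a refinement pipeline like the one described, a screen of this kind would typically be followed by domain-expert review rather than blind deletion, since extreme conductivities can be genuine.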

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 40
195 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson’s disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line approach is to adjust the daily L-DOPA dose, followed by adjunctive therapies, usually accounting for the L-DOPA equivalent daily dose (LEDD). The LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC was found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 received L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, LEDD was similar between the two groups.
The mean (±standard error) reduction in off time was approximately three-fold greater in the OPC 50 mg group than in the L-DOPA 100 mg group: -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg, versus -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite similar LEDD, OPC demonstrated a significantly greater reduction in off time when compared to an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.
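As a rough illustration of how an LEDD comparison of this kind is computed: plain L-DOPA counts 1:1, while a concomitant COMT inhibitor scales the L-DOPA dose by a conversion factor. The ×1.5 factor and the 400 mg baseline below are assumptions for illustration only, not values taken from this study, and published conversion formulae differ:

```python
def ledd(levodopa_mg, comt_factor=1.0):
    """L-DOPA equivalent daily dose: plain L-DOPA counts 1:1; a concomitant
    COMT inhibitor multiplies the L-DOPA dose by a conversion factor."""
    return levodopa_mg * comt_factor

# Hypothetical patient on 400 mg/day L-DOPA: adding a COMT inhibitor with an
# assumed x1.5 factor vs. simply adding another 100 mg of L-DOPA.
with_comt_inhibitor = ledd(400, comt_factor=1.5)  # 600.0 mg LEDD
with_extra_ldopa = ledd(400 + 100)                # 500.0 mg LEDD
```

The sketch also makes the abstract's caveat concrete: the single number hides dose timing, onset and duration of effect, and individual responsiveness.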

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 44
194 Acute Neurophysiological Responses to Resistance Training: Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations

Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo

Abstract:

Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding neural responses to an acute resistance training session is pivotal in the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure the excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 min, and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed at post, 10, 20 and 30 min, and 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005) and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups approached significance, with a large effect size (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the 2 hrs post training, returning to homeostasis by 6 hrs.
The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than predicted by previous super compensation models. Strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.

Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation

Procedia PDF Downloads 265
193 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAM for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building, collected between Jan 2018 and Aug 2019 from an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real-time to help determine the next course of action for the facilities manager.
The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
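The interval-based flagging stage described above (downstream of whatever model supplies the predicted bounds) can be sketched as follows; the readings and GAM-style bounds are hypothetical:

```python
import numpy as np

def flag_anomalies(actual, lower, upper):
    """Flag readings outside the model's uncertainty interval and report the
    magnitude of deviation past the violated bound (0 for in-range points)."""
    actual, lower, upper = map(np.asarray, (actual, lower, upper))
    deviation = np.clip(lower - actual, 0, None) + np.clip(actual - upper, 0, None)
    return deviation > 0, deviation

# Hypothetical hourly AHU power readings (kWh) against predicted ranges.
flags, dev = flag_anomalies(actual=[10.0, 12.0, 20.0],
                            lower=[9.0, 11.0, 12.0],
                            upper=[11.0, 13.0, 15.0])
# Only the third reading falls outside its interval, exceeding the upper bound.
```

The deviation magnitude, rather than a binary flag alone, is what would feed the rule-based conditions that decide the facilities manager's next course of action.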

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 133
192 Understanding the Nature of Blood Pressure as Metabolic Syndrome Component in Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Pediatric overweight and obesity need attention because they may lead to morbid obesity, which may progress to metabolic syndrome (MetS). Criteria used for the definition of adult MetS cannot be applied to pediatric MetS. The dynamic physiological changes that occur during childhood and adolescence require the evaluation of each parameter based upon age intervals. The aim of this study is to investigate the distribution of blood pressure (BP) values within diverse pediatric age intervals and the possible use and clinical utility of a recently introduced Diagnostic Obesity Notation Model Assessment Tension (DONMA tense) Index derived from systolic BP (SBP) and diastolic BP (DBP) as [(SBP+DBP)/200]. Such a formula may enable a more integrative picture for the assessment of pediatric obesity and MetS due to the use of both SBP and DBP. A total of 554 children aged 6-16 years participated in the study; the study population was divided into two groups based upon age. The first group comprises 280 cases aged 6-10 years (72-120 months), while those aged 10-16 years (121-192 months) constituted the second group. The values of SBP, DBP and the formula [(SBP+DBP)/200] covering both were evaluated. Each group was divided into seven subgroups with varying degrees of obesity and MetS criteria. Two clinical definitions of MetS have been described: MetS3 (children with three major components) and MetS2 (children with two major components). The other groups were morbid obese (MO), obese (OB), overweight (OW), normal (N) and underweight (UW). The children were assigned to the groups according to the age- and sex-based body mass index (BMI) percentile values tabulated by WHO. Data were evaluated by SPSS version 16, with p < 0.05 as the level of statistical significance. The tension index was evaluated in the groups above and below 10 years of age. Above 120 months, this index differed significantly between the N and MetS groups as well as between the OW and MetS groups (p = 0.001).
However, below 120 months, significant differences existed between MetS3 and MetS2 (p = 0.003) as well as between MetS3 and MO (p = 0.001). In comparison with the SBP and DBP values, the tension index values enabled a more clear-cut separation between the groups. The tension index was capable of discriminating MetS3 from MetS2 in the group of children aged 6-10 years; this was not possible in the older group, so the index was more informative for the first group. This study also confirmed that the 130 mm Hg and 85 mm Hg cut-off points for SBP and DBP, respectively, are too high to serve as MetS criteria in children, because the mean tension index among MetS children was calculated as 1.00. This finding shows that much lower cut-off points must be set for SBP and DBP for the diagnosis of pediatric MetS, especially for children under 10 years of age. This index may be recommended to discriminate between MO, MetS2 and MetS3 in the 6-10 years age group, whose MetS diagnosis is problematic.
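The index itself is a one-line computation; the sketch below evaluates it at the adult MetS blood pressure cut-offs discussed above:

```python
def tension_index(sbp, dbp):
    """DONMA tension index: (SBP + DBP) / 200, with pressures in mm Hg."""
    return (sbp + dbp) / 200

# At the adult MetS cut-offs of 130/85 mm Hg the index already exceeds 1.0,
# consistent with the argument that these thresholds are too high for children.
adult_cutoff_index = tension_index(130, 85)  # 1.075
```

Since the mean index among MetS children was about 1.00, children meeting the syndrome's other criteria sit below the value implied by the adult 130/85 thresholds, which is the basis for recommending lower pediatric cut-offs.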

Keywords: blood pressure, children, index, metabolic syndrome, obesity

Procedia PDF Downloads 106
191 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images is employed, encompassing diverse cases. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals.
In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
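Two of the overlap metrics named above, the Dice coefficient and the Jaccard index, can be computed directly from binary segmentation masks; a minimal numpy sketch with toy masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-7):
    """Jaccard index (IoU) between binary masks: |A∩B| / |A∪B|."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x2 masks: predicted mass covers two pixels, ground truth covers one.
p, t = [[1, 1], [0, 0]], [[1, 0], [0, 0]]
```

The small epsilon guards against division by zero when both masks are empty; in a U-Net training loop, a differentiable soft-Dice variant of the same formula is commonly used as the loss.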

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 64
190 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres in suspension at concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters, using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres; the program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells.
The results of the study showed that ELSS could differentiate between particles of different sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample, and this information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
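The fitting step ("fit the spectra to a theoretical model") can be illustrated with an ordinary least-squares decomposition of a measured spectrum onto reference spectra for the three sphere sizes. Everything below (the basis spectra, the concentrations) is synthetic, not the study's data:

```python
import numpy as np

# Synthetic stand-ins for the theoretical scattering spectra of the three
# sphere sizes (columns), sampled at 50 wavelengths, plus a noiseless
# "measured" spectrum built from known relative concentrations.
rng = np.random.default_rng(0)
basis = rng.random((50, 3))
true_conc = np.array([0.6, 0.3, 0.1])
measured = basis @ true_conc

# Least-squares fit recovers the relative concentration of each sphere size.
conc, *_ = np.linalg.lstsq(basis, measured, rcond=None)
```

With real spectra the model would come from scattering theory and the data would carry noise, so the recovered concentrations would only approximate the true ones; the overdetermined fit (50 wavelengths, 3 unknowns) is what makes the inversion robust.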

Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics

Procedia PDF Downloads 57