Search results for: prediction interval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2949

249 Application of Shore Protective Structures in Optimum Land Using of Defense Sites Located in Coastal Cities

Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad

Abstract:

Awareness of effective land-use issues in coastal areas, including the protection of natural ecosystems and the coastal environment, is of great importance as human settlement along the coast increases. Numerous valuable structures and heritage assets are located in defence sites and waterfront areas. Marine structures such as groins, sea walls and detached breakwaters are constructed on the coast to improve its stability against bed erosion caused by changing wave and climate patterns. Coastal processes and their interaction with shore protection structures need to be studied intensively. Groins are among the most prominent structures used in shore protection, creating a safe environment for the coastal area by defending the land against progressive erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. This structure can be submerged and still provide the necessary beach protection without negative environmental impact. However, for submerged structures adopted for beach protection, the shoreline response is not well understood at present. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures to reduce their environmental impact. The objective of this study is to predict the beach morphology in the vicinity of submerged groins and to compare it with that of non-submerged groins, with a focus on a part of the coast located in Dahane sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can serve as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements to determine the shape of the coast. Finally, the results of the present study show that submerged groins can control beach erosion efficiently without causing severe environmental impact to the coast. The outcomes of this study can be employed in the optimum design of defence sites in coastal cities, improving their efficiency in terms of re-using heritage lands.
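
A minimal sketch of the one-line shoreline model mentioned above, assuming a CERC-type longshore transport law and a groin modeled as a zero-transport point; the grid, time step, wave angle, and transport coefficient are illustrative assumptions, not values from the study:

```python
import numpy as np

# One-line model: dy/dt = -(1/D) dQ/dx, with shoreline position y(x, t),
# active profile height D, and longshore transport Q driven by the wave angle.
nx, dx, dt, nsteps = 200, 10.0, 200.0, 5000   # illustrative grid and time step
D = 6.0                     # active profile height (berm + closure depth), m
Q0 = 0.05                   # transport amplitude, m^3/s (assumed)
alpha0 = np.deg2rad(10.0)   # deep-water wave angle to the shore normal

y = np.zeros(nx)            # initially straight shoreline
for _ in range(nsteps):
    dydx = np.gradient(y, dx)                       # local shoreline orientation
    # CERC-type law: Q ~ Q0 * sin(2 * (wave angle - shoreline angle))
    Q = Q0 * np.sin(2.0 * (alpha0 - np.arctan(dydx)))
    Q[nx // 2] = 0.0        # groin at mid-domain blocks longshore transport
    y -= dt / D * np.gradient(Q, dx)

# y now holds the predicted accretion (updrift) and erosion (downdrift) pattern
```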

Keywords: submerged structures, groin, shore protective structures, coastal cities

Procedia PDF Downloads 292
248 The Effect of Air Filter Performance on Gas Turbine Operation

Authors: Iyad Al-Attar

Abstract:

Air filters are widely used in gas turbine applications to ensure that the large mass (500 kg/s) of clean air reaches the compressor. The continuous demand for high availability and reliability has highlighted the critical role of air filter performance in providing enhanced air quality. In addition to being challenged by different environments (tropical, coastal, hot), gas turbines confront a wide array of atmospheric contaminants with various concentrations and particle size distributions that lead to performance degradation and component deterioration. The role of air filters is therefore of paramount importance, since a fouled compressor can reduce the power output and availability of the gas turbine by over 70% throughout operation. Consequently, accurate filter performance prediction is a critical tool in filter selection, considering its role in minimizing the economic impact of outages. In fact, the actual performance of Efficient Particulate Air (EPA) filters used in gas turbines tends to deviate from the performance predicted by laboratory results. This experimental work investigates the initial pressure drop and fractional efficiency curves of full-scale pleated V-shaped EPA filters used globally in gas turbines. The investigation examined the effect of operational conditions such as flow rate (500 to 5000 m³/h) and design parameters such as pleat count (28, 30, 32 and 34 pleats per 100 mm). This work highlights the underlying reasons for the reduction in filter permeability with increasing flow rate and pleat density. The surface-area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the pleat corners, and/or compression of the filtration medium. This paper also demonstrates that increasing the flow rate has a more pronounced effect on filter performance than increasing the pleat density. This experimental work suggests that a valid comparison of pleat densities should be based on the effective surface area, namely, the area that participates in the filtration process, and not the total surface area the pleat density provides. Throughout this study, an optimal pleat count satisfying both the initial pressure drop and efficiency requirements did not necessarily exist.
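
As a rough illustration of how the permeability discussed above can be backed out from a measured pressure drop, the sketch below applies Darcy's law with the effective (not total) filtration area; the media thickness, air viscosity, and sample numbers are illustrative assumptions:

```python
# Darcy's law for flow through filter media: dp = mu * t * U / k,
# where the face velocity U = Q / A_eff uses the *effective* filtration area.
MU_AIR = 1.8e-5      # dynamic viscosity of air, Pa*s
T_MEDIA = 0.5e-3     # media thickness, m (assumed)

def permeability(dp_pa, flow_m3h, area_eff_m2):
    """Back out media permeability k (m^2) from a measured pressure drop."""
    u = (flow_m3h / 3600.0) / area_eff_m2   # face velocity, m/s
    return MU_AIR * T_MEDIA * u / dp_pa

# Example: 3000 m^3/h across 20 m^2 effective area at 250 Pa (illustrative)
print(permeability(250.0, 3000.0, 20.0))
```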

Keywords: filter efficiency, EPA filters, pressure drop, permeability

Procedia PDF Downloads 215
247 Quality of Life Among People with Mental Illness Attending a Psychiatric Outpatient Clinic in Ethiopia: A Structural Equation Model

Authors: Wondale Getinet Alemu, Lillian Mwanri, Clemence Due, Telake Azale, Anna Ziersch

Abstract:

Background: Mental illness is one of the most severe, chronic, and disabling public health problems and affects patients' quality of life (QoL). Improving QoL for people with mental illness is one of the most critical steps in stopping disease progression and avoiding complications of mental illness. We therefore aimed to assess QoL and its determinants in patients with mental illness attending outpatient clinics in Northwest Ethiopia in 2023. Methods: A facility-based cross-sectional study was conducted among people with mental illness in an outpatient clinic in Ethiopia. The sampling interval was determined by dividing the total number of study participants with a follow-up appointment during the data collection period (2400) by the total sample size of 638, with the starting point selected by the lottery method. The interviewer-administered WHOQOL-BREF-26 tool was used to measure the QoL of people with mental illness. The domains and Health-Related Quality of Life (HRQoL) were identified. The indirect and direct effects of variables were calculated using structural equation modeling with SPSS-28 and Amos-28 software. A p-value of < 0.05 and a 95% CI were used to evaluate statistical significance. Results: A total of 636 (99.7%) participants responded and completed the WHOQOL-BREF questionnaire. The mean overall HRQoL score was 49.6 (SD 10). The highest QoL was found in the physical health domain (mean 50.67, SD 9.5), and the lowest in the psychological health domain (mean 48.41, SD 10). Rural residence, drug nonadherence, suicidal ideation, not receiving counseling, moderate or severe subjective illness severity, lack of family participation in patient care, and a family history of mental illness had indirect negative effects on HRQoL. Alcohol use and the psychological health domain had direct positive effects on QoL. Objective illness severity, low self-esteem, and a family history of mental illness had both direct and indirect effects on QoL. Overall, sociodemographic factors (residence, educational status, marital status), social support-related factors (self-esteem, family not participating in patient care), substance use factors (alcohol use, tobacco use) and clinical factors (objective and subjective severity of illness, not receiving counseling, suicidal ideation, number of episodes, comorbid illness, family history of mental illness, poor drug adherence) directly and indirectly affected QoL. Conclusions: In this study, the QoL of people with mental illness was poor, with the psychological health domain being the most affected. Sociodemographic, social support-related, substance use, and clinical factors affect QoL directly and indirectly through the mediator variables of the physical, psychological, social relations, and environmental health domains. To improve the QoL of people with mental illness, we recommend that emphasis be given to addressing mental illness, including the development of policy and practice drivers that address the above-identified factors.

Keywords: quality of life, mental wellbeing, mental illness, mental disorder, Ethiopia

Procedia PDF Downloads 38
246 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful. This, however, introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. (1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Standard compression algorithms like JPEG yield only minor size reductions here. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which add up over time. The problem is solved using an improved inter-frame compression algorithm that detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating-point subtraction, thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. (2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach incurs costs in bandwidth and server resources. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions, and is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally; this is achieved by isolating client connections into different processes.
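
A minimal NumPy sketch of the inter-frame scheme described above: unchanged pixels are reused from the previous frame, and changes are detected with a weighted average difference rather than a plain absolute difference. The 3x3 kernel weights and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative 3x3 kernel: center-heavy weighting of the neighborhood difference
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)
KERNEL /= KERNEL.sum()
THRESHOLD = 6.0  # assumed change-detection threshold (gray levels)

def changed_mask(prev, curr):
    """Weighted-average difference between consecutive frames."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return convolve(diff, KERNEL, mode="nearest") > THRESHOLD

def encode_frame(prev, curr):
    """Send only changed pixels; the receiver reuses the rest from `prev`."""
    mask = changed_mask(prev, curr)
    return mask, curr[mask]          # mask + sparse payload

def decode_frame(prev, mask, payload):
    out = prev.copy()
    out[mask] = payload
    return out
```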

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 50
245 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
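
A minimal sketch of the analysis chain the abstract outlines (dimensionality reduction on the model-generated aging database, followed by regression of remaining useful life); the placeholder data, component count, and hyperparameters are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# X: non-destructive aging indicators generated with the electrochemical model
# y: remaining-useful-life labels from the aged Newman-model runs (placeholders)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))
y = rng.uniform(100, 1500, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=300, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rel_err = np.mean(np.abs(pred - y_te) / y_te)   # target: ~5% as reported
```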

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 36
244 An Overview of the Wind and Wave Climate in the Romanian Nearshore

Authors: Liliana Rusu

Abstract:

The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore using the results provided by numerical models. The Romanian coastal environment is located on the western side of the Black Sea, the more energetic part of the sea, an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in Romanian waters is mainly based on observations at the Gloria drilling platform (70 km from the coast). As regards waves, the measured wave characteristics are not very accurate due to the method used, and they are available only for a limited period. For this reason, wave simulations covering large temporal and spatial scales represent an option to better describe the wave climate. To assess the wind climate in the target area spanning 1992-2016, data provided by the NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis), consisting of wind fields at 10 m above sea level, are used. The high spatial and temporal resolution of the wind fields is good enough to represent the wind variability over the area. For the same 25-year period as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set that uses NCEP-CFSR winds as input to a SWAN (Simulating WAves Nearshore) based model system. The wave simulation results, obtained with a two-level modelling scale, have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of wind speed, wind direction and mean significant wave height have been computed as averages over all the data. The results show that the target area presents a generally moderate wave climate that is affected by storm events developing in the Black Sea basin. Both the wind and wave climates present high seasonal variability. All the results are computed as maps that help identify the more dangerous areas. A local analysis has also been carried out at key locations corresponding to highly sensitive areas, for example the main Romanian harbors.

Keywords: numerical simulations, Romanian nearshore, waves, wind

Procedia PDF Downloads 313
243 In Silico Analysis of Salivary miRNAs to Identify the Diagnostic Biomarkers for Oral Cancer

Authors: Andleeb Zahra, Itrat Rubab, Sumaira Malik, Amina Khan, Muhammad Jawad Khan, M. Qaiser Fatmi

Abstract:

Oral squamous cell carcinoma (OSCC) is one of the most common cancers worldwide. Recent studies have highlighted the role of miRNAs in disease pathology, indicating their potential use as an early diagnostic tool. miRNAs are small, single-stranded, non-coding RNAs that regulate gene expression by silencing target mRNAs. miRNAs play important roles in modifying various cellular processes such as cell growth, differentiation, apoptosis, and immune response. Dysregulated expression of miRNAs is known to affect cell growth, and miRNAs may function as tumor suppressors or oncogenes in various cancers. Objectives: The main objectives of this study were to characterize the extracellular miRNAs involved in oral cancer (OC) to assist early detection, and to propose a list of genes that can potentially be used as biomarkers of OC. We used microarray gene expression data already available in the literature. Materials and Methods: In the first step, a total of 318 miRNAs involved in oral carcinoma were shortlisted, followed by the prediction of their target genes. Simultaneously, the differentially expressed genes (DEGs) of oral carcinoma from all experiments were identified. The genes common to the experimentally derived DEG lists of OC and the target genes of each miRNA were identified; these common genes are the targets of specific miRNAs involved in OC. Finally, a list of genes was generated that may be used as biomarkers of OC. Results and Conclusion: In the results, we included some cancer pathways to show the change in gene expression under the control of specific miRNAs. Ingenuity Pathway Analysis (IPA) provided a list of major biomarkers, such as CDH2 and CDK7, and functional enrichment analysis identified the role of miRNAs in major cancer-affected pathways such as the cell adhesion molecule pathway. We observed that at least 25 genes are regulated by a maximum number of miRNAs, and thereby they can be used as biomarkers of OC. To better understand the role of miRNAs with respect to their target genes, further experiments are required, and our study provides a platform to better understand the miRNA-OC relationship at the genomics level.
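
A minimal sketch of the intersection step described above, crossing each miRNA's predicted targets with the DEG list and ranking genes by how many miRNAs regulate them; the gene and miRNA names below are illustrative placeholders:

```python
from collections import Counter

# Illustrative inputs: DEGs of oral carcinoma and predicted targets per miRNA
degs = {"CDH2", "CDK7", "EGFR", "TP53"}                 # placeholder DEG list
mirna_targets = {                                        # placeholder predictions
    "hsa-miR-a": {"CDH2", "TP53", "PTEN"},
    "hsa-miR-b": {"CDK7", "CDH2"},
    "hsa-miR-c": {"EGFR", "CDH2"},
}

hits = Counter()
for mirna, targets in mirna_targets.items():
    for gene in targets & degs:      # common genes: miRNA targets that are DEGs
        hits[gene] += 1

# Genes regulated by the most miRNAs are proposed as candidate biomarkers
print(hits.most_common())            # here CDH2 is regulated by all three miRNAs
```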

Keywords: biomarkers, gene expression, miRNA, oral carcinoma

Procedia PDF Downloads 347
242 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures that ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
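
A minimal sketch of the control loop described above: a simple autoregressive forecast of the next CO2 reading triggers an actuation when the prediction exceeds a comfort threshold. The model choice, window length, 1000 ppm threshold, and actuator interface are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

WINDOW = 12           # use the last 12 readings (assumed 5-minute cadence)
CO2_LIMIT = 1000.0    # assumed indoor comfort threshold, ppm

def forecast_next(series):
    """Fit an autoregressive model on sliding windows, predict one step ahead."""
    X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = np.array(series[WINDOW:])
    model = LinearRegression().fit(X, y)
    return model.predict(np.array([series[-WINDOW:]]))[0]

def control_step(co2_history, actuator):
    """Open the window when the predicted CO2 level exceeds the threshold."""
    predicted = forecast_next(co2_history)
    if predicted > CO2_LIMIT:
        actuator.open_window()      # hypothetical actuator interface
    else:
        actuator.close_window()
    return predicted
```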

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 54
241 Discovering Event Outliers for Drugs as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period, which is often too long. A critical issue, therefore, is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps protect the data from being biased downwards or upwards. The authors visualize outliers across different drugs and apply Grubbs' test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that high-demand-variability products had outliers. Among the analyzed drugs, which are commercial products: (i) high-demand-variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; (ii) mid-demand-variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, the real data must be set up. Even though there are some outlier detection methods for drug data cleaning, they have not been used for the minimization of the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data cleaning method, for outlier adjustment. In this paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers for cancer drugs, with positive results reported. Grubbs' test detects outliers that exceed the boundaries of the normal distribution; the result is a probability that delineates the core data of actual sales. The test statistic represents the difference between the sample mean and the most extreme data point, relative to the standard deviation, and the test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on the approximated data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during shortage events and recovery periods, has practical value, and could be used to minimize the recovery period required after a shortage event occurs.
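
A minimal sketch of the two-sided, single-outlier Grubbs' test at the 99% confidence level reported above; removing a detected outlier and re-testing reproduces the one-outlier-at-a-time procedure. The sales series is an illustrative placeholder:

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.01):
    """Two-sided Grubbs' test: returns (index, G, G_crit, is_outlier)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    G = abs(x[idx] - mean) / sd
    # critical value from the t-distribution
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx, G, G_crit, G > G_crit

# Illustrative daily sales with a shortage-period drop at one data point
sales = np.array([52, 49, 55, 51, 48, 50, 53, 47, 12, 54.0])
idx, G, G_crit, flag = grubbs_test(sales, alpha=0.01)   # 99% confidence level
print(idx, flag)   # index 8 is flagged as the most extreme observation
```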

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 112
240 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance modal verb treatment in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is on will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, could, while can, will, should, could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on the different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the presentation of modal verbs in EFL textbooks so that they provide not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
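
A minimal sketch of the distributional comparison described above, counting the nine modals in a textbook corpus and a reference corpus and comparing their proportions; the tokenization and file names are illustrative assumptions (the study itself used CQPweb and WordSmith Tools):

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may",
          "might", "shall", "should", "must"]

def modal_distribution(text):
    """Proportion of each modal among all modal tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values())
    return {m: counts[m] / total for m in MODALS}

tb = modal_distribution(open("textbook_corpus.txt").read())      # placeholder file
ref = modal_distribution(open("native_corpus_sample.txt").read())  # placeholder file

for m in MODALS:
    print(f"{m:8s} textbook {tb[m]:7.1%}  reference {ref[m]:7.1%}  gap {tb[m] - ref[m]:+.1%}")
```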

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 118
239 In Silico Analysis of Deleterious nsSNPs (Missense) of Dihydrolipoamide Branched-Chain Transacylase E2 Gene Associated with Maple Syrup Urine Disease Type II

Authors: Zainab S. Ahmed, Mohammed S. Ali, Nadia A. Elshiekh, Sami Adam Ibrahim, Ghada M. El-Tayeb, Ahmed H. Elsadig, Rihab A. Omer, Sofia B. Mohamed

Abstract:

Maple syrup urine disease (MSUD) is an autosomal recessive disease that causes a deficiency of the enzyme branched-chain alpha-keto acid (BCKA) dehydrogenase. The development of the disease has been associated with SNPs in the DBT gene. Despite this, the computational analysis of SNPs in coding and noncoding regions and their functional impacts at the protein level remains unexplored. Hence, in this study, we carried out a comprehensive in silico analysis of missense SNPs predicted to have a harmful influence on DBT structure and function. Eight different in silico prediction algorithms (SIFT, PROVEAN, MutPred, SNP&GO, PhD-SNP, PANTHER, I-Mutant 2.0 and MUpro) were used for screening nsSNPs in DBT. Additionally, to understand the effect of mutations on the strength of the interactions that hold the protein together, the ELASPIC server was used. Finally, the 3D structure of DBT was modeled using the Mutation3D and Chimera servers. Our results showed that a total of 15 nsSNPs confirmed by four tools (R301C, R376H, W84R, S268F, W84C, F276C, H452R, R178H, I355T, V191G, M444T, T174A, I200T, R113H, and R178C) were found to be damaging and can lead to a shift in DBT structure. Moreover, we found 7 nsSNPs located in the 2-oxoacid_dh catalytic domain, 5 nsSNPs in the E_3 binding domain and 3 nsSNPs in the biotin domain, so these nsSNPs may alter the putative structure of DBT's domains. Furthermore, we found that all these nsSNPs lie on core residues of the protein and have the ability to change its stability. Additionally, we found that W84R, S268F, and M444T are highly significant; they affect leucine, isoleucine, and valine catabolism by reducing or disrupting the function of the E2 subunit of the BCKD complex, which the DBT gene encodes. In conclusion, based on our extensive in silico analysis, we report 15 nsSNPs that have a possible association with protein deterioration and disease-causing ability. These candidate SNPs can aid future studies on Maple Syrup Urine Disease type II at the genetic level.

Keywords: DBT gene, ELASPIC, in silico analysis, UCSF Chimera

Procedia PDF Downloads 175
238 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is carried out through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined use of artificial neural networks and fuzzy systems to accurately interpret key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of auxiliary power distribution systems. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
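
A minimal sketch of how the key parameters named above can be extracted from the first cycle of voltage information: a single-bin DFT gives the residual voltage magnitude and phase angle, and a crude rule stands in for the neuro-fuzzy decision. The sampling rate, nominal frequency, and thresholds are illustrative assumptions:

```python
import numpy as np

FS = 3840.0        # sampling rate, Hz (64 samples per cycle at 60 Hz, assumed)
F0 = 60.0          # nominal system frequency, Hz
N = int(FS / F0)   # samples in one cycle

def one_cycle_phasor(samples):
    """Single-bin DFT over one cycle -> (magnitude, phase angle in radians)."""
    n = np.arange(N)
    phasor = (2.0 / N) * np.sum(samples[:N] * np.exp(-2j * np.pi * n / N))
    return np.abs(phasor), np.angle(phasor)

def fast_transfer_permitted(bus_samples, source_samples,
                            max_angle=0.35, min_voltage=0.5):
    """Crude stand-in for the neuro-fuzzy decision: permit a fast transfer
    only if the residual voltage is still high and roughly in phase with the
    new source (per-unit and radian thresholds are assumptions)."""
    v_bus, a_bus = one_cycle_phasor(bus_samples)
    v_src, a_src = one_cycle_phasor(source_samples)
    return v_bus / v_src > min_voltage and abs(a_bus - a_src) < max_angle
```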

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 149
237 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, the shell-and-tube condenser and evaporator heat exchangers, the thermostatic expansion valve and the connection pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller was used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition was built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient designs and analysis of compression chillers can take advantage of the presented study and its results.
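
A minimal sketch of the cycle-level bookkeeping such a steady-state model performs, computing refrigerant states around an idealized vapor-compression loop and the resulting COP; the enthalpies and mass flow are placeholder values, not outputs of the paper's model:

```python
# Idealized vapor-compression cycle: 1-2 compression, 2-3 condensation,
# 3-4 expansion (isenthalpic), 4-1 evaporation.
# Enthalpies in kJ/kg are placeholders for an R-134a-like refrigerant;
# a real model would evaluate them from refrigerant property data.
h1 = 400.0   # evaporator outlet / compressor inlet (saturated vapor)
h2 = 430.0   # compressor outlet (isentropic compression + efficiency applied)
h3 = 250.0   # condenser outlet (saturated liquid)
h4 = h3      # thermostatic expansion valve is modeled as isenthalpic

m_dot = 0.20                      # refrigerant mass flow, kg/s (assumed)
q_evap = m_dot * (h1 - h4)        # cooling capacity, kW
w_comp = m_dot * (h2 - h1)        # compressor power, kW
cop = q_evap / w_comp             # coefficient of performance

print(f"capacity {q_evap:.1f} kW, power {w_comp:.1f} kW, COP {cop:.2f}")
```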

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 254
236 Aerosol Radiative Forcing over the Indian Subcontinent for 2000-2021 Using Satellite Observations

Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey

Abstract:

Aerosols directly affect Earth’s radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to the high heterogeneity of aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging Spectro-Radiometer (MISR) level-2 version 23 aerosol products retrieved at 4.4 km and radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution 1°×1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), the Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which is used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. The results show strong spatio-temporal variability, with consistently higher AOD values over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during the winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5 W/m²). Cooling due to aerosols is higher in the absence of clouds. Higher negative ARF values are found over the IGP region, given the high aerosol concentration over the region. Surface ARF values are negative everywhere in our study domain, with larger magnitudes under clear conditions. The results show a strong correlation between AOD from MISR and ARF from CERES.

Keywords: aerosol Radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES

Procedia PDF Downloads 25
235 Assessment of Interior Environmental Quality and Airborne Infectious Risk in a Commuter Bus Cabin by Using Computational Fluid Dynamics with Computer Simulated Person

Authors: Yutaro Kyuma, Sung-Jun Yoo, Kazuhide Ito

Abstract:

A commuter bus remains important as a means of networking public transportation between railway stations and terminals within cities. In some cases, the boarding time becomes longer, and the boarding rate tends to be higher as urban cities develop. The interior environmental quality, e.g. temperature and air quality, in a commuter bus is relatively heterogeneous and complex compared to that of an indoor environment in buildings, due to several factors: solar radiative heat, which comes through large-area windows; an inadequate ventilation rate caused by the high density of commuters; and metabolic heat generation from the travelers themselves. In addition, under conditions where many passengers ride in an enclosed space, contact and airborne infection risks have attracted considerable attention in terms of public health. From this point of view, it is essential to develop a prediction method for assessing interior environmental quality and infection risk in commuter bus cabins. In this study, we developed a numerical commuter bus model integrated with computer simulated persons to reproduce realistic indoor environmental conditions with high occupancy during commuting. The computer simulated persons were newly designed considering different types of geometries, e.g. standing and seated positions, and individual differences. We conducted computational fluid dynamics (CFD) analysis coupled with radiative heat transfer analysis under steady-state conditions. Distributions of the heterogeneous airflow patterns, temperature, and moisture surrounding the human body under different ventilation systems were analyzed using the CFD technique, and skin surface temperature distributions were analyzed using a thermoregulation model integrated into the computer simulated persons. Through these analyses, we discussed the interior environmental quality in specific commuter bus cabins. Further, the inhaled air quality of each passenger was also analyzed. This study may support the design of bus ventilation systems for improving the thermal comfort of occupants.

Keywords: computational fluid dynamics, CFD, computer simulated person, CSP, contaminant, indoor environment, public health, ventilation

Procedia PDF Downloads 226
234 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, and the rising number of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) is found to exceed the standard permissible value at all locations. Noise barriers or enclosures are useful in obtaining an effective reduction of traffic noise disturbance in urbanized areas. The US Federal Highway Administration (FHWA) model and the UK's Calculation of Road Traffic Noise (CORTN) are used to develop spreadsheets for noise prediction. Spreadsheets are also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise, redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barriers using both the FHWA and CORTN models. During data collection, it was found that receivers are located far from the road at the Rithala and Moolchand sites; hence, the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33, and CORTN with an average percentage error of -8.5. It was observed that at all sites the noise levels at receivers exceeded the standard limit of 55 dB. The calculations show that existing walls reduce noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road. Heavy traffic was observed at Friends Colony since it is on a national highway, and at this site the noise reduction due to the propagation effect was very small. As the FHWA and CORTN models were implemented in an Excel program, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
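
A minimal sketch of the kind of barrier calculation such spreadsheets automate, using path-difference geometry and Maekawa's approximation for barrier attenuation as a function of the Fresnel number; the geometry is illustrative, and Maekawa's formula is a standard approximation rather than the FHWA or CORTN procedure itself:

```python
import math

def barrier_attenuation(src, rcv, top, freq_hz, c=343.0):
    """Maekawa approximation: attenuation ~ 10*log10(3 + 20*N) dB,
    with Fresnel number N = 2*delta/lambda and delta the detour over
    the barrier top relative to the direct source-receiver path."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    delta = dist(src, top) + dist(top, rcv) - dist(src, rcv)  # path difference, m
    N = 2.0 * delta * freq_hz / c                             # Fresnel number
    return 10.0 * math.log10(3.0 + 20.0 * N) if N > 0 else 0.0

# Illustrative geometry (x, z) in metres: source near the road, 3 m barrier top
src, top, rcv = (0.0, 0.5), (5.0, 3.0), (25.0, 1.5)
print(f"{barrier_attenuation(src, rcv, top, 500.0):.1f} dB at 500 Hz")
```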

Keywords: FHWA, CORTN, noise sources, noise barriers

Procedia PDF Downloads 105
233 Predicting Daily Patient Hospital Visits Using Machine Learning

Authors: Shreya Goyal

Abstract:

The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility in a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources: if there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another big advantage of this software is adherence to strict data protection regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital will not have to share the data with any third party or upload it to the cloud, because the software can read data locally from the machine. The data needs to be arranged in a particular format, and the software will be able to read it and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure: keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it on external servers, which helps maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses a supervised learning method to optimize the objective function and find the global minimum: it stores the value of each local minimum after every iteration and, at the end, compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better management planning of personnel and medical resources.
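
A minimal sketch of the setup described above: an SVM regressor trained on calendar features to predict daily visit counts, reading data from a local file in keeping with the local-only processing the abstract emphasizes. The file name, column names, and hyperparameters are illustrative assumptions:

```python
import pandas as pd
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# A local file keeps patient data on the hospital machine (no cloud upload).
# Assumed columns: age_mean, hour, weekday, month, local_event, visits
df = pd.read_csv("daily_visits.csv")
X = df[["age_mean", "hour", "weekday", "month", "local_event"]]
y = df["visits"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
print("R^2 on held-out days:", model.score(X_te, y_te))
```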

Keywords: machine learning, SVM, HIPAA, data

Procedia PDF Downloads 45
232 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first focused on the detection of long-term trends in urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used, with a time interval of about five years, for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on them. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely defined urban wetlands) as a new indicator for remote sensing detection, and examined whether urban wetscape dynamics is a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected based on their spectral differentiability by a traditional image classification. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan to the watershed and sub-watershed levels in response to human impacts at different scales. The study also found that increased precipitation in the region in the past decades swelled larger wetlands in particular, while smaller wetlands generally shrank, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and their driving forces.
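
A minimal sketch of a rule-based classifier in the spirit described above, combining spectral indices with a contextual rule to flag fine-scale "hidden" wetlands that pure spectral classes miss; the band names, thresholds, and adjacency rule are illustrative assumptions, not the study's actual rules:

```python
import numpy as np

def classify(green, red, nir):
    """Toy rule-based land cover classifier on reflectance bands."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    ndwi = (green - nir) / (green + nir + 1e-9)   # McFeeters water index

    cover = np.full(green.shape, 0)               # 0 = built-up / other
    cover[ndvi > 0.5] = 1                         # 1 = forest (assumed threshold)
    cover[(ndvi > 0.2) & (ndvi <= 0.5)] = 2       # 2 = non-forest vegetation
    cover[ndwi > 0.1] = 3                         # 3 = open surface water

    # Contextual rule: moist, vegetated pixels adjacent to water -> wetland (4)
    near_water = np.zeros_like(cover, dtype=bool)
    near_water[1:, :] |= cover[:-1, :] == 3
    near_water[:-1, :] |= cover[1:, :] == 3
    near_water[:, 1:] |= cover[:, :-1] == 3
    near_water[:, :-1] |= cover[:, 1:] == 3
    cover[(cover == 2) & (ndwi > -0.1) & near_water] = 4
    return cover
```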

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 289
231 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal

Authors: Anindita Patra, Prasad K. Bhaskaran

Abstract:

The atmosphere and ocean form a dynamically linked system that influences the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism occurring along the air-sea interface that governs the inter-annual variability of the Earth’s climate system. Most importantly, the ocean and atmosphere act in tandem as a coupled system, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work is an attempt to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing the surface ocean variables. More specifically, the influence of atmospheric circulation and its variability on the mean Sea Level Pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geo-potential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface up through the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study shows that the variability in air temperature and omega corroborates the variation noticed in geo-potential height. Further, the study shows that in the lower atmosphere the geo-potential heights depict a typical east-west contrast, exhibiting zonal dipole behavior over the study domain. In addition, the study clearly brings to light that the variations at different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as clearly evidenced by the trends in SLP and the associated surface wind speed and significant wave height over the study domain.

Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves

Procedia PDF Downloads 205
230 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus

Authors: Debamitra Chakravorty, Pratap K. Parida

Abstract:

Cold-adapted proteases enhance wash performance in low-temperature laundry, resulting in reduced energy consumption and wear of textiles, and are also used in the dehairing process in the leather industry. Unfortunately, the main drawback of cold-adapted proteases is their instability at higher temperatures: wild-type cold-adapted proteases have low shelf lives, so proteases with broad temperature stability are required. Attempts to engineer cold-adapted proteases were previously made by directed evolution and random mutagenesis, but the time, capital, and labour involved in obtaining such variants are very demanding and challenging. Therefore, rational engineering for cold stability without compromising an enzyme's optimum pH and temperature for activity is the current requirement. In this work, mutations were rationally designed for Savinase from Bacillus lentus with the aid of a high-throughput computational methodology combining network analysis, evolutionary conservation scores, and molecular dynamics simulations, with the intention of rendering the mutants cold-stable without affecting their optimum temperature and pH for activity. Further, an attempt was made to incorporate a mutation into the most stable mutant rationally obtained by this method to additionally introduce oxidative stability; such enzymes are desired in detergents with bleaching agents. In silico analysis, by performing 300 ns molecular dynamics simulations at five different temperatures, revealed that three mutants have better cold stability than the wild-type Savinase from Bacillus lentus. Conclusively, this work shows that cold adaptation, without losing optimum temperature and pH stability, and additionally stability against oxidative damage, can be rationally designed by in silico enzyme engineering. The key findings of this work were: first, the in silico data for H5 (cold-stable Savinase), used as a control in this work, corroborated its reported wet-lab temperature stability data; second, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified; and lastly, a mutation that stabilizes Savinase against oxidative damage was additionally identified.

Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design

Procedia PDF Downloads 113
229 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore the chemical dermal exposure assessment models that have been developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK's Control of Substances Hazardous to Health (COSHH), Europe's Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Netherlands' Dose-Related Effect Assessment Model (DREAM), the Netherlands' Stoffenmanager (STOFFEN), Nicaragua's Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of each dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlations via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results evaluated for all industries belong to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency show a positive correlation. In the DREAM model, there is a positive correlation between skin exposure, relative work time, and the working environment. In the RISKOFDERM model, the actual exposure situation and the exposure time have a positive correlation. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models have poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce the uncertainty and enhance the applicability of the models.

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 141
228 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing particular promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, a wide electrochemical stability window, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily because of the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, driven by these numerous dopant options, requires a systematic method for finding the most effective combinations. This study highlights the utility of machine learning (ML) techniques in navigating this complex design space for garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, in which meticulous error identification and correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The refinement effort yielded a significant improvement in the predictive performance of the machine learning model: starting from an accuracy of 0.32, the model ultimately achieved an accuracy of 0.88. This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
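
The refine-then-retrain loop can be sketched as follows in Python; the column names, cleaning rules, and model choice are assumptions for illustration, not the authors' actual setup:

```python
# Score a baseline model on the raw literature dataset, apply simple cleaning
# (deduplication plus crude outlier removal as a stand-in for expert
# error-checking), then score again to quantify the gain from refinement.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def score(df: pd.DataFrame) -> float:
    X = df.drop(columns=["ionic_conductivity"])
    y = df["ionic_conductivity"]
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

df = pd.read_csv("llzo_garnet_literature.csv")  # hypothetical file name
print("baseline R2:", score(df))

# Refinement: drop exact duplicates, then entries whose conductivity lies far
# outside the bulk of the data (3x the interquartile range)
df = df.drop_duplicates()
q1, q3 = df["ionic_conductivity"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["ionic_conductivity"].between(q1 - 3 * iqr, q3 + 3 * iqr)]
print("refined R2:", score(df))
```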

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 27
227 Effect of Packing Ratio on Fire Spread across Discrete Fuel Beds: An Experimental Analysis

Authors: Qianqian He, Naian Liu, Xiaodong Xie, Linhe Zhang, Yang Zhang, Weidong Yan

Abstract:

In the wild, the vegetation layer, with its exceptionally complex fuel composition and heterogeneous spatial distribution, strongly affects the rate of fire spread (ROS) and fire intensity. Clarifying the influence of fuel bed structure on fire spread behavior is therefore of great significance to wildland fire management and prediction. The packing ratio is one of the key physical parameters describing a fuel bed; there is a threshold value of the packing ratio for ROS, but little is known about the controlling mechanism. In this study, to address this deficiency, a series of fire spread experiments were performed across discrete fuel beds composed of regularly arranged laser-cut cardboard pieces, with constant wind speed and packing ratios varying from 0.0125 to 0.0375. The experiments aim to explore how the relative importance of internal and surface heat transfer changes with packing ratio. The dependence of the measured ROS on the packing ratio was largely consistent with previous studies. The radiative and total heat flux data show that both internal and surface heat transfer are enhanced with increasing packing ratio (referred to as 'Stage 1'). This trend agrees well with the variation of the flame length: the results extracted from the video show that the flame length markedly increases with increasing packing ratio in Stage 1. Combustion intensity is suggested to increase, which, in turn, enhances the heat radiation. The heat flux data show that surface heat transfer appears to be more important than internal heat transfer (fuel preheating inside the fuel bed) in Stage 1. On the contrary, internal heat transfer dominates the fuel preheating mechanism when the packing ratio increases further (referred to as 'Stage 2'), because the surface heat flux remains almost stable with packing ratio in Stage 2. As for heat convection, the flow velocity was measured using Pitot tubes both inside and on the upper surface of the fuel bed during fire spread. Based on the gas velocity distribution ahead of the flame front, the airflow inside the fuel bed is found to be restricted in Stage 2, which should in theory reduce internal heat convection. However, the analysis indicates that it is not the influence of the internal flow on convection and combustion, but rather the decreased internal radiation per unit fuel, that is responsible for the decrease in ROS.
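
For readers unfamiliar with the two quantities, the following short Python sketch (with assumed bed geometry and hypothetical flame-front arrival times, not the study's measurements) illustrates how a packing ratio and a rate of spread are typically computed:

```python
# Packing ratio of a discrete fuel bed and rate of spread (ROS) estimated as
# the slope of flame-front position versus arrival time. Values are invented.
import numpy as np

# Packing ratio = fuel volume / fuel-bed bulk volume (standard definition)
def packing_ratio(fuel_volume_m3: float, bed_volume_m3: float) -> float:
    return fuel_volume_m3 / bed_volume_m3

print(packing_ratio(0.0025, 0.1))  # -> 0.025, inside the 0.0125-0.0375 range

# ROS from flame-front arrival times at markers along the bed
positions_m = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # marker positions
arrival_s = np.array([0.0, 42.0, 85.0, 126.0, 168.0])   # hypothetical times
ros, _ = np.polyfit(arrival_s, positions_m, 1)          # slope = ROS (m/s)
print(f"ROS = {ros * 1000:.1f} mm/s")
```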

Keywords: discrete fuel bed, fire spread, packing ratio, wildfire

Procedia PDF Downloads 108
226 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas

Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo

Abstract:

The aim of this paper is to estimate and forecast road traffic injuries over the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including the injury cross-effects of mode changes. The paper discusses possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998-2012 (N=4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, numbers of injuries and injury rates are calculated by type of road user (motorized versus non-motorized categories), by sex, age, and type of road. A projected increase of 25% in the total population of the six urban areas by 2025 will curb the continuing fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport (pedestrians on their way to and from public transport nodes) implies higher exposure for pedestrians and cyclists converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the growing urban population; in addition, the diminishing returns of the majority of road safety countermeasures have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing and decreasing injury figures may partly offset each other. The paper also discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and the further environmental impacts. In this regard, safety and environmental aims will as a rule concur; however, pursuing environmental goals (e.g., improved air quality, reduced CO2 emissions) by encouraging more cycling may generate more cycling injuries. The study received financial support from the Norwegian Research Council's Transport Safety Program.
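
The projection arithmetic described above can be illustrated with a minimal Python sketch; the baseline counts, growth factor, and per-mode shift factors below are invented for illustration only, not the paper's estimates:

```python
# Project injury counts under population growth and a modal shift from private
# vehicles to public transport and walking/cycling. All numbers are invented.
baseline_injuries = {"car": 100, "public_transport": 10, "walk_bike": 60}

pop_growth = 1.25                        # 25% population increase by 2025
mode_shift = {"car": 0.85,               # fewer car trips per capita
              "public_transport": 1.30,  # more bus/train trips
              "walk_bike": 1.20}         # more door-to-door walking/cycling

projected = {mode: n * pop_growth * mode_shift[mode]
             for mode, n in baseline_injuries.items()}

print(projected)
print("total change: "
      f"{sum(projected.values()) - sum(baseline_injuries.values()):+.0f}"
      " injuries")
```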

Keywords: road injuries, forecasting, reduced private car use, urban, Norway

Procedia PDF Downloads 211
225 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological, and functional properties. During singing, a large subset of HVCRA neurons, which project to the motor cortex analog RA, emit a single 6-10 ms burst of spikes at the same time during each rendition of song; a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations; and HVCINT interneurons fire tonically at high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
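
The scaffold of such conductance-based models is the classic Hodgkin-Huxley single-compartment neuron; the minimal Python sketch below uses the textbook squid-axon parameters rather than values fitted to HVC neurons, which would add cell-class-specific currents and synaptic coupling on top of this scaffold:

```python
# A single Hodgkin-Huxley neuron with Na+/K+/leak currents, integrated with
# forward Euler and driven by a step current; spikes counted by zero crossing.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm2 and mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV

a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                       # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
spikes = 0
for step in range(int(T / dt)):
    I_ext = 10.0 if 5.0 <= step * dt <= 45.0 else 0.0  # uA/cm2 pulse
    INa = gNa * m**3 * h * (V - ENa)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    V_new = V + dt * (I_ext - INa - IK - IL) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0 <= V_new:                   # upward zero crossing = one spike
        spikes += 1
    V = V_new
print(f"spikes in {T} ms: {spikes}")
```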

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 40
224 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity and weight gain are associated with an increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate aminotransferase-to-platelet ratio index (APRI) and the fibrosis-4 index (FIB-4) were primarily conceived as formulas capable of differentiating hepatitis from cirrhosis. Recently, they have found clinical use as measures of liver fibrosis and cardiovascular risk. However, their status in children has not yet been evaluated in detail. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with the values found in children with normal body mass index (N-BMI). A total of sixty-eight children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. The first group comprised thirty-five children with N-BMI, whose age- and sex-dependent BMI percentiles varied between 15 and 85. The second group comprised thirty-three OB children whose BMI percentile values were between 95 and 99. Anthropometric measurements and routine biochemical tests were performed, and from these parameters the indices BMI, APRI, and FIB-4 were calculated. Appropriate statistical tests were used to evaluate the study data; statistical significance was accepted at p<0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group; however, the difference between the groups was not statistically significant. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and significance level between APRI and FIB-4 were r=0.336 and p=0.065 in the N-BMI group, versus r=0.707 and p=0.001 in the OB group. Associations of these two indices with TRG showed that TRG was strongly correlated (p<0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. Triglycerides are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis and, further, liver fibrosis; they are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, together with the strong correlations of TRG with these indices in OB children, indicates a possible tendency towards the development of fatty liver in OB children and points to a potential risk of cardiovascular pathologies. The contrast between the APRI vs FIB-4 correlations in the N-BMI and OB groups (no correlation versus high correlation, respectively) may indicate the value of including age and alanine transaminase in the FIB-4 formula, in addition to the AST and PLT used by APRI.
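
Both indices have standard published formulas, transcribed directly in the Python sketch below; the input values in the example are illustrative, not study data:

```python
# Standard formulas: APRI = (AST/ULN) / platelets * 100, with AST in IU/L,
# the AST upper limit of normal (ULN) commonly taken as 40 IU/L, and
# platelets in 10^9/L; FIB-4 = (age * AST) / (platelets * sqrt(ALT)).
from math import sqrt

def apri(ast: float, platelets: float, ast_uln: float = 40.0) -> float:
    """AST-to-platelet ratio index."""
    return (ast / ast_uln) / platelets * 100.0

def fib4(age_years: float, ast: float, alt: float, platelets: float) -> float:
    """Fibrosis-4 index."""
    return (age_years * ast) / (platelets * sqrt(alt))

# Illustrative pediatric values only, not data from this study
print(f"APRI  = {apri(ast=28, platelets=310):.3f}")
print(f"FIB-4 = {fib4(age_years=10, ast=28, alt=22, platelets=310):.3f}")
```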

Keywords: APRI, children, FIB-4, obesity, triglycerides

Procedia PDF Downloads 321
223 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research, and a number of studies have attempted to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the breast cancer tumor process; however, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a joint growth model for the primary tumor and primary metastases, which may improve the accuracy of predicting breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV that reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the scope of application of CoM-IV; 4) implementing the model as a software tool. CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, whereas the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect the different growth periods of the primary tumor and primary metastases; b) forecast the period at which primary metastases appear; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. CoM-IV calculates the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (in days) for those periods. CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease, with (N1-3) or without (N0) regional lymph node metastases; b) it facilitates understanding of the period of appearance and manifestation of primary metastases.
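
The exponential-growth bookkeeping behind such models can be illustrated with a short Python sketch; the detection threshold, single-cell volume, and doubling time below are common textbook assumptions, not CoM-IV's fitted parameters:

```python
# With volume doubling time DT, a tumor grown from one cell takes a computable
# number of doublings to reach a detection threshold, which bounds the length
# of its 'non-visible period'. All numeric values here are assumptions.
from math import log2, pi

def sphere_volume_mm3(diameter_mm: float) -> float:
    return pi * diameter_mm**3 / 6

V_CELL_MM3 = 1e-6                  # commonly assumed single-cell volume
V_DETECT = sphere_volume_mm3(5.0)  # e.g. a 5 mm detectable diameter

def doublings_to_detection(v_detect_mm3: float = V_DETECT) -> float:
    return log2(v_detect_mm3 / V_CELL_MM3)

def non_visible_period_days(doubling_time_days: float) -> float:
    return doublings_to_detection() * doubling_time_days

print(f"doublings to detection: {doublings_to_detection():.1f}")
print(f"non-visible period at DT=120 d: {non_visible_period_days(120):.0f} days")
```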

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 313
222 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning

Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu

Abstract:

Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies able to exploit the large amount and variety of data generated during healthcare services every day. Over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the great advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions benefit greatly because the use of advanced technologies simultaneously improves the efficiency and efficacy of healthcare. Software as a medical device is stand-alone software intended to be used for one or more specific medical purposes: the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or providing information from in vitro specimens derived from the human body, and which does not achieve its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. This abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.

Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems

Procedia PDF Downloads 66
221 In silico Statistical Prediction Models for Identifying the Microbial Diversity and Interactions Due to Fixed Periodontal Appliances

Authors: Suganya Chandrababu, Dhundy Bastola

Abstract:

As in the gut, the subgingival microbiota plays a crucial role in oral hygiene, health, and cariogenic disease. Human activities like diet, antibiotic use, and periodontal treatments alter the bacterial communities, metabolism, and functions in the oral cavity, leading to a dysbiotic state and to changes in the plaques of orthodontic patients. Fixed periodontal appliances hinder oral hygiene and cause changes in dental plaque that influence the subgingival microbiota. However, the diversity and complexity of the microbial species pose a great challenge in understanding the taxa's community distribution patterns and their role in oral health. In this research, we analyze subgingival microbial samples from individuals with fixed dental appliances (metal or clear) using an in silico approach, employing exploratory hypothesis-driven multivariate and regression analysis to shed light on the microbial community and its functional fluctuations due to the dental appliances used, and to identify risks associated with complex disease phenotypes. Our findings confirm changes in oral microbiota composition due to the presence and type of fixed orthodontic devices. We identified seven main periodontal pathogen groups, including Bacteroidetes, Actinobacteria, Proteobacteria, Fusobacteria, and Firmicutes, whose abundances were significantly altered by the presence and type of fixed appliance used. With metal braces, the abundances of Bacteroidetes, Proteobacteria, Fusobacteria, Candidatus Saccharibacteria, and Spirochaetes significantly increased, while the abundances of Firmicutes and Actinobacteria decreased. In individuals with clear braces, the abundances of Bacteroidetes and Candidatus Saccharibacteria increased. The highest abundance value (p-value=0.004 < 0.05) was observed for Bacteroidetes in individuals with metal appliances; this phylum is associated with gingivitis, periodontitis, endodontic infections, and odontogenic abscesses. Overall, bacterial abundances decrease with clear braces and increase with metal braces. Regression analysis further validated the multivariate analysis of variance (MANOVA) results, supporting the hypothesis that the presence and type of fixed oral appliance significantly alter bacterial abundance and composition.
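
The analysis pattern described above, a MANOVA across phylum abundances followed by a per-phylum regression, can be sketched in Python with statsmodels; the file and column names below are hypothetical:

```python
# MANOVA testing whether phylum abundances differ by appliance type, followed
# by an OLS regression on one phylum as a cross-check.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

# Expected layout: one row per subject, relative abundance per phylum, plus
# the appliance group ('none', 'metal', 'clear')
df = pd.read_csv("subgingival_abundances.csv")  # hypothetical file

mv = MANOVA.from_formula(
    "Bacteroidetes + Firmicutes + Proteobacteria + Actinobacteria + Fusobacteria"
    " ~ appliance", data=df)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, etc.

# Follow-up: does appliance type predict Bacteroidetes abundance specifically?
ols = smf.ols("Bacteroidetes ~ C(appliance)", data=df).fit()
print(ols.summary().tables[1])
```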

Keywords: oral microbiota, statistical analysis, fixed orthodontic appliances, bacterial abundance, multivariate analysis, regression analysis

Procedia PDF Downloads 162
220 Intelligent Materials and Functional Aspects of Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape-memory alloys are a class of functional materials with a peculiar property known as the shape memory effect: these alloys return to a previously defined shape on heating after deformation in the low-temperature product phase region. The origin of this phenomenon lies in the fact that the material changes its internal crystalline structure with changing temperature. The shape memory effect is based on martensitic transitions, which govern the remarkable changes in the internal crystalline structure of the material. Martensitic transformation, a solid-state phase transformation, occurs thermally in the material on cooling from the high-temperature parent phase region and is governed by changes in the crystalline structure of the material. Shape memory alloys cycle between their original and deformed shapes at the bulk level on heating and cooling, and can therefore be used as thermal actuators or temperature-sensitive elements. Martensitic transformations usually occur through the cooperative movement of atoms by means of lattice-invariant shears. In the thermally induced case, the ordered parent phase structures turn into twinned structures through this movement in a crystallographic manner. The twinned martensites turn into twinned or oriented martensite when the material is stressed in the low-temperature martensitic phase. The detwinned martensite turns into the parent phase structure on first heating (the first cycle), and the parent phase structures turn into the twinned and detwinned structures, respectively, in the irreversible and reversible memory cases. Shape memory materials are important and useful in many interdisciplinary fields such as medicine, pharmacy, bioengineering, metallurgy, and many branches of engineering; the choice of material, as well as of the actuator and sensor to combine with the host structure, is essential in developing such materials and structures. Copper-based alloys exhibit this property in the metastable beta-phase region, which has bcc-based structures in the high-temperature parent phase field; on cooling, these structures martensitically turn into layered complex structures with lattice twinning, following two ordered reactions. The martensitic transition occurs as self-accommodated martensite with inhomogeneous shears: lattice-invariant shears in two opposite <110>-type directions on the {110}-type plane of the austenite matrix, which is the basal plane of the martensite. This shear can be called a {110}<110>-type mode and gives rise to the formation of layered structures, such as 3R, 9R, or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys with the chemical compositions (in weight per cent) Cu-26.1%Zn-4%Al and Cu-11%Al-6%Mn. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the locations and intensities of the diffraction peaks change with aging time at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other.

Keywords: shape memory effect, martensite, twinning, detwinning, self-accommodation, layered structures

Procedia PDF Downloads 406