Search results for: score prediction
672 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling
Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari
Abstract:
A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ||w|| = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists of seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists of setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher-order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k = 1, ..., K), namely tk = XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k = 1, ..., K). The solution to this problem is given by the eigenvector q of YY'XX'YY', where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis
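To make the construction above concrete, the following is a minimal NumPy sketch of the first Rd-PLS component and the deflation step, assuming centered X and Y whose rows index the same individuals; the function names and the random data are illustrative, not code from the authors.

```python
import numpy as np

def rd_pls_component(X, Y):
    """First Rd-PLS component: q is the leading eigenvector of YY'XX'YY'
    (X and Y are assumed column-centered, rows = same individuals)."""
    Gx = X @ X.T            # n x n Gram matrix of X
    Gy = Y @ Y.T            # n x n Gram matrix of Y
    M = Gy @ Gx @ Gy        # YY'XX'YY' (symmetric since Gx, Gy are symmetric)
    eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)   # symmetrize for numerical stability
    q = eigvecs[:, np.argmax(eigvals)]                 # ||q|| = 1
    u = Gy @ q              # latent variable of Y
    t = Gx @ Gy @ q         # latent variable of X, "projection" of u onto span(X)
    return q, t, u

def deflate(Z, t):
    """Deflate a data block with respect to the latent variable t."""
    t = t.reshape(-1, 1)
    return Z - t @ (t.T @ Z) / (t.T @ t)

# Illustrative usage on random centered data
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10)); X -= X.mean(axis=0)
Y = rng.standard_normal((50, 3));  Y -= Y.mean(axis=0)
q, t, u = rd_pls_component(X, Y)
X1, Y1 = deflate(X, t), deflate(Y, t)   # higher-order components come from the deflated blocks
```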
Procedia PDF Downloads 147
671 Exploring the Interplay of Attention, Awareness, and Control: A Comprehensive Investigation
Authors: Venkateswar Pujari
Abstract:
This study investigates the complex interplay between control, awareness, and attention in human cognitive processes. Attention, awareness, and control are fundamental elements of cognitive functioning that play a significant role in shaping perception, decision-making, and behavior. Understanding how they interact can help us better understand how our minds work and may advance cognitive science and its therapeutic applications. The study uses an empirical methodology to examine the relationships between attention, awareness, and control by integrating different experimental paradigms and neuropsychological tests. To ensure the generalizability of findings, a wide sample of participants is chosen, including people with various cognitive profiles and ages. The study is structured into four primary parts, each of which focuses on one component of how attention, awareness, and control interact: 1. Evaluation of Attentional Capacity and Selectivity: In this stage, participants complete established attention tests, including the Stroop task and visual search tasks. 2. Evaluation of Awareness Degrees: In the second stage, participants' degrees of conscious and unconscious awareness are assessed using perceptual awareness tasks such as masked priming and binocular rivalry tasks. 3. Investigation of Cognitive Control Mechanisms: In the third phase, response inhibition, cognitive flexibility, and working memory capacity are investigated using tasks such as the Wisconsin Card Sorting Test and the Go/No-Go paradigm. 4. Results Integration and Analysis: Data from all phases are integrated and analyzed in the final phase. To investigate potential links and predictive relationships between attention, awareness, and control, correlational and regression analyses are carried out. The study's conclusions shed light on the intricate relationships that exist between control, awareness, and attention throughout cognitive function. The findings may have consequences for cognitive psychology, neuroscience, and clinical psychology by providing new understanding of cognitive dysfunctions linked to deficiencies in attention, awareness, and control systems.
Keywords: attention, awareness, control, cognitive functioning, neuropsychological assessment
Procedia PDF Downloads 91
670 Comparison and Improvement of the Existing Cone Penetration Test Results: Shear Wave Velocity Correlations for Hungarian Soils
Authors: Ákos Wolf, Richard P. Ray
Abstract:
Due to the introduction of Eurocode 8, structural design for seismic and dynamic effects has become more significant in Hungary. This has emphasized the need for more effort to describe the behavior of structures under these conditions. Soil conditions have a significant effect on the response of structures by modifying the stiffness and damping of the soil-structure system and by modifying the seismic action as it reaches the ground surface. Shear modulus (G) and shear wave velocity (vs), which are often measured in the field, are the fundamental dynamic soil properties for foundation vibration problems, liquefaction potential and earthquake site response analysis. There are several laboratory and in-situ measurement techniques to evaluate dynamic soil properties, but unfortunately, they are often too expensive for general design practice. However, a significant number of correlations have been proposed to determine shear wave velocity or shear modulus from Cone Penetration Tests (CPT), which are used more and more in geotechnical design practice in Hungary. This allows the designer to analyze and compare CPT and seismic test results in order to select the best correlation equations for Hungarian soils and to improve the recommendations for Hungarian geologic conditions. Based on a literature review, as well as research experience in Hungary, the influence of various parameters on the accuracy of the results will be shown. This study can serve as a basis for selecting and modifying correlation equations for Hungarian soils. Test data are taken from seven locations in Hungary with similar geologic conditions. The shear wave velocity values were measured by seismic CPT. Several factors, including soil type, behavior index, measurement depth and geologic age, are analyzed for their effect on the accuracy of predictions. The final results show an improved prediction method for Hungarian soils.
Keywords: CPT correlation, dynamic soil properties, seismic CPT, shear wave velocity
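As an illustration of how such CPT-based correlations can be built, the sketch below fits a hypothetical power-law relation between shear wave velocity, cone resistance and soil behaviour index by least squares in log space; the functional form, symbols and synthetic data are assumptions for illustration, not the correlation equations evaluated in the study.

```python
import numpy as np

# Hypothetical power-law correlation: vs = a * qt^b * Ic^c  (qt in kPa, vs in m/s),
# fitted in log space by ordinary least squares on paired seismic-CPT data.
def fit_vs_correlation(qt, Ic, vs_measured):
    A = np.column_stack([np.ones_like(qt), np.log(qt), np.log(Ic)])
    coef, *_ = np.linalg.lstsq(A, np.log(vs_measured), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]          # a, b, c

def predict_vs(qt, Ic, a, b, c):
    return a * qt**b * Ic**c

# Synthetic data standing in for the seismic CPT measurements
rng = np.random.default_rng(1)
qt = rng.uniform(1e3, 2e4, 200)             # corrected cone resistance, kPa
Ic = rng.uniform(1.5, 3.5, 200)             # soil behaviour type index
vs = 2.0 * qt**0.4 * Ic**0.8 * rng.lognormal(0, 0.1, 200)
a, b, c = fit_vs_correlation(qt, Ic, vs)
residual = np.log(vs) - np.log(predict_vs(qt, Ic, a, b, c))
print(a, b, c, residual.std())              # accuracy judged by the log-residual scatter
```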
Procedia PDF Downloads 246
669 Predictors of Clinical Failure After Endoscopic Lumbar Spine Surgery During the Initial Learning Curve
Authors: Daniel Scherman, Daniel Madani, Shanu Gambhir, Marcus Ling Zhixing, Yingda Li
Abstract:
Objective: This study aims to identify clinical factors that may predict failed endoscopic lumbar spine surgery, to guide surgeons with patient selection during the initial learning curve. Methods: This is an Australasian prospective analysis of the first 105 patients to undergo lumbar endoscopic spine decompression by 3 surgeons. Modified MacNab outcomes, the Oswestry Disability Index (ODI) and Visual Analogue Scale (VAS) scores were used to evaluate clinical outcomes at 6 months postoperatively. Descriptive statistics, ANOVA and t-tests were performed to identify statistically significant (p<0.05) associations between variables using GraphPad Prism v10. Results: Patients undergoing endoscopic lumbar surgery via an interlaminar or transforaminal approach have overall good/excellent modified MacNab outcomes and a significant reduction in post-operative VAS and ODI scores. Regardless of the anatomical location of disc herniations, good/excellent modified MacNab outcomes and significant reductions in VAS and ODI were reported post-operatively; however, this was not the case in patients with calcified disc herniations. Patients with central and foraminal stenosis overall reported poor/fair modified MacNab outcomes; however, there were significant reductions in VAS and ODI scores post-operatively. Patients with subarticular stenosis or an associated spondylolisthesis reported good/excellent modified MacNab outcomes and significant reductions in VAS and ODI scores post-operatively. Patients with disc herniation and concurrent degenerative stenosis had generally poor/fair modified MacNab outcomes. Conclusion: The outcomes of endoscopic spine surgery are encouraging, with a low complication and reoperation rate. However, patients with calcified disc herniations, central canal stenosis or a disc herniation with concurrent degenerative stenosis present challenges during the initial learning curve and may benefit from traditional open or other minimally invasive techniques.
Keywords: complications, lumbar disc herniation, lumbar endoscopic spine surgery, predictors of failed endoscopic spine surgery
Procedia PDF Downloads 155
668 Experimental Pain Study Investigating the Distinction between Pain and Relief Reports
Authors: Abeer F. Almarzouki, Christopher A. Brown, Richard J. Brown, Anthony K. P. Jones
Abstract:
Although relief is commonly assumed to be a direct reflection of pain reduction, it seems to be driven by complex emotional interactions in which pain reduction is only one component. For example, termination of a painful/aversive event may be relieving and rewarding. Accordingly, this study investigated whether terminating an aversive negative prediction of pain would be reflected in a greater relief experience, with a view to separating the effects of the manipulation on pain and relief. We used an aversive conditioning paradigm to investigate the perception of relief in an aversive (threat) vs. a positive context. Participants received positive predictors of a non-painful outcome, which were presented within either a congruent positive (non-painful) context or an incongruent threat (painful) context that had been previously conditioned; the cues were followed by identical laser stimuli in both conditions. Participants were asked to rate the perceived intensity of pain as well as their perception of relief in response to the cue predicting the outcome. Results demonstrated that participants reported more pain in the aversive context compared to the positive context. Conversely, participants reported more relief in the aversive context compared to the positive context. The rating of relief in the threat context was not correlated with pain reports. The results suggest that relief is not dependent on pain intensity. Consistent with this, relief in the threat context was greater than that in the positive expectancy condition, while the opposite pattern was obtained for the pain ratings. The value of relief in this study is better appreciated in the context of an impending negative threat, which is apparent in the higher pain ratings in the prior negative expectancy condition compared to the positive expectancy condition. Moreover, the more threatening the context (as manifested by higher unpleasantness and higher state anxiety scores), the more the relief is appreciated. The study highlights the importance of exploring relief and pain intensity separately when monitoring or evaluating pain-related suffering. The results also illustrate that the perception of painful input may largely be shaped by the context and not necessarily by the stimulus itself.
Keywords: aversive context, pain, predictions, relief
Procedia PDF Downloads 140
667 Computational System for the Monitoring Ecosystem of the Endangered White Fish (Chirostoma estor estor) in the Patzcuaro Lake, Mexico
Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez
Abstract:
White fish (Chirostoma estor estor) is an endemic species that inhabits Patzcuaro Lake, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. It has undergone an immense depopulation due to overfishing, contamination and eutrophication of the lake water, which could result in the extinction of this important species. This work proposes a new computational model for monitoring and assessment of critical environmental parameters of the white fish ecosystem. Based on an Analytic Hierarchy Process, a mathematical model is built by assigning weights to each environmental parameter depending on its importance for water quality in the ecosystem. Then, an advanced system for the monitoring, analysis and control of water quality is built using the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition level of the water quality in the Chirostoma estor ecosystem (excellent, good, regular or poor), supporting effective decision-making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth rates of this species, where the water quality tends to have regular levels. This system emerges as a suitable tool for water management, where future laws for white fish fishery regulations will result in a reduction of the mortality rate in the early stages of development of the species, which represent the most critical phase. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and its inhabitants, since white fish is an important food and source of income for the region, but the species is endangered.
Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish
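As a rough illustration of the weighting scheme described above, the following sketch derives parameter weights from an Analytic Hierarchy Process pairwise comparison matrix and combines them into a single water-quality score; the comparison values, parameter sub-scores and classification thresholds are illustrative assumptions, not those implemented in the LabVIEW system.

```python
import numpy as np

# Analytic Hierarchy Process: derive parameter weights from a pairwise
# comparison matrix (principal eigenvector), then combine normalized
# sub-scores into a single water-quality score for the ecosystem.
params = ["temperature", "pH", "dissolved_oxygen"]
# Hypothetical pairwise judgements on the Saaty scale (DO judged most important)
A = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 0.5],
              [2.0, 2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                            # AHP weights, summing to 1

def quality_score(subscores, weights):
    """subscores: per-parameter quality in [0, 1]; returns the global score."""
    return float(np.dot(weights, subscores))

def classify(score):
    # Illustrative thresholds for the four condition levels
    if score >= 0.8: return "excellent"
    if score >= 0.6: return "good"
    if score >= 0.4: return "regular"
    return "poor"

s = quality_score([0.7, 0.9, 0.5], w)      # e.g. 26 °C, pH 7.8, 4.5 mg/L dissolved O2
print(dict(zip(params, np.round(w, 3))), round(s, 3), classify(s))
```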
Procedia PDF Downloads 326
666 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques
Authors: Umit Cali
Abstract:
The integration of intermittent energy sources like wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting power output for wind power operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a time horizon of a few hours and are therefore most relevant to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify a statistical or neural network model, or set of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help us to amalgamate the information in very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts are beneficial for wind plant operators, utility operators, and utility customers. An accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study is also dedicated to an in-depth consideration of issues such as the comparison of day-ahead and short-term wind power forecasting results, determination of the accuracy of the wind power prediction and the evaluation of the energy-economic and technical benefits of wind power forecasting.
Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids
Procedia PDF Downloads 519
665 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks
Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee
Abstract:
Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant implications for economic conditions and the environment. In order to ensure pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful tool in critical infrastructure monitoring systems for water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, measurements and abrupt-change statistical analysis. The leak quantification problem is to estimate, given some observations about a network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a system network. The pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to determine the location of a leak introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)
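The correlation-difference model itself is not detailed in the abstract; as a point of reference, the sketch below shows the standard cross-correlation approach to the same task, estimating the time delay of the leak noise between two de-noised node signals and converting it into a leak position. The sensor spacing, wave speed and synthetic signals are illustrative assumptions.

```python
import numpy as np

def leak_position(sig_a, sig_b, fs, sensor_spacing, wave_speed):
    """Locate a leak between two acoustic sensors from the time delay of its noise.

    sig_a, sig_b : de-noised acoustic signals at nodes A and B (same length)
    fs           : sampling rate [Hz]
    Returns the distance of the leak from sensor A [m].
    """
    sig_a = sig_a - sig_a.mean()
    sig_b = sig_b - sig_b.mean()
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)    # > 0 if the noise reaches A later than B
    tau = lag / fs                              # t_A - t_B  [s]
    # t_A - t_B = (d_A - d_B)/c with d_A + d_B = L, hence d_A = (L + c*tau)/2
    return 0.5 * (sensor_spacing + wave_speed * tau)

# Synthetic check: leak 120 m from node A on a 400 m section (water-filled pipe, c ~ 1400 m/s)
fs, L, c = 10_000, 400.0, 1400.0
rng = np.random.default_rng(2)
noise = rng.standard_normal(4000)                     # 0.4 s of broadband leak noise
shift = int(round((120.0 - (L - 120.0)) / c * fs))    # (t_A - t_B) in samples
sig_a = np.roll(noise, shift) + 0.1 * rng.standard_normal(noise.size)
sig_b = noise + 0.1 * rng.standard_normal(noise.size)
print(leak_position(sig_a, sig_b, fs, L, c))          # ~ 120 m
```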
Procedia PDF Downloads 114
664 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM
Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari
Abstract:
Computations of the two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the oscillating one. The incompressible Navier-Stokes equations, in conjunction with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modelling, k-ω-SST with a low-Reynolds correction is employed to capture the unsteady phenomena occurring in the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against the corresponding experimental data possessed by the authors. The results indicate that implementing the mentioned turbulence model leads to accurate prediction of the static stall angle for the stationary airfoil and of flow separation, the dynamic stall phenomenon, and reattachment of the flow on the surface of the airfoil for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes is formed at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and evolution of the trailing-edge vortex in the near- and post-stall regions, where this process determines the dynamic stall phenomenon.
Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine
Procedia PDF Downloads 204
663 Statistical Modeling and by Artificial Neural Networks of Suspended Sediment Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)
Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini
Abstract:
Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to reach disturbing proportions in Northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the necessary actions for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight regressive power relationships that can explain the suspended solid flow by the measured liquid flow. The study also strives to find artificial neural network models linking the flow, month and precipitation parameters with the solid flow. The results obtained show that the power function of the solid transport rating curve and artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, in a fairly conclusive manner, a neural network model with four input parameters: the liquid flow Q, the month and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) the daily solid flows even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of the average daily liquid flows and daily precipitation since 1953/1954.
Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria
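A minimal sketch of the power-law rating curve mentioned above, fitted by regression in log space, is given below; the coefficients and data are illustrative, and the neural network variant would use the liquid flow, the month and the two precipitation series as its four inputs in an analogous way.

```python
import numpy as np

# Sediment rating curve: solid discharge Qs = a * Q^b, fitted by linear
# regression in log space on paired (liquid flow, solid flow) observations.
def fit_rating_curve(Q, Qs):
    b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
    return np.exp(log_a), b

def predict_qs(Q, a, b):
    return a * Q**b

# Illustrative data standing in for the Wadi El-Abtal gauging records
rng = np.random.default_rng(3)
Q = rng.lognormal(mean=1.0, sigma=1.0, size=500)        # liquid flow, m3/s
Qs = 0.8 * Q**1.6 * rng.lognormal(0, 0.3, size=500)     # solid flow, kg/s, with scatter
a, b = fit_rating_curve(Q, Qs)
print(f"Qs = {a:.2f} * Q^{b:.2f}")                      # recovers values close to 0.8 and 1.6
```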
Procedia PDF Downloads 104
662 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics that support energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying; they change with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to improve the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation. Therefore, it is very difficult to select the optimal model structure. In general, if the model order is increased, the model accuracy improves. However, a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
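For reference, the following is a minimal discrete-time sketch of the second-order equivalent circuit described above (a series resistor, two RC branches and an SOC-controlled voltage source) driven by an HPPC-style pulse; the parameter values, sampling period and OCV curve are illustrative assumptions, and the estimation itself would then be run on the simulated or measured current-voltage data.

```python
import numpy as np

# Second-order ECM: V = OCV(SOC) - I*R0 - V1 - V2, where each RC branch obeys
# dV_i/dt = -V_i/(R_i*C_i) + I/C_i (discharge current taken positive).
def simulate_ecm(I, dt, capacity_Ah, soc0,
                 R0=0.01, R1=0.015, C1=2000.0, R2=0.03, C2=20000.0):
    ocv = lambda soc: 3.2 + 0.2 * soc          # crude LiFePO4-like OCV, placeholder only
    soc, v1, v2 = soc0, 0.0, 0.0
    a1, a2 = np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))
    out = []
    for i_k in I:
        v1 = a1 * v1 + R1 * (1 - a1) * i_k     # exact discretization for constant current over dt
        v2 = a2 * v2 + R2 * (1 - a2) * i_k
        soc -= i_k * dt / (capacity_Ah * 3600.0)
        out.append(ocv(soc) - i_k * R0 - v1 - v2)
    return np.array(out)

# HPPC-style pulse: rest, 1C discharge pulse, rest, sampled at 1 s
dt, cap = 1.0, 2.6
I = np.concatenate([np.zeros(60), np.full(10, 2.6), np.zeros(300)])
V = simulate_ecm(I, dt, cap, soc0=0.8)
# The ECM parameters can then be recovered from (I, V) by least squares, RLS or
# a Kalman filter, applied per frequency band as discussed above.
```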
Procedia PDF Downloads 150
661 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson's disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a movement disorder, it may greatly benefit from the antenna's near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna, mounted with cloth on the dorsal side of the wrist, to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved optimal performance in terms of an accuracy of 88.64%, an F1-score of 88.54, and a recall of 90.46%, with a latency of 33 seconds per epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
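As an illustration of the preprocessing step described above, the sketch below converts a 1-D reflection-coefficient series into a 2-D time-frequency image with a bank of complex Gabor wavelets; the sampling rate, frequency range and synthetic signal are illustrative assumptions, not the recorded patient data.

```python
import numpy as np

def gabor_wavelet_image(x, fs, freqs, sigma_cycles=3.0):
    """Convert a 1-D reflection-coefficient series into a 2-D time-frequency
    image using a bank of complex Gabor (Morlet-like) wavelets."""
    x = (x - x.mean()) / (x.std() + 1e-12)           # normalization step
    rows = []
    for f in freqs:
        sigma_t = sigma_cycles / (2 * np.pi * f)     # envelope width for this frequency
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        kernel = np.exp(-t**2 / (2 * sigma_t**2)) * np.exp(2j * np.pi * f * t)
        rows.append(np.abs(np.convolve(x, kernel, mode="same")))
    return np.array(rows)                            # shape: (len(freqs), len(x))

# Illustrative reflection-coefficient magnitude series during one wrist exercise
fs = 100.0                                           # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(6)
gamma = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)  # 5 Hz tremor-like component
image = gabor_wavelet_image(gamma, fs, freqs=np.linspace(1, 15, 32))
print(image.shape)   # (32, 1000) image, ready to feed the lightweight CNN
```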
Procedia PDF Downloads 12
660 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model
Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik
Abstract:
In a fast-growing region, conversion of agricultural lands that are surrounded by new development sites will occur sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). As the fringe of Indonesia's capital city, this area faces an unavoidable process of rapid land conversion. The land conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use that maximizes its value. In order to maintain land for agricultural activity or conservation, some effort is needed to keep the land value of these activities as high as possible. In this case, knowledge regarding the functional relationship between land value and its driving forces is necessary. In a fast-growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past decisions on its use; it is also affected by local characteristics and by the observed surrounding land use (externalities) from the previous period. The effect of each factor on land value has dynamic and spatial dimensions; an empirical spatial dynamic land value model is therefore more useful to capture them. The model is useful to test and to estimate the extent of land use externalities on land value in the short run as well as in the long run. It serves as a basis to formulate an effective urban growth management policy. This study applies the model to the case of land value in the fringe of the Jakarta Metropolitan area. The model is further used to predict the effect of externalities on land value, in the form of a prediction map. For the case of Jakarta's fringe, there is some evidence of the significance of neighborhood urban activity (negative externalities), the previous land value and local accessibility on land value. The effects accumulate dynamically over the years, but they fully affect the land value only after six years.
Keywords: growth management, land use externalities, land value, spatial panel dynamic
Procedia PDF Downloads 257
659 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield
Authors: Sonia Barbouchi, Meriem Samcha
Abstract:
Scale deposition during water injection into the aquifer of oil reservoirs is a serious problem experienced in the oil production industry. One of the primary causes of scale formation and injection well plugging is the mixing of two waters that are incompatible. Considered individually, the waters may be quite stable at system conditions and present no scale problems. However, once they are mixed, reactions between ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis was carried out on scale samples collected from the blocked well. Two samples, collected from the formation water and the injected water, were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography and other standard laboratory techniques. The results of the complete water analyses were used as input parameters to determine the scaling tendency. Saturation index values for CaCO3, CaSO4, BaSO4 and SrSO4 scales were calculated for the water mixtures at different shares, under various conditions of temperature, using a computerized scale prediction model. The compatibility study results showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed the compatibility study results, since it proved that the analysed deposits consisted predominantly of barite with minor galena. At the studied temperature conditions, the tendency for barite scale increases significantly with the increase of the fresh water share in the mixture. The future scale inhibition and removal strategies to be implemented in the concerned oilfield are derived in large part from the results of the present study.
Keywords: compatibility study, produced water, scaling, water injection
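A minimal sketch of the saturation-index calculation that underlies such compatibility screening is shown below for barite in mixtures of the two waters; the ion concentrations, the Ksp value and the neglect of activity coefficients and temperature corrections are simplifying assumptions rather than the settings of the scale prediction model used in the study.

```python
import math

# Saturation index for barite: SI = log10( {Ba2+}{SO4 2-} / Ksp ).
# Activities are approximated by molar concentrations (no ionic-strength
# correction), and Ksp is taken at 25 C; both are simplifications.
KSP_BARITE = 10 ** -9.97              # approximate solubility product of BaSO4

def mix(conc_formation, conc_injection, share_injection):
    """Concentration in a mixture with the given fresh-water (injection) share."""
    return (1 - share_injection) * conc_formation + share_injection * conc_injection

def si_barite(ba_mol_l, so4_mol_l):
    return math.log10(ba_mol_l * so4_mol_l / KSP_BARITE)

# Illustrative waters: Ba-rich formation water, sulphate-bearing fresh water
formation = {"Ba": 2.0e-4, "SO4": 1.0e-6}     # mol/L
injection = {"Ba": 1.0e-7, "SO4": 5.0e-4}

for share in (0.2, 0.5, 0.8):
    ba = mix(formation["Ba"], injection["Ba"], share)
    so4 = mix(formation["SO4"], injection["SO4"], share)
    print(share, round(si_barite(ba, so4), 2))   # SI > 0 indicates a scaling tendency
```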
Procedia PDF Downloads 169
658 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that is formed inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion in the burden at G Blast Furnace disturbed the furnace stability. Under such circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in the heat loss in the lower zone of the furnace. The fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse V-shape and poorly when it had a W-shape. This model helped to control the heat loss, optimize the burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process are discussed in this paper.
Keywords: agglomerate, blast furnace, permeability, softening-melting
Procedia PDF Downloads 253
657 Efficacy Of Tranexamic Acid On Blood Loss After Primary Total Hip Replacement: A Case-control Study In 154 Patients
Authors: Fedili Benamar, Belloulou Mohamed Lamine, Ouahes Hassane, Ghattas Samir
Abstract:
Introduction: Perioperative blood loss is a frequent cause of complications in total hip replacement (THR). The present prospective study assessed the efficacy of tranexamic acid (Exacyl®) in reducing blood loss in primary THR. Hypothesis: Tranexamic acid reduces blood loss in THR. Material and methods: This is a prospective randomized study on the effectiveness of Exacyl (tranexamic acid) in total hip replacement surgery performed with a standardized technique between 2019 and September 2022. It involved 154 patients, of whom 84 received a single injection of Exacyl (group 1) at a dosage of 10 mg/kg over 20 minutes during the perioperative period. All patients received postoperative thromboprophylaxis with enoxaparin 0.4 ml subcutaneously. All patients were admitted to the post-interventional intensive care unit for 24 hours for monitoring and pain management as per the service protocol. Results: Of the 154 patients, 84 received a single injection of Exacyl (group 1) and 70 did not receive Exacyl perioperatively (group 2). The average age was 57 ± 15 years. The distribution by gender was nearly equal, with 56% male and 44% female. The distribution according to the ASA score was as follows: 20.2% ASA1, 82.3% ASA2, and 17.5% ASA3. There was a significant difference in the average volume of intraoperative and postoperative bleeding during the first 48 hours. The average bleeding volume for group 1 (Exacyl) was 614 ± 228 ml, while the average bleeding volume for group 2 was 729 ± 300 ml, with a chi-square statistic of 6.35 and a p-value < 0.01, which is highly significant. The ANOVA test showed an F-statistic of 7.11 and a p-value of 0.008, and a Bartlett test revealed a chi-square of 6.35 and a p-value < 0.01. In group 1 (patients who received Exacyl), 73% had bleeding of less than 750 ml (group A) and 26% had bleeding exceeding 750 ml (group B). In group 2 (patients who did not receive Exacyl perioperatively), 52% had bleeding of less than 750 ml (group A) and 47% had bleeding exceeding 750 ml (group B). Thus, the use of Exacyl reduced perioperative bleeding and specifically decreased the risk of severe bleeding exceeding 750 ml by 43%, with a relative risk (RR) of 1.37 and a p-value < 0.01. The transfusion rate was 1.19% in group 1 (Exacyl), whereas it was 10% in group 2 (no Exacyl). It can be stated that the use of Exacyl resulted in a reduction in perioperative blood transfusion, with an RR of 0.1 and a p-value of 0.02. Conclusions: The use of Exacyl significantly reduced perioperative bleeding in this type of surgery.
Keywords: tranexamic acid, blood loss, anesthesia, total hip replacement, surgery
Procedia PDF Downloads 77
656 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality
Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy
Abstract:
Introduction and Objective: Tumour staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) with regard to mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry (IHC) were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved; the patients were then reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died of causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were taken on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to urological practice and blinded to the patients' clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining), 1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). The Ki67 proliferation index (PI) was graded as low, borderline or high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (range 2-216). With a mean follow-up of 78.6 ± 31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. The Kaplan-Meier curve was used for survival analysis, and groups were compared using the log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. Prospective studies with a larger sample size are needed.
Keywords: Wilms’ tumour, nephroblastoma, urology, survival
Procedia PDF Downloads 67
655 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data
Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill
Abstract:
Health-care management systems are of great interest because they provide straightforward and fast management of all aspects relating to a patient, not necessarily medical ones. Moreover, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for their storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted by forecasting the independent variables. Forecast control must be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; the consumer preference models provide a platform whereby product developers can decide on the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent research shows that these fuzzy regression methods are commonly used to model customer preferences. We propose to test the strength of an exponential regression model against a linear regression model.
Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function
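A minimal sketch of the proposed comparison, fitting an exponential regression model and a linear regression model to the same series and comparing their goodness of fit, is given below; the synthetic health-care series is an illustrative assumption.

```python
import numpy as np

# Compare an exponential model y = a*exp(b*t) (fitted as a linear model on
# log y) with a plain linear model y = c + d*t on the same series.
rng = np.random.default_rng(4)
t = np.arange(60, dtype=float)                     # e.g. a monthly admissions index
y = 20.0 * np.exp(0.03 * t) * rng.lognormal(0, 0.05, t.size)

# Exponential fit (log-linear least squares)
b, log_a = np.polyfit(t, np.log(y), 1)
y_exp = np.exp(log_a) * np.exp(b * t)

# Linear fit
d, c = np.polyfit(t, y, 1)
y_lin = c + d * t

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("R2 exponential:", round(r2(y, y_exp), 3))
print("R2 linear     :", round(r2(y, y_lin), 3))   # lower for exponential-type data
```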
Procedia PDF Downloads 280
654 Inhibitory Effect of Coumaroyl Lupendioic Acid on Inflammation Mediator Generation in Complete Freund’s Adjuvant-Induced Arthritis
Authors: Rayhana Begum, Manju Sharma
Abstract:
Careya arborea Roxb., which belongs to the Lecythidaceae family, is traditionally used for tumors, as an anthelmintic and astringent, as an antidote to snake venom, and for bronchitis, epileptic fits, inflammation, skin disease, diarrhea, dysentery with bloody stools, dyspepsia, ulcer, toothache and ear pain. The present study focused on investigating the anti-arthritic effect of coumaroyl lupendioic acid, a new lupane-type triterpene from Careya arborea stem bark, in a chronic inflammatory model, and further assessing its possible mechanism through the modulation of inflammatory biomarkers. Arthritis was induced by injecting 0.1 ml of Complete Freund's Adjuvant (5 mg/ml of heat-killed Mycobacterium tuberculosis) into the subplantar region of the left hind paw. Treatment with coumaroyl lupendioic acid (10 and 20 mg/kg, p.o.) and reference drugs (indomethacin and dexamethasone at a dose of 5 mg/kg, p.o.) was started on the day of induction and continued up to 28 days. The progression of arthritis was evaluated by measuring paw volume, tibio-tarsal joint diameter, and the arthritic index. The effect of coumaroyl lupendioic acid (CLA) on the production of PGE₂, NO, MPO, NF-κB, TNF-α, IL-1β, and IL-6 was also assessed at the serum level as well as in inflamed paw tissue. In addition, ankle joints and spleens were collected and prepared for histological examination. CLA in inflamed rats resulted in significant amelioration of paw edema, tibio-tarsal joint swelling and arthritic score as compared to the CFA control group. The results indicated that the CLA-treated groups had markedly decreased levels of inflammatory mediators (PGE₂, NO, MPO and NF-κB) and down-regulated production of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) in paw tissue homogenates as well as in serum. However, the more pronounced effect was observed in the inflamed paw tissue homogenates. CLA also revealed a protective effect on the tibio-tarsal joint cartilage and spleen. These results suggest that coumaroyl lupendioic acid inhibits inflammation, possibly through the suppression of the cascade of pro-inflammatory mediators via the down-regulation of NF-κB.
Keywords: Complete Freund’s adjuvant, coumaroyl lupendioic acid, pro-inflammatory cytokines, prostaglandin E2
Procedia PDF Downloads 143
653 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
A blast furnace is a counter-current process in which the burden descends from the top and hot gases ascend from the bottom, chemically reducing iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and a high coal injection rate with adverse raw materials like high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, values that are close to world benchmarks, and to expand profitability. To sustain this regime, elimination of blast furnace irregularities such as hanging, channeling and scaffolding is essential. This paper illustrates how a zero-hanging spell was sustained for three consecutive years under low-coke-rate operation through improvement in burden characteristics and burden distribution, changes in the slag regime, casting practices and adequate automation of furnace operation. Models have been created to better understand and improve the blast furnace process. A model has been developed to predict how to maintain slag viscosity in the desired range to attain proper burden permeability. A channeling prediction model has also been developed to recognize channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace of Tata Steel, Jamshedpur, and thus in achieving process stability for the last three years.
Keywords: hanging, channelling, blast furnace, coke
Procedia PDF Downloads 197
652 Transition in Protein Profile, Maillard Reaction Products and Lipid Oxidation of Flavored Ultra High Temperature Treated Milk
Authors: Muhammad Ajmal
Abstract:
Thermal processing and subsequent storage of ultra-heat-treated (UHT) milk lead to alterations in the protein profile, the Maillard reaction and lipid oxidation. The concentration of carbohydrates in the normal and flavored versions of UHT milk is considerably different. Transitions in the protein profile, Maillard reaction and lipid oxidation in UHT flavored milk were determined for 90 days at ambient conditions and analyzed at 0, 45 and 90 days of storage. The protein profile, hydroxymethylfurfural, furosine, Nε-carboxymethyl-L-lysine, fatty acid profile, free fatty acids, peroxide value and sensory characteristics were determined. After 90 days of storage, fat, protein and total solids contents and pH were significantly lower than the initial values determined at day 0. Compared to the protein profile of normal UHT milk, more pronounced changes were recorded in the different protein fractions of UHT flavored milk at 45 and 90 days of storage. The tyrosine content of flavored UHT milk at 0, 45 and 90 days of storage was 3.5, 6.9 and 15.2 µg tyrosine/ml, respectively. After 45 days of storage, the decline in αs1-casein, αs2-casein, β-casein, κ-casein, β-lactoglobulin, α-lactalbumin, immunoglobulin and bovine serum albumin was 3.35%, 10.5%, 7.89%, 18.8%, 53.6%, 20.1%, 26.9% and 37.5%, respectively. After 90 days of storage, the decline in αs1-casein, αs2-casein, β-casein, κ-casein, β-lactoglobulin, α-lactalbumin, immunoglobulin and bovine serum albumin was 11.2%, 34.8%, 14.3%, 33.9%, 56.9%, 24.8%, 36.5% and 43.1%, respectively. The hydroxymethylfurfural content of UHT milk at 0, 45 and 90 days of storage was 1.56, 4.18 and 7.61 µmol/L. The furosine content of flavored UHT milk at 0, 45 and 90 days of storage was 278, 392 and 561 mg/100 g protein. The Nε-carboxymethyl-L-lysine content of UHT flavored milk at 0, 45 and 90 days of storage was 67, 135 and 343 mg/kg protein. After 90 days of storage of flavored UHT milk, the loss of unsaturated fatty acids was 45.7% of the initial values. At 0, 45 and 90 days of storage, the free fatty acids of flavored UHT milk were 0.08%, 0.11% and 0.16% (p<0.05). The peroxide value of flavored UHT milk at 0, 45 and 90 days of storage was 0.22, 0.65 and 2.88 meq O₂/kg. Sensory analysis of flavored UHT milk after 90 days indicated that appearance, flavor and mouthfeel scores decreased significantly from the initial values recorded at day 0. The findings of this investigation show that more pronounced changes take place in the protein profile, Maillard reaction products and lipid oxidation in flavored UHT milk than in normal UHT milk.
Keywords: UHT flavored milk, hydroxymethylfurfural, lipid oxidation, sensory properties
Procedia PDF Downloads 199
651 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of the installation of heat cost allocators on potential energy savings in each unit within multifamily buildings. In recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. In particular, the decision tree method has demonstrated an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method.
Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees, random forest method
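As a rough illustration of the approach, the sketch below trains a decision-tree regressor on synthetic unit-level consumption data and isolates the allocator effect by toggling the installation flag; the feature set, the assumed behavioural saving and the data generator are illustrative assumptions, not the study's billing records.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic units: floor area, outdoor temperature, allocator installed (0/1),
# position in the building (0 = middle, 1 = edge/top) -> heat consumption.
rng = np.random.default_rng(5)
n = 2000
area = rng.uniform(40, 120, n)
t_out = rng.uniform(-15, 15, n)
allocator = rng.integers(0, 2, n)
edge_unit = rng.integers(0, 2, n)
consumption = (0.9 * area * (18 - t_out) / 30
               * (1 - 0.12 * allocator)          # assumed behavioural saving of 12%
               * (1 + 0.15 * edge_unit)
               + rng.normal(0, 5, n))

X = np.column_stack([area, t_out, allocator, edge_unit])
X_tr, X_te, y_tr, y_te = train_test_split(X, consumption, random_state=0)
model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("R2 on held-out units:", round(r2_score(y_te, model.predict(X_te)), 3))

# Isolating the allocator effect: predict the same units with the flag toggled
X_with = X_te.copy();    X_with[:, 2] = 1
X_without = X_te.copy(); X_without[:, 2] = 0
saving = 1 - model.predict(X_with).mean() / model.predict(X_without).mean()
print("Estimated relative saving attributable to allocators:", round(saving, 3))
```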
Procedia PDF Downloads 252
650 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes in expected visual sequences in high- relative to low-AT individuals, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT individuals. Preliminary frequency-analysis data comparing high- and low-AT groups show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT individuals for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
Procedia PDF Downloads 111
649 Incidence of Lymphoma and Gonorrhea Infection: A Retrospective Study
Authors: Diya Kohli, Amalia Ardeljan, Lexi Frankel, Jose Garcia, Lokesh Manjani, Omar Rashid
Abstract:
Gonorrhea is the second most common sexually transmitted disease (STD) in the United States of America. Gonorrhea affects the urethra, rectum or throat, and the cervix in females. Lymphoma is a cancer of the immune network called the lymphatic system, which includes the lymph nodes/glands, spleen, thymus gland, and bone marrow. Lymphoma can affect many organs in the body. When a lymphocyte develops a genetic mutation, it signals other cells into rapid proliferation, producing many mutated lymphocytes. Multiple studies have explored the incidence of cancer in people infected with STDs such as gonorrhea. For instance, the studies conducted by Wang Y-C and co-workers, as well as Caini, S. and co-workers, established a direct relationship between gonorrhea infection and the incidence of prostate cancer. We hypothesized that gonorrhea infection also increases the incidence of lymphoma in patients. This research study aimed to evaluate the correlation between gonorrhea infection and the incidence of lymphoma. The data for the research were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database. This database was used to compare patients infected with gonorrhea with those who were not infected, in order to establish a correlation with the prevalence of lymphoma using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for academic research. Standard statistical methods were applied throughout. Between January 2010 and December 2019, the query was analyzed and resulted in 254 and 808 patients in the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of lymphoma was 0.998% (254 patients out of 25,455) in the gonorrhea group (patients infected with gonorrhea who were lymphoma-positive) compared to 3.174% (808 patients) in the control group (patients negative for gonorrhea but with lymphoma). This was statistically significant, with a p-value < 2.2×10⁻¹⁶ and an OR = 0.431 (95% CI 0.381-0.487). The patients were then matched by antibiotic treatment to avoid treatment bias. The incidence of lymphoma was 1.215% (82 patients out of 6,748) in the gonorrhea group compared to 2.949% (199 patients out of 6,748) in the control group. This was statistically significant, with a p-value < 5.4×10⁻¹⁰ and an OR = 0.468 (95% CI 0.367-0.596). The study shows a statistically significant correlation between gonorrhea and a reduced incidence of lymphoma. Further evaluation is recommended to assess the potential of gonorrhea in reducing lymphoma.
Keywords: gonorrhea, lymphoma, STDs, cancer, ICD
Procedia PDF Downloads 196648 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation
Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um
Abstract:
In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by three-dimensional lattice Boltzmann simulation based on a quasi-random nanostructural model, with the aim of improving fuel cell catalyst performance. A series of catalyst layers are randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of the catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (i.e., three-dimensional, 19 velocities) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the structure-transport influence on catalyst utilization. The tortuosity of the reactant mass transport path of VACNT catalyst layers is calculated directly from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimation results clearly indicate that the morphological structures of VACNT catalyst layers reduce the tortuosity of the reactant mass transport path compared to conventional catalyst layers and significantly improve the resulting effective mass diffusion coefficient of the VACNT catalyst layer. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by enhanced mass diffusion and electric current paths despite the relatively poor interconnections of the ion transport paths.Keywords: Lattice Boltzmann method, nano transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotube
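For the step that applies tortuosity factors to the Knudsen diffusion coefficient, a minimal sketch is given below, assuming the standard straight-pore Knudsen expression and a porosity/tortuosity correction; the pore diameter, porosity, tortuosity, and temperature are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: tortuosity-corrected Knudsen diffusivity, assuming
# D_Kn = (d_p / 3) * sqrt(8RT / (pi * M)) and D_eff = (porosity / tortuosity) * D_Kn.
# All numerical values are hypothetical placeholders.
import math

R = 8.314          # J/(mol K)
T = 353.15         # K, assumed operating temperature
M_O2 = 32.0e-3     # kg/mol, oxygen molar mass
d_pore = 40e-9     # m, assumed mean pore diameter
porosity = 0.6     # assumed
tortuosity = 1.3   # assumed (aligned structures tend toward low tortuosity)

d_kn = (d_pore / 3.0) * math.sqrt(8.0 * R * T / (math.pi * M_O2))
d_eff = (porosity / tortuosity) * d_kn

print(f"Knudsen diffusivity:   {d_kn:.3e} m^2/s")
print(f"Effective diffusivity: {d_eff:.3e} m^2/s")
```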
Procedia PDF Downloads 201647 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms were compared to assess performance. Density was predicted solely from composition, whereas viscosity prediction additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity with allowance for the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily falling within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
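Since the abstract reports that ensemble methods predict viscosity from temperature and composition, a minimal sketch of that kind of model follows; the oxide features, Arrhenius-like toy target, and random forest choice are illustrative assumptions, not the study's datasets or chosen algorithms.

```python
# Minimal sketch: an ensemble regressor predicting (log) viscosity from temperature
# and hypothetical oxide composition fractions; the data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.40, 0.60, n),   # SiO2 fraction (assumed feature)
    rng.uniform(0.10, 0.25, n),   # B2O3 fraction (assumed feature)
    rng.uniform(0.05, 0.20, n),   # Na2O fraction (assumed feature)
    rng.uniform(1100, 1300, n),   # temperature, K
])
# Toy Arrhenius-like target, only to make the example runnable
log_viscosity = 5000.0 / X[:, 3] + 8.0 * X[:, 0] - 6.0 * X[:, 2] + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, log_viscosity, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```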
Procedia PDF Downloads 117646 The Effect of Leadership Styles on Employees’ Organizational Commitment at Ambo Woreda Public Organizations, Oromia Regional State, Ethiopia
Authors: Mengistu Tulu Balcha, Endale Gadisa Motuma
Abstract:
The purpose of this study was to assess the effect of leadership styles on employees’ organizational commitment in Ambo Woreda public organizations. The study employed a descriptive survey and correlational research design within a quantitative approach. Using simple random sampling, 80 employees were selected, and using purposive sampling, 32 leaders were drawn from five purposely selected Woreda public organizations, with no non-response. Two instruments adopted from previous studies, namely the Multifactor Leadership Questionnaire (MLQ), which has 36 items, and the Organizational Commitment Questionnaire (OCQ), which has 12 items, were used as data collection instruments. The items were rated on a five-point Likert scale. The survey data were processed using SPSS (version 27). Descriptive statistics (means and standard deviations) were used to determine which leadership styles leaders and employees perceive as dominantly practiced, independent-samples comparisons contrasted leaders’ and employees’ MLQ responses, and multiple linear regression was used to estimate the effect of leadership styles on organizational commitment. The findings show that the leadership style dominantly practiced in Ambo Woreda public organizations was more transactional than transformational, followed by laissez-faire. Among the dimensions of employees’ organizational commitment (EOC), continuance commitment had the highest mean score, followed by normative commitment and then affective commitment. There is a strong, positive, and significant relationship between the leadership style dimensions and employees’ organizational commitment. Leadership styles were statistically significant predictors of employee commitment, and there was a significant linear relationship between the independent and dependent variables. Of the three leadership variables, the transactional leadership style has the highest contribution, followed by the transformational leadership style, whereas the laissez-faire leadership style contributes least to predicting employees’ organizational commitment. Finally, possible recommendations are forwarded for leaders and employees of Ambo Woreda public organizations to work collaboratively on improving leadership styles and employees’ commitment.Keywords: organizations, employee, relations, commitments, style
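As an illustration of the multiple linear regression step described above, the sketch below regresses an overall commitment score on the three leadership-style scores; the data frame is randomly generated and the coefficients are placeholders, not the survey results.

```python
# Minimal sketch: multiple linear regression of a commitment score on three
# leadership-style scores. The data are randomly generated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "transformational": rng.uniform(1, 5, n),
    "transactional": rng.uniform(1, 5, n),
    "laissez_faire": rng.uniform(1, 5, n),
})
# Toy response, only to make the example runnable
df["commitment"] = (0.3 * df["transformational"] + 0.5 * df["transactional"]
                    - 0.1 * df["laissez_faire"] + rng.normal(0, 0.3, n))

X = sm.add_constant(df[["transformational", "transactional", "laissez_faire"]])
model = sm.OLS(df["commitment"], X).fit()
print(model.summary())
```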
Procedia PDF Downloads 32645 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values
Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou
Abstract:
A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate the damage potential of ground motions to structures. Yet, none of them has been proven able to adequately predict seismic damage. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on the integration of spectral values over a range of periods, in an attempt to account for the information that the shape of the acceleration, velocity, or displacement spectrum provides. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. To this end, three symmetric-in-plan R/C buildings are studied. The buildings are subjected to 59 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimating structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings
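To make the idea of an IM defined via integration of spectral values concrete, here is a minimal sketch computing a Housner-type spectral intensity (the integral of the 5%-damped pseudo-velocity spectrum over 0.1–2.5 s); this IM is offered only as a classic example of the family, and the spectrum itself is a synthetic placeholder, not one of the 59 records used in the study.

```python
# Minimal sketch: a Housner-type spectral intensity, SI = integral of the
# 5%-damped pseudo-velocity spectrum over 0.1-2.5 s. The spectrum is synthetic.
import numpy as np

periods = np.linspace(0.05, 4.0, 200)                      # s
psa = 9.81 * 0.8 * np.exp(-((periods - 0.4) / 0.6) ** 2)   # m/s^2, toy 5%-damped PSA
psv = psa * periods / (2.0 * np.pi)                        # pseudo-velocity spectrum

mask = (periods >= 0.1) & (periods <= 2.5)
t, v = periods[mask], psv[mask]
spectral_intensity = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))  # trapezoidal rule

print(f"Housner-type spectral intensity: {spectral_intensity:.3f} m")
```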
Procedia PDF Downloads 328644 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase
Authors: Samart Homthong, Wutthipong Moungnoi
Abstract:
Critical success factors (CSFs) and the criteria to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although there have been extensive studies on the subject by different researchers, to date, there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and determine their relative importance for different objectives across five stages in the project life cycle. An extensive literature review was conducted, resulting in the identification of 179 individual factors. These factors were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Out of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, relative importance index, and weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on those categorised factors was analysed using Spearman’s rank correlation. A one-way analysis of variance was then performed to determine whether the mean scores among the various groups of respondents differed statistically significantly. The findings indicate that the most critical factors in each category in the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality). These are followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in a contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of bill of quantities (risk management), adequate communication among the project team (human resource), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance. Furthermore, the results of this study would help construction professionals and practitioners take proactive measures for effective project management.Keywords: critical success factors, procurement phase, project life cycle, project performance
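Where the analysis above relies on the relative importance index and Spearman's rank correlation, a minimal sketch follows, assuming the common definition RII = ΣW / (A × N) for five-point ratings; all ratings and rankings are hypothetical placeholders, not the survey data.

```python
# Minimal sketch: relative importance index for one factor and Spearman rank
# agreement between two respondent groups. All inputs are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def rii(ratings, highest_weight=5):
    """Relative importance index from Likert ratings: sum(W) / (A * N)."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (highest_weight * ratings.size)

# Hypothetical 5-point ratings of one factor by ten client representatives
print(f"RII = {rii([5, 4, 4, 5, 3, 4, 5, 4, 3, 5]):.3f}")

# Agreement between two groups' rankings of ten factors (hypothetical)
clients_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
contractors_rank = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]
rho, p = spearmanr(clients_rank, contractors_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```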
Procedia PDF Downloads 184643 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems currently in use. The factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, making possible comprehensive bioinformatics analysis and the identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and highlight major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, and the presence of signal sequences, are mostly related to protein solubility, whereas others, such as protein length, number of disulfide bonds, and content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modifications. The correlations revealed in this study provide important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
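To illustrate the "calculate sequence properties, then correlate with expression scores" step, a minimal sketch follows using two simple features (sequence length and Kyte-Doolittle GRAVY hydropathy) and Spearman correlation; the sequences and scores are short hypothetical placeholders rather than database entries.

```python
# Minimal sketch: compute sequence length and Kyte-Doolittle GRAVY hydropathy,
# then correlate each with categorical expression scores. Inputs are hypothetical.
from scipy.stats import spearmanr

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def gravy(seq):
    """Mean Kyte-Doolittle hydropathy of an amino acid sequence."""
    return sum(KD[aa] for aa in seq) / len(seq)

sequences = ["MKTAYIAKQR", "MLLAVLYCLLTQ", "MDDDIAALV", "MKKLLPTAAAGLL"]  # hypothetical
expression_scores = [3, 1, 4, 2]  # hypothetical categorical success scores

lengths = [len(s) for s in sequences]
hydropathies = [gravy(s) for s in sequences]

rho_g, p_g = spearmanr(hydropathies, expression_scores)
rho_l, p_l = spearmanr(lengths, expression_scores)
print(f"GRAVY vs score:  rho = {rho_g:.2f}, p = {p_g:.3f}")
print(f"Length vs score: rho = {rho_l:.2f}, p = {p_l:.3f}")
```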
Procedia PDF Downloads 419