Search results for: hubble's parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2049

1659 Analyzing Nonsimilar Convective Heat Transfer in Copper/Alumina Nanofluid with Magnetic Field and Thermal Radiation

Authors: Abdulmohsen Alruwaili

Abstract:

A partial differential system featuring momentum and energy balance is often used to describe simulations of flow initiation and thermal shifting in boundary layers. The buoyancy force, expressed in terms of temperature, is factored into the momentum balance equation. Because the buoyancy force causes the flow quantities to vary along the streamwise direction X, the problem must, to the best of our knowledge, be analyzed through nonsimilar modeling. In this analysis, a nonsimilar model is developed for radiative mixed convection of a magnetized power-law nanoliquid flowing over a vertical plate placed in a stationary fluid. The flow is initiated by upward linear stretching of the plate in the vertical direction. The nanofluids are assumed to be composed of copper (Cu) and alumina (Al₂O₃) nanoparticles, and viscous dissipation is taken to be negligible. The nonsimilar system is solved by the local nonsimilarity (LNS) method in conjunction with the numerical algorithm bvp4c. Surface temperature and the flow field are shown visually in relation to factors such as the mixed convection parameter, magnetic field strength, nanoparticle volume fraction, radiation parameter, and Prandtl number. The effects of the magnetic and mixed convection parameters on the rate of energy transfer and the friction coefficient are presented in tabular form. The results obtained are compared with the published literature. It is found that the presence of nanoparticles significantly improves the temperature profile of the considered nanoliquid. It is also observed that the velocity profile decreases as the magnetic parameter increases, while enhancement of the nanoparticle concentration and the mixed convection parameter improves the velocity profile.
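
A minimal sketch of this solution strategy, assuming the Newtonian special case (power-law index n = 1), the first (local-similarity) level of the LNS hierarchy, and invented parameter values, can be written with SciPy's bvp4c-like collocation solver solve_bvp:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Assumed illustrative parameters: magnetic (M), mixed convection (lam), Prandtl (Pr)
M, lam, Pr = 0.5, 1.0, 6.2

def rhs(eta, y):
    # y = [f, f', f'', theta, theta']; similarity form of momentum/energy balance:
    # f''' + f f'' - f'^2 - M f' + lam*theta = 0,  theta'' + Pr f theta' = 0
    f, fp, fpp, th, thp = y
    return np.vstack([fp, fpp,
                      fp**2 - f*fpp + M*fp - lam*th,
                      thp,
                      -Pr*f*thp])

def bc(ya, yb):
    # stretching wall: f(0)=0, f'(0)=1, theta(0)=1; far field: f'(inf)=theta(inf)=0
    return np.array([ya[0], ya[1] - 1.0, ya[3] - 1.0, yb[1], yb[3]])

eta = np.linspace(0.0, 10.0, 200)
y0 = np.zeros((5, eta.size))
y0[1] = np.exp(-eta)          # initial guesses decaying toward the far field
y0[3] = np.exp(-eta)
sol = solve_bvp(rhs, bc, eta, y0)
print("f''(0) =", sol.sol(0.0)[2], "  -theta'(0) =", -sol.sol(0.0)[4])
```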

Keywords: nanofluid, power law model, mixed convection, thermal radiation

Procedia PDF Downloads 32
1658 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is to add the joule heating effects and the entropic changes across the battery. These can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T) and the rate of change of the open-circuit voltage with temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted using several non-linear mathematical function types: polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3~7), exponential (n = 2) and power functions. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results display low deviation between the experimental and CFD temperature plots. As such, the heat generation function formulated here could be applied more readily to larger battery applications than other available methods.
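
As a worked illustration of the method named here, the heat generation rate is the sum of the irreversible (joule) and reversible (entropic) terms, Q = I(OCV − V) + I·T·(dOCV/dT), which can then be fitted with a polynomial for use as a CFD heat source. The arrays below are mock data, not the paper's measurements:

```python
import numpy as np

# Illustrative mock data: a 1C constant-current discharge of a ~2.5 Ah 18650 cell
t   = np.linspace(0, 3600, 361)                    # s
I   = np.full_like(t, 2.5)                         # A
V   = 4.1 - 0.7*(t/3600) - 0.05*np.exp(t/3600)     # V, mock terminal voltage
OCV = 4.15 - 0.6*(t/3600)                          # V, mock open-circuit voltage
T   = 298.15                                       # K, cell temperature
dOCV_dT = -0.3e-3                                  # V/K, entropic coefficient (assumed)

# Q = irreversible (joule) heat + reversible (entropic) heat
Q = I*(OCV - V) + I*T*dOCV_dT                      # W

# Polynomial fit of order n = 5 (the paper sweeps n = 3..7) as a CFD heat source
coeffs = np.polyfit(t, Q, 5)
Q_fit = np.polyval(coeffs, t)
print("max abs fit error [W]:", np.abs(Q - Q_fit).max())
```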

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 95
1657 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair

Authors: Dafna Knani, Sarit S. Sivan

Abstract:

Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs), which are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible way to recover disc function is to inject a synthetic hydrogel into the joint cavity, thereby mimicking the role of PGs. One class of hydrogels proposed is GAG analogues, based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculating the cohesive energy density (CED), solubility parameter, enthalpy of mixing (ΔEmix) and the interactions between the molecules in their pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desired hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can be used as a predictive tool in the future development of biomaterials, such as disc replacements.
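
The quantities named here are linked by simple relations: the solubility parameter is ÎŽ = √CED, and in a regular-solution approximation the mixing energy density scales as φ₁φ₂(Ύ₁ − Ύ₂)ÂČ. A small sketch under that approximation, with invented values standing in for the simulated ones:

```python
import math

def solubility_parameter(cohesive_energy_J, volume_m3):
    """Hildebrand solubility parameter delta = sqrt(CED), in Pa^0.5."""
    return math.sqrt(cohesive_energy_J / volume_m3)

def mixing_energy_density(phi1, delta1, delta2):
    """Regular-solution estimate: dE_mix/V = phi1*phi2*(delta1 - delta2)^2 [J/m^3].
    Closer delta values (better miscibility) give a smaller mixing penalty."""
    phi2 = 1.0 - phi1
    return phi1 * phi2 * (delta1 - delta2) ** 2

# Assumed illustrative values, in Pa^0.5 (water ~47.8 MPa^0.5; the copolymer
# value is hypothetical, standing in for a GAG-analogue hydrogel):
d_water = 47.8e3
d_gel   = 29.0e3
print(mixing_energy_density(0.7, d_water, d_gel) / 1e6, "MJ/m^3")
```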

Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling

Procedia PDF Downloads 75
1656 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering

Authors: Youssef I. Hafez

Abstract:

Since ancient times, rivers have been favored places for people and civilizations to live, settling along their banks. However, due to floods and droughts, made more severe by global warming and climate change, river channels continually evolve and move in the lateral direction, changing their plan form either by straightening curved reaches (meander cut-off) or by increasing meander curvature. The lateral shift or shrinkage of a river channel severely affects the river banks and the flood plain, with tremendous impact on the surrounding environment. Therefore, understanding the formation and the ongoing processes of river channel meandering is of paramount importance. So far, in spite of the huge number of publications on river meandering, no satisfactory theory or approach provides a clear explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are usually needed to describe meander geometry. The first is a scale parameter, such as the meander arc length. The second is a shape parameter, such as the maximum angle the meander path makes with the channel mean down-path direction. These two parameters, if known, determine the meander path and geometry, for example when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to explain the origin and mechanics of formation of river meandering. This theory holds that the longitudinal imbalance between the valley and channel slopes (with the former greater than the latter) leads to the formation of a curved meander channel, which reduces the excess energy by expending it as transverse energy loss. Two relations are developed based on this theory: one for the river channel radius of curvature at the bend apex (shape parameter) and the other for the river channel sinuosity. The sinuosity equation performed very well when applied to available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. The meander wavelength was then determined from the equations for the arc length and the sinuosity, and the resulting equation compared well with available field data. Effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced in order to explain changes of river channel plan form due to changes in flow discharges and sediment loads induced by global warming and climate change.
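
To make the two meander parameters concrete, the sine-generated curve mentioned above can be traced numerically: the local path direction is ξ(s) = ω sin(2πs/L), where ω is the maximum deflection angle (shape parameter) and L the meander arc length (scale parameter). A short sketch with illustrative values:

```python
import numpy as np

def sine_generated_meander(omega_deg, arc_length, n=2000):
    """Meander path from the sine-generated curve:
    theta(s) = omega * sin(2*pi*s/L), with omega the maximum deflection angle
    (shape parameter) and L the meander arc length (scale parameter)."""
    s = np.linspace(0.0, arc_length, n)
    theta = np.radians(omega_deg) * np.sin(2*np.pi*s/arc_length)
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(theta)) * ds      # down-valley coordinate
    y = np.cumsum(np.sin(theta)) * ds      # lateral coordinate
    return x, y

x, y = sine_generated_meander(omega_deg=70.0, arc_length=1000.0)
wavelength = x[-1] - x[0]                  # down-valley span of one full meander
print("sinuosity ~", 1000.0 / wavelength)  # arc length / wavelength
```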

Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming

Procedia PDF Downloads 223
1655 Influence of Preparation, Characterisation and Application of Carbon Nanotubes

Authors: Dhaivat S. Soni, Snehal Thakor, Afroz Bhatti

Abstract:

The aim of this work was to prepare CNTs in bulk quantity by the simplest possible method, with high purity and small diameter. The prepared CNTs were first characterized structurally to confirm the formation of CNTs and to assess their purity; the surface morphology of the CNTs was then studied using various instruments. Finally, the applications of the prepared CNTs in various fields were examined. Carbon nanotubes (CNTs) were synthesized on a large scale by pyrolyzing activated carbon in sealed autoclaves.

Keywords: nanostructures, nanotubes, carbon, pyrolysis

Procedia PDF Downloads 398
1654 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification

Authors: Meimei Shi

Abstract:

Objectives: The identification of endangered species, especially the simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and prevents illegal trade. The aim of this study was to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed according to conserved mitochondrial gene regions. Experimental mixtures were artificially prepared by mixing well-defined species, including endangered species, e.g., forest musk deer, bear, tiger, pangolin, and sika deer. The artificial samples were prepared with 1-16 well-characterized species at DNA concentrations from 1% to 100%. After multiplex-PCR amplification and parameter optimization, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. The DNA metabarcoding was carried out by Illumina MiSeq amplicon sequencing. The data were processed with quality trimming, read filtering, and OTU clustering; representative sequences were searched using BLASTn. Results: Based on the parameter optimization and multiplex-PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in the artificial mixtures, including the endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, and Cervus nippon at 1% (DNA concentration). In conclusion, the established species identification method provides technical support for customs and forensic scientists to prevent the illegal trade of endangered animals and their products.

Keywords: DNA metabarcoding, endangered animal species, mitochondria nucleic acid, multi-locus

Procedia PDF Downloads 140
1653 Evaluation of Surface Roughness Condition Using App Roadroid

Authors: Diego de Almeida Pereira

Abstract:

The roughness index of a road is considered the most important parameter for pavement quality, as it is closely related to the comfort and safety of road users. This condition can be established by means of functional evaluation of pavement surface deviations, measured by the International Roughness Index (IRI), an index that came out of the international evaluation of pavements coordinated by the World Bank and whose current limit value, for purposes of accepting roads in Brazil, is 2.7 m/km. This work makes use of the e.IRI parameter, obtained with the Roadroid app for smartphones running the Android operating system. This application was chosen for its practical user interaction, its own cloud data storage, and the support given to universities around the world. Data have been collected monthly for six months, beginning in March 2018, the rainy season that worsens road conditions, which also provided the opportunity to follow the damage and the quality of the interventions performed. About 350 kilometers of sections of four federal highways were analyzed (BR-020, BR-040, BR-060 and BR-070), which connect the Federal District (where Brasilia is located) and its surroundings and were chosen for their economic and tourist importance; two of them are under federal administration and two under private concession. Like much of the road network, the analyzed stretches are surfaced with Hot Mix Asphalt (HMA). This research thus contrasts the comfort and safety conditions of the roads under private concession, on which users pay a fee to the concessionaires to travel on a road that meets minimum usage requirements, with the quality of service offered on the roads under Federal Government jurisdiction. Finally, data collected by the National Department of Transport Infrastructure (DNIT) by means of a laser profilometer are contrasted with data obtained by Roadroid, checking the app's applicability, practicality and cost-effectiveness, considering its limitations.

Keywords: roadroid, international roughness index, Brazilian roads, pavement

Procedia PDF Downloads 85
1652 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models for test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study covers models from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. The binary IRT models are defined mathematically and the interpretation of each parameter is provided. Next, all four binary IRT models are employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses are performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses is available upon request to allow for replication of results.
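
For reference, the binary models covered form a nested family; a compact sketch of the 4-PL item response function (the SAS PROC IRT syntax itself is not reproduced here) shows how the simpler models fall out as special cases:

```python
import numpy as np

def irt_prob(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-PL item response function: P(endorse | theta).
    a: discrimination, b: difficulty, c: lower (guessing) asymptote,
    d: upper (slipping) asymptote. Setting d=1 gives the 3-PL model,
    additionally c=0 the 2-PL, and additionally a=1 the 1-PL (Rasch) model."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 9)          # latent trait grid
print(irt_prob(theta, a=1.5, b=0.5, c=0.2, d=0.95))
```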

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 356
1651 On a Negative Relation between Bacterial Taxis and Turing Pattern Formation

Authors: A. Elragig, S. Townley, H. Dreiwi

Abstract:

In this paper, we introduce a bacteria-leukocyte model with bacterial chemotaxis. We assume that bacteria develop a tactic defense mechanism as a response to leukocyte phagocytosis. We explore the effect of this tactic motion on the Turing space in two parameter spaces. A fine tuning of bacterial chemotaxis shows a significant effect on the development of a non-uniform steady state.
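
The Turing-space computation behind such an analysis reduces to a linear stability check: the uniform steady state must be stable to spatially homogeneous perturbations yet destabilized at some wavenumber once diffusion (and taxis-like) terms enter. A generic two-species sketch with an invented Jacobian, not the paper's model:

```python
import numpy as np

# Generic two-species Turing check (illustrative Jacobian, not the paper's model)
J = np.array([[ 0.5, -1.0],      # linearized kinetics at the uniform steady state
              [ 1.0, -1.5]])
D = np.diag([1.0, 20.0])         # diffusion (and taxis-like) transport coefficients

stable_no_diffusion = np.real(np.linalg.eigvals(J)).max() < 0
growth = [np.real(np.linalg.eigvals(J - k2 * D)).max()
          for k2 in np.linspace(0.0, 5.0, 500)]     # scan squared wavenumbers
print(stable_no_diffusion, max(growth) > 0)         # True, True => Turing instability
```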

Keywords: chemotaxis-diffusion driven instability, bacterial chemotaxis, mathematical biology, ecology

Procedia PDF Downloads 368
1650 Cross-Cultural Analysis of the Impact of Project Atmosphere on Project Success and Failure

Authors: Omer Livvarcin, Mary Kay Park, Michael Miles

Abstract:

The current literature includes a few studies that mention the impact of relations between teams, the business environment, and experiences from previous projects. There is, however, limited research that treats the phenomenon of project atmosphere (PA) as a whole. This is especially true of research identifying parameters and sub-parameters which allow project management (PM) teams to build a project culture that ultimately leads to project success. This study's findings identify a number of key project atmosphere parameters and sub-parameters that affect project management success. One key parameter identified in the study is a cluster related to cultural concurrence, including artifacts such as policies and mores, values, perceptions, and assumptions. A second cluster centers on motivational concurrence, including such elements as project goals and team-member expectations, moods, morale, motivation, and organizational support. A third cluster relates to experiential concurrence, with a focus on project and organizational memory, previous internal PM experience, and external environmental PM history and experience. A final cluster comprises parameters in the area of relational concurrence, including inter- and intragroup relationships, role conflicts, and trust. International and intercultural project management data were collected and analyzed from the following countries: Canada, China, Nigeria, South Korea and Turkey. The cross-cultural nature of the data set increases confidence that the findings will be generalizable across cultures and thus applicable to future international project management. The intent of identifying project atmosphere as a critical project management element is that a clear understanding of the dynamics of its sub-parameters may significantly improve the odds of success of future international and intercultural projects.

Keywords: project management, project atmosphere, cultural concurrence, motivational concurrence, relational concurrence

Procedia PDF Downloads 318
1649 Design and Optimization of Spoke Rotor Type Brushless Direct Current Motor for Electric Vehicles Using Different Flux Barriers

Authors: Ismail Kurt, Necibe Fusun Oyman Serteller

Abstract:

Today, with the reduction in semiconductor system costs, Brushless Direct Current (BLDC) motors have become widely preferred. Based on rotor architecture, BLDC structures are divided into interior permanent magnet (IPM) and surface permanent magnet (SPM) types. Permanent magnet (PM) motors in electric vehicles (EVs) are still predominantly based on IPM rotors, as they do not require sleeves, the PMs are better protected by the rotor cores, and the air-gap lengths can be much smaller. This study discusses the IPM rotor structure in detail, highlighting its higher torque levels, reluctance torque, wide speed range operation, and production advantages. IPM rotor structures are particularly preferred in EVs due to their high-speed capabilities, torque density and field weakening (FW) features. In FW operation, the motor becomes suitable for running at torques lower than the rated torque but at speeds above the rated speed. Although V-type and triangular IPM rotor structures are generally preferred in EV applications, the spoke-type rotor structure offers distinct advantages, making it a competitive option for these systems. The flux barriers in the rotor significantly affect motor performance, providing notable benefits in both motor efficiency and cost. This study utilizes ANSYS/Maxwell simulation software to analyze the spoke-type IPM motor and examine its key design parameters. Through analytical and 2D analysis, a preliminary motor design and parameter optimization have been carried out. During the parameter optimization phase, torque ripple, a common issue especially for IPM motors, has been investigated, along with the associated changes in motor parameters.

Keywords: electric vehicle, field weakening, flux barrier, spoke rotor

Procedia PDF Downloads 8
1648 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of reinforced concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied mostly on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the structural capacity (i.e. Ultimate Limit State) or from the serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the structural capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the correlation length (CL) of the most important deterioration parameters. The CL is a parameter of the correlation function (CF), which describes the spatial fluctuation of a given deterioration parameter. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
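
A minimal sketch of the probabilistic machinery described here, assuming an exponential correlation function and invented capacity/demand numbers: a spatially correlated deterioration field is sampled through a Cholesky factor of the correlation matrix, and the probability of failure follows by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretize a girder into segments and build an exponential correlation matrix,
# rho(dx) = exp(-2*|dx|/Lc), with Lc the correlation length of the deterioration
# parameter (e.g., surface chloride content). All values are illustrative.
x  = np.linspace(0.0, 10.0, 21)          # m, segment midpoints
Lc = 2.0                                  # m, correlation length (assumed)
R  = np.exp(-2.0*np.abs(x[:, None] - x[None, :])/Lc)
L  = np.linalg.cholesky(R + 1e-10*np.eye(x.size))

n = 100_000
# Section capacity reduced by a correlated lognormal corrosion-loss field
loss = np.exp(0.1*(L @ rng.standard_normal((x.size, n))))
capacity = 500.0 / loss                  # kNm per segment (mean capacity assumed)
demand   = rng.normal(300.0, 45.0, n)    # kNm, load effect (assumed)
pf = np.mean(capacity.min(axis=0) < demand)   # weakest segment governs
print("estimated probability of flexural failure ~", pf)
```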

Keywords: Chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 473
1647 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method

Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

Diabetes mellitus (DM) is a chronic disease in humans that occurs if the pancreas cannot produce enough insulin or the body uses insulin ineffectively, causing an increased level of glucose in the blood, called hyperglycemia. In Indonesia, DM is a serious health concern because it can cause blindness, kidney disease, diabetic feet (gangrene), and stroke. DM can be classified by its main causes into type 1, type 2, and gestational diabetes. Diabetes type 1, previously known as insulin-dependent diabetes, is due to a lack of production of the insulin hormone. Diabetes type 2, previously known as non-insulin-dependent diabetes, is due to ineffective use of insulin, while gestational diabetes is hyperglycemia first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors of this disease are genetic (A) and lifestyle (E). A disease with these two factors can be modeled with the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using the Bayesian method, together with a simulation of inheritance from parent to offspring. In the AE model, the response variable, predictor variables, and parameters represent the population under study. The population can be measured through a random sample; the response and predictor variables can be determined from the sample, while the parameters are unknown and must be estimated from it. The AE model parameters were estimated from a joint posterior distribution. The simulation was conducted to obtain the genetic variance and lifestyle variance. The simulation results are 0.3600 for the genetic variance and 0.0899 for the lifestyle variance. Therefore, the variance of the genetic factor in DM type 2 is greater than that of lifestyle.

Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style

Procedia PDF Downloads 284
1646 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach

Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere

Abstract:

Considering environmental issues, industrial interest in applying sustainable materials is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced polylactic acid (PLA) is gaining popularity. However, these thermoplastic-based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase. The research uses injection-molded plate structures. A first important parameter is the fiber volume fraction, which directly affects the adhesion characteristics of the surface; this parameter is varied between 0 (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection-molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, and thus adhesion; therefore, this work considers three different roughness conditions, with different mechanical treatments yielding values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy cured at 23 °C for 48 hours. In order to assure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single-lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double-lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog-bone tensile testing is used to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions). Numerical modelling (non-linear FEA) relates the effects of the considered parameters to the stiffness and strength of the different joints, obtained through the abovementioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different adhesion parameters considered. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already prominent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), it is noticed that the increase in surface energy is proportional to the surface roughness, to some extent; this becomes more pronounced as the fiber volume fraction increases. Secondly, the ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.

Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint

Procedia PDF Downloads 127
1645 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps us understand the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to understanding surface processes and the hydrological cycle. On the other hand, the aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this regard, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on the process of soil evolution, its characterization, and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or plant cover, is motivated by significant research efforts in agronomy and biology; the major known problem here concerns crop damage by wind, a booming field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description, where the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each having its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the humidity of the soil surface, to take into account a volume component in the backscattered radar signal. As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation, due to the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface. The key issue in the numerical estimation of soil moisture is therefore to separate the two contributions and calculate the scattering behaviors of both layers, by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model which has been used to calculate the scattering behavior of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and of the soil below.
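
A toy 1D version of the multi-scale roughness description used here, assuming Gaussian correlation at each scale and invented scale/amplitude values (the paper's description is 2D and wavelet-based; this sketch only illustrates the superposition of per-scale Gaussian processes):

```python
import numpy as np

rng = np.random.default_rng(0)

def multiscale_surface(n=1024, dx=0.01, scales=(0.05, 0.2, 0.8), amps=(1.0, 2.0, 4.0)):
    """Rough profile built as a superposition of Gaussian-correlated processes,
    one per spatial scale. Each process is white noise convolved with a
    unit-energy Gaussian kernel of correlation length lc."""
    x = np.arange(n) * dx
    z = np.zeros(n)
    for lc, amp in zip(scales, amps):
        white = rng.standard_normal(n)
        kx = np.arange(-4*lc, 4*lc + dx, dx)
        kern = np.exp(-(kx/lc)**2)
        kern /= np.sqrt((kern**2).sum())          # keep unit variance per scale
        z += amp * np.convolve(white, kern, mode="same")
    return x, z

x, z = multiscale_surface()
print("rms height:", z.std())
```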

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 269
1644 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila

Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores

Abstract:

This study focuses on applying an enhanced density-based spatial clustering algorithm with noise (DBSCAN) for fire risk assessment and warning in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrarily shaped clusters and its resistance to noise. However, its performance diminishes when handling high-dimensional data, where it can treat noise points as relevant data points. The algorithm is also dependent on the parameters (eps and minPts) set by the user; choosing the wrong parameters can greatly affect the clustering result. To overcome these challenges, the study proposes three key enhancements: first, utilizing multiple MinHash and locality-sensitive hashing to decrease the dimensionality of the data set; second, applying Jaccard similarity before the parameter epsilon, to ensure that only similar data points are considered neighbors; and third, using the concept of a Jaccard neighborhood along with the parameter minPts to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it achieved fewer outliers, which facilitated a clearer identification of fire-prone areas; a high silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and a notably low Davies-Bouldin index together with a high Calinski-Harabasz score, highlighting its ability to form compact and well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended for assessing the areas in Metro Manila that are most prone to fire risk.
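
A bare-bones sketch of the core idea: a DBSCAN-style expansion whose neighborhoods are defined by Jaccard similarity (the full method additionally prunes candidate pairs with MinHash/LSH, which is omitted here):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def dbscan_jaccard(items, eps_sim=0.6, min_pts=3):
    """DBSCAN-style clustering in which the neighborhood of point i is the set
    of points whose Jaccard similarity with i is >= eps_sim."""
    n = len(items)
    neighbors = [[j for j in range(n) if jaccard(items[i], items[j]) >= eps_sim]
                 for i in range(n)]
    labels = [-1] * n                      # -1 = noise / unassigned
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already clustered, or not a core point
        labels[i] = cid
        stack = list(neighbors[i])
        while stack:                       # expand the cluster from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cid += 1
    return labels

# Mock incident records described by categorical attribute sets
records = [{"fire", "slum", "dense"}, {"fire", "dense", "old"},
           {"fire", "slum", "old"}, {"park", "open"}, {"park", "water"}]
print(dbscan_jaccard(records, eps_sim=0.3, min_pts=2))   # -> [0, 0, 0, 1, 1]
```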

Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires

Procedia PDF Downloads 2
1643 Use of a Business Intelligence Software for Interactive Visualization of Data on the Swiss Elite Sports System

Authors: Corinne Zurmuehle, Andreas Christoph Weber

Abstract:

In 2019, the Swiss Federal Institute of Sport Magglingen (SFISM) conducted a mixed-methods study on the Swiss elite sports system, which yielded a large quantity of research data. In a quantitative online survey, 1151 elite athletes, 542 coaches, and 102 performance directors of national sports federations (NF) submitted their perceptions of the national support measures of the Swiss elite sports system. These data provide an essential basis for the further development of the Swiss elite sports system. The results were published in a report presenting the results divided into 40 Olympic summer and 14 winter sports (Olympic classification). The authors of this paper assume that, in practice, this division is too unspecific to assess where further measures would be needed. The aim of this paper is to find appropriate parameters for data visualization in order to identify disparities in sports promotion, allowing an assessment of where further interventions by Swiss Olympic (the NF umbrella organization) are required. Method: First, the variable 'salary earned from sport' was defined as the variable for measuring the impact of elite sports promotion. It was chosen because it is an important indicator of the professionalization of elite athletes and therefore reflects the national-level sports promotion measures applied by Swiss Olympic. Afterwards, the variable salary was tested for correlation with the Olympic classification, calculating the eta coefficient. To estimate appropriate parameters for data visualization, the correlation between salary and four further parameters was then analyzed, again via the eta coefficient: [a] sport; [b] prioritization (from 1 to 5) of the sports by Swiss Olympic; [c] gender; [d] employment level in sports. Results & Discussion: The analyses reveal a very small correlation between salary and Olympic classification (ηÂČ = .011, p = .005). Gender demonstrates an even smaller correlation (ηÂČ = .006, p = .014). The parameter prioritization correlates with a small effect (ηÂČ = .017, p = .001), as does employment level (ηÂČ = .028, p < .001). The highest correlation is shown by the parameter sport, with a moderate effect (ηÂČ = .075, p = .047). The analyses show that the disparities in sports promotion cannot be determined by a single parameter but are presumably explained by a combination of several parameters. We argue that the possibility of combining parameters for data visualization should be enabled when the analysis is provided to Swiss Olympic for further strategic decision-making. However, the inclusion of multiple parameters massively multiplies the number of graphs and is therefore not suitable for practical use. We therefore suggest applying interactive dashboards for data visualization using business intelligence software. Practical & Theoretical Contribution: This contribution provides a first attempt to use business intelligence software for strategic decision-making in national-level sport regarding the prioritization of national resources for sports and athletes. It allows parameters with a significant effect to be set as filters; by using filters, parameters can be combined, compared against each other, and set individually for each strategic decision.
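
Since the eta coefficient (correlation ratio) drives the parameter selection here, a compact sketch of its computation, on mock numbers rather than the SFISM survey data:

```python
import numpy as np

def eta_squared(values, groups):
    """Correlation ratio eta^2: between-group sum of squares over total
    sum of squares, the effect-size measure used in the study."""
    values, groups = np.asarray(values, float), np.asarray(groups)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = sum(
        (groups == g).sum() * (values[groups == g].mean() - grand) ** 2
        for g in np.unique(groups)
    )
    return ss_between / ss_total

# Illustrative only: salary earned from sport vs. prioritization tier (mock data)
salary = [20, 25, 22, 40, 38, 45, 60, 62, 58]
tier   = [3, 3, 3, 2, 2, 2, 1, 1, 1]
print(round(eta_squared(salary, tier), 3))
```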

Keywords: data visualization, business intelligence, Swiss elite sports system, strategic decision-making

Procedia PDF Downloads 90
1642 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array

Authors: S. S. Sree Sanker, K. N. Madhusoodanan

Abstract:

Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design and high sensitivity. Photonic sensing is one of the recent advancing technologies for biosensors; it measures the change in refractive index induced by differences in molecular interactions due to changes in analyte concentration. Glucose is an aldose monosaccharide and a metabolic energy source in many organisms. Terahertz waves occupy the space between infrared and microwaves in the electromagnetic spectrum and are expected to be applied in various types of sensors for detecting harmful substances in blood, cancer cells in skin and microbacteria in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with height 100 ”m, length 1600 ”m and width 50 ”m; the array period of the crystal is 500 ”m. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 ”m; the height and diameter of the pillars are 160 ”m and 100 ”m, respectively. The two samples considered in this work are blood and glucose solution, labelled sample 1 and sample 2, respectively. The proposed sensor detects glucose concentrations in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to a particular glucose concentration was extracted using the parameter retrieval method. The refractive indices of the two crystals decreased gradually with increasing glucose concentration in the sample. The 1D photonic crystal showed a gradual decrease in refractive index at 1 THz; the 2D photonic crystal showed this behavior at 2 THz. The proposed sensor was simulated using CST Microwave Studio, which will enable us to develop a model that can be used to characterize a glucose sensor. The present study is expected to contribute to blood glucose monitoring.

Keywords: CST microwave studio, glucose sensor, photonic crystal, terahertz waves

Procedia PDF Downloads 281
1641 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis

Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers

Abstract:

SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round-bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals to monitor changes in specimen geometry and construct true stress-strain curves. 3D reconstructions of high-resolution X-ray computed tomography (CT) images (spatial resolution 0.82 ÎŒm) allowed a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, based on the 2D and 3D images, plasticity modeling was performed and the results were compared to the experimental data. A specific 'two-parameter criterion' was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion showed substantial agreement with the experimental results, enhancing our knowledge of ductile fracture behavior in this steel grade. The X-ray CT and fractography analyses provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
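
A hedged sketch of calibrating a two-parameter failure locus of the kind described, here an exponential relation Δf = D1·exp(−D2·η) between failure-initiation strain and stress triaxiality η; both the functional form and the data points are illustrative, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def locus(eta, D1, D2):
    """Two-parameter failure locus: equivalent plastic strain at failure
    initiation as a function of stress triaxiality eta (illustrative form)."""
    return D1 * np.exp(-D2 * eta)

# Mock calibration points, one per notch geometry (triaxiality, failure strain)
eta_exp  = np.array([0.40, 0.60, 0.85, 1.10])
epsf_exp = np.array([1.10, 0.80, 0.55, 0.40])

(D1, D2), _ = curve_fit(locus, eta_exp, epsf_exp, p0=(2.0, 1.0))
print(f"D1={D1:.3f}, D2={D2:.3f}, eps_f(eta=0.7)={locus(0.7, D1, D2):.3f}")
```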

Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality

Procedia PDF Downloads 92
1640 Choosing an Optimal Epsilon for Differentially Private Arrhythmia Analysis

Authors: Arin Ghazarian, Cyril Rakovski

Abstract:

Differential privacy has become the leading technique to protect the privacy of individuals in a database while allowing useful analysis to be done and the results to be shared. It puts a guarantee on the amount of privacy loss in the worst-case scenario. Differential privacy is not a toggle between full privacy and zero privacy; it controls the tradeoff between the accuracy of the results and the privacy loss using a single key parameter called epsilon (Δ).
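
For concreteness, the classic Δ-differentially-private primitive is the Laplace mechanism, where the noise scale is the query sensitivity divided by Δ; the sketch below uses a mock count over an ECG database:

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a query answer with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity/epsilon: a smaller epsilon means more
    noise, i.e. stronger privacy but less accurate analytics."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query over an arrhythmia/ECG database: a count has sensitivity 1.
true_count = 412          # mock number of patients with a given arrhythmia
for eps in (0.1, 1.0, 10.0):
    print(eps, round(laplace_mechanism(true_count, 1.0, eps), 1))
```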

Keywords: arrhythmia, cardiology, differential privacy, ECG, epsilon, medical data, privacy preserving analytics, statistical databases

Procedia PDF Downloads 152
1639 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
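
A generic sketch of the sliding-window retraining and outlier-exclusion loop described above, written with plain scikit-learn on mock data; this illustrates the concept only and is not FreqAI's actual API:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Mock feature/target stream (in practice: engineered indicators and a
# short-horizon return target)
X = rng.standard_normal((5000, 20))
y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(5000)

window, step = 1000, 250          # sliding training window, retrain cadence
predicted = refused = 0
for start in range(0, len(X) - window - step + 1, step):
    X_train, y_train = X[start:start + window], y[start:start + window]
    scaler = StandardScaler().fit(X_train)
    Z_train = scaler.transform(X_train)
    keep = (np.abs(Z_train) < 4).all(axis=1)        # drop training outliers
    model = GradientBoostingRegressor().fit(Z_train[keep], y_train[keep])
    Z_new = scaler.transform(X[start + window:start + window + step])
    inlier = (np.abs(Z_new) < 4).all(axis=1)        # refuse out-of-distribution points
    preds = model.predict(Z_new[inlier])
    predicted += inlier.sum()
    refused += (~inlier).sum()
print(f"predicted on {predicted} points, refused {refused} outliers")
```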

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 89
1638 Implementing of Indoor Air Quality Index in Hong Kong

Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang

Abstract:

Many Hong Kong people nowadays spend most of their time working indoors. Since poor indoor air quality (IAQ) potentially leads to discomfort, ill health, low productivity and even absenteeism in workplaces, statutory IAQ control to safeguard the well-being of residents is urgently needed. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed by remedial works, some of those workplaces or buildings were at a relatively late stage of their IAQ problems by the time the investigation or remedial work started. Screening for IAQ problems should be initiated first, as it provides the minimum provision of an IAQ baseline needed to resolve the problems. It is not practical to sample all air pollutants that exist. Nevertheless, for statutory control, reliable and rapid screening is essential, in accordance with a compromise strategy that balances costs against detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter to identify workplaces with unsatisfactory IAQ, highlighting where a fully effective IAQ monitoring and assessment is needed for intensive diagnosis. A number of representative common indoor pollutants already exist, based on extensive IAQ assessments; the selected pollutants serve as surrogates for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the IAQ index developed by the research team as the pre-assessment probability, and the unsatisfactory IAQ prevalence from this study as the post-assessment probability, thresholds for maintaining the current measures or performing a further IAQ test or remedial measures will be proposed. With justified resources, the proposed IAQ index and assessment protocol could be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.

Keywords: assessment, index, indoor air quality, surveillance programme

Procedia PDF Downloads 267
1637 An Extension of the Generalized Extreme Value Distribution

Authors: Serge Provost, Abdous Saboor

Abstract:

A q-analogue of the generalized extreme value distribution, which includes the Gumbel distribution, is introduced. The additional parameter q allows for increased modeling flexibility. The resulting distribution can have a finite, semi-infinite or infinite support, and it can produce several types of hazard rate functions. The model parameters are determined by the method of maximum likelihood. It is shown that the distribution compares favourably to three related distributions in the modeling of a certain hydrological data set.
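
As a baseline for such comparisons, the standard three-parameter GEV is fitted by maximum likelihood in a few lines; the q-analogue itself is not available in SciPy, but its extra parameter q would be estimated the same way, by maximizing the extended log-likelihood (mock annual-maxima data below):

```python
from scipy.stats import genextreme, gumbel_r

# Mock annual-maxima sample drawn from a Gumbel parent, for illustration only
annual_maxima = gumbel_r.rvs(loc=30.0, scale=8.0, size=80, random_state=5)

# Maximum-likelihood fit of the standard GEV (shape, location, scale)
c, loc, scale = genextreme.fit(annual_maxima)
loglik = genextreme.logpdf(annual_maxima, c, loc, scale).sum()
aic = 2 * 3 - 2 * loglik          # k = 3 estimated parameters
print(f"shape={c:.3f}, loc={loc:.2f}, scale={scale:.2f}, AIC={aic:.1f}")
```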

Keywords: extreme value theory, generalized extreme value distribution, goodness-of-fit statistics, Gumbel distribution

Procedia PDF Downloads 349
1636 Normal Weight Obesity among Female Students: BMI as a Non-Sufficient Tool for Obesity Assessment

Authors: Krzysztof Plesiewicz, Izabela Plesiewicz, Krzysztof ChiĆŒyƄski, Marzenna ZieliƄska

Abstract:

Background: Obesity is an independent risk factor for cardiovascular diseases. Several anthropometric parameters have been proposed to estimate the level of obesity, but until now there is no agreement on which one best predicts cardiometabolic risk. Scientists have described metabolically obese normal-weight individuals, who suffer from the same metabolic abnormalities as obese individuals, and defined this syndrome as normal weight obesity (NWO). Aim of the study: The aim of our study was to determine the occurrence of overweight and obesity in a cohort of young adult women, using standard and complementary methods of obesity assessment, and to identify those who are at risk of obesity. The second aim was to test additional methods of obesity assessment and to show that body mass index (BMI) used alone is not a sufficient parameter for obesity assessment. Materials and methods: 384 young women, aged 18-32, were enrolled in the study. Standard anthropometric parameters (waist-to-hips ratio (WTH), waist-to-height ratio (WTHR)) and two other methods of body fat percentage measurement (BFPM) were used in the study: electrical bioimpedance analysis (BIA) and skinfold measurement by a digital body fat caliper (SFM). Results: In the study group, 5% and 7% of participants had waist-to-hips and, correspondingly, waist-to-height ratio values associated with visceral obesity. According to BMI, 14% of participants were overweight or obese. Using the additional methods of body fat assessment, 54% (BIA) and 43% (SFM) were obese. Among the participants with normal BMI or underweight (not overweight, n = 340), there were individuals with BFPM above the upper limit: 49% (n = 164) for BIA and 36% (n = 125) for SFM. Statistical analysis revealed a strong correlation between the BIA and SFM methods. Conclusion: BMI used alone is not a sufficient parameter for obesity assessment. A high percentage of young women with normal BMI values appear to be normal weight obese.

Keywords: electrical bioimpedance, normal weight obesity, skin-fold measurement test, women

Procedia PDF Downloads 274
1635 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth in wind turbine size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Among them, trailing-edge flaps have proven to be effective devices for load alleviation. The present study investigates the potential benefits of flaps in enhancing energy capture capabilities rather than blade load alleviation. A software tool was developed especially for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated; this is done by solving an optimisation problem which gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic-algorithm optimisation tool designed especially for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps was constructed and employed to carry out design case studies. The results of the design case studies on the AWT 27 wind turbine reveal that, as expected, the location of the flap is a key parameter influencing the improvement in power extraction. The best location for a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the enhancement in average power; this effect, however, reduces dramatically as the size increases. For constant-speed rotors, adding flaps without re-designing the blade topology can improve the power extraction capability by as much as about 5%; with re-design of the blade pretwist, the overall improvement can reach as high as 12%.
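
A toy version of the optimisation loop described here: a genetic-algorithm-style search over flap location and size against a mock objective that merely imitates the reported trends; a real run would call the aerodynamic performance evaluator instead:

```python
import numpy as np

rng = np.random.default_rng(11)

def avg_power(span_frac, size_frac):
    """Stand-in for the aerodynamic evaluator: a smooth mock objective that
    peaks near 70% span and saturates with flap size, mimicking the paper's
    qualitative finding. A real run would launch a blade simulation here."""
    return (1.0 - (span_frac - 0.7)**2) * (1.0 - np.exp(-8.0*size_frac))

# Population of candidate designs: (flap location as span fraction, flap size)
low, high = [0.2, 0.0], [0.95, 0.3]
pop = rng.uniform(low, high, size=(30, 2))
for gen in range(40):
    fitness = np.array([avg_power(*ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-10:]]             # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.02, (30, 2))
    pop = np.clip(children, low, high)                   # mutation + bounds
best = pop[np.argmax([avg_power(*ind) for ind in pop])]
print("best flap (span fraction, size fraction):", best.round(3))
```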

Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 337
1634 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proven in the field. Numerical flow modeling in a vertical, variably saturated CW is carried out here by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters govern the water-content dependence of the model, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during both saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions, and particular attention must also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations; note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the edge of the urban water stream Ostwaldergraben. Identification experiments were conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
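
For reference, the van Genuchten-Mualem closure targeted by the calibration can be written compactly; the sketch below uses textbook sand parameters rather than values identified for the Ostwaldergraben filter:

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve: water content as a function of
    pressure head h (h < 0 unsaturated), with m = 1 - 1/n."""
    m = 1.0 - 1.0/n
    Se = np.where(h < 0, (1.0 + np.abs(alpha*h)**n)**(-m), 1.0)
    return theta_r + (theta_s - theta_r)*Se

def vg_K(h, Ks, alpha, n, l=0.5):
    """Mualem-van Genuchten unsaturated hydraulic conductivity."""
    m = 1.0 - 1.0/n
    Se = np.where(h < 0, (1.0 + np.abs(alpha*h)**n)**(-m), 1.0)
    return Ks * Se**l * (1.0 - (1.0 - Se**(1.0/m))**m)**2

# Textbook sand parameters (alpha in 1/m, Ks in m/s), illustrative only
h = np.linspace(-3.0, 0.0, 7)                      # pressure heads in m
print(vg_theta(h, 0.045, 0.43, 14.5, 2.68))        # water content profile
print(vg_K(h, 8.25e-5, 14.5, 2.68))                # conductivity profile
```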

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 163
1633 On Bianchi Type Cosmological Models in Lyra’s Geometry

Authors: R. K. Dubey

Abstract:

Bianchi type cosmological models have been studied on the basis of Lyra's geometry. An exact solution has been obtained by considering a time-dependent displacement field, a constant deceleration parameter, and a varying cosmological term of the universe. The physical behavior of the different models has been examined in various cases.
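The abstract does not state the authors' field equations, but the standard construction behind a constant deceleration parameter is worth making explicit. Holding q constant forces a power-law expansion of the scale factor a(t), from which the Hubble parameter follows directly; the LaTeX sketch below assumes the usual Berman ansatz, not necessarily the authors' exact one.

    % Constant deceleration parameter (Berman's ansatz); c1, c2 are integration constants.
    q = -\frac{a\,\ddot{a}}{\dot{a}^{2}} = \mathrm{const.}
    \quad\Longrightarrow\quad
    a(t) = \left(c_{1}t + c_{2}\right)^{\tfrac{1}{1+q}}, \quad q > -1,
    \qquad
    H(t) = \frac{\dot{a}}{a} = \frac{c_{1}}{(1+q)\left(c_{1}t + c_{2}\right)}.

For c2 = 0 this reduces to H = 1/((1+q)t), so once q is fixed, the displacement field and the varying cosmological term can in principle be expressed as explicit functions of cosmic time.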

Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model

Procedia PDF Downloads 354
1632 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes

Authors: Nahashon Mwirigi

Abstract:

The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial indicator of immune function, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), to identify the models that best represent the variability of disease progression across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age- and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through CD4-defined stages, with a delayed onset of advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing the distinct risk patterns in each group. The model's ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource-limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model's value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care. The 3-Parameter Weibull model's ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
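As an illustration of the model-selection step, the Python sketch below fits the three candidate distributions to hypothetical stage-duration data and scores them with AIC and BIC via scipy. The data are invented (the hospital dataset is not public), and identifying scipy's weibull_min location parameter with the delayed-onset effect is our reading of the abstract, not the authors' stated implementation.

    import numpy as np
    from scipy import stats

    # Hypothetical stage durations (months in a CD4-defined state); the
    # Kenyatta National Hospital dataset itself is not publicly available.
    durations = np.array([4.2, 7.9, 12.5, 3.1, 9.8, 15.2, 6.4, 11.0, 8.3, 5.7])

    def fit_and_score(dist, data, **fixed):
        params = dist.fit(data, **fixed)
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params) - len(fixed)          # count free parameters only
        aic = 2 * k - 2 * loglik
        bic = k * np.log(len(data)) - 2 * loglik
        return aic, bic

    # Exponential and 2-parameter Weibull fix the location at zero; the
    # 3-parameter Weibull leaves it free, which is what captures delayed onset.
    models = {
        "exponential": (stats.expon, {"floc": 0}),
        "2-parameter Weibull": (stats.weibull_min, {"floc": 0}),
        "3-parameter Weibull": (stats.weibull_min, {}),
    }
    for name, (dist, fixed) in models.items():
        aic, bic = fit_and_score(dist, durations, **fixed)
        print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")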

Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling

Procedia PDF Downloads 8
1631 Lactate in Critically Ill Patients: An Outcome Marker with Time

Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah

Abstract:

Introduction: Static derangements in lactate homeostasis during the ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, which capture the kinetics of anaerobic metabolism, make lactate a potential parameter for evaluating disease severity and the adequacy of interventions. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: To compare the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with the more commonly used static values. Patients and Methods: This study included 40 critically ill patients of both sexes, aged over 18 years, with hyperlactatemia (≄ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), the latter comprising all causes of low oxygen delivery. Six lactate indices specifically relating to the first 24 hours of ICU admission were considered: three static indices and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most laboratory results including arterial blood gases (ABG), and the need for mechanical ventilation. Admission lactate was significantly higher in the low oxygen transport group than in the septic group (37.5±11.4 versus 30.6±7.8, p = 0.034). Maximum lactate was also significantly higher in the low oxygen transport group (p = 0.044). On the other hand, the absolute lactate change (mg) was higher in the septic group (p < 0.001). The percentage change of lactate was higher in the septic group (47.8±11.3) than in the low oxygen transport group (26.1±12.6), a highly significant difference (p < 0.001). Lastly, time-weighted lactate was higher in the low oxygen transport group (1.72±0.81) than in the septic group (1.05±0.8) (p = 0.012). Lactate indices also differed significantly between survivors and non-survivors, in both the septic and the low oxygen transport groups. Conclusion: In critically ill patients, time-weighted lactate and the percentage change in lactate over the first 24 hours can serve as independent predictors of ICU mortality. Moreover, a rising, as opposed to a falling, blood lactate concentration over the first 24 hours is associated with a significantly increased risk of mortality.
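The abstract does not define the indices explicitly. The Python sketch below computes one common version of each on hypothetical serial lactate measurements: the static admission and maximum values, and the dynamic absolute change, percentage change, and time-weighted lactate; the sampling schedule and the time-weighted formula are our assumptions.

    import numpy as np

    # Hypothetical serial lactate measurements over the first 24 h of ICU
    # admission (hours, mmol/L); the study's sampling schedule is not stated.
    t = np.array([0.0, 6.0, 12.0, 18.0, 24.0])
    lac = np.array([4.1, 3.6, 3.0, 2.5, 2.2])

    admission = lac[0]                                    # static index
    maximum = lac.max()                                   # static index
    absolute_change = lac[0] - lac[-1]                    # dynamic index
    percent_change = 100.0 * (lac[0] - lac[-1]) / lac[0]  # dynamic index
    # Time-weighted lactate: area under the lactate-time curve divided by the
    # observation window (one common definition; the paper's exact formula
    # is not given in the abstract).
    time_weighted = np.trapz(lac, t) / (t[-1] - t[0])

    print(f"admission = {admission}, max = {maximum}, "
          f"delta = {absolute_change:.1f}, %change = {percent_change:.1f}, "
          f"time-weighted = {time_weighted:.2f}")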

Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism

Procedia PDF Downloads 241
1630 Search for EEG Correlates of Mental States Using EEG Neurofeedback Paradigm

Authors: Cyril Kaplan

Abstract:

Twenty-six participants played four EEG neurofeedback (NF) games and were encouraged to find their own strategies for controlling the specific NF parameter. A mixed-methods analysis of in-game performance and post-session interviews led to the identification of states of consciousness that correlated with success in the games. We found that an increase in left frontal beta activity was facilitated by evoking interest in the observed surroundings, for example by wondering what was happening behind a window or what lay in a drawer in front of the participant.
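As an illustration of how such an NF parameter might be computed, the sketch below estimates relative beta-band power from a single EEG channel using Welch's method. The channel choice (a left frontal site such as F3), the band edges (13-30 Hz), and the sampling rate are assumptions, since the abstract does not specify the parameter's exact definition.

    import numpy as np
    from scipy.signal import welch

    def relative_beta_power(eeg, fs):
        # Relative beta-band (13-30 Hz) power of one channel via Welch's method.
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        band = (freqs >= 13.0) & (freqs <= 30.0)
        return np.trapz(psd[band], freqs[band]) / np.trapz(psd, freqs)

    # Synthetic check: white noise plus a 20 Hz component should raise the index.
    fs = 256.0
    time = np.arange(0.0, 10.0, 1.0 / fs)
    eeg = np.random.default_rng(1).normal(size=time.size) \
        + 0.8 * np.sin(2 * np.pi * 20.0 * time)
    print(f"relative beta power: {relative_beta_power(eeg, fs):.2f}")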

Keywords: EEG neurofeedback, states of consciousness, frontal beta activity, mixed methods

Procedia PDF Downloads 141