Search results for: dimensional accuracy

342 TNF Modulation of Cancer Stem Cells in Renal Clear Cell Carcinoma

Authors: Rafia S. Al-lamki, Jun Wang, Simon Pacey, Jordan Pober, John R. Bradley

Abstract:

Tumor necrosis factor alpha (TNF), signaling through TNFR2, may act as an autocrine growth factor for renal tubular epithelial cells. Clear cell renal carcinomas (ccRCC) contain cancer stem cells (CSCs) that give rise to progeny which form the bulk of the tumor. CSCs are rarely in cell cycle and, as non-proliferating cells, resist most chemotherapeutic agents. Thus, recurrence after chemotherapy may result from the survival of CSCs. Therapeutic targeting of both CSCs and the more differentiated bulk tumor populations may provide a more effective strategy for treatment of RCC. In this study, we hypothesized that TNFR2 signaling will induce CSCs in ccRCC to enter the cell cycle, so that treatment with ligands that engage TNFR2 will render CSCs susceptible to chemotherapy. To test this hypothesis, we have utilized wild-type TNF (wtTNF) or specific muteins selective for TNFR1 (R1TNF) or TNFR2 (R2TNF) to treat either short-term organ cultures of ccRCC and adjacent normal kidney (NK) tissue or cultures of CD133+ cells isolated from ccRCC and adjacent NK, hereafter referred to as stem cell-like cells (SCLCs). The effect of cyclophosphamide (CP), currently an effective anticancer agent, was tested on CD133+ SCLCs from ccRCC and NK before and after R2TNF treatment. Responses to TNF were assessed by flow cytometry (FACS), immunofluorescence, quantitative real-time PCR, TUNEL, and cell viability assays. The cytotoxic effect of CP was analyzed by Annexin V and propidium iodide staining with FACS. In addition, we assessed the effect of TNF on the differentiation of isolated SCLCs using a three-dimensional (3D) culture system. Clinical samples of ccRCC contain a greater number of SCLCs compared to NK, and the number of SCLCs increases with higher tumor grade. Isolated SCLCs show expression of stemness markers (Oct4, Nanog, Sox2, Lin28) but not differentiation markers (cytokeratin, CD31, CD45, and EpCAM). In ccRCC organ cultures, wtTNF and R2TNF increase CD133 and TNFR2 expression and promote cell cycle entry, whereas wtTNF and R1TNF increase TNFR1 expression and promote cell death of SCLCs. Similar findings are observed in SCLCs isolated from NK, but the effect is greater in SCLCs isolated from ccRCC. Application of CP distinctly triggered apoptotic and necrotic cell death in SCLCs pre-treated with R2TNF as compared to CP treatment alone, with SCLCs from ccRCC more sensitive to CP than SCLCs from NK. Furthermore, TNF promotes differentiation of SCLCs to an epithelial phenotype in 3D cultures, confirmed by cytokeratin expression and loss of the stemness markers Nanog and Sox2. The differentiated cells show positive expression of TNF and TNFR2. These findings provide evidence that selective engagement of TNFR2 drives CSCs to proliferation/differentiation, and that targeting of cycling cells with a TNFR2 agonist in combination with anti-cancer agents may be a potential therapy for RCC.

Keywords: cancer stem cells, ccRCC, cell cycle, cell death, TNF, TNFR1, TNFR2, CD133

Procedia PDF Downloads 263
341 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, T-Test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
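
The abstract names Random Forest-based intrusion detection evaluated by accuracy, sensitivity and specificity; the snippet below is a minimal illustrative sketch of that evaluation step, not the authors' implementation, using placeholder traffic features and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                      # placeholder network-traffic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)      # placeholder intrusion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                          # true-positive rate
specificity = tn / (tn + fp)                          # true-negative rate
print(accuracy, sensitivity, specificity)
```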

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 33
340 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake

Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama

Abstract:

The 2011 off The Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan. A large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown, and then, the analysis result is compared with the observational records. Using the analysis result, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) by the Auto-Regressive eXogenous (ARX) model, we compare the analysis result with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass system SR model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake. The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis in which the subject seismic wave was input after the 3/11 main shock was input was conducted. The analyzed values generally corresponded well with the observed values. This means that the effect of the nonlinearity of the main shock was retained by the building. It is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
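
As an illustration of the ARX identification step mentioned above, the following sketch (an assumed, simplified formulation, not the authors' code) fits a second-order ARX model to a recorded response and converts the discrete-time poles into a first natural period and damping ratio.

```python
import numpy as np

def arx_period_damping(y, u, dt):
    """Fit y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] by least squares and return
    the first natural period [s] and damping ratio of the equivalent oscillator."""
    Y = y[2:]
    Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1]])
    a1, a2, b1 = np.linalg.lstsq(Phi, Y, rcond=None)[0]
    poles = np.roots([1.0, a1, a2]).astype(complex)   # discrete-time poles
    s = np.log(poles[0]) / dt                         # map to continuous time
    wn = abs(s)                                       # natural circular frequency [rad/s]
    zeta = -s.real / wn                               # damping ratio
    return 2.0 * np.pi / wn, zeta

# e.g. arx_period_damping(roof_acceleration, ground_acceleration, dt=0.01)
```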

Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake

Procedia PDF Downloads 164
339 The Applications of Zero Water Discharge (ZWD) Systems for Environmental Management

Authors: Walter W. Loo

Abstract:

China has declared “zero discharge” rules, which leave no toxics in our living environment and deliver blue sky, green land and clean water to the many generations to come. The achievement of ZWD provides conservation of water, soil and energy and a drastic increase in Gross Domestic Product (GDP). Our society’s engine needs a major tune-up; it is sputtering. ZWD is already achieved in the world’s space stations: there is no toxic air emission, the water is totally recycled, and solid wastes all come back to earth, all done with solar power and all achieved under extreme temperature, pressure and zero gravity in space. ZWD can be achieved on earth under much smaller fluctuations in temperature and pressure and in a normal-gravity environment. ZWD systems are not expensive and offer multiple beneficial returns on investment that are both financially and environmentally acceptable. The paper will include successful case histories since the mid-1970s. ZWD can be applied to the following types of projects: nuclear and coal-fired power plants, with a closed-loop system that eliminates thermal water discharge; residential communities, with a wastewater treatment sump that recycles water for use as a secondary water supply; and wastewater treatment plants, with complete water recycling, including distillation to produce distilled water using a very economical 24-hour solar power plant. Landfill remediation is based on neutralizing landfill gas odor and preventing anaerobic leachate formation; the resulting aerobic condition renders landfill gas emissions explosion-proof. Desert development relies on recovering moisture from the soil and completing a closed-loop water cycle by solar energy within and underneath an enclosed greenhouse. Salt-alkali land development can be achieved by solar distillation of shallow salty water into distilled water; the distilled water can be used for soil washing and irrigation, completing a closed-loop water cycle with energy and water conservation. Heavy-metal remediation can be achieved by precipitating dissolved toxic metals below the plant or vegetation root zone using solar electricity, without pumping and treating. Soil and groundwater remediation: abandoned refineries and chemical and pesticide factories can be remediated by in-situ electrobiochemical and bioventing treatment without pumping or excavation; toxic organic chemicals are oxidized into carbon dioxide, and heavy metals are precipitated below the plant and vegetation root zone. New water sources: low-temperature distilled water can be recycled for repeated use within a greenhouse environment by solar distillation; nano-bubble water can be made from the distilled water with nano bubbles of oxygen, nitrogen and carbon dioxide from air (fertilizer water), which also eliminates the use of pesticides because the nano oxygen breaks the insect growth chain at the larval stage. Three-dimensional high-yield greenhouses can be constructed with complete water recycling, using the vadose zone soil as a filter, with no farming wastewater discharge.

Keywords: greenhouses, no discharge, remediation of soil and water, wastewater

Procedia PDF Downloads 345
338 Genomic and Proteomic Variability in Glycine Max Genotypes in Response to Salt Stress

Authors: Faheema Khan

Abstract:

To investigate the ability of sensitive and tolerant genotypes of Glycine max to adapt to a saline environment in the field, we examined growth performance, water relations and the activities of antioxidant enzymes in relation to photosynthetic rate, chlorophyll a fluorescence, photosynthetic pigment concentration, protein and proline in plants exposed to salt stress. Ten soybean genotypes (Pusa-20, Pusa-40, Pusa-37, Pusa-16, Pusa-24, Pusa-22, BRAGG, PK-416, PK-1042, and DS-9712) were selected and grown hydroponically. After 3 days of proper germination, the seedlings were transferred to Hoagland’s solution (Hoagland and Arnon 1950). The growth chamber was maintained at a photosynthetic photon flux density of 430 μmol m−2 s−1, 14 h of light, 10 h of dark and a relative humidity of 60%. The nutrient solution was bubbled with sterile air and changed on alternate days. Ten-day-old seedlings were given seven levels of salt in the form of NaCl, viz. T1 = 0 mM NaCl, T2 = 25 mM NaCl, T3 = 50 mM NaCl, T4 = 75 mM NaCl, T5 = 100 mM NaCl, T6 = 125 mM NaCl, and T7 = 150 mM NaCl. The investigation showed that genotypes Pusa-24, PK-416 and Pusa-20 appeared to be the most salt-sensitive, as inferred from their significantly reduced length, fresh weight and dry weight in response to NaCl exposure. Pusa-37 appeared to be the most tolerant genotype, since no significant effect of NaCl treatment on growth was found. We observed a greater decline in photosynthetic variables such as photosynthetic rate, chlorophyll fluorescence and chlorophyll content in the salt-sensitive genotype (Pusa-24) than in the salt-tolerant Pusa-37 under high salinity. Numerous primers obtained from Operon Technologies were verified on the ten soybean genotypes, among which 30 RAPD primers showed high polymorphism and genetic variation. Jaccard’s similarity coefficient values for each pairwise comparison between cultivars were calculated and a similarity coefficient matrix was constructed. Varieties closer in the cluster behaved similarly in their response to salinity tolerance. Intra-clustering within the two clusters precisely grouped the 10 genotypes into sub-clusters, as expected from their physiological findings. The salt-tolerant genotype Pusa-37 was further analysed by two-dimensional gel electrophoresis to examine the differential expression of proteins under high salt stress. In the present study, 173 protein spots were identified. Of these, 40 proteins responsive to salinity were either up- or down-regulated in Pusa-37. Proteomic analysis in the salt-tolerant genotype (Pusa-37) led to the detection of proteins involved in a variety of biological processes, such as protein synthesis (12%), redox regulation (19%), primary and secondary metabolism (25%), and disease- and defence-related processes (32%). In conclusion, the soybean plants in our study responded to salt stress by changing their protein expression pattern. The photosynthetic, biochemical and molecular study showed that there is variability in salt tolerance behaviour among soybean genotypes: Pusa-24 is the salt-sensitive and Pusa-37 the salt-tolerant genotype. Moreover, this study gives new insights into the salt-stress response in soybean and demonstrates the power of genomic and proteomic approaches in plant biology, which could ultimately help in identifying the possible regulatory switches (gene/s) controlling the salt-tolerant genotype of crop plants and their possible role in defence mechanisms.
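
The RAPD analysis described above relies on Jaccard similarity and clustering; the sketch below (an assumed workflow with random placeholder band scores, not the authors' pipeline) shows how such a similarity coefficient matrix and cluster grouping can be computed.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# 10 genotypes x 120 RAPD bands scored as 1 (present) / 0 (absent) -- placeholder data
bands = np.random.default_rng(1).integers(0, 2, size=(10, 120))

jaccard_dist = pdist(bands, metric='jaccard')         # 1 - Jaccard similarity per pair
similarity = 1.0 - squareform(jaccard_dist)           # pairwise similarity coefficient matrix
tree = linkage(jaccard_dist, method='average')        # UPGMA-style hierarchical clustering
clusters = fcluster(tree, t=2, criterion='maxclust')  # split genotypes into two main clusters
```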

Keywords: glycine max, salt stress, RAPD, genomic and proteomic variability

Procedia PDF Downloads 423
337 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and enhance system-to-system or component-to-component interactions, resulting in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexities inherent in the dataset, such as imbalanced classes, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed, which predicts component replacement in advance by exploring aircraft historical data. The approach is based on a hybrid ensemble-based method which improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of performance, using real-world aircraft operation and maintenance datasets that span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets; our results likewise show good performance compared to other baseline classifiers.
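
The paper's hybrid ensemble method is not reproduced here, but the following sketch illustrates the general idea of combining minority-class oversampling with an ensemble learner on an imbalanced replacement dataset; the data and the SMOTE/gradient-boosting choice are assumptions for illustration only.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 30))                 # placeholder sensor/usage features
y = (rng.random(5000) < 0.03).astype(int)       # ~3% replacement events: heavy imbalance

model = Pipeline([
    ('oversample', SMOTE(random_state=0)),      # synthesise minority-class samples (training folds only)
    ('clf', GradientBoostingClassifier(random_state=0)),
])
# Minority-class F1 is a more honest criterion than plain accuracy on imbalanced data.
scores = cross_val_score(model, X, y, cv=5, scoring='f1')
print(scores.mean())
```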

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 175
336 The Influence of Hydrolyzed Cartilage Collagen on General Mobility and Wellbeing of an Active Population

Authors: Sara De Pelsmaeker, Catarina Ferreira da Silva, Janne Prawit

Abstract:

Recent studies show that enzymatically hydrolysed collagen is absorbed and distributed to joint tissues, where it has analgesic and active anti-inflammatory properties. Reviews of the relevant literature also support this theory. However, these studies all use hydrolysed collagen from animal hide or skin. This study looks into the effect of daily supplementation with hydrolysed cartilage collagen (HCC), which has a different composition. A consumer study was set up using a double-blind, placebo-controlled design, with a control group taking 0.5 g of maltodextrin twice a day and an experimental group taking 0.5 g of HCC twice a day, over a trial period of 12 weeks. A follow-up phase of 4 weeks without supplementation was included in the experiment to investigate the wash-out phase. As this consumer study was conducted during the lockdown periods, a specific app was designed to follow up with the participants. The app had the advantage of enhancing participant motivation, and the drop-out rate was lower than normally seen in consumer studies. Participants were recruited via various sports and health clubs across the UK, as we targeted a general population of people who considered themselves in good health. Exclusion criteria were experiencing any medical condition and taking any prescribed medication. A minimum requirement was that participants regularly engaged in some level of physical activity. The participants had to log the type of activity that they performed and its duration. Weekly, participants provided feedback on their joint health and subjective pain using the validated pain-measuring instrument, the Visual Analogue Scale (VAS). The weekly reporting section in the app was designed for simplicity and based on the accuracy demonstrated in previous similar studies for tracking subjective pain measures. At the beginning of the trial, each participant indicated their baseline joint pain. The results of this consumer study indicated that HCC significantly improved joint health and subjective pain scores compared to the placebo group. No significant differences were found between demographic groups (age or gender). The level of activity, ranging from high-intensity training to regular walking, did not significantly influence the effect of the HCC. The results of the wash-out phase indicated that when the participants stopped the HCC supplementation, their subjective pain scores increased again to the baseline. In conclusion, the results give a positive indication that daily supplementation with HCC can contribute to the overall mobility and wellbeing of a generally active population.

Keywords: VAS-score, food supplement, mobility, joint health

Procedia PDF Downloads 164
335 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard

Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni

Abstract:

The damage reported by oil and gas industrial facilities revealed the utmost vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences on built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquakes reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yielding stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage via a data-driven surrogate model. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
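
As a rough illustration of the surrogate-model comparison described above (with an assumed, synthetic data layout standing in for the field damage database), the four candidate algorithms can be compared by cross-validated accuracy as follows.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# columns: diameter, height, liquid level, yield stress, PGA, magnitude (placeholder values)
X = rng.uniform(size=(800, 6))
y = rng.integers(0, 4, size=800)                # placeholder damage states DS0..DS3

models = {
    'naive Bayes': GaussianNB(),
    'k-NN': KNeighborsClassifier(n_neighbors=7),
    'decision tree': DecisionTreeClassifier(random_state=0),
    'random forest': RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring='accuracy').mean()
    print(f'{name}: {acc:.3f}')
```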

Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model

Procedia PDF Downloads 143
334 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, delivering data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.
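
A minimal sketch of the first analysis steps named above, standardisation, principal component analysis and K-means clustering, is given below; the feature matrix is a placeholder standing in for the well database, not the study's actual data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 15))   # placeholder geological / completion / production attributes

Z = StandardScaler().fit_transform(X)               # put attributes on a common scale
pcs = PCA(n_components=3).fit_transform(Z)          # emphasise the strongest patterns
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)  # partition the wells
```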

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 285
333 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease

Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette

Abstract:

Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. It can also provide information about resistance as well as contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation and low cost. This presentation will describe a new method for metabolite quantification from NMR data that combines alignment of spectra of standards to sample spectra, followed by multivariate linear regression optimization of the spectra of assigned metabolites to the samples’ spectra. Several different alignment methods were tested, and the multivariate linear regression result has been compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, as well as biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of fusarium head blight resistance in wheat cultivars as well as analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating resistance level in wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium-infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.
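
The quantification idea, fitting aligned standard spectra to a sample spectrum by multivariate linear regression, can be sketched as below; this is a simplified reading of the method (using a non-negative least-squares fit), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def quantify(sample_spectrum, reference_spectra):
    """reference_spectra: (n_points, n_metabolites) standard spectra already aligned
    to the sample; returns one non-negative scaling factor per metabolite."""
    conc, residual = nnls(reference_spectra, sample_spectrum)
    return conc

# Each factor is then converted to an absolute concentration using the known
# concentration of the corresponding standard.
```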

Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment

Procedia PDF Downloads 339
332 Zinc Oxide Varistor Performance: A 3D Network Model

Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic

Abstract:

ZnO varistors are the leading overvoltage protection elements in today’s electronics industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability and attractive cost of production are unique in this field. Nevertheless, challenges and open questions remain. In particular, the urge to create ever smaller, versatile and reliable parts that fit industry’s demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and a lot of work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigate ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a preferably realistic model of the microstructure was set up in the form of three-dimensional networks, where every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as was the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods were used, firstly to feed the simulations with realistic parameters and, secondly, to verify the obtained results. These methods are: a micro four-point probe method system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor, micro lock-in infrared thermography (MLIRT) to detect current paths, electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations, atom probe to determine atomic substituents, and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density, which is inversely proportional to the number of active current paths, since this number determines the electrically active volume, depends on the grain size distribution. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and can give some recommendations on fabrication for obtaining more reliable ZnO varistors.

Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide

Procedia PDF Downloads 283
331 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Current available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse of the structure. The elements in AEM are connected with sets of normal and shear springs along the edges of the elements, that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis has been widely modelled using the finite element method for analysis of progressive collapse of structures; however, difficulties in the analysis were found at the presence of excessively deformed elements with cracking or crushing, as well as having a high computational cost, and difficulties on choosing the appropriate material models for analysis. The Applied Element method is developed and coded to significantly improve the accuracy and also reduce the computational costs of the method. The scheme works for both linear elastic, and nonlinear cases, including elasto-plastic materials. This paper will focus on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on the Gaussian Quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that using Gaussian springs, only up to 2 springs were required for perfectly elastic cases, while with equal springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification is based on adapting the number of springs required depending on the elasticity of the material. After the first Newton Raphson iteration, Von Mises stress conditions were used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, springs located exactly between the elastic and plastic region, are interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom), and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region, and 2 springs for the plastic region. This showed to improve the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6 springs. All the work is done using MATLAB and the results will be compared to models of structural elements using the finite element method in ANSYS.
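
A small sketch of the 'Gaussian springs' modification is given below: springs are placed at Gauss-Legendre points along the element face and their stiffnesses are weighted by the quadrature weights. The stiffness expression is an assumed, simplified form of the usual AEM normal-spring stiffness, not the paper's exact formulation.

```python
import numpy as np

def gaussian_springs(n_springs, h, E, t, a):
    """Spring positions (from the face centre) and normal stiffnesses along one element face.
    h: face height, E: Young's modulus, t: element thickness, a: distance between element centres."""
    xi, w = np.polynomial.legendre.leggauss(n_springs)  # Gauss-Legendre points/weights on [-1, 1]
    y = 0.5 * h * xi                                    # spring positions along the face
    k_n = E * t * (0.5 * h * w) / a                     # tributary area * E / a  (assumed form)
    return y, k_n

# e.g. only two Gaussian springs per face were found sufficient for the elastic case:
y, k = gaussian_springs(2, h=0.1, E=210e9, t=0.01, a=0.05)
```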

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 225
330 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative results, for example, sabotaged public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and a limitation on individuals' contribution to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is the topmost priority of the government in every country. The aim of this research is to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy associated with existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and test error are the forecast prediction criteria. The results showed that the HETTFs performed better in terms of prediction, and factors associated with terrorist activities in Nigeria were determined. The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
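
For reference, the transfer functions named above can be written compactly as follows; the two derived functions are shown here as compositions of the primary ones, which is one reading of the 'convolution' described in the abstract.

```python
import numpy as np

def satlins(n):                 # Symmetric Saturated Linear: clip to [-1, 1]
    return np.clip(n, -1.0, 1.0)

def tanh_tf(n):                 # Hyperbolic Tangent
    return np.tanh(n)

def tansig(n):                  # Hyperbolic Tangent Sigmoid (numerically equivalent to tanh)
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def satlins_tanh(n):            # derived SATLINS-TANH
    return satlins(tanh_tf(n))

def satlins_tansig(n):          # derived SATLINS-TANSIG
    return satlins(tansig(n))
```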

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 166
329 Effects of Gender on Kinematics Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Soccer is a game that draws great attention in many countries, especially in Brazil. Among the different skills in soccer, kicking plays a major role in the success and standing of a team. Points are gained in this game by sending the ball over the goal line, which is achieved by shooting during attacks or during penalty kicks. Accordingly, identifying the factors that affect instep kicking from different distances, whether shooting with maximum force and high accuracy, passing, or taking penalty kicks, may assist coaches and players in raising the quality of skill execution. The aim of the present study was to examine a few kinematic parameters of the instep kick from 5 and 7 meter distances among male and female elite soccer players. Twenty-four subjects with a right dominant lower limb (12 males and 12 females), drawn from elite Tehran soccer players, participated in this study, with mean and standard deviation of age (22.5 ± 1.5) and (22.08 ± 1.31) years, height (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight (69.66 ± 4.09) and (53.16 ± 3.51) kg, BMI (21.06 ± 0.731) and (19.67 ± 0.709), and playing history (4 ± 0.73) and (3.08 ± 0.66) years, respectively. They had at least two years of continuous playing experience in the Tehran soccer league. For sampling the players' kicks, a Kinemetrix motion analysis system with three cameras at 1000 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical points (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed with a one-step approach and a 30 to 45 degree angle to the stationary ball. Three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. Descriptive statistics were used to report means and standard deviations, while analysis of variance and the independent t-test (P < 0.05) were used to compare the kinematic parameters between the two genders. Among the evaluated parameters, knee acceleration, thigh angular velocity, and knee angle showed a significant relationship with the outcome of the kick. When comparing performance over 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before the time of contact. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip and iliac crest, the velocity of the toe and ankle, the acceleration of the ankle, and the angular velocity of the pelvis and knee.

Keywords: biomechanics, kinematics, instep kicking, soccer

Procedia PDF Downloads 503
328 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving-model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model were carried out in the moving-model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
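
The boundary-layer quantities evaluated along the train model follow the standard integral definitions; the short sketch below (not the authors' code) computes displacement thickness, momentum thickness and form factor from a measured wall-normal velocity profile.

```python
import numpy as np

def integral_parameters(y, u, u_e):
    """y: wall-normal positions, u: mean streamwise velocity at y, u_e: edge velocity."""
    ratio = u / u_e
    trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))  # trapezoidal rule
    delta_star = trap(1.0 - ratio)          # displacement thickness
    theta = trap(ratio * (1.0 - ratio))     # momentum thickness
    H = delta_star / theta                  # form (shape) factor
    return delta_star, theta, H
```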

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 307
327 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
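
A condensed sketch of the modelling step is shown below; the column names and the synthetic target are assumptions used only to make the example self-contained, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 6)),
                 columns=['productivity_rate', 'estimated_cost', 'actual_cost',
                          'duration', 'scope_change_magnitude', 'scope_change_timing'])
y = 0.8 * X['scope_change_magnitude'] + rng.normal(scale=0.1, size=500)  # placeholder cost impact

models = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(random_state=0),
          RandomForestRegressor(random_state=0), GradientBoostingRegressor(random_state=0),
          XGBRegressor(random_state=0)]
for m in models:
    r2 = cross_val_score(m, X, y, cv=5, scoring='r2').mean()   # R-squared under cross-validation
    print(type(m).__name__, round(r2, 3))
```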

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 44
326 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but they are not free of controversy and have not produced a commonly accepted consensus. Aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of changes in terms of health system outcomes. Methods: Data were collected over a 25-year period after the fall of communism, divided into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value over time, yet it does not necessarily mean that such dynamics reflect an improvement in a given country. In reality, some of the countries moved along the scale with different effects: Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost ground to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency. Moreover, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse between countries. The instrument we propose constitutes a useful tool for evaluating the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
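
One common formulation of the zeroed-unitarisation step behind the SMO is sketched below; the aggregation as a simple mean of rescaled indicators is an assumption made for illustration.

```python
import numpy as np

def smo(X, stimulant):
    """X: countries x indicators matrix; stimulant[j] is True when a higher value is better."""
    X = np.asarray(X, dtype=float)
    stim = np.asarray(stimulant, dtype=bool)
    lo, hi = X.min(axis=0), X.max(axis=0)
    Z = (X - lo) / (hi - lo)              # zeroed unitarisation: rescale each indicator to [0, 1]
    Z[:, ~stim] = 1.0 - Z[:, ~stim]       # reverse indicators where less is better
    return Z.mean(axis=1)                 # synthetic measure per country
```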

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 290
325 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows

Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld

Abstract:

Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just some examples where the particles encountered are not only spherical. These types of multiphase flows are wall-bounded and mostly highly turbulent. The particles found in these processes are rarely spherical but may have various shapes (e.g., fibers and rods). Although research related to the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where the fiber-wall interaction completely changes the particles' behavior. Imaging-based experimental studies on dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage that they provide field information in two or three dimensions, but they have a lower temporal resolution compared to point-wise techniques such as PDA (phase-Doppler anemometry) and derivations therefrom. The imaging techniques applied in dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), and the main emphasis has been the simultaneous measurement of the velocity fields of both phases. In a similar way, such data should also provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. Especially for elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations. To provide further detailed experimental results allowing a validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles, driven solely by gravity, was injected. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracer particles were used. The discrimination between tracer and fibers was done based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics of fiber orientation, the velocity fields of tracer and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved. A comprehensive analysis was accomplished, especially in the near-wall region, where hydrodynamic wall interaction effects (e.g., collision or lubrication) and abrupt changes of particle rotational velocity exist. This will allow us to subsequently predict the behavior of non-spherical particles numerically within the frame of the Euler/Lagrange approach, where the particles are treated as “point-particles”.
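
The discrimination and orientation step can be illustrated with the following sketch (segmentation is assumed to be already done, and the size threshold is a placeholder): objects are split into tracer and fibres by projected size, and the in-plane fibre orientation is taken from the principal axis of the object's pixel coordinates.

```python
import numpy as np

def classify_and_orient(blob_pixels, size_threshold=50):
    """blob_pixels: (N, 2) pixel coordinates of one detected object in the image."""
    if len(blob_pixels) < size_threshold:        # small projected area -> tracer particle
        return 'tracer', None
    centred = blob_pixels - blob_pixels.mean(axis=0)
    cov = centred.T @ centred / len(centred)     # second moments of the object
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]       # principal (fibre) axis in the image plane
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return 'fiber', angle
```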

Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV

Procedia PDF Downloads 87
324 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides and acids. In order to address this problem, carbon and nitrogen oxides need to be reduced through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out into the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, either neat or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (with 2, 9, and 12 steps) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity, and concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained at lean equivalence ratios (0.5-0.9). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and varying equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion takes place in a leaner state, which helps in addressing environmental problems.
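For readers who want to reproduce the kind of 1D premixed-flame reference calculation used here for validation, the following is a hedged sketch using Cantera with GRI-Mech 3.0 as an open-source stand-in for the PREMIX/CHEMKIN workflow; the equivalence ratio, hydrogen fraction, and domain width are illustrative assumptions, not the authors' settings.

```python
import cantera as ct

# Minimal freely propagating 1D premixed flame (Cantera + GRI-Mech 3.0 as stand-ins)
gas = ct.Solution('gri30.yaml')
phi = 0.8                                       # lean equivalence ratio (assumed)
# Methane with 10% hydrogen by volume in the fuel blend (assumed split)
gas.set_equivalence_ratio(phi, 'CH4:0.9, H2:0.1', 'O2:1.0, N2:3.76')
gas.TP = 300.0, ct.one_atm

flame = ct.FreeFlame(gas, width=0.03)           # 3 cm computational domain
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

print(f"Laminar flame speed: {flame.velocity[0]:.3f} m/s")
print(f"Peak temperature:    {flame.T.max():.0f} K")
```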

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 115
323 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study

Authors: Jutta Ransmayr

Abstract:

The digital age has long since arrived at Austrian schools, so both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. Therefore, the Austrian school-leaving exam (A-levels) can now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated and answered, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills shown in German A-level papers written with a pen differ from those in papers the students wrote on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up in order to subject it to a quantitative (corpus-linguistic and statistical) and qualitative investigation with regard to the spelling and punctuation performance of the high school graduates and the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on the length of the text and, in some cases, also on the quality of the text. Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation could also be found in the computer-written texts as compared to the handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The research results show that, on the one hand, there are significant differences between handwritten and computer-written work with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only the teachers' assessments of the students' spelling performance vary enormously but also the overall assessments of the exam papers; the production medium (pen and paper or computer) also seems to play a decisive role.
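To illustrate the kind of statistical comparison such a corpus study typically relies on, the following is a small Python sketch comparing per-text error rates between the two media with a Welch t-test; the numbers are synthetic placeholders, not the project's data, and the specific test used by the authors is not stated in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder data: orthography errors per 100 words for each paper
pen_errors = rng.normal(loc=1.8, scale=0.7, size=260)
computer_errors = rng.normal(loc=1.4, scale=0.6, size=270)

# Welch's t-test (no equal-variance assumption) comparing the two writing media
t_stat, p_value = stats.ttest_ind(pen_errors, computer_errors, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```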

Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics

Procedia PDF Downloads 171
322 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Then, different analytic models for eddy viscosity, TKE, and mixing length were assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
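As a worked illustration of one ingredient mentioned above, the sketch below integrates the classical wall-layer velocity profile obtained from a Van Driest-damped mixing length, du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)) with l+ = kappa y+ (1 - exp(-y+/A+)); the constants kappa = 0.41 and A+ = 26 are the usual textbook values and are assumed here, not taken from the paper.

```python
import numpy as np

# Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)),  l+ = kappa*y+*(1 - exp(-y+/A+))
kappa, A_plus = 0.41, 26.0          # textbook constants (assumed)

y_plus = np.linspace(0.0, 300.0, 30001)
l_plus = kappa * y_plus * (1.0 - np.exp(-y_plus / A_plus))
dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l_plus**2))
# Trapezoidal cumulative integration of du+/dy+ to obtain u+(y+)
u_plus = np.concatenate(([0.0],
                         np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y_plus))))

for yp in (5, 30, 100, 300):
    i = np.searchsorted(y_plus, yp)
    print(f"y+ = {yp:3d}  ->  u+ = {u_plus[i]:.2f}")
```

In the viscous sublayer this reproduces u+ ≈ y+, and in the log layer it approaches the familiar logarithmic law, which is the behaviour the abstract's model comparison is built around.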

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 112
321 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)

Authors: Eliane G. Tótoli, Hérida Regina N. Salgado

Abstract:

Daptomycin is an important antimicrobial agent used in clinical practice nowadays, since it is very active against some Gram-positive bacteria that are particularly challenging for medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). The importance of environmental preservation has received special attention in recent years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements regarding analytical procedures used in pharmaceutical analysis, industries must seek environmentally friendly alternatives for the analytical methods and other processes they follow in their routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays. In this context, infrared spectroscopy stands out. It is a method that does not use organic solvents and, although it is formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that there are few green analytical methods described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After obtaining the infrared spectrum and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm-1, related to a carbonyl band of the daptomycin molecule; the height of this band was analyzed in terms of absorbance. The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy, and robustness. Results and discussion: The method was shown to be linear (r = 0.9999), precise (RSD% < 2.0), accurate, and robust over a concentration range from 0.2 to 0.6 mg/pellet. In addition, this technique does not use organic solvents, which is a great advantage over the most common analytical methods. This fact contributes to minimizing the generation of organic solvent waste by the industry and thereby reduces the impact of its activities on the environment. Conclusion: The validated method proved to be adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, which is in line with the global trend.
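The quantitation step described above amounts to a linear calibration of band absorbance against drug content. The sketch below shows that calculation in Python over the stated 0.2-0.6 mg/pellet range; the absorbance values are invented placeholders, not the authors' measurements.

```python
import numpy as np
from scipy import stats

# Placeholder calibration data: drug mass per KBr pellet (mg) vs. band absorbance
mass_mg = np.array([0.2, 0.3, 0.4, 0.5, 0.6])
absorbance = np.array([0.151, 0.224, 0.301, 0.374, 0.452])   # assumed values

fit = stats.linregress(mass_mg, absorbance)
print(f"slope = {fit.slope:.3f} AU/mg, intercept = {fit.intercept:.3f}, r = {fit.rvalue:.4f}")

# Predict the content of an unknown sample from its measured band absorbance
unknown_abs = 0.330
estimated_mass = (unknown_abs - fit.intercept) / fit.slope
print(f"estimated content: {estimated_mass:.3f} mg/pellet")
```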

Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region

Procedia PDF Downloads 381
320 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower-quality reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated from feature vectors extracted with the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
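To make the loss formulation concrete, the following PyTorch sketch shows a Charbonnier loss and a VGG16-based perceptual loss of the kind described above; the layer cut-off, loss weights, and single-channel handling are assumptions for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class CharbonnierLoss(nn.Module):
    """Smooth L1-like loss: sqrt((x - y)^2 + eps^2), averaged over all pixels."""
    def __init__(self, eps=1e-3):
        super().__init__()
        self.eps = eps
    def forward(self, pred, target):
        return torch.mean(torch.sqrt((pred - target) ** 2 + self.eps ** 2))

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 feature maps of prediction and ground truth."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
    def forward(self, pred, target):
        # CT slices are single-channel; repeat to 3 channels for VGG (an assumption)
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.vgg(pred3), self.vgg(target3))

# Example composite generator loss with assumed weights
charbonnier, perceptual = CharbonnierLoss(), PerceptualLoss()
pred = torch.rand(2, 1, 128, 128)      # stand-in for corrected sparse-view slices
gt = torch.rand(2, 1, 128, 128)        # stand-in for full-view ground truth
loss = charbonnier(pred, gt) + 0.1 * perceptual(pred, gt)
print(loss.item())
```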

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 164
319 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
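For orientation, one common way to quantify theta-phase gamma-amplitude coupling is the mean-vector-length modulation index computed from Hilbert transforms of the band-passed signal. The Python sketch below illustrates that estimator on synthetic data; it is an assumed implementation for illustration, since the abstract specifies only the Matlab spectrogram step, not the exact TGC formula used.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg, fs):
    """Mean-vector-length modulation index between theta phase (4-8 Hz) and
    gamma amplitude (30-80 Hz); a sketch of one common TGC estimator."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

# Synthetic example: gamma bursts whose amplitude follows the theta cycle
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = 0.3 * (1 + np.cos(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 45 * t)
eeg = theta + gamma + 0.1 * np.random.randn(t.size)
print(f"TGC modulation index: {theta_gamma_coupling(eeg, fs):.3f}")
```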

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 144
318 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as they are more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and their validation will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown. Thus, the importance of accurate dielectric material properties for validation purposes will be discussed.
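As a rough illustration of the material-tolerance sensitivity check described above, the sketch below scales the physical-optics RCS of a flat plate at normal incidence by the Fresnel power reflection of the dielectric, then sweeps the relative permittivity over an assumed ±5% tolerance. This simple slab-reflection model, the plate size, and the frequency are illustrative assumptions only, not the full-wave or asymptotic solvers used in the paper.

```python
import numpy as np

def plate_rcs_normal(eps_r, area_m2, freq_hz):
    """Approximate normal-incidence RCS (m^2) of a dielectric flat plate:
    PO plate RCS (4*pi*A^2/lambda^2) scaled by the Fresnel power reflection
    at the air-dielectric interface. Illustrative only, not a full-wave result."""
    c = 299792458.0
    lam = c / freq_hz
    gamma = (1.0 - np.sqrt(eps_r)) / (1.0 + np.sqrt(eps_r))   # Fresnel coefficient
    return np.abs(gamma) ** 2 * 4.0 * np.pi * area_m2 ** 2 / lam ** 2

area = 0.1 * 0.1                   # 10 cm x 10 cm plate (assumed)
freq = 10e9                        # 10 GHz, inside the 2-18 GHz measurement band
for eps in (2.85, 3.00, 3.15):     # nominal permittivity +/- 5% tolerance (assumed)
    rcs = plate_rcs_normal(eps, area, freq)
    print(f"eps_r = {eps:.2f} -> RCS = {10 * np.log10(rcs):.2f} dBsm")
```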

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 243
317 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on mechanistic crop modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process but places strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbor, artificial neural network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several complementary perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
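The data-driven branch of the comparison follows a standard pattern: fit a regressor inside a 5-fold cross-validation loop and report RMSEP and MAEP on the held-out folds. The following scikit-learn sketch shows that pattern with synthetic features standing in for the USDA/climate records; the feature layout and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
# Placeholder design matrix standing in for the 720 county-level climate records
X = rng.normal(size=(720, 12))
y = 8.0 + 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=720)

rmsep, maep = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmsep.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    maep.append(mean_absolute_error(y[test_idx], pred))

print(f"RMSEP = {np.mean(rmsep):.3f}, MAEP = {np.mean(maep):.3f}")
```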

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 232
316 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called 'hallucinations', the generation of outputs that are not grounded in the input data, which hinder adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks, for multiple reasons such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to provide market insights and data visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
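To make the plan-then-code-then-execute loop concrete, the following Python sketch shows the overall control flow; it is an illustration under stated assumptions, not the production system. The call_llm function, the prompts, and the result_i naming convention are hypothetical stand-ins, and sandboxing of generated code is omitted.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with the model endpoint actually in use."""
    raise NotImplementedError

def analyze(question: str, table_schema: str) -> str:
    # 1) Planning agent: break the analytical question into ordered sub-tasks.
    plan = json.loads(call_llm(
        f"Decompose this analytics question into a JSON list of sub-tasks.\n"
        f"Question: {question}\nTable schema: {table_schema}"
    ))
    # 2) Code-generation agent: turn each sub-task into executable pandas code.
    results, namespace = {}, {}
    for i, step in enumerate(plan):
        code = call_llm(
            f"Write Python (pandas) code for this step, storing its output in a "
            f"variable named result_{i}.\nStep: {step}\nSchema: {table_schema}"
        )
        exec(code, namespace)               # sandboxing omitted in this sketch
        results[step] = namespace.get(f"result_{i}")
    # 3) Synthesis: compose the final, user-facing answer from executed outputs.
    return call_llm(
        f"Question: {question}\nIntermediate results: {results}\n"
        f"Write the final answer for a non-technical user."
    )
```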

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 88
315 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Due to the inability to manage and monitor these systems, many HVAC systems have a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative for energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors and without any physical energy meters, demonstrating that virtual meters can serve as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are used to verify the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce energy consumption patterns, and it is concluded with confidence that the results of the virtual meters will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in building HVAC systems to improve energy management and efficiency. Using a data mining approach, the virtual meters' data are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to harness substantial energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and opens a window for predictive control aimed at lower energy consumption. To address these challenges, energy prediction could optimize the HVAC system and automate energy management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that would allow a quick and efficient response to energy consumption and cost spikes in the energy market.
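At its core, a virtual meter of this kind computes thermal power from sensors the AHU already has, via an energy balance such as Q = m_dot * cp * dT on the air stream. The sketch below shows that calculation in Python; the sensor names, air properties, and sample readings are assumptions for illustration, not values from AHU-07.

```python
# Minimal virtual-meter sketch: estimate AHU coil thermal power from existing
# airflow and temperature sensors via Q = m_dot * cp * dT.
RHO_AIR = 1.2      # kg/m^3, assumed air density
CP_AIR = 1.006     # kJ/(kg*K), assumed specific heat of air

def coil_power_kw(airflow_m3_s: float, t_entering_c: float, t_leaving_c: float) -> float:
    """Thermal power transferred to the air stream across the coil, in kW."""
    m_dot = RHO_AIR * airflow_m3_s          # mass flow rate of air
    return m_dot * CP_AIR * (t_leaving_c - t_entering_c)

# Example trend samples (hypothetical BMS readings)
samples = [
    {"flow": 4.2, "t_in": 12.0, "t_out": 19.5},
    {"flow": 3.8, "t_in": 13.1, "t_out": 19.0},
]
for s in samples:
    print(f"{coil_power_kw(s['flow'], s['t_in'], s['t_out']):.1f} kW")
```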

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 106
314 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: The most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: Higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: Diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
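For readers who want a concrete template for the modelling step, the scikit-learn sketch below trains a gradient boosting classifier and reports AUC-ROC and F1, as in the evaluation described above; the synthetic predictors stand in for the NHANES variables, and survey weighting and SHAP interpretation are omitted here for brevity.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(1)
n = 5000
# Synthetic placeholders for NHANES-style predictors: age, BMI, TCHOL, smoking flag
X = np.column_stack([
    rng.uniform(20, 80, n),          # age (years)
    rng.normal(28, 6, n),            # BMI (kg/m^2)
    rng.normal(190, 35, n),          # total cholesterol (mg/dL)
    rng.integers(0, 2, n),           # smoking status
])
# Assumed data-generating rule so the synthetic labels depend on age/BMI/smoking
logit = -9.0 + 0.05 * X[:, 0] + 0.12 * X[:, 1] + 0.2 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1, stratify=y)
gbm = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
proba = gbm.predict_proba(X_te)[:, 1]
print(f"AUC-ROC = {roc_auc_score(y_te, proba):.3f}, "
      f"F1 = {f1_score(y_te, proba > 0.5):.3f}")
```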

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 12
313 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein

Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel

Abstract:

γ-Glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism, where it catalyzes the breakage of γ-glutamyl bonds and the transfer of the γ-glutamyl group to water (hydrolytic activity) or to amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat, and mouse) are homologous proteins sharing >50% sequence similarity and a conserved four-layered αββα sandwich-like three-dimensional structural fold. These proteins, though structurally similar to each other, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis reactions but poor in transpeptidase activity, whereas many others show the opposite behaviour. GGT is known to be involved in various diseases such as asthma, Parkinson's disease, arthritis, and gastric cancer. Its inhibition prior to chemotherapy treatments has been shown to sensitize tumours to the treatment. Microbial GGT is also known to be a virulence factor, important for the colonization of bacteria in the host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways, although a few successful efforts at designing species-specific inhibitors have been reported previously. We aim to leverage the diversity seen in the GGT family (pathogen vs. eukaryote) for designing specific inhibitors. Thus, in the present study, we have used the DIVERGE software to identify sites in GGT proteins which are crucial for the functional and structural divergence of these proteins. Since type II divergence sites vary in a clade-specific manner, they were our focus of interest throughout the study. Type II divergent sites were identified for pathogen vs. eukaryote clusters, and the sites were marked on the clade-specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of the pathogen and eukaryote clades, respectively. The crucial divergent sites within a 15 Å radius of the binding cavity were highlighted, and in-silico mutations were performed on these sites to delineate their role in the mechanism of catalysis and protein folding. Further, amino acid network (AAN) analysis was performed with Cytoscape to delineate assortative mixing for the cavity divergent sites, which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for wild-type and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength, and 1 atm pressure), and the role of the putative divergence sites and the structural integrity of the homologous proteins were analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds, and salt bridges. The RMSD and RMSF fluctuations of the protein complexes were compared, and the changes at the protein-ligand binding sites were highlighted. The outcomes of our study highlighted some crucial divergent sites which could be used for designing novel inhibitors in a species-specific manner. Since it is challenging in drug development to design a novel drug by targeting a protein similar to one that exists in eukaryotes, this study could set up an initial platform to overcome this challenge and help to deduce more effective targets for novel drug discovery.
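The RMSD/RMSF scrutiny of the trajectories described above can be reproduced with standard tooling; the Python sketch below uses MDAnalysis, which is an assumed tool (the abstract does not name its analysis software), and the topology/trajectory file names are placeholders only.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms, align

# Placeholder file names; the actual topology/trajectory files are not specified.
u = mda.Universe("ggt_wild.psf", "ggt_wild_traj.dcd")

# Backbone RMSD of every frame against the first frame of the trajectory
rmsd = rms.RMSD(u, u, select="backbone", ref_frame=0)
rmsd.run()
print("final-frame backbone RMSD (A):", rmsd.results.rmsd[-1, 2])

# Per-residue RMSF of C-alpha atoms after aligning the trajectory in memory
align.AlignTraj(u, u, select="backbone", in_memory=True).run()
calphas = u.select_atoms("name CA")
rmsf = rms.RMSF(calphas).run()
for resid, value in zip(calphas.resids[:5], rmsf.results.rmsf[:5]):
    print(f"residue {resid}: RMSF = {value:.2f} A")
```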

Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design

Procedia PDF Downloads 269