Search results for: zonal load prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4808

1028 Effect of Inspiratory Muscle Training on Diaphragmatic Strength Following Coronary Revascularization

Authors: Abeer Ahmed Abdelhamed

Abstract:

Introduction: Postoperative pulmonary complications (PPCs) are the most common complications observed and managed after abdominal or cardiothoracic surgery. Hypoxemia, atelectasis, pleural effusion, and diaphragmatic dysfunction are frequent sources of morbidity in cardiac surgery patients, and they are more common in patients receiving unilateral or bilateral internal mammary artery (IMA) grafts than in patients receiving saphenous vein (SV) grafts alone. Purpose: The aim of this work was to investigate the effect of threshold-load inspiratory muscle training on pulmonary gas exchange and maximum inspiratory pressure (MIP) in patients undergoing coronary revascularization. Subjects: Thirty-three male patients eligible for coronary revascularization were selected to participate in the study. Method: They were divided into two groups (17 patients in the intervention group and 16 patients in the control group); the intervention group received inspiratory muscle training at 30% of their maximum inspiratory pressure throughout the hospitalization period, in addition to routine postoperative care. Result: The results of this study showed a significant improvement in maximum inspiratory pressure (MIP), alveolar-arterial pressure gradient (A-a gradient), and oxygen saturation in the intervention group. Conclusion: Inspiratory muscle training using the threshold mode significantly improves maximum inspiratory pressure and pulmonary gas exchange, assessed by the alveolar-arterial gradient and oxygen saturation, in patients undergoing coronary revascularization.

Keywords: coronary revascularization, inspiratory muscle training, maximum inspiratory pressure, pulmonary gas exchange

Procedia PDF Downloads 299
1027 Behavior of Composite Reinforced Concrete Circular Columns with Glass Fiber Reinforced Polymer I-Section

Authors: Hiba S. Ahmed, Abbas A. Allawi, Riyadh A. Hindi

Abstract:

Pultruded materials made of fiber-reinforced polymer (FRP) come in a broad range of shapes, such as bars, I-sections, C-sections, and other structural sections. These FRP materials are starting to compete with steel as structural materials because of their high resistance, low self-weight, and low maintenance costs, especially in corrosive environments. This study aimed to evaluate the effectiveness of hybrid columns built by combining glass fiber reinforced polymer (GFRP) profiles with concrete columns, owing to their low cost and high structural efficiency. To achieve the aims of this study, nine circular columns with a diameter of 150 mm and a height of 1000 mm were cast using normal concrete with a compressive strength of 35 MPa. The research involved three types of reinforcement: hybrid circular columns of type (IG) with a GFRP I-section and a 1% steel bar reinforcement ratio, and hybrid circular columns of type (IS) with a steel I-section and a 1% steel bar reinforcement ratio (where the cross-sectional area of the I-section was the same for GFRP and steel), compared with a reference column (R) without an I-section. The columns were tested to investigate the ultimate capacity, axial and lateral deformation, strain in the longitudinal and transverse reinforcement, and failure mode under different loading conditions (concentric and eccentric, with eccentricities of 25 mm and 50 mm, respectively). In the second part, an analytical finite element model will be developed using ABAQUS software to validate the experimental results.

Keywords: composite, columns, reinforced concrete, GFRP, axial load

Procedia PDF Downloads 54
1026 Integrative Transcriptomic Profiling of NK Cells and Monocytes: Advancing Diagnostic and Therapeutic Strategies for COVID-19

Authors: Salma Loukman, Reda Benmrid, Najat Bouchmaa, Hicham Hboub, Rachid El Fatimy, Rachid Benhida

Abstract:

In this study, we used integrated transcriptomic datasets from the GEO repository to investigate immune dysregulation in COVID-19. In this context, we focused on NK cell and CD14+ monocyte gene expression, considering datasets GSE165461 and GSE198256, respectively. Other datasets with PBMCs, lung, olfactory and sensory epithelium, and lymph tissue were used to provide robust validation for our results. This approach gave an integrated view of the immune responses in COVID-19, pointing out a set of potential biomarkers and therapeutic targets with special regard to standard physiological conditions. IFI27, MKI67, CENPF, MBP, HBA2, TMEM158, THBD, HBA1, LHFPL2, SLA, and AC104564.3 were identified as key genes from our analysis that are involved in critical biological processes related to inflammation, immune regulation, oxidative stress, and metabolism. Such processes are important in understanding the heterogeneous clinical manifestations of COVID-19, from acute to long-term effects now known as 'long COVID'. Subsequent validation with additional datasets consolidated these genes as robust biomarkers with an important role in the diagnosis of COVID-19 and the prediction of its severity. Moreover, their enrichment in key pathophysiological pathways presented them as potential targets for therapeutic intervention. The results provide insight into the molecular dynamics of COVID-19 driven by cells such as NK cells and monocytes. This study thus constitutes a solid basis for targeted diagnostic and therapeutic development and makes relevant contributions to ongoing research efforts toward better management and mitigation of the pandemic.

Keywords: SARS-CoV-2, RNA-seq, biomarkers, severity, long COVID-19, bioanalysis

Procedia PDF Downloads 11
1025 Design, Synthesis and Pharmacological Investigation of Novel 2-Phenazinamine Derivatives as a Mutant BCR-ABL (T315I) Inhibitor

Authors: Gajanan M. Sonwane

Abstract:

Nowadays, the entire pharmaceutical industry is facing the challenge of increasing efficiency and innovation. The major hurdles are the growing cost of research and development and a concurrently stagnating number of new chemical entities (NCEs). Hence, the challenge is to select the most druggable targets and to search for the corresponding drug-like compounds that also possess the specific pharmacokinetic and toxicological properties that allow them to be developed as drugs. The present research work includes studies toward developing new anticancer heterocycles using molecular modeling techniques. Heterocycles synthesized through such a methodology are more likely to be effective, as the relevant physicochemical parameters have already been studied and the structure has been optimized for its best fit in the receptor. Hence, on the basis of the literature survey and considering the need to develop newer anticancer agents, new phenazinamine derivatives were designed by subjecting the nucleus to molecular modeling, viz., GQSAR analysis and docking studies. Simultaneously, these designed derivatives were subjected to in silico prediction of biological activity through PASS studies and then to in silico toxicity risk assessment. In the PASS studies, all the derivatives exhibited a good spectrum of biological activities, confirming their anticancer potential. The toxicity risk assessment revealed that all the derivatives obey Lipinski's rule. Among the series, compounds 4c, 5b, and 6c were found to possess logP and drug-likeness values comparable with the standard imatinib (used for the anticancer activity studies) and with the standard drug methotrexate (used for the antimitotic activity studies). One of the most notable mutations is the threonine-to-isoleucine mutation at codon 315 (T315I), which is known to be resistant to all currently available TKIs. An enzyme assay is planned to confirm the target-selective activity.
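
As a minimal illustration of the drug-likeness screen described above, the sketch below checks Lipinski's rule of five with RDKit; the SMILES string is a placeholder phenazine core, not one of the synthesized 2-phenazinamine derivatives, and the thresholds are the standard rule-of-five limits.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

# Placeholder structure: the unsubstituted phenazine core, for illustration.
smiles = "c1ccc2nc3ccccc3nc2c1"
mol = Chem.MolFromSmiles(smiles)

# Lipinski's rule of five: MW <= 500, logP <= 5, H-bond donors <= 5, acceptors <= 10.
properties = {
    "MW": Descriptors.MolWt(mol),
    "logP": Crippen.MolLogP(mol),
    "HBD": Lipinski.NumHDonors(mol),
    "HBA": Lipinski.NumHAcceptors(mol),
}
violations = sum([
    properties["MW"] > 500,
    properties["logP"] > 5,
    properties["HBD"] > 5,
    properties["HBA"] > 10,
])
print(properties, "Lipinski violations:", violations)
```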

Keywords: drug design, tyrosine kinases, anticancer, phenazinamine

Procedia PDF Downloads 115
1024 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns associated with two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) used by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] at the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the cleaned dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
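
The outlier-removal and pretraining steps described above can be sketched as follows, assuming a generic tabular dataset X (operating conditions) and y (DRM outputs) rather than the authors' experimental data; the layer sizes and DBSCAN parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X, y = rng.random((500, 6)), rng.random((500, 6))  # placeholder data

# Step 1: DBSCAN flags outliers with the label -1; keep only clustered points.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
X, y = X[labels != -1], y[labels != -1]

# Step 2: greedy layer-wise pretraining -- train each hidden layer as a
# one-layer autoencoder on the previous layer's encoding, then stack.
def pretrain_dense_layer(data, units, epochs=30):
    inp = layers.Input(shape=(data.shape[1],))
    enc = layers.Dense(units, activation="relu")(inp)
    dec = layers.Dense(data.shape[1])(enc)
    ae = models.Model(inp, dec)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(data, data, epochs=epochs, verbose=0)
    encoded = models.Model(inp, enc).predict(data, verbose=0)
    return encoded, ae.layers[1]  # the trained Dense encoder layer

encoded, pretrained = X, []
for units in (64, 32, 16):
    encoded, dense = pretrain_dense_layer(encoded, units)
    pretrained.append(dense)

# Step 3: stack the pretrained layers, add a regression head, and fine-tune.
model = models.Sequential(pretrained + [layers.Dense(y.shape[1])])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
```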

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 84
1023 Optimizing the Window Geometry Using Fractals

Authors: K. Geetha Ramesh, A. Ramachandraiah

Abstract:

In an internal building space, daylight becomes a powerful source of illumination. The challenge, therefore, is to develop means of utilizing both direct and diffuse natural light in buildings while maintaining and improving occupants' visual comfort, particularly at greater distances from the daylight-admitting windows. The geometrical features of windows in a building have a significant effect on the daylight provided. The main goal of this research is to develop an innovative window geometry that will effectively provide an adequate daylight component together with the internal reflected component (IRC) and the external reflected component (ERC), if any. This involves exploring a light-redirecting system using fractal geometry for windows, in order to penetrate and distribute daylight more uniformly to greater depths, minimize heat gain and glare, and substantially reduce building energy use. Of late, the creation of fractal window geometries and the daylight illuminance produced by such windows has become an interesting subject of study. The amount of daylight can change significantly based on the window geometry and sky conditions. This leads to (i) the exploration of various fractal patterns suitable for window designs, and (ii) the quantification of the effect of a chosen fractal window based on the relationship between the fractal pattern, size, orientation, and glazing properties, for optimized daylighting. There are many natural lighting applications able to predict the behaviour of light in a room through a traditional opening - a regular window. The conventional prediction methodology involves the evaluation of the daylight factor, the internal reflected component, and the external reflected component. Having evaluated the daylight illuminance level for a conventional window, the technical performance of a fractal window for optimal daylighting is studied and compared with that of a regular window. The methodologies involved are highlighted in this paper.
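
To make the idea of a fractal window boundary concrete, the sketch below applies a Koch-type subdivision to a square aperture and tracks how the perimeter grows while the glazed area stays bounded; the construction is a generic illustration, not the specific patterns explored in the paper.

```python
import numpy as np

def koch_subdivide(points):
    """One Koch iteration: replace each edge with four segments,
    raising a triangular bump on the middle third."""
    new_pts = []
    for p, q in zip(points, np.roll(points, -1, axis=0)):
        d = q - p
        a, b = p + d / 3, p + 2 * d / 3
        # Rotate the middle third by 60 degrees to form the bump apex.
        rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
        apex = a + rot @ (b - a)
        new_pts.extend([p, a, apex, b])
    return np.array(new_pts)

def polygon_area(points):
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Start from a 1 m x 1 m square window and iterate the fractal boundary.
window = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
for i in range(4):
    per = np.sum(np.linalg.norm(np.roll(window, -1, axis=0) - window, axis=1))
    print(f"iteration {i}: perimeter {per:.3f} m, area {polygon_area(window):.3f} m^2")
    window = koch_subdivide(window)
```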

Keywords: daylighting, fractal geometry, fractal window, optimization

Procedia PDF Downloads 299
1022 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures for modeling time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external autoencoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real datasets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
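
A minimal sketch of the general idea, an RNN that emits cause-specific incidence over time, is given below; it is not the authors' CmpXRnnSurv_AE (it omits the RIW attention and the external autoencoder), and the loss is a placeholder for a proper survival likelihood.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# An LSTM over longitudinal covariates emits, at each time step, a
# probability for each competing cause plus an "event-free" outcome.
T, F, CAUSES = 20, 10, 2  # time steps, covariate features, competing risks

inputs = layers.Input(shape=(T, F))
h = layers.LSTM(32, return_sequences=True)(inputs)
# Softmax over (causes + 1) outcomes per step; cumulative sums of the
# cause columns over time approximate cause-specific incidence functions.
step_probs = layers.TimeDistributed(layers.Dense(CAUSES + 1, activation="softmax"))(h)
cif = layers.Lambda(lambda p: tf.cumsum(p[..., :CAUSES], axis=1))(step_probs)

model = models.Model(inputs, cif)
# Placeholder loss; real models use a likelihood over observed event
# times, causes, and censoring indicators.
model.compile(optimizer="adam", loss="mse")
model.summary()
```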

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)

Procedia PDF Downloads 88
1021 A Comparative Study on Behavior Among Different Types of Shear Connectors using Finite Element Analysis

Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman

Abstract:

Composite structures have made significant advances in construction applications during the last few decades. Composite structures are composed of structural steel shapes and reinforced concrete joined by shear connectors, which allows each material's unique properties to be exploited. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, the AISC 360-16 "Specification for Structural Steel Buildings" provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type and L-type shear connectors using finite element analysis. Experimental results from the published literature are used to validate the finite element models. A 3-D finite element model (FEM) was built using ABAQUS 2017 to investigate the non-linear behavior and ultimate load-carrying potential of the connectors in push-out tests. The effects of changes in connector dimensions were analyzed using this non-linear model in parametric investigations. The parametric study shows that increasing the length of the shear connector by 10 mm increases its shear strength by 21%. Shear capacity increased by 13% as the height was increased by 10 mm. Raising the thickness of the specimen by 1 mm resulted in a 2% increase in shear capacity. However, the shear capacity of channel connectors was reduced by 21% when the thickness was increased by 2 mm.
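
For reference, the AISC 360-16 channel-anchor strength expression mentioned above has the form shown in the sketch below; the section dimensions and concrete properties are illustrative values, and the expression should be verified against the specification before use.

```python
import math

# AISC 360-16 channel anchor strength (US units: kips, inches, ksi):
#   Qn = 0.3 * (tf + 0.5 * tw) * la * sqrt(f'c * Ec)
tf = 0.390   # channel flange thickness (in), illustrative
tw = 0.220   # channel web thickness (in), illustrative
la = 4.0     # channel length (in), illustrative
fc = 4.0     # concrete compressive strength (ksi), illustrative
Ec = 3600.0  # concrete modulus of elasticity (ksi), illustrative

Qn = 0.3 * (tf + 0.5 * tw) * la * math.sqrt(fc * Ec)
print(f"nominal channel connector strength Qn = {Qn:.1f} kips")
```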

Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test

Procedia PDF Downloads 120
1020 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. The nuclear fuel performance code may involve a tolerance level determined by traditional deterministic models producing acceptable results at burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended radiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point-kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. The agreement between the computational simulation and the experimental results was acceptable. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibits the same behavior as a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
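
A minimal sketch of the multivariate logistic regression step, using the five predictors named above with synthetic placeholder data, could look as follows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training table: one row per pulse test, with the five
# predictors named in the abstract and a binary failed/survived label.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(20, 80, 200),   # burnup (GWd/MTU)
    rng.uniform(50, 200, 200),  # peak power (illustrative units)
    rng.uniform(5, 80, 200),    # pulse width (ms)
    rng.uniform(5, 120, 200),   # oxide layer thickness (um)
    rng.integers(0, 2, 200),    # cladding type (encoded 0/1)
])
y = rng.integers(0, 2, 200)     # placeholder failure labels

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
# Predicted failure probability for a new rod state:
print(model.predict_proba([[70, 120, 30, 60, 1]])[0, 1])
```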

Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation

Procedia PDF Downloads 289
1019 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution can be found. Some optimization algorithms exist, such as the genetic algorithm and ant colony optimization, but the large scale of distributed systems makes these traditional optimization algorithms difficult to apply. Heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using machine learning methods, we try to find the important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and outlines the challenges and improvement directions for DRL-based resource scheduling algorithms.
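
A toy sketch of the RNN-based deep reinforcement learning idea is given below: a GRU policy assigns a sequence of tasks to nodes and is trained by REINFORCE against a makespan reward. It is a generic illustration under assumed toy dynamics, not the paper's algorithm.

```python
import torch
import torch.nn as nn

# A GRU reads each task's resource demand and emits node probabilities;
# the reward is the negative makespan (max accumulated node load).
N_NODES, N_TASKS = 4, 12

class Scheduler(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_NODES)

    def forward(self, demands):             # demands: (1, N_TASKS, 1)
        h, _ = self.gru(demands)
        return torch.log_softmax(self.head(h), dim=-1)  # per-task log-probs

policy = Scheduler()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    demands = torch.rand(1, N_TASKS, 1)
    log_probs = policy(demands)
    actions = torch.distributions.Categorical(logits=log_probs).sample()
    loads = torch.zeros(N_NODES)
    for t in range(N_TASKS):                # accumulate load per chosen node
        loads[actions[0, t]] += demands[0, t, 0]
    reward = -loads.max()                   # minimize makespan
    chosen = log_probs[0, torch.arange(N_TASKS), actions[0]]
    loss = -(reward * chosen.sum())         # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
```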

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 110
1018 Proposing an Improved Managerial-Based Business Process Framework

Authors: Alireza Nikravanshallmani, Jamshid Dehmeshki, Mojtaba Ahmadi

Abstract:

Modeling business processes, based on BPMN (Business Process Modeling Notation), helps analysts and managers understand business processes and identify their shortcomings. These models provide a context for making rational decisions about organizing business process activities in an understandable manner. The purpose of this paper is to provide a framework for a better understanding of business processes and their problems by reducing the cognitive load of the displayed information for audiences at different managerial levels, while keeping the essential information they need. For this reason, we integrate business process diagrams across the different managerial levels to develop a framework that improves the performance of business process management (BPM) projects. The proposed framework is entitled 'Business process improvement framework based on managerial levels (BPIML)'. The framework determines a certain type of business process diagram (BPD), based on BPMN, with respect to the objectives and tasks of the various managerial levels of organizations and their roles in BPM projects. The framework enables us to provide the necessary support for making decisions about business processes. It is evaluated with a case study in a real business process improvement project to demonstrate its superiority over the conventional method. A questionnaire consisting of 10 questions using a Likert scale was designed and given to the participants (managers at the three managerial levels of Bank Refah Kargaran). The results of the questionnaire indicate that the proposed framework supports correct and timely decisions by increasing the clarity and transparency of the business processes, which led to success in BPM projects.

Keywords: business process management (BPM), business process modeling, business process reengineering (BPR), business process optimizing, BPMN

Procedia PDF Downloads 450
1017 Modeling Approach to Better Control Fouling in a Submerged Membrane Bioreactor for Wastewater Treatment: Development of Analytical Expressions in Steady-State Using ASM1

Authors: Benaliouche Hana, Abdessemed Djamal, Meniai Abdessalem, Lesage Geoffroy, Heran Marc

Abstract:

This paper presents a dynamic mathematical model of activated sludge that is able to predict the formation and degradation kinetics of SMP (soluble microbial products) in membrane bioreactor systems. The model is based on a calibrated version of ASM1 extended with the theory of production and degradation of SMP. The model was calibrated on experimental data from a membrane bioreactor (MBR) pilot plant. Analytical expressions have been developed describing the concentrations of the main state variables present in the sludge matrix, with the inclusion of only six additional linear differential equations. The objective is to present a new dynamic mathematical model of activated sludge capable of predicting the formation and degradation kinetics of SMP (UAP and BAP) in the submerged membrane bioreactor (BRMI), operating at low organic load (C/N = 3.5), for two sludge retention times (SRT) fixed at 40 days and 60 days, in order to study their impact on membrane fouling. The modeling study was carried out under the steady-state condition. The analytical expressions were then validated by comparing their results with those obtained by simulations using the GPS-X (Hydromantis) software. These equations made it possible, by means of modeling approaches (ASM1), to identify the operating and kinetic parameters and help predict membrane fouling.
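
A simplified sketch of SMP formation and degradation kinetics of the kind the model describes is shown below; the rate laws and all parameter values are illustrative assumptions, not the paper's calibrated ASM1 extension.

```python
from scipy.integrate import solve_ivp

# Illustrative SMP kinetics: UAP is produced in proportion to substrate
# uptake, BAP in proportion to biomass decay, and both are degraded by
# the heterotrophic biomass (all values assumed, not calibrated).
mu_max, K_S, Y, b = 6.0, 20.0, 0.67, 0.62   # 1/d, g/m3, -, 1/d (typical)
k_uap, k_bap = 0.05, 0.02                    # production fractions (assumed)
k_h_smp, K_smp = 0.3, 30.0                   # SMP degradation kinetics (assumed)

def rhs(t, state):
    S, X, UAP, BAP = state
    growth = mu_max * S / (K_S + S) * X
    uptake = growth / Y
    smp = UAP + BAP
    smp_deg = k_h_smp * smp / (K_smp + smp) * X
    dS = -uptake
    dX = growth - b * X
    dUAP = k_uap * uptake - smp_deg * UAP / max(smp, 1e-9)
    dBAP = k_bap * b * X - smp_deg * BAP / max(smp, 1e-9)
    return [dS, dX, dUAP, dBAP]

sol = solve_ivp(rhs, (0, 10), [200.0, 2500.0, 0.0, 0.0])
print("final S, X, UAP, BAP:", sol.y[:, -1].round(2))
```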

Keywords: Activated Sludge Model No. 1 (ASM1), mathematical modeling membrane bioreactor, soluble microbial products, UAP, BAP, Modeling SMP, MBR, heterotrophic biomass

Procedia PDF Downloads 293
1016 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables that are treated as constant values in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and that their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The current study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of a constant conductivity coefficient, but the variable specific heat capacity must be taken into account; the variable specific heat capacity can also have a considerable effect on the time at which the concrete's central node reaches its maximum temperature. Also, the usage of GGBFS has more influence than fly ash.
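
The effect of hydration-dependent thermal properties can be illustrated with a one-dimensional explicit finite-difference sketch like the one below; the linear property laws and all values are assumptions for illustration, not the proposed equations of the study.

```python
import numpy as np

# 1D explicit finite-difference sketch of early-age heating in a concrete
# slab, with conductivity k and specific heat c decreasing linearly with
# the degree of hydration alpha (all property values are illustrative).
L_half, nx, dt, t_end = 0.5, 51, 10.0, 3 * 24 * 3600.0  # m, nodes, s, s
dx = L_half / (nx - 1)
rho = 2400.0                     # concrete density (kg/m3)
Q_total = 350.0 * 400e3          # cement content x heat of hydration (J/m3), assumed
tau = 40 * 3600.0                # hydration time constant (s), assumed

T = np.full(nx, 20.0)            # initial temperature (degC)
alpha = 0.0                      # degree of hydration, 0..1
for step in range(int(t_end / dt)):
    alpha_new = 1.0 - np.exp(-dt / tau) * (1.0 - alpha)  # simple hydration law
    k = 2.5 - 0.8 * alpha        # W/mK, linear decrease with hydration (assumed)
    c = 1000.0 - 150.0 * alpha   # J/kgK, linear decrease with hydration (assumed)
    q = Q_total * (alpha_new - alpha) / dt               # heat release rate (W/m3)
    lam = k * dt / (rho * c * dx**2)
    T[1:-1] += lam * (T[2:] - 2 * T[1:-1] + T[:-2]) + q * dt / (rho * c)
    T[0] += lam * 2 * (T[1] - T[0]) + q * dt / (rho * c)  # symmetry at the core
    T[-1] = 20.0                                          # fixed surface temperature
    alpha = alpha_new
print(f"core temperature after 3 days: {T[0]:.1f} degC")
```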

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 75
1015 Rapid Identification and Diagnosis of the Pathogenic Leptospiras through Comparison among Culture, PCR and Real Time PCR Techniques from Samples of Human and Mouse Feces

Authors: S. Rostampour Yasouri, M. Ghane, M. Doudi

Abstract:

Leptospirosis is one of the most significant infectious and zoonotic diseases, with global spread. The disease is a cause of economic losses and human fatalities in various countries, including the northern provinces of Iran. The aim of this research is to identify pathogenic leptospiras and compare rapid diagnostic techniques for them, considering the multifaceted clinical manifestation of the disease and the premature death of patients. In the spring and summer of 2020-2022, 25 fecal samples were collected from suspected leptospirosis patients and 25 fecal samples from mice residing in the rice fields and factories in Tonekabon city. Samples were prepared by centrifugation and passage through membrane filters. The culture technique used liquid and solid EMJH media during one month of incubation at 30°C; the media were then examined microscopically. DNA extraction was conducted with an extraction kit. Leptospiras were diagnosed by PCR and real-time PCR (SYBR Green) techniques using a lipL32-specific primer. Among the patients, 11 samples (44%) and 8 samples (32%) were determined to carry pathogenic Leptospira by real-time PCR and PCR, respectively. Among the mice, 9 samples (36%) and 3 samples (12%) were determined to carry pathogenic Leptospira by the respective techniques. Although the culture technique is considered the gold standard, it is not a fast technique, due to the slow growth of pathogenic Leptospira and the lack of colony formation of some species. Real-time PCR allowed rapid diagnosis with much higher accuracy than PCR, because PCR could not completely identify samples with a lower microbial load.

Keywords: culture, pathogenic leptospiras, PCR, real time PCR

Procedia PDF Downloads 84
1014 Correlation between Neck Circumference and Other Anthropometric Indices as a Predictor of Obesity

Authors: Madhur Verma, Meena Rajput, Kamal Kishore

Abstract:

Background: The general view that obesity is a problem of prosperous Western countries has been dispelled, with substantial evidence showing that middle-income countries like India are now at the heart of a fat explosion. Neck circumference has evolved as a promising index for measuring obesity because of the convenience of its use, even in culture-sensitive populations. Objectives: To determine whether neck circumference (NC) was associated with overweight and obesity and contributed to prediction like other classical anthropometric indices. Methodology: A cross-sectional study of 1080 adults (>19 years) selected through multi-stage random sampling between August 2013 and September 2014 using a pretested semi-structured questionnaire. After recruitment, the demographic and anthropometric parameters [BMI, waist and hip circumference (WC, HC), waist-to-hip ratio (WHR), waist-to-height ratio (WHtR), body fat percentage (BF%), and neck circumference (NC)] were recorded and calculated as per standard procedures. The analysis was done using appropriate statistical tests (SPSS, version 21). Results: The mean age of the study participants was 44.55±15.65 years. The overall prevalence of overweight and obesity as per the modified criteria for Asian Indians (BMI ≥ 23 kg/m²) was 49.62% (females 51.48%; males 47.77%). The numbers of participants having high WHR, WHtR, BF%, WC, and NC were 827 (76.57%), 530 (49.07%), 513 (47.5%), 537 (49.72%), and 376 (34.81%), respectively. The variation of NC, BMI, and BF% with age was non-significant. In both genders, as per Pearson's correlational analysis, neck circumference was positively correlated with BMI (men, r=0.670 {p < 0.05}; women, r=0.564 {p < 0.05}), BF% (men, r=0.407 {p < 0.05}; women, r=0.283 {p < 0.05}), WC (men, r=0.598 {p < 0.05}; women, r=0.615 {p < 0.05}), HC (men, r=0.512 {p < 0.05}; women, r=0.523 {p < 0.05}), WHR (men, r=0.380 {p > 0.05}; women, r=0.022 {p > 0.05}), and WHtR (men, r=0.318 {p < 0.05}; women, r=0.396 {p < 0.05}). On ROC analysis, NC showed good discriminatory power to identify obesity (AUC for males: 0.822; for females: 0.873; p-value < 0.001), with maximum sensitivity and specificity at a cut-off value of 36.55 cm for males and 34.05 cm for females. Conclusion: NC has fair validity as a community-based screener for overweight and obese individuals in the study context and correlates well with other classical indices.
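
The ROC cut-off analysis described above can be sketched as follows on synthetic data; the Youden-index rule used here is one common convention, since the abstract does not state how the optimal cut-off was chosen.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-in: neck circumference (cm) vs. a binary obesity label.
rng = np.random.default_rng(7)
nc = np.concatenate([rng.normal(34, 2.5, 500), rng.normal(38, 2.5, 500)])
obese = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(obese, nc)
j = tpr - fpr                     # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"AUC = {roc_auc_score(obese, nc):.3f}, "
      f"cut-off = {thresholds[best]:.2f} cm "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```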

Keywords: neck circumference, obesity, anthropometric indices, body fat percentage

Procedia PDF Downloads 247
1013 Internal Combustion Engine Fuel Composition Detection by Analysing Vibration Signals Using ANFIS Network

Authors: M. N. Khajavi, S. Nasiri, E. Farokhi, M. R. Bavir

Abstract:

Alcohol fuels are renewable, have low pollution, and have a high octane number; therefore, they are important as fuels in internal combustion engines. Detecting the percentage of these alcohol fuels in gasoline is a complicated, time-consuming, and expensive process. Nowadays, these measurements are made in equipped laboratories, based on international standards. The aim of this research is to detect the fuel composition based on vibration analysis of engine block signals; by doing so, considerable savings in time and cost can be achieved. Five different fuels, consisting of pure gasoline (G) as the base fuel and combinations of this fuel with different percentages of ethanol and methanol, were prepared. For example, the volumetric combination of pure gasoline with 10 percent ethanol is called E10. By this convention, M10 (10% methanol plus 90% pure gasoline), E30 (30% ethanol plus 70% pure gasoline), and M30 (30% methanol plus 70% pure gasoline) were prepared. To simulate real working conditions for this experiment, the vehicle was mounted on a chassis dynamometer and run at 1900 rpm under a 30 kW load. To measure the engine block vibration, a three-axis accelerometer was mounted between cylinders 2 and 3. After acquisition of the vibration signal, eight time-domain features of these signals were used as inputs to an Adaptive Neuro-Fuzzy Inference System (ANFIS). The designed ANFIS was trained to classify these five different fuels. The results show the suitable classification ability of the designed ANFIS network, with 96.3 percent correct classification.
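
A sketch of the feature-extraction step is shown below; the paper does not list its exact eight time-domain features, so the set here is a typical assumption.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """Eight common time-domain features for one vibration frame."""
    rms = np.sqrt(np.mean(x**2))
    peak = np.max(np.abs(x))
    return np.array([
        np.mean(x),                 # mean
        np.std(x),                  # standard deviation
        rms,                        # root mean square
        peak,                       # peak amplitude
        peak / rms,                 # crest factor
        kurtosis(x),                # kurtosis
        skew(x),                    # skewness
        rms / np.mean(np.abs(x)),   # shape factor
    ])

# Example: one frame of accelerometer samples (synthetic stand-in).
frame = np.random.default_rng(3).normal(size=4096)
print(time_domain_features(frame).round(3))
```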

Keywords: internal combustion engine, vibration signal, fuel composition, classification, ANFIS

Procedia PDF Downloads 400
1012 In silico Subtractive Genomics Approach for Identification of Strain-Specific Putative Drug Targets among Hypothetical Proteins of Drug-Resistant Klebsiella pneumoniae Strain 825795-1

Authors: Umairah Natasya Binti Mohd Omeershffudin, Suresh Kumar

Abstract:

Klebsiella pneumoniae is a Gram-negative enteric bacterium that causes nosocomial and urinary tract infections. Of particular concern is the global emergence of multidrug-resistant (MDR) strains of Klebsiella pneumoniae. Characterization of antibiotic resistance determinants at the genomic level plays a critical role in understanding, and potentially controlling, the spread of multidrug-resistant (MDR) pathogens. In this study, the drug-resistant Klebsiella pneumoniae strain 825795-1 was investigated with extensive computational approaches aimed at identifying novel drug targets among its hypothetical proteins. We analyzed the 1099 hypothetical proteins available in the genome. We used an in silico genome subtraction methodology to design potential, pathogen-specific drug targets against Klebsiella pneumoniae. We employed bioinformatics tools to subtract the strain-specific paralogous and host-specific homologous sequences from the bacterial proteome. The remaining 645 proteins were further refined to identify the essential genes in the pathogenic bacterium using the Database of Essential Genes (DEG). We found 135 unique essential proteins in the target proteome that could be utilized as novel targets for the design of new drugs. Further, we identified 49 cytoplasmic proteins as potential drug targets through sub-cellular localization prediction. We then investigated these proteins in the DrugBank database, and 11 of the unique essential proteins showed druggability according to the FDA-approved DrugBank entries, with diverse broad-spectrum properties. The results of this study will facilitate the discovery of new drugs against Klebsiella pneumoniae.
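
One subtraction step, removing bacterial proteins with significant human homologs from a BLASTP tabular report, might look like the sketch below; the file names and thresholds are assumptions, not the study's actual settings.

```python
# Parse a BLASTP run against the human proteome saved in tabular format
# (-outfmt 6: qseqid sseqid pident ... evalue bitscore) and subtract hits.
EVALUE_MAX, IDENTITY_MIN = 1e-4, 35.0   # common, but study-specific, cut-offs

human_homologs = set()
with open("kpneumoniae_vs_human.blastp.tsv") as handle:
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        query, identity, evalue = fields[0], float(fields[2]), float(fields[10])
        if evalue <= EVALUE_MAX and identity >= IDENTITY_MIN:
            human_homologs.add(query)    # too similar to a host protein

with open("kpneumoniae_proteins.txt") as handle:  # one protein ID per line
    all_proteins = {line.strip() for line in handle if line.strip()}

pathogen_specific = all_proteins - human_homologs
print(f"{len(pathogen_specific)} proteins kept after host-homology subtraction")
```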

Keywords: pneumonia, drug target, hypothetical protein, subtractive genomics

Procedia PDF Downloads 172
1011 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models

Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru

Abstract:

Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, as compared to the global average of 2.5-3.9 t/ha, due to biotic and abiotic constraints. Among the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva counts were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a log link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess model performance using a leave-one-out cross-validation approach. Results showed that the NB models outperformed the Po ones at all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly with the January and the February image data. We conclude that reflectance data from RapidEye can be used to estimate stem borer larvae density. The developed models could improve decision-making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
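
The Poisson versus negative binomial GLM comparison with a log link can be sketched with statsmodels as follows, on synthetic stand-ins for the RapidEye band reflectances.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
X = sm.add_constant(rng.uniform(0, 0.5, size=(120, 5)))  # 5 spectral bands
counts = rng.poisson(np.exp(X @ np.array([0.5, 2, -1, 1, 0.5, -2])))

# Both families use the log link by default.
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print("Poisson AIC:", round(poisson_fit.aic, 1), "NB AIC:", round(nb_fit.aic, 1))

# Leave-one-out RMSE for the negative binomial model:
errors = []
for i in range(len(counts)):
    mask = np.arange(len(counts)) != i
    fit = sm.GLM(counts[mask], X[mask], family=sm.families.NegativeBinomial()).fit()
    errors.append(counts[i] - fit.predict(X[i:i + 1])[0])
print("LOO RMSE:", round(float(np.sqrt(np.mean(np.square(errors)))), 2))
```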

Keywords: maize, stem borers, density, RapidEye, GLM

Procedia PDF Downloads 495
1010 Comparison of Elastic and Viscoelastic Modeling for Asphalt Concrete Surface Layer

Authors: Fouzieh Rouzmehr, Mehdi Mousavi

Abstract:

Hot mix asphalt concrete (HMAC) is a mixture of aggregates and bitumen. The primary ingredient that determines the mechanical properties of HMAC is the bitumen, which displays viscoelastic behavior under normal service conditions. For simplicity, asphalt concrete is often considered an elastic material, but this is far from reality at high service temperatures and longer loading times. Viscoelasticity means that the material's stress-strain relationship depends on the strain rate and loading duration. The goal of this paper is to simulate the mechanical response of flexible pavements using linear elastic and viscoelastic modeling of asphalt concrete and to predict pavement performance. A Falling Weight Deflectometer (FWD) load is simulated, and the results for elastic and viscoelastic modeling are evaluated. The viscoelastic behavior is represented by a Prony series, modeled using ANSYS software. In flexible pavement design, the tensile strain at the bottom of the surface layer and the compressive strain at the top of the last layer play important roles in the structural response of the pavement, and they determine the allowable number of loads for fatigue (Nf) and rutting (Nd), respectively. The differences between the two modeling approaches are investigated for the fatigue cracking and rutting problems, which are the two main design parameters in flexible pavement design. Although the differences in the rutting problem between the two models were negligible, in fatigue cracking, the viscoelastic model results were more accurate. The results indicate that modeling the flexible pavement with an elastic material is efficient enough and gives acceptable results.
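
The Prony-series representation used for the viscoelastic asphalt layer can be evaluated directly, as in the sketch below; the coefficients are illustrative, not the calibrated values fed to ANSYS in the study.

```python
import numpy as np

# Prony-series relaxation modulus, the form typically supplied to ANSYS
# viscoelastic material models:
#   E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
E_inf = 200.0                              # long-term modulus (MPa), illustrative
E_i = np.array([4000.0, 2500.0, 1200.0])   # Prony moduli (MPa), illustrative
tau_i = np.array([0.01, 0.1, 1.0])         # relaxation times (s), illustrative

def relaxation_modulus(t):
    t = np.atleast_1d(t)[:, None]
    return E_inf + np.sum(E_i * np.exp(-t / tau_i), axis=1)

# Instantaneous modulus at t = 0 vs. the relaxed value under sustained load:
for t in (0.0, 0.03, 0.3, 3.0):
    print(f"E({t:>4} s) = {relaxation_modulus(t)[0]:8.1f} MPa")
```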

Keywords: flexible pavement, asphalt, FEM, viscoelastic, elastic, ANSYS, modeling

Procedia PDF Downloads 129
1009 Seismic Response of Structures of Reinforced Concrete Buildings: Regular and Irregular Configurations

Authors: Abdelhammid Chibane

Abstract:

Often, for architectural or design reasons, buildings have a non-uniform profile in elevation. Depending on the configuration of the construction and the arrangement of the structural elements, the non-uniform profile in elevation (the setback) is conceived as a combination of non-uniform distributions of strength, stiffness, weight, and geometry along the height of irregular structures. This type of configuration can induce an irregular load distribution, causing a serious concentration of stresses at the discontinuity. It therefore requires a serious treatment of the building's behavior under earthquakes. If appropriate measures are not taken into account, structural irregularity may become a major source of damage during earthquakes. In the past, several research investigations have identified differences in the dynamic response of irregular and regular frames. Among the most notable differences are the increments of displacements and ductility demands in floors located above the level of the setback, and an increase in the contribution of the higher modes to the shear forces [1-10]. Seismic design codes recommend dynamic (or modal time-history) analysis methods to establish the design forces, instead of the equivalent static method, which is basically applicable only to regular structures without major discontinuities in mass, stiffness, and strength along the height [11, 12]. To investigate the effects of irregular profiles on structures, the main objective of this study was the assessment of the inelastic response, in terms of ductility demands, of four types of non-uniform multi-storey structures subjected to relatively severe earthquakes. In this study, only the responses parallel to the setback are analyzed.

Keywords: buildings, stress concentration, ductility, designs, irregular structures

Procedia PDF Downloads 261
1008 Linear Decoding Applied to V5/MT Neuronal Activity on Past Trials Predicts Current Sensory Choices

Authors: Ben Hadj Hassen Sameh, Gaillard Corentin, Andrew Parker, Kristine Krug

Abstract:

Perceptual decisions about sequences of sensory stimuli often show serial dependence: the behavioural choice on one trial is often affected by the choices on previous trials. We investigated whether the neuronal signals in extrastriate visual area V5/MT on preceding trials might influence the choice on the current trial and thereby reveal the neuronal mechanisms of sequential choice effects. We analysed data from 30 single neurons recorded from V5/MT in three rhesus monkeys making sequential choices about the direction of rotation of a three-dimensional cylinder. We focused exclusively on the responses of neurons that showed significant choice-related firing (mean choice probability = 0.73) while the monkey viewed perceptually ambiguous stimuli. Application of a wavelet transform to the choice-related firing revealed differences in the frequency bands of neuronal activity that depended on whether the previous trial resulted in a correct choice for an unambiguous stimulus in the neuron's preferred direction (low alpha and high beta and gamma) or non-preferred direction (high alpha and low beta and gamma). To probe this in further detail, we applied a regularized linear decoder to predict the choice on an ambiguous trial from the neuronal activity of the preceding unambiguous trial. Neuronal activity on a previous trial provided a significant prediction of the current choice (61% correct, 95% CI ~52%), even when the analysis was limited to preceding trials that were correct and rewarded. These findings provide a potential neuronal signature of sequential choice effects in the primate visual cortex.
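
A regularized linear decoder of the kind described can be sketched as follows; the features and labels are synthetic stand-ins for band-limited firing measures and binary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

# Predict the (binary) choice on an ambiguous trial from features of the
# preceding unambiguous trial's firing (e.g., band-limited power).
rng = np.random.default_rng(5)
n_trials, n_features = 300, 12
X_prev = rng.normal(size=(n_trials, n_features))
w_true = rng.normal(size=n_features)
choice = (X_prev @ w_true + rng.normal(scale=3.0, size=n_trials)) > 0

decoder = LogisticRegressionCV(Cs=10, penalty="l2", cv=5)  # L2-regularized
accuracy = cross_val_score(decoder, X_prev, choice, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```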

Keywords: perception, decision making, attention, decoding, visual system

Procedia PDF Downloads 137
1007 Observation of the Orthodontic Tooth's Long-Term Movement Using Stereovision System

Authors: Hao-Yuan Tseng, Chuan-Yang Chang, Ying-Hui Chen, Sheng-Che Chen, Chih-Han Chang

Abstract:

Orthodontic tooth treatment has demonstrated a high success rate in clinical studies. It is agreed that orthodontic tooth movement is based on the ability of the surrounding bone and periodontal ligament (PDL) to react to a mechanical stimulus with remodeling processes. However, the mechanism of tooth movement is still unclear. Recent studies focus on the simple compression-tension theory, while studies that directly measure tooth movement are rare. Therefore, tracking tooth movement during orthodontic treatment is very important in clinical practice. The aim of this study is to investigate the mechanical responses of tooth movement during orthodontic treatment. A stereovision system was applied to track the tooth movement of a patient with stamp brackets. The system was established with two cameras whose relative positions were calibrated, and the orthodontic force was measured on a 3D-printed model with a six-axis load cell to determine the initial force application. The results show that the measurements of the stereovision system present a maximum error of less than 2%. In the patient-tracking study, the incisor moved about 0.9 mm during 60 days of tracking, and half of the movement occurred in the first few hours. After the orthodontic force was removed for 100 hours, the distance between the before and after positions of the incisor decreased by 0.5 mm, consistent with a relapse phenomenon. Using the stereovision system, the three-dimensional positions of the teeth can be accurately located, and all the data can be superimposed in a common 3D coordinate system to integrate the complex tooth movement.
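
The core stereovision computation, triangulating a marker's 3D position from two calibrated views, can be sketched with OpenCV as below; the camera matrices and pixel coordinates are illustrative, not the study's calibration.

```python
import numpy as np
import cv2

# Recover a 3D point (e.g., a bracket marker) from its pixel coordinates
# in two calibrated cameras with known projection matrices.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics (assumed)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # camera 1 at origin
R, t = np.eye(3), np.array([[-100.0], [0.0], [0.0]])          # 100 mm baseline (assumed)
P2 = K @ np.hstack([R, t])                                    # camera 2

pts1 = np.array([[310.0], [235.0]])   # marker pixel in camera 1 (x, y)
pts2 = np.array([[420.0], [235.0]])   # same marker in camera 2

point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)           # homogeneous 4x1
point_3d = (point_h[:3] / point_h[3]).ravel()
print("triangulated point (mm):", point_3d.round(2))
```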

Keywords: orthodontic treatment, tooth movement, stereovision system, long-term tracking

Procedia PDF Downloads 421
1006 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation

Authors: Samuel Ahamefula Mba

Abstract:

Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates deriving the Poisson equation for pressure from the Navier-Stokes equations and resolving it effectively using Chebyshev finite-difference techniques. The mathematical analysis considers bounded domains with smooth solutions and non-periodic boundary conditions. To address this, a hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
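
A minimal sketch of solving a pressure Poisson equation on a 2D grid is given below; it uses a plain Jacobi finite-difference iteration with a placeholder source term rather than the Chebyshev discretization and the Navier-Stokes-derived right-hand side used in the study.

```python
import numpy as np

# Solve laplacian(p) = f on the unit square with Dirichlet boundaries.
# For incompressible Navier-Stokes, f is built from the velocity field;
# here f is a placeholder source term.
n, tol = 64, 1e-6
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Y)    # placeholder right-hand side

p = np.zeros((n, n))                          # p = 0 on all boundaries
for it in range(20000):
    p_new = p.copy()
    p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] - h**2 * f[1:-1, 1:-1])
    if np.max(np.abs(p_new - p)) < tol:
        break
    p = p_new
print(f"converged after {it} iterations; p at centre = {p[n // 2, n // 2]:.5f}")
```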

Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation

Procedia PDF Downloads 92
1005 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis

Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath

Abstract:

The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is the post-combustion capture process, in which the gas is absorbed into suitable chemical solvents before being emitted into the atmosphere. Alkanolamines are promising solvents for this capture process. The vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by the CO₂ loading and the partial pressure of CO₂, without considering the liquid phase. The liquid phase of this system is a complex one, comprising 9 species. Online analysis of the process is important to monitor the concentrations of the reacting and product species in the liquid phase. Liquid-phase analysis of CO₂-diethanolamine (DEA) solutions was performed by attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to the online monitoring experiment. The partial least squares regression method was used for the analysis of the calibration spectra obtained. The resulting models were used for the prediction of DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory, consisting of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 400°C. The results obtained indicate that FTIR spectroscopy combined with the partial least squares method is an effective tool for the online monitoring of speciation.
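
The PLS calibration step can be sketched as follows, with synthetic stand-ins for the ATR-FTIR spectra and the DEA and CO₂ concentrations.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Map absorbance spectra to DEA and CO2 concentrations via PLS regression.
rng = np.random.default_rng(9)
n_samples, n_wavenumbers = 60, 400
concentrations = rng.uniform([0.5, 0.0], [3.0, 1.0], size=(n_samples, 2))  # DEA, CO2 (mol/L)
basis = rng.normal(size=(2, n_wavenumbers))          # pure-component "spectra"
spectra = concentrations @ basis + rng.normal(scale=0.05, size=(n_samples, n_wavenumbers))

pls = PLSRegression(n_components=4)
predicted = cross_val_predict(pls, spectra, concentrations, cv=5)
rmsecv = np.sqrt(np.mean((predicted - concentrations) ** 2, axis=0))
print("RMSECV (DEA, CO2):", rmsecv.round(4))
```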

Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression

Procedia PDF Downloads 195
1004 Wear Performance of SLM Fabricated 1.2709 Steel Nanocomposite Reinforced by TiC-WC for Mould and Tooling Applications

Authors: Daniel Ferreira, José M. Marques Oliveira, Filipe Oliveira

Abstract:

Wear phenomena are critical in injection moulding processes, causing failure of components and making parts more expensive, with additional wasted time. When very abrasive materials, such as polymers reinforced with abrasive fibres, are being injected into the steel mould's cavities, the consequences of wear are more evident. Maraging steel (1.2709) is commonly employed in moulding components to resist very aggressive injection conditions. In this work, the wear performance of SLM-produced 1.2709 maraging steel reinforced by ultrafine titanium and tungsten carbide (TiC-WC) was investigated using a pin-on-disk testing apparatus. A disk of polypropylene reinforced with 40 wt.% fibreglass (PP40) was used as the counterpart material. The wear tests were performed at a constant load of 40 N and a sliding speed of 0.4 m/s at room temperature and humidity. The experimental results demonstrate that the wear rate of the 18Ni300-TiC-WC composite is lower than that of the unreinforced 18Ni300 matrix. The morphology and chemical composition of the worn surfaces were observed by 3D optical profilometry and scanning electron microscopy (SEM), respectively. The debris produced by friction was also analysed by SEM and energy-dispersive X-ray spectroscopy (EDS). Its morphology showed distinct shapes and sizes, which indicates that the wear mechanisms may differ between maraging steel produced by casting and by SLM. The coefficient of friction (COF) was recorded during the tests, which helped to elucidate the wear mechanisms involved.
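
The standard pin-on-disk data reduction behind a reported wear rate is sketched below; the worn volume and test duration are assumed values, while the load and speed are those stated above.

```python
# Specific wear rate k = V / (F * s), where V is the worn volume,
# F the normal load, and s the total sliding distance.
wear_volume_mm3 = 0.85        # from 3D profilometry of the wear track (assumed)
load_N = 40.0                 # constant normal load used in the tests
speed_m_s = 0.4               # sliding speed used in the tests
duration_s = 2 * 3600         # test duration (assumed)

sliding_distance_m = speed_m_s * duration_s
k = wear_volume_mm3 / (load_N * sliding_distance_m)
print(f"specific wear rate: {k:.3e} mm^3/(N*m)")
```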

Keywords: selective laser melting, nanocomposites, injection moulding, polypropylene with fibreglass

Procedia PDF Downloads 151
1003 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic

Authors: Michael Lousis

Abstract:

The most conspicuous and significant errors made by learners over three years of testing their progress in learning Arithmetic, throughout the development of the Kassel Project in England and Greece, were systematically identified. How retentive these errors were over the three years of the officially provided school instruction in Arithmetic in these countries has also been shown. The learners' errors in Arithmetic stemmed from a sample comprising two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students' participation in each testing session of the three-year project, in both Arithmetic and Algebra simultaneously. Specific teaching practices have been devised and are presented in this study for subverting those learners' errors that were found to be retentive at the level of the nationally provided mathematical education of each country. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction, and control of the errors, on conceptual metaphor, and on an analysis that tried to identify the required cognitive components and skills of the specific tasks, in terms of psychology and cognitive science as applied to information processing. The aim of implementing these instructional practices is not only the subversion of these errors but also the achievement of mathematical competence, defined as consisting of three elements: appropriate representations, appropriate meaning, and appropriately developed schemata. However, praxis is of paramount importance, because there is no 'real truth' independent of science, and because praxis serves as quality control when it takes the form of a cognitive method.

Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors

Procedia PDF Downloads 315
1002 A 3D Cell-Based Biosensor for Real-Time and Non-Invasive Monitoring of 3D Cell Viability and Drug Screening

Authors: Yuxiang Pan, Yong Qiu, Chenlei Gu, Ping Wang

Abstract:

In the past decade, three-dimensional (3D) tumor cell models have attracted increasing interest in the field of drug screening due to their great advantages in more accurately simulating heterogeneous tumor behavior in vivo. Drug sensitivity testing based on 3D tumor cell models can provide more reliable predictions of in vivo efficacy. The gold-standard fluorescence staining can hardly achieve real-time and label-free monitoring of the viability of 3D tumor cell models. In this study, a micro-groove impedance sensor (MGIS) was specially developed for dynamic and non-invasive monitoring of 3D cell viability. 3D tumor cells were trapped in micro-grooves with opposite gold electrodes for in-situ impedance measurement. A change in the number of live cells causes an inversely proportional change in the impedance magnitude of the entire cell/Matrigel construct, reflecting the proliferation and apoptosis of the 3D cells. It was confirmed that the 3D cell viability detected by the MGIS platform is highly consistent with standard live/dead staining. Furthermore, the accuracy of the MGIS platform was demonstrated quantitatively using a 3D lung cancer model and sophisticated drug sensitivity testing. In addition, the parameters of the micro-groove impedance chip fabrication and the measurement experiments were optimized in detail. The results demonstrate that the MGIS 3D cell-based biosensor would be a promising platform for improving the efficiency and accuracy of cell-based anti-cancer drug screening in vitro.

Keywords: micro-groove impedance sensor, 3D cell-based biosensors, 3D cell viability, micro-electromechanical systems

Procedia PDF Downloads 127
1001 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria

Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi

Abstract:

The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements. However, it is crucial to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climatic Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to capture the monthly variability of precipitation and temperature and the inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER in terms of monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER exhibited a moderate agreement with gauge observations in replicating monthly precipitation. The analysis of the SPI reveals that the CRU product is superior to NASA POWER in reflecting inter-annual variations in rainfall anomalies. The findings of this study indicate that the CRU gridded product can be regarded as the more favourable gridded precipitation product for this region.
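
A common way to compute the SPI, fitting a gamma distribution month by month and transforming through the standard normal inverse CDF, is sketched below on synthetic data; implementation details (e.g., zero-rainfall handling) vary between studies.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip):
    """SPI sketch: fit a gamma distribution to the series and map
    cumulative probabilities through the standard normal inverse CDF.
    Zero-rainfall months are handled with a mixed distribution."""
    x = np.asarray(monthly_precip, dtype=float)
    nonzero = x[x > 0]
    q = (x == 0).mean()                      # probability of zero rainfall
    a, loc, scale = stats.gamma.fit(nonzero, floc=0)
    cdf = q + (1 - q) * stats.gamma.cdf(x, a, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

# Example on a synthetic 31-year January rainfall series (mm):
rng = np.random.default_rng(2)
january = rng.gamma(shape=2.0, scale=30.0, size=31)
print(spi(january).round(2))
```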

Keywords: CRU, climate change, precipitation, SPI, temperature

Procedia PDF Downloads 87
1000 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. A multiphase numerical model was applied, in which the waves and the oil phase were computed concurrently, and the accuracy of its hydraulic calculations has been proven. More than 200 scenarios of oil spills in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and 6 parameters were finally identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications, comprising height, length, and wave period, and spilled oil characteristics, including density, viscosity, and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict the dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating into the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in oil spill events in marine areas.
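
A regression of the kind described, a power-law (log-log linear) fit of dispersed oil mass on the six factors, can be sketched as follows on synthetic stand-ins for the simulation database.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200
features = np.column_stack([
    rng.uniform(0.05, 0.5, n),    # wave height (m)
    rng.uniform(1.0, 10.0, n),    # wave length (m)
    rng.uniform(0.5, 3.0, n),     # wave period (s)
    rng.uniform(850, 980, n),     # oil density (kg/m3)
    rng.uniform(5, 500, n),       # oil viscosity (cSt)
    rng.uniform(0.1, 5.0, n),     # spilled oil mass (kg)
])
# Synthetic dependent variable with a power-law structure:
dispersed = np.exp(np.log(features) @ rng.normal(size=6) * 0.3 + rng.normal(0, 0.1, n))

# Log-log linear fit: exponents of the power law and its prefactor.
model = LinearRegression().fit(np.log(features), np.log(dispersed))
print("exponents:", model.coef_.round(3), "intercept:", round(model.intercept_, 3))
```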

Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill

Procedia PDF Downloads 232
999 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm

Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar

Abstract:

The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case in this respect is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously improved in order to reduce the time required to respond to changes in a satellite's state of health. A fast assessment of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle this problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper, we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to data from an actual remote sensing spacecraft: data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions, normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between the operating conditions. Furthermore, the algorithm provides useful information for prediction, as well as adding more insight and physical interpretation to the ADCS operation.
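
A sketch of PCA-based anomaly detection on telemetry, using the reconstruction-error (Q) statistic with a threshold from nominal data, is given below; the data and threshold convention are illustrative, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit PCA on nominal telemetry, then flag samples whose reconstruction
# error (Q statistic) exceeds a threshold set from the nominal data.
rng = np.random.default_rng(8)
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 12))                    # 12 telemetry parameters
nominal = latent @ mixing + 0.1 * rng.normal(size=(1000, 12))

pca = PCA(n_components=3).fit(nominal)

def q_statistic(samples):
    reconstructed = pca.inverse_transform(pca.transform(samples))
    return np.sum((samples - reconstructed) ** 2, axis=1)

threshold = np.percentile(q_statistic(nominal), 99)  # nominal 99th percentile
faulty = nominal[:5] + rng.normal(scale=2.0, size=(5, 12))  # injected fault
print("nominal Q:", q_statistic(nominal[:5]).round(2))
print("faulty Q:", q_statistic(faulty).round(2), "threshold:", round(threshold, 2))
```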

Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations

Procedia PDF Downloads 415