Search results for: genome scale model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21157

20887 PYTHEIA: A Scale for Assessing Rehabilitation and Assistive Robotics

Authors: Yiannis Koumpouros, Effie Papageorgiou, Alexandra Karavasili, Foteini Koureta

Abstract:

The objective of the present study was to develop a scale called PYTHEIA. PYTHEIA is a self-reported measure for the assessment of rehabilitation and assistive robotics and other assistive technology devices. PYTHEIA was developed to address the absence of a valid instrument that can be used to evaluate assistive robotic devices as a whole, as well as any of their individual components or implemented functionalities. According to the results presented, PYTHEIA is a valid and reliable scale that can be applied to different target groups for the subjective evaluation of various assistive technology devices.

Keywords: rehabilitation, assistive technology, assistive robots, rehabilitation robots, scale, psychometric test, assessment, validation, user satisfaction

Procedia PDF Downloads 288
20886 Multiscale Model of Blast Explosion Human Injury Biomechanics

Authors: Raj K. Gupta, X. Gary Tan, Andrzej Przekwas

Abstract:

Bomb blasts from Improvised Explosive Devices (IEDs) account for the vast majority of terrorist attacks worldwide. Injuries caused by IEDs result from a combination of the primary blast wave, penetrating fragments, and human body accelerations and impacts. This paper presents a multiscale computational model coupling blast physics, whole-body human biodynamics and the injury biomechanics of sensitive organs. The disparity of the space- and time-scales involved is used to conduct sequential modeling of an IED explosion event, CFD simulation of blast loads on the human body and FEM modeling of body biodynamics and injury biomechanics. The paper presents simulation results for blast-induced brain injury coupling macro-scale brain biomechanics and the micro-scale response of sensitive neuro-axonal structures. Validation results on animal models and physical surrogates are discussed. Results of our model can be used to 'replicate' field blast loadings in laboratory-controlled experiments using animal models and in vitro neuro-cultures.

Keywords: blast waves, improvised explosive devices, injury biomechanics, mathematical models, traumatic brain injury

Procedia PDF Downloads 224
20885 Mycoplasmas and Pathogenesis in Preventive Medicine

Authors: Narin Salehiyan

Abstract:

The recent sequencing of the complete genomes of Mycoplasma genitalium and M. pneumoniae has attracted considerable attention to the molecular biology of mycoplasmas, the smallest self-replicating organisms. It appears that we are now much closer to the goal of defining, in molecular terms, the complete machinery of a self-replicating cell. Comparative genomics, based on comparison of the genomic makeup of mycoplasmal genomes with those of other bacteria, has opened new ways of looking at the evolutionary history of the mycoplasmas. There is now strong genetic support for the hypothesis that mycoplasmas have evolved as a branch of gram-positive bacteria by a process of reductive evolution. During this process, the mycoplasmas lost considerable portions of their ancestors' chromosomes but retained the genes essential for life. Thus, the mycoplasmal genomes carry a high percentage of conserved genes, greatly facilitating gene annotation. The significant genome compaction that occurred in mycoplasmas was made possible by adopting a parasitic mode of life. The supply of nutrients from their hosts apparently enabled mycoplasmas to lose, during evolution, the genes for many assimilative processes. During their evolution and adaptation to a parasitic mode of life, the mycoplasmas have developed various genetic systems providing a highly plastic set of variable surface proteins to evade the host immune system.

Keywords: mycoplasma, plasma, pathogen, genome

Procedia PDF Downloads 37
20884 Experimental Study on Floating Breakwater Anchored by Piles

Authors: Yessi Nirwana Kurniadi, Nira Yunita Permata

Abstract:

Coastlines are vulnerable to coastal erosion, which damages infrastructure and buildings. Floating breakwaters are applied in order to minimize material cost while still reducing wave height. In this paper, we investigated a floating breakwater anchored by piles based on an experimental study in the laboratory at a model scale of 1:8. Two types of floating model were tested with several combinations of wave height, wave period and surface water elevation to determine the transmission coefficient. The experiments showed that the floating breakwater with piles can attenuate wave heights of up to 27 cm. In the physical model, the ratio of depth to wave length is less than 0.6 and the ratio of model width to wave length is less than 0.3; when both ratios fall below these values, the transmission coefficient is 0.5. The results also showed that the first type of floating breakwater can reduce wave height by 60.4%, while the second can reduce it by up to 55.56%.
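The transmission coefficient referred to above is conventionally defined as the ratio of transmitted to incident wave height. The formulation below is the standard definition rather than a quotation from the paper, and the symbols d (water depth), B (model width) and L (wave length) are notation introduced here:

```latex
\[
K_t \;=\; \frac{H_t}{H_i},
\qquad
K_t = 0.5 \ \text{reported when}\ \frac{d}{L} < 0.6 \ \text{and}\ \frac{B}{L} < 0.3,
\]
```

where H_t and H_i are the transmitted and incident wave heights, so a lower K_t means more effective wave attenuation.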

Keywords: floating breakwater, experimental study, pile, transmission coefficient

Procedia PDF Downloads 512
20883 Factors Influencing University Student's Acceptance of New Technology

Authors: Fatma Khadra

Abstract:

The objective of this research is to assess the acceptance of new technology in a sample of 150 participants from Qatar University. Based on the Technology Acceptance Model (TAM), we used Davis's (1989) scale, which contains two item scales for Perceived Usefulness and Perceived Ease of Use. The TAM represents an important theoretical contribution toward understanding how users come to accept and use technology. This model suggests that when people are presented with a new technology, a number of variables influence their decision about how and when they will use it. The results showed that participants accept technology more readily because of its flexibility, clarity, ability to enhance the experience, enjoyment, ease of use, and usefulness. The results also showed that younger participants accept new technology more readily than others.

Keywords: new technology, perceived usefulness, perceived ease of use, technology acceptance model

Procedia PDF Downloads 292
20882 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection by integrating alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. It is believed that the computational pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. It is believed that the quality check step ensured the integrity of the sequencing data, while the trimming process is thought to have removed low-quality bases and adaptors. In the alignment step, it is believed that the reads were accurately mapped to the reference genome, and the subsequent variant calling step is believed to have identified potential genetic variants. The annotation step is believed to have provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. It is believed that the computational pipeline presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline are believed to enable customization and adaptation to various datasets and research settings. Further optimization and validation are believed to be necessary to enhance performance and applicability across diverse populations.
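As a companion to the workflow described above, the sketch below strings the named tools together with Python's subprocess module. The file names, reference build, adapter file and tool options are illustrative assumptions, as is the intermediate samtools sort/index step, which the abstract does not mention; exact flags depend on the tool versions and ANNOVAR databases installed.

```python
import subprocess

# Hypothetical inputs: paired-end FASTQ files, a reference genome (ref.fa),
# and an ANNOVAR humandb directory.  All paths and parameters are placeholders.
steps = [
    "fastqc sample_R1.fastq.gz sample_R2.fastq.gz",                        # quality check
    ("trimmomatic PE sample_R1.fastq.gz sample_R2.fastq.gz "
     "trim_R1.fq.gz unpaired_R1.fq.gz trim_R2.fq.gz unpaired_R2.fq.gz "
     "ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:36"),      # adaptor/quality trimming
    "bwa index ref.fa",
    "bwa mem ref.fa trim_R1.fq.gz trim_R2.fq.gz > aln.sam",                 # alignment
    "samtools sort -o aln.sorted.bam aln.sam && samtools index aln.sorted.bam",
    "bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -Ov -o variants.vcf",  # variant calling
    ("table_annovar.pl variants.vcf humandb/ -buildver hg38 -out annotated "
     "-protocol refGene -operation g -vcfinput"),                           # annotation
]

for cmd in steps:
    subprocess.run(cmd, shell=True, check=True)   # stop the pipeline if any step fails
```

In practice each step would be wrapped with logging and restricted to the mismatch-repair genes of interest (e.g., MLH1, MSH2, MSH6, PMS2) during downstream filtering.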

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 49
20881 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB arises when bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates extracted from the NCBI repository as training data, with samples from different countries, in order to generalize across a large group of TB isolates from different regions of the world. This helps the model learn the different behaviors of the TB bacteria and makes it robust. Three pieces of information extracted from the WGS data were considered for model training: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from these three types of information: F1 and F2 were treated as two independent datasets, and the third type of information was used as the class label for both. Five machine learning algorithms were considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, the latter being the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the Gradient Boosting algorithm, their outputs were combined into a single dataset, called the F1F2 ensemble dataset, and models were trained on this dataset with the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
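The two-stage ensemble described above can be sketched with scikit-learn. The feature matrices, label vector and hyperparameters below are placeholders, and the use of out-of-fold probabilities for the meta-features is an assumption (the abstract does not state how label leakage was avoided):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def f1f2_ensemble(X_f1, X_f2, y):
    """Stage 1: gradient boosting on F1 and F2 separately.
    Stage 2: a second learner (here a random forest) on the stacked probabilities."""
    gb_f1 = GradientBoostingClassifier(random_state=0)
    gb_f2 = GradientBoostingClassifier(random_state=0)
    # Out-of-fold predicted probabilities form the "F1F2 ensemble" dataset.
    p_f1 = cross_val_predict(gb_f1, X_f1, y, cv=5, method="predict_proba")
    p_f2 = cross_val_predict(gb_f2, X_f2, y, cv=5, method="predict_proba")
    X_ens = np.hstack([p_f1, p_f2])
    meta = RandomForestClassifier(n_estimators=500, random_state=0)
    meta.fit(X_ens, y)
    return gb_f1.fit(X_f1, y), gb_f2.fit(X_f2, y), meta
```

Swapping the meta-learner lets the same scaffold reproduce the comparison across the five algorithms mentioned in the abstract.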

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 27
20880 Genome-Wide Isoform Specific KDM5A/JARID1A/RBP2 Location Analysis Reveals Contribution of Chromatin-Interacting PHD Domain in Protein Recruitment to Binding Sites

Authors: Abul B. M. M. K. Islam, Nuria Lopez-Bigas, Elizaveta V. Benevolenskaya

Abstract:

RBP2 has been shown to be important for the control of cell differentiation through an epigenetic mechanism. The main aim of the present study is a genome-wide location analysis, by ChIP-seq, of human RBP2 isoforms that differ in a histone-binding domain. It is conceivable that the larger isoform (LI) of RBP2, which contains a specific H3K4me3-interacting domain, differs from the smaller isoform (SI) in genomic location, which may account for the observed diversity in RBP2 function. To distinguish the two RBP2 isoforms, we used the fact that the SI lacks the C-terminal PHD domain, and hence used antibodies detecting both RBP2 isoforms (AI) through a common central domain, and antibodies detecting only the LI but not the SI, through the C-terminal PHD domain. Overall, our analysis suggests that RBP2 occupies about 77 nucleotides and binds GC-rich motifs of active genes, does not bind to centromere, telomere, or enhancer regions, and its binding sites are conserved compared to random. A striking difference between the only-SI and only-LI peaks is that a large number of only-SI peaks are located in CpG islands and close to the TSS compared to only-LI peaks. Enrichment analysis of the related genes indicates that several oncogenic pathways and metabolic pathways/processes are significantly enriched among only-SI/AI targets, but not among LI/only-LI peak targets.

Keywords: bioinformatics, cancer, ChIP-seq, KDM5A

Procedia PDF Downloads 287
20879 Invention of Novel Technique of Process Scale Up by Using Solid Dosage Form

Authors: Shashank Tiwari, S. P. Mahapatra

Abstract:

The aim of this technique is to reduce the steps of process scale up and to save time and cost for the industry. The new steps are: Novel Lab Scale, Novel Lab Scale Trials, Novel Trial Batches, Novel Exhibit Batches, and Novel Validation Batches. In these steps, validation is not split into three separate sets of batches; instead, the data of the trial batches, exhibit batches and validation batches are used and compiled for production and used for validation. The technique also increases the batch size of the trial and exhibit batches. The new size of the trial batches is not less than fifty thousand, the exhibit batches increase up to two lakh, and the validation batches up to five lakh. After preparing the batches, all their data and drug product are used for stability, the validation record is maintained, and the data are compiled for technology transfer to the production department for preparing the marketed-size batches.

Keywords: batches, technique, preparation, scale up, validation

Procedia PDF Downloads 333
20878 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have been proven reliable but are demanding, costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. Fifteen samples of known concentrations were used for the modelling; they were taken in duplicate for model calibration and cross-validation, and the data were pre-processed using mean centering and variance scaling, spectrum math square root and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999 for RMSEC, RMSECV, RMSEP and cumulative R2, respectively. The R2 values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model built. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the models were quite close at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
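A stripped-down version of such a multivariate calibration can be reproduced with scikit-learn's PLS implementation. The spectra matrix, concentration vector and number of latent variables below are placeholders; this does not reproduce the commercial iC Quant workflow, only the generic PLS calibration and cross-validation idea:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def pls_calibration(X, y, n_components=5):
    """X: pre-processed spectra (samples x wavenumbers); y: known concentrations."""
    # scale=True applies mean centering and unit-variance scaling internally.
    pls = PLSRegression(n_components=n_components, scale=True)
    pls.fit(X, y)
    rmsec = np.sqrt(np.mean((pls.predict(X).ravel() - y) ** 2))   # calibration error
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()             # cross-validated predictions
    rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
    return pls, rmsec, rmsecv
```

The number of latent variables would normally be chosen by minimizing RMSECV, mirroring the calibration/cross-validation split described in the abstract.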

Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR

Procedia PDF Downloads 128
20877 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams

Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding

Abstract:

A size-dependent Euler-Bernoulli beam model, which accounts for the nonlocal stress field, the strain gradient field and a higher-order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. The analytical solutions based on different continuum theories are compared. The effect of the higher-order inertia terms is extremely significant in the high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, while for the nonlocal strain gradient theory the solutions diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain, and it cannot be neglected when the material strain length scale parameter is comparable with the beam thickness. The influence of each of the three size-effect parameters on the natural frequencies is investigated. The natural frequencies increase with increasing material strain gradient length scale parameter, or with decreasing velocity gradient length scale parameter and nonlocal parameter.
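For orientation, the one-dimensional constitutive relation commonly used in nonlocal strain gradient beam theory can be written as below, where e0a is the nonlocal parameter and l the material strain gradient length scale. This is the standard textbook form without the velocity-gradient extension considered in this paper, so it is not necessarily the authors' exact formulation:

```latex
\[
\left[1-(e_{0}a)^{2}\frac{\partial^{2}}{\partial x^{2}}\right]\sigma_{xx}
\;=\;
E\left[1-l^{2}\frac{\partial^{2}}{\partial x^{2}}\right]\varepsilon_{xx}
\]
```

Setting l = 0 recovers Eringen's nonlocal elasticity, while setting e0a = 0 recovers pure strain gradient elasticity, which is why the two length scale parameters push the natural frequencies in opposite directions.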

Keywords: Euler-Bernoulli Beams, free vibration, higher order inertia, Nonlocal Strain Gradient Theory, velocity gradient

Procedia PDF Downloads 251
20876 Genetic Identification of Crop Cultivars Using Barcode System

Authors: Kesavan Markkandan, Ha Young Park, Seung-Il Yoo, Sin-Gi Park, Junhyung Park

Abstract:

For the genetic identification of crop cultivars, insertion/deletion (InDel) markers are currently preferred because they are easy to use, PCR-based, co-dominant and relatively abundant. However, new InDels need to be developed for genetic studies of new varieties because allele frequencies at InDels differ among population groups; these new varieties have evolved with low levels of genetic diversity at specific genomic loci with high recombination rates. In this study, we describe a soybean barcode system approach based on InDel markers, each of which is specific to a variation block (VB), where the genome is split at all assumed recombination sites. Firstly, VBs in crop cultivars were mined for transferability to VB-specific InDel markers. Secondly, putative InDels in the VB regions were identified for the development of the barcode system by analyzing whole genome data of particular cultivars. Thirdly, common VB-specific InDels from all cultivars were selected by gel electrophoresis and converted into 2D barcode types by comparing amplicon polymorphisms in the five cultivars to the reference cultivar. Finally, the polymorphism of the selected markers was assessed with other cultivars, and a barcode system that allows a clear distinction among those cultivars is described. The same approach is applicable to other commercial crops. Hence, VB-based genetic identification not only minimizes the number of molecular markers required but is also useful for assessing cultivars and for marker-assisted breeding in other crop species.

Keywords: variation block, polymorphism, InDel marker, genetic identification

Procedia PDF Downloads 360
20875 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield

Authors: Sonia Barbouchi, Meriem Samcha

Abstract:

Scale deposition during water injection into the aquifers of oil reservoirs is a serious problem experienced in the oil production industry. One of the primary causes of scale formation and injection well plugging is the mixing of two incompatible waters. Considered individually, the waters may be quite stable at system conditions and present no scale problems. However, once they are mixed, reactions between ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis was carried out on scale samples collected from the blocked well. Two samples, one of formation water and one of injected water, were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography and other standard laboratory techniques. The results of the complete water analyses were the input parameters used to determine the scaling tendency. Saturation indices for CaCO3, CaSO4, BaSO4 and SrSO4 scales were calculated for water mixtures at different mixing shares, under various temperature conditions, using a computerized scale prediction model. The compatibility study results showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed the compatibility study results, since it proved that the analysed deposits consisted predominantly of barite with minor galena. Under the studied temperature conditions, the tendency for barite scale increases significantly with the increase of the fresh water share in the mixture. The future scale inhibition and removal strategies to be implemented in the concerned oilfield are being derived in large part from the results of the present study.
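The scaling tendency referred to above is usually expressed as a saturation index, SI = log10(IAP/Ksp), with SI > 0 indicating a tendency to precipitate. The short sketch below computes SI for barite along an idealized mixing line; the end-member activities, the linear mixing rule, and the neglect of activity-coefficient and temperature corrections are illustrative assumptions, not values from this study:

```python
import numpy as np

K_SP_BARITE = 10 ** -9.97   # approximate BaSO4 solubility product at 25 deg C

def saturation_index(a_ba, a_so4, ksp=K_SP_BARITE):
    """SI = log10(IAP / Ksp) for barite; SI > 0 means supersaturated."""
    return np.log10((a_ba * a_so4) / ksp)

# Hypothetical end-members: a Ba-rich, sulfate-poor formation water mixed with a
# sulfate-bearing fresh water.  Activities are blended linearly with fresh-water share.
formation = {"Ba": 2e-4, "SO4": 1e-7}
fresh     = {"Ba": 1e-8, "SO4": 5e-4}
for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    a_ba  = (1 - share) * formation["Ba"]  + share * fresh["Ba"]
    a_so4 = (1 - share) * formation["SO4"] + share * fresh["SO4"]
    print(f"fresh-water share {share:.2f}: SI_barite = {saturation_index(a_ba, a_so4):.2f}")
```

Because barium comes almost entirely from one water and sulfate from the other, the ion activity product, and hence SI, peaks at intermediate mixing shares, which is the mechanism behind the incompatibility described in the abstract.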

Keywords: compatibility study, produced water, scaling, water injection

Procedia PDF Downloads 144
20874 Scale, Technique and Composition Effects of CO2 Emissions under Trade Liberalization of EGS: A CGE Evaluation for Argentina

Authors: M. Priscila Ramos, Omar O. Chisari, Juan Pablo Vila Martínez

Abstract:

The current literature on trade liberalization of environmental goods and services (EGS) raises doubts about the extent of the triple win-win situation for trade, development and the environment. However, much of this literature does not consider the possibility that such an agreement carries technological transmission, either through trade or through foreign direct investment. This paper presents a computable general equilibrium model calibrated for Argentina, in which there are alternative technologies (one dirty and one clean in terms of carbon emissions) to produce the same goods. In this context, the trade liberalization of EGS increases GDP and trade, reduces unemployment and improves household welfare. However, capital mobility appears to be the key assumption for jointly reaching the environmental target, when the positive scale effect generated by the increase in trade is offset by the change in the composition of production (composition and technique effects through the use of the clean alternative technology) and of consumption (composition effect through substitution of relatively less-polluting imported goods).
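The scale, composition and technique effects invoked in the title follow the standard emissions accounting identity, reproduced below in generic form (the textbook decomposition, not the paper's specific CGE equations): total emissions equal the scale of output times the sum over sectors of output shares times sectoral emission intensities, so emission changes split into scale, composition and technique terms.

```latex
\[
E = Y \sum_{i} \theta_i e_i,
\qquad
\frac{dE}{E} \approx
\underbrace{\frac{dY}{Y}}_{\text{scale}}
+ \underbrace{\sum_i w_i \frac{d\theta_i}{\theta_i}}_{\text{composition}}
+ \underbrace{\sum_i w_i \frac{de_i}{e_i}}_{\text{technique}},
\qquad
w_i = \frac{\theta_i e_i}{\sum_j \theta_j e_j},
\]
```

where Y is aggregate output, theta_i the output share of sector i and e_i its emission intensity; the clean alternative technology enters through lower e_i, i.e., the technique term.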

Keywords: CGE modeling, CO2 emissions, composition effect, scale effect, technique effect, trade liberalization of EGS

Procedia PDF Downloads 356
20873 Influential Factors of Employees’ Work Motivation: Case Study of Siam Thai Co., Ltd

Authors: Pitsanu Poonpetpun, Witthaya Mekhum, Warangkana Kongsil

Abstract:

This research studied the work motivation of employees in Siam Thai Co., Ltd. The study took place in Rayong with 59 employees as participants. The research tool was a questionnaire consisting of sets of questions about the company's policy, management, executives and good relationships within the firm. The questionnaire used a 5-point rating scale. The questionnaires were analyzed by percentage, frequency, mean and standard deviation. The results showed that policy and management were rated at a moderate level, executives and managers were rated at a moderate level, and relationships within the firm were rated at a high level.

Keywords: motivation, job, performance, employees

Procedia PDF Downloads 241
20872 Modeling Stream Flow with Prediction Uncertainty by Using SWAT Hydrologic and RBNN Neural Network Models for Agricultural Watershed in India

Authors: Ajai Singh

Abstract:

Simulation of hydrological processes at the watershed outlet through a modelling approach is essential for the proper planning and implementation of appropriate soil conservation measures in the Damodar Barakar catchment, Hazaribagh, India, where soil erosion is a dominant problem. This study quantifies the parametric uncertainty involved in simulating stream flow using the Soil and Water Assessment Tool (SWAT), a watershed-scale model, and a Radial Basis Neural Network (RBNN), an artificial neural network model. Both models were calibrated and validated against measured stream flow, and the uncertainty in the SWAT model output was quantified using the Sequential Uncertainty Fitting Algorithm (SUFI-2). Though both models predicted satisfactorily, the RBNN model performed better than SWAT, with R2 and NSE values of 0.92 and 0.92 during training and 0.71 and 0.70 during the validation period, respectively. Comparison of the results of the two models also indicates a wider prediction interval for the SWAT model. The P-factor values show that the percentage of observed stream flow values bracketed by the 95PPU is higher for the RBNN model (91%) than for SWAT (87%). In other words, the RBNN model estimates the stream flow values more accurately and with less uncertainty. It can be stated that the RBNN model, based on simple input, could be used for estimating monthly stream flow, filling missing data, and testing the accuracy and performance of other models.
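The goodness-of-fit and uncertainty measures quoted above can be computed from paired observed and simulated series with a few lines of NumPy. The function names and the 95PPU bounds below are placeholders, and the definitions are the standard ones (NSE, coefficient of determination, and the SUFI-2 P-factor):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Square of the Pearson correlation between observed and simulated flows."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

def p_factor(obs, lower_95ppu, upper_95ppu):
    """Fraction of observations bracketed by the 95% prediction uncertainty band."""
    obs = np.asarray(obs, float)
    return np.mean((obs >= np.asarray(lower_95ppu)) & (obs <= np.asarray(upper_95ppu)))
```

With these definitions, the reported P-factors of 0.91 (RBNN) and 0.87 (SWAT) simply count how many monthly observations fall inside each model's uncertainty band.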

Keywords: SWAT, RBNN, SUFI 2, bootstrap technique, stream flow, simulation

Procedia PDF Downloads 332
20871 Phosphate Use Efficiency in Plants: A GWAS Approach to Identify the Pathways Involved

Authors: Azizah M. Nahari, Peter Doerner

Abstract:

Phosphate (Pi) is one of the essential macronutrients in plant growth and development, and it plays a central role in metabolic processes in plants, particularly photosynthesis and respiration. Limitation of crop productivity by Pi is widespread and is likely to increase in the future. Application of Pi fertilizers has improved soil Pi fertility and crop production; however, it has also caused environmental damage. Therefore, in order to reduce dependence on unsustainable Pi fertilizers, a better understanding of phosphate use efficiency (PUE) is required for engineering nutrient-efficient crop plants. Enhanced Pi efficiency can be achieved through improved productivity per unit of Pi taken up. We aim to identify, by using association mapping, general features of the most important loci that contribute to increased PUE, to allow us to delineate the physiological pathways involved in defining this trait in the model plant Arabidopsis. As PUE is in part determined by the efficiency of uptake, we designed a hydroponic system to avoid confounding effects due to differences in root system architecture leading to differences in Pi uptake. In this system, 18 parental lines and 217 lines of the MAGIC population (a Multiparent Advanced Generation Inter-Cross) were grown under high and low Pi availability conditions. The results revealed a large variation in PUE among the parental lines, indicating that the MAGIC population is well suited to identifying PUE loci and pathways. Two of the 18 parental lines had the highest PUE under low Pi, while some lines responded strongly and increased PUE with increased Pi. Across the 217 MAGIC lines, considerable variance in PUE was found. A general feature was the trend of most lines to exhibit higher PUE when grown under low Pi conditions. Association mapping is currently in progress, but initial observations indicate that a wide variety of physiological processes influence PUE in Arabidopsis. The combination of hydroponic growth methods and genome-wide association mapping is a powerful tool for identifying the physiological pathways underpinning complex quantitative traits in plants.

Keywords: hydroponic system growth, phosphate use efficiency (PUE), Genome-wide association mapping, MAGIC population

Procedia PDF Downloads 299
20870 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
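For readers unfamiliar with the quantities discussed above, the SFS stress and the direct-deconvolution idea take the following generic form (standard definitions; the authors' exact discrete formulation may differ). For an invertible filter G, the unfiltered velocity is approximated by applying the inverse filter to the resolved field, and the result is substituted into the definition of the SFS stress:

```latex
\[
\tau_{ij} \;=\; \widetilde{u_i u_j} \;-\; \tilde{u}_i \tilde{u}_j ,
\qquad
u_i^{*} \;=\; G^{-1} * \tilde{u}_i
\;\;\Longrightarrow\;\;
\tau_{ij} \;\approx\; \widetilde{u_i^{*} u_j^{*}} \;-\; \tilde{u}_i \tilde{u}_j ,
\]
```

where the tilde denotes filtering; for an invertible filter the inverse is applied in spectral space by dividing the resolved Fourier coefficients by the filter transfer function, which is why the quality of the reconstruction depends so strongly on the filter shape and on the filter-to-grid ratio.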

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 52
20869 The Effect of Action Potential Duration and Conduction Velocity on Cardiac Pumping Efficacy: Simulation Study

Authors: Ana Rahma Yuniarti, Ki Moo Lim

Abstract:

Slowed myocardial conduction velocity (CV) and shortened action potential duration (APD), whatever their cause, are associated with an increased risk of re-entrant excitation, predisposing to cardiac arrhythmia. This is because both CV reduction and APD shortening shorten the excitation wavelength. In this study, we investigated quantitatively the cardiac mechanical responses under various CV and APD values using a multi-scale computational model of the heart. The model consists of an electrical model coupled with a mechanical contraction model, together with a lumped model of the circulatory system. The electrical model consists of a tetrahedral mesh of 149,344 nodes and 183,993 elements, whereas the mechanical model consists of a hexahedral mesh of 356 nodes and 172 elements with a Hermite basis. We performed the electrical simulation under two scenarios: 1) varying the CV values with constant APD and 2) varying the APD values with constant CV. We then compared the electrical and mechanical responses for both scenarios. Our simulations showed that faster CV and longer APD induced the largest resultant wavelength and produced better cardiac pumping efficacy by increasing cardiac output while consuming less energy. This is because the longer wavelength propagation and faster conduction generated a more synchronous contraction of the whole ventricle.
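The wavelength argument in the opening sentences follows directly from the standard relation below; the numerical values are purely illustrative and are not taken from the study:

```latex
\[
\lambda \;=\; \mathrm{CV} \times \mathrm{APD},
\qquad\text{e.g. } \lambda = 60\ \mathrm{cm/s} \times 0.3\ \mathrm{s} = 18\ \mathrm{cm},
\]
```

so either a lower CV or a shorter APD reduces the wavelength and leaves more room for re-entrant circuits within the ventricular mass.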

Keywords: conduction velocity, action potential duration, mechanical contraction model, circulatory model

Procedia PDF Downloads 182
20868 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet

Abstract:

The current practice during the repowering phase of wind turbines is to deconstruct the existing foundations and construct new foundations, either to accept larger wind loads or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d’Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs to allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are currently being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation's early age to the stresses during service. The results from the instrumentation form the basis of validation for both the laboratory and numerical work conducted throughout the project. The study currently focuses on the effect of coupled mechanisms (Thermal-Hydro-Mechanical-Chemical) that induce stress during the early age of the reinforced concrete foundation, and on scale-factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using THMC 3D models in the COMSOL Multiphysics software, the numerical analysis performed on both the laboratory-scale and the full-scale foundations simulates the thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep, so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early age stresses in comparison to the full-scale model, depending on the spatial position in the foundation. In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting the ultimate deformations of the on-site foundation using laboratory-scale models.

Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines

Procedia PDF Downloads 153
20867 Research Design for Developing and Validating Ice-Hockey Team Diagnostics Scale

Authors: Gergely Geczi

Abstract:

In the modern world, ice hockey (and, in a broader sense, team sports) is becoming an increasingly popular field of entertainment. Although the main element is most likely perceived to be the show itself, winning is an inevitable part of the successful operation of any sports team. In this paper, the author creates a research design allowing him to develop and validate an ice-hockey team-focused diagnostics scale, which enables researchers and practitioners to identify the problems associated with underperforming teams. The construction of the scale starts with personal interviews with experts in the field, carefully chosen from the Hungarian ice hockey sector. Based on the interviews, the author is in a position to create the categories and the relevant items for the scale. Once constructed, the next step is the validation process on a Hungarian sample. Data for validation are acquired through the licensed database of the Hungarian Ice Hockey Federation, involving Hungarian ice hockey coaches and players. The Ice-Hockey Team Diagnostics Scale is intended to orient practitioners in understanding both effective and underperforming teamwork.

Keywords: diagnostics scale, effective versus underperforming team work, ice-hockey, research design

Procedia PDF Downloads 111
20866 Optimization of Extraction Conditions and Characteristics of Scale Collagen from Sardine: Sardina pilchardus

Authors: F. Bellali, M. Kharroubi, M. Loutfi, N.Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Scales from Sardina pilchardus resulting from the transformation operations have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, the upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source from which to isolate collagen therefore have a wide range of applications in the food, cosmetic and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of the sardine Sardina pilchardus. An experimental design methodology was adopted in the collagen processing for extraction optimization. The first stage of this work is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA. The last part is to establish the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent variables and the dependent variables.
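The model equations mentioned in the last sentence are typically fitted in RSM as second-order polynomials of the coded factors; a generic form (standard RSM practice, not the specific fitted model of this study) is:

```latex
\[
y \;=\; \beta_0 \;+\; \sum_{i=1}^{k}\beta_i x_i \;+\; \sum_{i=1}^{k}\beta_{ii} x_i^{2}
\;+\; \sum_{i<j}\beta_{ij} x_i x_j \;+\; \varepsilon ,
\]
```

where y is the response (for example, collagen or protein removal yield), the x_i are the process factors such as reagent concentration, time and temperature, and the beta coefficients are estimated by least squares from the experimental design before locating the optimum on the fitted surface.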

Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology

Procedia PDF Downloads 393
20865 Self-Disclosure and Suicide

Authors: Netta Horesh Reinman

Abstract:

The inability to communicate feelings and thoughts to people close to oneself may be an important risk factor for suicidal behavior. This inability has been operationalized in the concept of “self-disclosure.” The purpose of this paper was to evaluate the correlation of self-disclosure with suicidal behavior in adolescents. Eighty consecutive admissions to an adolescent psychiatric inpatient unit were evaluated. Thirty-four were suicide attempters, 18 were suicidal ideators, and 18 were non-suicidal. Assessment measures included the Child Suicide Potential Scale, the Suicide Intent Scale, the Suicide Ideation Scale, and the Self-Disclosure Scale. The results show that low self-disclosure levels are associated with suicidal thinking, suicide attempts and suicidal attitudes. Thus, low self-disclosure may well be a risk factor worthy of further evaluation in the attempt to understand adolescent suicidal behavior.

Keywords: self disclosure, suicide, adolescents, treatment

Procedia PDF Downloads 99
20864 Development and Validation of Employee Trust Scale: Factor Structure, Reliability and Validity

Authors: Chua Bee Seok, Getrude Cosmas, Jasmine Adela Mutang, Shazia Iqbal Hashmi

Abstract:

The aims of this study were to determine the factor structure and psychometric properties (i.e., reliability and convergent validity) of the Employee Trust Scale, an instrument newly created by the researchers. The Employee Trust Scale initially contained 82 items measuring employees' trust toward their supervisors. A sample of 818 employees (343 females, 449 males) was selected randomly from public and private sector organizations in Kota Kinabalu, Sabah, Malaysia. Their ages ranged from 19 to 67 years, with a mean of 34.55 years. Their average tenure with their current employer was 11.2 years (s.d. = 7.5 years). The respondents were asked to complete the Employee Trust Scale, as well as the managerial trust questionnaire from Mishra. The exploratory factor analysis of employees' trust toward their supervisors extracted three factors, labeled 'trustworthiness' (32 items), 'position status' (11 items) and 'relationship' (6 items), which together accounted for 62.49% of the total variance. The trustworthiness factor was re-categorized into three sub-factors: competency (11 items), benevolence (8 items) and integrity (13 items). All factors and sub-factors of the scale demonstrated clear reliability, with internal consistency (Cronbach's alpha) above 0.85. The convergent validity of the scale was supported by an expected pattern of correlations (positive and significant) between the scores on all factors and sub-factors of the scale and the score on the managerial trust questionnaire, which measures the same construct. The convergent validity of the Employee Trust Scale was further supported by the significant and positive inter-correlations between the factors and sub-factors of the scale. The results suggest that the Employee Trust Scale is a reliable and valid measure. However, further studies need to be carried out on other sample groups to further validate the scale.
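The internal-consistency figure reported above is Cronbach's alpha, whose standard definition for a scale of k items is given below (a textbook formula, included only for reference):

```latex
\[
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]
```

where sigma^2_{Y_i} is the variance of item i and sigma^2_X the variance of the total scale score; values above roughly 0.8, such as the 0.85 reported here, are conventionally read as good internal consistency.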

Keywords: employees trust scale, psychometric properties, trustworthiness, position status, relationship

Procedia PDF Downloads 438
20863 Modelling of Cavity Growth in Underground Coal Gasification

Authors: Preeti Aghalayam, Jay Shah

Abstract:

Underground coal gasification (UCG) is the in-situ gasification of unmineable coals to produce syngas. In UCG, gasifying agents are injected into the coal seam, and a reactive cavity is formed due to coal consumption. The cavity formed is typically hemispherical, and this paper presents a MATLAB model of the UCG cavity to predict the composition of the output gases. There are seven radial and two time-variant ODEs. A MATLAB solver (ode15s) is used to solve the radial ODEs of the governing equations. Two for-loops are implemented in the model, one for the time variation and another for the radial variation. In the time loop, the radial ODEs are solved using the MATLAB solver. The radial loop is nested inside the time loop, and the density ODEs are solved numerically using the Euler method. The model is validated by comparing it with literature results from laboratory-scale experiments. The model predicts the radial and time variation of the product gases inside the cavity.
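The nested solution scheme described above can be sketched in Python with SciPy standing in for MATLAB: a stiff solver (BDF plays the role of ode15s) integrates the radial ODEs at each time step, and the density ODEs are advanced with an explicit Euler step. The right-hand-side functions, state dimensions, initial densities and step sizes are placeholders, not the actual species and energy balances of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def radial_rhs(r, y, rho):
    # Placeholder for the seven radial species/energy balances, d(y)/dr.
    return np.zeros(7)

def density_rhs(t, rho, radial_solution):
    # Placeholder for the two time-variant density ODEs, d(rho)/dt.
    return np.zeros(2)

def run_cavity_model(t_end=1.0e4, dt=100.0, r_span=(1.0e-3, 1.0)):
    rho = np.array([1300.0, 400.0])     # hypothetical initial densities, kg/m3
    y0 = np.zeros(7)                    # boundary values of the radial states at r_min
    sol = None
    for t in np.arange(0.0, t_end, dt):             # outer time loop
        # Stiff radial integration at the current densities.
        sol = solve_ivp(radial_rhs, r_span, y0, args=(rho,), method="BDF")
        rho = rho + dt * density_rhs(t, rho, sol)   # explicit (forward) Euler update
    return sol, rho
```

The structure mirrors the abstract's description: the radial profile is recomputed at every time step while the slowly varying densities carry the history of coal consumption forward in time.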

Keywords: gasification agent, MATLAB model, syngas, underground coal gasification (UCG)

Procedia PDF Downloads 175
20862 Low Back Pain and Patients Lifting Behaviors among Nurses Working in Al Sadairy Hospital, Aljouf

Authors: Fatma Abdel Moneim Al Tawil

Abstract:

Low back pain (LBP) among nurses has been the subject of research studies worldwide. However, evidence of the influence of patient lifting behaviors on LBP among nurses in Saudi Arabia remains scarce. The purpose of this study was to investigate the relationship between LBP and nurses' lifting behaviors. An LBP questionnaire was distributed to 100 nurses working in Alsadairy Hospital, distributed as follows: Emergency Unit (9), Coronary Care Unit (9), Intensive Care Unit (7), Dialysis Unit (30), Burn Unit (5), Surgical Unit (11), Medical Unit (14) and X-ray Unit (15). The questionnaire included demographic data, an attitude scale, a teamwork scale, back pain history and a knowledge scale. In the Emergency Unit, there is a positive significant relationship between the teamwork scale and knowledge, with r = 0.807 and p = 0.05. In the ICU, there is a positive significant relationship between the teamwork scale and the attitude scale, with r = 0.781 and p = 0.05. In the Dialysis Unit, there is a positive significant relationship between the attitude scale and the teamwork scale, with r = 0.443 and p = 0.05. The findings suggest that enhanced awareness of occupational safety with safe patient handling practices must be emphasized among nursing students and integrated into their educational curriculum. Moreover, back pain prevention programs should incorporate the promotion of an active lifestyle and fitness training, and the implementation of institutional patient handling policies.

Keywords: low back pain, lifting behaviors, nurses, team work

Procedia PDF Downloads 414
20861 Determination of Johnson-Cook Material and Failure Model Constants for High Tensile Strength Tendon Steel in Post-Tensioned Concrete Members

Authors: I. Gkolfinopoulos, N. Chijiwa

Abstract:

To evaluate the remaining capacity of post-tensioned concrete members, it is important to accurately estimate damage in precast concrete tendons. In this research, the Johnson-Cook material model constants and damage parameters of a high-strength tendon steel were calculated from static and dynamic uniaxial tensile tests. Replication of the experimental results was achieved through finite element analysis of both a single 8-noded three-dimensional element and the full-scale dog-bone-shaped specimen, and the relevant model parameters are proposed. Finally, the simulation results in terms of strain and deformation were verified using digital image correlation analysis.
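For reference, the Johnson-Cook flow stress and failure strain take the standard forms below (the general model, not the specific constants identified in this work):

```latex
\[
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln \dot{\varepsilon}^{*}\right)
         \left(1 - T^{*\,m}\right),
\qquad
\varepsilon_f = \left[D_1 + D_2 \exp\!\left(D_3 \sigma^{*}\right)\right]
                \left[1 + D_4 \ln \dot{\varepsilon}^{*}\right]
                \left[1 + D_5 T^{*}\right],
\]
```

where epsilon_p is the equivalent plastic strain, the starred strain rate and temperature are the dimensionless (reference-normalized) quantities, sigma* is the stress triaxiality, and failure is predicted when the accumulated damage D = sum of (delta epsilon_p / epsilon_f) reaches unity; the static and dynamic tensile tests supply the data needed to fit A, B, n, C, m and D1 through D5.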

Keywords: DIC analysis, Johnson-Cook, quasi-static, dynamic, rupture, tendon

Procedia PDF Downloads 116
20860 1-g Shake Table Tests to Study the Impact of PGA on Foundation Settlement in Liquefiable Soil

Authors: Md. Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed

Abstract:

Liquefaction-induced ground settlement has caused severe damage to structures in past decades, and the amount of building settlement caused by liquefaction is directly proportional to the intensity of the ground shaking. To reduce this soil liquefaction effect, it is essential to examine the influence of peak ground acceleration (PGA). Unfortunately, limited studies have been carried out on this issue. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno to evaluate the influence of PGA, for the same shaking duration, on liquefiable soil layers. The model is based on a large-scale shake table test conducted at the University of California, San Diego, with a scaling factor of N = 5. The model ground has three soil layers, with relative densities of 50% for the crust, 30% for the liquefiable layer, and 90% for the dense layer. In addition, a shallow foundation is seated on the unsaturated crust layer. After preparing the model, input motions with various peak ground accelerations (0.16g, 0.25g, and 0.37g) and the same duration (10 s) were applied. Based on the experimental results, when the PGA increased from 0.16g to 0.37g, the foundation settlement increased from 20 mm to 100 mm. In addition, the foundation settlement expected from the scaling factor was 25 mm, while the actual settlement for a PGA of 0.25g over 10 seconds was 50 mm.
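One simple reading of the 25 mm expectation quoted above is pure geometric similitude, in which lengths (and hence settlements) in the 1-g model scale by the factor N; the study may well have used a more elaborate similitude law, so the relation below is an interpretation rather than a statement from the abstract:

```latex
\[
s_{\text{model}} \;=\; \frac{s_{\text{prototype}}}{N},
\qquad
25\ \mathrm{mm} \;=\; \frac{\approx 125\ \mathrm{mm}}{5},
\]
```

whereas the measured model settlement at a PGA of 0.25g was 50 mm, about twice the value implied by this simple scaling.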

Keywords: foundation settlement, liquefaction, peak ground acceleration, shake table test

Procedia PDF Downloads 60
20859 Life Cycle Assessment of Biogas Energy Production from a Small-Scale Wastewater Treatment Plant in Central Mexico

Authors: Joel Bonales, Venecia Solorzano, Carlos Garcia

Abstract:

A great percentage of the wastewater generated in developing countries does not receive any treatment, which leads to numerous environmental impacts. In response to this, a paradigm change has been proposed, from the current wastewater treatment model based on large-scale plants towards a model based on small and medium scales. Nevertheless, small-scale wastewater treatment plants (SS-WWTP) with novel technologies such as anaerobic digesters, as well as the utilization of derived co-products such as biogas, still present diverse environmental impacts which must be assessed. This study consisted of a Life Cycle Assessment (LCA) performed on an SS-WWTP which treats wastewater from a small commercial block in the city of Morelia, Mexico. The treatment performed in the SS-WWTP consists of anaerobic and aerobic digesters with a daily capacity of 5,040 L. Two different scenarios were analyzed: the current plant conditions and a hypothetical in situ energy use of the biogas obtained. Furthermore, two different allocation criteria were applied: full impact allocation to the system's main product (treated water), and substitution credits for replacing Mexican grid electricity (biogas) and clean water pumping (treated water). The results showed that the analyzed plant had larger impacts than those reported in the literature on the basis of wastewater volume treated, which may imply that this plant is currently operating inefficiently. The evaluated impacts appeared to be concentrated in the aerobic digestion and electricity generation phases due to the plant's particular configuration. Additional findings show that the allocation criterion applied is crucial for the interpretation of impacts and that the energy use of the biogas obtained in this plant can help mitigate the associated climate change impacts. It is concluded that SS-WWTP is an environmentally sound alternative for wastewater treatment from a systemic perspective. However, this type of study must be careful in the selection of the allocation criteria and replaced products, since these factors have a great influence on the results of the assessment.

Keywords: biogas, life cycle assessment, small scale treatment, wastewater treatment

Procedia PDF Downloads 102
20858 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction

Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Sch¨attler Ulrich, Musa Semujju

Abstract:

Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A number of NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model, and compares their ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to predict largely no rain, which explained its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. All the models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
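The two verification scores used above are defined in the usual way for n paired forecasts P_i and observations O_i (standard definitions, stated here for completeness); a negative ME therefore indicates systematic under-prediction, consistent with the values reported for the extreme rainfall events:

```latex
\[
\mathrm{RMSE} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(P_i - O_i\right)^{2}},
\qquad
\mathrm{ME} \;=\; \frac{1}{n}\sum_{i=1}^{n}\left(P_i - O_i\right).
\]
```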

Keywords: comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events

Procedia PDF Downloads 241