Search results for: displacement prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3080

290 Self-rated Health as a Predictor of Hospitalizations in Patients with Bipolar Disorder and Major Depression: A Prospective Cohort Study of the United Kingdom Biobank

Authors: Haoyu Zhao, Qianshu Ma, Min Xie, Yunqi Huang, Yunjia Liu, Huan Song, Hongsheng Gui, Mingli Li, Qiang Wang

Abstract:

Rationale: Bipolar disorder (BD) and major depressive disorder (MDD), severe chronic illnesses that restrict patients’ psychosocial functioning and reduce their quality of life, are both categorized as mood disorders. Emerging evidence suggests that self-rated health (SRH) is a well-validated measure and that it can predict a range of health outcomes, including mortality and health care costs. Compared with lengthier multi-item patient-reported outcome (PRO) measures, SRH has shown comparable ability to predict mortality and healthcare utilization. However, to our knowledge, no study has assessed the association between SRH and hospitalization among people with mental disorders. Therefore, our study aims to determine the association between SRH and subsequent all-cause hospitalizations in patients with BD and MDD. Methods: We conducted a prospective cohort study of people with BD or MDD in the UK from 2006 to 2010 using UK Biobank touchscreen questionnaire data and linked administrative health databases. The association between SRH and 2-year all-cause hospitalizations was assessed using proportional hazards regression after adjustment for sociodemographics, lifestyle behaviors, previous hospitalization use, the Elixhauser comorbidity index, and environmental factors. Results: A total of 29,966 participants were identified, experiencing 10,279 hospitalization events. Among the cohort, the average age was 55.88 (SD 8.01) years, 64.02% were female, and 3,029 (10.11%), 15,972 (53.30%), 8,313 (27.74%), and 2,652 (8.85%) reported excellent, good, fair, and poor SRH, respectively. Among patients reporting poor SRH, 54.19% had a hospitalization event within 2 years, compared with 22.65% of those reporting excellent SRH.
In the adjusted analysis, patients with good, fair, and poor SRH had 1.31 (95% CI 1.21-1.42), 1.82 (95% CI 1.68-1.98), and 2.45 (95% CI 2.22-2.70) times higher hazards of hospitalization, respectively, than those with excellent SRH. Conclusion: SRH was independently associated with subsequent all-cause hospitalizations in patients with BD or MDD. This large study facilitates rapid interpretation of SRH values and underscores the need for proactive SRH screening in this population, which might inform resource allocation and enhance detection of high-risk populations.
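As a rough, self-contained illustration of how such risk gradients can be summarized, the sketch below computes a crude incidence-rate ratio with an approximate 95% CI from synthetic counts. The numbers are invented for illustration and are not the study's data, and a crude rate ratio does not reproduce the study's adjusted proportional hazards model.

```python
import math

# Hypothetical counts loosely echoing the SRH groups in the abstract
# (synthetic numbers for illustration only, not the study's data).
groups = {
    "excellent": {"events": 686, "person_years": 5900},
    "poor":      {"events": 1437, "person_years": 4400},
}

def rate(g):
    """Crude incidence rate: events per person-year."""
    return g["events"] / g["person_years"]

# Crude rate ratio as a rough analogue of an unadjusted hazard ratio
rr = rate(groups["poor"]) / rate(groups["excellent"])

# Approximate 95% CI on the log scale: SE(log RR) ~ sqrt(1/e1 + 1/e0)
se = math.sqrt(1 / groups["poor"]["events"] + 1 / groups["excellent"]["events"])
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

The log-scale CI construction is the standard large-sample approximation for a ratio of Poisson rates.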

Keywords: severe mental illnesses, hospitalization, risk prediction, patient-reported outcomes

Procedia PDF Downloads 137
289 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the large number of parameters and large datasets needed in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, including initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from commercial modelling software, MIKE URBAN, were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments.
The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for runoff flow prediction, so the associated uncertainty in predictions can be obtained; MIKE URBAN, in contrast, provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
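A minimal sketch of rejection ABC, the core idea behind the framework: sample parameters from a prior, run the simulator, and keep draws whose summary statistic falls within a tolerance of the observed one. The toy Gaussian "model" here is a stand-in for illustration, not the time-area hydrologic model.

```python
import random

random.seed(42)

def simulate(theta, n=50):
    """Toy simulator standing in for a hydrologic model: draws around theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """Summary statistic used to compare simulated and observed data."""
    return sum(data) / len(data)

observed = simulate(5.0)            # pretend field observations (true theta = 5)
obs_stat = summary(observed)

# Rejection ABC: sample theta from a flat prior, keep values whose
# simulated summary lies within a tolerance (eps) of the observed one.
posterior = []
for _ in range(5000):
    theta = random.uniform(0.0, 10.0)          # prior draw
    if abs(summary(simulate(theta)) - obs_stat) < 0.2:   # eps = 0.2
        posterior.append(theta)

post_mean = sum(posterior) / len(posterior)
print(round(post_mean, 2), len(posterior))
```

Shrinking the tolerance tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is the central trade-off in rejection ABC.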

Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 278
288 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Second, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models whose geological trend is similar to that of the true reservoir model. The average field of the selected models is used as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results; it fails to identify the correct geological features of the true model. In contrast, history matching with the regenerated ensemble offers reliable characterization by capturing the proper channel trend. Furthermore, it gives dependable prediction of future performance with reduced uncertainties.
In summary, we propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that share the reference model's channel trend in the lower-dimensional space.
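The PCA step of the scheme can be sketched with plain NumPy: stack the permeability fields as rows, mean-centre, take the SVD, and project each model onto the leading principal components. The synthetic data below stand in for the channel-reservoir models; the number of cells and components are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for permeability fields: 100 "models", 400 cells each,
# generated from a handful of latent geological patterns plus noise.
latent = rng.normal(size=(100, 3))
patterns = rng.normal(size=(3, 400))
perm = latent @ patterns + 0.1 * rng.normal(size=(100, 400))

# PCA via SVD on mean-centred data: principal components are the right
# singular vectors; projections onto them are the new low-dim parameters.
X = perm - perm.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()   # variance ratio per component

k = 3
scores = X @ Vt[:k].T                    # each model reduced to k parameters

print(scores.shape, round(float(explained[:k].sum()), 3))
```

The `scores` matrix is what a downstream MDS projection and SVM classifier would operate on in the proposed workflow.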

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 135
287 Synthesis of a High-Pressure Performance Adsorbent from Coconut Shells and Polyetheretherketone for Methane Adsorption

Authors: Umar Hayatu Sidik

Abstract:

The use of liquid petroleum-based fuels (petrol and diesel) for transportation causes emissions of greenhouse gases (GHGs), whereas natural gas (NG) reduces them. At present, compression and liquefaction are the most mature technologies for transportation use; compression requires high pressure (200–300 bar), while liquefaction is impractical. Adsorbed natural gas (ANG) can store nearly as much gas as compressed natural gas (CNG) at a relatively low pressure of 30-40 bar. In this study, adsorbents for high-pressure adsorption of methane (CH₄) were prepared from coconut shells and polyetheretherketone (PEEK) using potassium hydroxide (KOH) and microwave-assisted activation. Design Expert software version 7.1.6 was used for optimization and prediction of the adsorbent preparation conditions for CH₄ adsorption. The effects of microwave power, activation time and quantity of PEEK on adsorbent performance toward CH₄ adsorption were investigated. The adsorbents were characterized by Fourier transform infrared spectroscopy (FTIR), thermogravimetric (TG) and derivative thermogravimetric (DTG) analysis, and scanning electron microscopy (SEM). The CH₄ adsorption capacities of the adsorbents were determined using the volumetric method at pressures of 5, 17, and 35 bar at both ambient temperature and 5 °C. Isotherm and kinetics models were used to validate the experimental results. The optimum preparation conditions were found to be 15 wt% PEEK, 3 minutes activation time and 300 W microwave power. The highest CH₄ uptake of 9.7045 mmol CH₄ adsorbed/g adsorbent was recorded by M33P15 (300 W microwave power, 3 min activation time and 15 wt% PEEK) among the sorbents at ambient temperature and 35 bar. The CH₄ equilibrium data are well correlated with the Sips, Toth, Freundlich and Langmuir models.
Isotherm fitting revealed that the Sips model gave the best fit, while the kinetics studies showed that the pseudo-second-order model best describes the adsorption process. In all scenarios studied, a decrease in temperature led to an increase in adsorption. The adsorbent (M33P15) maintained its stability even after seven adsorption/desorption cycles. The findings reveal the potential of coconut shell-PEEK adsorbents for CH₄ storage.
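For reference, the Langmuir and Sips isotherms mentioned above can be written in a few lines. The parameter values below are illustrative placeholders, not fitted values from the study; note that the Sips form reduces to Langmuir when its heterogeneity exponent is 1.

```python
def langmuir(p, q_max, b):
    """Langmuir isotherm: monolayer adsorption on a uniform surface."""
    return q_max * b * p / (1.0 + b * p)

def sips(p, q_max, b, n):
    """Sips isotherm: Langmuir-Freundlich hybrid with heterogeneity exponent n."""
    return q_max * (b * p) ** n / (1.0 + (b * p) ** n)

# Hypothetical parameters for a CH4 / activated-carbon pair (illustrative only):
# q_max in mmol/g, b in 1/bar, n dimensionless.
q_max, b, n = 10.0, 0.1, 0.8

for p in (5, 17, 35):   # the pressures used in the study, in bar
    print(p, "bar ->", round(sips(p, q_max, b, n), 3), "mmol/g")

# Sanity check: at n = 1 the Sips form collapses to Langmuir
assert abs(sips(20, q_max, b, 1.0) - langmuir(20, q_max, b)) < 1e-12
```

In practice the parameters would be obtained by nonlinear least-squares fitting to the measured equilibrium data.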

Keywords: adsorption, desorption, activated carbon, coconut shells, polyetheretherketone

Procedia PDF Downloads 33
286 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs of identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses the main technical ones. It presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation.
Several techniques for identifying the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations reveals the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
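A minimal sketch of the spectral-analysis step described above: take the FFT of a pressure (or geophone) signal and read off the dominant peak as an estimate of the hammer's oscillation frequency. The sample rate and the 35 Hz tone below are assumptions chosen for illustration, and the signal is synthetic.

```python
import numpy as np

fs = 2000.0                                # assumed sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)          # 2 s recording window

# Synthetic pressure signal: a 35 Hz hammer oscillation buried in noise
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 35.0 * t) + 0.8 * rng.normal(size=t.size)

# Spectral analysis: real FFT magnitude, matching frequency axis, pick the peak
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

print(peak_hz)
```

With a 2 s window the frequency resolution is 0.5 Hz; a longer window, or Welch-style averaging, would sharpen the estimate on noisier field data.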

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 202
285 The Development of a Precision Irrigation System for Durian

Authors: Chatrabhuti Pipop, Visessri Supattra, Charinpanitkul Tawatchai

Abstract:

Durian is one of the top agricultural products exported by Thailand, and there is massive market potential for the durian industry. While global demand for Thai durians, especially from China, is very high, Thailand's durian supply falls far short of this demand. Poor agricultural practices result in low yields and poor fruit quality. Most irrigation systems currently used by farmers run on fixed schedules or fixed rates that ignore actual weather conditions and crop water requirements. In addition, emerging technologies are often too complex and too expensive for farmers to adopt. Many farmers simply leave durian trees to grow naturally. Without proper irrigation and nutrient management, durians are vulnerable to a variety of problems, including stunted growth, failure to flower, disease, and death. Technical development and research on durian are much needed to support the wellbeing of farmers and the economic development of the country. However, there are few studies or development projects for durian because it is a perennial crop that requires a long time to yield reportable results. This study therefore aims to address the problem of durian production by developing an autonomous, precision irrigation system. The system is designed and equipped with an industrial programmable controller, a weather station, and a digital flow meter. Daily water requirements are computed from weather data such as rainfall and evapotranspiration, and irrigation is applied daily at variable flow rates. A prediction model is also included to refine the irrigation schedule. Before the system was installed in the field, a simulation model was built and tested in a laboratory setting to ensure its accuracy. Water consumption was measured daily before and after the experiment for further analysis.
With this system, the crop water requirement is precisely estimated and optimized based on data from the weather station. Durian trees are irrigated with the right amount of water at the right time, offering the opportunity for higher yields and higher incomes for farmers.
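A minimal sketch of the daily water-requirement calculation described above, in the style of an FAO-56 water balance: crop evapotranspiration is reference evapotranspiration scaled by a crop coefficient, minus the usable share of rainfall. The durian crop coefficient and the rainfall-efficiency factor below are hypothetical placeholders, not values from the study.

```python
# Hypothetical crop coefficient for durian (placeholder, not from the study)
KC_DURIAN = 0.85

def daily_irrigation_mm(et0_mm, rain_mm, kc=KC_DURIAN, rain_efficiency=0.8):
    """Irrigation depth needed today, in mm.

    etc = kc * ET0 is the crop evapotranspiration; the usable share of
    rainfall is subtracted, and the result is clipped at zero (no
    "negative" irrigation on wet days).
    """
    etc = kc * et0_mm
    effective_rain = rain_efficiency * rain_mm
    return max(0.0, etc - effective_rain)

# A sunny day with no rain vs. a wet day
print(daily_irrigation_mm(et0_mm=5.0, rain_mm=0.0))
print(daily_irrigation_mm(et0_mm=5.0, rain_mm=10.0))
```

In the described system, ET0 and rainfall would come from the on-site weather station, and the controller would translate the depth into a run time for the variable-flow irrigation hardware.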

Keywords: durian, precision irrigation, precision agriculture, smart farm

Procedia PDF Downloads 90
284 Hydrodynamic and Water Quality Modelling to Support Alternative Fuels Maritime Operations Incident Planning & Impact Assessments

Authors: Chow Jeng Hei, Pavel Tkalich, Low Kai Sheng Bryan

Abstract:

Due to the growing demand for sustainability in the maritime industry, there has been a significant increase in focus on alternative fuels such as biofuels, liquefied natural gas (LNG), hydrogen, methanol and ammonia to reduce the carbon footprint of vessels. Alternative fuels offer efficient transportability and significantly reduce carbon dioxide emissions, a critical factor in combating global warming. In an era when the world is determined to tackle climate change, demand for methanol is projected to rise consistently, even during downturns in the oil and gas industry. Since 2022, there has been an increase in methanol loading and discharging operations for industrial use in Singapore. These operations were conducted across storage tank terminals of varying capacities on Jurong Island, which are also used to store alternative fuels for bunkering requirements. The key objective of this research is to support the green shipping industry in the transition to new fuels such as methanol and ammonia, especially by developing the capability to inform risk assessment and management of spills. In the unlikely event of an accidental spill, a highly reliable forecasting system must be in place to support mitigation measures and advance planning. The outcomes of this research will lead to an enhanced metocean prediction capability and, together with advanced sensing, will continuously build up a robust digital twin of the bunkering operating environment. Outputs from the developments will contribute to management strategies for alternative marine fuel spills, including best practices, safety challenges and crisis management. The outputs can also benefit key port operators and the bunkering, petrochemicals, shipping, protection and indemnity, and emergency response sectors.
The forecast datasets provide the expected atmospheric and hydrodynamic conditions prior to bunkering exercises, enabling a better understanding of the metocean conditions ahead and allowing for more refined spill incident management planning.

Keywords: clean fuels, hydrodynamics, coastal engineering, impact assessments

Procedia PDF Downloads 42
283 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of these groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and produces outcomes with patched patterns. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. Using simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and the corresponding wave images were generated.
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantom and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
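The relative-error metric used in the comparison above (RMSE divided by the mean true shear modulus) is straightforward to compute; the maps below are synthetic stand-ins for the simulated shear-modulus fields, with the noise level chosen arbitrarily for illustration.

```python
import numpy as np

def relative_error(true_map, est_map):
    """RMSE between maps, normalised by the mean true shear modulus."""
    rmse = np.sqrt(np.mean((est_map - true_map) ** 2))
    return rmse / np.mean(true_map)

rng = np.random.default_rng(7)
# Synthetic "true" map over the study's 1-15 kPa range, plus a noisy estimate
true_map = rng.uniform(1.0, 15.0, size=(64, 64))
est_map = true_map + rng.normal(0.0, 0.4, size=true_map.shape)

print(round(100 * relative_error(true_map, est_map), 2), "%")
```

In the study this metric was averaged over 1000 independently simulated images to compare the U-Net against DI and LFE.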

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 36
282 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
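A minimal NIPALS implementation of ordinary PLS1 (without the sparsity penalty proposed above) shows how latent components are extracted from a p >> n predictor matrix; the data below are synthetic, with a deliberately correlated predictor pair to mimic the NIR/CNA setting.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 via the NIPALS deflation loop (no sparsity penalty)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)           # weight vector
        t = X @ w                         # scores (latent component)
        p = X.T @ t / (t @ t)             # X loadings
        q = (y @ t) / (t @ t)             # y loading
        X = X - np.outer(t, p)            # deflate X
        y = y - q * t                     # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Regression coefficients back in the original predictor space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 200))            # p >> n, as in NIR / CNA data
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=40)   # highly correlated pair
y = X[:, 0] - 2 * X[:, 5] + 0.1 * rng.normal(size=40)

beta = pls1(X, y, n_components=3)
y_hat = (X - X.mean(axis=0)) @ beta + y.mean()
print(round(float(np.corrcoef(y, y_hat)[0, 1]), 3))
```

The sparse variant discussed in the abstract would replace the unconstrained weight vector w with a penalized estimate, driving most of its entries to zero so the retained predictors are easier to interpret.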

Keywords: partial least squares regression, genetics data, negative filter factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 13
281 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been felt largely by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to keep pace with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon pass. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and unprecedented situations with no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges, irrespective of how daunting the complexity of the issues or how sophisticated the required cognitive competence.
They go on to contend that even if the system is too rigid to allow robot judges to substitute human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully invade the human-dominated judicial sphere, and critically evaluating the potential differences it would make in the system of justice delivery, the author tries to argue that penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 198
280 The Radicalization of Islam in the Syrian Conflict: A Systematic Review from the Interreligious Dialogue Perspective

Authors: Cosette Maiky

Abstract:

Seven years have passed since the crisis erupted, and the list of challenges to peacebuilding and interreligious dialogue is growing ever more discouraging: violence, displacement, sectarianism, discrimination, radicalisation, fragmentation, and the collapse of social and economic infrastructure have notoriously plagued the war-torn country. As the situation in Syria and neighbouring countries still creates real concern about the future of social cohesion and coexistence in the region, the author, in her function as Field Expert on Arab Countries at the King Abdullah bin Abdelaziz Centre for Interreligious and Intercultural Dialogue, presents a systematic review paper that focuses on the radicalization of Islam in Syria. The exercise was based on a series of research questions that guided both the review of literature and the interviews. Their relative meaningfulness was assessed and trade-offs discussed in each case to ensure that key questions were addressed and to avoid unnecessary effort. There was an element of flexibility, as the assessment progressed, to inject additional generic questions. The main sources of information were documents and literature with a direct bearing on the issues of relevance, collected in all available formats, and information gathered through key informant interviews. The latter was particularly helpful for understanding capacity constraints, as well as gaps, enablers and barriers. Respondents were selected among those engaged in IRD activities clearly linked to peacebuilding (i.e., religious leaders, leaders in religious communities, peace actors, religious actors, conflict parties, minority groups, women's initiatives, youth initiatives, civil society organizations, academia, etc.), with relevant professional qualifications and work experience.
During the research process, the Consultant carefully took account of sensitivities around terminologies as well as a highly insecure and dynamic context. The Consultant (an Arabic native speaker) therefore adapted terminologies while conducting interviews according to the area and respondent. Findings revealed: the deep ideological polarization and lack of trust dividing communities and preventing meaningful dialogue opportunities; the challenge of prioritizing IRD and peacebuilding work in the context of such a severe humanitarian crisis facing the country; the need to engage religious leaders and institutions in peacebuilding processes and initiatives; the need for institutions with a specific IRD mandate, which can have a sustainable influence on peace through various levels of intervention (from the grassroots level to policy and research); and lastly, the need to address stigma in media representations of Muslims and Islam. While religion and religious agendas have been massively used for political issues and power play in the Middle East and elsewhere, more extensive policy and research efforts are needed to highlight the positive role of religion and religious actors in dialogue and peacebuilding processes.

Keywords: radicalisation, Islam, Syria, conflict

Procedia PDF Downloads 146
279 SARS-CoV-2: Prediction of Critical Charged Amino Acid Mutations

Authors: Atlal El-Assaad

Abstract:

Viruses change over time through mutations, resulting in new variants that may persist or disappear. A mutation refers to an actual change in the virus's genetic sequence, and a variant is a viral genome that may contain one or more mutations. Critical mutations may make the virus more transmissible, increase disease severity, or render it more resistant to diagnostics, therapeutics, and vaccines. Thus, variants carrying such mutations may increase the risk to human health and are considered variants of concern (VOC). Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the positive-sense single-stranded RNA virus, contagious in humans, that caused coronavirus disease 2019 (COVID-19), has been studied thoroughly, and several variants have been identified across the world along with their corresponding mutations. SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins, but prior studies and vaccine development focused on genetic mutations in the S protein due to its vital role in allowing the virus to attach to and fuse with the membrane of a host cell. Specifically, subunit S1 catalyzes attachment, whereas subunit S2 mediates fusion. In this perspective, we studied all charged amino acid mutations of the SARS-CoV-2 viral spike protein S1 when bound to antibody CC12.1 in a crystal structure and assessed the effect of different mutations. We generated all missense mutants of SARS-CoV-2 protein amino acids (AAs) within the SARS-CoV-2:CC12.1 complex model. To generate the family of mutants in each complex, we mutated every charged amino acid to each of the other charged amino acids (Lysine (K), Arginine (R), Glutamic Acid (E), and Aspartic Acid (D)) and studied the new binding of the complex after each mutation. We applied Poisson-Boltzmann electrostatic calculations feeding into free energy calculations to determine the effect of each mutation on binding.
After analyzing our data, we identified the charged amino acids that are key to binding. Furthermore, we validated those findings against published experimental genetic data. Our results are the first to propose in silico potentially life-threatening mutations of SARS-CoV-2 beyond the mutations present in the five common variants found worldwide.
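The mutational scan described above can be sketched as a bookkeeping loop over charged positions; the following is a minimal illustration with invented energies standing in for the actual Poisson-Boltzmann/free-energy pipeline (e.g. AESOP with a PB solver), and the residue number is purely illustrative:

```python
# Schematic sketch of the charged-mutation scan described above.
# Real Poisson-Boltzmann energies would come from a solver such as APBS;
# here the binding energies are placeholder numbers, not computed values.

CHARGED = ["K", "R", "E", "D"]  # lysine, arginine, glutamate, aspartate

def ddg_of_binding(dg_bind_wt, dg_bind_mut):
    """ddG = dG_bind(mutant) - dG_bind(wild type); positive => binding weakened."""
    return dg_bind_mut - dg_bind_wt

def scan_charged_mutations(wt_residues, dg_bind_wt, dg_bind_fn):
    """Mutate every charged residue into each other charged type and
    collect ddG values, mimicking the family-of-mutants scan."""
    results = {}
    for pos, aa in wt_residues.items():
        if aa not in CHARGED:
            continue
        for mut in CHARGED:
            if mut == aa:
                continue
            dg_mut = dg_bind_fn(pos, mut)  # placeholder for PB + free-energy call
            results[(pos, aa, mut)] = ddg_of_binding(dg_bind_wt, dg_mut)
    return results

# Toy example: one charged residue with made-up energies in kcal/mol.
wt = {417: "K"}                                    # illustrative spike position
fake_energies = {"R": -9.5, "E": -4.0, "D": -4.5}  # invented numbers
scan = scan_charged_mutations(wt, -10.0, lambda p, m: fake_energies[m])
worst = max(scan, key=scan.get)                    # mutation most weakening binding
print(worst, round(scan[worst], 1))
```

The ranking step at the end is the part that singles out "critical" charged mutations in this kind of scan.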

Keywords: SARS-CoV-2, variant, ionic amino acid, protein-protein interactions, missense mutation, AESOP

Procedia PDF Downloads 79
278 Durability Analysis of a Knuckle Arm Using VPG System

Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee

Abstract:

A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A former study suggested a lightweight design of a knuckle arm considering these structural performances and using metamodel-based optimization. Six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method, with the finite element method utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle while satisfying the design requirements. Then, we investigated its manufacturability by performing forging analysis; the forging was done as a hot process, and the product was made through two-step forging. As the final step of the development process, the durability is investigated using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all of its information with the part manufacturer; thus, the part manufacturer is limited in predicting the durability performance at the full-car level. The eta/VPG provides libraries of commonly used suspension, tire, and road components, which makes full-car modeling possible. First, the full car is modeled by referencing the following information: Overall Length: 3,595 mm, Overall Width: 1,595 mm, CVW (Curb Vehicle Weight): 910 kg, Front Suspension: MacPherson Strut, Rear Suspension: Torsion Beam Axle, Tire: 235/65R17. Second, the road is selected as cobblestone; the cobblestone road condition is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm.
The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability requirement at the full-car level. The VPG analysis is performed successfully even though it does not give an exact prediction, since the full-car model is a rough one. Thus, this approach can be used effectively when detailed full-car data are not available.
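Durability post-processing of this kind typically accumulates fatigue damage cycle by cycle from the dynamic stress history; a minimal Palmgren-Miner sketch follows, with an invented load spectrum and S-N constants (not the actual LS-DYNA/VPG workflow):

```python
# Minimal Palmgren-Miner damage accumulation, the bookkeeping behind
# durability (fatigue life) post-processing of dynamic FE stress histories.
# All numbers are invented for illustration.

def cycles_to_failure(stress_amp_mpa, sn_c=1e12, sn_m=3.0):
    """Basquin-type S-N curve: N = C / S^m (illustrative constants)."""
    return sn_c / stress_amp_mpa ** sn_m

def miner_damage(spectrum):
    """spectrum: list of (stress amplitude [MPa], applied cycles) per road block."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical rainflow-counted spectrum for one km of cobblestone road.
per_km = [(120.0, 200), (80.0, 1500), (40.0, 10000)]
d_per_km = miner_damage(per_km)
life_km = 1.0 / d_per_km  # failure when accumulated damage reaches 1
print(round(life_km))
```

In practice the spectrum would come from rainflow counting of the simulated stress history, and the S-N constants from the Al6082 material data.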

Keywords: knuckle arm, structural optimization, metamodel, forging, durability, VPG (Virtual Proving Ground)

Procedia PDF Downloads 398
277 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor

Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst

Abstract:

Prediction of the two-phase water mixture level during fast depressurization of the Reactor Pressure Vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which at the beginning may be considered conservative does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. In the case of a pressure decrease, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients, and the sudden emergence of void in the region initially occupied by liquid, cause elevation of the two-phase mixture. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA), a dedicated test rig for simulation of KERENA, a new Boiling Water Reactor design of Framatome. The models are based on dynamic mass and energy equations and are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam, or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core in a similar manner as at INKA.
The comparison of the simulation results against the reference data shows good agreement.
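The distinction between the collapsed level (liquid inventory) and the swell level (top of the two-phase mixture) can be illustrated with a toy vertical-volume discretization; the void fractions below are invented numbers, not INKA data:

```python
# Toy illustration of collapsed vs. swell level in a vessel discretized
# into vertical volumes, as in the Modelica models described above.
# Homogeneous void fractions per volume are invented values.

def collapsed_level(heights, voids):
    """Liquid inventory expressed as a height of pure liquid."""
    return sum(h * (1.0 - a) for h, a in zip(heights, voids))

def swell_level(heights, voids, cutoff=0.999):
    """Top of the two-phase mixture: stack volumes until pure steam."""
    level = 0.0
    for h, a in zip(heights, voids):
        if a >= cutoff:
            break
        level += h
    return level

h = [1.0, 1.0, 1.0, 1.0]       # m, four dynamic volumes
alpha = [0.2, 0.4, 0.6, 1.0]   # void fractions after depressurization
print(round(collapsed_level(h, alpha), 3), swell_level(h, alpha))
```

The gap between the two numbers is exactly the level swell that the different void-fraction models in the paper try to predict.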

Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics

Procedia PDF Downloads 182
276 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico

Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos

Abstract:

Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is a pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses; the non-linear dynamic approaches use step-by-step numerical solutions, with the inconvenience of high computing time. In this study, a non-linear static analysis ('pushover analysis') was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: two 22 m end spans and a 32 m central span. The deck width is 14 m and the concrete slab is 18 cm deep. The substructure consists of frames of five piers with hollow box-shaped sections, each pier being 7.05 m in height and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato.
In this way, the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers under a critical earthquake in this zone is almost imperceptible: the geometry and reinforcement demanded by the current design standards provide a displacement capacity that is excessive compared with the demand. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier; the mechanical properties of the materials (concrete and reinforcing steel) were also maintained. Once a pushover analysis was performed for this configuration, it was concluded that the bridge would still show adequate seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time, and labor costs would be reduced in this case study.
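The displacement-controlled pushover of a pier with a base plastic hinge can be sketched with an elastic-perfectly-plastic idealization; the stiffness and yield force below are invented values, not the Celaya bridge properties:

```python
# Sketch of a displacement-controlled pushover on a single pier idealized
# as elastic-perfectly-plastic, with a plastic hinge at the base.
# Stiffness and yield force are invented illustrative numbers.

def pushover_curve(k_elastic, f_yield, d_max, steps):
    """Return (displacement [m], base shear [kN]) pairs under increasing drift."""
    curve = []
    for i in range(steps + 1):
        d = d_max * i / steps
        f = min(k_elastic * d, f_yield)  # the hinge caps the base shear
        curve.append((d, f))
    return curve

curve = pushover_curve(k_elastic=50e3, f_yield=500.0, d_max=0.05, steps=5)
print(curve[-1])  # capacity plateau once the hinge has formed
```

In the real analysis the cap would come from the moment-curvature result for the confined section rather than a single yield-force constant.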

Keywords: collapse mechanism, moment-curvature analysis, overall capacity, pushover analysis

Procedia PDF Downloads 127
275 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms, helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment plans, and finally prognosis prediction. However, the IHC performed in various tumors in daily practice often shows conflicting and very challenging results to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For this reason, we conceived the idea of developing an expert supporting system to help pathologists make better decisions in diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with the epidemiologic data of lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profile between two or three different neoplasms.
The expert supporting system algorithm presented in this study is in its elementary stage and needs more optimization using more advanced technology, such as deep learning with real case data, especially in differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application to determine IHC antibodies for a certain subset of differential diagnoses might be possible in the near future.
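At its core, a probabilistic decision tree of this kind repeatedly applies Bayes' theorem to update disease probabilities with each antibody result; below is a schematic sketch with made-up priors and positivity rates (not the actual 104-disease, 88-antibody database):

```python
# Sequential Bayes update: P(disease | IHC results), built up one antibody
# at a time. Diseases, priors and marker positivity rates are invented
# examples, not the database described in the abstract.

def bayes_update(priors, likelihoods, result):
    """priors: {disease: P(d)}; likelihoods: {disease: P(marker positive | d)}."""
    post = {}
    for d, p in priors.items():
        lk = likelihoods[d] if result else 1.0 - likelihoods[d]
        post[d] = p * lk
    z = sum(post.values())
    return {d: v / z for d, v in post.items()}

priors = {"DLBCL": 0.4, "FL": 0.3, "MCL": 0.3}          # epidemiologic priors
cd10_pos_rate = {"DLBCL": 0.4, "FL": 0.9, "MCL": 0.05}  # P(CD10+ | disease)
post = bayes_update(priors, cd10_pos_rate, result=True)  # CD10 stained positive
top = max(post, key=post.get)                            # presumptive diagnosis
print(top, round(post[top], 2))
```

Chaining this update over the full antibody panel and reporting the three highest posteriors reproduces the "top three presumptive diagnoses" behavior.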

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 206
274 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw videos possess very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, to fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases; thus, bit errors may have a large effect on the reconstructed video, and sometimes even a single bit error can lead to catastrophic failure of the reconstruction. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder.
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
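The transmit chain described above, minus HEVC and FEC, can be sketched for the simplest QAM constellation (4-QAM, i.e. QPSK) over AWGN; all parameters are illustrative:

```python
# Minimal 4-QAM (QPSK) over AWGN: random bits -> Gray-mapped symbols ->
# additive Gaussian noise -> hard decisions -> bit error rate.
# A toy stand-in for the QAM/AWGN stage of the chain in the abstract.
import math
import random

def simulate_qpsk_ber(n_bits, snr_db, seed=1):
    rng = random.Random(seed)
    es_n0 = 10 ** (snr_db / 10)           # symbol SNR (2 bits per symbol)
    sigma = math.sqrt(1.0 / (2 * es_n0))  # noise std per dimension, Es = 1
    errors = 0
    for _ in range(n_bits // 2):
        b0, b1 = rng.randint(0, 1), rng.randint(0, 1)
        i = (2 * b0 - 1) / math.sqrt(2)   # in-phase component
        q = (2 * b1 - 1) / math.sqrt(2)   # quadrature component
        ri = i + rng.gauss(0, sigma)
        rq = q + rng.gauss(0, sigma)
        errors += (ri > 0) != (b0 == 1)   # per-dimension hard decisions
        errors += (rq > 0) != (b1 == 1)
    return errors / n_bits

print(simulate_qpsk_ber(20000, snr_db=9))  # low BER at high SNR
print(simulate_qpsk_ber(20000, snr_db=0))  # BER rises as SNR drops
```

Prepending an FEC encoder/decoder to this loop is what trades raw BER for the restored video quality discussed above.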

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 125
273 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator

Authors: Ahmed Abdulrahman, Jalal Foroozesh

Abstract:

CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: CO₂'s contribution to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of direct CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase CO₂ diffusion from carbonated water (CW) to the oil phase, and CO₂ dispersion (within-phase diffusion and mechanical mixing), which affect the oil physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. The material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Also, longitudinal and transverse dispersion coefficients were added to account for the CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to model CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on the CO₂ spatial spreading (and hence on accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor.
A sensitivity analysis was done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients and that a low injection rate should be implemented during CWI.

Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation

Procedia PDF Downloads 143
272 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson

Abstract:

The Site C hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential to monitor the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations in a fast and accurate manner. To develop the CBR model, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the right bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous and recent cases, used to predict the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within the ±99 cm range.
Eventually, the actual results will be used to expand the database and improve BPEP so that it performs as a learning machine, predicting more accurate BP elevations for future sensor installations.
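One simple way to realize the distance-based similarity step is inverse-distance weighting of the nearest installed-sensor cases; the coordinates and elevations below are invented, not Site C data:

```python
# Sketch of the case-based prediction step: estimate a bedding-plane
# elevation at a new location from the nearest installed sensors using
# inverse-distance weighting. All coordinates/elevations are invented.
import math

def predict_elevation(cases, x, y, k=3):
    """cases: list of (x, y, elevation). Weight the k nearest by 1/distance."""
    ranked = sorted(cases, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    nearest = ranked[:k]
    weights = [1.0 / (math.hypot(cx - x, cy - y) + 1e-9) for cx, cy, _ in nearest]
    return sum(w * e for w, (_, _, e) in zip(weights, nearest)) / sum(weights)

# Hypothetical cases for one bedding plane: easting, northing (m), elevation (m).
cases = [(0, 0, 411.0), (100, 0, 410.4), (0, 100, 411.6), (300, 300, 408.0)]
print(round(predict_elevation(cases, 50, 50), 2))
```

A learning-machine version would simply append each newly confirmed (location, elevation) pair to `cases`, which is the feedback loop the abstract anticipates.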

Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction

Procedia PDF Downloads 55
271 Study of Biomechanical Model for Smart Sensor Based Prosthetic Socket Design System

Authors: Wei Xu, Abdo S. Haidar, Jianxin Gao

Abstract:

Prosthetic socket is a component that connects the residual limb of an amputee with an artificial prosthesis. It is widely recognized as the most critical component that determines the comfort of a patient when wearing the prosthesis in his/her daily activities. Through the socket, the body weight and its associated dynamic load are distributed and transmitted to the prosthesis during walking, running or climbing. In order to achieve a good-fit socket for an individual amputee, it is essential to obtain the biomechanical properties of the residual limb. In current clinical practices, this is achieved by a touch-and-feel approach which is highly subjective. Although there have been significant advancements in prosthetic technologies such as microprocessor controlled knee and ankle joints in the last decade, the progress in designing a comfortable socket has been rather limited. This means that the current process of socket design is still very time-consuming, and highly dependent on the expertise of the prosthetist. Supported by the state-of-the-art sensor technologies and numerical simulations, a new socket design system is being developed to help prosthetists achieve rapid design of comfortable sockets for above knee amputees. This paper reports the research work related to establishing biomechanical models for socket design. Through numerical simulation using finite element method, comprehensive relationships between pressure on residual limb and socket geometry were established. This allowed local topological adjustment for the socket so as to optimize the pressure distributions across the residual limb. When the full body weight of a patient is exerted on the residual limb, high pressures and shear forces between the residual limb and the socket occur. 
During the numerical simulations, various hyperelastic models, namely Ogden, Yeoh and Mooney-Rivlin, were used, and their effectiveness in representing the biomechanical properties of the soft tissues of the residual limb was evaluated. This also involved reverse engineering, which resulted in an optimal representative model under compression testing. To validate the simulation results, a range of silicone models was fabricated and tested with an indentation device, which yielded the force-displacement relationships. Comparison of the results obtained from FEA simulations and experimental tests showed that the Ogden model did not fit the soft tissue indentation data well, while the Yeoh model gave the best representation of the soft tissue mechanical behavior under indentation. Compared with the hyperelastic models, the linear elastic model also showed significant errors. In addition, normal and shear stress distributions on the surface of the soft tissue model were obtained. The effect of friction in compression testing and the influence of soft tissue stiffness and testing boundary conditions were also analyzed. All these have contributed to the overall goal of designing a good-fit socket for individual above-knee amputees.
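Because uniaxial stress in a Yeoh solid is linear in its material constants, fitting reduces to linear least squares; below is a sketch on synthetic data generated from assumed constants (not measured tissue properties):

```python
# Fitting a reduced (two-term) Yeoh model to uniaxial-style data.
# Uniaxial Cauchy stress for an incompressible Yeoh solid:
#   sigma(l) = 2*(l^2 - 1/l) * (C1 + 2*C2*(I1 - 3)),  I1 = l^2 + 2/l
# Stress is linear in (C1, C2), so plain least squares recovers them.
# Data here are synthetic, generated from assumed constants.

def yeoh_stress(l, c1, c2):
    i1 = l * l + 2.0 / l
    return 2.0 * (l * l - 1.0 / l) * (c1 + 2.0 * c2 * (i1 - 3.0))

def fit_yeoh(stretches, stresses):
    """Solve the 2x2 normal equations for (C1, C2)."""
    rows = []
    for l in stretches:
        i1 = l * l + 2.0 / l
        base = 2.0 * (l * l - 1.0 / l)
        rows.append((base, base * 2.0 * (i1 - 3.0)))
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * s for r, s in zip(rows, stresses))
    b2 = sum(r[1] * s for r, s in zip(rows, stresses))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

ls = [1.05, 1.1, 1.2, 1.3, 1.4]
data = [yeoh_stress(l, 0.02, 0.005) for l in ls]  # assumed MPa-scale constants
c1, c2 = fit_yeoh(ls, data)
print(round(c1, 4), round(c2, 4))
```

Fitting the indentation data themselves additionally requires the contact mechanics of the indenter, which is why the paper relies on FEA rather than this uniaxial shortcut.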

Keywords: above knee amputee, finite element simulation, hyperelastic model, prosthetic socket

Procedia PDF Downloads 176
269 Identification and Characterization of in Vivo, in Vitro and Reactive Metabolites of Zorifertinib Using Liquid Chromatography Ion Trap Mass Spectrometry

Authors: Adnan A. Kadi, Nasser S. Al-Shakliah, Haitham Al-Rabiah

Abstract:

Zorifertinib is a novel, potent, oral small molecule used to treat non-small cell lung cancer (NSCLC). Zorifertinib is an Epidermal Growth Factor Receptor (EGFR) inhibitor and has good blood-brain barrier permeability for NSCLC patients with EGFR mutations; it is currently in phase II/III clinical trials. The current research reports the characterization and identification of in vitro, in vivo and reactive intermediates of zorifertinib. Prediction of susceptible sites of metabolism and reactivity pathways (cyanide and GSH) of zorifertinib was performed with the XenoSite web predictor tool. In vitro metabolism of zorifertinib was studied by incubation with rat liver microsomes (RLMs) and isolated perfused rat liver hepatocytes. Zorifertinib and its in vitro metabolites were extracted from the incubation mixtures by protein precipitation. In vivo metabolism was studied by giving a single oral dose of zorifertinib (10 mg/kg) to Sprague Dawley rats kept in metabolic cages, using oral gavage. Urine was collected and filtered at specific time intervals (0, 6, 12, 18, 24, 48, 72, 96 and 120 hr) after zorifertinib dosing. An equal volume of acetonitrile (ACN) was added to each collected urine sample, and both layers (organic and aqueous) were injected into liquid chromatography ion trap mass spectrometry (LC-IT-MS) to detect in vivo zorifertinib metabolites. The N-methyl piperazine ring and quinazoline group of zorifertinib undergo metabolism, forming an iminium ion and an electron-deficient conjugated system, respectively, which are very reactive toward nucleophilic macromolecules. Incubations of zorifertinib with RLMs in the presence of 1.0 mM KCN and 1.0 mM glutathione were made to trap reactive metabolites, which are often responsible for toxicities associated with a drug.
In vitro, nine phase I metabolites, four phase II metabolites, and eleven reactive metabolites (three cyano adducts, five GSH conjugates, and three methoxy metabolites) of zorifertinib were detected by LC-IT-MS. In vivo, eight phase I and ten phase II metabolites of zorifertinib were detected by LC-IT-MS. The in vitro and in vivo phase I metabolic pathways were N-demethylation, O-demethylation, hydroxylation, reduction, defluorination, and dechlorination. The in vivo phase II metabolic reactions were direct conjugation of zorifertinib with glucuronic acid and sulphate.

Keywords: in vivo metabolites, in vitro metabolites, cyano adducts, GSH conjugate

Procedia PDF Downloads 171
269 Reinventing Business Education: Filling the Knowledge Gap on the Verge of the 4th Industrial Revolution

Authors: Elena Perepelova

Abstract:

As the world approaches the 4th industrial revolution, income inequality has become one of the major societal concerns. Displacement of workers by technology is becoming a reality, and in return, new skills and competencies are required. More than ever, education needs to help individuals understand the wider world around them and make global connections. The author argues for the necessity of incorporating business, economics and finance studies into primary education and of offering access to business education to the general population, with the primary objective of understanding how the world functions. The paper offers a fresh look at existing business theory through an innovative program called 'Usefulnomics'. Realizing that the subjects of economics, finance and business are perceived as overwhelming by a large part of the population, the author has taken a holistic approach and created a program that simplifies the definitions of existing concepts and shifts from the traditional breakdown into subjects and specialties to a teaching method based exclusively on real-life case studies and group debates, in order to better grasp the concepts and put them into context. The paper findings are the result of a two-year project and experimental work with students from the UK, USA, Malaysia, Russia, and Spain. The author conducted extensive research through online and in-person classes and workshops, as well as in-depth interviews of primary and secondary grade students, to assess their understanding of what a business is, how businesses operate, and the role businesses play in their communities. The findings clearly indicate that students of all ages often understood business concepts and processes only in an intuitive way, which resulted in misconceptions and gaps in knowledge.
While knowledge gaps were easier to identify and correct in primary school students, as students' age increased, the learning process became distorted by career choices, political views, and the students' actual (or perceived) economic status. While secondary school students recognized more concepts, their real understanding was often on par with that of upper primary school students. The research has also shown that the lack of correct vocabulary created a strong barrier to communication and to real-life application or further learning. Based on these findings, each key business concept was practiced and put into context with small groups of students in order to design the content and format that would be well accepted and understood by the target group. As a result, the final learning program package was based on case studies from daily modern life and used a wide range of examples: from popular brands and well-known companies to basic commodities. In the final stage, the content and format were put into practice in larger classrooms. The author would like to share the key findings from the research and the resulting learning program, as well as present new ideas on how the program could be further enriched and adapted so that schools and organizations can deliver it.

Keywords: business, finance, economics, lifelong learning, XXI century skills

Procedia PDF Downloads 96
268 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet

Abstract:

The current practice during the repowering phase of wind turbines is deconstruction of existing foundations and construction of new foundations to accept larger wind loads, or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d'Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs that allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation's early age to stresses during service. The results from the instrumentation form the basis of validation for both the laboratory and numerical work conducted throughout the project. The study currently focuses on the effect of the coupled mechanisms (Thermal-Hydro-Mechanical-Chemical) that induce stress during the early age of the reinforced concrete foundation, and on scale-factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using THMC 3D models in the COMSOL Multiphysics software, the numerical analyses performed on both the laboratory-scale and full-scale foundations simulate thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep, so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early-age properties on the damage potential in full-scale wind turbine foundations. However, prediction of the damage potential at laboratory scale shows significant differences in early-age stresses compared to the full-scale model, depending on the spatial position in the foundation.
In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting ultimate deformations of the on-site foundation using laboratory-scale models.
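The hydration kinetics underlying such THMC models are often expressed through an Arrhenius equivalent-age (maturity) function; below is a minimal sketch with illustrative constants, not the project's COMSOL model:

```python
# Equivalent-age (Arrhenius) maturity sketch for early-age concrete: the
# kind of temperature-dependent kinetics that drives hydration in THMC
# models. The activation-energy constant is an illustrative value.
import math

def equivalent_age(temps_c, dt_h, ea_over_r=4000.0, t_ref_c=20.0):
    """Sum Arrhenius-weighted time steps: hours 'worth' at the 20 C reference."""
    t_ref = t_ref_c + 273.15
    age = 0.0
    for t_c in temps_c:
        t = t_c + 273.15
        age += dt_h * math.exp(-ea_over_r * (1.0 / t - 1.0 / t_ref))
    return age

core = equivalent_age([30.0] * 24, 1.0)     # warm foundation interior, 24 h
surface = equivalent_age([15.0] * 24, 1.0)  # cooler surface, 24 h
print(round(core, 1), round(surface, 1))    # interior matures faster
```

This differential maturity between interior and surface is one driver of the position-dependent early-age stresses noted above.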

Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines

Procedia PDF Downloads 148
267 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations

Authors: Madan Chandra Maurya, A. R. Dar

Abstract:

Among all natural calamities, earthquakes are the most devastating: even the combined losses from all other calamities are far smaller than the losses due to earthquakes. We must therefore be ready to face such situations, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentric braced frames is still limited. For this reason, the present study centers on the plastic behavior of steel structures with braced frame systems. Two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives a complete load-deflection history of the structure until collapse and is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The hinge-by-hinge incremental elasto-plastic method is used in this study because of its simplicity in tracing the complete load-deformation history of the two-storey un-braced scaled model. Experiments were then conducted on a two-storey scaled building model, with and without the bracing system, to obtain the experimental load-deformation curves of the scaled model. The only way forward is to understand and analyze such techniques and adopt them in our structures.
The study, titled Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations, addresses all of this. It aims to improve on the traditionally practiced systems and to check the behavior and usefulness of a new configuration against the X-braced system as the reference model, i.e., how its plastic behavior differs from the X-braced frame. Laboratory tests determined the plastic behavior of these models (with and without bracing) in terms of load-deformation curves. The aim is thus to improve lateral displacement resistance by using a new concentric brace configuration that differs from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, the results of both approaches were also compared with a nonlinear static (pushover) analysis in ETABS, i.e., how closely the two earlier results reproduce the behavior shown in the pushover curve, and up to what limit. Test results show that all three approaches behave similarly up to the yield point, confirming the applicability of elasto-plastic (hinge-by-hinge) analysis for tracing plastic behavior. Finally, the outcomes of the three approaches show that the new configuration chosen for study behaves between the plane frame (without bracing, the reference frame) and the conventional X-braced frame.
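The hinge-by-hinge idea described above can be illustrated on a textbook member rather than the authors' two-storey frame. A minimal sketch for a propped cantilever under a central point load, with illustrative (not study-derived) values of plastic moment Mp and span L:

```python
# Hinge-by-hinge incremental elasto-plastic analysis: textbook sketch for a
# propped cantilever (fixed at A, pinned at B) with a central point load P.
# Mp and L below are assumed illustrative values, not the study's model.

def hinge_by_hinge(Mp, L):
    """Trace the load history until a plastic collapse mechanism forms."""
    history = []
    # Stage 1: elastic propped cantilever. Peak moments under central load P:
    #   fixed end A: 3PL/16, midspan C: 5PL/32  ->  first hinge forms at A.
    P1 = 16.0 * Mp / (3.0 * L)           # load at first hinge (M_A = Mp)
    M_mid = 5.0 * P1 * L / 32.0          # midspan moment when that hinge forms
    history.append(("hinge at fixed end", P1))
    # Stage 2: with a hinge at A the member behaves as simply supported;
    # each load increment dP adds dP*L/4 to the midspan moment.
    dP = (Mp - M_mid) / (L / 4.0)        # increment until midspan reaches Mp
    Pc = P1 + dP                         # collapse load (mechanism forms)
    history.append(("hinge at midspan -> collapse", Pc))
    return history

for event, P in hinge_by_hinge(Mp=100.0, L=4.0):  # Mp in kN*m, L in m
    print(f"{event}: P = {P:.2f} kN")
```

The second hinge closes the mechanism at the classical collapse load Pc = 6Mp/L, which is how the method reproduces the full load history up to collapse.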

Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS

Procedia PDF Downloads 205
266 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves

Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis

Abstract:

During the last decades, growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, designers have proposed a wide variety of high-speed craft arrangements. Among them, high-speed catamarans have proven to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo, thanks to their widely spaced demihulls, wide deck area, and high deadweight-to-displacement ratio. To improve passenger comfort and crew workability and to enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential. In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms, including linear and nonlinear versions of heave control, pitch control, and local control, on the motion responses and passenger comfort of the full-scale ship. The RCS comprised a centre-bow-fitted T-Foil and two transom-mounted stern tabs. All experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 s (10 s full scale), and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch's power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs).
Then, noting that passenger discomfort arises from vertical accelerations, and that these vary along the passenger cabin due to variations in the amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-Foil, and stern tabs). Finally, frequency-weighted Root Mean Square (RMS) vertical accelerations were calculated to estimate the Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631 recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20%, which offers great potential for further improvement in passenger comfort, crew workability, and operability of high-speed catamarans.
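The RMS and MSDV computations described above can be sketched in a few lines. This is a simplified, discrete-time illustration of the ISO 2631 definition MSDV = (integral of a_w^2 dt)^(1/2); the frequency-weighting filter is omitted, and the signal parameters below are invented, not the authors' processing chain:

```python
import math

def rms(accel):
    """Root-mean-square of a sampled acceleration record (m/s^2)."""
    return math.sqrt(sum(a * a for a in accel) / len(accel))

def msdv(accel, dt):
    """Motion Sickness Dose Value per ISO 2631: sqrt(integral of a^2 dt).
    'accel' should already be frequency-weighted (W_f weighting); the
    weighting filter itself is omitted in this sketch."""
    return math.sqrt(sum(a * a for a in accel) * dt)

# Synthetic heave acceleration: 0.5 m/s^2 amplitude at a 0.1 Hz modal
# frequency (illustrative numbers only), sampled for 30 minutes.
dt, T = 0.1, 1800.0
n = int(T / dt)
accel = [0.5 * math.sin(2 * math.pi * 0.1 * i * dt) for i in range(n)]
print(f"RMS  = {rms(accel):.3f} m/s^2")      # a sine's RMS is A/sqrt(2)
print(f"MSDV = {msdv(accel, dt):.2f} m/s^1.5")
```

Because MSDV integrates the squared acceleration over the full exposure, it equals RMS times the square root of the exposure time, which is why longer exposures at the same motion level accumulate a larger dose.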

Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests

Procedia PDF Downloads 49
265 Exploration of Classic Models of Precipitation in Iran: A Case Study of Sistan and Baluchestan Province

Authors: Mohammad Borhani, Ahmad Jamshidzaei, Mehdi Koohsari

Abstract:

The study of climate has captivated human interest throughout history; in response, people have long organized their daily activities around prevailing climatic conditions and seasonal variations. Understanding the elements and specific climatic parameters of each region, such as precipitation, which directly impacts human life, is essential, because in recent years heavy rainfall has increased significantly in various parts of the world, an increase attributed to climate change. Climate prediction models suggest a future characterized by more severe precipitation events and related floods on a global scale, a result of human-induced greenhouse gas emissions altering natural precipitation patterns. The Intergovernmental Panel on Climate Change reported on global warming in 2001: the average global temperature has shown an increasing trend since 1861, amounting to (0.6 ± 0.2) °C over the 20th century. The present study examined the trends of monthly, seasonal, and annual precipitation in Sistan and Baluchestan province. It employed data from 13 precipitation measurement stations managed by the Iran Water Resources Management Company, encompassing daily precipitation records spanning 1997 to 2016. The results indicated that total monthly precipitation at the studied stations follows a sinusoidal trend: the heaviest precipitation was observed in January, February, and March, and the lowest in September, October, and November. Seasonal precipitation follows an upward trend in autumn, reaches its peak in winter, and then decreases through spring and summer.
Examination of the averages also indicated that the highest annual precipitation occurred in 1997 and then in 2004, while the lowest fell between 1999 and 2001. Analysis of the annual trend demonstrates a decrease in precipitation in Sistan and Baluchestan province from 1997 to 2016.
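The sinusoidal shape of the monthly totals can be checked by projecting the twelve monthly means onto a single annual harmonic (harmonic regression). A minimal sketch; the rainfall figures below are invented placeholders, not the station data:

```python
import math

def fit_annual_harmonic(monthly):
    """Least-squares fit of P(m) = mean + A*cos(2*pi*m/12 - phase)
    to 12 monthly precipitation totals, using the orthogonality of
    cos/sin over a full year (classical harmonic regression)."""
    assert len(monthly) == 12
    mean = sum(monthly) / 12.0
    c = sum(p * math.cos(2 * math.pi * m / 12) for m, p in enumerate(monthly)) / 6.0
    s = sum(p * math.sin(2 * math.pi * m / 12) for m, p in enumerate(monthly)) / 6.0
    return mean, math.hypot(c, s), math.atan2(s, c)  # mean, amplitude, phase

# Hypothetical monthly totals (mm): wet winter, dry late summer.
monthly = [42, 38, 30, 18, 8, 3, 2, 2, 4, 10, 22, 35]
mean, amp, phase = fit_annual_harmonic(monthly)
print(f"mean = {mean:.1f} mm, amplitude = {amp:.1f} mm, "
      f"peak offset = {phase * 12 / (2 * math.pi):.1f} months")
```

A large fitted amplitude relative to the mean indicates a strongly seasonal (sinusoidal) regime like the one reported for these stations.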

Keywords: climate change, extreme precipitation, greenhouse gas, trend analysis

Procedia PDF Downloads 36
264 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the Consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastasis growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III that reflects the relations between the primary tumor and secondary distant metastases; 3) analyzing the CoM-III's scope of application; 4) implementing the model as a software tool. Firstly, the CoM-III expresses exponential tumor growth as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows calculation of different growth periods of the primary tumor and of secondary distant metastases in patients with lymph node metastases: 1) the 'non-visible period' of the primary tumor; 2) the 'non-visible period' of secondary distant metastases; 3) the 'visible period' of secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor to make its forecast using only current patient data, whereas the others rely on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of the primary tumor and secondary distant metastases; b) forecast the period of distant metastasis appearance in patients with lymph node metastases; c) achieve higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests.
The CoM-III calculates the number of doublings for the 'non-visible' and 'visible' growth periods of secondary distant metastases, and the tumor volume doubling time (in days) for each of those periods. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastasis growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastasis growth for stages IA, IIA, IIB, and IIIB (T1-4N1-3M0) in patients with lymph node metastases (N1-3); b) it facilitates understanding of the period of appearance and inception of secondary distant metastases.
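The doubling-time bookkeeping behind any exponential growth model can be sketched directly. The volumes and intervals below are illustrative assumptions, not CoM-III parameters:

```python
import math

def doubling_time(v1, v2, dt_days):
    """Tumor volume doubling time from two volume measurements dt_days apart,
    assuming exponential growth V(t) = V0 * 2**(t / DT)."""
    return dt_days * math.log(2) / math.log(v2 / v1)

def doublings(v_start, v_end):
    """Number of volume doublings between two volumes."""
    return math.log2(v_end / v_start)

# Illustrative numbers: growth from a single ~1e-6 mm^3 cell to a 1 cm^3
# (1000 mm^3) clinically visible tumor spans the 'non-visible period'.
n = doublings(1e-6, 1000.0)
print(f"doublings from one cell to 1 cm^3: {n:.1f}")
dt = doubling_time(500.0, 1000.0, dt_days=90)
print(f"doubling time: {dt:.0f} days")
```

Counting doublings this way is what lets a model split the natural history into 'non-visible' and 'visible' periods from tumor sizes alone.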

Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival

Procedia PDF Downloads 281
263 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is "Alphabet Recognition Using Pixel Probability Distribution". The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used for signature recognition in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCRs are also used in radar systems for reading speeding vehicles' license plates, among many other uses. Our project was implemented using Visual Studio and OpenCV (Open Source Computer Vision), with an algorithm based on neural networks (machine learning). The project was implemented in three modules. (1) Training: this module performs database generation, by two methods: (a) run-time generation, in which the database is generated at compilation time from the built-in fonts of the OpenCV library, with no human intervention; (b) contour detection, in which a JPEG template containing different fonts of a letter is converted to a weighted matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little storage (119 kB, precisely). (2) Preprocessing: the input image is pre-processed using image processing operations such as adaptive thresholding, binarization, and dilation, and is made ready for segmentation. Segmentation extracts lines, words, and letters from the processed text image.
(3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm, which recognizes a letter from mathematical parameters calculated using the database and the weight matrix of the segmented image.
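The "pixel probability distribution" of the title can be pictured as a per-pixel weight matrix used for likelihood classification. A toy sketch on 5x5 binary glyphs; the glyph patterns, the Laplace smoothing, and the naive-Bayes scoring are assumptions for illustration, not the project's OpenCV/neural-network pipeline:

```python
# Toy letter recognition via per-pixel probabilities (naive Bayes on binary
# pixels). Glyphs are 5x5 bitmaps; real input would come from the
# thresholding/segmentation steps described above.
import math

GLYPHS = {
    "I": ["11111", "00100", "00100", "00100", "11111"],
    "L": ["10000", "10000", "10000", "10000", "11111"],
    "T": ["11111", "00100", "00100", "00100", "00100"],
}

def to_bits(rows):
    return [int(ch) for row in rows for ch in row]

def train(samples):
    """Per-class probability that each pixel is 'on' (Laplace-smoothed)."""
    model = {}
    for label, examples in samples.items():
        n = len(examples)
        counts = [0] * 25
        for rows in examples:
            for i, b in enumerate(to_bits(rows)):
                counts[i] += b
        model[label] = [(c + 1) / (n + 2) for c in counts]  # smoothing
    return model

def classify(model, rows):
    """Pick the class maximizing the log-likelihood of the pixel pattern."""
    bits = to_bits(rows)
    def loglik(probs):
        return sum(math.log(p if b else 1 - p) for b, p in zip(bits, probs))
    return max(model, key=lambda label: loglik(model[label]))

model = train({k: [v] for k, v in GLYPHS.items()})
print(classify(model, GLYPHS["L"]))  # classifies the 'L' glyph
```

With many training samples per letter, the learned probability matrix plays the role of the weighted matrix described in the training module.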

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 359
262 Application of Shore Protective Structures in Optimum Land Using of Defense Sites Located in Coastal Cities

Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad

Abstract:

Awareness of effective land-use issues in coastal areas, including protection of natural ecosystems and the coastal environment, is of great importance as human presence along the coast increases. Numerous valuable structures and heritage sites are located in defence sites and waterfront areas. Marine structures such as groins, seawalls, and detached breakwaters are constructed on the coast to improve its stability against bed erosion under changing wave and climate patterns. The marine mechanisms and their interaction with shore protection structures need to be studied intensively. Groins are among the most prominent shore protection structures, creating a safe environment for the coastal area by defending the land against progressive coastal erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. The structure can be submerged, providing the necessary beach protection without negative environmental impact; however, the shoreline response to submerged structures adopted for beach protection is not yet well understood. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures in order to reduce their environmental impact. The objective of this study is to predict the beach morphology in the vicinity of submerged groins, compared with non-submerged groins, focusing on a part of the coast at Dahane Sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can serve as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements of the shape of the coast.
Finally, the results of the present study show that submerged groins can control beach erosion efficiently without severe environmental impact on the coast. This outcome can be employed in the optimum design of defence sites in coastal cities, improving their efficiency in re-using heritage lands.
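A one-line (Pelnard-Considere) shoreline model near a groin reduces to a diffusion equation dy/dt = eps * d2y/dx2, with blocked longshore transport at the groin giving a Neumann condition on the shoreline slope. A minimal explicit finite-difference sketch; the diffusivity, wave angle, and grid values are assumptions, not calibrated to Dahane Sar Sefidrood:

```python
import math

def one_line_groin(eps=0.005, tan_alpha=0.1, dx=10.0, nx=200,
                   dt=2000.0, t_end=30 * 86400.0):
    """Shoreline position y(x,t) updrift of an impermeable groin at x=0.
    Blocked transport at the groin fixes dy/dx there (breaking wave
    angle tan_alpha); far updrift the shoreline is undisturbed (y = 0)."""
    y = [0.0] * nx
    r = eps * dt / dx ** 2          # explicit scheme: needs r <= 0.5
    assert r <= 0.5
    steps = int(round(t_end / dt))
    for _ in range(steps):
        ghost = y[1] + 2.0 * dx * tan_alpha  # enforces dy/dx = -tan_alpha
        new = y[:]
        new[0] = y[0] + r * (y[1] - 2.0 * y[0] + ghost)
        for i in range(1, nx - 1):
            new[i] = y[i] + r * (y[i + 1] - 2.0 * y[i] + y[i - 1])
        y = new                              # y[nx-1] stays 0 (far field)
    return y

y = one_line_groin()
# Closed-form accretion at the groin: y(0,t) = 2*tan_alpha*sqrt(eps*t/pi)
analytic = 2 * 0.1 * math.sqrt(0.005 * 30 * 86400 / math.pi)
print(f"accretion at groin: FD {y[0]:.1f} m vs analytic {analytic:.1f} m")
```

The sketch reproduces the classical square-root-of-time growth of the updrift fillet, which is the sense in which a one-line model gives a first approximation of shoreline response to a groin.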

Keywords: submerged structures, groin, shore protective structures, coastal cities

Procedia PDF Downloads 293
261 The Effect of Air Filter Performance on Gas Turbine Operation

Authors: Iyad Al-Attar

Abstract:

Air filters are widely used in gas turbine applications to ensure that the large mass flow (500 kg/s) of clean air reaches the compressor. The continuous demand for high availability and reliability has highlighted the critical role of air filter performance in providing enhanced air quality. In addition to being challenged by different environments (tropical, coastal, hot), gas turbines confront a wide array of atmospheric contaminants with various concentrations and particle size distributions that lead to performance degradation and component deterioration. The role of air filters is therefore of paramount importance, since a fouled compressor can reduce the power output and availability of the gas turbine by over 70% throughout operation. Consequently, accurate filter performance prediction is a critical tool in filter selection, given its role in minimizing the economic impact of outages. In fact, the actual performance of Efficient Particulate Air (EPA) filters used in gas turbines tends to deviate from the performance predicted by laboratory results. This experimental work investigates the initial pressure drop and fractional efficiency curves of full-scale pleated V-shaped EPA filters used globally in gas turbines. The investigation examined the effect of operational conditions such as flow rate (500 to 5000 m3/h) and design parameters such as pleat count (28, 30, 32, and 34 pleats per 100 mm). It highlights the underlying reasons for the reduction in filter permeability as flow rate and pleat density increase: losses of filtration media surface area due to one or a combination of pleat crowding, deflection of the entire pleated panel, pleat distortion at the pleat corners, and/or compression of the filtration medium.
This paper also demonstrates that increasing the flow rate has a more pronounced effect on filter performance than increasing the pleating density. The work suggests that a valid comparison of pleat densities should be based on the effective surface area, namely the area that actually participates in filtration, and not on the total surface area the pleat density provides. Within the range tested, an optimal pleat count satisfying both the initial pressure drop and the efficiency requirements did not necessarily exist.
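The link between pleat count, media area, and initial pressure drop follows Darcy's law for flow through the medium, dP = mu * t * U / k with media face velocity U = Q / A. A sketch with invented representative values (viscosity, permeability, and panel geometry are assumptions, and the pleat-crowding area losses discussed above are deliberately ignored):

```python
def pleated_filter_dp(flow_m3h, pleats_per_100mm, panel_w=0.6, panel_h=0.6,
                      pleat_depth=0.05, mu=1.8e-5, thickness=4e-4, k=1e-12):
    """Initial (clean) pressure drop of a pleated panel via Darcy's law.
    All geometry and media properties are illustrative assumptions; the
    effective-area losses from pleat crowding are not modelled here."""
    n_pleats = pleats_per_100mm * panel_w / 0.1   # pleats across the panel
    area = 2.0 * n_pleats * pleat_depth * panel_h # both faces of each pleat
    u_media = (flow_m3h / 3600.0) / area          # media face velocity, m/s
    return mu * thickness * u_media / k           # dP = mu*t*U/k, in Pa

for n in (28, 30, 32, 34):
    print(f"{n} pleats/100mm -> dP = {pleated_filter_dp(3000, n):.0f} Pa")
```

In this idealized model, more pleats always mean more area and a lower pressure drop; the paper's point is precisely that crowding erodes the effective area, so the real optimum (if any) must be sought with the effective, not total, surface area.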

Keywords: filter efficiency, EPA filters, pressure drop, permeability

Procedia PDF Downloads 216