Search results for: inverse analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27144

27024 A Study of Nonlinear Partial Differential Equations with Random Initial Conditions

Authors: Ayaz Ahmad

Abstract:

In this work, we present the effect of noise on the solution of a partial differential equation (PDE) in three different settings. We first consider random initial conditions for two nonlinear dispersive PDEs, the nonlinear Schrödinger equation and the Korteweg-de Vries equation, and analyse their effect on some special solutions, the soliton solutions. The second case considers a linear PDE, the wave equation, where random initial conditions allow us to substantially decrease the computational and data-storage costs of an algorithm that solves the inverse problem based on boundary measurements of the solution of this equation. Finally, the third example considered is the linear transport equation with a singular drift term, for which we show that the addition of a multiplicative noise term prevents the blow-up of solutions under a very weak hypothesis for which the solution blows up in finite time in the deterministic case. Here we consider the problem of wave propagation, which is modelled by a nonlinear dispersive equation with a noisy initial condition. As observed, noise can also be introduced directly into the equations.
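
To make the first setting concrete, the following minimal sketch (not the paper's code) perturbs the exact soliton of a focusing nonlinear Schrödinger equation, i u_t + u_xx + 2|u|²u = 0 with soliton u(x,t) = sech(x)e^{it}, by Gaussian noise in the initial condition and propagates it with a split-step Fourier scheme; the grid sizes, time step and noise level are illustrative assumptions.

```python
import numpy as np

# Focusing NLS: i u_t + u_xx + 2|u|^2 u = 0, exact soliton u = sech(x) exp(i t).
# Split-step Fourier method; noise amplitude, grid and step sizes are illustrative.
L, N, dt, steps = 40.0, 1024, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)            # spectral wavenumbers

rng = np.random.default_rng(0)
u = 1 / np.cosh(x) + 0.05 * rng.standard_normal(N)    # soliton + random perturbation
lin = np.exp(-1j * k**2 * dt)                          # exact linear propagator over dt

for _ in range(steps):
    u = np.fft.ifft(lin * np.fft.fft(u))               # linear half-step (dispersion)
    u = u * np.exp(2j * np.abs(u)**2 * dt)             # nonlinear phase rotation

print("peak amplitude after t = %.1f: %.3f (unperturbed soliton keeps 1.0)"
      % (steps * dt, np.abs(u).max()))
```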

Keywords: drift term, finite time blow up, inverse problem, soliton solution

Procedia PDF Downloads 186
27023 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen-Loève (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler, which is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
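
The alternation the abstract describes, AIES updates on the low-wavenumber KL coefficients and a prior-preserving (pCN-type) random walk on the remaining coefficients, can be sketched for a toy linear Bayesian inverse problem as follows; the forward map, prior eigenvalues, ensemble size and tuning constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: the unknown is represented by d KL coefficients of a Gaussian prior
# N(0, diag(lam)), observed through a random linear map G with additive noise.
d, m, n_obs, sigma = 50, 8, 20, 0.05           # m = low-wavenumber block handled by AIES
lam = 1.0 / np.arange(1, d + 1) ** 2            # assumed prior KL eigenvalues (trace-class)
G = rng.standard_normal((n_obs, d))
q_true = np.sqrt(lam) * rng.standard_normal(d)
y = G @ q_true + sigma * rng.standard_normal(n_obs)

def misfit(q):                                  # negative log-likelihood (data fidelity)
    r = y - G @ q
    return 0.5 * (r @ r) / sigma ** 2

def log_post(q):                                # log posterior = -misfit + Gaussian prior
    return -misfit(q) - 0.5 * np.sum(q ** 2 / lam)

n_walk, a, beta = 16, 2.0, 0.2                  # ensemble size, stretch scale, pCN step
Q = np.sqrt(lam) * rng.standard_normal((n_walk, d))   # walkers initialised from the prior

for it in range(3000):
    # AIES stretch move on the low-wavenumber block (first m KL coefficients).
    for j in range(n_walk):
        k = rng.choice([i for i in range(n_walk) if i != j])
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # z ~ g(z) ∝ 1/sqrt(z) on [1/a, a]
        prop = Q[j].copy()
        prop[:m] = Q[k, :m] + z * (Q[j, :m] - Q[k, :m])
        if np.log(rng.random()) < (m - 1) * np.log(z) + log_post(prop) - log_post(Q[j]):
            Q[j] = prop
    # Prior-preserving (pCN) random walk on the high-wavenumber block.
    for j in range(n_walk):
        prop = Q[j].copy()
        xi = np.sqrt(lam[m:]) * rng.standard_normal(d - m)
        prop[m:] = np.sqrt(1 - beta ** 2) * Q[j, m:] + beta * xi
        if np.log(rng.random()) < misfit(Q[j]) - misfit(prop):
            Q[j] = prop

print("error of ensemble mean vs. true coefficients:",
      round(float(np.linalg.norm(Q.mean(axis=0) - q_true)), 3))
```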

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 129
27022 Analysis of Cross-Correlations in Emerging Markets Using Random Matrix Theory

Authors: Thomas Chinwe Urama, Patrick Oseloka Ezepue, Peters Chimezie Nnanwa

Abstract:

This paper investigates the universal financial dynamics in two dominant stock markets in Sub-Saharan Africa through an in-depth analysis of the cross-correlation matrix of price returns in the Nigerian Stock Market (NSM) and the Johannesburg Stock Exchange (JSE) for the period 2009 to 2013. The strength of correlations between stocks is found to be higher in the JSE than in the NSM. Particularly important for modelling Nigerian derivatives in the future, the interactions of other stocks with the oil sector are weak, whereas the banking sector has strong positive interactions with the other sectors in the stock exchange. For the JSE, it is the oil sector and beverages that have the greater sectorial correlations, whereas the banks have the weaker correlation with the other sectors in the stock exchange.
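
A minimal sketch of the random matrix approach described, with simulated returns standing in for the NSM/JSE data: the eigenvalues of the empirical correlation matrix are compared with the Marchenko-Pastur bounds for a purely random matrix, and the inverse participation ratio of each eigenvector is computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns stand in for the NSM/JSE data (illustrative only):
# T observations of N stocks, with a common "market mode" added to create structure.
N, T = 30, 1000
returns = rng.standard_normal((T, N)) + 0.3 * rng.standard_normal((T, 1))

# Empirical cross-correlation matrix of standardized returns.
z = (returns - returns.mean(0)) / returns.std(0)
C = (z.T @ z) / T
eigval, eigvec = np.linalg.eigh(C)

# Marchenko-Pastur bounds for a purely random correlation matrix (Q = T/N).
Q = T / N
lam_min, lam_max = (1 - np.sqrt(1 / Q)) ** 2, (1 + np.sqrt(1 / Q)) ** 2
deviating = eigval[(eigval < lam_min) | (eigval > lam_max)]
print("eigenvalues outside the random (MP) band:", np.round(deviating, 3))

# Inverse participation ratio: small IPR -> eigenvector spread across many stocks
# (market-wide mode); large IPR -> localized on a few stocks/sectors.
ipr = (eigvec ** 4).sum(axis=0)
print("IPR of largest-eigenvalue mode: %.3f (1/N = %.3f)" % (ipr[-1], 1 / N))
```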

Keywords: random matrix theory, cross-correlations, emerging markets, option pricing, eigenvalues and eigenvectors, inverse participation ratios, implied volatility

Procedia PDF Downloads 270
27021 Comparative Analysis of Islamic and Conventional Banking Systems in Terms of Profitability: A Study on Emerging Market Economies

Authors: Alimshan Faizulayev, Eralp Bektas, Abdul Ghafar Ismail, Bezhan Rustamov

Abstract:

This paper performs an empirical analysis of the determinants of profitability in Islamic and conventional banks. The main focus of this study is to evaluate and measure the financial performance of Islamic banking firms operating in Egypt, Iran, Malaysia, Pakistan, Turkey, and the UAE in contrast to conventional ones in those countries. To evaluate the performance of the banks empirically, various financial ratios are employed. We measure performance in terms of liquidity, profitability, solvency, and efficiency. In this work, t-tests, F-tests, and OLS analysis are used for hypothesis testing. Our findings reveal that there are similarities and differences in the profitability determinants of Islamic and conventional banking firms. The cost-to-revenue ratio has an inverse relationship with profitability indicators in both banking systems. However, differences in financial performance between conventional and Islamic banks are found in the overall picture of all banks in terms of net income margin.

Keywords: Islamic banking, conventional banking, GDP growth, emerging market economies

Procedia PDF Downloads 363
27020 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposed to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive. Therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization of the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) are also explored in the present work for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations with their corresponding land use patterns are ranked and identified from 1 km × 1 km grids.
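
A minimal sketch of the inverse distance weighted interpolation used in such network design, mapping pollutant concentrations from a few monitoring stations onto a regular grid; the station coordinates, PM10 values and the power parameter are illustrative assumptions.

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0):
    """Inverse Distance Weighted interpolation: w_i = 1 / d_i**power."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)               # avoid division by zero at station points
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Illustrative example: 5 monitoring stations (km coordinates) and PM10 values.
stations = np.array([[1.0, 2.0], [4.0, 1.5], [2.5, 4.0], [0.5, 0.5], [3.5, 3.5]])
pm10 = np.array([62.0, 48.0, 75.0, 55.0, 80.0])

# 1 km x 1 km grid over the study area, as in the gridding described above.
gx, gy = np.meshgrid(np.arange(0, 5.0, 1.0), np.arange(0, 5.0, 1.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])
est = idw(stations, pm10, grid)
print(est.reshape(gx.shape).round(1))
```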

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 160
27019 Modelling and Detecting the Demagnetization Fault in the Permanent Magnet Synchronous Machine Using the Current Signature Analysis

Authors: Yassa Nacera, Badji Abderrezak, Saidoune Abdelmalek, Houassine Hamza

Abstract:

Several kinds of faults can occur in permanent magnet synchronous machine (PMSM) systems: bearing faults, electrical short/open faults, eccentricity faults, and demagnetization faults. A demagnetization fault means that the strength of the permanent magnets (PM) in the PMSM decreases, and it causes low output torque, which is undesirable for electric vehicles (EVs). The fault is caused by physical damage, high-temperature stress, inverse magnetic fields, and aging. Motor current signature analysis (MCSA) is a conventional motor fault detection method based on the extraction of signal features from the stator current. A simulation model of the PMSM under partial and uniform demagnetization faults was established, and different degrees of demagnetization fault were simulated. The harmonic analyses using the Fast Fourier Transform (FFT) show that the fault diagnosis method based on harmonic analysis is only suitable for partial demagnetization faults of the PMSM and does not apply to uniform demagnetization faults.
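
A minimal sketch of the MCSA idea: a synthetic stator-current signal containing the supply fundamental plus a small fractional-order sideband is analysed with the FFT. The sideband location f1·(1 ± k/p), with p pole pairs, is a commonly quoted demagnetization signature and is used here only as an illustrative assumption; the signal itself is simulated, not taken from the paper's model.

```python
import numpy as np

fs, f1, T = 10_000.0, 50.0, 2.0        # sampling rate, supply frequency, record length
t = np.arange(0, T, 1 / fs)

# Synthetic stator current: fundamental + small fractional harmonic standing in for
# a partial-demagnetization signature (assumed sideband at f1 * (1 + k/p), k = 1, p = 4).
p, k = 4, 1
f_fault = f1 * (1 + k / p)
i_s = np.cos(2 * np.pi * f1 * t) + 0.02 * np.cos(2 * np.pi * f_fault * t)
i_s += 0.005 * np.random.default_rng(0).standard_normal(t.size)

# FFT-based harmonic analysis with a Hann window to limit spectral leakage.
win = np.hanning(t.size)
spec = np.abs(np.fft.rfft(i_s * win)) / (win.sum() / 2)
freq = np.fft.rfftfreq(t.size, 1 / fs)

for f0 in (f1, f_fault):
    idx = np.argmin(np.abs(freq - f0))
    print(f"amplitude near {f0:6.2f} Hz: {spec[idx]:.4f}")
```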

Keywords: permanent magnet, diagnosis, demagnetization, modelling

Procedia PDF Downloads 34
27018 In-situ Observations Using SEM-EBSD for Bending Deformation in Single-Crystal Materials

Authors: Yuko Matayoshi, Takashi Sakai, Yin-Gjum Jin, Jun-ichi Koyama

Abstract:

To elucidate the material characteristics of single crystals of pure aluminum and copper, the respective relations between crystallographic orientations and microstructures were examined, along with bending and mechanical properties. The texture distribution was also analysed. Bending tests were performed in an SEM apparatus while the deformation behavior was observed. Analytical results related to crystal direction maps, inverse pole figures, and textures were obtained from electron backscatter diffraction (EBSD) analyses.

Keywords: pure aluminum, pure copper, single crystal, bending, SEM-EBSD analysis, texture, microstructure

Procedia PDF Downloads 345
27017 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

The sound absorption coefficient is considered important in design because noise affects the emotional quality of a car. In practice it is tuned through many experiments in the field because predictions for multi-layer materials are unreliable. In this paper, we present the design of sound absorption for automotive interior materials with multiple layers using software that estimates the sound absorption coefficient for a reverberation chamber. Additionally, we introduce a method for estimating the physical properties required to predict the sound absorption coefficient of car interior materials with multiple layers. These properties are calculated by an inverse algorithm, which is very economical because it provides information about the physical properties without expensive equipment. A correlation test is carried out to ensure reliability and accuracy. The data used for the correlation are sound absorption coefficients measured in the reverberation chamber. In this way, designing automotive interior materials is considered economical and efficient, and design optimization for the sound absorption coefficient is also easy to implement.

Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes

Procedia PDF Downloads 281
27016 Occupational Attainment of Second Generation of Ethnic Minority Immigrants in the UK

Authors: Rukhsana Kausar, Issam Malki

Abstract:

The integration and assimilation of ethnic minority immigrants (EMIs) and their subsequent generations remains a serious unsettled issue in most host countries. This study conducts a gender-disaggregated labour market analysis to investigate specifically whether the second generation of ethnic minority immigrants in the UK is gaining access to professional and managerial employment and advantaged occupational positions on a par with their native counterparts. The data used to examine the labour market achievements of EMIs are taken from the Labour Force Survey (LFS) for the period 2014-2018. We apply a multivalued treatment under ignorability, as proposed by Cattaneo (2010), which refers to treatment effects under the assumptions of (i) selection on observables and (ii) common support. We report estimates of the Average Treatment Effect (ATE), the Average Treatment Effect on the Treated (ATET), and Potential Outcome Means (POM) using three estimators: Regression Adjustment (RA), Augmented Inverse Probability Weighting (AIPW) and Inverse Probability Weighting-Regression Adjustment (IPWRA). We consider two cases: in the first case, with four categories, first-generation natives are the base category; the second case combines all natives as the base group. Our findings suggest the following. Under Case 1, the estimated probabilities and differences across groups are consistently similar and highly significant. As expected, first-generation natives have the highest probability of higher career attainment among both men and women. The findings also suggest that first-generation immigrants perform better than the remaining two groups, including second-generation natives and immigrants. Furthermore, second-generation immigrants have a higher probability of attaining a higher professional career, while this probability is lower for a managerial career. Similar conclusions are reached under Case 2; that is to say, both first-generation and second-generation immigrants have a lower probability of higher career and managerial attainment, and first-generation immigrants are found to perform better than second-generation immigrants.
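
A minimal sketch of the inverse probability weighting idea behind the IPW/AIPW/IPWRA estimators mentioned above, shown for a simple binary "treatment" on simulated data; the multivalued estimators of Cattaneo (2010) used in the study generalize this, and all variable names and the data-generating process here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: X = covariates, D = "treatment" (e.g. second-generation immigrant),
# Y = outcome (e.g. indicator of professional/managerial occupation). Illustrative only.
n = 5000
X = rng.standard_normal((n, 3))
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
D = rng.binomial(1, p_treat)
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.4 * D + 0.6 * X[:, 0]))))

# Step 1: propensity scores from a logistic model of treatment on covariates.
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# Step 2: Hajek (normalized) inverse probability weighting estimate of the ATE.
w1, w0 = D / ps, (1 - D) / (1 - ps)
ate_ipw = np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
print(f"IPW estimate of the ATE: {ate_ipw:.3f}")
```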

Keywords: immigrants, second generation, occupational attainment, ethnicity

Procedia PDF Downloads 86
27015 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method

Authors: Atilla Bayram

Abstract:

This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. The manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. In fact, each VGT module, with 3 degrees of freedom (DOF), is a planar parallel manipulator, and the operational planes of these VGT modules are arranged to be orthogonal to each other. The manipulator also contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. These three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, according to these equations, the inverse kinematics is solved based on an optimization with joint limit avoidance. The dynamic equations are formed by using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm. In the switching control method, the genetic algorithm is only used for fine tuning in the compensation of the trajectory tracking errors.
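
A minimal sketch of inverse kinematics posed as an optimization with joint-limit avoidance, illustrated on a simple planar 3-link serial arm rather than the paper's 7-DOF hybrid VGT manipulator; the link lengths, joint limits and weighting are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

link = np.array([0.4, 0.3, 0.2])                   # illustrative link lengths (m)
q_min, q_max = np.full(3, -2.0), np.full(3, 2.0)    # joint limits (rad)
q_mid = 0.5 * (q_min + q_max)

def fk(q):
    """Forward kinematics of a planar 3R arm: end-effector position (x, y)."""
    ang = np.cumsum(q)
    return np.array([np.sum(link * np.cos(ang)), np.sum(link * np.sin(ang))])

def cost(q, x_des, w_limit=0.05):
    # Task-space error plus a secondary term pushing joints toward mid-range,
    # a common joint-limit-avoidance criterion for redundant arms.
    task = np.sum((fk(q) - x_des) ** 2)
    limits = np.sum(((q - q_mid) / (q_max - q_min)) ** 2)
    return task + w_limit * limits

x_des = np.array([0.5, 0.4])                        # desired end-effector position
res = minimize(cost, x0=np.zeros(3), args=(x_des,),
               bounds=list(zip(q_min, q_max)), method="L-BFGS-B")
print("joint solution:", np.round(res.x, 3), " reached:", np.round(fk(res.x), 3))
```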

Keywords: computed force method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss

Procedia PDF Downloads 311
27014 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test

Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany

Abstract:

Lithium-ion batteries are currently used for many applications, including satellites, electric vehicles and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications. Evaluation of the life of these batteries and of their reliability becomes crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of lifetime. However, another important factor that can be considered critical in many applications, such as electric vehicles, is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of the applied load on the battery cycle time. The reliability was evaluated using an accelerated life test. Least squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed for estimating the reliability functions. The probability density function, failure rate and reliability function under each of the applied loads were evaluated and compared. An inverse power model is introduced that can predict the cycle time at any given stress level.
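
A minimal sketch of the estimation procedure described: failure (cycle) times are ranked, median ranks are obtained with Bernard's approximation, and the Weibull parameters follow from a least-squares line in the linearized plotting coordinates; an inverse power law life-stress fit is shown at the end. All numerical values are illustrative, not the paper's measurements.

```python
import numpy as np

# Illustrative cycle-time data at one stress level (not the paper's measurements).
cycles = np.sort(np.array([182., 211., 245., 268., 297., 331., 374., 420.]))
n = cycles.size

# Median ranks via Bernard's approximation F_i ≈ (i - 0.3) / (n + 0.4).
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Weibull linearization: ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta).
x, y = np.log(cycles), np.log(-np.log(1 - F))
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} cycles")

# Inverse power law life-stress model: L(S) = K / S**m, i.e. ln L = ln K - m ln S.
S = np.array([0.5, 1.0, 1.5])               # applied load levels (illustrative)
eta_S = np.array([900., 330., 160.])         # characteristic life at each load
slope, lnK = np.polyfit(np.log(S), np.log(eta_S), 1)
m = -slope
print(f"inverse power exponent m = {m:.2f}, "
      f"predicted life at S = 1.2: {np.exp(lnK) * 1.2 ** slope:.0f} cycles")
```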

Keywords: accelerated life test, inverse power law, lithium-ion battery, reliability evaluation, Weibull distribution

Procedia PDF Downloads 147
27013 Assessing the Impact of Covid-19 Pandemic on Waste Management Workers in Ghana

Authors: Mensah-Akoto Julius, Kenichi Matsui

Abstract:

This paper examines the impact of COVID-19 on waste management workers in Ghana. A questionnaire survey was conducted among 60 waste management workers in the Accra metropolis, the capital region of Ghana, to understand the impact of the COVID-19 pandemic on waste generation, workers' safety in collecting solid waste, and service delivery. To find correlations between the pandemic and the safety of waste management workers, a regression analysis was used. Regarding waste generation, the results show that the pandemic led to the highest annual per capita solid waste generation, 3,390 tons, in 2020. Regarding the safety of workers, the regression analysis shows a significant and inverse association between COVID-19 and waste management services. This means that contaminated wastes may infect field workers with COVID-19 due to their direct exposure. A rise in new infection cases would have a negative impact on the safety and service delivery of the workers. The results also show that an increase in economic activities negatively impacts waste management workers. The analysis, however, finds no statistical relationship between workers' service delivery and employees' salaries. The study then discusses how municipal waste management authorities can ensure safe and effective waste collection during the pandemic.

Keywords: Covid-19, waste management worker, waste collection, Ghana

Procedia PDF Downloads 171
27012 The Mediation Role of Loneliness in the Relationship between Interpersonal Trust and Empathy

Authors: Ghazal Doostmohammadi, Susan Rahimzadeh

Abstract:

Aim: This research aimed to investigate the relationship between empathy and interpersonal trust and to examine the mediating role of loneliness between them in both genders. Methods: With a correlational descriptive design, 192 university students (130 female and 62 male) responded to questionnaires comprising the "empathy quotient," "loneliness," and "interpersonal trust" tests. These tests were designed and validated by experts in the field. Data were analysed using Pearson correlation and path analysis, a statistical technique that uses standard linear regression equations to determine the degree of conformity of a theoretical causal model with reality. Results: The data analysis showed that there was no significant correlation of interpersonal trust with either loneliness (t=0.169) or empathy (t=0.186), while there was a significant negative correlation (t=0.359) between empathy and loneliness; that is, there is an inverse correlation between empathy and loneliness. The path analysis confirmed the research hypothesis about the mediating role of loneliness between empathy and interpersonal trust, but gender did not play a role in this relationship. Conclusion: Clinical professionals and education trainers should pay more attention to interpersonal trust as a basic need and try to recreate and shape it to prevent people's social breakdown; on the other hand, training in self-disclosure (especially in men), expression of feelings and courage should be given double importance to prevent the consequences of loneliness.

Keywords: empathy, loneliness, interpersonal trust, gender

Procedia PDF Downloads 55
27011 Sidelobe-Free Inverse Synthetic Aperture Radar Imaging of Non-Cooperative Moving Targets Using WiFi

Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu

Abstract:

In recent years, with the rapid development of radio frequency technology, the differences between radar sensing and wireless communication in terms of receiving and transmitting channels, signal processing, and data management and control have been gradually shrinking, and there has been a trend toward integrated communication and radar sensing. However, most existing radar imaging technologies based on communication signals are combined with synthetic aperture radar (SAR) imaging, which does not conform to the practical application case of the integration of communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the imaging mechanism of inverse synthetic aperture radar (ISAR). The method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so that the sidelobe effect in range compression is removed, and it combines the Radon transform and fractional Fourier transform (FrFT) parameter estimation methods to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and demonstrate its broad application prospects in the field of intelligent transportation.
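
A minimal sketch of the OFDM property being exploited: because the transmitted subcarrier symbols are known, the frequency-domain channel can be estimated by element-wise division rather than correlation, and the IFFT of that estimate gives a range/delay profile without the data-dependent sidelobes of a matched filter. The numerology, scatterer geometry and noise level are illustrative assumptions, and the motion compensation, Radon and FrFT steps of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative OFDM symbol: N subcarriers carrying random 16-QAM data (assumed numerology).
N, df = 256, 312.5e3                                    # subcarriers, subcarrier spacing (Hz)
X = (2 * rng.integers(0, 4, N) - 3) + 1j * (2 * rng.integers(0, 4, N) - 3)

# Two point scatterers: frequency-domain channel H[k] = sum_i a_i exp(-j2pi f_k tau_i).
f = np.arange(N) * df
taus, amps = np.array([75e-9, 237.5e-9]), np.array([1.0, 0.6])
H = (amps * np.exp(-2j * np.pi * f[:, None] * taus)).sum(axis=1)
Y = H * X + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Range compression, two ways:
#  (a) matched filter (correlation): keeps a data-dependent |X|^2 modulation -> sidelobes
#  (b) element-wise division by the known symbols: removes that modulation ("sidelobe-free")
h_mf = np.fft.ifft(Y * np.conj(X))
h_div = np.fft.ifft(Y / X)

def peak_sidelobe_db(h, target_bins):
    mags = np.abs(h)
    side = np.delete(mags, target_bins)
    return 20 * np.log10(side.max() / mags.max())

bins = np.round(taus * N * df).astype(int)              # true delay bins (6 and 19 here)
print("matched filter  peak sidelobe: %.1f dB" % peak_sidelobe_db(h_mf, bins))
print("symbol division peak sidelobe: %.1f dB" % peak_sidelobe_db(h_div, bins))
```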

Keywords: integration of communication and radar, OFDM, radon, FrFT, ISAR

Procedia PDF Downloads 92
27010 Preliminary Geophysical Assessment of Soil Contaminants around Wacot Rice Factory Argungu, North-Western Nigeria

Authors: A. I. Augie, Y. Alhassan, U. Z. Magawata

Abstract:

A geophysical investigation was carried out at the Wacot rice factory, Argungu, north-western Nigeria, using the 2D electrical resistivity method. The area falls between latitudes 12˚44′23ʺN and 12˚44′50ʺN and longitudes 4˚32′18ʺE and 4˚32′39ʺE, covering a total area of about 1.85 km². Two profiles were surveyed in the Wenner configuration using a resistivity meter (Ohmega). The data obtained from the study area were modeled using the RES2DINV software, which gave an automatic interpretation of the apparent resistivity data. The inverse resistivity models of the profiles show high resistivity values ranging from 208 Ωm to 651 Ωm. These high resistivity values in the overburden are due to the dryness and compactness of the strata that lead to consolidation, which is an indication that these parts of the area are free from leachate contamination. However, the inverse models also show regions of low resistivity values (1 Ωm to 18 Ωm); these zones were identified as clayey and as the most contaminated zones. The regions of low resistivity thereby indicate the leachate plume, or the zones of high leachate concentration, because clay and leachate have similar resistivity values. The leachate spreads mainly from the factory into the surrounding area and its groundwater. The maximum leachate infiltration was found at depths of 1 m to 15.9 m (P1) and 6 m to 15.9 m (P2) vertically, as well as at distances along the profiles from 67 m to 75 m (P1), 155 m to 180 m (P1), and 115 m to 192 m (P2) laterally.

Keywords: contaminant, leachate, soil, groundwater, electrical, resistivity

Procedia PDF Downloads 139
27009 Inversion of Gravity Data for Density Reconstruction

Authors: Arka Roy, Chandra Prakash Dubey

Abstract:

Inverse problems are generally used to recover hidden information from externally available data. Here, the vertical component of the gravity field is used to calculate the underlying density structure. Ill-posedness is the main obstacle in any inverse problem. Linear regularization using the Tikhonov formulation is applied with an appropriate choice of SVD and GSVD components. For handling real data, the noise level relative to the signal should be low to obtain a reliable solution. In our study, 2D and 3D synthetic models with rectangular grids are used for the gravity field calculation and its corresponding inversion for density reconstruction. A fine grid is also considered to capture any irregular structure. Keeping the algebraic ambiguity in mind, the number of observation points should be greater than the number of model parameters. A Picard plot is presented here for choosing the appropriate, or main controlling, eigenvalues for a regularized solution. Another important study is the depth resolution plot (DRP). DRPs are generally used for studying how the inversion is influenced by regularization or discretization. Our further study involves the inversion of real gravity data from the Vredefort Dome, South Africa. We apply our method to these data. The resulting density structure is in good agreement with the known formation in that region, which provides additional support for our method.
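
A minimal sketch of Tikhonov regularization via the SVD on a toy 1D problem (a smoothing kernel stands in for the gravity forward operator, and grid sizes, noise level and the regularization parameter are illustrative assumptions), together with the quantities a Picard plot compares: the singular values and the data coefficients |u_iᵀd|.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed linear problem d = G m + noise; a Gaussian smoothing kernel stands in
# for the gravity forward operator (illustrative only).
n = 60
x = np.linspace(0, 1, n)
G = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01) / n
m_true = np.sin(3 * np.pi * x) * (x > 0.3) * (x < 0.8)
d = G @ m_true + 1e-4 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(G)
coeffs = np.abs(U.T @ d)                  # a Picard plot compares these |u_i^T d| with s_i
print("leading sigma_i:   ", np.round(s[:5], 4))
print("leading |u_i^T d|: ", np.round(coeffs[:5], 4))

# Tikhonov-regularized solution via SVD filter factors f_i = s_i^2 / (s_i^2 + alpha^2),
# i.e. m = sum_i f_i (u_i^T d / s_i) v_i.
alpha = 1e-3
f = s ** 2 / (s ** 2 + alpha ** 2)
m_tik = Vt.T @ (f * (U.T @ d) / s)
print("relative reconstruction error: %.3f"
      % (np.linalg.norm(m_tik - m_true) / np.linalg.norm(m_true)))
```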

Keywords: depth resolution plot, gravity inversion, Picard plot, SVD, Tikhonov formulation

Procedia PDF Downloads 183
27008 Higher Freshwater Fish and Sea Fish Intake Is Inversely Associated with Liver Cancer in Patients with Hepatitis B

Authors: Maomao Cao

Abstract:

Background and aims: While the association between higher fish consumption and lower liver cancer risk has been confirmed, the association between intake of specific fish types and liver cancer risk remains unknown. We aimed to identify the association between specific fish consumption and the risk of liver cancer. Methods: Based on a community-based seropositive hepatitis B cohort involving 18,404 individuals, face-to-face interviews were conducted with a standardized questionnaire to acquire baseline information. Three common fish types in this study were analyzed: freshwater fish, sea fish, and small fish (shrimp, crab, conch, and shellfish). All participants received liver cancer screening, and possible cases were identified by CT or MRI. Multivariable logistic models were applied to estimate odds ratios (OR) and 95% confidence intervals (CI). Multivariate multiple imputation was utilized to impute observations with missing values. Results: 179 liver cancer cases were identified. Consumption of freshwater fish and sea fish at least once a week had a strong inverse association with liver cancer risk compared with the lowest intake level, with adjusted ORs of 0.53 (95% CI, 0.38-0.75) and 0.38 (95% CI, 0.19-0.73), respectively. This inverse association was also observed after imputation. There was no statistically significant association between intake of small fish and liver cancer risk (OR = 0.58, 95% CI, 0.32-1.08). Conclusions: Our findings suggest that consumption of freshwater fish and sea fish at least once a week could reduce liver cancer risk.

Keywords: cross-sectional study, fish intake, liver cancer, risk factor

Procedia PDF Downloads 242
27007 Application of Remote Sensing and GIS for Delineating Groundwater Potential Zones of Ariyalur, Southern Part of India

Authors: G. Gnanachandrasamy, Y. Zhou, S. Venkatramanan, T. Ramkumar, S. Wang

Abstract:

Groundwater is among the most precious natural resources around the world, and its reserves are shrinking day by day. Consequently, there is an urgent need for the demarcation of potential groundwater zones. For this purpose, the integration of geographical information system (GIS) and remote sensing (RS) techniques into hydrological studies has brought a dramatic change to the field of hydrological research. These techniques are used to locate potential groundwater zones. This research was carried out to identify groundwater potential zones in Ariyalur, in the southern part of India, with the help of GIS and remote sensing techniques. The groundwater potential zones were identified using different thematic layers of geology, geomorphology, drainage, drainage density, lineaments, lineament density, soil and slope with the inverse distance weighting (IDW) method. The overall result reveals that the groundwater potential of the study area can be classified into five classes: very good (12.18%), good (22.74%), moderate (32.28%), poor (27.7%) and very poor (5.08%). The analysis suggests that very good groundwater potential zones occur in patches in the northern and central parts of the Jayamkondam, Andimadam and Palur regions of Ariyalur district. The results show that the inverse distance weighting method offered in this research is an effective tool for interpreting groundwater potential zones for the suitable development and management of groundwater resources in different hydrogeological environments.

Keywords: GIS, groundwater potential zone, hydrology, remote sensing

Procedia PDF Downloads 171
27006 Experimental Investigation and Numerical Simulations of the Cylindrical Machining of a Ti-6Al-4V Tree

Authors: Mohamed Sahli, David Bassir, Thierry Barriere, Xavier Roizard

Abstract:

Predicting the behaviour of the Ti-6Al-4V alloy during the turning operation is very important for the choice of suitable cutting tools and also for the machining strategies. In this study, a 3D model with thermo-mechanical coupling has been proposed to study the influence of the cutting parameters and also of lubrication on the performance of cutting tools. The constants of the Johnson-Cook constitutive model of the Ti-6Al-4V alloy were identified using inverse analysis based on the parameters of the orthogonal cutting process. Then, numerical simulations of the finishing machining operation were developed and experimentally validated for the cylindrical stock removal stage with the finishing cutting tool.
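
A minimal sketch of the inverse identification step for the Johnson-Cook flow-stress law σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ). The "measured" stresses below are synthetic, generated from illustrative constants purely to demonstrate the least-squares fit; they are not the values identified in the paper, and the melting/room temperatures are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Johnson-Cook flow stress: sigma = (A + B*eps**n)*(1 + C*ln(rate/rate0))*(1 - Tstar**m),
# with homologous temperature Tstar = (T - Troom) / (Tmelt - Troom).
def jc_stress(p, eps, rate, T, rate0=1.0, Troom=25.0, Tmelt=1660.0):
    A, B, n, C, m = p
    Tstar = (T - Troom) / (Tmelt - Troom)
    return (A + B * eps**n) * (1 + C * np.log(rate / rate0)) * (1 - Tstar**m)

rng = np.random.default_rng(0)

# Synthetic "measurements" over strain / strain-rate / temperature combinations.
# The generating constants are illustrative, not the identified Ti-6Al-4V set.
eps = np.tile(np.linspace(0.05, 0.6, 12), 6)
rate = np.repeat([1.0, 10.0, 100.0, 1.0, 10.0, 100.0], 12)
T = np.repeat([25.0, 25.0, 25.0, 400.0, 400.0, 400.0], 12)
p_true = np.array([800.0, 500.0, 0.30, 0.030, 1.0])
sigma_meas = jc_stress(p_true, eps, rate, T) * (1 + 0.01 * rng.standard_normal(eps.size))

# Inverse identification: fit A, B, n, C, m by nonlinear least squares.
res = least_squares(lambda p: jc_stress(p, eps, rate, T) - sigma_meas,
                    x0=[600.0, 300.0, 0.2, 0.01, 0.8],
                    bounds=([0, 0, 0, 0, 0.1], [2000, 2000, 1, 0.2, 2]))
print("identified [A, B, n, C, m]:", np.round(res.x, 3))
```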

Keywords: titanium turning, cutting tools, FE simulation, chip

Procedia PDF Downloads 150
27005 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving window approach is heavily geared towards secondary structure considerations relative to the sequence, which is treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, so that scanning is only feasible in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 210
27004 Dietary Patterns and Hearing Loss in Older People

Authors: N. E. Gallagher, C. E. Neville, N. Lyner, J. Yarnell, C. C. Patterson, J. E. Gallacher, Y. Ben-Shlomo, A. Fehily, J. V. Woodside

Abstract:

Hearing loss is highly prevalent in older people and can reduce quality of life substantially. Emerging research suggests that potentially modifiable risk factors, including risk factors previously related to cardiovascular disease risk, may be associated with a decreased or increased incidence of hearing loss. This has prompted investigation into the possibility that certain nutrients, foods or dietary patterns may also be associated with the incidence of hearing loss. The aim of this study was to determine any associations between dietary patterns and hearing loss in men enrolled in the Caerphilly study. The Caerphilly prospective cohort study began in 1979-1983 with the recruitment of 2512 men aged 45-59 years. Dietary data were collected using a self-administered, semi-quantitative, 56-item food frequency questionnaire (FFQ) at baseline (1979-1983), and a 7-day weighed food intake (WI) in a 30% sub-sample, while the pure-tone unaided audiometric threshold was assessed at 0.5, 1, 2 and 4 kHz between 1984 and 1988. Principal components analysis (PCA) was carried out to determine a posteriori dietary patterns, and multivariate linear and logistic regression models were used to examine associations with hearing level (pure tone average (PTA) of the frequencies 0.5, 1, 2 and 4 kHz in decibels (dB)) for linear regression and with hearing loss (PTA > 25 dB) for logistic regression. Three dietary patterns were determined using PCA on the FFQ data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, both linear and logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P < 0.001), and linear regression analysis showed a significant association between the High sugar/Alcohol avoider pattern and hearing loss (P = 0.04). Three similar dietary patterns were determined using PCA on the WI data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P = 0.02) and a significant association between the Traditional pattern and hearing loss (P = 0.04). A Healthy dietary pattern was thus found to be significantly inversely associated with hearing loss in middle-aged men in the Caerphilly study. Furthermore, a High sugar/Alcohol avoider pattern (FFQ) and a Traditional pattern (WI) were associated with poorer hearing levels. Consequently, the role of dietary factors in hearing loss remains to be fully established and warrants further investigation.

Keywords: ageing, diet, dietary patterns, hearing loss

Procedia PDF Downloads 210
27003 Spatial Distribution of Heavy Metals in Khark Island-Iran Using Geographic Information System

Authors: Abbas Hani, Maryam Jassasizadeh

Abstract:

The concentrations of Cd, Pb, and Ni were determined in 40 samples collected from the surface soils of Khark Island. Geostatistical methods and GIS were used to identify heavy metal sources and their spatial pattern. Principal component analysis coupled with correlation between the heavy metals showed that the levels of the metals mentioned were lower than the standard levels. The data obtained from the soil analysis were then tested for normality of distribution. The best interpolation method for cadmium and nickel was ordinary kriging, and the best interpolation method for lead was inverse distance weighting. The results of this study help us to understand the heavy metal distribution and to make decisions for the remediation of soil pollution.

Keywords: geostatistics, ordinary kriging, heavy metals, GIS, Khark

Procedia PDF Downloads 138
27002 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell carcinoma (HNSCC) survival, but the evidence for causality has not been consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW) 2.64 [95% CI, 1.28-5.43]; P = 0.01) but better OS in HPV-positive OPSCC patients (HR IVW 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS both in the total HNSCC group (HR IVW 4.17 [95% CI, 1.06-16.36]; P = 0.04) and in the non-HPV-driven HNSCC patients (HR IVW 7.33 [95% CI, 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
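
A minimal sketch of the inverse-variance weighted MR estimator used above: per-SNP Wald ratios (SNP-outcome effect divided by SNP-exposure effect) are combined with weights equal to their inverse variances. The summary statistics below are simulated, not the VOYAGER/GLGC data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated GWAS summary statistics for J independent SNPs (illustrative only):
# beta_x = SNP-exposure (LDL-C) effects, beta_y = SNP-outcome (log hazard) effects.
J, causal_effect = 25, 0.4
beta_x = rng.uniform(0.05, 0.2, J)
se_y = rng.uniform(0.01, 0.03, J)
beta_y = causal_effect * beta_x + se_y * rng.standard_normal(J)

# Wald ratio per SNP and its first-order standard error.
ratio = beta_y / beta_x
se_ratio = se_y / np.abs(beta_x)

# Inverse-variance weighted (fixed-effect) estimate and standard error.
w = 1.0 / se_ratio**2
ivw = np.sum(w * ratio) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW estimate = {ivw:.3f} (SE {se_ivw:.3f}); "
      f"hazard ratio = {np.exp(ivw):.2f} per unit exposure")
```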

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 73
27001 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks that integrate the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ) (1), where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 46
27000 An Analytical Study on the Effect of Chronic Liver Disease Severity and Etiology on Lipid Profiles

Authors: Thinakar Mani Balusamy, Venkateswaran A. R., Bharat Narasimhan, Ratnakar Kini S., Kani Sheikh M., Prem Kumar K., Pugazhendi Thangavelu, Arun Murugan, Sibi Thooran Karmegam, Radhakrishnan N., Mohammed Noufal, Amit Soni

Abstract:

Background and Aims: The liver is integral to lipid metabolism, and a compromise in its function leads to perturbations in these pathways. In this study, we aim to determine the correlation between chronic liver disease (CLD) severity and its effect on lipid parameters. We also look at the etiology-specific effects on lipid levels. Materials and Methods: This is a retrospective cross-sectional analysis of 250 patients with cirrhosis compared to 250 healthy age- and sex-matched controls. Severity assessment of CLD using MELD and Child-Pugh scores was performed, and etiological details were collected. A questionnaire was used to obtain patient demographic details and, lastly, a fasting lipid profile (total, LDL and HDL cholesterol, triglycerides and VLDL) was obtained. Results: All components of the lipid profile declined linearly with increasing severity of CLD as determined by MELD and Child-Pugh scores. Lipid levels were clearly lower in CLD patients as compared to healthy controls. Interestingly, preliminary analysis indicated that CLD of different etiologies had differential effects on lipid profiles. This aspect is under further analysis. Conclusion: All components of the lipid profile were definitely lower in CLD patients as compared to controls and demonstrated an inverse correlation with increasing severity. The utilization of this parameter as a prognostic aid requires further study. Additionally, preliminary analysis indicates that various CLD etiologies appear to have specific effects on the lipid profile, a finding under further analysis.

Keywords: CLD, cholesterol, HDL, LDL, lipid profile, triglycerides, VLDL

Procedia PDF Downloads 197
26999 Exploring Subjective Simultaneous Mixed Emotion Experiences in Middle Childhood

Authors: Esther Burkitt

Abstract:

Background: Evidence is mounting that mixed emotions can be experienced simultaneously in different ways across the lifespan. Four types of patterns of simultaneously mixed emotions (sequential, prevalent, highly parallel, and inverse types) have been identified in middle childhood and adolescence. Moreover, the recognition of these experiences tends to develop first when children consider peers rather than the self. This evidence from children and adolescents is based on examining the presence of experiences specified in adulthood. The present study therefore applied an exhaustive coding scheme to investigate whether children experience types of simultaneous mixed emotional experience not previously identified. Methodology: One hundred and twenty children (60 girls) aged 7 years 1 month to 9 years 2 months (mean = 8 years 1 month; SD = 10 months) were recruited from mainstream schools across the UK. Two age groups were formed (youngest, n = 61, 7 years 1 month to 8 years 1 month; oldest, n = 59, 8 years 2 months to 9 years 2 months) and allocated to one of two conditions, hearing vignettes describing happy and sad mixed emotion events involving either an age- and gender-matched protagonist or themselves. Results: Loglinear analyses identified new flexuous, vertical, and other types of experience along with the established sequential, prevalent, highly parallel, and inverse types. Older children recognised more complex experiences in the other condition than in the self condition. Conclusion: Several additional types of simultaneously mixed emotions are recognised in middle childhood. The theoretical relevance of simultaneous mixed emotion processing in childhood is considered, and the potential utility of the findings in emotion assessments is discussed.

Keywords: emotion, childhood, self, other

Procedia PDF Downloads 50
26998 Evaluating Mechanical Properties of CoNiCrAlY Coating from Miniature Specimen Testing at Elevated Temperature

Authors: W. Wen, G. Jackson, S. Maskill, D. G. McCartney, W. Sun

Abstract:

CoNiCrAlY alloys have been widely used as bond coats for thermal barrier coating (TBC) systems because of their low cost, improved control of composition, and the feasibility of tailoring the coating microstructures. Coatings are in general very thin structures, and it is therefore impossible to characterize the mechanical responses of the materials via conventional mechanical testing methods. For this reason, miniature specimen testing methods, such as the small punch test technique, have been developed. This paper presents some of the recent research in evaluating the mechanical properties of CoNiCrAlY coatings at room and high temperatures, through the use of small punch testing and a purpose-developed miniature specimen tensile test, applicable over a range of temperatures, to investigate the elastic-plastic and creep behavior as well as the ductile-brittle transition temperature (DBTT) behavior. An inverse procedure was developed to derive the mechanical properties of the coating materials from such tests. A two-layer specimen test method is also described. The key findings include: 1) the temperature-dependent coating properties can be accurately determined by miniature tensile testing within a wide range of temperatures; 2) consistent DBTTs can be identified by both the SPT and the miniature tensile tests (~650 °C); and 3) the FE SPT modelling has shown good capability of simulating the early local cracking. In general, the temperature-dependent material behavior of the CoNiCrAlY coating has been effectively characterized using miniature specimen testing and the inverse method.

Keywords: NiCoCrAlY coatings, mechanical properties, DBTT, miniature specimen testing

Procedia PDF Downloads 133
26997 Rejection Sensitivity and Romantic Relationships: A Systematic Review and Meta-Analysis

Authors: Mandira Mishra, Mark Allen

Abstract:

This meta-analysis explored whether rejection sensitivity relates to facets of romantic relationships. A comprehensive literature search identified 60 studies (147 effect sizes; 16,955 participants) that met inclusion criteria. Data were analysed using inverse-variance weighted random effects meta-analysis. Mean effect sizes from 21 meta-analyses provided evidence that more rejection sensitive individuals report lower levels of relationship satisfaction and relationship closeness, lower levels of perceived partner satisfaction, a greater likelihood of intimate partner violence (perpetration and victimization), higher levels of relationship concerns and relationship conflict, and higher levels of jealousy and self-silencing behaviours. There was also some evidence that rejection sensitive individuals are more likely to engage in risky sexual behaviour and are more prone to sexual compulsivity. There was no evidence of publication bias and various levels of heterogeneity in computed averages. Random effects meta-regression identified participant age and sex as important moderators of pooled mean effects. These findings provide a foundation for the theoretical development of rejection sensitivity in romantic relationships and should be of interest to relationship and marriage counsellors and other relationship professionals.

Keywords: intimate partner violence, relationship satisfaction, commitment, sexual orientation, risky sexual behaviour

Procedia PDF Downloads 56
26996 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for the numerical simulations. Polynomial and neural-network-based response surfaces have been generated to analyze the relationship between the soil parameters and the displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. The objective functions are categorized by considering the measured data with and without uncertainty in the instruments and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. The Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis could be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
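
A minimal sketch of particle swarm optimization minimizing a least-squares objective between "measured" and model-predicted displacements; the two-parameter analytical surrogate used here in place of the Plaxis finite element model, the synthetic measurements, and all tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate standing in for the FE model: displacements at 5 monitoring points
# as a simple function of two soil parameters (E = stiffness, phi = friction angle).
points = np.linspace(0.1, 1.0, 5)
def predict(params):
    E, phi = params
    return 100.0 / E * points + 0.05 * phi * points**2   # illustrative response surface

theta_true = np.array([40.0, 32.0])
d_meas = predict(theta_true) + 0.01 * rng.standard_normal(points.size)

def objective(params):
    return np.sum((predict(params) - d_meas) ** 2)        # least-squares misfit

# Basic global-best PSO (inertia w, cognitive c1, social c2 are typical defaults).
lb, ub = np.array([10.0, 20.0]), np.array([100.0, 45.0])
n_part, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = lb + (ub - lb) * rng.random((n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("identified parameters:", np.round(gbest, 2), " true:", theta_true)
```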

Keywords: Rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 110
26995 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy

Authors: M. Regina Carreira-Lopez

Abstract:

Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logical relation (L) can be formally defined as a triple ontological relation LO = ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes so as to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL code and allows data export to XML schemas for achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the numbers of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also shows that the co-associations between (t) and its corresponding synonyms and antonyms (synsets) are likewise inverse. The results from grouping facets, or polysemic words, with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogues for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models through a prior adaptation of the base ontology to structured meta-languages such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a faceted semantic indexing approach. It enables information retrieval as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, BKF⟩, where LN is a set of entities that connect with other nodes to form a rooted tree in ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts encoded in a formal language, L. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy), but neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages with structured documents in XML.

Keywords: hypernymy, information retrieval, lightweight ontology, resonance

Procedia PDF Downloads 102