Search results for: highly accurate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6685

6295 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are computed from the displacements obtained at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses derived from them at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational cost of shape optimization in engineering problems. The present work focuses on identifying unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, but some points have also been located on the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for the quadratic 9-noded serendipity element are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. Five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using the same order of 25-noded quadratic Lagrangian elements, which were taken as the standard. Root mean square errors were then plotted with respect to various locations within the element as well as on the boundaries, and conclusions were drawn. The numerical verification shows that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for stresses. Stresses sampled on the boundary within the segment enclosed between the 0.577 points and the edge midpoints are also very good, with very small error; as sampling points move away from these locations, the error increases rapidly. It is thus established that there are unique points, including on the element boundary, where stresses are accurate; these can be utilized in solving various engineering problems and are also useful in shape optimization.
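
The reported locations are consistent with the classical result that stresses are superconvergent at the reduced 2×2 Gauss quadrature points, since 0.577 ≈ 1/√3 is the 2×2 Gauss abscissa. The sketch below, with hypothetical nodal data, shows how stresses would be sampled at these points in a 9-noded quadrilateral; it is a minimal plane-stress illustration, not the authors' code.

```python
import numpy as np

# 1-D quadratic Lagrange basis and derivatives on [-1, 1], nodes (-1, 0, +1)
def lag1d(x):
    N = np.array([x * (x - 1) / 2, 1 - x**2, x * (x + 1) / 2])
    dN = np.array([x - 0.5, -2 * x, x + 0.5])
    return N, dN

def stress_at(xi, eta, coords, u, D):
    """Stress at local point (xi, eta) of a 9-node quad.
    coords: (9, 2) nodal coordinates, u: (18,) nodal displacements,
    D: (3, 3) plane-stress elasticity matrix."""
    Nx, dNx = lag1d(xi)
    Ny, dNy = lag1d(eta)
    # tensor-product shape functions, node (i, j) -> index 3*j + i
    dN_dxi = np.outer(Ny, dNx).ravel()
    dN_deta = np.outer(dNy, Nx).ravel()
    J = np.array([dN_dxi, dN_deta]) @ coords           # 2x2 Jacobian
    dN_xy = np.linalg.solve(J, np.array([dN_dxi, dN_deta]))
    B = np.zeros((3, 18))
    B[0, 0::2] = dN_xy[0]; B[1, 1::2] = dN_xy[1]
    B[2, 0::2] = dN_xy[1]; B[2, 1::2] = dN_xy[0]
    return D @ B @ u                                   # [sx, sy, txy]

g = 1 / np.sqrt(3)          # 0.577...: the reported optimal offset
samples = [(0, 0), (g, g), (-g, g), (g, -g), (-g, -g)]

# demo: square element, E = 1, nu = 0.3, uniform-strain displacements
E, nu = 1.0, 0.3
D = E / (1 - nu**2) * np.array([[1, nu, 0], [nu, 1, 0], [0, 0, (1 - nu) / 2]])
xs = np.array([-1.0, 0.0, 1.0])
coords = np.array([[x, y] for y in xs for x in xs])    # node (i, j) order
u = 0.01 * coords.ravel()
for xi, eta in samples:
    print(stress_at(xi, eta, coords, u, D))
```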

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 104
6294 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications

Authors: Avinoam Rabinovich

Abstract:

CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.
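
A minimal sketch of the combined-experiment inversion idea described above: data from several experiments are stacked into one least-squares problem for a permeability (multiplier) field. The forward model here is a linear placeholder standing in for a two-phase flow simulator, and all names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(log_k, n_obs=40):
    """Placeholder forward model mapping a log-permeability field to
    predicted observables; a real model would run a flow simulator."""
    rng = np.random.default_rng(0)       # fixed seed: same operator each call
    A = rng.standard_normal((n_obs, log_k.size)) / np.sqrt(log_k.size)
    return A @ log_k

def residuals(log_k, data_sets):
    # combining several experiments (flow rates / fluid fractions)
    # constrains the inversion better than a single data set
    return np.concatenate([forward(log_k) - d for d in data_sets])

n_cells = 64
truth = np.random.default_rng(1).normal(0, 0.3, n_cells)
data_sets = [forward(truth) + np.random.default_rng(i).normal(0, 0.01, 40)
             for i in (2, 3)]
fit = least_squares(residuals, x0=np.zeros(n_cells), args=(data_sets,))
print(np.abs(fit.x - truth).max())
```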

Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow

Procedia PDF Downloads 65
6293 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogeneous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the growing Muslim population in the country, paralleled by the expanding Halal Food Market. A multi-dimensional framework is proposed, testing for five forms of involvement, specifically Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk, and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey with 141 Muslims during the summer of 2020 in Liverpool, located in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced across religiosity groups, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics. Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, are engaged with the purchase process because they are worried about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
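
For readers outside SPSS, the clustering step can be approximated in open tooling: SPSS TwoStep selects the number of clusters by an information criterion such as BIC, which a Gaussian mixture with BIC model selection mimics. A sketch with placeholder data for the six inputs:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

# X: respondents x 6 inputs (intrinsic religiosity + 5 involvement scores);
# random placeholder standing in for the survey data (n = 141)
rng = np.random.default_rng(0)
X = rng.normal(size=(141, 6))

Xs = StandardScaler().fit_transform(X)

# pick the cluster count by BIC, analogous to TwoStep's second stage
bics = {k: GaussianMixture(k, random_state=0).fit(Xs).bic(Xs)
        for k in range(2, 8)}
k_best = min(bics, key=bics.get)
labels = GaussianMixture(k_best, random_state=0).fit_predict(Xs)
```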

Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK

Procedia PDF Downloads 52
6292 The Importance of Clinicopathological Features for Differentiation Between Crohn's Disease and Ulcerative Colitis

Authors: Ghada E. Esheba, Ghadeer F. Alharthi, Duaa A. Alhejaili, Rawan E. Hudairy, Wafaa A. Altaezi, Raghad M. Alhejaili

Abstract:

Background: Inflammatory bowel disease (IBD) consists of two specific gastrointestinal disorders: ulcerative colitis (UC) and Crohn's disease (CD). Despite their distinct natures, these two diseases share many similar etiologic, clinical, and pathological features; as a result, their accurate differential diagnosis may sometimes be difficult. Correct diagnosis is important because surgical treatment and long-term prognosis differ between UC and CD. Aim: This study aims to examine the characteristic clinicopathological features which help in the differential diagnosis between UC and CD, and to assess disease activity in ulcerative colitis. Materials and methods: This study was carried out on 50 selected cases, comprising 27 cases of UC and 23 cases of CD. All cases were examined using H&E staining and immunohistochemically for bcl-2 expression. Results: Characteristic features of UC include a decrease in mucous content, an irregular or villous surface, crypt distortion, and cryptitis, whereas the main cardinal histopathological features seen in CD were epithelioid granuloma, transmural chronic inflammation, and absence of mucin depletion, irregular surface, or crypt distortion. Three cases of UC were found to be associated with dysplasia. UC mucosa contains fewer Bcl-2+ cells compared with CD mucosa. Conclusion: This study, using multiple parameters such as clinicopathological features and Bcl-2 expression as studied by immunohistochemical staining, helped to achieve an accurate differentiation between UC and CD. Furthermore, this work shed light on the activity and different grades of UC, which could be important for the prediction of relapse.

Keywords: Crohn's disease, dysplasia, inflammatory bowel disease, ulcerative colitis

Procedia PDF Downloads 187
6291 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g., to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e., they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e., the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large gold set of highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e., it weighs precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g., in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
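
A minimal sketch of the scoring-plus-single-linkage idea: simple metadata rules (here, only a publication-year match) and a string similarity measure are combined into a score, scored pairs above a threshold become edges, and connected components yield the clusters. The rules, weights, and threshold are illustrative, not the paper's values.

```python
import re
from difflib import SequenceMatcher

def score_pair(a, b):
    """Rule-based score that two raw reference strings are duplicates;
    rules reinforce each other, weights are illustrative."""
    s = 0.0
    ya, yb = (re.search(r"\b(19|20)\d{2}\b", x) for x in (a, b))
    if ya and yb and ya.group() == yb.group():
        s += 0.3                                   # same publication year
    s += 0.7 * SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return s

def connected_components(n, edges):
    """Single-linkage clustering = connected components (union-find)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

refs = ["Smith J, Nature 2004, vol 430", "J. Smith (2004) Nature 430",
        "Doe A, Science 1999"]
edges = [(i, j) for i in range(len(refs)) for j in range(i + 1, len(refs))
         if score_pair(refs[i], refs[j]) > 0.8]    # threshold from evaluation
print(connected_components(len(refs), edges))
```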

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 188
6290 A Literature Review of Emotional Labor and Non-Task Behavior

Authors: Yeong-Gyeong Choi, Kyoung-Seok Kim

Abstract:

This study, a literature review, addresses the problem of conceptual ambiguity among research on emotional labor and examines the evolutionary trends and changing aspects of how the concept of emotional labor has been defined. In addition, existing studies relate deep acting and surface acting to positive and negative outcome variables, respectively. It was confirmed that for employees performing emotional labor, deep acting and surface acting are highly related to organizational citizenship behavior (OCB) and counterproductive work behavior (CWB), respectively. While the positive emotion that employees experience during the job performance process can easily trigger a positive non-task behavior such as OCB, the negative emotion that employees experience through excessive workload or unfair treatment can easily induce a negative behavior like CWB. The two management behaviors of emotional labor, surface acting and deep acting, can have either a positive or a negative effect on employees' non-task behavior, depending on which one they choose. Thus, the purpose of this review paper is to clarify the relationship between emotional labor and non-task behavior more specifically.

Keywords: emotion labor, non-task behavior, OCB, CWB

Procedia PDF Downloads 344
6289 Fabrication of Coatable Polarizer by Guest-Host System for Flexible Display Applications

Authors: Rui He, Seung-Eun Baik, Min-Jae Lee, Myong-Hoon Lee

Abstract:

The polarizer is one of the most essential optical elements in LCDs. Currently, the most widely used polarizers for LCDs are derivatives of the H-sheet polarizer. There is a need for coatable polarizers, which are much thinner and more stable than H-sheet polarizers. One possible approach to obtain thin, stable, and coatable polarizers is based on the use of a highly ordered guest-host system. In our research, we aimed to fabricate a coatable polarizer based on a highly ordered liquid crystalline monomer and dichroic dye ‘guest-host’ system, in which the anisotropic absorption of light is achieved by aligning a dichroic dye (guest) in the cooperative motion of the ordered liquid crystal (host) molecules. Firstly, we designed and synthesized a new reactive liquid crystalline monomer containing polymerizable acrylate groups as the ‘host’ material. The structure was confirmed by ¹H-NMR and IR spectroscopy. The liquid crystalline behavior was studied by differential scanning calorimetry (DSC) and polarized optical microscopy (POM). It was confirmed that the monomers possess a highly ordered smectic phase at relatively low temperature. Then, the photocurable ‘guest-host’ system was prepared by mixing the liquid crystalline monomer, dichroic dye, and photoinitiator. Coatable polarizers were fabricated by spin-coating the above mixture on a substrate with an alignment layer. In-situ photopolymerization was carried out at room temperature by irradiating UV light, resulting in the formation of a crosslinked structure that stabilized the aligned dichroic dye molecules. Finally, the dichroic ratio (DR), order parameter (S), and polarization efficiency (PE) were determined by polarized UV/Vis spectroscopy. We prepared the coatable polarizers using different types of dichroic dyes to meet the requirements of display applications. The results reveal that coatable polarizers at a thickness of 8 μm exhibited DR = 12~17 and relatively high PE (>96%), with the highest PE = 99.3%, showing their potential for LCD or flexible display applications.
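
The figures of merit above follow directly from the polarized absorbances and transmittances. A short sketch of the standard relations, assuming the usual guest-host definitions (PE conventions vary between papers, so the efficiency formula here is one common choice, not necessarily the authors'):

```python
import math

def dichroic_metrics(A_par, A_perp):
    """Dichroic ratio and dye order parameter from absorbances measured
    with light polarized parallel/perpendicular to the alignment axis."""
    DR = A_par / A_perp
    S = (DR - 1) / (DR + 2)        # standard guest-host order parameter
    return DR, S

def polarization_efficiency(T_max, T_min):
    # one common single-polarizer convention, in percent
    return 100 * math.sqrt((T_max - T_min) / (T_max + T_min))

DR, S = dichroic_metrics(A_par=1.5, A_perp=0.1)   # DR = 15 -> S ~ 0.82
print(DR, S)
```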

Keywords: coatable polarizer, display, guest-host, liquid crystal

Procedia PDF Downloads 248
6288 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, i.e., data for which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by supporting the deployment of extra resources where available.
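
A compact sketch of the E- and M-steps described above for interval-censored counts on a weekly grid (168 hourly bins). Each event known only to an interval is fractionally allocated in proportion to the current intensity; the moving-average smoothing is a crude stand-in for the paper's time-of-day and day-of-week penalties, and all settings are illustrative.

```python
import numpy as np

def em_intensity(intervals, n_bins=168, n_iter=200, smooth=0.3):
    """EM for interval-censored event times on a weekly grid.

    intervals: list of (lo, hi) bin-index pairs; each event is only
    known to have occurred in bins lo..hi (inclusive). Returns the
    per-bin intensity, assuming unit exposure per bin."""
    lam = np.ones(n_bins)
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for lo, hi in intervals:            # E-step: spread each event
            w = lam[lo:hi + 1]
            expected[lo:hi + 1] += w / w.sum()
        lam = expected                      # M-step (unit exposure)
        if smooth:                          # crude smoothness penalty
            lam = (1 - smooth) * lam + smooth * np.convolve(
                lam, np.ones(3) / 3, mode="same")
    return lam

events = [(0, 5), (3, 3), (100, 120)]       # hypothetical interval data
lam = em_intensity(events)
```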

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 63
6287 Cavity-Type Periodically-Poled LiNbO3 Device for Highly-Efficient Third-Harmonic Generation

Authors: Isao Tomita

Abstract:

We develop a periodically-poled LiNbO₃ (PPLN) device for highly-efficient third-harmonic generation (THG), where the THG efficiency is enhanced with a cavity. THG can usually be produced via χ(3)-nonlinear materials by optical pumping with very high pump power. Instead, we here propose THG by moderate-power pumping through a specially-designed PPLN device containing only χ(2) nonlinearity, where sum-frequency generation in the χ(2) process is employed for the mixing of a pump beam and a second-harmonic-generation (SHG) beam produced from the pump beam. The cavity is designed to increase the SHG power with dichroic mirrors attached to both ends of the device that perfectly reflect the SHG beam back into the device and yet let the pump and THG beams pass through. This brings about a THG-power enhancement because the THG power is proportional to the enhanced SHG power. We examine the dependence of the THG efficiency on the mirror reflectance and show that very high THG efficiency is obtained at moderate pump power when compared with that of a cavity-free PPLN device.
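
As a rough illustration of why the mirror reflectance matters, consider a toy power-scaling model (ours, not the paper's computation): if the mirrors return the SHG wave through the crystal, the circulating SHG power grows roughly as 1/(1 − R) in an incoherent-sum picture, and the THG power scales with the product of the pump and circulating SHG powers.

```python
# Toy model: THG enhancement vs. SHG mirror reflectance R, assuming
# P_THG ~ P_pump * P_SHG_circ and P_SHG_circ ~ P_SHG_single / (1 - R).
# Illustrative scaling only; a real device needs phase matching and
# pump depletion to be modeled.
def thg_enhancement(R):
    return 1.0 / (1.0 - R)

for R in (0.0, 0.5, 0.9, 0.99):
    print(f"R = {R:.2f} -> ~{thg_enhancement(R):.0f}x vs. cavity-free")
```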

Keywords: cavity, periodically-poled LiNbO₃, sum-frequency generation, third-harmonic generation

Procedia PDF Downloads 260
6286 Fabrication of Biosensor Based on Layered Double Hydroxide/Polypyrrole/Carbon Paste Electrode for Determination of Anti-Hypertensive and Prostatic Hyperplasia Drug Terazosin

Authors: Amira M. Hassanein, Nehal A. Salahuddin, Atsunori Matsuda, Toshiaki Hattori, Mona N. Elfiky

Abstract:

New insights into the design of highly sensitive, carbon-based electrochemical sensors are presented in this work. This was achieved by exploring the interesting properties of conductive (Mg/Al) layered double hydroxide-dodecyl sulphate/polypyrrole nanocomposites, which were synthesized by in-situ polymerization of pyrrole during the assembly of the (Mg/Al) layered double hydroxide, employing the anionic surfactant dodecyl sulphate as a modifier. The morphology and surface area of the nanocomposites changed with the percentage of pyrrole. Under optimal conditions, the modified carbon paste electrode achieved detection limits of 0.057 and 0.134 nmol·L⁻¹ of terazosin hydrochloride in pharmaceutical formulation and spiked human serum fluid, respectively. Moreover, the sensors are highly stable, reusable, and free from interference by other excipients commonly present in drug formulations.

Keywords: layered double hydroxide, polypyrrole, terazosin hydrochloride, square-wave adsorptive anodic stripping voltammetry

Procedia PDF Downloads 215
6285 The Impact of University League Tables on the Development of Non-Elite Universities. A Case Study of England

Authors: Lois Cheung

Abstract:

This article examines the impact of league tables on non-elite universities in the English higher education system. The purpose of this study is to explore the use of rankings in strategic planning by low-ranked universities in this highly competitive higher education market. A sample of non-elite universities was selected for a content analysis based on the measures used by The Guardian rankings. Interestingly, these universities care about their rankings within a single national system. The content analysis appears to be an effective approach to investigating the presence of such influences. It is particularly noteworthy that all sampled universities use the terminology of these measures in their strategic plans, missions, and news coverage on their institutional web pages. This analysis illustrates the key challenges that many low-ranking universities in England are likely facing in the highly competitive and diversified higher education market. These universities use rankings to communicate with their stakeholders, mainly students, in order to fill places and thereby secure their major source of funding. The study concludes with comments on the likely effects of the rankings paradigm in undermining the contributions of non-elite universities.

Keywords: League tables, measures, post-1992 universities, ranking, strategy

Procedia PDF Downloads 176
6284 Anthropomorphic Interfaces for User Trust in Highly Automated Driving

Authors: Clarisse Lawson-Guidigbe, Nicolas Louveton, Kahina Amokrane-Ferka, Jean-Marc Andre

Abstract:

Trust in automated driving systems is receiving growing attention in the research community. Anthropomorphism has been identified by past research as a trust-building factor. In this paper, we consider three anthropomorphic interfaces integrating three versions of a virtual assistant. We attempt to measure the impact of each of these interfaces on trust in the automated driving system. An experiment following a between-subjects design was conducted in a driving simulator (N = 36) to evaluate participants’ performance and experience in two handover situations (a simple one and a critical one). Perception of anthropomorphism and trust were measured using scales, while participants’ experience was measured during elicitation interviews. We found no significant difference between the three interfaces regarding the perception of anthropomorphism, trust levels, or experience. However, regarding participants’ performance, we found a significant difference between the three interfaces in the simple handover situation but not in the critical one. Lessons from the anthropomorphism and trust measurement scales are discussed, and suggestions for further research are proposed.

Keywords: highly automated driving, trust, anthropomorphic design, mindful anthropomorphism, mindless anthropomorphism

Procedia PDF Downloads 141
6283 Uniqueness and Repeatability Analysis for Slim Tube Determined Minimum Miscibility Pressure

Authors: Waqar Ahmad Butt, Gholamreza Vakili Nezhaad, Ali Soud Al Bemani, Yahya Al Wahaibi

Abstract:

Miscible gas injection processes, as secondary recovery methods, can be applied to a huge number of mature reservoirs to improve the displacement of trapped oil. Successful miscible gas injection requires an accurate estimate of the minimum miscibility pressure (MMP) to make the injection process feasible, economical, and effective. There are several methods of MMP determination, such as the slim tube approach, vanishing interfacial tension, and the rising bubble apparatus, but the slim tube is the experimental technique deployed in this study. The slim tube method is not standardized for MMP determination with respect to either operating procedure or design. Therefore, 25 slim tube runs were conducted with three different coil lengths (12, 18, and 24 m) of constant diameter, using three different injection rates (0.08, 0.1, and 0.15 cc/min), to evaluate the uniqueness and repeatability of the determined MMP. A trend of decreasing MMP with increasing coil length was found. No clear trend was found between MMP and injection rate. The lowest MMP and highest recovery were observed with the longest coil and the lowest injection rate. This shows that the slim-tube-measured MMP does not depend solely on the characteristics of the interacting fluids but is also affected by the choice of coil and injection rate. Therefore, both slim tube design and procedure need to be standardized. It is recommended to use the lowest possible injection rate and a coil length estimated from the distance between the injection and producing wells for accurate and reliable MMP determination.
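
In slim tube practice, the MMP is commonly read off a recovery-versus-pressure plot as the break-over point where recovery stops improving. A sketch of that estimate with made-up data, fitting two straight lines and taking their intersection:

```python
import numpy as np

# Pressures (MPa) and recoveries (%) below are illustrative numbers,
# not the study's measurements.
p = np.array([10., 14., 18., 22., 26., 30., 34.])
rec = np.array([52., 63., 74., 85., 91., 92., 93.])

best = None
for k in range(2, len(p) - 1):                 # candidate breakpoints
    a1 = np.polyfit(p[:k + 1], rec[:k + 1], 1)
    a2 = np.polyfit(p[k:], rec[k:], 1)
    sse = (np.sum((np.polyval(a1, p[:k + 1]) - rec[:k + 1])**2)
           + np.sum((np.polyval(a2, p[k:]) - rec[k:])**2))
    if best is None or sse < best[0]:
        x = (a2[1] - a1[1]) / (a1[0] - a2[0])  # line intersection
        best = (sse, x)
print(f"estimated MMP ~ {best[1]:.1f} MPa")
```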

Keywords: coil length, injection rate, minimum miscibility pressure, multiple contacts miscibility

Procedia PDF Downloads 248
6282 Numerical Study of Off-Design Performance of a Highly Loaded Low Pressure Turbine Cascade

Authors: Shidvash Vakilipour, Mehdi Habibnia, Rouzbeh Riazi, Masoud Mohammadi, Mohammad H. Sabour

Abstract:

The flow field passing through a highly loaded low-pressure (LP) turbine cascade is numerically investigated at design and off-design conditions. The Open Field Operation and Manipulation (OpenFOAM) platform is used as the Computational Fluid Dynamics (CFD) tool. Firstly, the influence of grid resolution on the results of the k-ε, k-ω, and LES turbulence models is investigated and compared with experimental measurements. A numerical pressure undershoot appears near the end of the blade pressure surface, which is sensitive to grid resolution and flow turbulence modeling. The LES model is able to resolve separation on both coarse and fine grid resolutions. Secondly, the off-design flow condition is modeled by negative and positive inflow incidence angles. The numerical experiments show that a separation bubble generated on the blade pressure side is predicted by LES. The total pressure drop has also been calculated at incidence angles between −20° and +8°. The minimum total pressure drop is obtained by k-ω and LES at the design point.

Keywords: low pressure turbine, off-design performance, OpenFOAM, turbulence modeling, flow separation

Procedia PDF Downloads 356
6281 Variation in Total Iron and Zinc Concentration, Protein Quality, and Quantity of Maize Hybrids Grown under Abiotic Stress and Optimal Conditions

Authors: Tesfaye Walle Mekonnen

Abstract:

Maize is one of the most important staple food crops for most low-income households in Sub-Saharan Africa (SSA). Combined heat and drought stress is a major production threat that reduces the yield potential of biofortified maize and thereby aggravates the macro- and micronutrient deficiencies highly prevalent among low-income people in SSA who rely solely on maize-based diets. This problem can be alleviated by crossing biofortified inbred lines with different nutritional attributes (Fe, Zn, protein, and provitamin A) and developing agronomically superior and stable multi-nutrient maize of various genetic backgrounds. This study aimed to understand the correlation between biofortified inbred line per se and hybrid performance under combined heat and drought stress conditions (CSC). The experiment was conducted at CIMMYT, Zimbabwe, using an α-lattice design with three replications. The hybrid effect was highly significant for the zein fractions (α-, β-, γ-, and δ-zein), zinc (Zn), iron (Fe), provitamin A, phytic acid, and grain yield. Under CSC, the Fe and Zn concentrations, provitamin A in grain, and grain yield of hybrids decreased significantly, whereas the zein fraction and phytic acid contents in grain increased. The phenotypic correlation of grain yield with grain Zn, Fe, and provitamin A was strongly positive and higher under CSC than under well-watered conditions. The present investigation confirmed that under CSC, the performance of Fe- and Zn-enhanced hybrids can be forecast to a certain extent from inbred line performance, and hybrids can be scientifically selected for desirable grain yield and related traits with CSC tolerance during hybrid development programs. In conclusion, the development of high-yielding and micronutrient-dense maize varieties is possible under CSC, which could reduce the highly prevalent micronutrient deficiencies in SSA.

Keywords: drought, Fe, heat, maize, protein, zein fractions, Zn

Procedia PDF Downloads 64
6280 Integrated Intensity and Spatial Enhancement Technique for Color Images

Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela

Abstract:

Video imagery captured for real-time security and surveillance applications is typically captured in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions. It is also typical for the video to be too low in resolution for certain applications. The purpose of security and surveillance video is to allow accurate conclusions to be drawn from the images seen in the video. Therefore, if poor lighting and low-resolution conditions occur in the captured video, the ability to make accurate conclusions based on the received information will be reduced. We propose a solution to this problem by using image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sine transformation and an adaptive contrast enhancement. The super resolution portion is a single-image super resolution technique: a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high-quality output while also being more efficient than their sequential use. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved-visibility color image.
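
A sketch of the intensity-then-color-restoration flow described above, with illustrative stand-ins: an arcsine-based nonlinear transfer for the enhancement, naive pixel replication in place of the learned super resolution, and ratio-based color restoration. None of the specific functions or constants are the authors'.

```python
import numpy as np

def enhance_color(img):
    """img: float RGB array in [0, 1], shape (H, W, 3). Returns a
    2x-upscaled, intensity-enhanced color image."""
    eps = 1e-6
    inten = img.mean(axis=2)                        # intensity image
    # illustrative brightness-adaptive nonlinear transfer function
    q = 0.6 + 0.4 * inten
    enhanced = (2 / np.pi) * np.arcsin(inten ** q)
    # naive 2x upsampling stands in for the learned super resolution
    enhanced = np.repeat(np.repeat(enhanced, 2, 0), 2, 1)
    inten_up = np.repeat(np.repeat(inten, 2, 0), 2, 1)
    img_up = np.repeat(np.repeat(img, 2, 0), 2, 1)
    # color restoration: rescale RGB by the intensity gain
    gain = (enhanced + eps) / (inten_up + eps)
    return np.clip(img_up * gain[..., None], 0, 1)

out = enhance_color(np.random.default_rng(0).random((32, 32, 3)))
```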

Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution

Procedia PDF Downloads 550
6279 The System Dynamics Research of China-Africa Trade, Investment and Economic Growth

Authors: Emma Serwaa Obobisaa, Haibo Chen

Abstract:

International trade and outward foreign direct investment are factors generally recognized as important for economic growth and development. Though several scholars have sought to reveal the influence of trade and outward foreign direct investment (FDI) on economic growth, most studies utilized common econometric models such as vector autoregression and aggregated the variables, which has for the most part produced contradictory and mixed results. Thus, there is an exigent need for a precise study of the effects of trade and FDI on economic growth that applies strong econometric models and disaggregates the variables into their individual components to explicate their respective effects on economic growth. This will guarantee the provision of policies and strategies that are geared towards individual variables to ensure sustainable development and growth. This study, therefore, seeks to examine the causal effect of China-Africa trade and outward foreign direct investment on the economic growth of Africa using a robust and recent econometric approach, the system dynamics model. Our study tests an ensemble of vital variables predominant in recent studies on trade-FDI-economic growth causality: foreign direct investment, international trade, and economic growth. Our results showed that the system dynamics method provides more accurate statistical inference regarding the direction of the causality among the variables than conventional methods such as OLS and Granger causality predominantly used in the literature, as it is more robust and provides accurate critical values.

Keywords: economic growth, outward foreign direct investment, system dynamics model, international trade

Procedia PDF Downloads 102
6278 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by machine learning, precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. In fact, current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, natural language processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when machine learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed in the high accuracy of the diagnosis. Produced as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 63
6277 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Secondly, to achieve a more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops required, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes arising from cardiac diseases, such as left atrial enlargement.
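
At the core of each ASM fitting loop is the constraint that keeps the evolving contour a plausible shape: candidate landmarks are projected into the PCA shape space and each mode coefficient is clamped, conventionally to ±3 standard deviations. A minimal sketch of that standard ASM step (our illustration, not the authors' code):

```python
import numpy as np

def constrain_shape(x, mean_shape, P, eigvals, limit=3.0):
    """Project a candidate landmark vector onto the PCA shape space and
    clamp each mode, returning the nearest plausible shape.
    x, mean_shape: (2N,) stacked landmark vectors; P: (2N, m) PCA
    modes; eigvals: (m,) variances of each mode."""
    b = P.T @ (x - mean_shape)              # mode coefficients
    bound = limit * np.sqrt(eigvals)
    b = np.clip(b, -bound, bound)           # keep within +/-3 sigma
    return mean_shape + P @ b
```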

Keywords: hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 335
6276 Integrating Distributed Architectures in Highly Modular Reinforcement Learning Libraries

Authors: Albert Bou, Sebastian Dittert, Gianni de Fabritiis

Abstract:

Advancing reinforcement learning (RL) requires tools that are flexible enough to easily prototype new methods while avoiding impractically slow experimental turnaround times. To meet the first requirement, the most popular RL libraries advocate highly modular agent composability, which facilitates experimentation and development. To solve challenging environments within reasonable time frames, scaling RL to large sampling and computing resources has proved a successful strategy; however, this capability has so far been difficult to combine with modularity. In this work, we explore design choices that allow agent composability at both local and distributed levels of execution. We propose a versatile approach that allows the definition of RL agents at different scales through independent, reusable components. We demonstrate experimentally that our design choices allow us to reproduce classical benchmarks, explore multiple distributed architectures, and solve novel and complex environments while giving the user full control over the agent definition and training scheme. We believe this work can provide useful insights to the next generation of RL libraries.
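
To make the composability idea concrete, here is a toy agent assembled from independent pieces: a policy module, a data collector, a loss module, and a training scheme. The component names and the one-state bandit stand-in environment are ours for illustration; they are not the library's API.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):                      # reusable policy component
    def __init__(self, n_actions=2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_actions))
    def forward(self):
        return torch.distributions.Categorical(logits=self.logits)

def collect(policy, n=64):                    # "collector" component
    dist = policy()
    a = dist.sample((n,))
    r = (a == 1).float()                      # toy bandit: action 1 pays 1
    return a, r

def reinforce_loss(policy, a, r):             # "loss module" component
    return -(policy().log_prob(a) * r).mean()

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=0.1)
for _ in range(100):                          # training-scheme component
    a, r = collect(policy)
    loss = reinforce_loss(policy, a, r)
    opt.zero_grad(); loss.backward(); opt.step()
```

In a distributed variant, only the collector component would be replicated across workers; the loss and training scheme stay unchanged, which is the kind of local/distributed composability the abstract argues for.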

Keywords: deep reinforcement learning, Python, PyTorch, distributed training, modularity, library

Procedia PDF Downloads 78
6275 A Combined Fiber-Optic Surface Plasmon Resonance and Ta2O5: rGO Nanocomposite Synergistic Scheme for Trace Detection of Insecticide Fenitrothion

Authors: Ravi Kant, Banshi D. Gupta

Abstract:

The unbridled application of insecticides to enhance agricultural yield has become a matter of grave concern for both the environment and human health, and thus poses a potential threat to sustainable development. Fenitrothion is an extensively used organophosphate insecticide whose residues are reported to be extremely toxic to birds, humans, and aquatic life. A sensitive, swift, and accurate detection protocol for fenitrothion is thus in high demand. In this work, we report an SPR-based fiber optic sensor for the detection of fenitrothion, where a nanocomposite arrangement of Ta₂O₅ and reduced graphene oxide (rGO) (Ta₂O₅:rGO) decorated on the silver-coated unclad core region of an optical fiber forms the sensing channel. A nanocomposite arrangement synergistically integrates the properties of the involved components and consequently furnishes a conducive framework for sensing applications. The modification of the dielectric function of the sensing layer on exposure to fenitrothion solutions of diverse concentrations forms the sensing mechanism. This modification is reflected in the shift of the resonance wavelength. Experimental variables such as the concentration of rGO in the nanocomposite configuration, the dip time of the silver-coated fiber optic probe for deposition of the sensing layer, and the influence of pH on the performance of the sensor have been optimized to extract the best performance. SPR studies on the optimized sensing probe reveal the high sensitivity, wide operating range, and good reproducibility of the fabricated sensor, which unveil the promising utility of the Ta₂O₅:rGO nanocomposite framework for developing an efficient detection methodology for fenitrothion. The FOSPR approach, in cooperation with nanomaterials, projects the present work as a beneficial approach for fenitrothion detection by imparting numerous useful advantages such as sensitivity, selectivity, compactness, and cost-effectiveness.

Keywords: surface plasmon resonance, optical fiber, sensor, fenitrothion

Procedia PDF Downloads 204
6274 Polyhydroxybutyrate (PHB): Highly Porous Scaffold for Biomedicine

Authors: Neda Sinaei, Davood Zare, Mehrdad Azin

Abstract:

Polyhydroxyalkanoates (PHAs) are biocompatible and biodegradable polymers produced by a wide range of bacterial strains. These biopolymers are extensively studied for drug delivery and tissue engineering applications because of their fascinating physicochemical properties. A polyhydroxybutyrate (PHB) scaffold, extracted from a novel bacterium grown on oil wastewater, was selected for study. Several physical parameters affecting scaffold properties, such as PHB concentration, solvent evaporation speed, and ultrasonic time, were investigated. Scanning electron microscopy was used to evaluate the porosity. Afterward, the biocompatibility of the PHB scaffold was assessed. Initial results showed a highly porous PHB scaffold structure with a variety of pore sizes. Subsequent results indicated that more uniform pore sizes can be obtained by optimizing the physical factors. Notably, the morphology of the pore structure was correspondingly affected by the ultrasonic time. In vitro cell viability tests on the PHB scaffold using human foreskin fibroblasts revealed strong support for cell attachment and proliferation. It can therefore be concluded that this cost-effective PHB scaffold has potential for use as a biomaterial cell-adhesion substrate in therapeutic applications.

Keywords: Polyhydroxybutyrate, biocompatible, scaffold, porous, tissue engineering

Procedia PDF Downloads 227
6273 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require much time and effort. However, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters appropriately in order to get an accurate penetration depth. This paper performs a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed, and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data from published papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to optimize overall performance. As a result of the analysis, the following was found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than that with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early stage of the design process of an AGCV, in the modelling and simulation stage.

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 394
6272 Numerical Simulation of Phase Transfer during Cryosurgery for an Irregular Tumor Using Hybrid Approach

Authors: Rama Bhargava, Surabhi Nishad

Abstract:

The infusion of nanofluids has dramatically enhanced the heat-carrying capacity of fluids, applicable to many engineering and medical processes where temperatures below freezing are required. Cryosurgery is an efficient therapy for the treatment of cancer, but excessive cooling may sometimes harm nearby healthy cells. Efforts are therefore made to develop a model which can generate the required low temperature. In the present study, a mathematical model is developed, based on the bioheat transfer equation, to simulate the heat transfer from the probe on a tumor (with an irregular domain) using a hybrid technique consisting of the element-free Galerkin method with the α-family of approximation. The probe is loaded with nanoparticles. The effects of different nanoparticles, namely Al₂O₃, Fe₃O₄, and Au, on the heat-producing rate are obtained. It is observed that the temperature can be brought to the (60°C)-(-30°C) range at a faster freezing rate upon the infusion of the different nanoparticles. Besides increasing the freezing rate, the volume of the nanoparticles can also control the size and growth of the ice crystals formed during the freezing process. The study also determines the time required to achieve the desired temperature. The problem is further extended to multiple tumors of different shapes and sizes. The irregular shape of the frozen domain and the direction of ice growth are very sensitive issues, posing a challenge for simulation. The meshfree method has been one of the more accurate methods for such problems, as the domain is naturally irregular. The discretization is done using nodes only, and moving least squares (MLS) approximation is taken in order to generate the shape functions. Sufficiently accurate results are obtained.
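
The MLS shape functions mentioned above are built at each evaluation point from a polynomial basis and compactly supported weights, with no mesh. A minimal 1-D sketch (linear basis, spline-type weight; the node layout and support radius are illustrative):

```python
import numpy as np

def mls_shape_functions(x, nodes, radius):
    """1-D moving least squares shape functions with a linear basis
    p(x) = [1, x] and a compactly supported spline weight; a minimal
    sketch of how EFG trial functions are built."""
    p = lambda s: np.array([1.0, s])
    d = np.abs(x - nodes) / radius
    w = np.where(d < 1, (1 - d)**3 * (1 + 3 * d), 0.0)  # compact support
    A = sum(wi * np.outer(p(xi), p(xi)) for wi, xi in zip(w, nodes))
    B = np.column_stack([wi * p(xi) for wi, xi in zip(w, nodes)])
    return p(x) @ np.linalg.solve(A, B)                 # phi_i(x)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, radius=0.25)
assert abs(phi.sum() - 1.0) < 1e-9      # partition of unity holds
```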

Keywords: cryosurgery, EFGM, hybrid, nanoparticles

Procedia PDF Downloads 119
6271 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning and visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, it is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of 3D vegetation point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS exhibit technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing, through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (the truth values being the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in the extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
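
Reconstruction error against the TLS reference is often scored with a point-set metric such as the Chamfer distance; the abstract does not name its error measure, so the following is an illustrative choice with synthetic clouds:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point clouds of shapes
    (n, 3) and (m, 3); a common metric for scoring a reconstructed
    tree point cloud against a TLS reference."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))              # stand-in TLS cloud
reconstructed = reference + rng.normal(scale=0.05, size=(500, 3))
print(chamfer_distance(reconstructed, reference))
```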

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 98
6270 Growth of SWNTs from Alloy Catalyst Nanoparticles

Authors: S. Forel, F. Bouanis, L. Catala, I. Florea, V. Huc, F. Fossard, A. Loiseau, C. Cojocaru

Abstract:

Single-wall carbon nanotubes (SWNTs) are seen as excellent candidates for applications in nanoelectronic devices because of their remarkable electronic and mechanical properties. These unique properties are highly dependent on their chiral structure and diameter. Therefore, structure-controlled growth of SWNTs, especially directly on the final device's substrate surface, is highly desired for the fabrication of SWNT-based electronics. In this work, we present a new approach to control the diameter of SWNTs and, eventually, their chirality. Because of their potential to control SWNT chirality, bimetallic nanoparticles are used to prepare alloy nanoclusters with a specific structure. The catalyst nanoparticles are pre-formed following a previously described process. Briefly, the oxide surface is first covered with a self-assembled monolayer (SAM) of a pyridine-functionalized silane. Then, bimetallic (Fe-Ru, Co-Ru and Ni-Ru) complexes are assembled by coordination bonds on the pre-formed organic SAM. The resultant alloy nanoclusters were then used to catalyze SWNT growth on SiO2/Si substrates via CH4/H2 double hot-filament chemical vapor deposition (d-HFCVD). The microscopy and spectroscopy analyses demonstrate the high quality of the SWNTs, which were furthermore integrated into high-quality SWNT-FETs.

Keywords: nanotube, CVD, device, transistor

Procedia PDF Downloads 313
6269 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions

Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri

Abstract:

Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. The simulations were particularly accurate: even the generator's internal components were reproduced on the basis of ad hoc collected X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, and simulated and experimental data were compared. In order to estimate the dose to the operator versus the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible, and in any case conservative, to convert the measured count rate by means of the conversion factor corresponding to 14 MeV energy. This outcome has general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
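
The fluence-to-dose conversion discussed above is a single multiplication once the coefficient is chosen. A sketch with assumed numbers (the coefficient below is an order-of-magnitude value near 14 MeV, not the one identified in the paper, and the detector reading is hypothetical):

```python
# Converting a measured neutron fluence rate to an ambient dose
# equivalent rate H*(10) with a fluence-to-dose conversion coefficient.
h_star = 5.0e-10        # Sv * cm^2 per neutron (assumed coefficient)
fluence_rate = 2.0e4    # n / (cm^2 s), hypothetical detector reading

dose_rate = h_star * fluence_rate            # Sv/s
print(f"H*(10) rate = {dose_rate * 3.6e6:.1f} mSv/h")
```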

Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers

Procedia PDF Downloads 129
6268 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers and outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudoranges and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs in the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely Cubature and High Degree Cubature Kalman Filtering methods. On the basis of previous results for state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF); the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the presented approach assumes the intermediate frequency (IF) is available and uses correlator samples at a rate of 500 Hz. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new high-degree versions of the CKF based on spherical-radial cubature rules, developed at the fifth degree in this work. The estimation accuracy of the high degree CKF is expected to be comparable to that of the GHKF; state estimation results are then observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the High Degree Cubature Kalman Filter is applied.
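
For orientation, a minimal sketch of the third-degree spherical-radial CKF that the abstract builds on: 2n equally weighted points at x ± √n·S·eᵢ, pushed through the dynamics and measurement models. The fifth-degree (high-degree) rules add further points with unequal weights; this sketch stops at the classical third-degree filter.

```python
import numpy as np

def ckf_points(x, P):
    """Third-degree spherical-radial cubature points: 2n points at
    x +/- sqrt(n) * S e_i, each with weight 1/(2n)."""
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return x[:, None] + S @ xi                    # shape (n, 2n)

def ckf_predict(x, P, f, Q):
    pts = ckf_points(x, P)
    prop = np.apply_along_axis(f, 0, pts)         # push through dynamics
    x_pred = prop.mean(axis=1)
    dev = prop - x_pred[:, None]
    return x_pred, dev @ dev.T / pts.shape[1] + Q

def ckf_update(x, P, h, R, z):
    pts = ckf_points(x, P)
    Z = np.apply_along_axis(h, 0, pts)            # predicted measurements
    z_pred = Z.mean(axis=1)
    dz = Z - z_pred[:, None]
    dx = pts - x[:, None]
    m = pts.shape[1]
    Pzz = dz @ dz.T / m + R
    Pxz = dx @ dz.T / m
    K = Pxz @ np.linalg.inv(Pzz)                  # cubature Kalman gain
    return x + K @ (z - z_pred), P - K @ Pzz @ K.T
```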

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 277
6267 Pb and Ni Removal from Aqueous Environment by Green Synthesized Iron Nanoparticles Using Fruit Cucumis Melo and Leaves of Ficus Virens

Authors: Amandeep Kaur, Sangeeta Sharma

Abstract:

In view of the serious problem of heavy metal (Pb²⁺ and Ni²⁺) ions in the aqueous environment, a rapid search for efficient adsorbents for the adsorption of heavy metals has become highly desirable. In this quest, green-synthesized Fe nanoparticles (NPs) have gathered attention because of their excellent capability to adsorb heavy metals from aqueous solution. This research reports the fabrication of Fe NPs using the fruit Cucumis melo and leaves of Ficus virens via a biogenic synthesis route. Further, the synthesized CM-Fe-NPs and FV-Fe-NPs have been tested as potential bio-adsorbents for the removal of Pb²⁺ and Ni²⁺ by carrying out batch adsorption experiments. The influence of various parameters, namely the initial concentration of Pb/Ni (5, 10, 15, 20, 25 mg/L), contact time (10 to 200 min), adsorbent dosage (0.5, 0.10, 0.15 mg/L), shaking speed (120 to 350 rpm), and pH value (6, 7, 8, 9), has been investigated. The maximum removal with CM-Fe-NPs and FV-Fe-NPs was achieved at pH 7, a metal concentration of 5 mg/L, a dosage of 0.9 g/L, a shaking speed of 200 rpm, and a reaction contact time of 200 min. The results obtained are found to be in accordance with the Freundlich and Langmuir adsorption models; consequently, the adsorbents could be highly applicable in wastewater treatment plants.
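
Checking equilibrium data against the Freundlich and Langmuir models, as done above, is a standard curve fit. A sketch with made-up equilibrium points (Ce, qe); the isotherm equations are the standard forms, and none of the numbers are the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

Ce = np.array([0.5, 1.2, 2.8, 5.5, 9.0])      # mg/L, illustrative
qe = np.array([4.1, 7.9, 12.6, 16.3, 18.2])   # mg/g, illustrative

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[20, 0.5])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5, 2])
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.2f} L/mg")
print(f"Freundlich: KF={KF:.1f}, n={n:.2f}")
```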

Keywords: adsorption, biogenic synthesis, nanoparticles, nickel, lead

Procedia PDF Downloads 83
6266 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact two-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow range of 734 ± 4 cm⁻¹, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as the reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide a means of estimating their uncertainty. The work is supported by Russian Science Foundation grant # 17-13-01466.
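
The semiclassical constraint works by counting bound vibrational levels from a phase integral over the potential well. A sketch of that count for a model Morse well, in atomic units; the well depth matches the 734 cm⁻¹ figure above, but the equilibrium distance and width parameter are illustrative, not the paper's semi-analytical curve:

```python
import numpy as np
from scipy.integrate import quad

cm1 = 1.0 / 219474.6313632           # 1 cm^-1 in hartree
De = 734.0 * cm1                     # well depth from the abstract
re = 8.8                             # bohr (illustrative)
a = 0.55                             # 1/bohr (illustrative)
mu = (173.9389 / 2) * 1822.888486    # reduced mass of 174Yb2 in m_e

# Morse potential with V(inf) = 0 and V(re) = -De
V = lambda r: De * (1 - np.exp(-a * (r - re)))**2 - De

# WKB phase integral at the dissociation limit (E = 0), from the inner
# turning point outward; number of levels ~ phase/pi + 1/2
r_in = re - np.log(2.0) / a          # inner wall crossing of V = 0
phase, _ = quad(lambda r: np.sqrt(max(0.0, -2 * mu * V(r))), r_in, 80.0)
n_levels = int(phase / np.pi + 0.5)
print(n_levels)                      # levels v = 0 .. n_levels - 1
```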

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 148