Search results for: metric tensor
324 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI
Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz
Abstract:
Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have important applications in modeling the waves used by data inversion techniques such as full waveform inversion (FWI). In this work, a linearized version of the NSE, neglecting the deviatoric terms of the stress tensor, is presented. To model the pressure p(x,t) and wave velocity profile c(x,t) theoretically, a wave equation for a visco-acoustic medium (VAE) is written. A change of variables p(x,t) = q(x,t)h(ρ) is made in the VAE, leading to the well-known Klein-Gordon equation (KGE) describing waves propagating in a medium of variable density ρ with dispersive term α²(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α²(x) that accounts for energy dissipation and dispersion. Finally, an integral-form solution is derived for p(x,t), c(x,t), and kinematic variables such as the particle velocity v(x,t), displacement u(x,t), and bulk modulus function k_b(x,t). Furthermore, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the point of view of modern parallel computing. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed.
Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI
Procedia PDF Downloads 518
323 Influence of Biochar Application on Growth, Dry Matter Yield and Nutrition of Corn (Zea mays L.) Grown on Sandy Loam Soils of Gujarat, India
Authors: Pravinchandra Patel
Abstract:
Sustainable agriculture on sandy loam soil generally faces large constraints due to low water-holding and nutrient-retention capacity and accelerated mineralization of soil organic matter. There is a need to increase soil organic carbon for higher crop productivity and soil sustainability. Recently, biochar has come to be regarded as a "sixth element" that works as a catalyst for increasing crop yield, soil fertility, and soil sustainability and for mitigating climate change. Biochar was generated at the Sansoli Farm of Anand Agricultural University, Gujarat, India by slow pyrolysis (250-400°C) in the absence of oxygen, using two kilns, from corn stover (Zea mays L.), cluster bean stover (Cyamopsis tetragonoloba), and Prosopis juliflora wood. There were 16 treatments: 4 organic sources (3 biochars, from corn stover (MS), cluster bean stover (CB), and Prosopis juliflora wood (PJ), plus one farmyard manure, FYM) at two rates of application (5 and 10 metric tons/ha), giving eight organic-source treatments. These eight were applied either with the recommended dose of fertilizers (RDF) (80-40-0 kg/ha N-P-K) or without RDF. Application of corn stover biochar at 10 metric tons/ha along with RDF (RDF+MS) increased dry matter (DM) yield, crude protein (CP) yield, chlorophyll content, and plant height (at 30 and 60 days after sowing) more than the CB and PJ biochars and FYM. Nutrient uptake of P, K, Ca, Mg, S, and Cu was significantly increased by RDF + corn stover biochar at 10 metric tons/ha, while uptake of N and Mn was significantly increased by RDF + corn stover biochar at 5 metric tons/ha.
It was found that soil application of corn stover biochar at 10 metric tons/ha along with the recommended dose of chemical fertilizers (RDF+MS) had the strongest effect, giving significantly higher dry matter and crude protein yields and larger removal of nutrients from the soil, while also building up soil nutrients. It also produced significantly higher organic carbon content and cation exchange capacity in the sandy loam soil. The lower dose of corn stover biochar at 5 metric tons/ha (RDF+MS) ranked second for increasing the dry matter and crude protein yields of the forage corn crop, which likewise resulted in larger removal of nutrients from the soil. This study highlights the synergistic effect of mixing biochar with the recommended dose of fertilizers on nutrient retention, organic carbon content, and water-holding capacity in sandy loam soil, and hence the amendment value of biochar in such soil.
Keywords: biochar, corn yield, plant nutrient, fertility status
Procedia PDF Downloads 148
322 Using Different Methods of Nanofabrication as a New Way to Activate Cement Replacement Materials in Concrete Industry
Authors: Azadeh Askarinejad, Parham Hayati, Reza Parchami, Parisa Hayati
Abstract:
The cement and concrete industries are among the most important industrial sources of carbon dioxide emissions: cement production (including direct fuel for mining and transporting raw material) consumes approximately 6 million Btu per metric ton and releases about 1 metric ton of CO2. Reducing the consumption of cement while utilizing waste materials as cement replacements is preferred for reasons of environmental protection. Blended cements incorporate various supplementary cementitious materials (SCMs), such as fly ash, silica fume, ground granulated blast furnace slag (GGBFS), limestone, and natural pozzolans. These materials must be chemically activated to show effective cementitious properties. The present review article reports three different methods of nanofabrication that were used for the activation of two types of SCMs.
Keywords: nanofabrication, cement replacement materials, activation, concrete
Procedia PDF Downloads 612
321 Integer Programming Model for the Network Design Problem with Facility Dependent Shortest Path Routing
Authors: Taehan Lee
Abstract:
We consider a network design problem with a shortest-path routing restriction based on values determined by the facilities installed on each arc. In the conventional multicommodity network design problem, a commodity can be routed through any possible path when capacity is available. Here, however, the commodity between two nodes must be routed on a path with the shortest metric value, where each link's metric value is determined by the facilities installed on it. This routing restriction gives the problem a distinct character. We present an integer programming formulation containing the primal-dual optimality conditions for the shortest-path routing and give some computational results for the model.
Keywords: integer programming, multicommodity network design, routing, shortest path
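As an illustrative sketch of the routing restriction described in this abstract (not the paper's integer programming formulation), Dijkstra's algorithm can compute the shortest metric path when each arc's metric value depends on the facility installed on it. The network, facility types, and metric values below are invented for the example:

```python
import heapq

def shortest_metric_path(n, arcs, facility_metric, src, dst):
    """Dijkstra over links whose metric value is determined by the
    facility installed on each arc. arcs: (u, v) -> facility type."""
    adj = {u: [] for u in range(n)}
    for (u, v), fac in arcs.items():
        w = facility_metric[fac]
        adj[u].append((v, w))
        adj[v].append((u, w))  # treat links as undirected
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:  # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Hypothetical 4-node network: a "high" facility gives a small metric.
arcs = {(0, 1): "high", (1, 3): "high", (0, 2): "low", (2, 3): "low"}
metric = {"high": 1, "low": 5}
print(shortest_metric_path(4, arcs, metric, 0, 3))  # (2, [0, 1, 3])
```

Installing a different facility on an arc changes its metric, and thus can change which path the commodity must use.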
Procedia PDF Downloads 418
320 Effect of Dust Rejected by Iron and Steel Complex on Roots of Bean Phaseolus vulgaris
Authors: Labiba Zerari Bourafa, Djebar Mohamed Reda, Berrebah Houria, Khadri Sihem, Chiheb Linda
Abstract:
The effect of metal dust (a pollutant) was studied on the higher plant white bean Phaseolus vulgaris; the experiment took place in a cellular toxicology laboratory (in vitro culture). Seeds of the bean Phaseolus vulgaris were cultured in a medium contaminated with metal dust (a single treatment at different increasing doses), at a rate of 10 seeds per box, for 10 days. Morpho-metric parameters were measured during the first 96 hours following germination, while proline content, protein content, and histological sections were assayed on the tenth day (240 h). All morpho-metric and biochemical parameters measured were highly disturbed by the metal dust; the histological sections confirm this disturbance.
Keywords: conductive fabrics, metal dust, osmoticums, roots, Phaseolus vulgaris
Procedia PDF Downloads 374
319 Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos
Authors: Naydelis Brito Suárez, Deni Librado Torres Román, Fernando Hermosillo Reynoso
Abstract:
Automatic detection and generation of a dynamic ROI (Region of Interest) in vehicular traffic surveillance videos from a static camera is a challenge for computer vision-based Intelligent Transportation Systems. The dynamic ROI, being a changing ROI, should capture any moving object located outside the static ROI. In this work, the video is represented by a tensor model composed of a background tensor and a foreground tensor, the latter containing all moving vehicles or objects. The values of each pixel over a time interval are represented as time series, and selected pixel rows are analyzed. This paper proposes a pixel-entropy-based algorithm for automatic detection and generation of a dynamic ROI in traffic videos, under the assumption of two theoretical pixel entropy behaviors: (1) a pixel located on the road shows a high entropy value due to disturbances caused by vehicle traffic; (2) a pixel located off the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, and thereby detect entropy changes and consequently moving objects, Shannon, Tsallis, and approximate entropies were employed. Tsallis entropy achieved very good results in real time; approximate entropy produced slightly better results but took longer.
Keywords: convex hull, dynamic ROI detection, pixel entropy, time series, moving objects
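The two assumed entropy behaviors can be sketched with Shannon entropy over a quantized pixel time series. The pixel series, bin count, and intensity range below are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def shannon_entropy(series, bins=8, lo=0, hi=255):
    """Shannon entropy (bits) of a pixel intensity time series,
    estimated from a histogram with `bins` equal-width bins."""
    width = (hi - lo) / bins
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in series)
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A road pixel is repeatedly disturbed by passing vehicles (high entropy);
# a background pixel stays almost constant (low entropy).
road = [40, 200, 90, 250, 30, 180, 120, 60] * 8
background = [100, 101, 99, 100, 100, 102, 100, 99] * 8
print(shannon_entropy(road) > shannon_entropy(background))  # True
```

Thresholding such entropy values along pixel rows is one simple way to separate road from non-road pixels, in the spirit of the algorithm described above.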
Procedia PDF Downloads 73
318 Rhythmic Prioritisation as a Means of Compositional Organisation: Analysing Meshuggah’s “Do Not Look Down”
Authors: Nicholas Freer
Abstract:
Rhythmic complexity in progressive metal is a developing area of analysis, particularly the interpretation of hyper-metric time spans as hierarchically significant rhythmic units of compositional organisation (Pieslak 2007, Charupakorn 2012, Capuzzo 2018, Calder 2018, Lucas 2018, Hannan 2020). This paper adds to this developing area by considering the relationships between the concepts of tactus, metric imposition, polymeter, and rhythmic parallax in the Meshuggah composition “Do Not Look Down”. By treating an architectonic rhythmic framework within “Do Not Look Down” as the controlling organisational mechanism, an exploration of the interaction between distinct rhythmic layers and the composition’s formal segmentation and harmony (as riffs) reveals a pervasive structural misalignment between these elements. By exhibiting how Meshuggah’s manipulations of rhythmic complexity deliberately blur structural boundaries, creating misalignments in a flat approach to temporal partitioning (Nieto 2014), rhythmic characteristics of Meshuggah and the genre of Djent are exposed.
Keywords: hypermeter, rhythmic parallax, Meshuggah, temporal partitioning
Procedia PDF Downloads 76
317 Fairness in Recommendations Ranking: From Pairwise Approach to Listwise Approach
Authors: Patik Joslin Kenfack, Polyakov Vladimir Mikhailovich
Abstract:
Machine learning (ML) systems are trained on human-generated data that may be biased, implicitly containing racist, sexist, or otherwise discriminatory content. ML models learn these biases and can even amplify them. Recent research has begun to consider such issues of fairness, and the concept has been extended to recommendation: a recommender system is considered fair if it does not under-rank items of a protected group (by gender, race, demographics, etc.). Several metrics for evaluating fairness concerns in recommendation systems have been proposed that take pairs of items as 'instances' in the fairness evaluation, which ignores the fact that fairness should be evaluated across a list of items. This paper explores a probabilistic approach that generalizes the pairwise metric by using lists of k items (listwise) as 'instances' in the fairness evaluation, parametrized by k. We also explore a new regularization method based on this metric to improve fairness of the ranking during model training.
Keywords: fairness, recommender system, ranking, listwise approach
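A minimal sketch of the pairwise 'instances' that the paper generalizes (the listwise extension itself is not reproduced here): for every (protected, unprotected) item pair, check whether the protected item is ranked at least as well. The ranking and protected group below are hypothetical:

```python
def pairwise_fairness(ranking, protected):
    """Fraction of (protected, unprotected) pairs in which the protected
    item is ranked above the unprotected one.
    ranking: items ordered from best (index 0) to worst."""
    pos = {item: i for i, item in enumerate(ranking)}
    prot = [i for i in ranking if i in protected]
    unprot = [i for i in ranking if i not in protected]
    pairs = [(p, u) for p in prot for u in unprot]
    ok = sum(1 for p, u in pairs if pos[p] < pos[u])
    return ok / len(pairs)

# Hypothetical ranking of items a..f, where {b, d, f} form the protected group.
ranking = ["a", "b", "c", "d", "e", "f"]
score = pairwise_fairness(ranking, {"b", "d", "f"})
print(round(score, 2))  # 0.33 — protected items tend to sit lower
```

A listwise variant, as the abstract proposes, would instead evaluate such statistics over windows of k consecutive ranked items rather than over isolated pairs.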
Procedia PDF Downloads 147
316 IT-Aided Business Process Enabling Real-Time Analysis of Candidates for Clinical Trials
Authors: Matthieu-P. Schapranow
Abstract:
Recruitment of participants for clinical trials requires the screening of a large number of potential candidates, i.e., testing them against trial-specific inclusion and exclusion criteria, which is a time-consuming and complex task. Today, a significant amount of time is spent identifying adequate trial participants, as their selection may affect the overall study results. We introduce a unique patient eligibility metric, which allows systematic ranking and classification of candidates based on trial-specific filter criteria. Our web application enables real-time analysis of patient data and assessment of candidates using freely definable inclusion and exclusion criteria. As a result, the overall time required to identify eligible candidates is reduced tremendously, while our contribution introduces additional degrees of freedom for evaluating the relevance of individual candidates.
Keywords: in-memory technology, clinical trials, screening, eligibility metric, data analysis, clustering
Procedia PDF Downloads 492
315 Application of Metric Dimension of Graph in Unraveling the Complexity of Hyperacusis
Authors: Hassan Ibrahim
Abstract:
The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sounds, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent various factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance interpretation and facilitate identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practice, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management.
Keywords: auditory condition, connected graph, hyperacusis, metric dimension
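The metric dimension concept used in this abstract can be sketched by brute force on a toy graph: find the smallest set of landmark nodes whose distance vectors distinguish every node. The factor links below are hypothetical, not the paper's data:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k such that some k-set of landmarks gives every node a
    unique vector of distances to the landmarks (exhaustive search)."""
    nodes = list(adj)
    dists = {u: bfs_dist(adj, u) for u in nodes}
    for k in range(1, len(nodes)):
        for landmarks in combinations(nodes, k):
            codes = {tuple(dists[l][v] for l in landmarks) for v in nodes}
            if len(codes) == len(nodes):  # all distance vectors distinct
                return k, landmarks
    return len(nodes), tuple(nodes)

# Hypothetical etiological graph linking a few hyperacusis factors.
adj = {
    "aging": ["trauma", "infection"],
    "trauma": ["aging", "anxiety"],
    "infection": ["aging", "anxiety"],
    "anxiety": ["trauma", "infection", "depression"],
    "depression": ["anxiety"],
}
print(metric_dimension(adj)[0])  # 2
```

The returned landmarks play the role of the "critical influencers" the abstract describes: knowing a node's distances to them locates it uniquely in the network. Note that exhaustive search is exponential; it only suits small graphs like this sketch.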
Procedia PDF Downloads 38
314 Unraveling the Complexity of Hyperacusis: A Metric Dimension of a Graph Concept
Authors: Hassan Ibrahim
Abstract:
The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sounds, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent various factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance interpretation and facilitate identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practice, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management.
Keywords: auditory condition, connected graph, hyperacusis, metric dimension
Procedia PDF Downloads 19
313 Robust Pattern Recognition via Correntropy Generalized Orthogonal Matching Pursuit
Authors: Yulong Wang, Yuan Yan Tang, Cuiming Zou, Lina Yang
Abstract:
This paper presents a novel sparse representation method for robust pattern classification. Generalized orthogonal matching pursuit (GOMP) is a recently proposed, efficient sparse representation technique. However, GOMP adopts the mean square error (MSE) criterion and assigns the same weight to all measurements, including both severely and slightly corrupted ones. To address this limitation, we propose an information-theoretic GOMP (ITGOMP) method that exploits the correntropy induced metric. The results show that ITGOMP can adaptively assign small weights to severely contaminated measurements and large weights to clean ones. An ITGOMP-based classifier is further developed for robust pattern classification. Experiments on public real-world datasets demonstrate the efficacy of the proposed approach.
Keywords: correntropy induced metric, matching pursuit, pattern classification, sparse representation
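The adaptive weighting behavior attributed above to the correntropy induced metric can be illustrated with a Gaussian kernel over residuals: small residuals get weight near 1, gross outliers near 0. This is only a sketch of the weighting idea, not the ITGOMP algorithm itself, and the residual values are invented:

```python
import math

def correntropy_weights(residuals, sigma=1.0):
    """Gaussian-kernel correntropy weights: clean measurements
    (small residual) get weight near 1, gross outliers near 0."""
    return [math.exp(-r ** 2 / (2 * sigma ** 2)) for r in residuals]

# Hypothetical residuals: four clean measurements and one corrupted one.
res = [0.1, -0.2, 0.05, 0.15, 8.0]
w = correntropy_weights(res)
print([round(x, 3) for x in w])  # the last weight collapses toward 0
```

Unlike the MSE criterion, which weights every residual quadratically and lets the corrupted measurement dominate, the kernel effectively suppresses it.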
Procedia PDF Downloads 355
312 Large Language Model Powered Chatbots Need End-to-End Benchmarks
Authors: Debarag Banerjee, Pooja Singh, Arjun Avadhanam, Saksham Srivastava
Abstract:
Autonomous conversational agents, i.e., chatbots, are becoming an increasingly common mechanism for enterprises to provide support to customers and partners. To rate chatbots, especially ones powered by generative AI tools like large language models (LLMs), we need to be able to assess their performance accurately. This is where chatbot benchmarking becomes important. In this paper, the authors propose a benchmark they call the E2E (end-to-end) benchmark and show how it can be used to evaluate the accuracy and usefulness of the answers provided by chatbots, especially LLM-powered ones. The authors evaluate an example chatbot at different levels of sophistication using both the E2E benchmark and other metrics commonly used in the state of the art, and observe that the proposed benchmark gives better results than the others. In addition, while some metrics proved unpredictable, the metric associated with the E2E benchmark, which uses cosine similarity, performed well in evaluating chatbots. The performance of the best models shows several benefits of using the cosine similarity score as the metric in the E2E benchmark.
Keywords: chatbot benchmarking, end-to-end (E2E) benchmarking, large language model, user-centric evaluation
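The cosine-similarity score at the heart of the E2E benchmark can be sketched as follows; the embedding vectors are invented for illustration (in practice they would come from a sentence-embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings of a chatbot answer, a reference answer,
# and an off-topic answer.
answer = [0.9, 0.1, 0.4]
reference = [0.8, 0.2, 0.5]
off_topic = [0.0, 1.0, 0.0]
print(cosine_similarity(answer, reference) > cosine_similarity(answer, off_topic))  # True
```

A benchmark built on this score ranks an answer by how closely its embedding aligns with the reference answer's embedding, independent of surface wording.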
Procedia PDF Downloads 64
311 Water Quality Assessment Based on Operational Indicator in West Coastal Water of Malaysia
Authors: Seyedeh Belin Tavakoly Sany, H. Rosli, R. Majid, S. Aishah
Abstract:
In this study, water monitoring was performed from November 2012 to October 2013 to assess water quality and evaluate the spatial and temporal distribution of physicochemical and biological variables. Water samples were collected from 10 coastal water stations of West Port. For the water-quality assessment, multi-metric indices and operational indicators were used to classify the trophic status at the different stations. The trophic level of West Port coastal water ranges from eutrophic to hypertrophic. Chl-a concentration was used to estimate the biological response of phytoplankton biomass; it indicated eutrophic conditions in West Port and mesotrophic conditions at the control site. During the study period, no eutrophication events or secondary symptoms occurred, which may be related to hydrodynamic turbulence and water exchange preventing the development of eutrophic conditions in West Port.
Keywords: water quality, multi-metric indices, operational indicator, Malaysia, West Port
Procedia PDF Downloads 294
310 Base Change for Fisher Metrics: Case of the q-Gaussian Inverse Distribution
Authors: Gabriel I. Loaiza Ossa, Carlos A. Cadavid Moreno, Juan C. Arango Parra
Abstract:
It is known that the Riemannian manifold determined by the family of inverse Gaussian distributions endowed with the Fisher metric has negative constant curvature κ = -1/2, as does the family of usual Gaussian distributions. In the present paper, we first arrive at this result by following a different, much simpler path than the previous ones: we put the family in exponential form, thus endowing it with a new set of parameters, or coordinates, θ₁, θ₂; we then determine the matrix of the Fisher metric in terms of these parameters; and finally we compute this matrix in the original parameters. Secondly, we define the inverse q-Gaussian distribution family (q < 3) as the family obtained by replacing the usual exponential function with the Tsallis q-exponential function in the expression for the inverse Gaussian distribution, and observe that it supports two possible geometries, the Fisher and the q-Fisher geometry. Finally, we apply our strategy to obtain results about the Fisher and q-Fisher geometry of the inverse q-Gaussian distribution family, similar to those obtained for the inverse Gaussian distribution family.
Keywords: base of changes, information geometry, inverse Gaussian distribution, inverse q-Gaussian distribution, statistical manifolds
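For reference, the Fisher metric whose matrix the authors compute is, in coordinates θ = (θ₁, θ₂), the standard information-geometric metric, and the family being deformed is the usual inverse Gaussian density (both formulas below are the textbook definitions, not taken from the paper):

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_\theta\!\left[
  \frac{\partial \log f(X;\theta)}{\partial \theta_i}\,
  \frac{\partial \log f(X;\theta)}{\partial \theta_j}\right],
\qquad
f(x;\mu,\lambda) \;=\; \sqrt{\frac{\lambda}{2\pi x^{3}}}\,
  \exp\!\left(-\frac{\lambda (x-\mu)^{2}}{2\mu^{2} x}\right),\quad x>0.
```

The q-deformed family of the abstract is obtained by replacing exp above with the Tsallis q-exponential exp_q(t) = [1 + (1-q)t]^{1/(1-q)}, which reduces to exp as q → 1.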
Procedia PDF Downloads 242
309 Study of the υ_4 Fundamental Band of the 12CD4 Molecule
Authors: Kaarour Abdelkrim, Ouardi Okkacha, Meskine Mohamed
Abstract:
In this study, the υ_4 fundamental band of the 12CD4 molecule has been studied by high-resolution infrared spectroscopy. Using the XTDS and SPVIEW software and the tetrahedral tensorial formalism developed at the ICB (Laboratoire Interdisciplinaire Carnot de Bourgogne), several lines have been assigned and fitted with an acceptable standard deviation. This analysis allowed us to calculate several parameters of the 12CD4 molecule.
Keywords: XTDS, SPVIEW, tetrahedral tensorial formalism, rovibrational band
Procedia PDF Downloads 325
308 [Keynote Talk]: Existence of Random Fixed Point Theorem for Contractive Mappings
Authors: D. S. Palimkar
Abstract:
Random fixed point theory has received much attention in recent years, as it is needed for the study of various classes of random equations; the study of random fixed point theorems was initiated by the Prague school of probabilists in the 1950s. The existence and uniqueness of fixed points for self-maps of a metric space, obtained by altering the distances between points through a control function, is an interesting aspect of classical fixed point theory. Such a control function, which alters the distance between two points in a metric space, is called an altering distance function. In this paper, we prove the existence and uniqueness of a random common fixed point for a pair of random mappings under a weakly contractive condition with a generalized altering distance function in Polish spaces, using a random common fixed point theorem for generalized weak contractions.
Keywords: Polish space, random common fixed point theorem, weakly contractive mapping, altering function
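One standard (deterministic) formulation of the notions named above, included for orientation and not reproduced from the paper: an altering distance function is a map ψ : [0,∞) → [0,∞) that is continuous, nondecreasing, and satisfies ψ(t) = 0 iff t = 0, and a self-map T of a metric space (X, d) is weakly contractive with respect to ψ and a second such function φ when

```latex
\psi\bigl(d(Tx,Ty)\bigr) \;\le\; \psi\bigl(d(x,y)\bigr) \;-\; \varphi\bigl(d(x,y)\bigr)
\qquad \text{for all } x,y \in X.
```

The random versions studied in the paper replace T by measurable random mappings on a Polish space and assert the condition almost surely.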
Procedia PDF Downloads 271
307 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. To maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without downsampling or pooling procedures. We maintain this consistently high fidelity efficiently chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks on high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding hand-crafted, pixel-level segmentations. Our work improves on the state of the art in segmentation fidelity for high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 119
306 Metrics and Methods for Improving Resilience in Agribusiness Supply Chains
Authors: Golnar Behzadi, Michael O'Sullivan, Tava Olsen, Abraham Zhang
Abstract:
By definition, increasing supply chain resilience improves the supply chain’s ability to return to normal, or to an even more desirable situation, quickly and efficiently after being hit by a disruption. This is especially critical in agribusiness supply chains, where the products are perishable and have a short life cycle. In this paper, we propose a resilience metric that captures and improves the recovery process, in terms of both performance and time, of an agribusiness supply chain following either a supply-side or a demand-side disruption. We build a model that determines optimal supply chain recovery planning decisions and selects the best resilient strategies to minimize the loss of profit during the recovery time window. The model is formulated as a two-stage stochastic mixed-integer linear programming problem and solved with a branch-and-cut algorithm. The results show that the optimal recovery schedule depends strongly on the duration of the time window allowed for recovery. In addition, the profit loss during recovery is reduced by utilizing the proposed resilient actions.
Keywords: agribusiness supply chain, recovery, resilience metric, risk management
Procedia PDF Downloads 395
305 The Cost of Solar-Centric Renewable Portfolio
Authors: Timothy J. Considine, Edward J. M. Manderson
Abstract:
This paper develops an econometric forecasting system of energy demand coupled with engineering-economic models of energy supply. The framework is used to quantify the impact of state-level renewable portfolio standards (RPSs), achieved predominantly with solar generation, on electricity rates, electricity consumption, and environmental quality. We perform the analysis using Arizona’s RPS as a case study. We forecast energy demand in Arizona out to 2035 and find that by this time the state will require an additional 35 million MWh of electricity generation. If Arizona implements its RPS when supplying this electricity demand, we find there will be a substantial increase in electricity rates relative to a business-as-usual scenario of reliance on gas-fired generation. Extending the current regime of tax credits can greatly reduce this increase, at the taxpayers’ expense. We find that by 2025 Arizona’s RPS will implicitly abate carbon dioxide emissions at a cost between $101 and $135 per metric ton, and by 2035 abatement costs are between $64 and $112 per metric ton, depending on the future evolution of natural gas prices.
Keywords: electricity demand, renewable portfolio standard, solar, carbon dioxide
Procedia PDF Downloads 483
304 Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content
Authors: Maryam Azimi, Amin Banitalebi-Dehkordi, Yuanyuan Dong, Mahsa T. Pourazad, Panos Nasiopoulos
Abstract:
While a wide variety of low dynamic range (LDR) quality metrics exists, only a limited number of metrics are designed specifically for high dynamic range (HDR) content. With the introduction of the HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set was created, consisting of representative indoor and outdoor video sequences with different brightness and motion levels and different representative types of distortions. The quality of each distorted video in this data set was evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all other tested metrics in the presence of the tested types of distortions.
Keywords: HDR, dynamic range, LDR, subjective evaluation, video compression, HEVC, video quality metrics
Procedia PDF Downloads 522
303 AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS
Authors: David A. Harness
Abstract:
The natural evolution of automated theorem proving (ATP) cognitive systems is to meet AI peer review standards. The ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real-world problem of the proposed AI peer review challenge: determine which conjecture yields the higher-confidence constructive proof, the Standard Model of Physics SU(n) lattice gauge group operation, or the present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation, in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. The resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological-constant vacuum energy density, in units of pascals. Said self-adjoint group operation operates exclusively on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly at the 4D spacetime level, essentially reformulating the Yang-Mills quantization of the vacuum by virtually superpositioned, particle-compounded lattice gauge groups into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus, the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning.
Keywords: automated theorem proving, constructive quantum field theory, information theory, neural networks
Procedia PDF Downloads 179
302 Analysis of the Predictive Performance of Value at Risk Estimations in Times of Financial Crisis
Authors: Alexander Marx
Abstract:
Measuring and mitigating market risk is essential for the stability of enterprises, especially for major banking corporations and investment bank firms. To support these risk measurement and mitigation processes, the Value at Risk (VaR) is the risk metric most commonly used by practitioners. In past years, significant weaknesses in the predictive performance of the VaR have been observed in times of financial market crisis. To address this issue, the purpose of this study is to investigate VaR estimation models and their predictive performance by applying a series of backtesting methods to the stock market indices of the G7 countries (Canada, France, Germany, Italy, Japan, UK, US, Europe). The study employs parametric, non-parametric, and semi-parametric VaR estimation models and is conducted during three different periods which cover the most recent financial market crises: the overall period (2006–2022), the global financial crisis period (2008–2009), and the COVID-19 period (2020–2022). Since the regulatory authorities have introduced and mandated the Conditional Value at Risk (Expected Shortfall) as an additional regulatory risk management metric, the study will analyze and compare both risk metrics on their predictive performance. Keywords: value at risk, financial market risk, banking, quantitative risk management
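A minimal sketch of the parametric (variance-covariance) and non-parametric (historical) VaR estimators with a simple exceedance count backtest; the returns below are simulated, and this is not the study's exact model set or backtesting suite:

```python
import numpy as np
from statistics import NormalDist

def var_parametric(returns, alpha=0.99):
    """Variance-covariance VaR: assumes normally distributed returns."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    z = NormalDist().inv_cdf(1 - alpha)   # left-tail quantile, ~ -2.33 at 99%
    return -(mu + z * sigma)              # reported as a positive loss

def var_historical(returns, alpha=0.99):
    """Non-parametric VaR: empirical quantile of the return distribution."""
    return -np.quantile(returns, 1 - alpha)

def backtest_exceedances(returns, var, alpha=0.99):
    """Count the days the realized loss exceeded VaR vs. the expected count."""
    exceed = int(np.sum(returns < -var))
    expected = (1 - alpha) * len(returns)
    return exceed, expected

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, 1000)  # hypothetical daily index returns
v = var_parametric(r)
print(v, var_historical(r), backtest_exceedances(r, v))
```

An exceedance count far above the expected count is exactly the predictive weakness the study probes during crisis periods, when the normality assumption of the parametric model breaks down.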
Procedia PDF Downloads 92
301 CNN-Based Compressor Mass Flow Estimator in Industrial Aircraft Vapor Cycle System
Authors: Justin Reverdi, Sixin Zhang, Saïd Aoues, Fabrice Gamboa, Serge Gratton, Thomas Pellegrini
Abstract:
In vapor cycle systems, the mass flow sensor plays a key role for different monitoring and control purposes. However, physical sensors can be inaccurate, heavy, cumbersome, expensive, or highly sensitive to vibrations, which is especially problematic when embedded into an aircraft. The conception of a virtual sensor, based on other standard sensors, is a good alternative. This paper has two main objectives. First, a data-driven model using a convolutional neural network is proposed to estimate the mass flow of the compressor. We show that it significantly outperforms the standard polynomial regression model (thermodynamic maps) in terms of both the standard MSE metric and engineer performance metrics. Second, a semi-automatic segmentation method is proposed to compute the engineer performance metrics for real datasets, as the standard MSE metric may pose risks in analyzing the dynamic behavior of vapor cycle systems. Keywords: deep learning, convolutional neural network, vapor cycle system, virtual sensor
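A minimal sketch of the virtual-sensor idea: a tiny 1D convolution over a multichannel window of standard sensor readings, followed by global pooling and a linear readout. The layer sizes and weights below are illustrative assumptions (untrained), not the authors' architecture:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution over a multichannel time window, with ReLU."""
    # x: (channels, T); kernels: (out_channels, channels, k)
    out, _, k = kernels.shape
    T = x.shape[1] - k + 1
    y = np.zeros((out, T))
    for o in range(out):
        for t in range(T):
            y[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return np.maximum(y, 0.0)

def estimate_mass_flow(window, params):
    """Virtual sensor: conv features -> global average pool -> linear head."""
    h = conv1d(window, params["k1"], params["b1"])
    feat = h.mean(axis=1)                      # global average pooling
    return float(params["w"] @ feat + params["c"])

rng = np.random.default_rng(1)
params = {  # untrained, illustrative weights
    "k1": rng.normal(0, 0.1, (8, 4, 5)), "b1": np.zeros(8),
    "w": rng.normal(0, 0.1, 8), "c": 0.0,
}
window = rng.normal(0, 1, (4, 64))  # 4 standard sensor channels, 64 samples
print(estimate_mass_flow(window, params))
```

In practice such a model would be trained on logged flights against a reference mass flow measurement, which is where it can overtake the static thermodynamic maps.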
Procedia PDF Downloads 59
300 Uncertainty Reduction and Dyadic Interaction through Social Media
Authors: Masrur Alam Khan
Abstract:
The purpose of this study was to examine the dyadic interaction techniques that social media users utilize to reduce uncertainty in their day-to-day business engagements in the absence of physical interaction. The study empirically tested assumptions of uncertainty reduction theory, addressing self-disclosure and question-asking to develop consensus and subsequently achieve intimacy in a conducive environment. Moreover, this study examined the effect of dyadic interaction through social media among the business community, identifying the strength of reciprocity in their relationships and comparing it with those having no dyadic relations due to the absence of social media. Using a socio-metric survey, the study revealed that participants developed a better understanding of their partners, making their professional relations more credible. A sample of unacquainted respondents, both male and female, was randomly asked questions regarding the nature of dyadic interaction within their office while using social media (face-to-face, visual CMC (webcam), or text-only). Primary results showed that social media users develop better know-how of their professional obligations, reduce ambiguity, and align in one-to-one interaction. Keywords: dyadic interaction, social media, uncertainty reduction, socio-metric survey, self-disclosure, intimacy, reciprocity in relationship
Procedia PDF Downloads 136
299 Finch-Skea Stellar Structures in F(R, ϕ, X) Theory of Gravity Using Bardeen Geometry
Authors: Aqsa Asharaf
Abstract:
The current study aims to examine the physical characteristics of charged compact spheres employing an anisotropic fluid under the f(R, ϕ, X) modified gravity approach, exploring how this theoretical context influences their attributes and behavior. To accomplish our goal, we adopt Spherically Symmetric (SS) space-time and employ a specific Adler-based model for the metric potential (gtt), which yields a broader class of solutions. Then, by making use of the Karmarkar condition, we successfully derive the other metric potential. A primary component of our analysis is utilizing the Bardeen geometry as extrinsic space-time to determine the constant parameters of the intrinsic space-time. Further, to validate the existence of Bardeen stellar spheres, we discuss the behavior of physical properties and parameters such as components of pressure, energy density, anisotropy, parameters of the EoS, stability and dynamical equilibrium, energy bounds, mass function, adiabatic index, compactness factor, and surface redshift. Conclusively, all the obtained results show that the system under consideration is physically stable, free from singularity, and yields viable models. Keywords: cosmology, GR, Bardeen BH, modified gravities
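For reference, the Karmarkar condition invoked above can be written out explicitly. This is the standard class-I embedding form quoted in the compact-star literature, given here as a sketch rather than the paper's own derivation: for a static spherically symmetric line element ds² = e^{ν(r)}dt² − e^{λ(r)}dr² − r²dΩ², the Riemann tensor components must satisfy

```latex
% Karmarkar class-I embedding condition (standard form)
R_{1414}\,R_{2323} \;=\; R_{1212}\,R_{3434} + R_{1224}\,R_{1334},
\qquad R_{2323} \neq 0,
```

which reduces to an ordinary differential equation linking the two metric potentials,

```latex
\frac{2\,\nu''}{\nu'} + \nu' \;=\; \frac{\lambda'\, e^{\lambda}}{e^{\lambda}-1},
\qquad e^{\lambda} \neq 1,
```

so that specifying one potential (here the Adler-based gtt) determines the other, as the abstract describes.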
Procedia PDF Downloads 27
298 Developing Fault Tolerance Metrics of Web and Mobile Applications
Authors: Ahmad Mohsin, Irfan Raza Naqvi, Syda Fatima Usamn
Abstract:
Applications with a higher fault tolerance index are considered more reliable and trustworthy in driving quality. In recent years, application development has shifted from traditional desktop and web software to native and hybrid applications for the web and mobile platforms. With the emergence of Internet of Things (IoT), cloud, and big data trends, the need to measure fault tolerance for these complex applications has increased in order to evaluate their performance. There is a phenomenal gap between fault tolerance metrics development and measurement. Classic quality metric models focused on metrics for traditional systems, ignoring the software, hardware, and deployment characteristics of today's applications. In this paper, we propose simple metrics to measure fault tolerance considering general requirements for web and mobile applications. We align factors and subfactors using GQM for metrics development, considering the nature of mobile and web apps. A systematic mathematical formulation is given to measure the metrics quantitatively. Three web/mobile applications are selected to measure the fault tolerance factors using the formulated metrics. The applications are then analysed on the basis of results from observations in a controlled environment on different mobile devices. Quantitative results are presented depicting the fault tolerance of the respective applications. Keywords: web and mobile applications, reliability, fault tolerance metric, quality metrics, GQM based metrics
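A minimal sketch of a GQM-style aggregation of normalized subfactor scores into a single fault tolerance index; the subfactor names, scores, and weights below are hypothetical illustrations, not the paper's formulation:

```python
# Hypothetical GQM-style aggregation: each subfactor is normalized to [0, 1]
# and combined with weights into a single fault tolerance index.

def fault_tolerance_index(subfactors, weights):
    """Weighted average of normalized subfactor scores (illustrative only)."""
    if set(subfactors) != set(weights):
        raise ValueError("subfactors and weights must cover the same keys")
    total_w = sum(weights.values())
    return sum(weights[k] * subfactors[k] for k in subfactors) / total_w

# Hypothetical measurements for one mobile/web application
scores = {
    "error_recovery": 0.8,       # share of injected faults recovered from
    "graceful_degradation": 0.6, # functionality retained under failure
    "failover_success": 0.9,
    "data_integrity": 0.7,
}
weights = {"error_recovery": 3, "graceful_degradation": 2,
           "failover_success": 3, "data_integrity": 2}
print(fault_tolerance_index(scores, weights))  # -> 0.77
```

The weights encode the goal-question-metric hierarchy: subfactors tied to more critical goals receive larger weights before aggregation.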
Procedia PDF Downloads 343
297 Active Space Debris Removal by Extreme Ultraviolet Radiation
Authors: A. Anandha Selvan, B. Malarvizhi
Abstract:
In recent year the problem of space debris have become very serious. The mass of the artificial objects in orbit increased quite steadily at the rate of about 145 metric tons annually, leading to a total tally of approximately 7000 metric tons. Now most of space debris object orbiting in LEO region about 97%. The catastrophic collision can be mostly occurred in LEO region, where this collision generate the new debris. Thus, we propose a concept for cleaning the space debris in the region of thermosphere by passing the Extreme Ultraviolet (EUV) radiation to in front of space debris object from the re-orbiter. So in our concept the Extreme Ultraviolet (EUV) radiation will create the thermosphere expansion by reacting with atmospheric gas particles. So the drag is produced in front of the space debris object by thermosphere expansion. This drag force is high enough to slow down the space debris object’s relative velocity. Therefore the space debris object gradually reducing the altitude and finally enter into the earth’s atmosphere. After the first target is removed, the re-orbiter can be goes into next target. This method remove the space debris object without catching debris object. Thus it can be applied to a wide range of debris object without regard to their shapes or rotation. This paper discusses the operation of re-orbiter for removing the space debris in thermosphere region.Keywords: active space debris removal, space debris, LEO, extreme ultraviolet, re-orbiter, thermosphere
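The drag mechanism can be sketched with the standard drag law F = ½ρv²C_dA. The density values, drag coefficient, and debris parameters below are illustrative assumptions, not figures from the paper:

```python
def drag_deceleration(rho, v, cd, area, mass):
    """Aerodynamic drag deceleration: a = 0.5 * rho * v^2 * Cd * A / m."""
    return 0.5 * rho * v**2 * cd * area / mass

# Hypothetical debris object in LEO, with a locally enhanced gas density
# from EUV-driven thermospheric expansion (all values illustrative).
v = 7700.0            # orbital speed, m/s
rho_ambient = 1e-12   # kg/m^3, quiet thermosphere near ~400 km
rho_enhanced = 5e-12  # kg/m^3, assumed local enhancement in front of debris
cd, area, mass = 2.2, 1.0, 100.0  # drag coeff., cross-section m^2, mass kg

a0 = drag_deceleration(rho_ambient, v, cd, area, mass)
a1 = drag_deceleration(rho_enhanced, v, cd, area, mass)
print(a0, a1, a1 / a0)  # deceleration scales linearly with local density
```

Because deceleration scales linearly with the local density, even a modest EUV-driven enhancement multiplies the decay rate, which is the essence of the contactless removal idea.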
Procedia PDF Downloads 460
296 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failures or outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined.
A machine learning reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks. Keywords: architecture, resiliency, availability, cyber-attack
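A toy sketch of such a degradation-spectrum availability metric: system functions are hosted on nodes, nodes fail at increasing degradation levels, and functional availability is averaged across the spectrum. The architectures and node names here are hypothetical, and the paper's simulation is far richer (links, bandwidth/latency characteristics, an RL attack agent):

```python
import random

def functional_availability(functions, failed):
    """Fraction of functions with at least one surviving hosting node."""
    alive = [f for f, nodes in functions.items() if set(nodes) - failed]
    return len(alive) / len(functions)

def degradation_metric(functions, nodes, trials=200, seed=0):
    """Average availability over degradation levels 0..len(nodes)-1 failed."""
    rng = random.Random(seed)
    levels = []
    for k in range(len(nodes)):
        avg = sum(
            functional_availability(functions, set(rng.sample(nodes, k)))
            for _ in range(trials)) / trials
        levels.append(avg)
    return sum(levels) / len(levels)

# Hypothetical architectures: functions replicated across edge/cloud nodes
# vs. each function pinned to a single node.
nodes = ["edge1", "edge2", "cloud1", "cloud2"]
replicated = {"ingest": ["edge1", "edge2"], "store": ["cloud1", "cloud2"],
              "analyze": ["cloud1", "edge2"]}
single = {"ingest": ["edge1"], "store": ["cloud1"], "analyze": ["cloud2"]}
print(degradation_metric(replicated, nodes), degradation_metric(single, nodes))
```

The replicated architecture scores higher because its functions survive deeper into the degradation spectrum, which is exactly the comparison the proposed metric is meant to surface during architecture selection.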
Procedia PDF Downloads 106
295 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States
Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh
Abstract:
Insulin and glucagon are responsible for homeostasis of key plasma metabolites like glucose, amino acids, and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting a crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, during post-meal conditions, bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver.
The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high fat or protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes once the blood glucose level rises beyond a certain value. Keywords: bistability, diabetes, feedback and crosstalk, obesity
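The ON/OFF-path hysteresis described above can be reproduced with a generic one-variable model of non-linear positive feedback (a Hill-type sketch with illustrative parameters, not the authors' signaling network): sweeping the input up and then back down settles on different steady-state branches inside the bistable window.

```python
import numpy as np

def steady_state(s, x0, v=1.0, K=0.5, n=4, d=1.0, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = s + v*x^n/(K^n + x^n) - d*x to steady state."""
    x = x0
    for _ in range(steps):
        x += dt * (s + v * x**n / (K**n + x**n) - d * x)
    return x

# Sweep the input up (starting low) and back down (starting high); each
# steady state seeds the next, mimicking the ON and OFF paths.
inputs = np.linspace(0.0, 0.4, 21)
x = 0.0
up = [x := steady_state(s, x) for s in inputs]
x = up[-1]
down = [x := steady_state(s, x) for s in inputs[::-1]][::-1]
gap = max(d_ - u_ for u_, d_ in zip(up, down))
print(gap)  # a positive gap indicates a bistable (hysteretic) window
```

The positive feedback term plays the role of AKT reinforcing IRS; widening or removing it collapses the two branches into a single monostable response, mirroring the diabetic regime discussed above.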
Procedia PDF Downloads 273