Search results for: measurement models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9178

6838 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investment has become the primary tool for profiting from speculation in financial markets. A significant number of traders, both private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is presented to build a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to build a mathematical filter for investment opportunities using limit conditions, and how to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
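
As a rough illustration of the pattern the abstract describes, the sketch below fits a trend line over a rolling price window, treats a band around the line as the limit condition, and emits entry and exit signals when the price breaks the band. The least-squares trend line, the window length, and the band width are illustrative assumptions; the paper's actual Price Prediction Line formula is not reproduced here.

```python
import numpy as np

def trend_line_signals(prices, window=50, band=0.02):
    """Minimal sketch: fit a rolling linear trend line and emit signals.

    The regression trend line and the +/- band limit conditions are
    illustrative assumptions, not the paper's actual formula.
    """
    signals = []
    for t in range(window, len(prices)):
        y = prices[t - window:t]
        x = np.arange(window)
        slope, intercept = np.polyfit(x, y, 1)   # least-squares trend line
        predicted = slope * window + intercept   # one-step-ahead line value
        if prices[t] < predicted * (1 - band):   # price breaks below the band
            signals.append((t, "buy"))
        elif prices[t] > predicted * (1 + band): # price breaks above the band
            signals.append((t, "sell"))
    return signals

# Example: a random walk stands in for DAX quotes
prices = 10000 + np.cumsum(np.random.randn(500) * 20)
print(trend_line_signals(prices)[:5])
```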

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 130
6837 Large Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves

Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau

Abstract:

The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. Those precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the state, leading to legal constraints. This area also presents moderate to high seismic activity. Even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses to our past is irreversible, leading to the necessity of acting preventively. This means providing preventive assessments for hundreds of structures with no or few documents. In this context we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this aim, we adapt different tools, some under development, such as photogrammetry, and some to be created, such as a preprocessor that builds meshes for FEM software from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined through field surveys and the MicMac photogrammetric software. Using structural criteria, we identified eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the needed simplifications in the most sensitive zones and to choose the most effective ones. We also proposed threshold criteria based on the damage observed in the in situ surveys, old pictures, and the Italian code. They are relevant in linear models. To validate the structural types, we carried out a vibratory measurement campaign using ambient noise and velocimeters. This also allowed us to validate the method on old masonry and to identify the modal characteristics of 20 churches. We then performed a dynamic identification between numerical and experimental modes, and updated the linear models through material and geometrical parameters, which are often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models that redistribute the strains, in order to validate the damage threshold criteria which we use to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
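
The dynamic identification between numerical and experimental modes described above is commonly performed with the Modal Assurance Criterion (MAC); the abstract does not name its exact pairing criterion, so the following sketch is only an assumption of how such mode matching can be done.

```python
import numpy as np

def mac(phi_num, phi_exp):
    """Modal Assurance Criterion between numerical and experimental mode shapes.

    phi_num: (n_dof, n_modes_num), phi_exp: (n_dof, n_modes_exp).
    A MAC value close to 1 indicates a matching mode pair.
    """
    num = np.abs(phi_num.T @ phi_exp) ** 2
    den = np.outer(np.sum(phi_num**2, axis=0), np.sum(phi_exp**2, axis=0))
    return num / den

# Toy example: 3 numerical vs. 2 experimental modes measured at 5 DOFs
phi_n = np.random.rand(5, 3)
phi_e = np.random.rand(5, 2)
print(mac(phi_n, phi_e))
```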

Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measurement

Procedia PDF Downloads 492
6836 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
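
A minimal sketch of the cellular-automata redistribution idea described above, reduced to a 2D (D2Q9) lattice for brevity: the particles at a node hop to neighbouring nodes with probabilities weighted by the projection of the local fluid velocity onto each lattice direction. The weighting and the rest probability are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

# D2Q9 lattice directions (a 2D stand-in for the paper's D3Q27 particle lattice)
E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def redistribute(n_p, u, rng):
    """One CA step: move the n_p particles at a node to neighbours with
    probabilities proportional to the local fluid velocity projected onto
    each lattice direction (illustrative weighting, not the exact model)."""
    w = np.maximum(E @ u, 0.0)        # favour directions aligned with the flow
    w[0] = 0.1 * np.sum(w) + 1e-12    # small chance of staying put (assumption)
    p = w / w.sum()
    counts = rng.multinomial(n_p, p)  # stochastic particle counts per direction
    return counts                     # counts[k] particles hop along E[k]

rng = np.random.default_rng(0)
print(redistribute(100, np.array([0.05, 0.01]), rng))
```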

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 207
6835 Effects of Concomitant Use of Metformin and Powdered Moringa Oleifera Leaves on Glucose Tolerance in Sprague-Dawley Rats

Authors: Emielex M. Aguilar, Kristen Angela G. Cruz, Czarina Joie L. Rivera, Francis Dave C. Tan, Gavino Ivan N. Tanodra, Dianne Katrina G. Usana, Mary Grace T. Valentin, Nico Albert S. Vasquez, Edwin Monico C. Wee

Abstract:

The risk of diabetes mellitus is increasing in the Philippines, with Metformin and insulin being the drugs commonly used for its management. The use of herbal medicines has grown considerably, especially among the elderly population. Moringa oleifera, or malunggay, is one of the most common plants in the country, and several studies have shown the plant to exhibit a hypoglycemic property attributed to its flavonoid content. This study aims to investigate the possible effects of the concomitant use of Metformin and powdered M. oleifera leaves (PMOL) on blood glucose levels. Twenty male Sprague-Dawley rats were equally distributed into four groups. Fasting blood glucose levels of the rats were measured prior to experimentation. The following treatments were administered to the four groups, respectively: glucose only 2 g/kg; glucose 2 g/kg + Metformin 100 mg/kg; glucose 2 g/kg + PMOL 200 mg/kg; and glucose 2 g/kg + PMOL 200 mg/kg and Metformin 100 mg/kg. Blood glucose levels were determined on the 1st, 2nd, 3rd, and 4th hour post-treatment and compared between groups. Statistical analysis showed that the type of intervention did not have a significant effect on the reduction of blood glucose levels when compared across groups (p=0.378), while the effect of time was significant (p=0.000). The interaction between the type of intervention and the time of blood glucose measurement was shown to be significant (p=0.024). Within each group, the control and PMOL-treated groups showed significant reductions in blood glucose levels over time, with p-values of 0.000 and 0.000, respectively, while the Metformin-treated and combination groups had non-significant p-values of 0.062 and 0.093, respectively. The descriptive data also showed that the mean total reduction in blood glucose levels of the Metformin and PMOL combination group was lower than that of the PMOL-treated group alone, while it was higher than that of the Metformin-treated group alone. Based on the results obtained, the combination of Metformin and PMOL did not significantly lower the blood glucose levels of the rats as compared to the other groups. However, the concomitant use of Metformin and PMOL may affect each other's blood-glucose-lowering activity. Additionally, a prolonged time of exposure and a delay in the first blood glucose measurement after treatment could have a significant effect on blood glucose levels. Further studies are recommended regarding the effects of the concomitant use of the two agents on blood glucose levels.
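
For readers unfamiliar with the reported analysis, the sketch below shows how a group-by-time interaction of the kind reported above can be tested with a two-way ANOVA on long-format data. The data and names are hypothetical, and a full reproduction would model the repeated measures within each rat (for example, with a mixed model), which is omitted here for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: 5 rats per group, 4 hourly readings each
rng = np.random.default_rng(1)
rows = []
for group in ["control", "pmol_plus_metformin"]:
    for rat in range(5):
        for hour in range(1, 5):
            drop = 8 * hour if group == "control" else 5 * hour
            rows.append({"group": group, "hour": hour,
                         "glucose": 120 - drop + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: group, time, and group x time effects
model = ols("glucose ~ C(group) * C(hour)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```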

Keywords: blood glucose levels, concomitant use, metformin, Moringa oleifera

Procedia PDF Downloads 413
6834 Detonating Culture, Statistics and Development in Imo State of Nigeria

Authors: Ejikeme Ugiri

Abstract:

In an executive summary, UNESCO describes its Framework for Cultural Statistics as a tool for organizing cultural statistics both nationally and internationally. This is based on a conceptual foundation and a common understanding of culture that enables the measurement of a wide range of cultural expressions. This means that cultural expression, in whatever guise, has the potential to contribute reasonably to the development of a given society. The paper looked into the various tangible and intangible cultures of Imo State of Nigeria. Due to government insensitivity, there is a need to remind ourselves of the importance of paying adequate attention to the cultural heritage bequeathed to us by our forefathers, for the sake of posterity. Documenting this information in written form therefore becomes imperative. The study concludes that culture, if developed, could contribute reasonably to the economic and social growth of society.

Keywords: culture, detonation, development, statistics

Procedia PDF Downloads 468
6833 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, the plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved over time; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model through extensive comparisons of model simulations to experimental data, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
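
The return-mapping step mentioned above can be illustrated with the classical radial return for J2 (von Mises) plasticity with linear hardening, sketched below. This is a deliberately simplified stand-in: the actual bounding surface model returns to a loading surface with rotational hardening, which is considerably more involved.

```python
import numpy as np

def radial_return(sig_trial_dev, sigma_y, G, H):
    """Classical radial-return mapping for J2 plasticity with linear hardening.

    A simplified stand-in for the bounding surface return step:
    sig_trial_dev is the deviatoric trial stress (3x3), G the shear
    modulus, H the hardening modulus, sigma_y the yield stress.
    """
    norm = np.linalg.norm(sig_trial_dev)
    f_trial = norm - np.sqrt(2.0 / 3.0) * sigma_y
    if f_trial <= 0.0:                              # elastic: trial state admissible
        return sig_trial_dev, 0.0
    dgamma = f_trial / (2.0 * G + 2.0 * H / 3.0)    # plastic multiplier
    n = sig_trial_dev / norm                        # radial (unit normal) direction
    sig_dev = sig_trial_dev - 2.0 * G * dgamma * n  # return to the yield surface
    return sig_dev, dgamma

s_trial = np.diag([120.0, -60.0, -60.0])            # deviatoric trial stress (MPa)
print(radial_return(s_trial, sigma_y=100.0, G=80e3, H=1e3))
```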

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 315
6832 Environmental Effects on Energy Consumption of Smart Grid Consumers

Authors: S. M. Ali, A. Salam Khan, A. U. Khan, M. Tariq, M. S. Hussain, B. A. Abbasi, I. Hussain, U. Farid

Abstract:

Environment and surroundings play a pivotal role in structuring the lifestyle of consumers. Living standards in turn affect the energy consumption of consumers. In the smart grid paradigm, climate drifts, weather parameters, and the green environment relate directly to the energy profiles of various consumers, such as residential, commercial, and industrial ones. Considering the above factors helps policymakers shape utility load curves and optimally manage demand and supply. Thus, there is a pressing need to develop correlation models of load and weather parameters and to critically analyze the factors affecting the energy profiles of smart grid consumers. In this paper, we elaborate on the various environmental and weather factors affecting consumer demand. Moreover, we develop correlation models, such as Pearson, Spearman, and Kendall, to capture the interrelation between the dependent (load) parameter and the independent (weather) parameters. Furthermore, we validate our discussion with real data from the state of Texas. The numerical simulations proved the effective relation of climatic drifts with the energy consumption of smart grid consumers.
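
A minimal sketch of the three correlation measures named above, applied to a hypothetical load-versus-temperature series; the data are simulated, not the Texas data used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical hourly series: load (kWh) vs. ambient temperature (deg C)
rng = np.random.default_rng(42)
temp = rng.uniform(15, 40, 500)
load = 2.0 * temp + rng.normal(0, 5, 500)   # cooling load rises with temperature

r, _ = stats.pearsonr(temp, load)      # linear correlation
rho, _ = stats.spearmanr(temp, load)   # rank (monotonic) correlation
tau, _ = stats.kendalltau(temp, load)  # ordinal concordance
print(f"Pearson r={r:.3f}, Spearman rho={rho:.3f}, Kendall tau={tau:.3f}")
```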

Keywords: climatic drifts, correlation analysis, energy consumption, smart grid, weather parameter

Procedia PDF Downloads 375
6831 Characterising Stable Model by Extended Labelled Dependency Graph

Authors: Asraful Islam

Abstract:

The extended dependency graph (EDG) is a state-of-the-art isomorphic graph representation of normal logic programs (NLPs) that can characterize the consistency of NLPs by graph analysis. To construct the vertices and arcs of an EDG, additional renaming atoms and rules beyond those the given program provides are used, resulting in higher space complexity compared to the corresponding traditional dependency graph (TDG). In this article, we propose an extended labelled dependency graph (ELDG) to represent an NLP that shares an equal number of nodes and arcs with the TDG, and we prove that it is isomorphic to the domain program. The numbers of nodes and arcs used in the underlying dependency graphs are formulated to compare the space complexity. The results show that the ELDG uses less memory to store nodes, arcs, and cycles compared to the EDG. To exhibit the desirability of the ELDG, firstly, the stable models of the kernel form of an NLP are characterized by the admissible coloring of the ELDG; secondly, a relation between the stable models of a kernel program and the handles of the minimal odd cycles appearing in the corresponding ELDG has been established; thirdly, to the best of our knowledge, for the first time an inverse transformation from a dependency graph to the represented NLP w.r.t. the ELDG has been defined, which enables transferring analytical results from the graph to the program straightforwardly.
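
As a point of reference for the graphs discussed above, the sketch below builds a traditional dependency graph (TDG) from a small normal logic program, with arcs labelled by the sign of the dependency; the ELDG labelling itself is the paper's contribution and is not reproduced here. The example program is hypothetical.

```python
import networkx as nx

# A normal logic program as (head, positive_body, negative_body) triples
program = [
    ("p", ["q"], ["r"]),   # p :- q, not r.
    ("q", [], ["p"]),      # q :- not p.
    ("r", [], []),         # r.
]

# Traditional dependency graph: one node per atom, one arc per body literal,
# labelled '+' for positive and '-' for negated dependencies.
G = nx.DiGraph()
for head, pos, neg in program:
    G.add_node(head)
    for b in pos:
        G.add_edge(b, head, sign="+")   # head depends positively on b
    for b in neg:
        G.add_edge(b, head, sign="-")   # head depends on b through negation

# Odd cycles (odd number of negative arcs), like the p-q loop here,
# are where stable-model analysis concentrates.
print(list(nx.simple_cycles(G)))
```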

Keywords: normal logic program, isomorphism of graph, extended labelled dependency graph, inverse graph transformation, graph colouring

Procedia PDF Downloads 214
6830 Studying the Impact of Soil Characteristics in Displacement of Retaining Walls Using Finite Element

Authors: Mojtaba Ahmadabadi, Akbar Masoudi, Morteza Rezai

Abstract:

In this paper, using the finite element method, the effect of soil and wall characteristics on wall displacement was investigated. Thirty-two different models were studied with different parameters. These models allow calculating the displacement at any height of the wall for frictional-cohesive soils. The main purpose of this research is to determine the soil characteristics that are most effective in reducing wall displacement. Comparing the different models showed that, overall, an increase in the internal friction angle, the friction angle between soil and wall, and the modulus of elasticity reduces wall displacement, whereas an increase in the specific weight of the soil increases it. Based on the results, the wall displacement mode was overturning in all cases, and the backfill soil was bulging. The results show that the internal friction angle and the friction angle between soil and wall have the highest impact on reducing wall displacement. One of the advantages of this study is that it takes into account all soil and wall parameters as well as the displacement distribution in the wall and the backfill soil. The aim is to provide the conditions that best reduce wall displacement.

Keywords: retaining wall, fem, soil and wall interaction, angle of internal friction of the soil, wall displacement

Procedia PDF Downloads 387
6829 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially in the stock market, has been a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the opening, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing four different models (linear regression, neural network, decision tree, and naive Bayes) on several stocks (Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson), the naive Bayes and linear regression models worked best in terms of testing-set accuracy. On the testing set, the naive Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which means that the decision tree model likely overfitted the training set.
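
A minimal sketch of the model comparison described above, using scikit-learn regressors on simulated price features (naive Bayes is a classifier and would apply to a discretized up/down target, so it is omitted here). The feature construction is an illustrative assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features: previous day's open/high/low/close predicting next close
rng = np.random.default_rng(0)
X = rng.normal(100, 10, (400, 4))
y = X[:, 3] + rng.normal(0, 1, 400)          # next close tracks previous close

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
    "decision tree": DecisionTreeRegressor(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: train R2={m.score(X_tr, y_tr):.3f}, "
          f"test R2={m.score(X_te, y_te):.3f}")
# An unconstrained decision tree fits the training set perfectly (R2 = 1.0),
# the overfitting signature the abstract describes.
```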

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 95
6828 Creating Energy Sustainability in an Enterprise

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

As we enter the new era of Artificial Intelligence (AI) and Cloud Computing, we mostly rely on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these industry sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and on curbing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus on, and growing demand for, renewable energy. With this growing demand, there is also growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In this paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (3 P's). Through this paper, we would like to provide an overall perspective on enterprise strategies, with a primary focus on bringing about the cultural shifts needed to adopt energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization), with the vision of a sustainable environment.

Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure

Procedia PDF Downloads 111
6827 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19

Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello

Abstract:

Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signaling systems. Among them, the system of mimic facial movements is worthy of attention. Basic emotion theorists have identified specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of having the lower half of the face covered by masks and therefore not observable. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles; see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of the electrical 'discharges' of contracting muscles (facial electromyography; EMG). However, the measuring system developed by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We were able to identify specific alterations in the subjects' hand movement patterns and upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.

Keywords: facial action coding system, COVID-19, masks, facial analysis

Procedia PDF Downloads 79
6826 End-to-End Spanish-English Sequence Learning Translation Model

Authors: Vidhu Mitha Goutham, Ruma Mukherjee

Abstract:

The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path toward higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder background, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source language detection. We acquire competitive results using a model trained on a bilingual corpus, providing prospective, ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution.
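
A minimal PyTorch sketch of the CNN-encoder/RNN-decoder-with-attention idea described above; the dimensions, the dot-product attention, and the single convolutional layer are illustrative assumptions, not the Fairseq configuration used in the paper.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal sketch: CNN encoder over source embeddings, GRU decoder
    attending over the encoder states (illustrative dimensions)."""
    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.encoder = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.decoder = nn.GRU(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        enc = self.encoder(self.src_emb(src).transpose(1, 2)).transpose(1, 2)
        dec_in = self.tgt_emb(tgt)
        h, logits = None, []
        for t in range(dec_in.size(1)):
            q = dec_in[:, t:t+1]                             # current target token
            scores = torch.bmm(q, enc.transpose(1, 2))       # dot-product attention
            ctx = torch.bmm(torch.softmax(scores, -1), enc)  # context vector
            o, h = self.decoder(torch.cat([q, ctx], -1), h)
            logits.append(self.out(o))
        return torch.cat(logits, 1)

model = TinySeq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))   # batch of Spanish token ids
tgt = torch.randint(0, 1200, (2, 5))   # shifted English token ids
print(model(src, tgt).shape)           # torch.Size([2, 5, 1200])
```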

Keywords: attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation

Procedia PDF Downloads 175
6825 Corporate Governance of State-Owned Enterprises: A Comparative Analysis

Authors: Adeyemi Adebayo, Barry Ackers

Abstract:

This paper comparatively analyses the corporate governance of SOEs in South Africa and Singapore in the context of the World Bank's framework for corporate governance of SOEs. This framework ensures that the analysis holistically covers key aspects of corporate governance of SOEs in these states. In order to ground our understanding of the paths taken by SOEs in the two states, the paper presents the evolution and reforms of SOEs in each state before analyzing key aspects of their corporate governance. The analysis shows that even though SOEs in South Africa and Singapore are comparable in a number of ways, there are notable differences. In this context, the paper finds that the main difference between the corporate governance of SOEs in South Africa and Singapore is their organizing model. Further, the analysis shows, among other findings, that SOE boards in Singapore are better remunerated. A further finding reveals that, even though some board members are politically connected, Singaporean SOE boards are better constituted on the basis of skills and experience compared to SOE boards in South Africa. Overall, the analysis opens up new debates and concludes by providing avenues for further research.

Keywords: corporate governance, comparative corporate governance, corporate governance framework, government business enterprises, government linked companies, organizing models, ownership models, state-owned companies, state-owned enterprises

Procedia PDF Downloads 219
6824 Evaluating Radiative Feedback Mechanisms in Coastal West Africa Using Regional Climate Models

Authors: Akinnubi Rufus Temidayo

Abstract:

Coastal West Africa is highly sensitive to climate variability, driven by complex ocean-atmosphere interactions that shape temperature, precipitation, and extreme weather. Radiative feedback mechanisms—such as water vapor feedback, cloud-radiation interactions, and surface albedo—play a critical role in modulating these patterns. Yet, limited research addresses these feedbacks in climate models specific to West Africa’s coastal zones, creating challenges for accurate climate projections and adaptive planning. This study aims to evaluate the influence of radiative feedbacks on the coastal climate of West Africa by quantifying the effects of water vapor, cloud cover, and sea surface temperature (SST) on the region’s radiative balance. The study uses a regional climate model (RCM) to simulate feedbacks over a 20-year period (2005-2025) with high-resolution data from CORDEX and satellite observations. Key mechanisms investigated include (1) Water Vapor Feedback—the amplifying effect of humidity on warming, (2) Cloud-Radiation Interactions—the impact of cloud cover on radiation balance, especially during the West African Monsoon, and (3) Surface Albedo and Land-Use Changes—effects of urbanization and vegetation on the radiation budget. Preliminary results indicate that radiative feedbacks strongly influence seasonal climate variability in coastal West Africa. Water vapor feedback amplifies dry-season warming, cloud-radiation interactions moderate surface temperatures during monsoon seasons, and SST variations in the Atlantic affect the frequency and intensity of extreme rainfall events. The findings suggest that incorporating these feedbacks into climate planning can strengthen resilience to climate impacts in West African coastal communities. Further research should refine regional models to capture anthropogenic influences like greenhouse gas emissions, guiding sustainable urban and resource planning to mitigate climate risks.
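
A common way to quantify a radiative feedback of the kind discussed above is to regress radiation anomalies on surface temperature anomalies; the sketch below does this on simulated data (the feedback value and noise levels are assumptions, not results from the study).

```python
import numpy as np

# Hypothetical monthly anomalies from an RCM run: surface temperature (K)
# and top-of-atmosphere net radiation (W m^-2)
rng = np.random.default_rng(7)
dT = rng.normal(0, 0.5, 240)                 # 20 years of monthly anomalies
dR = 1.8 * dT + rng.normal(0, 0.8, 240)      # assumed feedback ~1.8 W m^-2 K^-1

# Feedback parameter as the regression slope dR/dT
lam = np.polyfit(dT, dR, 1)[0]
print(f"estimated feedback parameter: {lam:.2f} W m^-2 K^-1")
```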

Keywords: West Africa, radiative feedback, climate, resilience, anthropogenic

Procedia PDF Downloads 10
6823 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method Used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the behavior of simple material models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of irregular geometry and domain, like bridge pavements, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem with an analytical solution given in the literature is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the results of ANSYS is compared with the analytical results to indicate the influence of the method used and of the time step size.
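
The Prony series mentioned above expresses the relaxation modulus of a generalized Maxwell model as G(t) = G_inf + sum_i G_i exp(-t/tau_i). A minimal sketch follows, with illustrative coefficient values that are not ANSYS defaults.

```python
import numpy as np

def prony_shear_modulus(t, g_inf, g_i, tau_i):
    """Relaxation modulus of a generalized Maxwell model as a Prony series:
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.asarray(t)[:, None]
    return g_inf + np.sum(g_i * np.exp(-t / tau_i), axis=1)

# Illustrative two-term Prony series (assumed values, in Pa and seconds)
t = np.logspace(-2, 3, 6)
G = prony_shear_modulus(t, g_inf=1.0e6,
                        g_i=np.array([4.0e6, 2.0e6]),
                        tau_i=np.array([0.1, 50.0]))
print(np.column_stack([t, G]))
```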

Keywords: generalized Maxwell model, finite element method, prony series, time step size, viscoelasticity

Procedia PDF Downloads 369
6822 Measurement of Rheologic Properties of Soft Tissue (Muscle Tissue) by Device Called Myotonometer

Authors: Petr Sifta, Vaclav Bittner, Martin Kysela, Matej Kolar

Abstract:

The purpose of the research described in this work is to answer how to measure the rheologic (viscoelastic) properties and tendo-deformational characteristics of soft tissue. The method should also resemble the muscle palpation examination known in clinical practice. For this purpose, an instrument with the working name 'myotonometer' has been used. At present, there is a lack of objective methods for assessing muscle tone through the viscous and elastic properties of soft tissue. That is why we decided to focus on creating or finding a quantitative and qualitative methodology capable of specifying muscle tone.

Keywords: rheologic properties, tendo–deformational characteristics, viscosity, elasticity, hypertonus

Procedia PDF Downloads 623
6821 Virtual Modelling of Turbulent Fibre Flow in a Low Consistency Refiner for a Sustainable and Energy Efficient Process

Authors: Simon Ingelsten, Anton Lundberg, Vijay Shankar, Lars-Olof Landström, Örjan Johansson

Abstract:

The flow in a low consistency disc refiner is simulated with the aim of identifying flow structures possibly of importance for a future study to optimise the energy efficiency of refining processes. A simplified flow geometry is used, in which a single groove of a refiner disc is modelled. Two different fibre models are used to simulate turbulent fibre suspension flow in the groove. The first is a Bingham viscoplastic fluid model, in which the fibre suspension is treated as a non-Newtonian fluid with a yield stress. The second is a new model proposed in a recent study, in which the suspended fibres' effect on the flow is accounted for through a modelled orientation distribution function (ODF). Both models yielded similar results with small differences. Certain flow characteristics that were expected and that have been reported in the literature were identified. Some of these flow characteristics may be of importance in a future effort to optimise the refiner geometry and increase energy efficiency. Further study and a more detailed flow model are, however, needed in order for the simulations to yield results valid for quantitative use in such an optimisation study. An outline of the next steps in such a study is proposed.
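
The Bingham viscoplastic model mentioned above is often implemented in CFD through a regularized effective viscosity, as in the sketch below; the parameter values are illustrative assumptions, not the suspension properties used in the study.

```python
import numpy as np

def bingham_effective_viscosity(gamma_dot, tau_y, mu_p, eps=1e-6):
    """Regularized Bingham viscosity for CFD: below the yield stress the
    material behaves as nearly rigid (very large viscosity), above it as
    a plastic fluid. eps avoids the singularity at zero shear rate."""
    return mu_p + tau_y / (np.abs(gamma_dot) + eps)

# Illustrative values for a fibre suspension (assumed, not measured)
shear_rates = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # 1/s
print(bingham_effective_viscosity(shear_rates, tau_y=5.0, mu_p=0.05))
```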

Keywords: disc refiner, fibre flow, sustainability, turbulence modelling

Procedia PDF Downloads 407
6820 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the presence/value of clinical conditions and their impact on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
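
The validation step described above, correlating aggregated impact scores with logistic regression log odds ratios, reduces to a rank correlation; a minimal sketch on hypothetical scores follows.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-feature scores: DNN impact scores vs. LR log odds ratios
impact_scores = np.array([0.9, 0.4, -0.2, 1.3, 0.1, -0.6])
log_odds      = np.array([0.8, 0.5, -0.1, 1.1, 0.0, -0.4])

rho, p = spearmanr(impact_scores, log_odds)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # the abstract reports rho = 0.74
```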

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 153
6819 How Unicode Glyphs Revolutionized the Way We Communicate

Authors: Levi Corallo

Abstract:

Typed language produced by humans on computers and cell phones differs significantly from previous modes of written language exchange. While acronyms remain one of the most predominant markings of typed language, another and perhaps more recent revolution in the way humans communicate has been the use of symbols or glyphs, primarily Emojis, globally introduced on the iPhone keyboard by Apple in 2008. This paper seeks to analyze the use of symbols in typed communication from both a linguistic and a machine learning perspective. The Unicode system will be explored, and methods of encoding will be juxtaposed with current machine and human perception. The ways typed symbols are used in conversation will be explored, as will current research methods dealing with Emojis, such as sentiment analysis and predictive text models. This study proposes that sequential analysis is a significant feature for analyzing Unicode characters in a corpus with machine learning. Current models that try to learn or translate the meaning of Emojis should learn from bi- and tri-grams of Emoji, as well as from the relationships between combinations of different Emoji used in tandem. The sociolinguistics of an entirely new vernacular, referred to here as 'typed language', will also be delineated in my analysis of Unicode glyphs from both a semantic and a technical perspective.
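
A minimal sketch of the sequential (bi-gram) analysis proposed above: extract the Emoji from a message via approximate code-point ranges and count adjacent pairs. The code-point ranges are a rough assumption; a dedicated library would be more complete.

```python
from collections import Counter

def is_emoji(ch):
    """Rough check via common emoji code-point blocks (an approximation)."""
    cp = ord(ch)
    return (0x1F300 <= cp <= 0x1FAFF) or (0x2600 <= cp <= 0x27BF)

def emoji_ngrams(text, n=2):
    """Sequential analysis: n-grams over the emoji found in a message."""
    glyphs = [ch for ch in text if is_emoji(ch)]
    return Counter(tuple(glyphs[i:i+n]) for i in range(len(glyphs) - n + 1))

msg = "great game 🙌🔥🔥 see you tomorrow 😂👍"
print(emoji_ngrams(msg, n=2))
```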

Keywords: unicode, text symbols, emojis, glyphs, communication

Procedia PDF Downloads 194
6818 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting the activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) analysis method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets, and the compounds were evaluated with various CoMSIA parameters to arrive at the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using the donor, partition coefficient, and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
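
A minimal sketch of the PLS workflow described above, fitting a PLS model on simulated field descriptors and computing r2 alongside a cross-validated q2; the data, the number of components, and the 5-fold scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical CoMSIA-like field descriptors for 46 compounds (simulated)
rng = np.random.default_rng(3)
X = rng.normal(size=(46, 200))            # steric/electrostatic/H-bond fields
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, 46)  # activity values

pls = PLSRegression(n_components=5)
pls.fit(X, y)
r2 = pls.score(X, y)                       # conventional r^2
y_cv = cross_val_predict(pls, X, y, cv=5)  # cross-validated predictions
q2 = 1 - np.sum((y - y_cv.ravel())**2) / np.sum((y - y.mean())**2)
print(f"r2 = {r2:.3f}, q2 = {q2:.3f}")
```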

Keywords: 3D QSAR, CoMSIA, triazoles, novel heterocycles

Procedia PDF Downloads 444
6817 Excitation Modeling for Hidden Markov Model-Based Speech Synthesis Based on Wavelet Analysis

Authors: M. Kiran Reddy, K. Sreenivasa Rao

Abstract:

The conventional Hidden Markov Model (HMM)-based speech synthesis system (HTS) uses only a pulse excitation model, which differs significantly from the natural excitation signal. Hence, buzziness can be perceived in the speech generated using HTS. This paper proposes an efficient excitation modeling method that can significantly reduce the buzziness and improve the quality of HMM-based speech synthesis. The proposed approach models the pitch-synchronous residual frames extracted from the residual excitation signal. Each pitch-synchronous residual frame is parameterized using 30 wavelet coefficients. These 30 wavelet coefficients are found to accurately capture the perceptually important information present in the residual waveform. In the synthesis phase, the residual frames are reconstructed from the generated wavelet coefficients and are pitch-synchronously overlap-added to generate the excitation signal. The proposed excitation modeling method is integrated into an HMM-based speech synthesis system. Evaluation results indicate that the speech synthesized with the proposed excitation model is significantly better than the speech generated using state-of-the-art excitation modeling methods.
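
A minimal sketch of parameterizing a residual frame with a fixed budget of wavelet coefficients, using PyWavelets; the wavelet family and the keep-the-largest selection rule are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt

def parameterize_residual_frame(frame, wavelet="db4", n_coeffs=30):
    """Sketch: keep the n_coeffs largest-magnitude wavelet coefficients of a
    pitch-synchronous residual frame and reconstruct from them (the
    selection rule is an assumption)."""
    coeffs = pywt.wavedec(frame, wavelet)
    flat, slices = pywt.coeffs_to_array(coeffs)
    keep = np.argsort(np.abs(flat))[-n_coeffs:]   # largest coefficients
    compressed = np.zeros_like(flat)
    compressed[keep] = flat[keep]
    rebuilt = pywt.array_to_coeffs(compressed, slices, output_format="wavedec")
    return pywt.waverec(rebuilt, wavelet)

frame = np.random.randn(160)   # one 10 ms residual frame at 16 kHz
print(parameterize_residual_frame(frame).shape)
```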

Keywords: excitation modeling, hidden Markov models, pitch-synchronous frames, speech synthesis, wavelet coefficients

Procedia PDF Downloads 249
6816 Texture Observation of Bending by XRD and EBSD Method

Authors: Takashi Sakai, Yuri Shimomura

Abstract:

Crystal orientation is a factor that affects microscopic material properties: it determines the anisotropy of a polycrystalline material and is closely related to the mechanical properties of the material. In this paper, the crystal orientation of polycrystalline pure copper was analyzed by two different methods, X-ray diffraction (XRD) and electron backscatter diffraction (EBSD). Compared with the latter method, the X-ray beam diameter in the former is larger, which makes it possible to measure the crystal orientation relatively macroscopically. By the above measurements, we investigated the change in crystal orientation and the internal microstructure of pure copper after bending.

Keywords: bending, electron backscatter diffraction, X-ray diffraction, microstructure, IPF map, orientation distribution function

Procedia PDF Downloads 330
6815 Naked Machismo: Uncovered Masculinity in an Israeli Home Design Campaign

Authors: Gilad Padva, Sigal Barak Brandes

Abstract:

This research centers on an unexpected Israeli advertising campaign for Elemento, a local furniture company, that eroticizes male nudity. The campaign comprises a series of printed ads depicting naked male models in effeminate positions, published in Haaretz, a small-circulation yet highly prestigious daily newspaper typically read by urban, middle-upper-class, left-wing Israelis. Apparently, this campaign embodies an alternative masculinity that challenges the prevalent machismo in Israeli society and advertising. Although some of the ads focus on young men in effeminate positions, they never expose their genitals and anuses, and their bodies are never permeable. The 2010s Elemento male models seemingly contrast with conventional representations of manhood in contemporary mainstream advertising. They display a somewhat inactive, passive, and self-indulgent masculinity that involves 'conspicuous leisure'. In the process of commodity fetishism, the advertised furniture is emptied of the original meaning of its production and then filled with new meanings in ways that both mystify the product and turn it into a fetish object. Yet our research critically reconsiders this sensational campaign as a sophisticated patriarchal parody that does not subvert but rather reconfirms and even fetishizes patriarchal premises; it parodies effeminacy rather than the prevalent (Israeli) machismo. Following Pierre Bourdieu's politics of cultural taste, our research reconsiders and criticizes the male models' domesticated masculinity in a fantasized, cosmopolitan, hedonistic habitus. Notwithstanding, we suggest that the Elemento campaign, despite its conformity, does question some Israeli and global axioms about gender roles, corporeal ideologies, idealized bodies, and domesticated phalluses and anuses. Although the naked truth is uncovered by this campaign, it does erect a vibrant discussion of contemporary masculinities and their exploitation in current mass consumption.

Keywords: male body, campaign, advertising, gender studies, men's studies, Israeli culture, masculinity, parody, effeminacy

Procedia PDF Downloads 212
6814 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems of complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
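
Stage 5 above, replacing logical with probabilistic elements, can be illustrated on a tiny system: once the operability function is orthogonalized, the reliability polynomial follows by substituting element probabilities. The sketch below checks the ODNF-derived polynomial against brute-force state enumeration; the example system is hypothetical.

```python
from itertools import product

# Minimal example: the system works if path {1,2} or path {1,3} works,
# i.e. operability function F = x1 & (x2 | x3)
def system_works(x1, x2, x3):
    return x1 and (x2 or x3)

def reliability(p):
    """Exact reliability by summing the probabilities of all working states;
    orthogonalization (ODNF) achieves the same result without enumeration."""
    total = 0.0
    for state in product([0, 1], repeat=3):
        if system_works(*state):
            prob = 1.0
            for xi, pi in zip(state, p):
                prob *= pi if xi else (1 - pi)
            total += prob
    return total

# ODNF of F: x1*x2 + x1*(1-x2)*x3  =>  R = p1*p2 + p1*(1-p2)*p3
p = [0.9, 0.8, 0.7]
print(reliability(p), p[0]*p[1] + p[0]*(1-p[1])*p[2])  # both 0.846
```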

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 73
6813 The Short-Term Stress Indicators in Home and Experimental Dogs

Authors: Madara Nikolajenko, Jevgenija Kondratjeva

Abstract:

Stress is a response of the body to physical or psychological environmental stressors. The cortisol level in blood serum is regarded as the main indicator of stress, but blood collection, animal preparation, and other activities can cause unpleasant conditions and induce an increase in these hormones. Therefore, less invasive methods of determining stress hormone levels are sought, for example, by measuring the cortisol level in saliva. The aim of the study is to find out the changes of stress hormones in blood and saliva in home and experimental dogs under simulated short-term stress conditions. The study included clinically healthy experimental beagle dogs (n=6) and clinically healthy home American Staffordshire terriers (n=6). The animals were let into a fenced area to adapt. Loud drum sounds (in cooperation with the 'Andžeja Grauda drum school') were used as a stressor. Blood serum samples were taken for sodium, potassium, glucose, and cortisol determination, and saliva samples for cortisol determination only. Control parameters were taken immediately before the start of the stressor, the next samples immediately after the stress, and the last measurements two hours after the stress. Electrolyte levels in blood serum were determined using the ion-selective electrode method (ILab Aries analyzer) and cortisol in blood serum and saliva using the electrochemiluminescence method (Roche Diagnostics). The blood glucose level was measured with a glucometer (ACCU-CHECK Active test strips). The cortisol level in the blood increased immediately after the stress in all home dogs (P < 0.05), but only in 33% (P < 0.05) of the experimental dogs. After two hours the measurement decreased in 83% (P < 0.05) of home dogs (in 50% returning to the control level) and in 83% (P < 0.05) of the experimental dogs. Cortisol in saliva immediately after the stress increased in 50% (P > 0.05) of home dogs and in 33% (P > 0.05) of the experimental dogs. After two hours, the measurements decreased in 83% (P > 0.05) of the home animals, whereas in the experimental dogs they decreased in only 17%, while in 49% the measurement was undetectable due to lack of material. Blood sodium, potassium, and glucose measurements did not show any significant changes. The expected combination of short-term stress indicators, in which all indicators increase immediately after the stressor and decrease after two hours, was confirmed in none of the animals. The authors can therefore conclude that each animal responds to a stressful situation with different physiological mechanisms and hormonal activity. Cortisol is released into saliva and blood at different speeds and is not an objective indicator of acute stress.

Keywords: animal behavior, cortisol, short-term stress, stress indicators

Procedia PDF Downloads 270
6812 Development of an Information System Based on the Establishment and Evaluation of Performance Rating by Application Part/Type of Remodeling Element Technologies

Authors: Sungwon Jung

Abstract:

The percentage of apartment houses 20 years or older in South Korea is approximately 20% (1.55 million houses), and an explosive increase of aged houses is expected around the first planned new towns. Accordingly, we should prepare for social issues such as difficulties in housing lease and the degradation of housing performance. The improvement of the performance of aged houses is essential for achieving the national energy and carbon reduction goals, and we should develop techniques to respond to the changing construction environment. Furthermore, we should develop a performance evaluation system appropriate to the demands of residents, such as the improvement of remodeling floor plans through performance improvement in line with the residence type of housing-vulnerable groups such as low-income households and elderly people living alone. For this purpose, remodeling techniques optimized for the target complexes must be spread through the development of various business models. In addition, it is necessary to improve the remodeling business by improving the laws and systems related to the improvement of residential performance and to prepare techniques to respond to increasing business demands. In other words, performance improvement, evaluation, and knowledge systems need to be researched as new issues related to remodeling that have not been addressed in existing research.

Keywords: remodelling, performance evaluation, web-based system, big data

Procedia PDF Downloads 224
6811 Conceptualizing the Cyber Insecurity Risk in the Ethics of Automated Warfare

Authors: Otto Kakhidze, Hoda Alkhzaimi, Adam Ramey, Nasir Memon

Abstract:

This paper provides an alternative, cyber-security-based conceptual framework for the ethics of automated warfare. The large body of work produced on fully or partially autonomous warfare systems tends to overlook malicious security factors, such as the possibility of technical attacks on these systems, when it comes to moral and legal decision-making. The argument provides a risk-oriented justification for why technical malicious risks cannot be dismissed in legal, ethical, and policy considerations when warfare models are being implemented and deployed. The assumptions of the paper are supported by a broader model that incorporates the perspective of technological vulnerabilities through the lenses of game theory, just war theory, and standard and non-standard defense ethics. The paper argues that a conventional risk-benefit analysis that does not consider ethical factors is insufficient for making legal and policy decisions on automated warfare. This approach will provide the substructure for security and defense experts, as well as legal scholars, ethicists, and decision theorists, to work towards common justificatory grounds that accommodate the technical security concerns overlooked in current legal and policy models.

Keywords: automated warfare, ethics of automation, inherent hijacking, security vulnerabilities, risk, uncertainty

Procedia PDF Downloads 357
6810 A Novel Combination Method for Computing the Importance Map of Image

Authors: Ahmad Absetan, Mahdi Nooshyar

Abstract:

The importance map is an image-based measure and a core part of content-aware resizing algorithms. Importance measures include image gradients, saliency, and entropy, as well as higher-level cues such as face detectors, motion detectors, and more. In this work we propose a new method to calculate the importance map, in which the map is generated automatically using a novel combination of image edge density and the Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to take important areas into account while preserving image quality.
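
A minimal sketch of the combination idea: blend a local edge-density map with a saliency map through a weighted sum. A simple spectral-residual saliency stands in for the Harel (graph-based) saliency used in the paper, and the blending weight and filter sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def importance_map(gray, alpha=0.5):
    """Weighted blend of local edge density and a saliency term (sketch)."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edges = np.hypot(gx, gy)
    edge_density = ndimage.uniform_filter(edges, size=9)   # local edge density

    # Spectral-residual saliency as a stand-in for Harel's graph-based saliency
    spectrum = np.fft.fft2(gray)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=3)

    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-8)
    return alpha * norm(edge_density) + (1 - alpha) * norm(sal)

img = np.random.rand(64, 64)   # stands in for a grayscale image
print(importance_map(img).shape)
```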

Keywords: content-aware image resizing, visual saliency, edge density, image warping

Procedia PDF Downloads 582
6809 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no truly effective method that can always track an unknown target and extract its trajectory. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation is carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target and takes into consideration all the practical aspects of the problem, such as a smooth transition in speed, a circular turn of the ship, noisy measurements, and the quantized bearing measurements that come from multi-beam sonar. The tests were carried out for many given test scenarios. In all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance; the range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has a low estimation error. However, the convergence time of the algorithm needs to be improved.
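
For reference, a single bearing-only measurement update of a standard extended Kalman filter is sketched below; the modified gain EKF used in the paper alters how the gain is computed, and the state layout, noise level, and bearing convention here are illustrative assumptions.

```python
import numpy as np

def ekf_bearing_update(x, P, z, obs_pos, R=np.radians(1.0)**2):
    """One EKF measurement update for bearing-only TMA (standard EKF step).

    State x = [px, py, vx, vy] (target), z = measured bearing from obs_pos,
    with bearing defined clockwise from north: atan2(dx, dy)."""
    dx, dy = x[0] - obs_pos[0], x[1] - obs_pos[1]
    r2 = dx**2 + dy**2
    h = np.arctan2(dx, dy)                        # predicted bearing
    H = np.array([[dy / r2, -dx / r2, 0.0, 0.0]]) # Jacobian of atan2(dx, dy)
    y = np.arctan2(np.sin(z - h), np.cos(z - h))  # innovation, wrapped to [-pi, pi]
    S = H @ P @ H.T + R                           # innovation covariance (scalar)
    K = P @ H.T / S                               # Kalman gain
    x_new = x + (K * y).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

x = np.array([5000.0, 3000.0, -3.0, 1.0])
P = np.diag([1e6, 1e6, 25.0, 25.0])
x, P = ekf_bearing_update(x, P, z=np.radians(58.0), obs_pos=np.array([0.0, 0.0]))
print(x)
```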

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 402