Search results for: the algorithmic calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1362

1332 Compensation Strategies and Their Effects on Employees' Motivation and Organizational Citizenship Behaviour in Some Manufacturing Companies in Lagos, Nigeria

Authors: Ade Oyedijo

Abstract:

This paper reports the findings of a study on the strategic and organizational antecedents and effects of two opposing pay patterns used by some manufacturing companies in Lagos, Nigeria, with particular reference to the behavioural correlates of the pay strategies considered. The assumed relationship between pay strategies and some organizational correlates, such as business and corporate strategies and firm size, was considered problematic in view of their likely implications for employee motivation, citizenship behaviour, and firm performance. The survey research method was used for the study. Structured, close-ended questions were used to collect primary data from the respondents. A multipart Likert scale was used to measure the pay orientations of the respondent firms and the job and organizational involvement of the respondent employees. Utilizing the hierarchical linear regression method and the t-test to analyze the data obtained from 48 manufacturing companies of various sizes and strategies, the study identified the dominant pattern of employee compensation in the sampled manufacturing companies. The study also revealed that the choice of a pay strategy was strongly influenced by organizational size as well as the type of business and corporate level strategies adopted by a firm. Firms pursuing a strategy of related and unrelated diversification are more likely to adopt the algorithmic compensation system than single-product firms because of their relatively larger size and scope. However, firms that pursue competitive advantage through a business level strategy of cost efficiency are more likely to use the experiential, variable pay strategy. The study found that an algorithmic compensation strategy is as effective as an experiential compensation strategy in promoting organizational citizenship behaviour and motivation of employees.

Keywords: compensation, corporate strategy, business strategy, motivation, citizenship behaviour, algorithmic, experiential, organizational commitment, work environment

Procedia PDF Downloads 360
1331 Research on the Calculation Method of Smartization Rate of Concrete Structure Building Construction

Authors: Hongyu Ye, Hong Zhang, Minjie Sun, Hongfang Xu

Abstract:

In the context of China's promotion of smart construction and building industrialization, evaluation standards exist for the development of building industrialization based on assembly-type construction; however, the evaluation of smart construction remains a challenge in the industry's development process. This paper addresses this issue by proposing a calculation and evaluation method for the smartization rate of concrete structure building construction. The study examines the factors of smart equipment application and their impact on costs throughout smart construction design, production, transfer, and construction. Based on this analysis, the paper presents a component-based evaluation method for the smartization rate and introduces calculation methods for assessing the smartization rate of buildings. It also proposes a rapid calculation method for determining the smartization rate using Building Information Modeling (BIM) and information expression technology, providing a foundation for the swift calculation of the smartization rate based on BIM and information technology. Ultimately, the work aims to promote the development of smart construction and the construction of high-quality buildings in China.
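
As a hedged illustration of the component-based idea described above, the sketch below computes a smartization rate as a cost-weighted share of smart-equipment work across components. The class fields, weights, and values are assumptions for illustration only; the paper's actual formula is not reproduced in the abstract.

```python
# Illustrative sketch of a component-based "smartization rate".
# The weighting scheme and attribute names are assumptions, not the paper's definition.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    cost: float          # cost share of the component in the project
    smart_share: float   # fraction of the component's work done with smart equipment (0..1)

def smartization_rate(components: list[Component]) -> float:
    """Cost-weighted share of smart-equipment work across all components."""
    total_cost = sum(c.cost for c in components)
    smart_cost = sum(c.cost * c.smart_share for c in components)
    return smart_cost / total_cost if total_cost else 0.0

rate = smartization_rate([
    Component("precast wall panel", cost=120.0, smart_share=0.8),
    Component("cast-in-place slab", cost=200.0, smart_share=0.3),
])
print(f"smartization rate: {rate:.1%}")
```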

Keywords: building industrialization, high quality building, smart construction, smartization rate, component

Procedia PDF Downloads 30
1330 Integrating Carbon Footprint into Supply Chain Management of Manufacturing Companies: Sri Lanka

Authors: Shirekha Layangani, Suneth Dharmaparakrama

Abstract:

In the manufacturing industry, the Environmental Management System (EMS) is a common term, and most organizations have currently obtained the environmental standard certification ISO 14001. In the Sri Lankan context, even though organizations adopt environmental management, a very limited number of companies tend to calculate their carbon footprints. This research discusses the demotivating factors that keep manufacturing organizations in Sri Lanka from integrating the calculation of the carbon footprint into their supply chains. Further, it identifies the benefits that manufacturing organizations can gain by implementing carbon footprint calculation. The manufacturing companies listed under the ISO 14001 certification were considered in this study in order to investigate the problems mentioned above, and 100% enumeration was used when the surveys were carried out. In order to gather essential data, two surveys were designed: one among manufacturing organizations that are currently engaged in calculating their carbon footprint and one among organizations that are not. The survey among the first set of manufacturing organizations revealed the benefits the organizations were able to gain by implementing carbon footprint calculation. The latter set of organizations revealed the demotivating factors that have led them not to integrate carbon footprint calculation into their supply chains. This paper summarizes the results obtained from the surveys, segregated by the market share of the manufacturing organizations, and indicates the benefits that can be obtained by implementing carbon footprint calculation depending on the market share of the manufacturing entity. Finally, the research gives suggestions to manufacturing organizations on the applicability of adopting carbon footprint calculation depending on the benefits that can be obtained.

Keywords: carbon footprint, environmental management systems (EMS), benefits of carbon footprint, ISO14001

Procedia PDF Downloads 346
1329 Discrimination in Insurance Pricing: A Textual-Analysis Perspective

Authors: Ruijuan Bi

Abstract:

Discrimination in insurance pricing is a topic of increasing concern, particularly in the context of the rapid development of big data and artificial intelligence. There is a need to explore the various forms of discrimination, such as direct and indirect discrimination, proxy discrimination, algorithmic discrimination, and unfair discrimination, and to understand their implications in insurance pricing models. This paper aims to analyze and interpret the definitions of discrimination in insurance pricing and to explore measures to reduce discrimination. It utilizes a textual analysis methodology, gathering qualitative data from relevant literature on definitions of discrimination and focusing on the various forms of discrimination and their implications in insurance pricing models. Through textual analysis, this paper identifies the specific characteristics and implications of each form of discrimination in the general insurance industry. This research contributes to the theoretical understanding of discrimination in insurance pricing. By analyzing and interpreting relevant literature, it provides insights into the definitions of discrimination and the laws and regulations surrounding it. This theoretical foundation can inform future empirical research on discrimination in insurance pricing drawing on probability theory.

Keywords: algorithmic discrimination, direct and indirect discrimination, proxy discrimination, unfair discrimination, insurance pricing

Procedia PDF Downloads 36
1328 High-Frequency Cryptocurrency Portfolio Management Using Multi-Agent System Based on Federated Reinforcement Learning

Authors: Sirapop Nuannimnoi, Hojjat Baghban, Ching-Yao Huang

Abstract:

Over the past decade, with the fast development of blockchain technology since the birth of Bitcoin, there has been a massive increase in the usage of cryptocurrencies. However, cryptocurrencies are often not seen as a dependable investment opportunity due to the market's erratic behavior and high price volatility. With the recent success of deep reinforcement learning (DRL), portfolio management can be modeled and automated. In this paper, we propose a novel DRL-based multi-agent system to automatically make proper trading decisions on multiple cryptocurrencies and gain profits in the highly volatile cryptocurrency market. We also extend this multi-agent system with horizontal federated transfer learning to better adapt to the inclusion of new cryptocurrencies in our portfolio; therefore, we can, through the concept of diversification, maximize our profits and minimize the trading risks. Experimental results from multiple simulation scenarios reveal that the proposed algorithmic trading system offers three promising advantages over other systems: maximized profits, minimized risks, and adaptability.
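
A minimal sketch of the horizontal federated averaging idea mentioned above, assuming each per-asset trading agent exposes its policy as a flat NumPy parameter vector weighted by the amount of local market data it saw; this illustrates the concept only and is not the authors' implementation.

```python
# Sketch of horizontal federated averaging across per-asset trading agents.
# Policy representation and weighting are assumptions for illustration.
import numpy as np

def federated_average(agent_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Average agent parameters, weighted by the amount of local data each agent used."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(agent_weights, sample_counts))

# Three agents trained on different cryptocurrencies, merged into a shared policy
# that could warm-start an agent for a newly added coin.
agents = [np.random.randn(8) for _ in range(3)]
counts = [10_000, 7_500, 12_000]
shared_policy = federated_average(agents, counts)
print(shared_policy)
```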

Keywords: cryptocurrency portfolio management, algorithmic trading, federated learning, multi-agent reinforcement learning

Procedia PDF Downloads 86
1327 Structured Tariff Calculation to Promote Geothermal for Energy Security

Authors: Siti Mariani, Arwin DW Sumari, Retno Gumilang Dewi

Abstract:

This paper analyzes the necessity of a structured tariff calculation for geothermal electricity in Indonesia. Indonesia is blessed with abundant natural resources and a choice of energy resources for generating electricity, ranging from coal, gas, biomass, and hydro to geothermal, creating fierce competition in electricity tariffs. While geothermal is in line with the energy security principle and the green growth initiative, it requires huge capital funding. Geothermal electricity development consists of project phases, each with its own financial characteristics. The Indonesian government has set a support measure in the form of a ceiling price for the geothermal electricity tariff of 11 U.S. cents/kWh. However, the government has not set a levelized cost of geothermal, as an indication of the lower-limit capacity class to which support is given. The government should establish a levelized cost of geothermal energy to reflect its financial capability in supporting geothermal development. Aside from that, the government also needs to establish a structured tariff calculation to reflect fair and transparent business cooperation.
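
For reference, a generic levelized cost of electricity definition of the kind the abstract argues the government should establish is shown below; this is the standard textbook form, not a formula taken from the paper.

```latex
\[
\mathrm{LCOE} \;=\;
\frac{\displaystyle\sum_{t=0}^{T} \frac{I_t + M_t + F_t}{(1+r)^{t}}}
     {\displaystyle\sum_{t=0}^{T} \frac{E_t}{(1+r)^{t}}}
\]
```

Here I_t is the investment expenditure, M_t the operation and maintenance cost, F_t the fuel cost (small for geothermal), E_t the electricity generated in year t, r the discount rate, and T the project lifetime.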

Keywords: load factor, levelized cost of geothermal, geothermal power plant, structured tariff calculation

Procedia PDF Downloads 408
1326 The Influence of Cycle Index of Simulation Condition on Main Bearing Wear Prognosis of Internal Combustion Engine

Authors: Ziyu Diao, Yanyan Zhang, Zhentao Liu, Ruidong Yan

Abstract:

The update frequency of the wear profile in main bearing wear prognosis of an internal combustion engine plays an important role in calculation efficiency and accuracy. In order to investigate the appropriate cycle index of the simplified working condition of the wear simulation, the main bearing-crankshaft journal friction pair of a diesel engine in service was studied in this paper. The method of multi-body dynamics simulation was used, and the wear prognosis model of the main bearing was established. Several groups of cycle indexes were set up for the wear calculation, and the maximum wear depth and wear profile were compared and analyzed. The results showed that when the cycle index reaches 3, the maximum deviation rate of the maximum wear depth is about 2.8%, and the maximum deviation rate comes to 1.6% when the cycle index reaches 5. This study provides guidance and suggestions for the optimization of wear prognosis by selecting an appropriate value of the cycle index according to the required calculation cost and accuracy of the simulation work.
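
A small sketch of the deviation-rate comparison described above, assuming the deviation rate is the relative difference of the maximum wear depth against a reference simulation; the depth values below are placeholders, not the paper's data.

```python
# Sketch of a deviation-rate check versus a reference wear simulation.
# Depth values are placeholders for illustration only.
def deviation_rate(depth: float, depth_ref: float) -> float:
    """Relative deviation of maximum wear depth from the reference result."""
    return abs(depth - depth_ref) / depth_ref

max_depth_by_cycle_index = {1: 5.0e-6, 3: 4.6e-6, 5: 4.5e-6}  # m, placeholder values
depth_ref = 4.4e-6                                            # m, reference simulation
for k, d in max_depth_by_cycle_index.items():
    print(f"cycle index {k}: deviation rate {deviation_rate(d, depth_ref):.1%}")
```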

Keywords: cycle index, deviation rate, wear calculation, wear profile

Procedia PDF Downloads 131
1325 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM

Authors: Jaechoul Shin, Juhwan Hwang

Abstract:

When the BIM method is applied to civil engineering in the area of free-formed structures, a comparatively high rate of construction productivity can be expected, as in building engineering. In this research, we evaluated quantity calculation errors by applying the method to earthwork and bridge construction (e.g., a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders and an inverted-tee bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction, and implemented a BIM-based 3D modeling quantity survey. We confirmed the high reliability of the BIM-based method in structure work, in which errors occurred in the range of -6% to +5%. In particular, rock-type quantity calculation errors in the range of -14% to +13% of the earthwork quantity helped in understanding the problems of, and improving, the existing 2D-CAD based quantity calculation, which demonstrates the benefit and applicability of the BIM method in civil engineering. The routine method for earthwork quantities has an error tolerance comparable to that of structure work, but the significant errors in the calculated rock-type quantities show that the reliability of 2D-based volume calculation can be a problem. Estimating earthwork quantities based on 3D-BIM therefore has better reliability than the routine method. Considering the benefits of integrating information across the design, construction, and maintenance levels, the applicability and effectiveness of introducing BIM design in civil engineering were confirmed.

Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork

Procedia PDF Downloads 414
1324 Energy Consumption and GHG Production in Railway and Road Passenger Regional Transport

Authors: Martin Kendra, Tomas Skrucany, Jozef Gnap, Jan Ponicky

Abstract:

This paper deals with the modeling and simulation of energy consumption and GHG production of two different modes of regional passenger transport: road and railway. These two transport modes use the same type of fuel, diesel. Modeling and simulation of energy consumption in transport are often used due to their satisfactory accuracy and cost efficiency. The calculation is based on EN standards, technical information from vehicle producers, and track characteristics. The calculation includes the maximal theoretical capacity of the bus and the train as well as real passenger counts measured in operation. The final energy consumption and GHG production are calculated using software simulation, and the simulation is evaluated on a 'well-to-wheel' basis.
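
The comparison described above reduces, per mode, to energy and CO2 per passenger-kilometre. The sketch below shows that arithmetic with rough, clearly assumed diesel factors and load figures; it is a tank-to-wheel illustration only, whereas the paper evaluates the full well-to-wheel chain with EN-standard data.

```python
# Tank-to-wheel illustration of specific energy and CO2 per passenger-kilometre.
# Fuel factors and consumption/load figures are rough assumptions, not the paper's data.
DIESEL_ENERGY_MJ_PER_L = 36.0   # approximate lower heating value of diesel
DIESEL_CO2_KG_PER_L = 2.64      # approximate combustion emission factor

def per_passenger_km(fuel_l_per_100km: float, passengers: float):
    energy = fuel_l_per_100km * DIESEL_ENERGY_MJ_PER_L / 100.0 / passengers  # MJ per pkm
    co2 = fuel_l_per_100km * DIESEL_CO2_KG_PER_L / 100.0 / passengers        # kg CO2 per pkm
    return energy, co2

print(per_passenger_km(fuel_l_per_100km=30.0, passengers=40))    # regional bus, assumed load
print(per_passenger_km(fuel_l_per_100km=180.0, passengers=120))  # diesel train, assumed load
```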

Keywords: bus, energy consumption, GHG production, simulation, train

Procedia PDF Downloads 413
1323 Substructure Method for Thermal-Stress Analysis of Liquid-Propellant Rocket Engine Combustion Chamber

Authors: Olga V. Korotkaya

Abstract:

This article is devoted to the important problem of calculating the deflected mode of the combustion chamber and the nozzle end of a new liquid-propellant rocket cruise engine. Special attention is given to the methodology of the calculation. Three operating modes are considered. The analysis has been conducted in ANSYS software. The methods used in this research are mathematical modelling, the substructure method, cyclic symmetry, and the finite element method. The calculation has been carried out at the request of S. P. Korolev Rocket and Space Corporation «Energia». The main results are practical: the proposed methodology and the created models can be used for a wide range of strength problems.

Keywords: combustion chamber, cyclic symmetry, finite element method, liquid-propellant rocket engine, nozzle end, substructure

Procedia PDF Downloads 470
1322 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment

Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang

Abstract:

2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks can differ. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that determine the performance of both distributed frameworks. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.

Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks

Procedia PDF Downloads 174
1321 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea

Abstract:

Numerical methods like binomial and trinomial trees and finite difference methods can be used to price a wide range of options contracts for which there are no known analytical solutions. American options are the most famous options of that kind. Besides numerical methods, American options can be valued with approximation formulas, like the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computer time spent to carry out the calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However, to be able to conduct a comparative analysis of numerical methods and the Bjerksund-Stensland formulas, we limit the computer calculation time of the numerical methods to less than one second. Therefore, we ask the question: which method is most accurate at nearly the same computer calculation time?
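
As a concrete example of the numerical side of this comparison, here is a minimal Cox-Ross-Rubinstein binomial-tree pricer for an American put; the parameters are illustrative, and the Bjerksund-Stensland closed-form approximation is not reproduced here.

```python
# Minimal CRR binomial-tree pricer for an American put option.
import math

def american_put_binomial(S, K, r, sigma, T, steps=500):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-move probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the last time step
    values = [max(K - S * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    # backward induction with an early-exercise check at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = K - S * u**j * d**(i - j)
            values[j] = max(cont, exercise)
    return values[0]

print(american_put_binomial(S=100, K=100, r=0.05, sigma=0.2, T=1.0))
```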

Keywords: Bjerksund and Stensland approximations, computational analysis, finance, options pricing, numerical methods

Procedia PDF Downloads 419
1320 Modeling Nanomechanical Behavior of ZnO Nanowires as a Function of Nano-Diameter

Authors: L. Achou, A. Doghmane

Abstract:

Elastic performances, as an essential property of nanowires (NWs), play a significant role in the design and fabrication of modern nanodevices. In this paper, our interest is focused on ZnO NWs, investigating the effect of wire diameter (Dwire ≤ 400 nm) on elastic properties. The plotted data reveal that a strong size dependence of the elastic constants exists when the wire diameter is smaller than ~100 nm. For larger diameters (Dwire > 100 nm), they approach their corresponding bulk values. To enrich this study, we make use of the scanning acoustic microscopy simulation technique. The calculation methodology consists of several steps: determination of longitudinal and transverse wave velocities, calculation of reflection coefficients, calculation of acoustic signatures, and Rayleigh velocity determination. Quantitatively, it was found that changes in ZnO diameters over the range 1 nm ≤ Dwire ≤ 100 nm lead to similar exponential variations, for all elastic parameters, of the form A = a + b exp(-Dwire/c), where a, b, and c are characteristic constants of a given parameter. The developed relation can be used to predict the elastic properties of such NWs by just knowing the diameter, and vice versa.
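
The quoted relation can be applied directly once (a, b, c) are known for a given elastic parameter; the sketch below evaluates it with placeholder coefficients, since the fitted values are not given in the abstract.

```python
# Evaluation of the size-dependence relation A = a + b*exp(-Dwire/c).
# Coefficients below are placeholders; the paper fits a separate (a, b, c) per elastic parameter.
import numpy as np

def elastic_parameter(d_wire_nm: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    return a + b * np.exp(-d_wire_nm / c)

diameters = np.array([1.0, 10.0, 50.0, 100.0, 400.0])              # nm
print(elastic_parameter(diameters, a=140.0, b=-60.0, c=20.0))      # placeholder GPa-like values
```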

Keywords: elastic properties, nanowires, semiconductors, theoretical model, ZnO

Procedia PDF Downloads 137
1319 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper emphasizes the algorithmic implementation of logical constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors: we investigate how generative design can be utilized to optimize buildings' performance, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.

Keywords: generative design, computational design, parametric design, algorithmic modeling

Procedia PDF Downloads 22
1318 Acoustic Finite Element Analysis of a Slit Model with Consideration of Air Viscosity

Authors: M. Sasajima, M. Watanabe, T. Yamaguchi Y. Kurosawa, Y. Koike

Abstract:

In very narrow pathways, the speed of sound propagation and the phase of sound waves change due to the air viscosity. We have developed a new Finite Element Method (FEM) that includes the effects of air viscosity for modeling a narrow sound pathway. This method is developed as an extension of the existing FEM for porous sound-absorbing materials. The numerical calculation results for several three-dimensional slit models using the proposed FEM are validated against existing calculation methods.

Keywords: simulation, FEM, air viscosity, slit

Procedia PDF Downloads 340
1317 Evaluation of Carbon Dioxide Pressure through Radial Velocity Difference in Arterial Blood Modeled by Drift Flux Model

Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes

Abstract:

In this paper, we are interested in determining the carbon dioxide pressure in arterial blood through the radial velocity difference. The blood was modeled as a two-phase mixture (an aqueous carbon dioxide solution with carbon dioxide gas) using the drift flux model and the Young-Laplace equation. The distributions of mixture velocities determined from the considered model permitted the calculation of the radial velocity distributions for different values of the mean mixture pressure and the calculation of the mean carbon dioxide pressure from the mean mixture pressure. The radial velocity distributions are used to deduce a calculation method for the mean mixture pressure through the radial velocity difference between two positions, which is measured by ultrasound. The mean carbon dioxide pressure is then deduced from the mean mixture pressure.
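
For reference, the two relations invoked above take the following generic forms (a spherical-bubble Young-Laplace equation and the drift flux closure for the gas-phase velocity); the paper's specific boundary conditions and derivation are not reproduced here.

```latex
\[
p_{\mathrm{CO_2}} - p_{\mathrm{mix}} \;=\; \frac{2\sigma}{R},
\qquad
u_{g} \;=\; C_{0}\, j + u_{gj}
\]
```

Here sigma is the surface tension, R the bubble radius, u_g the gas-phase velocity, j the mixture volumetric flux, C_0 the distribution parameter, and u_gj the drift velocity.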

Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity difference

Procedia PDF Downloads 391
1316 Modeling and Calculation of Physical Parameters of the Pollution of Water by Oil and Materials in Suspensions

Authors: Ainas Belkacem, Fourar Ali

Abstract:

The present study focuses on the mathematical modeling and calculation of the physical parameters of water pollution by oil and sand in a regime fully dispersed in water. In this study, the sand particles and oil are suspended in the case of fully developed turbulence. The study aims to understand, model, and predict the viscosity, structure, and dynamics of these types of mixtures. The work carried out is numerical and validated by experiment.

Keywords: multi phase flow, pollution, suspensions, turbulence

Procedia PDF Downloads 209
1315 A Bibliometric Analysis on Filter Bubble

Authors: Misbah Fatma, Anam Saiyeda

Abstract:

This analysis charts the introduction and expansion of research into the filter bubble phenomenon over the last 10 years using a large dataset of academic publications. This bibliometric study demonstrates how interdisciplinary filter bubble research is. The identification of key authors and organizations leading filter bubble study sheds light on collaborative networks and knowledge transfer. Relevant papers are organized based on themes including algorithmic bias, polarisation, social media, and ethical implications through a systematic examination of the literature. The study also traces how these patterns have changed over time. It further examines how research is distributed globally, showing geographic patterns and discrepancies in scholarly output. The results of this bibliometric analysis allow us to fully comprehend the development and reach of filter bubble research. By exposing dominant themes, interdisciplinary collaborations, and geographic patterns, this study offers insights into the ongoing discussion surrounding information personalization and its implications for societal discourse, democratic participation, and the potential risks to an informed citizenry. In order to solve the problems caused by filter bubbles and to advance a more diverse and inclusive information environment, this analysis is essential for scholars and researchers.

Keywords: bibliometric analysis, social media, social networking, algorithmic personalization, self-selection, content moderation policies and limited access to information, recommender system and polarization

Procedia PDF Downloads 85
1314 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.

Keywords: chatbot, GPT 3.5, metacognition, symbiosis

Procedia PDF Downloads 24
1313 Influence of Non-Carcinogenic Risk on Public Health

Authors: Gulmira Umarova

Abstract:

Data on the assessment of the influence of environmental risk on the health of the population of Uralsk, in the western region of Kazakhstan, are presented. The calculation of non-carcinogenic risks was performed for air pollutants such as sulfur dioxide, nitrogen oxides, hydrogen sulfide, and carbon monoxide. The critical organs and systems affected by the above-mentioned substances were taken into account, and indicators of primary and general morbidity by disease class among the population were considered. The quantitative risk of the influence of the substances on organs and systems is established from the results of the calculation.
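
A minimal sketch of the standard non-carcinogenic screening arithmetic behind such an assessment: a hazard quotient per pollutant and a hazard index per organ or system. All concentrations and reference values below are placeholders, not the study's data.

```python
# Hazard quotient (HQ) and hazard index (HI) screening sketch.
# Concentrations and reference values are placeholders for illustration only.
reference_conc = {   # mg/m3, illustrative chronic inhalation reference concentrations
    "sulfur dioxide": 0.05,
    "nitrogen dioxide": 0.04,
    "hydrogen sulfide": 0.002,
    "carbon monoxide": 3.0,
}
measured_conc = {    # mg/m3, placeholder annual mean concentrations
    "sulfur dioxide": 0.02,
    "nitrogen dioxide": 0.06,
    "hydrogen sulfide": 0.001,
    "carbon monoxide": 1.2,
}

hq = {k: measured_conc[k] / reference_conc[k] for k in reference_conc}
hi = sum(hq.values())   # HI > 1 flags a potential non-carcinogenic health concern
print(hq, hi)
```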

Keywords: environment, health, morbidity, non-carcinogenic risk

Procedia PDF Downloads 89
1312 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms

Authors: Cristian Pauna

Abstract:

With 2020 only a few steps away, investing in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence system of any modern financial company. Investment decisions are assisted and/or made automatically by computers using mathematical algorithms today. The complexity of these algorithms requires computer assistance in the investment process. This paper will present several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex DAX30. It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investments with good results. This paper will present the way to automate these investment decisions. Automated signals will be built using all of these strategies. Three major types of investment strategies were found in this study; the types are separated by the target length and by the exit strategy used. The exit decisions will also be automated, and the paper will present the specifics of each investment type. A comparative study will also be included in this paper in order to reveal the differences between the strategies. Based on these results, the profit and the capital exposure will be compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, some major investment strategies will be revealed and compared in order to be considered for inclusion in any automated investment system.

Keywords: algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies

Procedia PDF Downloads 161
1310 Determination Power and Sample Size Zero-Inflated Negative Binomial Dependent Death Rate of Age Model (ZINBD): Regression Analysis Mortality Acquired Immune Deficiency Syndrome (AIDS)

Authors: Mohd Asrul Affendi Bin Abdullah

Abstract:

Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with this model. This paper verifies how to present the percentage of power approximation for categorical models and then extends it to zero-inflated models. The Wald test was chosen to determine the power and sample size for the AIDS death rate because it is frequently used, owing to its approachability and to several major recent contributions to sample size calculation for this test. The power calculation can be conducted when covariates are used in modeling 'excess zero' data with a categorical covariate. An analysis of an AIDS death rate study is used in this paper. The aim of this study is to determine the power of the sample size (N = 945) for the categorical death rate based on the parameter estimates in the simulation study.
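
A hedged sketch of a normal-approximation power calculation for a Wald test of a single coefficient, of the kind the abstract describes; the effect size and per-observation standard error are assumed values, not estimates from the AIDS data.

```python
# Normal-approximation power of a two-sided Wald test for one regression coefficient.
# Effect size and per-observation standard error are assumptions for illustration.
from scipy.stats import norm

def wald_power(beta: float, se_per_obs: float, n: int, alpha: float = 0.05) -> float:
    """Power to detect coefficient beta when SE(beta) = se_per_obs / sqrt(n)."""
    se = se_per_obs / n**0.5
    z_crit = norm.ppf(1 - alpha / 2)
    z = abs(beta) / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

print(wald_power(beta=0.3, se_per_obs=4.0, n=945))   # N = 945 as in the abstract
```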

Keywords: power sample size, Wald test, standardize rate, ZINBDR

Procedia PDF Downloads 408
1309 Characterization of the In0.53Ga0.47As n+nn+ Photodetectors

Authors: Fatima Zohra Mahi, Luca Varani

Abstract:

We present an analytical model for the calculation of the sensitivity, the spectral current noise, and the detectivity of an optically illuminated In0.53Ga0.47As n+nn+ diode. The photocurrent due to the excess carriers is obtained by solving the continuity equation. Moreover, the current noise level is evaluated at room temperature and under a constant voltage applied between the diode terminals. The analytical calculation of the current noise in the n+nn+ structure is developed. The responsivity and the detectivity are discussed as functions of the doping concentrations and the emitter layer thickness in a one-dimensional homogeneous n+nn+ structure.
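
For reference, the steady-state continuity equation for optically generated excess carriers typically takes the generic form below; the paper's exact boundary-value problem for the n+nn+ structure is not reproduced here.

```latex
\[
D_n \frac{\mathrm{d}^2 \Delta n(x)}{\mathrm{d}x^2} - \frac{\Delta n(x)}{\tau_n} + G(x) = 0,
\qquad
G(x) = \alpha \Phi_0 (1 - R)\, e^{-\alpha x}
\]
```

Here D_n is the diffusion coefficient, tau_n the carrier lifetime, alpha the absorption coefficient, Phi_0 the incident photon flux, and R the surface reflectivity.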

Keywords: detectivity, photodetectors, continuity equation, current noise

Procedia PDF Downloads 610
1308 Calculation and Comparison of Turbofan Engine Performance Parameters with Various Definitions

Authors: O. Onal, O. Turan

Abstract:

In this paper, some performance parameters of a selected turbofan engine (JT9D) are analyzed. The engine is a high-bypass turbofan which powers a wide-body aircraft and produces 206 kN of thrust (thrust/weight ratio of 5.4). The objective parameters for the engine include the calculation of power, specific fuel consumption, specific thrust, and engine propulsive, thermal, and overall efficiencies according to the various definitions given in the literature. Furthermore, in the case study, the wasted energy from the exhaust is calculated at the maximum power setting (i.e., the take-off phase) of the engine.
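
One common set of definitions for these parameters is listed below; since the abstract notes that several definitions exist in the literature, these textbook forms are not necessarily the exact ones compared in the paper.

```latex
\[
F_s = \frac{T}{\dot{m}_a}, \qquad
\mathrm{TSFC} = \frac{\dot{m}_f}{T}, \qquad
\eta_{th} = \frac{\Delta \dot{E}_{kin}}{\dot{m}_f\, Q_{LHV}}, \qquad
\eta_{p} = \frac{T\, V_0}{\Delta \dot{E}_{kin}}, \qquad
\eta_{o} = \eta_{th}\,\eta_{p} = \frac{T\, V_0}{\dot{m}_f\, Q_{LHV}}
\]
```

Here T is the net thrust, m_a and m_f the air and fuel mass flow rates, V_0 the flight speed, Delta E_kin the rate of kinetic energy addition to the flow, and Q_LHV the fuel lower heating value.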

Keywords: turbofan, power, efficiency, thrust

Procedia PDF Downloads 268
1308 Investigating Jacket-Type Offshore Structures Failure Probability by Applying the Reliability Analyses Methods

Authors: Majid Samiee Zonoozian

Abstract:

For such important constructions as jacket-type platforms, scrupulous attention in the analysis, design, and calculation processes is needed. Reliability assessment has become an extensively used method for the safety calculation of jacket platforms. In the present study, a methodology for the reliability calculation of an offshore jacket platform against the extreme wave loading state is presented. Sensitivity analyses are applied to acquire the nonlinear response of jacket-type platforms against extreme waves. The jacket structure is modeled by applying a nonlinear finite-element model with regard to the behavior of the tubular members. The probability of a member's failure under extreme wave loading is computed by a finite-element reliability code. The FORM and SORM approaches are applied for the calculation of safety indices, and the reliability indexes have been determined. A case study for a fixed jacket-type structure positioned in the Persian Gulf is examined by means of the proposed method. Furthermore, the failure criteria are defined using the equations suggested by the 21st edition of API RP 2A-WSD for the design of the tubular members of jacket-type structures under combined axial compression and bending. Consequently, the effect of wave loads on the reliability index was considered.
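
A minimal sketch of the reliability-index arithmetic behind FORM, shown for a simple linear limit state g = R - S with independent normal capacity and load; the numbers are placeholders, not member data from the platform.

```python
# Reliability index and first-order failure probability for a linear limit state g = R - S.
# Mean and standard deviation values are placeholders for illustration.
from scipy.stats import norm

def beta_linear(mu_r, sigma_r, mu_s, sigma_s):
    """Hasofer-Lind reliability index for g = R - S with independent normal R and S."""
    return (mu_r - mu_s) / (sigma_r**2 + sigma_s**2) ** 0.5

beta = beta_linear(mu_r=1200.0, sigma_r=120.0, mu_s=800.0, sigma_s=160.0)  # e.g. kN capacity vs wave load
pf = norm.cdf(-beta)     # first-order estimate of the failure probability
print(beta, pf)
```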

Keywords: jacket-type structure, reliability, failure probability, tubular members

Procedia PDF Downloads 146
1307 Hidden Markov Model for Financial Limit Order Book and Its Application to Algorithmic Trading Strategy

Authors: Sriram Kashyap Prasad, Ionut Florescu

Abstract:

This study models intraday asset prices as driven by a Markov process. This work identifies the latent states of a Hidden Markov model, using limit order book data (trades and quotes) to continuously estimate the states throughout the day. This work builds a trading strategy using the estimated states to generate signals. The strategy utilizes the current state to recalibrate buy/sell levels and the transitions between states to trigger a stop-loss when adverse price movements occur. The proposed trading strategy is tested on the Stevens High Frequency Trading (SHIFT) platform. SHIFT is a highly realistic market simulator with functionalities for creating an artificial market simulation by deploying agents, trading strategies, distributing initial wealth, etc. In the implementation, several assets on the NASDAQ exchange are used for testing. In comparison to a strategy with static buy/sell levels, this study shows that the number of limit orders that get matched and executed can be increased. Executing limit orders earns rebates on NASDAQ. The system can capture jumps in the limit order book prices, provide dynamic buy/sell levels, and trigger stop-loss signals to improve the PnL (Profit and Loss) performance of the strategy.
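
A hedged sketch of the state-estimation step using hmmlearn's GaussianHMM on simple order-book features (mid-price return, relative spread, depth imbalance); the three-state setup and the placeholder feature matrix are assumptions, not the authors' exact specification.

```python
# Latent-regime estimation with a Gaussian HMM over simple limit-order-book features.
# The feature matrix below is random placeholder data; real features come from trades and quotes.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(42)
X = rng.standard_normal((5000, 3))   # rows: quote updates; cols: return, spread, imbalance

model = GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
model.fit(X)
states = model.predict(X)            # latent regime per observation
print(np.bincount(states))           # time spent in each regime
```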

Keywords: algorithmic trading, Hidden Markov model, high frequency trading, limit order book learning

Procedia PDF Downloads 123
1306 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement

Authors: Sai Sankalp Vemavarapu

Abstract:

This paper discusses the application of machine learning in the field of transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids us in assessing the current road conditions and taking appropriate measures to avoid any inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic modulus of multi-layered pavement. The proposed DE global optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem where the cost function to be minimized is defined as the root mean square error between measured and computed deflections. The optimal solution, which is the elastic modulus in this case, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the optimum value are predicted so that the results are reproducible whenever the need arises. The algorithm's performance in varied scenarios was analyzed by changing the input parameters. The prediction was well within the permissible error, establishing the supremacy of DE.
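
A compact sketch of the DE back-calculation loop with SciPy's differential_evolution; the forward deflection model below is a crude stand-in (the paper couples DE with a proper multilayer pavement response model), and the "measured" bowl is synthesized from assumed ground-truth moduli.

```python
# DE back-calculation sketch: minimize RMSE between measured and computed deflections.
# The forward model and all numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import differential_evolution

true_moduli = np.array([3500.0, 900.0, 120.0])   # MPa, assumed "ground truth" for the demo
offsets = np.arange(0.0, 1.81, 0.30)             # m, assumed geophone offsets

def forward_model(moduli):
    """Placeholder response model mapping layer moduli (MPa) to a deflection bowl (mm)."""
    e1, e2, e3 = moduli
    return 50.0 / (0.02 * e1 + 0.2 * e2 * (1 + offsets) + 2.0 * e3 * (1 + offsets) ** 2)

rng = np.random.default_rng(0)
measured = forward_model(true_moduli) * (1 + 0.02 * rng.standard_normal(offsets.size))

def rmse(moduli):
    return float(np.sqrt(np.mean((forward_model(moduli) - measured) ** 2)))

bounds = [(500.0, 20000.0), (100.0, 5000.0), (20.0, 500.0)]  # MPa ranges: surface, base, subgrade
result = differential_evolution(rmse, bounds, seed=1)
print(result.x, result.fun)
```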

Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation

Procedia PDF Downloads 137
1305 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
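
As an illustration of the plane-extraction stage (walls, floors) in such a pipeline, the sketch below peels dominant planes off a point cloud with Open3D's RANSAC segmentation; the file name and thresholds are placeholders, and the authors' own algorithms are not reproduced here.

```python
# RANSAC plane extraction from a scanned point cloud using Open3D (illustrative stage only).
# "scan.ply" is a placeholder input path; thresholds are assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

planes = []
rest = pcd
for _ in range(6):   # peel off up to six dominant planes (wall/floor candidates)
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3,
                                        num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))          # (a, b, c, d) and member points
    rest = rest.select_by_index(inliers, invert=True)

for (a, b, c, d), patch in planes:
    print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0 with {len(patch.points)} points")
```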

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 22
1304 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus in recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely, Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: Part-of-speech tagging, n-gram segmentation, structure identification, significance of occurrence calculation, text range calculation, and overlapping sequence reduction. Significance of occurrence calculation is the crux of this study. It includes the computing of both the internal association and the boundary independence of a CSS and tests the occurring significance of the CSS from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become the habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching and writing.
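
One building block of the "significance of occurrence" step, the internal association of adjacent words, can be illustrated with pointwise mutual information as below; the toy token list and the bigram-level measure are simplifications of the paper's n-gram and LocalMaxs machinery.

```python
# Pointwise mutual information of adjacent word pairs as a simple internal-association measure.
# The token list is a toy example, not the academic corpus used in the paper.
import math
from collections import Counter

def bigram_pmi(tokens: list[str]) -> dict[tuple[str, str], float]:
    """PMI of adjacent word pairs; higher values indicate more cohesive sequences."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    return {
        (w1, w2): math.log2((c / (n - 1)) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        for (w1, w2), c in bigrams.items()
    }

tokens = "the results suggest that the results indicate that this suggests that".split()
scores = bigram_pmi(tokens)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```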

Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure

Procedia PDF Downloads 137
1303 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands compared to stationary calculations that do not take the building's thermal mass into account. Software used for these dynamic simulations relies on methods based on analytical models, since numerical models are insufficient for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating the CTFs covered by this research are the Laplace method and the State-Space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and the shorter sampling times used for the calculation. The algorithms for both the Laplace and State-Space methods are implemented in Mathematica, and the results are compared to the results from EnergyPlus and TRNSYS, since these programs use similar algorithms for the calculation of the building's energy demand. This research aims to check the efficiency of the Laplace and State-Space methods for calculating the building's energy demand for heavyweight building elements and shorter sampling times, and it also provides the means for improving the algorithms used by these methods. As the reference point for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
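
A minimal sketch of the finite-difference reference mentioned above: explicit 1D transient conduction through a single homogeneous heavyweight layer, returning the inside-surface heat flux density that CTF results can be checked against. Material properties and boundary temperatures are placeholder assumptions.

```python
# Explicit 1D finite-difference conduction through a homogeneous heavyweight layer.
# Material data, thickness, and boundary temperatures are placeholders for illustration.
import numpy as np

k, rho, cp = 1.4, 2200.0, 880.0         # W/mK, kg/m3, J/kgK (concrete-like layer, assumed)
L, nx = 0.20, 41                        # wall thickness (m), number of nodes
dx = L / (nx - 1)
alpha = k / (rho * cp)                  # thermal diffusivity
dt = 0.4 * dx**2 / alpha                # stable explicit time step (Fourier number 0.4 <= 0.5)

T = np.full(nx, 20.0)                   # initial temperature (degrees C)
T_in, T_out = 20.0, 0.0                 # prescribed inside and outside surface temperatures

for _ in range(int(3600 * 24 / dt)):    # march through one day
    T[0], T[-1] = T_in, T_out
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

q_inside = -k * (T[1] - T[0]) / dx      # boundary heat flux density at the inside surface (W/m2)
print(q_inside)
```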

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 98