Search results for: momentum augmented fama & french five-factor model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17486

12506 Balancing and Synchronization Control of a Two Wheel Inverted Pendulum Vehicle

Authors: Shiuh-Jer Huang, Shin-Ham Lee, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum (TWIP) vehicle was built with two hub DC motors for motion-control evaluation. An Arduino Nano microprocessor serves as the control kernel of this electric test plant, and built-in accelerometer and gyroscope sensors measure the tilt angle and angular velocity of the vehicle. Because the TWIP exhibits a significant hub-motor dead zone and nonlinear system dynamics, it is difficult to control with a traditional model-based controller. An intelligent, model-free fuzzy sliding mode controller (FSMC) was therefore employed as the main control algorithm, and intelligent controllers were designed for TWIP balance control and two-wheel synchronization control.
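As a rough illustration of the control idea only (not the authors' FSMC: the fuzzy inference is replaced here by a crude three-level gain schedule, the pendulum is linearized, and all parameters are assumed), a sliding-mode balance loop can be sketched as:

```python
import numpy as np

def fuzzy_gain(s, gains=(2.0, 6.0, 12.0), cuts=(0.1, 0.5)):
    # crude fuzzy-style schedule: the farther the state is from the
    # sliding surface, the larger the switching gain
    a = abs(s)
    if a < cuts[0]:
        return gains[0]
    if a < cuts[1]:
        return gains[1]
    return gains[2]

def simulate(theta0=0.3, dt=0.001, t_end=3.0, lam=5.0, phi=0.05, g_over_l=9.81):
    """Sliding-mode balance control of a linearized inverted pendulum:
    theta'' = (g/l)*theta + u, sliding surface s = lam*theta + theta'."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        s = lam * theta + omega
        K = fuzzy_gain(s)
        # equivalent control cancels the model terms; the switching term
        # (saturated within a boundary layer phi to limit chattering) drives s -> 0
        u = -g_over_l * theta - lam * omega - K * np.clip(s / phi, -1.0, 1.0)
        alpha = g_over_l * theta + u
        omega += alpha * dt
        theta += omega * dt
    return theta, omega

theta_f, omega_f = simulate()
```

With these assumed gains the tilt angle settles close to zero within the simulated 3 s horizon.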

Keywords: balance control, synchronization control, two-wheel inverted pendulum, TWIP

Procedia PDF Downloads 386
12505 Numerical Investigation of a Supersonic Ejector for Refrigeration System

Authors: Karima Megdouli, Bourhan Taschtouch

Abstract:

Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing Mach number and flow-velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector, and the results are compared with those obtained by CFD. The ejector is optimized by minimizing the exergy destruction caused by irreversibility and shock waves. The optimization converges to an efficient optimum, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.
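The isentropic area-Mach relation that underlies such supersonic nozzle design can be sketched as follows (standard compressible-flow theory, not the paper's MOC design code; γ = 1.4 is assumed):

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic A/A* as a function of Mach number M."""
    g = gamma
    term = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M * M)
    return (1.0 / M) * term ** ((g + 1.0) / (2.0 * (g - 1.0)))

def supersonic_mach(ar, gamma=1.4, tol=1e-10):
    """Invert A/A* on the supersonic branch (M > 1) by bisection."""
    lo, hi = 1.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

M_exit = supersonic_mach(2.0)   # exit Mach for an area ratio of 2
```

For an area ratio of 2 and γ = 1.4 this gives an exit Mach number of about 2.2.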

Keywords: supersonic ejector, theoretical model, CFD, optimization, performance

Procedia PDF Downloads 73
12504 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory

Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi

Abstract:

The purpose of this paper is to exploit compressed sensing (CS) method in order to model and compress the electrocardiogram (ECG) signals at a high compression ratio. In order to obtain a sparse representation of the ECG signals, first a suitable basis matrix with Gaussian kernels, which are shown to nicely fit the ECG signals, is constructed. Then the sparse model is extracted by applying some optimization technique. Finally, the CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also done to prove the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal and the Mean Square Error (MSE) is calculated to evaluate the precision of the proposed compression method.
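A minimal sketch of the pipeline described above, with an assumed Gaussian-kernel width and atom spacing, a synthetic ECG-like signal in place of real recordings, and orthogonal matching pursuit as the greedy reconstruction step, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128                                   # signal length

# Gaussian-kernel dictionary: one kernel every 4 samples (assumed spacing/width)
centers = np.arange(0, n, 4)
t = np.arange(n)
D = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 2.0) ** 2)
D /= np.linalg.norm(D, axis=0)

# synthetic "ECG-like" signal: a few well-separated Gaussian bumps
x_true = np.zeros(len(centers))
x_true[[2, 10, 18, 26]] = [1.0, 1.4, 0.8, 1.2]
signal = D @ x_true

# compressed measurements through a random sensing matrix (48 of 128 samples)
m = 48
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ signal
A = Phi @ D

def omp(A, y, k):
    """Greedy orthogonal matching pursuit with k iterations."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

recon = D @ omp(A, y, 4)
rel_mse = np.mean((recon - signal) ** 2) / np.mean(signal ** 2)
```

On this noiseless toy signal the greedy reconstruction recovers the sparse coefficients almost exactly.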

Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation

Procedia PDF Downloads 460
12503 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data, named 'Genetic algorithm initialized rough set based fuzzy K-Modes for categorical data'. It amalgamates the simple K-modes algorithm, the rough and fuzzy set based K-modes, and a genetic algorithm into a new algorithm which, we hypothesise, provides better centroid-model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of the modes are done by genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.
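For orientation, the plain k-modes core that the proposed method builds on can be sketched as follows (the GA initialization and rough/fuzzy membership layers are omitted, and the initial modes are passed in explicitly for this toy example):

```python
import numpy as np
from collections import Counter

def k_modes(X, init_modes, n_iter=10):
    """Plain k-modes: Hamming distance to modes, mode update by
    per-attribute majority. The paper's variant replaces the given
    initialization with a GA and the hard assignment with rough/fuzzy
    memberships."""
    X = np.asarray(X, dtype=object)
    modes = np.asarray(init_modes, dtype=object)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Hamming (mismatch-count) distance of every row to every mode
        dist = np.array([[np.sum(row != mode) for mode in modes] for row in X])
        labels = dist.argmin(axis=1)
        # update each mode to the most frequent value per attribute
        for j in range(len(modes)):
            members = X[labels == j]
            if len(members):
                modes[j] = [Counter(col).most_common(1)[0][0]
                            for col in members.T]
    return labels, modes

X = [["a", "x"], ["a", "x"], ["a", "y"],
     ["b", "z"], ["b", "z"], ["b", "w"]]
labels, modes = k_modes(X, init_modes=[X[0], X[3]])
```

On this toy table the two obvious categorical clusters are separated after a single pass.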

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 241
12502 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data alone is enough to obtain high accuracy, and that a simple algorithm like linear regression performs similarly to more sophisticated models while taking less time and fewer resources to implement.
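A minimal sketch of such a lagged linear-regression baseline (with an assumed lag of 3 and a synthetic price series in place of the paper's dataset) might look like:

```python
import numpy as np

def fit_lagged_linear(prices, lag=3):
    """Least-squares model: predict the next price from the previous
    `lag` prices plus an intercept."""
    p = np.asarray(prices, float)
    X = np.column_stack([p[i:len(p) - lag + i] for i in range(lag)])
    X = np.column_stack([X, np.ones(len(X))])      # intercept column
    y = p[lag:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(prices, w, lag=3):
    x = np.append(np.asarray(prices, float)[-lag:], 1.0)
    return float(x @ w)

# sanity check on a deterministic linear trend: price_t = 100 + 2t
prices = [100.0 + 2.0 * t for t in range(20)]
w = fit_lagged_linear(prices)
pred = predict_next(prices, w)   # should continue the trend
```

On the deterministic trend the fit is exact and the model extrapolates the next price (140) correctly.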

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 181
12501 Binary Logistic Regression Model in Predicting the Employability of Senior High School Graduates

Authors: Cromwell F. Gopo, Joy L. Picar

Abstract:

This study aimed to predict the employability of senior high school graduates for S.Y. 2018-2019 in the Davao del Norte Division through a quantitative research design using descriptive and predictive approaches on the indicated parameters, namely gender, school type, academics, academic award receipt, skills, values, and strand. The respondents were the 33 secondary schools offering senior high school programs, identified through simple random sampling, which yielded 1,530 cases of graduates' secondary data analyzed using frequency, percentage, mean, standard deviation, and binary logistic regression. Results showed that the majority of the senior high school graduates who came from large schools were female. Further, fewer than half of these graduates received an academic award in any semester. In general, the graduates' performance in academics, skills, and values was proficient. Moreover, fewer than half of the graduates were unemployed, and those who were employed were mostly contractual, casual, or part-time workers, dominated by GAS graduates. The predictors of employability were gender and the Information and Communications Technology (ICT) strand, while the remaining variables did not add significantly to the model. The null hypothesis was rejected, as the coefficients of the predictors in the binary logistic regression equation did not take the value of 0. After utilizing the model, it was concluded that Technical-Vocational-Livelihood (TVL) graduates, except those in ICT, had greater estimates of employability.
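The binary logistic regression at the heart of such an analysis can be sketched with a plain gradient-descent fit (the toy data below are synthetic stand-ins, not the study's graduate records):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Binary logistic regression fitted by plain gradient descent
    (no regularization)."""
    Xb = np.column_stack([np.asarray(X, float), np.ones(len(X))])  # intercept
    w = np.zeros(Xb.shape[1])
    y = np.asarray(y, float)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # cross-entropy gradient
    return w

def predict_proba(X, w):
    Xb = np.column_stack([np.asarray(X, float), np.ones(len(X))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# toy employability-style data: one numeric feature, binary outcome
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w = fit_logistic(X, y)
p = predict_proba(X, w)
```

On this separable toy sample, the fitted probabilities are low for the 0-labeled cases and high for the 1-labeled cases.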

Keywords: employability, senior high school graduates, Davao del Norte, Philippines

Procedia PDF Downloads 144
12500 Mathematical Modeling and Analysis of COVID-19 Pandemic

Authors: Thomas Wetere

Abstract:

Background: Coronavirus disease 2019 (COVID-19) is a severe infectious disease with highly transmissible variants and has become a global public health threat. It has taken the lives of more than 4 million people so far. What makes the disease especially serious is that no specific effective treatment is available and its dynamics are not yet well researched or understood. Methodology: Ending the global COVID-19 pandemic requires implementing multiple population-wide strategies, including vaccination, accounting for environmental factors, government action, testing, and contact tracing. In this article, a new mathematical model incorporating both temperature and government action has been developed and comprehensively analysed to study the dynamics of the COVID-19 pandemic. The model considers eight stages of infection: susceptible (S), infected asymptomatic and undetected (IAU), infected asymptomatic and detected (IAD), infected symptomatic and undetected (ISU), infected symptomatic and detected (ISD), hospitalized or threatened (H), recovered (R), and died (D). Results: The existence and non-negativity of the solution to the model are verified, and the basic reproduction number is calculated. Stability conditions are also checked, and simulation results are compared with real data. The results demonstrate that effective government action will need to be combined with vaccination to end the ongoing pandemic. Conclusion: Vaccination and government action are crucial measures for controlling the COVID-19 pandemic. As the cost of vaccination might be high, we recommend optimal control to reduce the cost and the number of infected individuals. Moreover, to prevent the pandemic, the analysis of the model suggests that the government must strictly manage and carry out its COVID-19 policy.
This, in turn, helps health campaigning and raises health literacy, which plays a role in controlling the quick spread of the disease. We strongly believe that our study will play its own role in the current effort to control the pandemic.
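A heavily reduced sketch of such a compartmental model (five compartments instead of the paper's eight, Euler integration, and assumed transition rates) can illustrate the mechanics of comparing an uncontrolled scenario with one under strong intervention:

```python
def simulate(beta, days=300, dt=0.1, N=1_000_000):
    """Euler integration of a reduced S-I-H-R-D sketch. The rates below
    (recovery, hospitalization, death, discharge) are assumed, not the
    paper's calibrated values."""
    S, I, H, R, D = N - 100.0, 100.0, 0.0, 0.0, 0.0
    gamma, eta, mu, rho = 0.10, 0.02, 0.01, 0.08
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N          # mass-action transmission
        dS = -new_inf
        dI = new_inf - (gamma + eta) * I
        dH = eta * I - (rho + mu) * H
        dR = gamma * I + rho * H
        dD = mu * H
        S += dS * dt; I += dI * dt; H += dH * dt; R += dR * dt; D += dD * dt
    return S, I, H, R, D

uncontrolled = simulate(beta=0.30)   # R0 = 0.30 / 0.12 = 2.5
controlled = simulate(beta=0.05)     # strong intervention, R0 < 1
```

The total population is conserved by construction, and lowering the transmission rate below the epidemic threshold sharply reduces the final death count.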

Keywords: modeling, COVID-19, MCMC, stability

Procedia PDF Downloads 104
12499 Well-Being in the Workplace: Do Christian Leaders Behave Differently?

Authors: Mariateresa Torchia, Helene Cristini, Hannele Kauppinen

Abstract:

Leadership plays a vital role in organizations. Leaders provide direction and facilitate the processes that enable organizations to achieve their goals and objectives. However, while productivity and financial objectives are often given the greatest emphasis, leaders are also responsible for instituting the standards of ethical conduct and moral values that guide the behavior of employees. Leader behaviors such as support, empowerment, and high-quality relationships with employees might not only prevent stress but also improve employees' coping with stress, thereby contributing to their affective well-being. Stemming from Girard's mimetic theory, this study aims at understanding how leaders can foster well-being in organizations. To do so, we explore the role leaders play in conflict management, resentment management, and the dissipation of negative emotions. Furthermore, we examine whether and to what extent religiosity affects the way leaders act with respect to employees' well-being. Given that organizational values are crucial to ethical behavior, and that a firm's values may be steeled by a deep sense of spirituality and religious identification, there is a need to take a closer look at the role religion and spirituality play in shaping how leaders affect employees' well-being. Religion might thus work as an overarching logic that provides a set of principles guiding leaders' everyday practices and relations with employees. We answer our research questions using a qualitative approach, interviewing 27 Christian leaders, members of the Christian Entrepreneurs and Leaders Association (EDC), a non-profit organization created in 1926 that includes 3,000 French Christian leaders and entrepreneurs. Our results show that well-being can have different meanings depending on company type, size, culture, and country.
Moreover, the values and beliefs of leaders influence the way they see and foster well-being among employees. Leaders can have either a positive or a negative impact on well-being: on the one hand, they can increase well-being in the company; on the other, they can be a source of resentment and conflict among employees. Finally, we observed that the interviewed Christian leaders possess characteristics that are sometimes missing in leaders generally: humility, a reluctance to compare themselves with others, an effort to be coherent with their values and beliefs, an interest in the common good rather than personal interest, facing tougher dilemmas, and undertaking the firm collectively. These leaders believe that the common good should come before personal interest; in other words, not only should short-term profit not guide strategic decisions, but leaders should also feel responsible for their employees' well-being. Last but not least, the study is not an apologia for Christianity; rather, it discusses the implications of these values through the lens of Girard's mimetic theory, for both theory and practice.

Keywords: Christian leaders, employees well-being, leadership, mimetic theory

Procedia PDF Downloads 118
12498 LaPEA: Language for Preprocessing of Edge Applications in Smart Factory

Authors: Masaki Sakai, Tsuyoshi Nakajima, Kazuya Takahashi

Abstract:

To improve the productivity of a factory, it is common to create an inference model by collecting and analyzing operational data off-line and then to develop an edge application (EAP) that evaluates the quality of the products or diagnoses machine faults in real time. To accelerate this development cycle, an edge application framework for the smart factory is proposed, which makes it possible to create and modify EAPs based on prepared inference models. In the framework, the preprocessing component is the key part that makes it work. This paper proposes a language for the preprocessing of edge applications, called LaPEA, which can flexibly process several streams of sensor data from machines into explanatory variables for an inference model, and proves that it meets the requirements for the preprocessing.

Keywords: edge application framework, edgecross, preprocessing language, smart factory

Procedia PDF Downloads 137
12497 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation

Authors: Sherif D. El Wakil, John Rice

Abstract:

The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can therefore be considered a reliable tool that can help gain better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved employing the visioplasticity technique, where a circular mesh (each circle was 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
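For comparison, the closed-form elastic (Kirsch) solution for the tangential stress around a circular hole in an infinite plate under uniaxial tension, which gives the well-known elastic stress concentration factor of 3 before any plastic deformation sets in, can be evaluated as follows (the plastic-range factors studied in the paper differ from this elastic baseline):

```python
import math

def kirsch_sigma_theta(sigma, a, r, theta):
    """Tangential stress near a circular hole of radius a in an infinite
    elastic plate under remote uniaxial tension sigma (Kirsch solution);
    theta is measured from the loading axis."""
    k2 = (a / r) ** 2
    return ((sigma / 2.0) * (1.0 + k2)
            - (sigma / 2.0) * (1.0 + 3.0 * k2 * k2) * math.cos(2.0 * theta))

# at the hole edge, perpendicular to the load: sigma_theta = 3*sigma (K = 3)
edge_peak = kirsch_sigma_theta(1.0, 1.0, 1.0, math.pi / 2)
# at the hole edge, along the load: sigma_theta = -sigma (compression)
edge_axis = kirsch_sigma_theta(1.0, 1.0, 1.0, 0.0)
```

Far from the hole the tangential stress recovers the remote value, confirming the localized nature of the concentration.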

Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity

Procedia PDF Downloads 315
12496 Proposing of an Adaptable Land Readjustment Model for Developing of the Informal Settlements in Kabul City

Authors: Habibi Said Mustafa, Hiroko Ono

Abstract:

Since 2006, Afghanistan has been dealing with one of the most dramatic trends of urban movement in its history: cities and towns are expanding in size and number. Kabul, the capital of Afghanistan, is among the fastest-growing cities in Asia. The influx of returnees from neighboring countries and from other provinces of Afghanistan caused a high rate of artificial growth, through which slums increased. As an unwanted consequence of this growth, informal settlements today cover a vast portion of the city. Land Readjustment (LR) has proved to be an important tool for developing informal settlements and reorganizing urban areas, but its implementation always varies from country to country and from region to region within countries. Consequently, to successfully develop the informal settlements in Kabul, we need to define an Afghan model of LR that incorporates all the factors related to the socio-economic condition of the country. For this purpose, a part of the old city of Kabul, located near the Central Business District (CBD), was selected as the study area. After further analysis incorporating all the needed factors, the result shows positive potential for the implementation of an adaptable land readjustment model for Kabul city that is more sustainable and socio-economically friendly. It will enhance quality of life and provide better urban services for the residents. Moreover, it will set a vision and criteria by which sustainable developments can proceed in other, similar informal settlements of Kabul.

Keywords: adaptation, informal settlements, Kabul, land readjustment, preservation

Procedia PDF Downloads 196
12495 Factors Related to Employee Adherence to Rules in Kuwait Business Organizations

Authors: Ali Muhammad

Abstract:

The purpose of this study is to develop a theoretical framework which demonstrates the effect of four personal factors on employees' rule-following behavior in Kuwaiti business organizations. The model suggested in this study includes organizational citizenship behavior, affective organizational commitment, organizational trust, and procedural justice as possible predictors of rule-following behavior. The study also attempts to compare the effects of the suggested factors on employees' rule-following behavior. The new model will, hopefully, extend previous research by adding new variables to the models used to explain employees' rule-following behavior. A discussion of issues related to rule-following behavior is presented, as well as recommendations for future research.

Keywords: employee adherence to rules, organizational justice, organizational commitment, organizational citizenship behavior

Procedia PDF Downloads 451
12494 Detection of Flood Prone Areas Using Multi Criteria Evaluation, Geographical Information Systems and Fuzzy Logic. The Ardas Basin Case

Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios

Abstract:

The severity of extreme phenomena is due to their ability to cause severe damage in a small amount of time. Floods have been observed to affect the greatest number of people and cause the biggest damage of all annual natural disasters. The detection of potential flood-prone areas is one of the fundamental components of the European natural disaster management policy, directly connected to European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic, and multi-criteria evaluation methods so that the most vulnerable areas can be defined. Ten factors related to the geophysical, morphological, climatological/meteorological, and hydrological characteristics of the basin were selected. Two models were then created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model combined fuzzy logic with GIS, and a corresponding map was created. The application area of these methodologies was the Ardas basin, due to the frequent and significant floods that have taken place there in recent years. The results were then compared with already-observed floods. The analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the preferable model to apply to other areas. These results can contribute to the delineation of high-risk areas and to the creation of successful flood management plans.
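The AHP weighting step of the first model can be sketched as follows (principal-eigenvector weights with Saaty's consistency ratio; the pairwise-comparison values below are illustrative, not the paper's):

```python
import numpy as np

def ahp_weights(pairwise):
    """Factor weights from a Saaty pairwise-comparison matrix via the
    principal eigenvector, plus the consistency ratio CR = CI / RI."""
    A = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(A)
    i = int(np.argmax(vals.real))
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                                 # normalize weights to 1
    n = len(A)
    ci = (vals.real[i] - n) / (n - 1)            # consistency index
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    return w, ci / random_index

# a perfectly consistent 3x3 matrix built from true weights 0.5 : 0.3 : 0.2
true_w = np.array([0.5, 0.3, 0.2])
A = true_w[:, None] / true_w[None, :]            # A[i, j] = w_i / w_j
w, cr = ahp_weights(A)
```

For a perfectly consistent matrix the recovered weights match the generating ratios and the consistency ratio is zero; in practice a CR below 0.1 is the usual acceptance threshold.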

Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system

Procedia PDF Downloads 372
12493 Implication of Fractal Kinetics and Diffusion Limited Reaction on Biomass Hydrolysis

Authors: Sibashish Baksi, Ujjaini Sarkar, Sudeshna Saha

Abstract:

In the present study, hydrolysis of Pinus roxburghii wood powder was carried out with Viscozyme, and the kinetics of the hydrolysis was investigated. Finely ground sawdust was submerged in a 2% aqueous peroxide solution (pH = 11.5) and pretreated through autoclaving, probe sonication, and alkaline peroxide pretreatment. Afterward, the pretreated material was subjected to hydrolysis. A series of experiments was executed with delignified biomass (50 g/l) and varying enzyme concentrations (24.2-60.5 g/l). In the present study, 14.32 g/l of glucose, along with 7.35 g/l of xylose, was recovered with a Viscozyme concentration of 48.8 g/l, and this was treated as the optimum condition. Additionally, thermal deactivation of Viscozyme was investigated and found to decrease gradually with escalated enzyme loading, from 48.4 g/l (dissociation constant = 0.05 h⁻¹) to 60.5 g/l (dissociation constant = 0.02 h⁻¹). The hydrolysis is a pseudo-first-order reaction, and therefore its rate can be expressed as a fractal-like kinetic equation relating the product concentration to the hydrolysis time t. The value of the rate constant K increases from 0.008 to 0.017 as the enzyme concentration is raised from 24.2 g/l to 60.5 g/l. A greater value of K is associated with a stronger enzyme-binding capacity of the substrate mass, and an escalated concentration of supplied enzyme ensures improved interaction with more substrate molecules, resulting in enhanced depolymerization of the polymeric sugar chains per unit time, which eventually modifies the physicochemical structure of the biomass. All fractal dimensions are between 0 and 1; the lower the fractal dimension, the more easily the biomass is hydrolyzed. With increased enzyme concentration from 24.2 g/l to 48.4 g/l, the fractal dimension decreases from 0.1 to 0.044.
This indicates that the presence of more enzyme molecules can hydrolyze the substrate more easily. However, an increased value was observed with a further increment of the enzyme concentration to 60.5 g/l because of diffusional limitation. It is evident that the hydrolysis reaction system is a heterogeneous organization, and the product formation rate depends strongly on the enzyme diffusion resistances caused by the rate-limiting structures of the substrate-enzyme complex. The value of the diffusion rate constant increases from 1.061 to 2.610 as the enzyme concentration rises from 24.2 to 48.4 g/l. As this rate constant is proportional to Fick's diffusion coefficient, it can be assumed that at a higher enzyme concentration, a larger amount of enzyme mass dM diffuses into the substrate through the surface dF per unit time dt; a higher rate-constant value is therefore associated with faster diffusion of the enzyme into the substrate. Regression analysis of the time curves at various enzyme concentrations shows that the diffusion resistance constant increases from 0.3 to 0.51 for the first two enzyme concentrations and decreases again at the enzyme concentration of 60.5 g/l, since on a differential scale the enzyme also experiences greater resistance when a larger mass dM diffuses through dF in dt.
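One common fractal-like kinetic form takes a time-dependent rate coefficient k(t) = k0·t^(−h), so that −ln(1 − C/C∞) = k0/(1−h)·t^(1−h), which is linear in log-log coordinates; a sketch of fitting k0 and the fractal exponent h from synthetic data (assumed parameter values, not the study's measurements) is:

```python
import numpy as np

def fractal_fit(t, conc, c_inf):
    """Fit k0 and fractal exponent h assuming k(t) = k0 * t**(-h), i.e.
    -ln(1 - C/C_inf) = (k0 / (1 - h)) * t**(1 - h), which is a straight
    line of slope (1 - h) in log-log coordinates."""
    y = -np.log(1.0 - np.asarray(conc) / c_inf)
    slope, intercept = np.polyfit(np.log(np.asarray(t)), np.log(y), 1)
    h = 1.0 - slope
    k0 = np.exp(intercept) * (1.0 - h)
    return k0, h

# synthetic concentration curve with known (assumed) parameters
k0_true, h_true, c_inf = 0.15, 0.3, 14.32
t = np.linspace(0.5, 20.0, 40)
conc = c_inf * (1.0 - np.exp(-k0_true / (1.0 - h_true) * t ** (1.0 - h_true)))
k0_est, h_est = fractal_fit(t, conc, c_inf)
```

On noiseless synthetic data the log-log regression recovers both parameters exactly; with measured concentrations, the fitted h plays the role of the fractal dimension discussed above.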

Keywords: viscozyme, glucose, fractal kinetics, thermal deactivation

Procedia PDF Downloads 108
12492 A Game-Theory-Based Price-Optimization Algorithm for the Simulation of Markets Using Agent-Based Modelling

Authors: Juan Manuel Sanchez-Cartas, Gonzalo Leon

Abstract:

A price-competition algorithm for agent-based models (ABMs), based on game-theory principles, is proposed to deal with the simulation of theoretical market models. The algorithm is applied to the classical Hotelling model and to a two-sided market model to show that it leads to the optimal behavior predicted by the theoretical models. When the theoretical models fail to predict the equilibrium, the algorithm is still capable of reaching a feasible outcome. The results highlight that the algorithm can be implemented in other simulation models to guarantee rational users and endogenous optimal behaviors. It can also be applied as a verification tool, given that it is theoretically grounded.
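For the classical Hotelling case, the price-adjustment idea can be sketched as iterated best responses converging to the known equilibrium p* = c + t (a textbook reduction, not the paper's full ABM algorithm):

```python
def hotelling_prices(t=1.0, c=0.5, n_iter=60):
    """Iterated best responses in the Hotelling linear city with firms at
    the endpoints, unit transport cost t and marginal cost c:
    BR_i(p_j) = (p_j + c + t) / 2, with fixed point p* = c + t."""
    p1 = p2 = 0.0
    for _ in range(n_iter):
        # both firms best-respond simultaneously to last period's prices
        p1, p2 = (p2 + c + t) / 2.0, (p1 + c + t) / 2.0
    return p1, p2

p1, p2 = hotelling_prices()   # converges to p* = c + t = 1.5
```

Because each best response is a contraction (slope 1/2), the iteration converges geometrically to the theoretical equilibrium from any starting prices, which is the behavior the paper's algorithm generalizes to settings where no closed-form equilibrium exists.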

Keywords: agent-based models, algorithmic game theory, multi-sided markets, price optimization

Procedia PDF Downloads 448
12491 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model

Authors: Adel A. Ghobbar, Varun Raman

Abstract:

Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. Train operators and the maintenance, repair, and overhaul subsidiary of the Dutch railway company are also facing this problem: a considerable portion of the maintenance activities carried out by the company are unscheduled, which severely stresses and stretches the available workforce and resources. One possible solution is a robust preventive maintenance plan; another is to plan maintenance based on real-time data obtained from sensor-based health and usage monitoring systems. The former is investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem that also affects the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and would, in turn, validate the soundness of the model developed here. The distinct areas addressed in this paper include the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance; delay time modelling can support maintenance planning for a given asset. The model was run using MATLAB, and the results show that the ideal inspection interval computed from a minimal-cost perspective was 29 days, while from a minimum-downtime perspective it was 14 days.
A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. The choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 euros, and the corresponding value of b(T) was 0.011. These values ensure that the risk associated with component X being maintained at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime, and risk.
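The delay-time cost trade-off can be sketched as follows, with b(T) the probability that a defect becomes a breakdown before the next inspection; all rates and costs below are assumed placeholders, not the study's calibrated values, so the resulting interval will differ from the reported 29 days:

```python
import math

def breakdown_fraction(T, mu):
    """b(T): share of defects whose delay time expires before the next
    inspection, assuming exponential delay times with rate mu and defects
    arriving uniformly over the interval [0, T]."""
    return 1.0 - (1.0 - math.exp(-mu * T)) / (mu * T)

def cost_per_day(T, lam=0.05, mu=0.08, c_insp=30.0, c_rep=60.0, c_break=250.0):
    """Expected maintenance cost per day for inspection interval T (days):
    inspection cost amortized over T, plus defects arriving at rate lam
    that end as breakdowns (fraction b) or planned repairs (fraction 1-b)."""
    b = breakdown_fraction(T, mu)
    return (c_insp + lam * T * (b * c_break + (1.0 - b) * c_rep)) / T

# grid search for the minimal-cost inspection interval
best_T = min(range(1, 121), key=cost_per_day)
```

Short intervals waste inspection cost while long intervals let too many defects become breakdowns, so the cost curve has an interior minimum, which is exactly the structure exploited in the paper's cost and downtime optimizations.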

Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability

Procedia PDF Downloads 130
12490 Economic Development Impacts of Connected and Automated Vehicles (CAV)

Authors: Rimon Rafiah

Abstract:

This paper presents a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment, and one for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts of transportation investments is becoming more widely recognized around the world; examples include the UK's Wider Economic Benefits (WEB) model, economic impact assessments in the USA, and various input-output models. The economic impact model used here is based on WEB and rests on the following premise: investments in transportation reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing workers' cost of travel to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms for a given localized area and then translated into additional employment based on regional labor-supply elasticity. This additional employment was conservatively assumed to be at minimum-wage levels and translated into GDP terms, and from there into direct taxation (i.e., an increase in tax taken by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. CAV usage can increase capacity by a variety of means: increased automation (Levels I through IV) and increased penetration, which several forecasts predict will reach 50% by 2030, with possible full conversion by 2045-2050. Several countries have passed policies and/or legislation ending sales of gasoline-powered vehicles starting in 2030 or later. Supply was measured via the increased capacity of given infrastructure as a function of both CAV penetration and the implemented technologies.
The CAV model, as implemented in the USA, has shown significant savings in travel time and in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth, and salaries. The models also have policy implications and can be adapted for use in Japan.
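The WEB-style chain described above can be sketched as simple arithmetic (the elasticity, wage, and tax-rate values below are illustrative assumptions, not the paper's calibrated figures):

```python
def web_impact(baseline_jobs, travel_cost_saving_pct,
               labor_supply_elasticity=0.1, annual_min_wage=25_000.0,
               tax_rate=0.2):
    """WEB-style chain: % travel-cost saving -> additional employment via
    labor-supply elasticity -> GDP valued at minimum-wage level -> direct
    tax take. All default parameters are assumed placeholders."""
    extra_jobs = (baseline_jobs * labor_supply_elasticity
                  * travel_cost_saving_pct / 100.0)
    gdp_gain = extra_jobs * annual_min_wage      # conservative GDP valuation
    tax_gain = gdp_gain * tax_rate               # direct taxation increase
    return extra_jobs, gdp_gain, tax_gain

# e.g. a 5% travel-cost saving in an area with 100,000 baseline jobs
jobs, gdp, tax = web_impact(baseline_jobs=100_000, travel_cost_saving_pct=5.0)
```

Under these assumptions a 5% travel-cost saving yields 500 additional jobs, a GDP gain of 12.5 million, and a direct tax gain of 2.5 million in the same currency units as the wage.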

Keywords: CAV, economic development, WEB, transport economics

Procedia PDF Downloads 70
12489 Bringing German History to Tourists

Authors: Gudrun Görlitz, Christian Schölzel, Alexander Vollmar

Abstract:

Sites of Jewish Life in Berlin 1933-1945. Between Persecution and Self-assertion” was realized in a project funded by the European Regional Development Fund. A smartphone app, and a associated web site enable tourists and other participants of this educational offer to learn in a serious way more about the life of Jews in the German capital during the Nazi era. Texts, photos, video and audio recordings communicate the historical content. Interactive maps (both current and historical) make it possible to use predefined or self combined routes. One of the manifold challenges was to create a broad ranged guide, in which all detailed information are well linked with each other. This enables heterogeneous groups of potential users to find a wide range of specific information, corresponding with their particular wishes and interests. The multitude of potential ways to navigate through the diversified information causes (hopefully) the users to utilize app and web site for a second or third time and with a continued interest. Therefore 90 locations, a lot of them situated in Berlin’s city centre, have been chosen. For all of them text-, picture and/or audio/video material gives extensive information. Suggested combinations of several of these “site stories” are leading to the offer of detailed excursion routes. Events and biographies are also presented. A few of the implemented biographies are especially enriched with source material concerning the aspect of (forced) migration of these persons during the Nazi time. All this was done in a close and fruitful interdisciplinary cooperation of computer scientists and historians. The suggested conference paper aims to show the challenges shaping complex source material for practical use by different user-groups in a proper technical and didactic way. 
Based on historical research in archives, museums, libraries, and digital resources, the paper focuses on the following historiographical and technical aspects: - Shaping the text material didactically for use in new media, especially a smartphone app running on differing platforms; - Geo-referencing the sites on historical and current map material; - Overlaying old and new maps to present and find the sites; - Using Augmented Reality technologies to re-visualize destroyed buildings; - Visualization of black-and-white picture material; - Presentation of historical footage and the resulting storage-space problems; - Financial and juridical aspects of gaining the copyrights to present archival material.

Keywords: smartphone app, history, tourists, German

Procedia PDF Downloads 371
12488 Modeling and System Identification of a Variable Excited Linear Direct Drive

Authors: Heiko Weiß, Andreas Meister, Christoph Ament, Nils Dreifke

Abstract:

Linear actuators are deployed in a wide range of applications. This paper presents the modeling and system identification of a variable excited linear direct drive (LDD). The LDD is designed based on linear hybrid stepper technology, exhibiting the characteristic tooth structure of mover and stator. A three-phase topology provides the thrust force, caused by alternating strengthening and weakening of the flux in the legs. To achieve the best possible synchronous operation, the phases are commutated sinusoidally. Although these LDDs provide high dynamics and drive forces, noise emission limits their operation in calm workspaces. To overcome this drawback, an additional excitation of the magnetic circuit is introduced to the LDD, using enabling coils instead of permanent magnets. This new degree of freedom can be used to reduce force variations and the related noise by varying the excitation flux that is usually generated by permanent magnets. Hence, an identified simulation model is necessary to analyze the effects of this modification. The force variations in particular must be modeled accurately in order to reduce them sufficiently. The model can be divided into three parts: the current dynamics, the mechanics, and the force functions. These subsystems are described with differential equations or nonlinear analytic functions, respectively. Ordinary nonlinear differential equations are derived and transformed into state space representation. Experiments have been carried out on a test rig to identify the system parameters of the complete model. Static and dynamic simulation-based optimizations are utilized for identification. The results are verified in the time and frequency domains. Finally, the identified model provides a basis for the later design of control strategies to reduce the existing force variations.
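The three-part model structure described above (current dynamics, mechanics, force functions) can be sketched as coupled first-order differential equations integrated with forward Euler. This is a minimal illustration only, not the identified model from the paper; all parameter values (resistance, inductance, mass, damping, force constant, drive voltage) are assumed for demonstration.

```python
# Illustrative sketch of the three-part LDD model: current dynamics,
# mechanics, and a force function. All parameter values are assumed
# for demonstration only, not identified values from the paper.

def simulate_ldd(steps=1000, dt=1e-4):
    r, l_ind = 1.5, 2e-3   # assumed coil resistance [Ohm] and inductance [H]
    m, d = 0.8, 5.0        # assumed mover mass [kg] and damping [Ns/m]
    k_f = 20.0             # assumed force constant [N/A]
    u = 12.0               # constant drive voltage [V]
    i = x = v = 0.0        # state: current, position, velocity
    for _ in range(steps):
        di = (u - r * i) / l_ind   # current dynamics (back-EMF neglected)
        force = k_f * i            # simplified linear force function
        a = (force - d * v) / m    # mechanics: Newton's second law
        i += dt * di
        v += dt * a
        x += dt * v
    return i, x, v
```

In a full model, the constant force function would be replaced by the identified nonlinear analytic force maps and the coupling to the back-EMF restored.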

Keywords: force variations, linear direct drive, modeling and system identification, variable excitation flux

Procedia PDF Downloads 368
12487 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
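As a rough sketch of the approach described above, the snippet below trains a tiny feed-forward network by optimizing its weights with a standard PSO loop on small synthetic "cost" data. The network size (2-3-1), the PSO coefficients, and the data are illustrative assumptions, not the study's configuration.

```python
import math
import random

def ann_predict(w, x):
    # 2-3-1 network: w packs 3 hidden neurons (weights + bias) and the output layer
    h = [math.tanh(w[3 * j] * x[0] + w[3 * j + 1] * x[1] + w[3 * j + 2])
         for j in range(3)]
    return sum(w[9 + j] * h[j] for j in range(3)) + w[12]

def pso_train(data, n_particles=30, iters=200, seed=1):
    # standard PSO: inertia 0.7, cognitive/social coefficients 1.5 (assumed)
    rng = random.Random(seed)
    dim = 13

    def mse(w):
        return sum((ann_predict(w, x) - y) ** 2 for x, y in data) / len(data)

    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [mse(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = mse(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], e
                if e < gbest_err:
                    gbest, gbest_err = pos[i][:], e
    return gbest, gbest_err
```

In the study itself the feature set is far richer (cost estimates, resource allocation, project progress) and the fitness would be a validation-set error rather than the training MSE used here.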

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 50
12486 Arithmetic Operations in Deterministic P Systems Based on the Weak Rule Priority

Authors: Chinedu Peter, Dashrath Singh

Abstract:

Membrane computing is a computability model which abstracts its structures and functions from the biological cell. The main ingredient of membrane computing is the notion of a membrane structure, which consists of several cell-like membranes recurrently placed inside a unique skin membrane. The emergence of several variants of membrane computing gives rise to the notion of a P system. This paper presents a variant of P systems for arithmetic operations on non-negative integers, based on weak priorities for rule application. Consequently, we obtain deterministic P systems. Two membranes suffice. There are at most four objects for multiplication and five objects for division throughout the computation process. The model is simple and has potential for extension to integers, and to real numbers in general.
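The weak-priority semantics can be illustrated with a small multiset-rewriting engine: in each step the rules are tried in priority order, and a lower-priority rule may still fire on the objects left over after all higher-priority rules have been applied maximally. The engine and the division rules below are a hypothetical illustration of that semantics, not the exact rule sets of the paper.

```python
def step(multiset, rules):
    # rules: list of (lhs, rhs) multisets, ordered from highest to lowest
    # priority. Weak priority: each rule is applied a maximal number of
    # times to the objects still available after higher-priority rules fired.
    changed = False
    for lhs, rhs in rules:
        times = min(multiset.get(obj, 0) // k for obj, k in lhs.items())
        if times > 0:
            changed = True
            for obj, k in lhs.items():
                multiset[obj] -= k * times
            for obj, k in rhs.items():
                multiset[obj] = multiset.get(obj, 0) + k * times
    return changed

def divide(n, m):
    # division as maximally parallel rewriting: a^m -> q (quotient) has
    # priority over a -> r (remainder), so leftover a's become the remainder
    ms = {"a": n}
    rules = [({"a": m}, {"q": 1}), ({"a": 1}, {"r": 1})]
    while step(ms, rules):
        pass
    return ms.get("q", 0), ms.get("r", 0)
```

Because each step is fully determined by the priority order and maximal application, the computation is deterministic, mirroring the determinism claimed for the paper's systems.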

Keywords: P system, binary operation, determinism, weak rule priority

Procedia PDF Downloads 442
12485 Reallocation of Bed Capacity in a Hospital Combining Discrete Event Simulation and Integer Linear Programming

Authors: Muhammed Ordu, Eren Demir, Chris Tofallis

Abstract:

The number of inpatient admissions in the UK has increased significantly over the past decade. These increases cause bed occupancy rates to exceed the target level (85%) set by the Department of Health in England. Hospital service managers are therefore struggling to manage key resources such as beds. Moreover, this severe demand pressure can lead to confusion in wards; for example, patients may be admitted to the ward of another inpatient specialty due to a lack of beds. This study aims to develop a simulation-optimization model to reallocate the available beds in a mid-sized hospital in the UK. A hospital simulation model was developed to capture the stochastic behaviour of the hospital, taking into account the accident and emergency department, all outpatient and inpatient services, and the interactions between them. Several outputs of the simulation model (e.g., average length of stay and revenue) were generated as inputs for the optimization model. An integer linear programming model was developed under a number of constraints (financial, demand, target bed occupancy rate, and staffing level) with the aim of maximizing the number of admitted patients. In addition, a sensitivity analysis was carried out to account for unexpected increases in inpatient demand over the next 12 months. The approach proposed in this study optimally reallocates the available beds across the inpatient specialties and reveals that 74 beds are idle. The findings also indicate that the hospital wards will be able to cope with at most a 14% demand increase in the projected year. In conclusion, this paper sheds new light on how best to reallocate beds in order to cope with current and future demand for healthcare services.
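The bed-reallocation optimization can be sketched as a small integer program. Since the paper's full constraint set (financial, occupancy, staffing) is not reproduced here, the toy below maximizes admitted patients subject only to a total-bed constraint and solves by exhaustive enumeration; the demand and per-bed throughput figures are hypothetical.

```python
def reallocate(total_beds, demand, patients_per_bed):
    # demand[i]: projected annual admissions demand per specialty
    # patients_per_bed[i]: admissions one bed can serve per year
    # (roughly 365 / average length of stay x occupancy target)
    n = len(demand)
    best, best_alloc = -1, None

    def rec(i, remaining, alloc):
        nonlocal best, best_alloc
        if i == n - 1:
            a = alloc + [remaining]  # last specialty takes what is left
            served = sum(min(demand[j], a[j] * patients_per_bed[j])
                         for j in range(n))
            if served > best:
                best, best_alloc = served, a
            return
        for b in range(remaining + 1):
            rec(i + 1, remaining - b, alloc + [b])

    rec(0, total_beds, [])
    return best_alloc, best
```

A production model would hand the same objective and constraints to an ILP solver and feed `demand` and `patients_per_bed` from the simulation outputs rather than fixed numbers.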

Keywords: bed occupancy rate, bed reallocation, discrete event simulation, inpatient admissions, integer linear programming, projected usage

Procedia PDF Downloads 138
12484 Towards a Resources Provisioning for Dynamic Workflows in the Cloud

Authors: Fairouz Fakhfakh, Hatem Hadj Kacem, Ahmed Hadj Kacem

Abstract:

Cloud computing offers a new model of service provisioning for workflow applications, thanks to its elasticity and its pay-per-use model. However, it presents various challenges that need to be addressed in order to be utilized efficiently. The resource provisioning problem for workflow applications has been widely studied. Nevertheless, existing works did not consider changes to workflow instances while they are being executed. This capability has become a major requirement for dealing with unusual situations and evolution. This paper presents a first step towards resource provisioning for dynamic workflows. We propose a provisioning algorithm which minimizes the overall workflow execution cost while meeting a deadline constraint, and then extend it to support the dynamic addition of tasks. Experimental results show that our proposed heuristic achieves a significant reduction in resource cost by using a consolidation process.
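A minimal sketch of the cost-minimizing idea with consolidation: tasks are packed first-fit onto already-leased instances whenever the instance's finish time still respects the deadline, and cost is billed in whole periods. The task lengths, billing period, and price below are illustrative assumptions, not the paper's algorithm.

```python
def provision(tasks, deadline, period=60, price=1.0):
    # Greedy consolidation: place each task (longest first) on the first
    # leased instance whose total busy time stays within the deadline;
    # lease a new instance only when no existing one fits.
    instances = []  # total busy time per leased instance
    for t in sorted(tasks, reverse=True):
        for k, busy in enumerate(instances):
            if busy + t <= deadline:
                instances[k] = busy + t  # reuse: consolidation step
                break
        else:
            if t > deadline:
                raise ValueError("task cannot meet the deadline")
            instances.append(t)
    # billing in whole periods: ceil(busy / period) per instance
    cost = sum(-(-busy // period) * price for busy in instances)
    return cost, instances
```

Dynamically added tasks can be handled by calling the same placement loop on the new tasks against the current `instances` state, which is the spirit of the extension described above.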

Keywords: cloud computing, resources provisioning, dynamic workflow, workflow applications

Procedia PDF Downloads 282
12483 Photovoltaic-Driven Thermochemical Storage for Cooling Applications to Be Integrated in Polynesian Microgrids: Concept and Efficiency Study

Authors: Franco Ferrucci, Driss Stitou, Pascal Ortega, Franck Lucas

Abstract:

The energy situation in tropical insular regions, such as the French Polynesian islands, presents a number of challenges, including a high dependence on imported fuel, high transport costs from the mainland, and weak electricity grids. On the other hand, these regions have a variety of renewable energy resources, which favor the exploitation of smart microgrids and energy storage technologies. With regard to electrical energy demand, the high temperatures in these regions throughout the year mean that a large proportion of consumption is used for cooling buildings, even during the evening hours. In this context, this paper presents an air conditioning system driven by photovoltaic (PV) electricity that combines a refrigeration system with a thermochemical storage process. Thermochemical processes are able to store energy in the form of chemical potential with virtually no losses, and this energy can be used to produce cooling during the evening hours without the need to run a compressor (thus no electricity is required). Such storage processes implement thermochemical reactors in which a reversible chemical reaction between a solid compound and a gas takes place. The solid/gas pair used in this study is BaCl2 reacting with ammonia (NH3), which is also the coolant fluid in the refrigeration circuit. In the proposed system, the PV-driven electric compressor is used during the daytime either to run the refrigeration circuit when a cooling demand occurs or, when no cooling is needed, to decompose the ammonia-charged salt and remove the gas from the thermochemical reactor. During the evening, when there is no electricity from the solar source, the system changes its configuration: the reactor reabsorbs the ammonia gas from the evaporator and produces the cooling effect. In comparison to classical PV-driven air conditioning units equipped with electrochemical batteries (e.g., Pb, Li-ion), the proposed system has the advantage of a novel storage technology with a much longer charge/discharge life cycle and no self-discharge. It also allows continuous operation of the electric compressor during the daytime, thus avoiding the problems associated with on-off cycling. This work focuses on the system concept and on the efficiency study of its main components. It also compares thermochemical with electrochemical storage, as well as with other forms of thermal storage such as latent heat (ice) and sensible heat (chilled water). The preliminary results show that the system is a promising alternative to simultaneously fulfill cooling and energy storage needs in tropical insular regions.
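The operating window of a solid/gas sorption pair follows a van't Hoff (Clausius-Clapeyron) equilibrium line; the sketch below computes the equilibrium temperature of the salt at a given NH3 pressure. The reaction enthalpy and entropy used here are placeholder values chosen only to give a plausible order of magnitude; they are assumptions, not measured BaCl2/NH3 data.

```python
import math

R_GAS = 8.314  # universal gas constant [J/(mol*K)]

def equilibrium_temperature(p_bar, dh=38.0e3, ds=123.0):
    # van't Hoff line for a solid/gas reaction: ln(p/p0) = -dH/(R*T) + dS/R,
    # solved for T at pressure p (reference p0 = 1 bar).
    # dh [J/mol NH3] and ds [J/(mol*K)] are assumed placeholder values,
    # not measured BaCl2/NH3 reaction data.
    return dh / (ds - R_GAS * math.log(p_bar))
```

In system design, the gap between this salt equilibrium line and the NH3 liquid/vapor line at the evaporator pressure sets the useful temperature lift of the storage.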

Keywords: microgrid, solar air-conditioning, solid/gas sorption, thermochemical storage, tropical and insular regions

Procedia PDF Downloads 234
12482 Nearly Zero Energy Building: Analysis on How End-Users Affect Energy Savings Targets

Authors: Margarida Plana

Abstract:

One of the most important energy challenges of European policy is the transition to a Nearly Zero Energy Building (NZEB) model. An NZEB is a new concept of building that aims to reduce both energy consumption and carbon emissions to nearly zero over the course of a year. To achieve this nearly zero consumption, apart from having high efficiency levels, the building must produce the energy it consumes on-site. This paper presents the results of an analysis developed on the basis of real project data in order to quantify the impact of end-user behavior. The analysis focuses on how the behavior of a building's occupants can affect the achievement of the energy savings targets and how this effect can be limited. The results show that, in this kind of project with very high energy performance, it is necessary to limit end-user interaction with the system operation in order to reach the fixed targets.

Keywords: end-users impacts, energy efficiency, energy savings, NZEB model

Procedia PDF Downloads 368
12481 Digital Structural Monitoring Tools @ADaPT for Cracks Initiation and Growth due to Mechanical Damage Mechanism

Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman

Abstract:

Conventional structural health monitoring for mechanical equipment uses inspection data from Non-Destructive Testing (NDT) during plant shutdown windows, together with fitness-for-service evaluation, to estimate the integrity of equipment that is prone to crack damage. Yet this forecast is fraught with uncertainty, because it is often based on assumptions about future operational parameters, and the prediction is neither continuous nor online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses Acoustic Emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle cracked equipment before it ruptures and causes an unscheduled shutdown of the facility. ADaPT employs process historical data trending, finite element analysis, fitness-for-service assessment, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life until fracture. ADaPT was first deployed at a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The tool predicted the initiation of a crack at the top weldment area by April 2019; during the shutdown window in April 2019, a crack was indeed discovered and repaired. Furthermore, ADaPT allowed the plant owner to run at full capacity and improve output by up to 7% until April 2019. ADaPT was also used on a coke drum that had extensive fatigue cracking. The initial cracks were declared safe by ADaPT, with remaining crack lifetimes of another five (5) months, just in time for another planned facility downtime to execute repairs. The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage, improving maintenance planning and avoiding costly immediate shutdowns for repair.
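Remaining-life prognosis for a growing crack is commonly based on integrating a Paris-law crack growth rate up to a critical crack size. The sketch below is a generic illustration of that idea under constant-amplitude loading, not ADaPT's proprietary stochastic model; all material constants, the stress range, and the crack sizes are assumed values.

```python
import math

def paris_remaining_cycles(a0, a_crit, c=1e-11, m=3.0, dsigma=80.0, y=1.12,
                           n=10000):
    # Paris law: da/dN = C * (dK)^m, with dK = Y * dsigma * sqrt(pi * a).
    # Crack size a is in metres and dsigma in MPa, so dK is in MPa*sqrt(m);
    # C is chosen consistently (assumed illustrative values).
    # Trapezoidal integration of dN = da / (C * dK^m) from a0 to a_crit.
    da = (a_crit - a0) / n
    cycles, a = 0.0, a0
    for _ in range(n):
        dk_lo = y * dsigma * math.sqrt(math.pi * a)
        dk_hi = y * dsigma * math.sqrt(math.pi * (a + da))
        rate = 0.5 * (c * dk_lo ** m + c * dk_hi ** m)  # mean growth rate
        cycles += da / rate
        a += da
    return cycles
```

A prognostic tool would re-run such an integration continuously as AE measurements update the current crack size `a0` and as operating data update the effective stress range.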

Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model

Procedia PDF Downloads 71
12480 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes increasingly difficult to handle. In the current study, plant history data with multiple variables was successfully treated by principal component analysis to detect abnormal process behavior in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed that the control limits successfully detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to conventional techniques that analyze one variable at a time, the proposed model detects a process failure on-line using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring both of process stream composition and of final product quality. Defining the normal operating conditions of the process supports reliable decision-making in the control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended to apply the multivariate process control and monitoring procedures separately for the major components and for the impurities. Principal component analysis may be utilized not only to control the content of major elements in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities.
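The monitoring scheme can be reduced to a short sketch: fit a one-component PCA on training data, then flag samples whose squared score or squared residual (SPE) exceed limits derived from the training set. Pure-Python power iteration keeps it self-contained; the data, the single-component choice, and the simple max-based limits are illustrative assumptions rather than the study's setup.

```python
def fit_pca_monitor(X, margin=3.0):
    # Mean-centre the data, extract the first principal component by power
    # iteration on the covariance matrix, and derive control limits for the
    # squared score and the squared residual (SPE) from the training set.
    n, p = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(p)]
    xc = [[row[j] - mean[j] for j in range(p)] for row in X]
    cov = [[sum(r[a] * r[b] for r in xc) / (n - 1) for b in range(p)]
           for a in range(p)]
    v = [1.0] * p
    for _ in range(200):  # power iteration -> dominant eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]

    def stats(row):
        c = [row[j] - mean[j] for j in range(p)]
        t = sum(c[j] * v[j] for j in range(p))               # score on PC1
        spe = sum((c[j] - t * v[j]) ** 2 for j in range(p))  # squared residual
        return t * t, spe

    t2s, spes = zip(*(stats(r) for r in X))
    lim_t2, lim_spe = margin * max(t2s), margin * max(spes)

    def is_abnormal(row):
        t2, spe = stats(row)
        return t2 > lim_t2 or spe > lim_spe

    return is_abnormal
```

An industrial implementation would retain several components, use statistically derived limits (e.g. percentile- or distribution-based) instead of a max-times-margin heuristic, and stream analyzer readings through the returned check.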

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 304
12479 Trusted Neural Network: Reversibility in Neural Networks for Network Integrity Verification

Authors: Malgorzata Schwab, Ashis Kumer Biswas

Abstract:

In this concept paper, we explore the topic of reversibility in neural networks leveraged for network integrity verification, and coin the term “Trusted Neural Network” (TNN), paired with an API abstraction around it, to embrace the idea formally. This newly proposed, high-level, generalizable TNN model builds upon the Invertible Neural Network architecture, trained simultaneously in both the forward and reverse directions. This allows the original system inputs to be compared with the ones reconstructed from the outputs in the reversed flow, in order to assess the integrity of the end-to-end inference flow. The outcome of that assessment is captured as an Integrity Score. Concrete implementations reflecting the needs of specific problem domains can be derived from this general approach, as demonstrated in the experiments. The model aspires to become a useful practice in drafting high-level system architectures which incorporate AI capabilities.
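The integrity idea can be shown with the smallest invertible building block, a RealNVP-style coupling step: outputs are inverted back to inputs, and the reconstruction error is mapped to an Integrity Score in (0, 1]. The scale/translation functions and the score mapping below are hypothetical choices for illustration, not the paper's API.

```python
import math

def _s(u):  # scale sub-network (toy stand-in for a learned function)
    return 0.5 * math.tanh(u)

def _t(u):  # translation sub-network (toy stand-in for a learned function)
    return 0.25 * u

def forward(x1, x2):
    # invertible coupling step: x1 passes through, x2 is scaled and shifted
    return x1, x2 * math.exp(_s(x1)) + _t(x1)

def inverse(y1, y2):
    # exact analytic inverse of the coupling step
    return y1, (y2 - _t(y1)) * math.exp(-_s(y1))

def integrity_score(x, y_observed):
    # Reconstruct the input from the observed output and score the match:
    # 1.0 means a perfect round trip; lower values indicate that the
    # observed output is inconsistent with the claimed input.
    r1, r2 = inverse(*y_observed)
    err = math.hypot(x[0] - r1, x[1] - r2)
    return 1.0 / (1.0 + err)
```

Stacking several such coupling steps (alternating which coordinate passes through) gives a full invertible network while keeping the same score computation.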

Keywords: trusted, neural, invertible, API

Procedia PDF Downloads 141
12478 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder under the usual core conductor assumptions, where membrane potential is conserved at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the squid's exergy loss and entropy generation are nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss, and entropy generation. Entropy generation and exergy loss vary not only between vertebrates and invertebrates but also within the same class. 
In addition, the Na+ ion load of a single action potential, the metabolic energy utilization, and their thermodynamic aspects are evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation differ in each model depending on the variations in ion transport along the channels.
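Two quantities used above can be sketched numerically: the 3/2 power rule that makes symmetric branching collapsible into an equivalent cylinder, and the electrotonic length L = l/λ. The membrane and axial resistivity values below are generic textbook-order assumptions, not the parameters of the four animal models.

```python
import math

def rall_rule_holds(d_parent, d_children, tol=1e-3):
    # Rall's 3/2 power law: d_parent^(3/2) equals the sum of d_child^(3/2)
    # at each branching point of the equivalent-cylinder reduction
    return abs(d_parent ** 1.5 - sum(d ** 1.5 for d in d_children)) < tol

def electrotonic_length(length_cm, diam_cm, rm=2000.0, ri=70.0):
    # L = l / lambda with the space constant lambda = sqrt(Rm * d / (4 * Ri));
    # rm [ohm*cm^2] and ri [ohm*cm] are assumed generic values
    lam = math.sqrt(rm * diam_cm / (4.0 * ri))
    return length_cm / lam
```

For a 4 µm dendrite with these assumed resistivities, a physical length of about half a millimetre already gives L near 1, consistent with the 0.1-1.5 range analyzed above.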

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 390
12477 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The productivity of a blast furnace and the quality of the hot metal produced depend significantly on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that forms inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better-shaped softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. A predictive model of the softening-melting zone profile can therefore be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion of the burden at G Blast Furnace disturbed furnace stability. During such circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss in the lower zone of the furnace; fuel demand increased, and a huge production loss was incurred. Visibility of the softening-melting zone profile was therefore necessary in order to proactively optimize the process parameters and operate the furnace smoothly. Using stave temperatures, a model was developed that predicts the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse-V shape, and poorly when it had a W shape. This model helped to control heat loss, optimize burden distribution, and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm.
Details of the process have been discussed in this paper.
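A toy version of the shape classification above: given a profile of the melting-zone height (or a stave-temperature proxy) sampled across the furnace radius, count the interior peaks; one peak suggests an inverse-V profile, two suggest a W. The sampling and peak criterion are hypothetical illustrations, since the actual stave-temperature model is not public.

```python
def zone_shape(profile):
    # profile: zone-height (or stave-temperature proxy) values sampled
    # across the furnace radius at one level; hypothetical preprocessing
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]
    if len(peaks) == 1:
        return "inverse-V"
    if len(peaks) >= 2:
        return "W"
    return "indeterminate"
```

A real implementation would first smooth noisy stave readings and apply a peak-prominence threshold before classifying the profile.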

Keywords: agglomerate, blast furnace, permeability, softening-melting

Procedia PDF Downloads 246