Search results for: MI edema model

12350 The Performance Improvement of Solar Aided Power Generation System by Introducing the Second Solar Field

Authors: Junjie Wu, Hongjuan Hou, Eric Hu, Yongping Yang

Abstract:

Solar aided power generation (SAPG) technology has been proven an efficient way to use solar energy for power generation. In an SAPG plant, a solar field consisting of parabolic solar collectors is normally used to supply solar heat that displaces the high-pressure/temperature extraction steam. To understand the performance of such a plant, a new simulation model was recently developed by the authors, in which, unlike previous models, the boiler is treated as a series of heat exchangers. Simulations with the new model showed that the outlet properties of the reheated steam, e.g. its temperature, decrease when solar heat is introduced. These changes make the (lower stage) turbines work under off-design conditions, so the whole plant's performance may not be optimal. In this paper, a second solar field is proposed to increase the inlet temperature of the steam to be reheated, in order to bring the outlet temperature of the reheated steam back to the design condition. A 600 MW SAPG plant was simulated as a case study using the new model to understand the impact of the second solar field on plant performance. The study found that the second solar field would improve the plant's cycle efficiency and solar-to-electricity efficiency by 1.91% and 6.01%, respectively. The solar-generated electricity produced per aperture area under the design condition was 187.96 W/m2, which was 26.14% higher than the previous design.

Keywords: solar-aided power generation system, off-design performance, coal-saving performance, boiler modelling, integration schemes

Procedia PDF Downloads 290
12349 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter

Authors: Wesley Preßler, Lucie Schmidt

Abstract:

Digital transformation plays an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process, and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors digital literacy, interpretive patterns and technology acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. These are not mere assessments, prejudices, or stereotyped perceptions but collective patterns, rules, attributions of meaning and the cultural repertoire from which such opinions and attitudes arise. Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values about technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, the willingness of individuals to adopt and use new technologies, is another crucial factor for successful digital transformation. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns in measuring neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.

Keywords: digital transformation, interdisciplinary, maturity model, neighborhood

Procedia PDF Downloads 77
12348 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness

Authors: Melissa Angus Baboun

Abstract:

Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engine of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g. fall on deaf ears), (2) categorizing the content of the articles into models of disability discourse (the medical, socio-cultural, and supercrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait. The most common idiomatic expression found was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the supercrip model. The additional findings include two instances of the term deaf and mute, an overwhelming use of the lowercase d in the term deaf, and the misuse of the term translator (to mean interpreter).

Keywords: deafness, disability, news coverage, Caribbean newspapers

Procedia PDF Downloads 233
12347 Theoretical Approach for Estimating Transfer Length of Prestressing Strand in Pretensioned Concrete Members

Authors: Sun-Jin Han, Deuck Hang Lee, Hyo-Eun Joo, Hyun Kang, Kang Su Kim

Abstract:

In pretensioned concrete members, there exists a transfer length region in which the stress in the prestressing strand is developed through the bond mechanism with the surrounding concrete. The stress of the strands in the transfer length zone is smaller than the so-called effective prestress reached in the strain plateau zone; therefore, the web-shear strength in the transfer length region is also smaller than that in the strain plateau zone. Although the transfer length is a key factor in shear design, only a few analytical studies have investigated it. Therefore, in this study, a theoretical approach was used to estimate the transfer length. The bond stress developed between the strands and the surrounding concrete was quantitatively calculated using the Thick-Walled Cylinder Model (TWCM), and on this basis the transfer length of the strands was calculated. To verify the proposed model, a total of 209 test results were collected from previous studies. The analysis results showed that the main factors influencing the transfer length are the compressive strength of the concrete, the concrete cover thickness, the diameter of the prestressing strand, and the magnitude of the initial prestress. In addition, the proposed model predicted the transfer length of the collected test specimens with high accuracy. Acknowledgement: This research was supported by a grant (17TBIP-C125047-01) from the Technology Business Innovation Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
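
For orientation, a widely used design-code approximation that reflects the same influences of effective prestress and strand diameter is the ACI 318 transfer-length expression; this is a simplified benchmark, not the authors' TWCM-based model:

```latex
% ACI 318 approximation (inch-pound units): effective prestress f_se in ksi,
% strand diameter d_b in inches; often further simplified to l_t = 50 d_b.
\ell_t \approx \frac{f_{se}}{3}\, d_b
```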

Keywords: bond, Hoyer effect, prestressed concrete, prestressing strand, transfer length

Procedia PDF Downloads 295
12346 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue for planning wind power generation facilities. Accurate wind speed prediction allows a good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, Mexico, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, at ten-minute intervals. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations: one to three hidden layers, each hidden layer containing from 1 to 10 neurons (10 + 100 + 1,000 = 1,110 configurations). Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
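
A minimal sketch of the described configuration sweep, using scikit-learn. Note that the authors trained with Levenberg-Marquardt backpropagation (common in MATLAB); scikit-learn does not provide it, so 'lbfgs' stands in here. The CSV file and column names are illustrative assumptions.

```python
import itertools
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("pachuca_weather_10min.csv")        # hypothetical file
X = df[["air_temp", "pressure", "wind_dir"]].values  # assumed column names
y = df["wind_speed"].values

# keep temporal order: no shuffling of the 10-minute records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

results = []
for depth in (1, 2, 3):                              # one to three hidden layers
    for sizes in itertools.product(range(1, 11), repeat=depth):  # 1..10 neurons each
        model = MLPRegressor(hidden_layer_sizes=sizes, solver="lbfgs",
                             max_iter=2000, random_state=0).fit(X_tr, y_tr)
        pred = model.predict(X_te)
        results.append((sizes, r2_score(y_te, pred),
                        mean_squared_error(y_te, pred) ** 0.5))

# 10 + 100 + 1000 = 1110 fits; shrink the ranges above for a quick trial
best = max(results, key=lambda r: r[1])              # highest r² wins
print(f"best architecture {best[0]}: r2={best[1]:.4f}, RMSE={best[2]:.4f}")
```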

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 124
12345 The Influence of the Diameter of the Flow Conduits on the Rheological Behavior of a Non-Newtonian Fluid

Authors: Hacina Abchiche, Mounir Mellal, Imene Bouchelkia

Abstract:

Knowledge of the rheological behavior of the products used in different fields is essential, both for numerical simulation and for understanding the phenomena involved during the flow of these products. Fluids presenting non-linear behavior represent an important category of materials used in the food-processing, chemical, pharmaceutical and oil industries. The issue is that rheological characterization with a classical rheometer cannot simulate, or take into consideration, the different parameters affecting a complex fluid flow in real conditions. The main objectives of this study are to investigate the influence of the diameter of the flow conduit, or pipe, on the rheological behavior of a non-Newtonian fluid and to propose a mathematical model linking the rheological parameters and the conduit diameter. For this purpose, we have developed an experimental system based on the principle of a capillary rheometer.
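
A minimal sketch of how capillary-rheometer data for a given conduit diameter can be reduced to power-law parameters, using the standard wall shear stress and Rabinowitsch-corrected shear rate relations. The pressure drops and flow rates below are placeholder values, not measurements from this study.

```python
import numpy as np

def power_law_fit(dP, Q, D, L):
    """Return consistency K and flow index n from capillary data."""
    tau_w = dP * D / (4.0 * L)              # wall shear stress, Pa
    gamma_app = 32.0 * Q / (np.pi * D**3)   # apparent (Newtonian) shear rate, 1/s
    n, _ = np.polyfit(np.log(gamma_app), np.log(tau_w), 1)   # slope = flow index
    gamma_true = (3 * n + 1) / (4 * n) * gamma_app           # Rabinowitsch correction
    K = np.exp(np.mean(np.log(tau_w) - n * np.log(gamma_true)))
    return K, n

dP = np.array([1.2e5, 2.1e5, 3.5e5])   # placeholder pressure drops, Pa
Q = np.array([1e-6, 3e-6, 8e-6])       # placeholder flow rates, m^3/s
K, n = power_law_fit(dP, Q, D=2e-3, L=0.5)
print(f"K = {K:.3g} Pa.s^n, n = {n:.3f}")
```

Repeating the fit for several diameters D gives the data needed to model how the apparent parameters drift with conduit size.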

Keywords: rhéologie, non-Newtonian fluids, experimental stady, mathematical model, cylindrical conducts

Procedia PDF Downloads 290
12344 Investigation of the Progressive Collapse Potential in Steel Buildings with Composite Floor System

Authors: Pouya Kaafi, Gholamreza Ghodrati Amiri

Abstract:

Abnormal loads due to natural events, implementation errors and other issues can lead to progressive collapse in structures. Most past research relies on 2-dimensional (2D) models of steel frames that neglect the floor system, which reduces the accuracy of the modeling; employing a 3-dimensional (3D) model that includes the concrete floor slabs plays a crucial role in progressive collapse evaluation. In this research, a 3D finite element model of a 5-story steel building is built in the ABAQUS software twice, once with the slabs modeled and once without them, and the progressive collapse potential is evaluated. The results of the analyses indicate that neglecting the slabs can lead to inaccuracy in assessing the progressive failure potential of the structure.

Keywords: abnormal loads, composite floor system, intermediate steel moment resisting frame system, progressive collapse

Procedia PDF Downloads 456
12343 A Human Centered Design of an Exoskeleton Using Multibody Simulation

Authors: Sebastian Kölbl, Thomas Reitmaier, Mathias Hartmann

Abstract:

Trial-and-error approaches to adapting wearable support structures to human physiology are time consuming and elaborate. During preliminary design, however, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multi-body simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. To this end, different motion data have been gathered and processed to perform a musculoskeletal analysis. The motion data are ground reaction forces, electromyography (EMG) data and human motion data recorded with a marker-based motion capture system. Based on the experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The results of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, to be compared with admissible values with regard to human physiology. Furthermore, it provides feedback for the sizing of the exoskeleton structure in terms of resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach for the setup and validation of the modeling strategy is presented, and the potential for a more time- and cost-effective development of wearable support structures is outlined.
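
A toy inverse-dynamics sketch in the spirit of the approach above: given a captured joint trajectory, compute the joint moment for a single rigid link, with and without an assumed constant exoskeleton support torque. The segment parameters and trajectory are illustrative, not values from the study's dummy model.

```python
import numpy as np

t = np.linspace(0, 2, 200)                           # s
theta = 0.6 * np.sin(np.pi * t)                      # captured joint angle, rad
dt = t[1] - t[0]
theta_dd = np.gradient(np.gradient(theta, dt), dt)   # angular acceleration, rad/s^2

m, l_c, I = 4.0, 0.25, 0.15       # segment mass (kg), COM offset (m), inertia (kg.m^2)
g = 9.81
tau_human = I * theta_dd + m * g * l_c * np.sin(theta)   # required joint moment, N.m

tau_exo = 5.0                                        # assumed assist torque, N.m
# idealized, perfectly aligned assist: residual moment magnitude left for the human
tau_residual = np.maximum(np.abs(tau_human) - tau_exo, 0.0)

print(f"peak joint moment:          {np.abs(tau_human).max():.2f} N.m")
print(f"peak residual with assist:  {tau_residual.max():.2f} N.m")
```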

Keywords: assistive devices, ergonomic design, inverse dynamics, inverse kinematics, multibody simulation

Procedia PDF Downloads 162
12342 Pattern of Stress Distribution in Different Ligature-Wire-Brackets Systems: A FE and Experimental Analysis

Authors: Afef Dridi, Salah Mezlini

Abstract:

Since experimental devices cannot measure stress and deformation in complex structures, the Finite Element Method (FEM) has been widely used in several fields of research. One of these fields is orthodontics. The advantage of such a method is that it is accurate and non-invasive and provides sufficient data about the physiological reactions that can happen in soft tissues. Most research in this field has studied the stresses and deformations induced by orthodontic apparatus in soft (alveolar) tissues. Only a few studies have addressed the distribution of stress and strain in the orthodontic brackets themselves, and although these studies tried to be as close as possible to real conditions, their models did not reproduce clinical cases. For this reason, the model generated in our research is the closest one to reality. In this study, a numerical model was developed to explore the stress and strain distribution under realistic conditions. A comparison between different material properties was also carried out.

Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method

Procedia PDF Downloads 259
12341 Expanding the Evaluation Criteria for a Wind Turbine Performance

Authors: Ivan Balachin, Geanette Polanco, Jiang Xingliang, Hu Qin

Abstract:

The problem of global warming has raised interest in renewable energy sources. Reducing the cost of wind energy is a challenge. Before building a wind park, conditions such as average wind speed, wind direction, the duration of each wind condition, and the probability of icing must be considered in the design phase. The operating values used in the setting of control systems also depend on these variables. Here, a procedure is proposed for inclusion in the evaluation of wind turbine performance, based on the amplitude of wind changes, the number of changes and their duration. A generic study case based on actual data is presented. Data analysis techniques were applied to model the power required by the yaw system based on the amplitude and amount of wind changes. A theoretical model relating time, amplitude of wind changes and angular speed of nacelle rotation was identified.
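
A minimal sketch of the proposed idea: characterize wind-direction changes by amplitude, count, and the yaw travel they imply, and estimate the resulting yaw-system duty. The dead band, yaw rate, and yaw-drive power figures are illustrative assumptions, and the direction series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
wind_dir = np.cumsum(rng.normal(0, 2.0, 1440)) % 360   # 1-min wind direction, deg

dead_band = 8.0      # deg; controller ignores smaller misalignment (assumed)
yaw_rate = 0.5       # deg/s, typical order of magnitude (assumed)
yaw_power = 20e3     # W drawn while yawing (assumed)

nacelle = wind_dir[0]
travel, events = 0.0, 0
for d in wind_dir:
    err = (d - nacelle + 180) % 360 - 180    # signed misalignment, deg
    if abs(err) > dead_band:                 # one wind-change event triggers a yaw
        travel += abs(err)
        nacelle = d
        events += 1

yaw_time = travel / yaw_rate                 # seconds spent yawing
energy = yaw_power * yaw_time / 3.6e6        # kWh consumed by the yaw system
print(f"{events} yaw events, {travel:.0f} deg travel, {energy:.2f} kWh")
```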

Keywords: field data processing, regression determination, wind turbine performance, wind turbine placing, yaw system losses

Procedia PDF Downloads 390
12340 Non-Linear Vibration and Stability Analysis of an Axially Moving Beam with Rotating-Prismatic Joint

Authors: M. Najafi, F. Rahimi Dehgolan

Abstract:

In this paper, the dynamic model of a single-link flexible beam with a tip mass is derived using Hamilton's principle. The link undergoes rotational and translational motion, and it is assumed that the beam moves with a harmonic velocity about a constant mean velocity. Non-linearity is introduced by including the non-linear strain in the analysis. The dynamic model is obtained under the Euler-Bernoulli beam assumption using the modal expansion method. The effects of rotary inertia, axial force, and the associated boundary conditions on the dynamic model are also analyzed. Since the complex boundary value problem cannot be solved analytically, the multiple scales method is utilized to obtain an approximate solution. Finally, the effects of the non-linear term and the mean velocity on the natural frequencies and the stability of the system are discussed.
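
For reference, the standard linear transverse equation of motion for an axially moving Euler-Bernoulli beam, on top of which the paper's non-linear strain terms and boundary conditions are added, reads:

```latex
\rho A \left( \frac{\partial^{2} w}{\partial t^{2}}
            + 2 v \frac{\partial^{2} w}{\partial x\, \partial t}
            + \dot{v}\, \frac{\partial w}{\partial x}
            + v^{2} \frac{\partial^{2} w}{\partial x^{2}} \right)
  + E I \frac{\partial^{4} w}{\partial x^{4}} = 0,
\qquad v(t) = v_{0} + v_{1} \sin \Omega t
```

Here w(x, t) is the transverse deflection and v(t) is the prescribed harmonic axial velocity about the constant mean v0.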

Keywords: non-linear vibration, stability, axially moving beam, bifurcation, multiple scales method

Procedia PDF Downloads 370
12339 Recycling Service Strategy by Considering Demand-Supply Interaction

Authors: Hui-Chieh Li

Abstract:

The circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which benefits both the environment and the economy. The concept stands in contrast to a linear economy, which is a 'take, make, dispose' model of production. A well-designed reverse logistics service strategy can enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings manufacturers considerable advantages, as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. A crucial factor in optimizing a recycle service strategy is consumer demand, so this study considers the relationships between consumer demand for recycling and product characteristics, surplus value and user behavior. The study constructs models of the recycle service strategy for a recyclable waste collector, who is responsible for collecting the waste product for the manufacturer. The models determine the frequency and duration of service cycles and the vehicle type for all cycles by considering the surplus value of the recycled product, time-dependent recycle demand, transportation economies and demand-supply interaction. The study applies the binary logit model, analytical models and mathematical programming methods, attempting to minimize the total logistics cost of the recycler and maximize the recycle benefits of the manufacturer during the study period; see the sketch after this abstract. The proposed strategy differs significantly from the conventional uniform service strategy: periods with considerable demand and large surplus product value suggest frequent and short service cycles. The study also examines the impact of vehicle utilization rate on cost and profit for different vehicle sizes, relaxes the constant demand assumption, and examines how the service strategy affects consumer demand for waste recycling. The results not only help in understanding how user demand for the recycle service and the product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance, such as award bonuses and carbon emission regulations, for the government.
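
A minimal sketch of a binary logit recycling-demand model of the kind the study applies: the probability that a user returns an item in a given service cycle as a function of surplus value and waiting time. The coefficients below are illustrative placeholders, not estimates from the paper.

```python
import numpy as np

def recycle_probability(surplus_value, wait_days,
                        beta0=-1.0, beta_value=0.08, beta_wait=-0.05):
    """Binary logit: P(recycle) = 1 / (1 + exp(-utility))."""
    utility = beta0 + beta_value * surplus_value + beta_wait * wait_days
    return 1.0 / (1.0 + np.exp(-utility))

# Shorter cycles (less waiting) and higher surplus value raise participation,
# which is why high-value, high-demand periods favor frequent short cycles:
for wait in (3, 7, 14):
    print(f"{wait:2d} days wait: P(recycle) = {recycle_probability(40.0, wait):.3f}")
```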

Keywords: circular economy, consumer demand, product surplus value, recycle service strategy

Procedia PDF Downloads 392
12338 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
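
A highly simplified PyTorch skeleton of the architecture described above: a convolutional encoder, a BiLSTM stage, a sequence-mixing block standing in for Mamba, and three decoder heads (detection, P pick, S pick). The real EQMamba uses selective state-space layers (e.g. from the mamba_ssm package); the residual GRU here is only a runnable placeholder, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MambaStandIn(nn.Module):            # placeholder for a true Mamba block
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.GRU(dim, dim, batch_first=True)
    def forward(self, x):                 # x: (batch, time, dim)
        out, _ = self.mix(x)
        return out + x                    # residual connection

class EQMambaSketch(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(     # conv layers + max pooling, as described
            nn.Conv1d(3, ch, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(ch, ch, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(ch, ch, batch_first=True, bidirectional=True)
        self.mamba = MambaStandIn(2 * ch)  # applied to the BiLSTM outputs
        self.heads = nn.ModuleList(        # detection, P-pick, S-pick decoders
            [nn.Sequential(nn.Linear(2 * ch, 1), nn.Sigmoid()) for _ in range(3)]
        )
    def forward(self, x):                  # x: (batch, 3, time) 3-component waveform
        z = self.encoder(x).transpose(1, 2)
        z, _ = self.bilstm(z)
        z = self.mamba(z)
        return [h(z).squeeze(-1) for h in self.heads]

det, p_pick, s_pick = EQMambaSketch()(torch.randn(2, 3, 1024))
print(det.shape)   # (2, 256): per-step probabilities at the encoder's resolution
```

Training would then combine three binary cross-entropy losses, one per head, with task weights, as the abstract describes.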

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 52
12337 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from solution efficiency, and as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered through the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently for large problem instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effectual methods can be developed for using TWTA on the battlefield.
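
A minimal sketch of a greedy WTA baseline of the kind referenced above: at each step, assign the launcher-target pair with the largest marginal reduction in expected surviving target value. The kill probabilities and target values are illustrative, and the paper's time dimension is omitted.

```python
import numpy as np

def greedy_wta(p, value, shots_per_launcher=1):
    """p[i, j]: kill probability of launcher i against target j."""
    n_w, _ = p.shape
    survive = value.astype(float)          # expected surviving value per target
    shots = np.full(n_w, shots_per_launcher)
    assignment = []
    while shots.sum() > 0:
        gain = p * survive                 # marginal gain of each pair
        gain[shots == 0, :] = -1.0         # spent launchers are ineligible
        i, j = np.unravel_index(np.argmax(gain), gain.shape)
        if gain[i, j] <= 0:
            break
        assignment.append((i, j))
        survive[j] *= 1.0 - p[i, j]        # target j yields less further gain
        shots[i] -= 1
    return assignment, survive

p = np.array([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]])
value = np.array([10.0, 8.0])
plan, leftover = greedy_wta(p, value)
print(plan, leftover.round(2))
```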

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 336
12336 Experimental Determination of Aluminum 7075-T6 Parameters Using Stabilized Cycle Tests to Predict Thermal Ratcheting

Authors: Armin Rahmatfam, Mohammad Zehsaz, Farid Vakili Tahami, Nasser Ghassembaglou

Abstract:

In this paper, the kinematic hardening parameters C and γ and the combined isotropic/kinematic hardening parameters k, b, and Q have been obtained experimentally from monotonic and strain-controlled cyclic tests at room and elevated temperatures of 20°C, 100°C, and 400°C, with a view to predicting thermal ratcheting. These parameters are used in a non-linear combined isotropic/kinematic hardening model to better describe the loading and reloading cycles in cyclic indentation as well as thermal ratcheting. For this purpose, three groups of specimens made of Aluminum 7075-T6 have been investigated. After each test, material parameters have been obtained from the stable hysteresis cycles for use in combined non-linear isotropic/kinematic hardening models. The methodology for obtaining the correct kinematic/isotropic hardening parameters is also presented.
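
For reference, the non-linear combined hardening model in which these parameters appear is usually written in the Chaboche/Armstrong-Frederick form, with k the initial yield stress, (C, γ) governing the kinematic backstress α, and (Q, b) the isotropic expansion R of the yield surface over the accumulated plastic strain p:

```latex
d\boldsymbol{\alpha} = \tfrac{2}{3}\, C\, d\boldsymbol{\varepsilon}^{p}
                       - \gamma\, \boldsymbol{\alpha}\, dp,
\qquad
R(p) = Q \left( 1 - e^{-b p} \right),
\qquad
\sigma_{y} = k + R(p)
```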

Keywords: combined hardening model, kinematic hardening, isotropic hardening, cyclic tests

Procedia PDF Downloads 480
12335 A Parallel Poromechanics Finite Element Method (FEM) Model for Reservoir Analyses

Authors: Henrique C. C. Andrade, Ana Beatriz C. G. Silva, Fernando Luiz B. Ribeiro, Samir Maghous, Jose Claudio F. Telles, Eduardo M. R. Fairbairn

Abstract:

The present paper aims at developing a parallel computational model for numerical simulation of poromechanics analyses of heterogeneous reservoirs. In the context of macroscopic poroelastoplasticity, the hydromechanical coupling between the skeleton deformation and the fluid pressure is addressed by means of two constitutive equations. The first state equation relates the stress to skeleton strain and pore pressure, while the second state equation relates the Lagrangian porosity change to skeleton volume strain and pore pressure. A specific algorithm for local plastic integration using a tangent operator is devised. A modified Cam-clay type yield surface with associated plastic flow rule is adopted to account for both contractive and dilative behavior.
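
For reference, a common form of the two poroelastic state equations described above (Biot theory, with b the Biot coefficient and N the Biot modulus) is:

```latex
\boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon} - b\, p\, \mathbf{1},
\qquad
\phi - \phi_{0} = b\, \operatorname{tr} \boldsymbol{\varepsilon} + \frac{p}{N}
```

The first relates total stress to skeleton strain and pore pressure; the second relates the Lagrangian porosity change to skeleton volume strain and pore pressure.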

Keywords: finite element method, poromechanics, poroplasticity, reservoir analysis

Procedia PDF Downloads 391
12334 Nonparametric Estimation of Risk-Neutral Densities via Empirical Esscher Transform

Authors: Manoel Pereira, Alvaro Veiga, Camila Epprecht, Renato Costa

Abstract:

This paper introduces an empirical version of the Esscher transform for risk-neutral option pricing. Traditional parametric methods require the formulation of an explicit risk-neutral model and are operational only for a few probability distributions for the returns of the underlying. In our proposal, we make only mild assumptions on the pricing kernel and there is no need for the formulation of the risk-neutral model for the returns. First, we simulate sample paths for the returns under the physical distribution. Then, based on the empirical Esscher transform, the sample is reweighted, giving rise to a risk-neutralized sample from which derivative prices can be obtained by a weighted sum of the options pay-offs in each path. We compare our proposal with some traditional parametric pricing methods in four experiments with artificial and real data.
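
A minimal sketch of the empirical Esscher reweighting described above: simulate returns under the physical measure, tilt the sample with weights proportional to exp(theta R), pick theta so the reweighted mean gross return equals the risk-free gross return, then price by a weighted payoff average. The return distribution and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
S0, K, r = 100.0, 100.0, 0.01                  # spot, strike, one-period rate
R = rng.normal(0.03, 0.2, 100_000)             # simulated log returns (physical measure)

def weights(theta):                            # empirical Esscher tilt
    w = np.exp(theta * R)
    return w / w.sum()

def martingale_gap(theta):                     # zero at the risk-neutral theta
    return weights(theta) @ np.exp(R) - np.exp(r)

theta_star = brentq(martingale_gap, -50, 50)   # root-find the tilt parameter
w = weights(theta_star)

payoff = np.maximum(S0 * np.exp(R) - K, 0.0)   # European call payoff per path
price = np.exp(-r) * (w @ payoff)              # weighted sum of path payoffs
print(f"theta* = {theta_star:.3f}, call price = {price:.3f}")
```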

Keywords: Esscher transform, generalized autoregressive conditional heteroscedasticity (GARCH), nonparametric option pricing

Procedia PDF Downloads 489
12333 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study defines a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting and Random Forest. Predicting the trend of stock prices has recently gained a significant role in making buy and sell decisions and in generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing: training data are between 2002-06-18 and 2016-12-30, validation data between 2017-01-02 and 2019-12-31, and testing data between 2020-01-02 and 2022-03-17. We determined the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate as benchmarks to outperform, and compared our machine-learning-based portfolio return on the test data against them. We assessed model performance with the help of the ROC-AUC score and lift charts, and used Logistic Regression, Gradient Boosting and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of uptrends and downtrends in five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, consistent with the Random Walk Theory's claim that stock prices and price changes are unpredictable. Although we built several good models on the validation dataset, our model iterations failed on the test dataset. We implemented Random Forest, Gradient Boosting and Logistic Regression, and discovered that the complex models provided no advantage or additional performance compared with Logistic Regression; more complexity did not lead to better performance, and using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach did not solve the stock prediction problem, nor did it refute the accuracy of the Random Walk Theory for stock prices.
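
A minimal sketch of the chronological setup described above: a logistic regression classifying the 20-trading-day-ahead trend, with time-ordered train / validation / test splits matching the stated dates and ROC-AUC scoring. The CSV file and engineered features are illustrative placeholders for the thesis's actual predictors.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("stock_daily.csv", parse_dates=["date"])   # hypothetical file
df["target"] = (df["close"].shift(-20) > df["close"]).astype(int)  # 20-day trend label
df["ret_5"] = df["close"].pct_change(5)                     # example features
df["ret_20"] = df["close"].pct_change(20)
df = df.dropna()

train = df[df["date"] <= "2016-12-30"]                      # time-ordered splits
valid = df[(df["date"] >= "2017-01-02") & (df["date"] <= "2019-12-31")]
test = df[df["date"] >= "2020-01-02"]

feats = ["ret_5", "ret_20"]
model = LogisticRegression(max_iter=1000).fit(train[feats], train["target"])
for name, part in (("valid", valid), ("test", test)):
    auc = roc_auc_score(part["target"], model.predict_proba(part[feats])[:, 1])
    print(name, "ROC-AUC:", round(auc, 3))
```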

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 80
12332 Towards an African Model: A Survey of Social Enterprises in South Africa

Authors: Kerryn Krige, Kerrin Myers

Abstract:

Social entrepreneurship offers the opportunity to simultaneously address both social and economic inequality in South Africa. Its appeal across racial groups, its attractiveness to young people, its applicability in rural and peri-urban markets, and its acceleration in middle income, large-business economies suits the South African context. However, the potential to deliver much-needed developmental benefits has not been realised because the social entrepreneurship debate lacks evidence as to who social entrepreneurs are, their goals and operations and the socio-economic results they achieve. As a result, policy development has been stunted, and legislative barriers and red tape remain. Social entrepreneurs are isolated from the mainstream economy, and struggle to access funding because of limitations in legislative and organisational structures. The objective of the study is to strengthen the ecosystem for social entrepreneurship in South Africa by producing robust, policy-rich information from and about social enterprises currently in operation across the country. The study employs a quantitative survey methodology, using online and telephonic data collection methods. A purposive sample of 1000 social enterprises was included in the first large-scale study of social entrepreneurship in South Africa. The results offer deep insight into the characteristics of social enterprises; the activities they undertake and the markets they serve; their modes of operation and funding sources as well as key challenges and support systems. The results contribute towards developing a model of social enterprise in the African context.

Keywords: social enterprise, key characteristics, challenges and enablers, towards an African model

Procedia PDF Downloads 307
12331 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to utilize emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines learn to recognize fake emotions; the emotional data are very limited in quantity and poor in speaking variety; there is some language dependency in SER; and consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in this sequence is the speech detection issue. The paper provides a detailed description of the speech detection model, based on a fully connected deep neural network, for Kazakh and Russian. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed.
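
A minimal sketch of a fully connected speech/non-speech classifier over MFCC frames, in the spirit of the detection model described above. The layer sizes and the stand-in audio are illustrative assumptions; the network is untrained here, and the paper's Kazakh/Russian corpora are not reproduced.

```python
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load(librosa.ex("trumpet"))              # stand-in audio clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T     # (frames, 13 coefficients)

net = nn.Sequential(                                     # fully connected detector
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),                      # P(frame contains speech)
)
probs = net(torch.tensor(mfcc, dtype=torch.float32)).squeeze(-1)
speech_frames = probs > 0.5                              # random until trained
print(mfcc.shape, int(speech_frames.sum()))
```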

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 26
12330 Trigonelline: A Promising Compound for The Treatment of Alzheimer's Disease

Authors: Mai M. Farid, Ximeng Yang, Tomoharu Kuboyama, Chihiro Tohda

Abstract:

Trigonelline is a major alkaloid component derived from Trigonella foenum-graecum L. (fenugreek) and has previously been reported as a potential neuroprotective agent, especially in Alzheimer's disease (AD). However, the previous data were unclear, and the mouse models used were not well established. In the present study, the effect of trigonelline on memory function was investigated in the Alzheimer's disease transgenic mouse model 5XFAD, which overexpresses mutated APP and PS1 genes. Oral administration of trigonelline for 14 days significantly enhanced object recognition and object location memory. Plasma and cerebral cortex were isolated 30 min, 1 h, 3 h, and 6 h after oral administration of trigonelline. LC-MS/MS analysis detected trigonelline in both plasma and cortex from 30 min onward, suggesting good penetration of trigonelline into the brain. In addition, trigonelline significantly ameliorated axonal and dendritic atrophy in amyloid β-treated cortical neurons. These results suggest that trigonelline could be a promising therapeutic candidate for AD.

Keywords: Alzheimer's disease, cortical neurons, LC-MS/MS analysis, trigonelline

Procedia PDF Downloads 147
12329 Knowledge Transfer and the Translation of Technical Texts

Authors: Ahmed Alaoui

Abstract:

This paper contributes to the ongoing debate as to the relevance of translation studies to professional practitioners. It exposes the various misconceptions permeating the links between theory and practice in the translation landscape in the Arab World. It is a thesis of this paper that specialization in translation should be redefined, taking account of the fact that specialized knowledge alone is neither crucial nor sufficient in technical translation: it should be tested against the readability of the translated text, the appropriateness of its style and the usability of its content by end-users to carry out their intended tasks. The paper also proposes a preliminary model to establish a working link between theory and practice from the perspective of professional trainers and practitioners, calling for the latter to participate in the production of knowledge in a systematic fashion. While this proposal is driven by a rather intuitive conviction, a line of research is needed to specify the methodological moves that would establish the mediation strategies relating the components in the model of knowledge transfer proposed in this paper.

Keywords: knowledge transfer, misconceptions, specialized texts, translation theory, translation practice

Procedia PDF Downloads 393
12328 Dynamic of Nonlinear Duopoly Game with Heterogeneous Players

Authors: Jixiang Zhang, Yanhua Wang

Abstract:

The dynamics of a Bertrand duopoly game are analyzed, where players use different production methods and choose their prices with bounded rationality. The equilibria of the corresponding discrete dynamical system are investigated, and the stability conditions of the Nash equilibrium under a local adjustment process are studied. As some parameters of the model are varied, the stability of the Nash equilibrium gives way to complex dynamics such as cycles of higher order and chaos. On this basis, we find that an increase in the adjustment speed of a boundedly rational player can drive the Bertrand market into a chaotic state. Finally, the complex dynamics, bifurcations and chaos are illustrated by numerical simulation.
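
A minimal sketch of the bounded-rationality price adjustment analyzed above: each player moves its price in proportion to the current marginal profit. The linear demand and cost parameters are illustrative, but the qualitative picture, stable prices giving way to cycles and then chaos as the adjustment speed alpha grows, mirrors the paper's finding.

```python
import numpy as np

a, b, d, c = 10.0, 1.0, 0.5, 1.0   # demand q_i = a - b p_i + d p_j, unit cost c

def marginal_profit(p_i, p_j):
    # profit_i = (p_i - c)(a - b p_i + d p_j); derivative w.r.t. p_i:
    return a - 2 * b * p_i + d * p_j + b * c

def simulate(alpha, steps=300, p0=(2.0, 2.5)):
    p1, p2 = p0
    path = []
    for _ in range(steps):
        m1, m2 = marginal_profit(p1, p2), marginal_profit(p2, p1)
        p1, p2 = p1 + alpha * p1 * m1, p2 + alpha * p2 * m2  # simultaneous update
        p1, p2 = max(p1, 1e-3), max(p2, 1e-3)  # floor keeps the sketch in p > 0
        path.append((p1, p2))
    return np.array(path)

for alpha in (0.10, 0.15, 0.22):   # convergence, 2-cycle, irregular oscillation
    tail = simulate(alpha)[-6:, 0]
    print(f"alpha={alpha}: last prices {np.round(tail, 3)}")
```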

Keywords: Bertrand duopoly model, discrete dynamical system, heterogeneous expectations, Nash equilibrium

Procedia PDF Downloads 415
12327 Predicting National Football League (NFL) Match with Score-Based System

Authors: Marcho Setiawan Handok, Samuel S. Lemma, Abdoulaye Fofana, Naseef Mansoor

Abstract:

This paper proposes a method to predict the outcome of National Football League matches with data from 2019 to 2022 and compares it with other popular models. The model uses open-source statistical data for each team, such as passing yards, rushing yards, fumbles lost, and scoring, each with offensive and defensive components. For instance, a data set of anticipated values for a specific matchup is created by comparing the offensive passing yards gained by one team to the defensive passing yards allowed by the opposition. We evaluated the model's performance by contrasting its results with those of established prediction algorithms. This research uses a neural network to predict the score of a National Football League match and, from it, the winner of the game.
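
A minimal sketch of the matchup-feature idea described above: each expected value pairs one team's offensive statistic with the opponent's corresponding defensive statistic, and a small network maps those features to a win probability. The statistics below are placeholders, not real NFL data, and the network is untrained.

```python
import numpy as np
import torch
import torch.nn as nn

def matchup_features(off_a, def_b, off_b, def_a):
    """Average each offense against the opposing defense, per statistic."""
    a_expected = (np.asarray(off_a) + np.asarray(def_b)) / 2.0
    b_expected = (np.asarray(off_b) + np.asarray(def_a)) / 2.0
    return np.concatenate([a_expected, b_expected])

# per-game stats: passing yards, rushing yards, fumbles lost, points scored
features = matchup_features(off_a=[250, 110, 0.8, 24], def_b=[230, 120, 0.9, 22],
                            off_b=[210, 130, 1.1, 20], def_a=[240, 100, 0.7, 19])

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
p_a_wins = net(torch.tensor(features, dtype=torch.float32))
print(f"P(team A wins) = {p_a_wins.item():.2f}  (untrained, illustrative)")
```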

Keywords: game prediction, NFL, football, artificial neural network

Procedia PDF Downloads 84
12326 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network and generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling this huge quantity of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically and aperiodically depending on the traffic load, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion, and when a node is not available, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance of a device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data, and because of this dynamic nature, we do not know the actual cause of abnormal data. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality, providing several machine learning (ML), stochastic and statistical methods for analyzing stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and its impact on data quality. This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data, targeting sensors that monitor water quality. DBSCAN clustering and weather sensors are used to build the data quality model. An extensive study has been carried out on the relationship between the data of weather sensors and the sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis of the correlation between the independent data streams of the two sets of sensors is presented. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of the clusters' positions. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
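
A minimal sketch of two pieces of the described model: DBSCAN-based outlier flagging on joint weather / water-quality readings, and a Coefficient of Variation check on the resulting clusters. The synthetic correlated streams stand in for the lake and beach sensor data used in the study, and the eps/min_samples values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
air_temp = rng.normal(22, 2, 500)                      # weather sensor stream
water_temp = 0.8 * air_temp + rng.normal(0, 0.5, 500)  # correlated water stream
water_temp[::50] += 8.0                                # injected sensor faults

X = np.column_stack([air_temp, water_temp])
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X)

outliers = labels == -1                                # DBSCAN noise points
print(f"flagged {outliers.sum()} of {len(X)} readings as outliers")

for k in sorted(set(labels) - {-1}):                   # consistency per cluster
    cluster = X[labels == k, 1]
    cov = cluster.std() / cluster.mean()               # Coefficient of Variation
    print(f"cluster {k}: n={len(cluster)}, CoV={cov:.3f}")
```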

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 139
12325 A Model Suggestion on Competitiveness and Sustainability of SMEs in Developing Countries

Authors: Ahmet Diken, Tahsin Karabulut

Abstract:

The factor that developing countries need is capital. Such countries strive to increase their income in order to meet their expenses for employment, infrastructure and superstructure investments, education, health and defense. The sole income of these countries is the taxes collected from businesses, and businesses must generate profit and returns in order to be able to pay those taxes. In a world where competition exists, businesses in developing countries may follow different strategies and must specify their target markets. In order to minimize cost and maximize profit, SMEs have to concentrate on target markets and select a cost-oriented strategy. In this study, a theoretical model is suggested in which SME firms act as clusters among themselves and also serve as optimal suppliers for large-scale firms, with SME policy supported by the public sector. This relationship can help large-scale firms build global brands, and this organization increases the value added for developing countries.

Keywords: competitiveness, developing countries, SMEs, sustainability

Procedia PDF Downloads 314
12324 Finite Element Analysis of Cold Formed Steel Screwed Connections

Authors: Jikhil Joseph, S. R. Satish Kumar

Abstract:

Steel structures are commonly used for rapid erection and multistory construction due to their inherent advantages. However, the high accuracy required in detailing and the heavier sections make them difficult to erect in place and to transport. Cold-formed steel, which is specially made by reducing carbon and other alloys, is used nowadays to build thin-walled structures. Various types of connections have been reported and practiced for thin-walled members, such as bolting, riveting, welding and other mechanical connections; self-drilling screw connections are commonly used for cold-formed purlin-sheeting connections. In this paper, an attempt is made to develop a moment-resisting frame that can be rapidly and remotely constructed with thin-walled sections and self-drilling screws. Semi-rigid moment connections are developed with rectangular thin-walled tubes and the screws. The finite element analysis programme ABAQUS is used for modelling the screwed connections. The various modelling procedures for simulating the connection behaviour, such as the tie-constraint model, the oriented spring model and solid-interaction modelling, are compared and critically reviewed. From the experimental validations, solid-interaction modelling was identified as the most accurate and is used for predicting the connection behaviour. From the finite element analysis, hysteresis curves and the modes of failure were identified. Parametric studies were performed on the connection model to optimize the connection configuration for the desired connection characteristics.

Keywords: buckling, cold formed steel, finite element analysis, screwed connections

Procedia PDF Downloads 187
12323 The Analysis of Swales Model (CARS Model) in the UMT Final Year Engineering Students

Authors: Kais Amir Kadhim

Abstract:

Context: The study focuses on the rhetorical structure of chapters in engineering final year projects, specifically the Introduction chapter, written by UMT (University of Marine Technology) engineering students. Existing research has explored the use of genre-based approaches to analyze the writing of final year projects in various disciplines. Research Aim: The aim of this study is to investigate the rhetorical structure of Introduction chapters in engineering final year projects by UMT students. The study aims to identify the frequency of communicative moves and their constituent steps within the Introduction chapters, as well as understand how students justify their research projects. Methodology: The research design will utilize a mixed method approach, combining both quantitative and qualitative methods. Forty Introduction chapters from two different fields in UMT engineering undergraduate programs will be selected for analysis. Findings: The study intends to identify the types of moves present in the Introduction chapters of engineering final year projects by UMT students. Additionally, it aims to determine if these moves and steps are obligatory, conventional, or optional. Theoretical Importance: The study draws upon Bunton's modified CARS (Creating a Research Space) model, which is a conceptual framework used for analyzing the introduction sections of theses. By applying this model, the study contributes to the understanding of the rhetorical structure of Introduction chapters in engineering final year projects. Data Collection: The study will collect data from forty Introduction chapters of engineering final year projects written by UMT engineering students. These chapters will be selected from two different fields within UMT's engineering undergraduate programs. Analysis Procedures: The analysis will involve identifying and categorizing the communicative moves and their constituent steps within the Introduction chapters. The study will utilize both quantitative and qualitative analysis methods to examine the frequency and nature of these moves. Question Addressed: The study aims to address the question of how UMT engineering students structure and justify their research projects within the Introduction chapters of their final year projects. Conclusion: The study aims to contribute to the knowledge of rhetorical structure in engineering final year projects by investigating the Introduction chapters written by UMT engineering students. By using a mixed method research design and applying the modified CARS model, the study intends to identify the types of moves and steps employed by students and explore their justifications for their research projects. The findings have the potential to enhance the understanding of effective academic writing in engineering disciplines.

Keywords: cohesive markers, learning, meaning, students

Procedia PDF Downloads 75
12322 Deep Neural Network Approach for Navigation of Autonomous Vehicles

Authors: Mayank Raj, V. G. Narendra

Abstract:

Ever since the DARPA challenge on autonomous vehicles in 2005, there has been a lot of buzz about 'autonomous vehicles' amongst the major tech giants such as Google, Uber, and Tesla. Numerous approaches have been adopted to solve this problem, which can have a long-lasting impact on mankind. In this paper, we have used deep learning techniques and the TensorFlow framework to build a neural network model that predicts the features needed for the navigation of autonomous vehicles: speed, acceleration, steering angle, and brake. The deep neural network has been trained on images and sensor data obtained from the comma.ai dataset. A heatmap was used to check for correlation among the features, and finally, four important features were selected. This was a multivariate regression problem. The final model had five convolutional layers, followed by five dense layers. Finally, the calculated values were tested against the labeled data, with the mean squared error used as the performance metric.
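
A minimal sketch of the described five-conv / five-dense regression network in Keras (the authors used TensorFlow). The input resolution, filter counts and layer widths are illustrative assumptions, not the authors' exact values, and the training data here are random placeholders.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(24, 5, strides=2, activation="relu",
                  input_shape=(120, 160, 3)),     # camera frame (assumed size)
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(4),                  # speed, acceleration, steering angle, brake
])
model.compile(optimizer="adam", loss="mse")       # multivariate regression, MSE

x = np.random.rand(8, 120, 160, 3).astype("float32")   # placeholder frames
y = np.random.rand(8, 4).astype("float32")             # placeholder labels
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x, verbose=0).shape)                # (8, 4)
```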

Keywords: autonomous vehicles, deep learning, computer vision, artificial intelligence

Procedia PDF Downloads 158
12321 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA), and ensemble dressing methods are based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast; to do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to achieve this goal, we use the simple average, one of the simplest and most widely used ensemble models, as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecasts during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we consider them a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
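
A minimal sketch of the weighting schemes described above, under one plausible reading: combine member surge forecasts using weights from each member's correlation with observations (or its inverse spread) over a training window, and compare against the simple-average benchmark by RMSE. The data are synthetic stand-ins for NYHOPS model output.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = np.sin(np.linspace(0, 6, 240)) + 0.3          # "observed" surge, m
members = np.stack([obs + rng.normal(0, s, 240)     # three member forecasts
                    for s in (0.10, 0.25, 0.40)])

train, test = slice(0, 160), slice(160, 240)        # training / testing windows

corr = np.array([np.corrcoef(m[train], obs[train])[0, 1] for m in members])
w_corr = corr / corr.sum()                          # correlation-based weights

inv_sd = 1.0 / members[:, train].std(axis=1)        # inverse-spread weights
w_sd = inv_sd / inv_sd.sum()

def rmse(forecast):
    return np.sqrt(np.mean((forecast - obs[test]) ** 2))

simple = members[:, test].mean(axis=0)              # benchmark: simple average
print("simple average RMSE:", round(rmse(simple), 4))
print("corr-weighted RMSE: ", round(rmse(w_corr @ members[:, test]), 4))
print("std-weighted RMSE:  ", round(rmse(w_sd @ members[:, test]), 4))
```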

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 309