Search results for: stock movement prediction
4037 Locating Speed Limit Signs for Highway Tunnel Entrance and Exit
Authors: Han Bai, Lemei Yu, Tong Zhang, Doudou Xie, Liang Zhao
Abstract:
The brightness changes at highway tunnel entrances and exits affect drivers' physical and psychological conditions. A quantitative analysis of these physical and psychological characteristics is more conducive to examining driving safety and to determining speed limit sign locations at tunnel entrance and exit sections. In this study, the physical and psychological effects of tunnels on drivers' traffic sign recognition are analyzed; subsequently, experiments with the assistance of an EyeLink II eye movement monitoring system are conducted in typical tunnels on the Ji-Qing and Xi-Zha freeways to collect the eye movement indexes "Fixation Duration" and "Eyeball Rotating Speed", which typically represent drivers' mental load and visual characteristics. On this basis, the paper establishes a visual recognition model for the speed limit signs at highway tunnel entrances and exits. In combination with related standards and regulations, it further presents recommended values for locating speed limit signs under different tunnel conditions. A case application to the Panlong tunnel on the Ji-Qing freeway is given to generate improvement suggestions.
Keywords: driver psychological load, eye movement index, speed limit sign location, tunnel entrance and exit
Procedia PDF Downloads 295
4036 Prediction of Soil Liquefaction by Using UBC3D-PLM Model in PLAXIS
Authors: A. Daftari, W. Kudla
Abstract:
Liquefaction is a phenomenon in which the strength and stiffness of a soil are reduced by earthquake shaking or other rapid cyclic loading. Liquefaction and related phenomena have been responsible for huge amounts of damage in historical earthquakes around the world. Modelling of soil behaviour is the main step in the soil liquefaction prediction process. Nowadays, several constitutive models for sand have been presented; nevertheless, only some of them can capture this mechanism. One of the most useful models in this regard is the UBCSAND model. In this research, the capability of this model is assessed using the PLAXIS software. Real data from the 1987 Superstition Hills earthquake in the Imperial Valley were used. The simulation results show that the UBC3D-PLM model reproduces a similar trend.
Keywords: liquefaction, PLAXIS, pore-water pressure, UBC3D-PLM
Procedia PDF Downloads 311
4035 Establishment of a Nomogram Prediction Model for Postpartum Hemorrhage during Vaginal Delivery
Authors: Yinglisong, Jingge Chen, Jingxuan Chen, Yan Wang, Hui Huang, Jing Zhang, Qianqian Zhang, Zhenzhen Zhang, Ji Zhang
Abstract:
Purpose: The study aims to establish a nomogram prediction model for postpartum hemorrhage (PPH) in vaginal delivery. Patients and Methods: Clinical data were retrospectively collected from vaginal delivery patients admitted to a hospital in Zhengzhou, China, from June 1, 2022, to October 31, 2022. Univariate and multivariate logistic regression were used to filter out independent risk factors. A nomogram model for PPH in vaginal delivery was established based on the risk factor coefficients. Bootstrapping was used for internal validation. To assess discrimination and calibration, receiver operating characteristic (ROC) and calibration curves were generated in the derivation and validation groups. Results: A total of 1340 cases of vaginal delivery were enrolled, with 81 (6.04%) having PPH. Logistic regression indicated that history of uterine surgery, induction of labor, duration of the first stage of labor, neonatal weight, WBC value (during the first stage of labor), and cervical lacerations were all independent risk factors of hemorrhage (P < 0.05). The areas under the curve (AUC) of the ROC curves in the derivation and validation groups were 0.817 and 0.821, respectively, indicating good discrimination. Two calibration curves showed that the nomogram predictions and the observed results were highly consistent (P = 0.105, P = 0.113). Conclusion: The developed individualized risk prediction nomogram model can assist midwives in recognizing and diagnosing high-risk groups for PPH and in initiating early warning to reduce PPH incidence.
Keywords: vaginal delivery, postpartum hemorrhage, risk factor, nomogram
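As an illustration of the modelling step described above, the following sketch fits a multivariable logistic regression to candidate risk factors and reports the ROC AUC; the synthetic data, column names, and event rate are assumptions for demonstration only, not the study's clinical records.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1340
df = pd.DataFrame({
    "uterine_surgery": rng.integers(0, 2, n),        # hypothetical risk-factor columns
    "labor_induction": rng.integers(0, 2, n),
    "first_stage_hours": rng.uniform(2, 16, n),
    "neonatal_weight_g": rng.normal(3300, 400, n),
    "wbc": rng.normal(10, 2, n),
    "cervical_laceration": rng.integers(0, 2, n),
})
df["pph"] = rng.binomial(1, 0.06, n)                  # ~6% event rate, as reported

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="pph"), df["pph"], test_size=0.3, random_state=0, stratify=df["pph"])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("coefficients (a nomogram rescales these into point scores):")
print(dict(zip(X_train.columns, model.coef_[0].round(3))))
print("test AUC:", round(auc, 3))
```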
Procedia PDF Downloads 78
4034 Wrapping–Decorative Movement of Time
Authors: Rudranil Das
Abstract:
Wrapping is a basic textile technique with a great decorative quality. It has long embellished people's lives and cultures in different forms. It links cultures, beliefs, thoughts, technology, and, above all, people. Through etymology we can undoubtedly study the movement of the word 'wrapping', but an in-depth analysis could also provide many concepts of its structural ability. In India alone, more than 105 different processes exist for wrapping the saree (a type of women's attire). Many other garments found all over the world are also connected by this technique and construction. One of the main objectives of this study is to enrich the explanation of wrapping and to develop surfaces using this technique. A deliberately more fragile and stretchable structural framework makes it more appropriate for different users according to their needs. Developments in design and technology could also create a new industry segment and generate employment for marginalized people.
Keywords: concept, existence, philosophical attachment, technological advancement
Procedia PDF Downloads 231
4033 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation
Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim
Abstract:
Graduation rates at six-year colleges are becoming a more essential indicator for incoming freshmen and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions since it enables the development of strategic plans that will assist or improve students' performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students' progress and performance. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contain science majors who graduated within six years as of the academic year 2017-2018. This analysis can be used to predict the graduation of students in the next academic year. Different predictive models, such as logistic regression, decision trees, support vector machines, random forest, naïve Bayes, and k-nearest neighbors, are applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared based on accuracy. The results indicated that the ensemble classifier achieves better accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT graduate on time
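A minimal sketch of the comparison described above, evaluating the listed classifiers and a soft-voting ensemble with 5-fold cross-validated accuracy; the synthetic feature matrix stands in for the student records, which are not available here.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)  # placeholder data

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
# Soft-voting ensemble built from the individual classifiers above.
models["ensemble"] = VotingClassifier([(k, v) for k, v in models.items()], voting="soft")

for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:9s} 5-fold accuracy: {acc:.4f}")
```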
Procedia PDF Downloads 73
4032 Selecting the Best RBF Neural Network Using PSO Algorithm for ECG Signal Prediction
Authors: Najmeh Mohsenifar, Narjes Mohsenifar, Abbas Kargar
Abstract:
This paper presents a stable method for predicting ECG signals with RBF neural networks selected by the PSO algorithm. Although the ECG signal of a healthy person is quasi-periodic, the electrocardiographic data of a patient contain distortions; therefore, there is no precise mathematical model for prediction. Here, we exploit neural networks, which are capable of complex nonlinear mapping. Although the architecture and spread of RBF networks are usually selected through trial and error, the PSO algorithm is used here to choose the best neural network. In this way, 2 seconds of a recorded ECG signal are employed to predict a duration of 20 seconds in advance. Our simulations show that the PSO algorithm can find the RBF neural network with minimum MSE and that the accuracy of the predicted ECG signal is 97%.
Keywords: electrocardiogram, RBF artificial neural network, PSO algorithm, predict, accuracy
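A minimal sketch of the selection idea: a plain particle swarm searches for the RBF spread (gamma) that minimizes prediction MSE. Kernel ridge regression with an RBF kernel stands in for the paper's RBF neural network, and a noisy sine wave stands in for recorded ECG data; both substitutions are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 800)
signal = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)  # quasi-periodic stand-in

def make_windows(x, lag=20):
    # Build one-step-ahead prediction pairs from a lagged window of the signal.
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    return X, x[lag:]

X, y = make_windows(signal)
split = int(0.8 * len(X))

def mse_for_gamma(gamma):
    model = KernelRidge(kernel="rbf", gamma=gamma).fit(X[:split], y[:split])
    return mean_squared_error(y[split:], model.predict(X[split:]))

# Plain PSO over a single parameter (log10 of gamma).
n_particles, iters = 10, 30
pos = rng.uniform(-3, 1, n_particles)          # log10(gamma) search range
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), np.array([mse_for_gamma(10 ** p) for p in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 1)
    val = np.array([mse_for_gamma(10 ** p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("best gamma:", 10 ** gbest, "test MSE:", mse_for_gamma(10 ** gbest))
```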
Procedia PDF Downloads 627
4031 Equivalent Circuit Representation of Lossless and Lossy Power Transmission Systems Including Discrete Sampler
Authors: Yuichi Kida, Takuro Kida
Abstract:
In the new smart society supported by the recent development of 5G and 6G communication systems, the importance of wireless power transmission is increasing. These systems contain discrete sampling systems in the middle of the transmission path, and the equivalent circuit representation of lossless or lossy power transmission through these systems is an important issue in circuit theory. In this paper, for a given weight function, we show that a lossless power transmission system with the given weight is expressed by an equivalent circuit representation of Kida's optimal signal prediction system followed by a reactance multi-port circuit behind it. Further, it is shown that, when the system is lossy, the system has an equivalent circuit in the form of a multi-port positive-real circuit connected behind Kida's optimal signal prediction system. Also, for the convenience of the reader, this paper presents the equivalent circuit expressions of the reactance multi-port circuit and the positive-real multi-port circuit by Cauer and Ohno, whose information is currently being lost even on the Internet.
Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, power transmission
Procedia PDF Downloads 122
4030 A Neural Network System for Predicting the Hardness of Titanium Aluminum Nitride (TiAlN) Coatings
Authors: Omar M. Elmabrouk
Abstract:
In the high-speed machining process, the cutting tool consistently deals with high localized stress at the tool tip, tip temperatures exceeding 800°C, and chip sliding along the rake face. These conditions affect tool wear, cutting tool performance, the quality of the produced parts, and tool life. Therefore, a thin film coating on the cutting tool should be considered to improve the tool surface properties while maintaining its bulk properties. One of the common coating processes for applying a thin film for hard coating purposes is PVD magnetron sputtering. In this paper, the effects of the PVD magnetron sputtering coating process parameters, namely sputter power in the range of 4.81-7.19 kW, bias voltage in the range of 50.00-300.00 V, and substrate temperature in the range of 281.08-600.00 °C, on coating hardness were predicted using an artificial neural network (ANN). The results were compared with previously published results obtained using an RSM model. It was found that the ANN is more accurate in predicting the coating hardness and hence will not only improve tool life but also significantly enhance the efficiency of the machining processes.
Keywords: artificial neural network, hardness, prediction, titanium aluminium nitride coating
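A minimal sketch of an ANN mapping the three sputtering parameters to coating hardness over the stated ranges; the synthetic hardness values are an assumption, since the measured data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 60
power = rng.uniform(4.81, 7.19, n)        # sputter power, kW
bias = rng.uniform(50.0, 300.0, n)        # bias voltage, V
temp = rng.uniform(281.08, 600.0, n)      # substrate temperature, degrees C
X = np.column_stack([power, bias, temp])
hardness = 20 + 2 * power + 0.01 * bias + 0.005 * temp + rng.normal(0, 0.5, n)  # synthetic target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))
model.fit(X, hardness)

print("predicted hardness at 6 kW, 150 V, 450 C:", model.predict([[6.0, 150.0, 450.0]])[0])
```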
Procedia PDF Downloads 554
4029 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties existing in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error; therefore, it is chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures with less computational effort compared to the direct finite element model.
Keywords: probability-based damage detection (PBDD), kriging, surrogate modeling, uncertainty quantification, artificial intelligence, enhanced ideal gas molecular movement (EIGMM)
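A minimal sketch of the probabilistic step: a kriging (Gaussian process) surrogate links a damage parameter to a modal quantity, measurement uncertainty is propagated by Monte Carlo sampling, and the PDE is the fraction of samples exceeding a damage threshold. The one-dimensional toy response and the threshold are assumptions standing in for the truss models and the EIGMM updating loop.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Training data: damage severity (0 = intact) vs. a simulated natural frequency.
severity = np.linspace(0, 0.5, 15).reshape(-1, 1)
frequency = 10.0 * (1.0 - 0.6 * severity.ravel()) + rng.normal(0, 0.02, 15)

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
kriging.fit(severity, frequency)

# Monte Carlo over noisy "measured" frequencies: invert the surrogate by grid search.
measured, sigma = 9.4, 0.05
grid = np.linspace(0, 0.5, 501).reshape(-1, 1)
pred = kriging.predict(grid)
samples = rng.normal(measured, sigma, 5000)
estimated_severity = grid.ravel()[np.abs(pred[None, :] - samples[:, None]).argmin(axis=1)]

threshold = 0.05  # severity above which the element is declared damaged
pde = (estimated_severity > threshold).mean()
print("probability of damage existence:", round(pde, 3))
```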
Procedia PDF Downloads 240
4028 The Effect of Information Technology on the Quality of Accounting Information
Authors: Mohammad Hadi Khorashadi Zadeh, Amin Karkon, Hamid Golnari
Abstract:
This study, conducted in 2014, aimed to investigate the impact of information technology on the quality of accounting information. From a population of 425 executives of companies listed on the Tehran Stock Exchange, a sample of 84 managers was selected using the Cochran formula and simple random sampling. Data were collected through a questionnaire; some of the questions on the impact of information technology were taken from standardized questionnaires, and the remaining questions were designed according to the existing components. After the distribution and collection of the questionnaires, data analysis and hypothesis testing were conducted in two parts using structural equation modeling in the SmartPLS 2 software, covering the measurement model and the structural model. In the first part, the technical characteristics of the questionnaire, including reliability and convergent and divergent validity, were checked for PLS; in the second part, significance coefficients were used to examine the research hypotheses. The results showed that information technology and its dimensions (timeliness, relevance, accuracy, adequacy, and actual transfer rate) affect the quality of accounting information of companies listed on the Tehran Stock Exchange.
Keywords: information technology, information quality, accounting, transfer speed
Procedia PDF Downloads 277
4027 Prediction of Disability-Adjustment Mental Illness Using Machine Learning
Authors: S. R. M. Krishna, R. Santosh Kumar, V. Kamakshi Prasad
Abstract:
Machine learning techniques are applied to analyze the impact of mental illness on the burden of disease, which is calculated using the disability-adjusted life year (DALY). The DALYs for a disease are the sum of the years of life lost due to premature mortality (YLL) and the years of healthy life lost due to disability (YLD). A critical analysis is done based on the data sources, machine learning techniques, and feature extraction methods. The review is based on major databases. The extracted data are examined using statistical analysis, and machine learning techniques are applied. Predicting the impact of mental illness on the population using machine learning techniques is an alternative approach to the old traditional strategies, which are time-consuming and may not be reliable. The approach requires comprehensive adoption, innovative algorithms, and an understanding of the limitations and challenges. The obtained prediction is a way of understanding the underlying impact of mental illness on people's health, and it enables us to estimate healthy life expectancy. The growing impact of mental illness and the challenges associated with the detection and treatment of mental disorders make it necessary for us to understand its complete effect on the majority of the population.
Procedia PDF Downloads 37
4026 IoT and Deep Learning approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards
Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia
Abstract:
Aquaponics offers a simple, conclusive solution to the world's food and environmental crises. This approach combines the idea of aquaculture (growing fish) with hydroponics (growing vegetables and plants by a soilless method). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important in aquaponic yield management. This paper presents a comparative analysis of a standard aquaponic system and a vermiponic system (aquaponics with worms), grown in a controlled environment, by implementing IoT and deep learning-based growth stage segregation and harvest time prediction of Swiss chard before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of results, including the best models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.
Keywords: aquaponics, deep learning, internet of things, vermiponics
Procedia PDF Downloads 72
4025 Factors Influencing the Voluntary Disclosure of Vietnamese Listed Companies
Authors: Pham Duc Hieu, Do Thi Huong Lan
Abstract:
The aim of this paper is to investigate the factors affecting the extent of voluntary disclosure by examining the annual reports of 205 industrial and manufacturing companies listed on the Ho Chi Minh Stock Exchange (HSX) and the Hanoi Stock Exchange (HNX) for the year ending 2012. These factors include company size, profitability, leverage, state ownership, managerial ownership, foreign ownership, board independence, role duality, and type of external auditor. Evidence from this study suggests two main findings. (1) Companies with high foreign ownership have a high level of voluntary disclosure. (2) Company size is an important factor related to the increased level of voluntary disclosure in annual reports made by Vietnamese listed companies; the larger the company, the more information is disclosed. However, no significant associations are found between voluntary disclosure and profitability, leverage, state ownership, managerial ownership, board independence, role duality, or type of external auditor, as hypothesized in this study.
Keywords: voluntary disclosure, Vietnamese listed companies, voluntary, duality
Procedia PDF Downloads 410
4024 Folk Media and Political Movement: A Case Study on the Bodos of North East India
Authors: Faguna Barmahalia
Abstract:
The politics of ethnic identity in north-east India is a well-known phenomenon. Ethnic assertion in this region is mostly linguistic and cultural in nature. Most of the ethnic groups in the north-east region have been demanding either an autonomous or a separate state to maintain their socio-cultural identity. Since Indian independence, these ethnic groups feel that they have not developed. Despite having many natural resources, North East India has remained backward in terms of the economy, education, and politics. In this scenario, many educated and middle-class elite people have become involved in working for the all-round development of their community. The Bodos are one of the major tribes in North East India. In Assam, the Bodos consider themselves to be exploited and suppressed by Assamese Hindu society. Consequently, a socio-cultural identity movement has emerged among the Bodos. The main aims of this study are: i. to focus on how the Bodos of Assam are using folk media in their political movement and ii. to analyse the role of folklore in serving ethnic unity and nationalism among the Bodos. Methodology: The study is based on primary and secondary sources. Interview and observation methods were used for collecting the primary data. For secondary sources, printed books, magazines, and other materials published by distinguished publishers and websites have been used.
Keywords: media, culture, nationalism, politics
Procedia PDF Downloads 222
4023 Reexamining Contrarian Trades as a Proxy of Informed Trades: Evidence from China's Stock Market
Authors: Dongqi Sun, Juan Tao, Yingying Wu
Abstract:
This paper reexamines the appropriateness of contrarian trades as a proxy for informed trades, using high-frequency Chinese stock data. Employing this measure over 5-minute intervals, a U-shaped intraday pattern of the probability of informed trades (PIN) is found for the CSI300 stocks, which is consistent with previous findings for other markets. However, when the trades are divided by size, a reversed U-shaped PIN is observed for large-sized trades, as opposed to the U-shaped pattern for small- and medium-sized trades. Drawing on the mixed evidence across trade sizes, the price impact of trades is further investigated. By examining the relationship between trade imbalances and unexpected returns, large-sized trades are found to have a significant price impact. This implies that in those intervals with large trades, it is non-contrarian trades that are more likely to be informed trades. Taking account of the price impact of large-sized trades, non-contrarian trades are used to proxy for informed trading in those intervals with large trades, while contrarian trades are still used to measure informed trading in other intervals. A stronger U-shaped PIN is demonstrated with this modification. Auto-correlation and information advantage tests for robustness also support the modified informed trading measure.
Keywords: contrarian trades, informed trading, price impact, trade imbalance
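A minimal sketch of the interval-level quantities discussed above: trades are flagged as contrarian when their direction opposes the preceding price move, and signed trade imbalance is aggregated over 5-minute intervals. The simulated tick data and the simplified classification rule are assumptions, not the paper's exact measure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
ticks = pd.DataFrame({
    "time": pd.date_range("2020-01-02 09:30", periods=n, freq="s"),
    "price": 10 + np.cumsum(rng.normal(0, 0.01, n)),
    "direction": rng.choice([1, -1], n),               # +1 buyer-initiated, -1 seller-initiated
    "size": rng.integers(100, 5000, n),
})

prior_move = np.sign(ticks["price"].diff()).fillna(0)
ticks["contrarian"] = (ticks["direction"] * prior_move) < 0   # trade against the last price move
ticks["signed_size"] = ticks["direction"] * ticks["size"]     # signed volume

# Aggregate to 5-minute bars: trade imbalance, contrarian share, total volume.
bars = (ticks.set_index("time")
             .resample("5min")
             .agg({"signed_size": "sum", "contrarian": "mean", "size": "sum"})
             .rename(columns={"signed_size": "imbalance",
                              "contrarian": "contrarian_share",
                              "size": "volume"}))
print(bars.head())
```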
Procedia PDF Downloads 165
4022 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading
Authors: Peter Shi
Abstract:
Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of the conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy low and sell high. The thresholds for “low” and “high” are precisely derived using a max-min operation on Bayes' error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the buy-low and sell-high algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.
Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market
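A minimal sketch of a buy-low/sell-high rule: trade when the market price deviates from a proxy for the rational price by more than chosen thresholds. The moving-average proxy and the fixed in-sample quantile thresholds are illustrative assumptions; the paper derives its thresholds from a max-min operation on Bayes' error, which is not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000))))  # simulated price path

rational = price.rolling(50).mean()               # proxy for the rational price
deviation = (price - rational) / rational
low_thr, high_thr = deviation.quantile(0.2), deviation.quantile(0.8)   # in-sample thresholds (assumption)

position = pd.Series(np.nan, index=price.index)
position[deviation < low_thr] = 1     # buy low
position[deviation > high_thr] = 0    # sell high (go flat)
position = position.ffill().fillna(0)

strategy = (position.shift(1) * price.pct_change()).fillna(0)
buy_hold = price.pct_change().fillna(0)
print("strategy cumulative return :", round((1 + strategy).prod() - 1, 3))
print("buy & hold cumulative return:", round((1 + buy_hold).prod() - 1, 3))
```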
Procedia PDF Downloads 72
4021 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a support vector machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into train, test, and validation sets in various proportions, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. The paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, the data collection and preprocessing, the model construction, and the evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within the soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
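A minimal sketch of the SVM (SVR) regression from the listed soil-environmental features to daily ET, evaluated with R2, RMSE, and MAE; the synthetic feature table and target are assumptions standing in for the sensor-node data and the FAO56/soil-water-balance computations.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 365
data = pd.DataFrame({
    "Rs": rng.uniform(5, 30, n), "T": rng.uniform(5, 35, n), "P": rng.uniform(98, 103, n),
    "RH": rng.uniform(30, 95, n), "u2": rng.uniform(0.5, 5, n), "R": rng.exponential(2, n),
    "ST": rng.uniform(5, 30, n), "dSM": rng.normal(0, 0.02, n),
})
et = 0.12 * data["Rs"] + 0.05 * data["T"] - 0.01 * data["RH"] + rng.normal(0, 0.3, n)  # synthetic mm/day

X_train, X_test, y_train, y_test = train_test_split(data, et, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R2  :", round(r2_score(y_test, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y_test, pred))), 3))
print("MAE :", round(mean_absolute_error(y_test, pred), 3))
```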
Procedia PDF Downloads 69
4020 Images of Spiritism in Brazilian Catholic Discourse (1889-1937)
Authors: Frantisek Kalenda
Abstract:
With the ultimate triumph of the republican movement in Brazil in 1889 and the adoption of a constitution promoting religious freedom, the formerly dominant Roman Catholic Church entered a long period of struggle to recover its lost position, fighting both the liberal and secular character of the new regime and the rising competition on the “market of faith”. Spiritism in its original Brazilian form proved to be one of its key adversaries during the First Republic (1889-1930) and the Second Republic (1930-1937), provoking a significant attempt within the official Church to discredit and destroy the movement. This paper explores this effort through the Catholic portrayal of Spiritism in its official media, focusing on the creation of stereotypes and on both the theological and “scientific” arguments used against it. Its core is based on the analysis of primary sources, mainly the influential A Ordem and Mensageiro da Fé.
Keywords: Catholic Church, media, other, spiritism, stereotype
Procedia PDF Downloads 274
4019 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model
Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl
Abstract:
Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the influencing parameters of the process stability, instead of the removed volume, as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
Keywords: dexel, process stability, material removal, milling
Procedia PDF Downloads 525
4018 Grey Prediction of Atmospheric Pollutants in Shanghai Based on GM(1,1) Model Group
Authors: Diqin Qi, Jiaming Li, Siman Li
Abstract:
Based on the use of the three-point smoothing method to selectively process the original data columns, this paper establishes a group of grey GM(1,1) models to predict the concentration ranges of four major air pollutants in Shanghai for 2023 to 2024. The results indicate that PM₁₀, SO₂, and NO₂ maintain the national Grade I standards, while the concentration of PM₂.₅ has decreased but still remains within the national Grade II standards. Based on the forecast results, recommendations are provided for the Shanghai municipal government's efforts in air pollution prevention and control.
Keywords: atmospheric pollutant prediction, grey GM(1,1) model group, three-point smoothing method
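A minimal sketch of a single GM(1,1) model from such a group: accumulate the series (AGO), fit the grey parameters by least squares, forecast, and difference back (IAGO). The short pollutant series is an illustrative assumption, not Shanghai's monitoring data.

```python
import numpy as np

def gm11_forecast(x0, horizon=2):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # AGO series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters a (development), b (grey input)
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted and forecast AGO series
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]           # IAGO back to the original scale
    return x0_hat[len(x0):]                              # out-of-sample forecasts only

pm25 = [32.0, 30.5, 29.8, 27.9, 27.1, 25.6]              # assumed annual means, ug/m3
print("GM(1,1) forecasts for the next two years:", gm11_forecast(pm25, horizon=2).round(2))
```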
Procedia PDF Downloads 35
4017 Non-Linear Velocity Fields in Turbulent Wave Boundary Layer
Authors: Shamsul Chowdhury
Abstract:
The objective of this paper is to present a detailed analysis of the turbulent wave boundary layer produced by progressive finite-amplitude waves. Most previous work on mass transport in the turbulent boundary layer assumes that the eddy viscosity is not time-varying and that the sediment movement is induced by the mean velocity. Near the ocean bottom, the waves produce a thin turbulent boundary layer in which the flow is highly rotational, and the shear stress associated with the fluid motion cannot be neglected. The magnitude and predominant direction of the sediment transport near the bottom are known to be closely related to the flow in the wave-induced boundary layer. The magnitude of the water particle velocity at the crest phase differs from that at the trough phase due to the non-linearity of the waves, which plays an important role in determining the sediment movement. The non-linearity of the waves becomes predominant in the surf zone, where sediment movement occurs vigorously. Therefore, in order to describe the flow near the bottom and the relationship between the flow and the sediment movement, the analysis was done using the non-linear boundary layer equation, and finite-amplitude wave theory was applied to represent the velocity fields in the turbulent wave boundary layer. At first, the calculation was done for the turbulent wave boundary layer with a two-dimensional model that is non-linear throughout, while a Stokes second-order wave profile is adopted at the upper boundary. The calculated profile was compared with experimental data. Finally, the calculation was done for various modes of the velocity and turbulent energy. The mean velocity is found to differ depending on the relative depth and the roughness. It is also found that, due to non-linearity, the absolute values of the velocity, the turbulent energy, and the Reynolds stress are asymmetric. The mean velocity of the laminar boundary layer is always positive, but in the turbulent boundary layer it plays a very complicated role.
Keywords: wave boundary, mass transport, mean velocity, shear stress
Procedia PDF Downloads 262
4016 A Computational Analysis of Flow and Acoustics around a Car Wing Mirror
Authors: Aidan J. Bowes, Reaz Hasan
Abstract:
The automotive industry is continually aiming to improve the aerodynamics of car body design. This may be for a variety of beneficial reasons, such as increasing speed or fuel efficiency by reducing drag. Recently, however, there has been a greater amount of focus on the wind noise produced while driving. Designers in this industry seek a combination of simplicity of approach and overall effectiveness. This, combined with the growing availability of commercial CFD (computational fluid dynamics) packages, is likely to lead to an increase in the use of RANS (Reynolds-averaged Navier-Stokes) based CFD methods. This is because these methods are often simpler than other CFD methods and place a lower demand on time and computing power. In this investigation, the effectiveness of turbulent flow and acoustic noise prediction using RANS-based methods has been assessed for different wing mirror geometries. Three different RANS-based models were used: standard k-ε, realizable k-ε, and k-ω SST. The merits and limitations of these methods are then discussed by comparison with both experimental and numerical results found in the literature. In general, flow prediction is fairly comparable to more complex LES (large eddy simulation) based methods, in particular for the k-ω SST model. However, acoustic noise prediction still leaves opportunities for improvement using RANS-based methods.
Keywords: acoustics, aerodynamics, RANS models, turbulent flow
Procedia PDF Downloads 447
4015 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public-domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule, and redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, and we found the random forest algorithm to be the better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods, our implementation protocols, and the usefulness of these techniques in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
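A minimal sketch of the descriptor-plus-random-forest pipeline: a few RDKit descriptors are computed from SMILES strings and a random forest classifier is trained on them. The tiny compound list, the descriptor choice, and the binary labels are assumptions for illustration, not the 150,000-compound library or its automated descriptor set.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    # Compute a small, fixed set of numerical descriptors for one molecule.
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC",
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "O=C(C)Nc1ccc(O)cc1"]
labels = np.array([0, 0, 1, 0, 1, 1])                     # placeholder activity labels

X = np.array([featurize(s) for s in smiles])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("predicted activity for caffeine:",
      model.predict([featurize("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")])[0])
```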
Procedia PDF Downloads 358
4014 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers
Authors: Nishank Raisinghani
Abstract:
Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multi-headed self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multi-headed self-attention mechanism, to generate latent representations of the drug and omics data, respectively. Our model architectures apply an attention mechanism to both the drug and the multi-omics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four of these architectures and extract results from all of them. Our study greatly contributes to the future of drug discovery and precision medicine by seeking to optimize the time and accuracy of drug response prediction.
Keywords: drug discovery, transformers, graph neural networks, multiomics
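A minimal sketch of the fusion step: latent representations of the drug and of the omics profile are concatenated and passed through a fully connected head that regresses the IC-50 score. The two encoders are reduced to simple linear layers here as an assumption; the proposed architectures use graph-attention and self-attention transformers to produce these latents.

```python
import torch
import torch.nn as nn

class DrugResponseHead(nn.Module):
    def __init__(self, drug_dim=256, omics_dim=512, latent_dim=128):
        super().__init__()
        # Stand-ins for the graph-attention and self-attention encoders.
        self.drug_encoder = nn.Sequential(nn.Linear(drug_dim, latent_dim), nn.ReLU())
        self.omics_encoder = nn.Sequential(nn.Linear(omics_dim, latent_dim), nn.ReLU())
        self.head = nn.Sequential(                      # fully connected regression head
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, drug_features, omics_features):
        z = torch.cat([self.drug_encoder(drug_features),
                       self.omics_encoder(omics_features)], dim=-1)   # concatenated latents
        return self.head(z).squeeze(-1)                               # predicted IC-50

model = DrugResponseHead()
drug = torch.randn(16, 256)      # placeholder drug representations
omics = torch.randn(16, 512)     # placeholder transcriptomics + genomics profiles
ic50_true = torch.randn(16)

loss = nn.functional.mse_loss(model(drug, omics), ic50_true)
loss.backward()
print("batch MSE:", loss.item())
```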
Procedia PDF Downloads 154
4013 Masked Candlestick Model: A Pre-Trained Model for Trading Prediction
Authors: Ling Qi, Matloob Khushi, Josiah Poon
Abstract:
This paper introduces a pre-trained Masked Candlestick Model (MCM) for trading time-series data. The pre-trained model is based on three core designs. First, we convert the trading price data at each data point into a set of normalized elements and produce embeddings of each element. Second, we generate a masked sequence of such embedded elements as inputs for self-supervised learning. Third, we use the encoder mechanism from the transformer to train on the inputs. The masked model learns the contextual relations among the sequence of embedded elements, which can aid downstream classification tasks. To evaluate the performance of the pre-trained model, we fine-tune MCM for three different downstream classification tasks to predict future price trends. The fine-tuned models achieved better accuracy rates for all three tasks than the baseline models. To further analyze the effectiveness of MCM, we test the same architecture on three currency pairs, namely EUR/GBP, AUD/USD, and EUR/JPY. The experimental results demonstrate MCM's effectiveness on all three currency pairs and indicate its capability for signal extraction from trading data.
Keywords: masked language model, transformer, time series prediction, trading prediction, embedding, transfer learning, self-supervised learning
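A minimal sketch of the masked pre-training idea: normalized candlestick elements are embedded, a random subset of positions is replaced by a mask token, and a transformer encoder is trained to reconstruct the masked elements. The dimensions, masking ratio, and placeholder OHLC windows are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class MaskedCandlestickEncoder(nn.Module):
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)              # element embedding
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positional embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_features)               # reconstruct masked elements

    def forward(self, x, mask):
        # x: (batch, seq, n_features); mask: (batch, seq) bool, True = masked position
        h = self.embed(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_token, h)  # swap in the mask token
        h = self.encoder(h + self.pos[:, : x.size(1)])
        return self.head(h)

def pretrain_step(model, optimizer, x, mask_ratio=0.15):
    mask = torch.rand(x.shape[:2]) < mask_ratio
    pred = model(x, mask)
    loss = nn.functional.mse_loss(pred[mask], x[mask])           # loss only on masked elements
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = MaskedCandlestickEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(32, 48, 4)                                   # placeholder normalized OHLC windows
print("pre-training loss:", pretrain_step(model, opt, batch))
```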
Procedia PDF Downloads 129
4012 Firm Performance and Evolving Corporate Governance: An Empirical Study from Pakistan
Authors: Mohammed Nishat, Ahmad Ghazali
Abstract:
This study empirically examines corporate governance and firm performance, and evaluates the governance, ownership, and control-related variables that are hypothesized to affect firm performance. It assesses the effectiveness of corporate governance mechanisms in achieving high performance among companies listed on the Karachi Stock Exchange (KSE) over the period from 2005 to 2008. To measure firm performance, the research uses three measures: return on assets (ROA), return on equity (ROE), and Tobin's Q. To link firm performance with corporate governance, three categories of corporate governance variables are tested: governance, ownership, and control-related variables. A fixed-effect regression model is used to test the link between corporate governance and firm performance for 267 KSE-listed Pakistani firms. The results show that corporate governance variables such as the percentage block holding by individuals have a positive impact on firm performance. When the CEO is also the chairperson of the board, firm performance is adversely affected. A negative relationship is also found between shares held by insiders and firm performance. Leverage has a negative impact on firm performance, and firm size is positively related to firm performance.
Keywords: corporate governance, performance, agency cost, Karachi stock market
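A minimal sketch of a firm fixed-effects regression of ROA on governance, ownership, and control variables; the simulated panel (kept small for speed) and its column names are assumptions, not the KSE dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 60, 4                          # smaller panel than the 267 firms in the study
n = n_firms * n_years
panel = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2005, 2005 + n_years), n_firms),
    "roa": rng.normal(0.05, 0.02, n),
    "ceo_duality": rng.integers(0, 2, n),
    "insider_share": rng.uniform(0, 0.5, n),
    "block_holding": rng.uniform(0, 0.4, n),
    "leverage": rng.uniform(0.1, 0.9, n),
    "size": rng.normal(15, 1.5, n),
})

# Firm fixed effects enter as C(firm) dummies; year dummies could be added the same way.
model = smf.ols(
    "roa ~ ceo_duality + insider_share + block_holding + leverage + size + C(firm)",
    data=panel,
).fit()
print(model.params[["ceo_duality", "insider_share", "block_holding", "leverage", "size"]])
```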
Procedia PDF Downloads 357
4011 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for an accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. The models were also developed for an accurate prediction of power consumption that can handle large-scale purposes. The model incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Both polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in the ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and the sphericity of the particles showed a significant effect on membrane fouling.
Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
Procedia PDF Downloads 470
4010 Evaluating Portfolio Performance by Highlighting Network Property and the Sharpe Ratio in the Stock Market
Authors: Zahra Hatami, Hesham Ali, David Volkman
Abstract:
Selecting a portfolio for investing is a crucial decision for individuals and legal entities. In the last two decades, with economic globalization, a stream of financial innovations has rushed to the aid of financial institutions. Selecting stocks for a portfolio is nevertheless always a challenging task for investors. This study aims to create a financial network to identify optimal portfolios using network centrality metrics. The research presents a community detection technique for identifying superior stocks that can be described as an optimal stock portfolio to be used by investors. By using the advantages of the network and the properties of its extracted communities, a group of stocks was selected for each of the various time periods. The performance of the optimal portfolios was compared to a well-known index, and their Sharpe ratios were calculated for each period to evaluate their profitability for decision making. The analysis shows that portfolios selected from stocks with low centrality measurements can outperform the market; however, they have a lower Sharpe ratio than stocks with high centrality scores. In other words, stocks with low centralities could outperform the S&P 500 yet have a lower Sharpe ratio than high-centrality stocks.
Keywords: portfolio management performance, network analysis, centrality measurements, Sharpe ratio
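A minimal sketch of the workflow: build a correlation-based stock network, rank stocks by a centrality measure, and compare the annualized Sharpe ratios of low- and high-centrality portfolios. The simulated returns, the correlation threshold, and equal weighting are assumptions, not the study's data or its community detection step.

```python
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(1)
tickers = [f"S{i}" for i in range(20)]
returns = pd.DataFrame(rng.normal(0.0005, 0.01, (252, 20)), columns=tickers)  # daily returns

# Build the network: connect stocks whose return correlation exceeds a threshold.
corr = returns.corr()
G = nx.Graph()
G.add_nodes_from(tickers)
for i, a in enumerate(tickers):
    for b in tickers[i + 1:]:
        if abs(corr.loc[a, b]) > 0.1:
            G.add_edge(a, b, weight=corr.loc[a, b])

centrality = pd.Series(nx.degree_centrality(G))
low = centrality.nsmallest(5).index    # low-centrality candidate portfolio
high = centrality.nlargest(5).index    # high-centrality candidate portfolio

def sharpe(portfolio_returns, risk_free=0.0):
    excess = portfolio_returns - risk_free
    return np.sqrt(252) * excess.mean() / excess.std()   # annualized Sharpe ratio

print("low-centrality Sharpe :", sharpe(returns[low].mean(axis=1)))
print("high-centrality Sharpe:", sharpe(returns[high].mean(axis=1)))
```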
Procedia PDF Downloads 154
4009 Financial Centers and BRICS Stock Markets: The Effect of the Recent Crises
Authors: Marco Barassi, Nicola Spagnolo
Abstract:
This paper uses a DCC-GARCH model framework to examine mean and volatility spillover (i.e., causality-in-mean and causality-in-variance) dynamics between financial centers and the stock market indexes of the BRICS countries. In addition, tests for changes in the transmission mechanism are carried out by first testing for structural breaks and then setting a dummy variable to control for the 2008 financial crisis. We use weekly data for nine countries: four financial centers (Germany, Japan, the UK, and the USA) and the five BRICS countries (Brazil, Russia, India, China, and South Africa). Furthermore, we control for monetary policy using domestic interest rates (the 90-day Treasury Bill interest rate) over the period 03/1/1990 - 04/2/2014, for a total of 1204 observations. Results show that the 2008 financial crisis changed the causality dynamics for most of the countries considered. The same pattern can also be observed in the conditional correlations, which show an upward shift following the turbulence associated with the 2008 crisis. The magnitude of these effects suggests a leading role played by the financial centers in affecting Brazil and South Africa, whereas Russia, India, and China show a higher degree of resilience.
Keywords: financial crises, DCC-GARCH model, volatility spillovers, economics
Procedia PDF Downloads 357
4008 Modelling Impacts of Global Financial Crises on Stock Volatility of Nigeria Banks
Authors: Maruf Ariyo Raheem, Patrick Oseloka Ezepue
Abstract:
This research aimed at determining the most appropriate heteroskedastic model for predicting the volatility of 10 major Nigerian banks: Access, United Bank for Africa (UBA), Guaranty Trust, Skye, Diamond, Fidelity, Sterling, Union, ETI, and Zenith, using the daily closing stock prices of each bank from 2004 to 2014. The models employed include ARCH(1), GARCH(1,1), EGARCH(1,1), and TARCH(1,1). The results show that all the banks' returns are highly leptokurtic, significantly skewed, and thus non-normal across the four periods, except for Fidelity Bank during the financial crisis; these findings are similar to those of other global markets. There is also strong evidence for the presence of heteroskedasticity and that volatility persistence during the crisis is higher than before the crisis across the 10 banks, with UBA taking the lead at about 11 times higher during the crisis. Findings further reveal that asymmetric GARCH models became dominant, especially during and after the financial crisis, when the second round of reforms was introduced into the banking industry by the Central Bank of Nigeria (CBN). Generally, one could say that Nigerian bank returns are volatility-persistent during and after the crisis and are characterised by leverage effects of negative and positive shocks during these periods.
Keywords: global financial crisis, leverage effect, persistence, volatility clustering
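A minimal sketch of fitting the listed GARCH-family models to a daily return series with the arch package and comparing them by AIC; the simulated heavy-tailed returns stand in for the banks' closing-price returns, and the GJR specification (o=1) is used here as the asymmetric TARCH-type variant.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(42)
returns = pd.Series(rng.standard_t(df=5, size=2500))   # simulated daily returns, per cent units

specs = {
    "ARCH(1)": dict(vol="ARCH", p=1),
    "GARCH(1,1)": dict(vol="GARCH", p=1, q=1),
    "EGARCH(1,1)": dict(vol="EGARCH", p=1, q=1),
    "GJR(1,1,1)": dict(vol="GARCH", p=1, o=1, q=1),     # asymmetric (leverage) term
}

for name, kwargs in specs.items():
    res = arch_model(returns, mean="Constant", dist="t", **kwargs).fit(disp="off")
    print(f"{name:12s} AIC={res.aic:10.2f}")
    print(res.params, "\n")
```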
Procedia PDF Downloads 526