Search results for: stock price prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3886

1426 Contact Phenomena in Medieval Business Texts

Authors: Carmela Perta

Abstract:

Among the studies that have flourished in the field of historical sociolinguistics, mainly in the strand devoted to English history during its medieval and early modern phases, multilingual texts have been analysed using theories and models from contact linguistics, thus applying synchronic models and approaches to the past. This is also true of contact phenomena that transcend the writing level and involve the language systems implicated in contact processes, to the point that a new variety is perceived. This is the case for medieval administrative-commercial texts in which, according to some scholars, the degree of fusion of Anglo-Norman, Latin and Middle English is so high that a mixed code emerges, with recurrent patterns of mixed forms. Of particular interest is a collection of multilingual business writings by John Balmayn, an Englishman overseeing a large shipment in Tuscany: the Cantelowe accounts. These documents display various analogies with multilingual texts written in England in the same period; indeed, the writer seems to make use of the above-mentioned patterns, with Middle English, Latin, Anglo-Norman, and the newly added Italian. Applying an atomistic yet dynamic approach to the study of contact phenomena, we investigate these documents, exploring the nature of the switching forms they contain from an intra-writer variation perspective. After analysing the accounts and the type of multilingualism in them, we take stock of their assumed mixed-code nature, comparing the characteristics found in this genre with modern assumptions. The aim is to evaluate whether the switching forms can be considered core elements of a mixed code used as a professional variety among merchant communities, or whether such texts should instead be analysed from a code-switching perspective.

Keywords: historical sociolinguistics, historical code-switching, letters, medieval England

Procedia PDF Downloads 63
1425 Integration GIS–SCADA Power Systems to Enclosure Air Dispersion Model

Authors: Ibrahim Shaker, Amr El Hossany, Moustafa Osman, Mohamed El Raey

Abstract:

This paper explores an integration model between a GIS–SCADA system and an enclosure quantification model to assess the impact of fail-safe events. There is real demand to identify spatial objects and improve control system performance. The employed methodology predicts electro-mechanical operations and the corresponding time to environmental incident variations. Open processing, as an object systems technology, is presented for integrating the enclosure database with minimal memory size and computation time via connectivity drivers such as ODBC/JDBC during the main stages of the GIS–SCADA connection. The function of the Geographic Information System is to manage power distribution in response to developing issues. In other words, GIS–SCADA systems integration requires numerical process objects to enable system model calibration and estimation demands, determination of past events for analysis, and prediction of emergency situations for response training.

Keywords: air dispersion model, environmental management, SCADA systems, GIS system, integration power system

Procedia PDF Downloads 353
1424 Representativity Based Wasserstein Active Regression

Authors: Benjamin Bobbia, Matthias Picard

Abstract:

In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. We present a query methodology for regression that uses the Wasserstein distance to measure the representativity of the labelled dataset compared to the global distribution. In this work, crucial use is made of GroupSort neural networks, which offer a double advantage: the Wasserstein distance can be exactly expressed in terms of such networks, and explicit bounds for their size and depth, together with rates of convergence, can be provided. Moreover, the heterogeneity of the dataset is taken into account by weighting the Wasserstein distance with the approximation error from the previous step of active learning. This approach leads to reduced overfitting and high prediction performance after a few query steps. After detailing the methodology and algorithm, an empirical study is presented to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
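The abstract does not spell out how the Wasserstein distance is computed in practice. In the one-dimensional case it has a closed form via order statistics, which gives a minimal illustration of measuring how representative a labelled sample is of a data pool (function name and data are illustrative, not from the paper):

```python
import numpy as np

def wasserstein_1d(x, y):
    """1-Wasserstein distance between two equal-size 1-D empirical samples.

    For sorted samples of equal size, the optimal transport plan matches
    order statistics, so W1 reduces to the mean absolute difference
    between sorted values.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    if x.shape != y.shape:
        raise ValueError("equal sample sizes assumed in this sketch")
    return float(np.mean(np.abs(x - y)))

# A labelled subset that covers the pool evenly is "more representative"
# (smaller W1) than one concentrated in a corner of the pool.
pool = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
even = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
skew = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 5.0])
print(wasserstein_1d(even, pool))  # 0.0
print(wasserstein_1d(skew, pool) > wasserstein_1d(even, pool))  # True
```

The paper's contribution replaces this sorting trick with GroupSort networks, which generalise the computation beyond one dimension.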

Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression

Procedia PDF Downloads 71
1423 A Simple, Precise and Cost-Effective PTFE Container Design Capable of Working in a Domestic Microwave Oven

Authors: Mehrdad Gholami, Shima Behkami, Sharifuddin B. Md. Zain, Firdaus A. B. Kamaruddin

Abstract:

Since the first application of a microwave oven for sample preparation in 1975, for the purpose of wet ashing of biological samples using a domestic microwave oven, many microwave-assisted dissolution vessels have been developed. Advanced vessels are equipped with a special safety valve that releases excess pressure when the vessel reaches critical conditions under high microwave power. Nevertheless, this release of pressure may cause loss of volatile elements. In this study, Teflon bottles were designed with relatively thicker walls compared with commercial ones, and a silicone-based polymer was used to prepare an O-ring that plays the role of a safety valve. In this design, eight vessels are located in an ABS holder to keep them stable and safe. The advantage of these vessels is that they need only 2 mL of HNO3 and 1 mL of H2O2 to digest different environmental samples, namely sludge, apple leaves, peach leaves, spinach leaves and tomato leaves. To investigate the performance of this design, an ICP-MS instrument was applied for multi-elemental analysis of 20 elements in the SRMs of the above environmental samples, using both this design and a commercial microwave digestion design. Very comparable recoveries were obtained from this simple design and the commercial one. Considering the price of ultrapure chemicals and the amount normally required, about 8-10 mL, these simple vessels, with the procedures that will be discussed in detail, are very cost-effective and very suitable for environmental studies.

Keywords: inductively coupled plasma mass spectrometry (ICP-MS), PTFE vessels, Teflon bombs, microwave digestion, trace element

Procedia PDF Downloads 323
1422 Extended Strain Energy Density Criterion for Fracture Investigation of Orthotropic Materials

Authors: Mahdi Fakoor, Hannaneh Manafi Farid

Abstract:

In order to predict the fracture behavior of cracked orthotropic materials under mixed-mode loading, the well-known minimum strain energy density (SED) criterion is extended. The crack is oriented along the fibers under plane strain conditions. Despite the complexity of solving the nonlinear equations that the SED criterion requires, an SED criterion for anisotropic materials is derived. In the present research, the fracture limit curve of the SED criterion is depicted by a numerical solution, and the direction of crack growth is determined by the derived criterion, MSED. The validated MSED demonstrates improved prediction of the fracture behavior of these materials. In addition, the damage factor, which plays a crucial role in the fracture behavior of quasi-brittle materials, is derived from this criterion, and its dependence on the mechanical properties and the direction of crack growth is demonstrated.

Keywords: mixed-mode fracture, minimum strain energy density criterion, orthotropic materials, fracture limit curve, mode II critical stress intensity factor

Procedia PDF Downloads 155
1421 The Role and Importance of Genome Sequencing in Prediction of Cancer Risk

Authors: M. Sadeghi, H. Pezeshk, R. Tusserkani, A. Sharifi Zarchi, A. Malekpour, M. Foroughmand, S. Goliaei, M. Totonchi, N. Ansari–Pour

Abstract:

The role and relative importance of intrinsic and extrinsic factors in the development of complex diseases such as cancer remains a controversial issue. Determining the amount of variation explained by these factors requires experimental data and statistical models. These models are nevertheless based on the occurrence and accumulation of random mutational events during stem cell division, thus rendering cancer development a stochastic outcome. We demonstrate not only that individual genome sequencing is uninformative in determining cancer risk, but also that assigning a unique genome sequence to any given individual (healthy or affected) is not meaningful. Current whole-genome sequencing approaches are therefore unlikely to realize the promise of personalized medicine. In conclusion, since the genome sequence differs from cell to cell and changes over time, determining the risk of complex diseases based on genome sequence is somewhat unrealistic, and the resulting data are likely to be inherently uninformative.

Keywords: cancer risk, extrinsic factors, genome sequencing, intrinsic factors

Procedia PDF Downloads 259
1420 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications

Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu

Abstract:

Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain: assigning too few tasks wastes resources, while assigning too many causes overload. This is especially pronounced when the applications are of the same type, because of their shared resource preferences. Considering that CPU-intensive applications are among the most common types of application in the cloud, we studied an optimization strategy for CPU-intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU-intensive applications run simultaneously, and put forward a model that can predict the execution time of CPU-intensive applications running simultaneously. Based on the prediction model, we propose a method to select the appropriate number of applications for a machine. Experiments show that the model predicts the execution time of CPU-intensive applications accurately. To improve the execution efficiency of applications, we further propose a priority-based scheduling model for CPU-intensive applications. Extensive experiments verify the validity of the scheduling model.

Keywords: cloud computing, CPU-intensive applications, resource optimization, strategy

Procedia PDF Downloads 269
1419 Prediction of Maximum Inter-Story Drifts of Steel Frames Using Intensity Measures

Authors: Edén Bojórquez, Victor Baca, Alfredo Reyes-Salazar, Jorge González

Abstract:

In this paper, simplified equations to predict the maximum inter-story drift demands of steel framed buildings are proposed in terms of two ground motion intensity measures based on the acceleration spectral shape. For this aim, the maximum inter-story drifts of steel frames with 4, 6, 8 and 10 stories subjected to narrow-band ground motion records are estimated and compared with the spectral acceleration at the first mode of vibration, Sa(T1), which is commonly used in earthquake engineering and seismology, and with a new parameter related to the structural response, known as INp. It is observed that INp is the parameter best correlated with the structural response of steel frames under narrow-band motions. Finally, equations to compute the maximum inter-story drift demands of steel frames as a function of spectral acceleration and INp are proposed.
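INp is not defined in the abstract; following the published definition by Bojórquez and Iervolino, INp = Sa(T1)·Np^α, where Np is the average spectral ordinate over [T1, TN] normalized by Sa(T1), with α = 0.4 and TN ≈ 2T1 as typical choices. A sketch of the computation under those assumptions (array names illustrative):

```python
import numpy as np

def compute_inp(periods, sa, t1, tn_factor=2.0, alpha=0.4):
    """INp = Sa(T1) * Np**alpha, with Np = mean(Sa over [T1, TN]) / Sa(T1).

    periods, sa : response spectrum as parallel arrays
    t1          : fundamental period of the structure
    tn_factor   : TN / T1 (2.0 is a common choice)
    """
    sa_t1 = np.interp(t1, periods, sa)           # spectral ordinate at T1
    mask = (periods >= t1) & (periods <= tn_factor * t1)
    np_shape = np.mean(sa[mask]) / sa_t1         # spectral-shape proxy Np
    return sa_t1 * np_shape ** alpha

# For a flat spectrum Np = 1, so INp collapses to Sa(T1):
periods = np.linspace(0.05, 3.0, 60)
flat = np.ones_like(periods)
print(compute_inp(periods, flat, t1=1.0))  # 1.0
```

The Np factor rewards (or penalizes) spectra that stay high beyond T1, which is why INp tracks drift demands of softening frames better than Sa(T1) alone.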

Keywords: intensity measures, spectral shape, steel frames, peak demands

Procedia PDF Downloads 381
1418 Evaluating the Diagnostic Accuracy of the ctDNA Methylation for Liver Cancer

Authors: Maomao Cao

Abstract:

Objective: To test the performance of ctDNA methylation for the detection of liver cancer. Methods: A total of 1233 individuals were recruited in 2017. 15 male and 15 female samples (including 10 cases of liver cancer) were randomly selected for the present study. cfDNA was extracted with the MagPure Circulating DNA Maxi Kit, and its concentration was measured with the Qubit™ dsDNA HS Assay Kit. A pre-constructed predictive model was used to analyze the methylation data and assign a predictive score to each cfDNA sample. Individuals with a predictive score greater than or equal to 80 were classified as having liver cancer. CT tests were considered the gold standard. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the diagnosis of liver cancer were calculated. Results: 9 patients were diagnosed with liver cancer according to the prediction model (at the threshold of 80 points), with scores of 99.2, 91.9, 96.6, 92.4, 91.3, 92.5, 96.8, 91.1, and 92.2, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of ctDNA methylation for the diagnosis of liver cancer were 0.70, 0.90, 0.78, and 0.86, respectively. Conclusions: ctDNA methylation could be an acceptable diagnostic modality for the detection of liver cancer.
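The four reported metrics are mutually consistent with a 30-sample confusion matrix of TP = 7, FN = 3, FP = 2, TN = 18 (a reconstruction from the reported figures, not stated in the abstract); the standard formulas can be checked directly:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical matrix consistent with the reported 0.70 / 0.90 / 0.78 / 0.86
# (10 cancers among 30 samples, 9 predicted positive):
m = diagnostic_metrics(tp=7, fn=3, fp=2, tn=18)
print({k: round(v, 2) for k, v in m.items()})
# {'sensitivity': 0.7, 'specificity': 0.9, 'ppv': 0.78, 'npv': 0.86}
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the 1-in-3 cancer prevalence of this selected sample and would differ in a screening population.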

Keywords: liver cancer, ctDNA methylation, detection, diagnostic performance

Procedia PDF Downloads 140
1417 Non-Burn Treatment of Health Care Risk Waste

Authors: Jefrey Pilusa, Tumisang Seodigeng

Abstract:

This research discusses a South African case study on the potential of utilizing refuse-derived fuel (RDF) obtained from non-burn treatment of health care risk waste (HCRW) as a feedstock for green energy production. This specific waste stream can be destroyed via non-burn treatment technology involving high-speed mechanical shredding followed by steam or chemical injection to disinfect the final product. The RDF obtained from this process is characterised by low moisture, low ash, and high calorific value, which means it can potentially be used as a high-value solid fuel. Although the raw feed is classified as hazardous, the final RDF has been reported to be non-infectious and can be blended with other combustible wastes, such as rubber and plastic, for waste-to-energy applications. This study evaluated non-burn treatment technology as a possible solution for on-site destruction of HCRW in South African private and public health care centres. Waste generation quantities were estimated based on the number of registered patient beds and theoretical bed occupancy. A time-and-motion study was conducted to evaluate the logistical viability of on-site treatment. Non-burn treatment of HCRW is a promising option for South Africa; successful implementation depends on the initial capital investment, operational cost and environmental permitting of the technology, as well as other influencing factors such as the size of the waste stream, the product off-take price and product demand.

Keywords: autoclave, disposal, fuel, incineration, medical waste

Procedia PDF Downloads 166
1416 Digital Platform of Crops for Smart Agriculture

Authors: Pascal François Faye, Baye Mor Sall, Bineta Dembele, Jeanne Ana Awa Faye

Abstract:

In agriculture, estimating crop yields is key to improving productivity and decision-making processes such as financial market forecasting and addressing food security issues. The main objective of this paper is to provide tools to predict crop yields and improve the accuracy of crop yield forecasts using machine learning (ML) algorithms such as CART, KNN and SVM. We developed a mobile app and a web app that use these algorithms for practical use by farmers. The tests show that our system (collection and deployment architecture, web application and mobile application) is operational and validates empirical knowledge of agro-climatic parameters, in addition to providing proactive decision-making support. In the experiments on agricultural data, the performance of the ML algorithms is compared using cross-validation in order to identify the most effective ones for the agricultural data. The proposed applications demonstrate that the approach is effective in predicting crop yields and provides timely and accurate responses to farmers for decision support.

Keywords: prediction, machine learning, artificial intelligence, digital agriculture

Procedia PDF Downloads 69
1415 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class; a credit card (CC) fraud detection model ranks transactions by their probability of being fraudulent. This approach is often criticized, because firms do not care about fraud probability as such, but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model-building step. The artificial neural network proposed in this study is trained to maximize profit instead of minimizing prediction error. Moreover, some studies have shown that the backpropagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and that swarm-based algorithms are more successful in this respect. In this study, we therefore train our profit-maximizing ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 462
1414 Transfer Learning for Protein Structure Classification at Low Resolution

Authors: Alexander Hudson, Shaogang Gong

Abstract:

Structure determination is key to understanding protein function at a molecular level. Whilst significant advances have been made in predicting structure and function from amino acid sequence, researchers must still rely on expensive, time-consuming analytical methods to visualise detailed protein conformation. In this study, we demonstrate that it is possible to make accurate (≥80%) predictions of protein class and architecture from structures determined at low (>3A) resolution, using a deep convolutional neural network trained on high-resolution (≤3A) structures represented as 2D matrices. Thus, we provide proof of concept for high-speed, low-cost protein structure classification at low resolution, and a basis for extension to prediction of function. We investigate the impact of the input representation on classification performance, showing that side-chain information may not be necessary for fine-grained structure predictions. Finally, we confirm that high resolution, low-resolution and NMR-determined structures inhabit a common feature space, and thus provide a theoretical foundation for boosting with single-image super-resolution.
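The "2D matrices" used as CNN input above are typically residue-residue distance maps; a minimal sketch of building one from Cα coordinates (the coordinates below are toy values, not real protein data):

```python
import numpy as np

def distance_map(coords):
    """Pairwise Euclidean distance matrix from an (N, 3) array of
    C-alpha coordinates -- a rotation-invariant 2D 'image' of the
    fold that a CNN can classify."""
    coords = np.asarray(coords, float)
    diff = coords[:, None, :] - coords[None, :, :]   # broadcast all pairs
    return np.sqrt((diff ** 2).sum(axis=-1))

# Three toy residues on a line, 3.8 angstroms apart (typical Ca-Ca spacing):
ca = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]]
d = distance_map(ca)
print(d.shape)  # (3, 3)
```

Because the map discards absolute orientation, high- and low-resolution structures of the same fold produce similar matrices, which is what makes the transfer-learning setup above plausible.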

Keywords: transfer learning, protein distance maps, protein structure classification, neural networks

Procedia PDF Downloads 122
1413 Estimation of Coefficient of Discharge of Side Trapezoidal Labyrinth Weir Using Group Method of Data Handling Technique

Authors: M. A. Ansari, A. Hussain, A. Uddin

Abstract:

A side weir is a flow diversion structure provided in the side wall of a channel to divert water from the main channel to a branch channel. The trapezoidal labyrinth weir is a special type of weir in which the crest length is increased to pass a higher discharge. Experimental and numerical studies of the coefficient of discharge of a side trapezoidal labyrinth weir in an open channel are presented in this study. The Group Method of Data Handling (GMDH), with a quadratic polynomial transfer function, has been used to predict the coefficient of discharge of the side trapezoidal labyrinth weir. A new regression model for the coefficient of discharge of the labyrinth weir is also developed, along with generalized GMDH-network models for predicting it. The predictions of the GMDH model are more satisfactory than those given by the traditional regression equations.
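GMDH builds its network from "partial descriptions" that are two-input quadratic polynomials fitted by least squares; a sketch of fitting one such neuron (data and function names are illustrative, not the paper's dataset):

```python
import numpy as np

def fit_gmdh_neuron(x1, x2, y):
    """Least-squares fit of the GMDH partial description
    y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict(coeffs, x1, x2):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2

# Recover an exactly quadratic relationship from samples:
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + 0.3 * x1 * x2
c = fit_gmdh_neuron(x1, x2, y)
print(np.allclose(predict(c, x1, x2), y))  # True
```

A full GMDH network fits such neurons for every input pair, keeps the best-performing ones on validation data, and stacks further layers until accuracy stops improving.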

Keywords: discharge coefficient, group method of data handling, open channel, side labyrinth weir

Procedia PDF Downloads 149
1412 Capability of Available Seismic Soil Liquefaction Potential Assessment Models Based on Shear-Wave Velocity Using the Bachu Case History

Authors: Nima Pirhadi, Yong Bo Shao, Xusheng Wa, Jianguo Lu

Abstract:

Several models based on the simplified method introduced by Seed and Idriss (1971) have been developed to assess the liquefaction potential of saturated sandy soils. The procedure involves determining the cyclic resistance of the soil as the cyclic resistance ratio (CRR) and comparing it with the earthquake load expressed as the cyclic stress ratio (CSR). Among the methods for determining CRR, those using shear-wave velocity (Vs) are common because of their low sensitivity to the reduction in penetration resistance caused by fines content (FC). To evaluate the capability of the Vs-based models, new data from the Bachu-Jiashi earthquake case history were collected; the predictions of the models are then compared with the measured results, and the accuracy of the models is discussed via three criteria and graphs. The evaluation demonstrates reasonable accuracy of the models in the Bachu region.
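The earthquake-load side of the comparison, the CSR of the Seed and Idriss (1971) simplified procedure, has the well-known form CSR = 0.65·(amax/g)·(σv/σ′v)·rd; a direct transcription (the input values below are illustrative, not from the Bachu data):

```python
def cyclic_stress_ratio(a_max_over_g, sigma_v, sigma_v_eff, rd):
    """Seed & Idriss (1971) simplified cyclic stress ratio.

    a_max_over_g : peak ground acceleration as a fraction of g
    sigma_v      : total vertical stress at the depth of interest
    sigma_v_eff  : effective vertical stress at the same depth
    rd           : depth-dependent stress reduction coefficient (<= 1)
    """
    return 0.65 * a_max_over_g * (sigma_v / sigma_v_eff) * rd

# Example: amax = 0.3 g, sigma_v = 100 kPa, sigma_v' = 50 kPa, rd = 0.95
print(round(cyclic_stress_ratio(0.3, 100.0, 50.0, 0.95), 4))  # 0.3705
```

Liquefaction is then predicted where the Vs-based CRR falls below this CSR, i.e. where the factor of safety CRR/CSR drops below 1.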

Keywords: seismic liquefaction, Bachu-Jiashi earthquake, shear-wave velocity, liquefaction potential evaluation

Procedia PDF Downloads 223
1411 Instability Index Method and Logistic Regression to Assess Landslide Susceptibility in County Route 89, Taiwan

Authors: Y. H. Wu, Ji-Yuan Lin, Yu-Ming Liou

Abstract:

This study aims to construct a landslide susceptibility map of County Route 89 in Ren-Ai Township, Nantou County, using the Instability Index Method and logistic regression. Seven susceptibility factors, including slope angle, aspect, elevation, distance to fold, distance to river, distance to road and accumulated rainfall, were obtained by GIS, based on the Typhoon Toraji landslide area identified by the Industrial Technology Research Institute in 2001. The landslide percentage of each factor is calculated to acquire the weights, and each grid cell is graded by means of the Instability Index Method. In this study, landslide susceptibility is classified into four grades, high, medium high, medium low and low, in order to determine the advantages and disadvantages of the two models. The precision of the models is verified by the classification error matrix and the SRC curve. The results suggest that the logistic regression model is preferable to the instability index in the assessment of landslide susceptibility, and it is suitable for landslide prediction and precaution in this area in the future.

Keywords: instability index method, logistic regression, landslide susceptibility, SRC curve

Procedia PDF Downloads 277
1410 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to hand-craft classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training on the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. The approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring applications of the PCG signal.
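The conversion of a heartbeat segment into a square intensity matrix is not detailed in the abstract; one simple reading is to truncate the 1-D segment to a perfect-square length and reshape it, which is sketched below under that assumption (the segment is a toy array, not real PCG data):

```python
import numpy as np

def to_square_matrix(segment):
    """Reshape a 1-D PCG sub-segment into the largest possible square
    matrix, truncating the tail samples -- one plausible reading of the
    'square intensity matrix' used as CNN input in the abstract."""
    segment = np.asarray(segment, float)
    n = int(np.sqrt(segment.size))   # side length of the largest square
    return segment[: n * n].reshape(n, n)

beat = np.arange(10.0)               # a toy 10-sample heartbeat segment
m = to_square_matrix(beat)
print(m.shape)  # (3, 3)
```

Once in this form, each beat can be fed to an image-style CNN exactly as a single-channel picture, which is what lets the network learn features without manual engineering.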

Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification

Procedia PDF Downloads 334
1409 Urban Runoff Modeling of Ungauged Volcanic Catchment in Madinah, Western Saudi Arabia

Authors: Fahad Alahmadi, Norhan Abd Rahman, Mohammad Abdulrazzak, Zulikifli Yusop

Abstract:

Runoff prediction for ungauged catchments is still a challenging task, especially in arid regions with unique land cover such as volcanic basalt rocks, where geological weathering and fractures are highly significant. In this study, the Bathan catchment in Madinah, western Saudi Arabia, was selected for analysis. The aim of this paper is to evaluate different rainfall loss methods: the Soil Conservation Service curve number (SCS-CN), Green-Ampt, and initial-constant rate methods. Different direct runoff methods were also evaluated: the Soil Conservation Service dimensionless unit hydrograph (SCS-UH), the Snyder unit hydrograph and the Clark unit hydrograph. The study showed the superiority of the SCS-CN loss method and the Clark unit hydrograph method for ungauged catchments where there are no observed runoff data.
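The SCS-CN loss method reduces to two formulas: potential retention S = 25400/CN − 254 (in mm) and direct runoff Q = (P − Ia)² / (P − Ia + S) for P > Ia, with initial abstraction Ia conventionally 0.2S and Q = 0 otherwise. A direct transcription:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from the SCS curve number method.

    p_mm     : storm rainfall depth in mm
    cn       : curve number (0 < CN <= 100; higher = less infiltration)
    ia_ratio : initial abstraction ratio, conventionally 0.2
    """
    s = 25400.0 / cn - 254.0         # potential maximum retention, mm
    ia = ia_ratio * s                # initial abstraction
    if p_mm <= ia:
        return 0.0                   # all rainfall absorbed before runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(100.0, cn=100))  # 100.0 (impervious: all rain runs off)
print(scs_cn_runoff(5.0, cn=75))     # 0.0 (rain below initial abstraction)
```

For the basalt catchment above, the CN would be chosen to reflect the fractured volcanic surface, which is exactly the calibration difficulty the paper addresses for ungauged conditions.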

Keywords: urban runoff modelling, arid regions, ungauged catchments, volcanic rocks, Madinah, Saudi Arabia

Procedia PDF Downloads 389
1408 On Hyperbolic Gompertz Growth Model (HGGM)

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing a stabilizing parameter, θ, into the classical Gompertz growth equation using the hyperbolic sine function. The integral solution, obtained deterministically, was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was tested using the runs test. The mean function of top height/Dbh over age predicted the observed values more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), and the results for R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
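The exact form of the hyperbolic modification is not given in the abstract, but the classical Gompertz baseline it extends is H(t) = a·exp(−b·exp(−ct)), whose inflection occurs at t = ln(b)/c at height a/e; a quick numerical check of that property (parameter values are illustrative):

```python
import numpy as np

def gompertz(t, a, b, c):
    """Classical Gompertz growth curve: asymptote a, displacement b, rate c.
    (The paper's HGGM adds a hyperbolic-sine stabilizing term in theta,
    whose exact form is not stated in the abstract.)"""
    return a * np.exp(-b * np.exp(-c * t))

a, b, c = 30.0, 5.0, 0.2           # e.g. top height in m over age in years
t_inflect = np.log(b) / c          # age at which growth rate peaks
print(np.isclose(gompertz(t_inflect, a, b, c), a / np.e))  # True
```

The fixed inflection at a/e (about 37% of the asymptote) is the rigidity that modifications like the HGGM aim to relax when fitting real height/diameter data.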

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Gompertz

Procedia PDF Downloads 429
1407 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenging issues faced by seismic retrofitting consultants and employers is making a quick decision on whether to demolish or retrofit a structure, now or in the future. The existing models proposed by researchers each cover only one of the aspects of cost, execution method, and structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a new comprehensive model capable of covering all the factors simultaneously. This study provides an integrated framework that can be utilized to select the most appropriate earthquake risk mitigation solution for buildings. The framework overcomes the limitations of current models by taking into account several factors, such as cost, execution method, risk tolerance and structural failure. In the proposed model, a database of essential information about retrofitting projects is developed from historical retrofit project data. In the next phase, an analysis is conducted to assess the vulnerability of the building under study. Then, the artificial neural network technique is employed to estimate the cost of retrofitting. While calculating the current value of the structure, an economic analysis is conducted to compare demolition versus retrofitting costs. At the next stage, the optimal method is identified. Finally, the implementation of the framework was demonstrated using data collected from 155 previous projects.

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 227
1406 First Principle Calculations of the Structural and Optoelectronic Properties of Cubic Perovskite CsSrF3

Authors: Meriem Harmel, Houari Khachai

Abstract:

We have investigated the structural, electronic and optical properties of the perovskite compound CsSrF3 using the full-potential linearized augmented plane wave (FP-LAPW) method within density functional theory (DFT). In this approach, both the local density approximation (LDA) and the generalized gradient approximation (GGA) were used for the exchange-correlation potential. Ground-state properties such as the lattice parameter, bulk modulus and its pressure derivative were calculated, and the results are compared with experimental and theoretical data. Electronic and bonding properties are discussed on the basis of the calculated band structure, density of states and electron charge density; the fundamental energy gap is direct under ambient conditions. The contributions of the different bands were analyzed from the total and partial density of states curves. The optical properties (namely, the real and imaginary parts of the dielectric function ε(ω), the refractive index n(ω) and the extinction coefficient k(ω)) were calculated for radiation up to 35.0 eV. This is the first quantitative theoretical prediction of the optical properties of the investigated compound, and it still awaits experimental confirmation.

Keywords: DFT, fluoroperovskite, electronic structure, optical properties

Procedia PDF Downloads 450
1405 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of constant interest rate and constant volatility. Moreover, existing alternative models tend to focus on demonstrating pricing techniques rather than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants that comprises stochastic interest rates and stochastic volatility is essential to capture the dynamic relationships between the identified variables and reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model along with stochastic volatility from the Heston model. The development of the model involves deriving the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants, which facilitates their use for comparison purposes and further empirical study.
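The CIR short-rate dynamics dr = κ(θ − r)dt + σ√r dW used in the hybrid model can be sketched numerically with a full-truncation Euler scheme; this is a generic simulation of the CIR process, not the paper's analytical solution, and the parameter values are illustrative:

```python
import numpy as np

def simulate_cir(r0, kappa, theta, sigma, t_end, n_steps, rng=None):
    """Full-truncation Euler path of the CIR process
    dr = kappa*(theta - r)*dt + sigma*sqrt(max(r, 0))*dW."""
    rng = rng or np.random.default_rng(0)
    dt = t_end / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        rp = max(r[i], 0.0)  # truncate to keep the square root real
        r[i + 1] = r[i] + kappa * (theta - rp) * dt \
                   + sigma * np.sqrt(rp * dt) * rng.standard_normal()
    return r

# With sigma = 0 the path mean-reverts deterministically towards theta:
path = simulate_cir(r0=0.05, kappa=2.0, theta=0.03, sigma=0.0,
                    t_end=5.0, n_steps=500)
print(abs(path[-1] - 0.03) < 1e-3)  # True
```

The square-root diffusion keeps rates non-negative (when the Feller condition 2κθ ≥ σ² holds), which is one reason CIR is preferred over a constant rate in warrant pricing.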

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic

Procedia PDF Downloads 121
1404 Conflicts and Similarities among Energy Law, Environmental Law and Economic Aspects

Authors: Bahareh Arghand, Seyed Abbas Poorhashemi, Ramin Roshandel

Abstract:

Nowadays, economic growth and the increasing use of fossil fuels have caused major damage to the environment. Therefore, international law has tried to codify rules and regulations and identify legal principles to decrease the conflict of interest between energy law and environmental law. The open relationship between energy consumption and the law of nature was ignored for years, because the focus of energy law was an affordable price for a reliable supply of energy, while the focus of environmental law was the protection of nature. In fact, the legal and overall policies of energy are based on sic omnes and inter partes obligations of governments, whereas environmental law is based on common interests and erga omnes obligations. The relationship between energy law, environmental law and economic aspects is multilateral, complex and important, and they influence each other. There are similarities in the triangle of energy, environment and economics, and in some cases conflicts of interest, but these conflicts lie in goals rather than in practice, and their legal jurisdiction lies in international law. The development of national and international rules and regulations relevant to energy and the environment has been carried out by separate sectors, whereas the principle of sustainable development, especially in the economic sector, requires environmental considerations. Integrating and decreasing the conflict of interest among energy law, environmental law and economic aspects is therefore an important turning point. The present study examines existing legal principles on energy and the environment and identifies their similarities and conflicts through a descriptive-analytic approach. The purpose of investigating these legal principles is to integrate and decrease the conflict of interest between energy law and environmental law.

Keywords: energy law, environmental law, erga omnes, sustainable development

Procedia PDF Downloads 368
1403 Hidden Markov Model for Financial Limit Order Book and Its Application to Algorithmic Trading Strategy

Authors: Sriram Kashyap Prasad, Ionut Florescu

Abstract:

This study models intraday asset prices as driven by a Markov process. The latent states of a Hidden Markov model are identified using limit order book data (trades and quotes), and the states are estimated continuously throughout the day. A trading strategy is then built that uses the estimated states to generate signals: the current state recalibrates buy/sell levels, and transitions between states trigger a stop-loss when adverse price movements occur. The proposed trading strategy is tested on the Stevens High Frequency Trading (SHIFT) platform. SHIFT is a highly realistic market simulator with functionalities for creating an artificial market by deploying agents, trading strategies, distributing initial wealth, etc. In the implementation, several assets on the NASDAQ exchange are used for testing. Compared to a strategy with static buy/sell levels, this study shows that the number of limit orders that get matched and executed can be increased; executing limit orders earns rebates on NASDAQ. The system can capture jumps in limit order book prices, provide dynamic buy/sell levels and trigger stop-loss signals to improve the PnL (profit and loss) performance of the strategy.
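The core latent-state estimation step in any such strategy can be illustrated with the standard Viterbi algorithm for a discrete-output HMM. This is a generic textbook sketch, not the paper's estimator: the state and observation alphabets, probabilities, and function name below are all illustrative.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-output HMM (log domain).

    obs: sequence of observation indices
    pi:  initial state probabilities, shape (N,)
    A:   transition matrix, A[i, j] = P(state j | state i)
    B:   emission matrix,   B[i, k] = P(obs k | state i)
    """
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]          # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)             # best predecessor pointers
    for t in range(1, T):
        scores = delta[:, None] + logA            # scores[i, j]: come from i, go to j
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(N)] + logB[:, obs[t]]
    path = [int(np.argmax(delta))]                # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

In an order-book setting, the observations would be features of trades and quotes, and the decoded state would drive the recalibration of buy/sell levels.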

Keywords: algorithmic trading, Hidden Markov model, high frequency trading, limit order book learning

Procedia PDF Downloads 144
1402 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever-increasing resolutions in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, which must produce a continuous high-dynamic-range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data-fitting term with a sparse synthesis prior. We also show an efficient, hardware-friendly, real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
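The ML data term alone has a simple closed form worth seeing: a one-bit Poisson pixel fires with probability p = 1 − exp(−λ), so inverting the observed firing rate gives the intensity estimate. The sketch below shows only this prior-free data term, under our own naming; the paper's contribution is to combine it with a sparse synthesis prior, which is not reproduced here.

```python
import numpy as np

def ml_intensity(binary_frames, eps=1e-6):
    """Closed-form ML light-intensity estimate from one-bit Poisson pixels.

    binary_frames: array of shape (n_frames, n_pixels) with 0/1 entries.
    Each jot fires with probability p = 1 - exp(-lambda), so the MLE from
    the per-pixel firing rate p_hat is lambda_hat = -log(1 - p_hat).
    """
    p_hat = binary_frames.mean(axis=0)          # firing rate per pixel
    p_hat = np.clip(p_hat, 0.0, 1.0 - eps)      # avoid log(0) for saturated pixels
    return -np.log1p(-p_hat)                    # -log(1 - p_hat), numerically stable
```

This per-pixel inverse is exact in the limit of many frames but noisy and artifact-prone for few frames, which is precisely where a sparsity prior over the image helps.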

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 188
1401 Using the Technology Acceptance Model to Examine Seniors’ Attitudes toward Facebook

Authors: Chien-Jen Liu, Shu Ching Yang

Abstract:

Using the technology acceptance model (TAM), this study examined the external variable of technological complexity (TC) to acquire a better understanding of the factors that influence the acceptance of computer application courses by learners at Active Aging Universities. After the learners in this study had completed a 27-hour Facebook course, 44 learners responded to a modified TAM survey. Data were collected to examine the path relationships among the variables that influence the acceptance of Facebook-mediated community learning. The partial least squares (PLS) method was used to test the measurement and structural models. The study results demonstrated that attitudes toward Facebook use directly influence behavioral intentions (BI) with respect to Facebook use, evincing a high prediction rate of 58.3%. In addition to the perceived usefulness (PU) and perceived ease of use (PEOU) measures that are proposed in the TAM, other external variables, such as TC, also indirectly influence BI. These four variables can explain 88% of the variance in BI and demonstrate a high level of predictive ability. Finally, limitations of this investigation and implications for further research are discussed.

Keywords: technology acceptance model (TAM), technological complexity, partial least squares (PLS), perceived usefulness

Procedia PDF Downloads 336
1400 The Role of Brand Loyalty in Generating Positive Word of Mouth among Malaysian Hypermarket Customers

Authors: S. R. Nikhashemi, Laily Haj Paim, Ali Khatibi

Abstract:

Structural equation modeling (SEM) was used to test a hypothesized model explaining the effects of Malaysian hypermarket customers' perceptions of brand trust (BT), customer perceived value (CPV) and perceived service quality (PSQ) on building their brand loyalty (CBL) and generating positive word-of-mouth communication (WOM). Self-administered questionnaires were used to collect data from 374 Malaysian hypermarket customers from Mydin, Tesco, Aeon Big and Giant in Kuala Lumpur, a metropolitan city of Malaysia. The data strongly supported the model, showing that BT, CPV and PSQ are prerequisite factors in building customer brand loyalty, with PSQ having the strongest effect on the prediction of customer brand loyalty. The present study also suggests that the effects of the aforementioned factors, via customer brand loyalty, strongly contribute to generating positive word-of-mouth communication.

Keywords: brand trust, perceived value, perceived service quality, brand loyalty, positive word of mouth communication

Procedia PDF Downloads 474
1399 The Strategic Entering Time of a Commerce Platform

Authors: Chia-li Wang

Abstract:

The surge of service and commerce platforms, such as e-commerce and the internet of things, has rapidly changed our lives. How to avoid congestion and get the job done on a platform is now a common problem that many people encounter every day. This requires platform users to make decisions about when to enter the platform. To that end, we investigate the strategic entering time of a simple platform containing random numbers of buyers and sellers of some item. Upon a trade, the buyer and the seller gain respective profits, yet they pay the cost of waiting in the platform. To maximize their expected payoffs from trading, both buyers and sellers can choose their entering times. This creates an interesting and practical framework of a game that is played among buyers, among sellers, and between them. That is, a strategy employed by a player is not only against players of its type but also a response to those of the other type, and, thus, a strategy profile is composed of strategies of buyers and sellers. The players' best response, the Nash equilibrium (NE) strategy profile, is derived by a pair of differential equations, which, in turn, are used to establish its existence and uniqueness. More importantly, its structure sheds valuable insight into how the entering strategy of one side (buyers or sellers) is affected by the entering behavior of the other side. These results provide a base for the study of dynamic pricing for stochastic demand-supply imbalances. Finally, comparisons between the social welfares (the sum of the payoffs incurred by individual participants) obtained by the optimal strategy and by the NE strategy are conducted to show the efficiency loss relative to the socially optimal solution. That should help to manage the platform better.
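The efficiency-loss comparison at the end has a well-known discrete analogue, the price of anarchy: the ratio of the socially optimal welfare to the welfare at the worst Nash equilibrium. The toy below is our own two-player illustration with made-up numbers (entry times, gains, and costs are not from the paper), showing how congestion can pull the equilibrium away from the social optimum.

```python
from itertools import product

def payoff(my_t, other_t, gain=10.0, congestion=4.0, delay=6.0):
    """Illustrative payoff for one of two buyers choosing entry time 0 or 1:
    a fixed trading gain, a congestion cost if both enter together, and a
    delay cost for entering late. All numbers are made up for illustration."""
    return gain - (congestion if my_t == other_t else 0.0) - delay * my_t

def pure_nash_and_optimum(times=(0, 1)):
    """Enumerate pure-strategy Nash equilibria and the social optimum,
    and report the price of anarchy (optimal / worst-equilibrium welfare)."""
    profiles = list(product(times, repeat=2))
    welfare = {p: payoff(p[0], p[1]) + payoff(p[1], p[0]) for p in profiles}
    nash = [p for p in profiles
            if all(payoff(p[0], p[1]) >= payoff(t, p[1]) for t in times)
            and all(payoff(p[1], p[0]) >= payoff(t, p[0]) for t in times)]
    opt = max(welfare.values())
    worst_ne = min(welfare[p] for p in nash)
    return nash, opt, worst_ne, opt / worst_ne
```

With these numbers both buyers rush in at time 0 in equilibrium, while the socially optimal profile staggers their entries, so the price of anarchy exceeds 1.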

Keywords: double-sided queue, non-cooperative game, Nash equilibrium, price of anarchy

Procedia PDF Downloads 76
1398 Phytosynthesized Iron Nanoparticles Elicited Growth and Biosynthesis of Steviol Glycosides in in Vitro Stevia rebaudiana Plant Cultures

Authors: Amir Ali, Laura Yael Mendoza

Abstract:

The application of nanomaterials is becoming one of the most effective elicitation strategies for producing desirable levels of plant biomass with complex medicinal compounds. This study was designed to assess the influence of phytosynthesized iron nanoparticles (FeNPs) on the physical growth characteristics, antioxidant status, and steviol glycoside production of in vitro grown Stevia rebaudiana. The effect of replacing iron sulfate in MS medium (stock solution) with different concentrations of FeNPs (5.6 mg/L, 11.2 mg/L, 16.8 mg/L, 22.4 mg/L) on in vitro stevia plant growth was evaluated against a positive control (MS basal medium) and a negative control (iron-sulfate-devoid medium). Iron deficiency led to a drastic reduction in plant growth. In contrast, applying FeNPs improved plant height, leaf diameter, and leaf morphology in a concentration-dependent manner. Furthermore, the stress caused by FeNPs at 16.8 mg/L produced higher levels of total phenolic content (3.7 ± 0.042 mg/g dry weight, DW), total flavonoid content (1.9 ± 0.022 mg/g DW), and antioxidant activity (78 ± 4.6%). In addition, plants grown in the presence of FeNPs at 22.4 mg/L showed higher enzymatic antioxidant activities (SOD = 3.5 ± 0.042 U/mg; POD = 2.6 ± 0.026 U/mg; CAT = 2.8 ± 0.034 U/mg; APx = 3.6 ± 0.043 U/mg). Exposure to the higher dose of FeNPs (22.4 mg/L) also yielded the maximum amounts of stevioside (4.6 ± 0.058 mg/g DW) and rebaudioside A (4.9 ± 0.068 mg/g DW) compared to the other doses. The current investigation confirms the effectiveness of FeNPs in growth media and offers a suitable prospect for the commercially desirable production of S. rebaudiana biomass with higher sweet glycoside profiles in vitro.

Keywords: cell culture, stevia, iron nanoparticles, antioxidants

Procedia PDF Downloads 87
1397 Classification of Health Risk Factors to Predict the Risk of Falling in Older Adults

Authors: L. Lindsay, S. A. Coleman, D. Kerr, B. J. Taylor, A. Moorhead

Abstract:

Cognitive decline and frailty are apparent in older adults, leading to an increased risk of falling. Currently, health care professionals have to make professional judgments regarding such risks, and hence difficult decisions regarding the future welfare of the ageing population. This study uses health data from The Irish Longitudinal Study on Ageing (TILDA), focusing on adults over the age of 50, to analyse health risk factors and predict the likelihood of falls. The prediction is based on machine learning algorithms in which health risk factors are used as inputs to predict the likelihood of falling. Initial results show that health risk factors such as long-term health issues contribute to the number of falls. The identification of such health risk factors has the potential to inform health and social care professionals, older people and their family members in order to mitigate daily living risks.
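The basic setup (risk factors in, fall probability out) can be sketched with a minimal classifier. The abstract does not name its algorithms, so the logistic regression below is a generic stand-in trained on synthetic data; the feature names and all numbers are our own assumptions, not TILDA data.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression by batch gradient descent: a minimal stand-in
    for the study's classifiers. Rows of X hold health risk factors
    (e.g. counts of long-term conditions), y holds 0/1 fall outcomes."""
    X = np.hstack([np.ones((len(X), 1)), X])       # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted fall probability
        w -= lr * X.T @ (p - y) / len(y)           # gradient of the log-loss
    return w

def predict(w, X):
    """Return 0/1 fall-risk labels at the 0.5 probability threshold."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)
```

On real cohort data the same interface would apply, with model comparison and calibration mattering far more than the particular classifier.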

Keywords: classification, falls, health risk factors, machine learning, older adults

Procedia PDF Downloads 133