Search results for: vector calculus
777 Formation of the Investment Portfolio of Intangible Assets with a Wide Pairwise Comparison Matrix Application
Authors: Gulnara Galeeva
Abstract:
The Analytic Hierarchy Process is widely used in economic and financial studies, including the formation of investment portfolios. In this study, a generalized method of obtaining a vector of priorities is examined for the case in which the expert opinion for separate pairwise comparisons is presented as a set of several equal evaluations on a ratio scale. The author claims that this method solves an important and up-to-date problem in decision-making theory: excluding vagueness and ambiguity from expert opinion. The study describes the authentic wide pairwise comparison matrix and considers its application in the formation of an efficient investment portfolio of intangible assets for a small business enterprise with limited funding. The proposed method has been successfully validated on the practical example of a functioning dental clinic. The result of the study confirms that the wide pairwise comparison matrix can be used as a simple and reliable method for forming an enterprise's investment policy. Moreover, a comparison between the method based on the wide pairwise comparison matrix and the classical analytic hierarchy process was conducted; the results of the comparative analysis confirm the correctness of the wide-matrix method. The application of a wide pairwise comparison matrix also allows wide use of statistical methods of experimental data processing for obtaining the vector of priorities. The new method is accessible to non-specialist users, and its application gives approximately the same accuracy as the classical analytic hierarchy process. Financial directors of small and medium business enterprises thus get an opportunity to solve the problem of their companies' investments without resorting to the services of analytical agencies specializing in such studies.
Keywords: analytic hierarchy process, decision processes, investment portfolio, intangible assets
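As background, the sketch below shows how a vector of priorities is classically obtained in AHP, via the principal eigenvector of a pairwise comparison matrix; the 4x4 matrix is a hypothetical example, and the paper's wide-matrix construction and its statistical processing are not reproduced here.

```python
# Minimal sketch of the classical AHP priority-vector computation.
# The 4x4 matrix below is a hypothetical set of ratio-scale judgements.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                 # normalized vector of priorities

# Saaty consistency check for a 4x4 matrix (random index RI = 0.90)
lambda_max = eigvals.real[k]
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.90
print(priorities, cr)
```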
Procedia PDF Downloads 265

776 New Variational Approach for Contrast Enhancement of Color Image
Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang
Abstract:
In this work, we propose a variational technique for image contrast enhancement which utilizes global and local information around each pixel. The energy functional is defined by a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The first is a local contrast term that improves the contrast of an input image by increasing the grey-level differences between each pixel and its neighbors, thereby utilizing contextual information around each pixel. The second is a global contrast term that enhances image contrast by minimizing the difference between the image's empirical distribution function and a cumulative distribution function, so that the probability distribution of pixel values becomes symmetric about the median. The third is a dispersion term that controls the departure between the new pixel value and the pixel value of the original image, preserving the original image characteristics as much as possible. Next, we derive the Euler-Lagrange equation for the true image that achieves the minimum of the proposed functional, using the fundamental lemma of the calculus of variations, and we consider how this equation can be solved by a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method enhances the contrast of colour images better than existing techniques.
Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure
Procedia PDF Downloads 449

775 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural network, random forest, and support vector machine are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
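A minimal sketch of the kind of data-driven ground-motion model compared above, using a random forest fitted to magnitude-distance-site records; the feature names and the data-generating equation are illustrative assumptions, not the strong-motion database used by the authors.

```python
# Sketch: random-forest ground-motion model on synthetic records (assumed features).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
M = rng.uniform(4.0, 7.5, n)          # moment magnitude
R = rng.uniform(1.0, 200.0, n)        # source-to-site distance (km)
vs30 = rng.uniform(150.0, 900.0, n)   # site stiffness proxy (m/s)
# synthetic log-PGA with magnitude scaling and distance attenuation (illustrative only)
ln_pga = 1.2 * M - 1.6 * np.log(R + 10.0) - 0.4 * np.log(vs30 / 400.0) + rng.normal(0, 0.5, n)

X = np.column_stack([M, R, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```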
Procedia PDF Downloads 106

774 Impact Force Difference on Natural Grass Versus Synthetic Turf Football Fields
Authors: Nathaniel C. Villanueva, Ian K. H. Chun, Alyssa S. Fujiwara, Emily R. Leibovitch, Brennan E. Yamamoto, Loren G. Yamamoto
Abstract:
Introduction: In previous studies of high school sports, over 15% of concussions were attributed to contact with the playing surface. While artificial turf fields are increasing in popularity due to lower maintenance costs, artificial turf has been associated with more ankle and knee injuries, with inconclusive data on concussions. In this study, natural grass and artificial turf football fields were compared in terms of deceleration on fall impact. Methods: Accelerometers were placed on the forehead, apex of the head, and right ear of a Century Body Opponent Bag (BOB) manikin. A Riddell HITS football helmet was secured onto the head of the manikin over the accelerometers. This manikin was dropped onto natural grass (n = 10) and artificial turf (n = 9) high school football fields. The manikin was dropped from a stationary position at a height of 60 cm onto its front, back, and left side. Each of these drops was conducted 10 times at the 40-yard line, 20-yard line, and endzone. The net deceleration on impact was calculated as a net vector from each accelerometer's x, y, and z components at the three different locations on the manikin's head (9 vector measurements per drop). Results: Mean values for the multiple drops were calculated for each accelerometer and drop type for each field. All accelerometers in forward and backward falls, and one accelerometer in side falls, showed significantly greater impact force on synthetic turf compared to the natural grass surfaces. Conclusion: Impact force was higher on synthetic fields for all drop types for at least one of the accelerometer locations. These findings suggest that concussion risk might be higher for athletes playing on artificial turf fields.
Keywords: concussion, football, biomechanics, sports
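The net-deceleration calculation described above amounts to a vector norm per accelerometer; a minimal sketch with hypothetical readings:

```python
# Resultant deceleration per sensor: |a| = sqrt(ax^2 + ay^2 + az^2), values in g (hypothetical).
import numpy as np

def net_magnitude(xyz):
    """Resultant acceleration magnitude from one accelerometer's x, y, z components."""
    return float(np.linalg.norm(xyz))

drop = {"forehead": (42.1, 8.3, 11.0), "apex": (39.5, 6.7, 14.2), "right_ear": (35.8, 9.9, 10.4)}
per_sensor = {loc: net_magnitude(a) for loc, a in drop.items()}
print(per_sensor, sum(per_sensor.values()) / len(per_sensor))
```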
Procedia PDF Downloads 158

773 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high resolution imagery has widely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. This study offers relatively accurate information that is important for forest managers to make informed decisions regarding management and conservation protocols for TTS.
Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
Procedia PDF Downloads 515

772 Transformations between Bivariate Polynomial Bases
Authors: Dimitris Varsamis, Nicholas Karampetakis
Abstract:
It is well known that any interpolating polynomial P(x,y) on the vector space Pn,m of two-variable polynomials, with degree less than n in terms of x and less than m in terms of y, has various representations that depend on the basis of Pn,m that we select, i.e., the monomial, Newton, Lagrange basis, etc. The aim of this paper is twofold: a) to present transformations between the coordinates of the polynomial P(x,y) in the aforementioned bases, and b) to present transformations between these bases.
Keywords: bivariate interpolation polynomial, polynomial basis, transformations, interpolating polynomial
Procedia PDF Downloads 405

771 Automatic Lexicon Generation for Domain Specific Dataset for Mining Public Opinion on China Pakistan Economic Corridor
Authors: Tayyaba Azim, Bibi Amina
Abstract:
The increase in the popularity of opinion mining, together with the rapid growth in the availability of social networks, has created many opportunities for research in the various domains of Sentiment Analysis and Natural Language Processing (NLP) using Artificial Intelligence approaches. The latest trend allows the public to actively use the internet for analyzing an individual's opinion and exploring the effectiveness of published facts. The main theme of this research is to assess public opinion on one of the most crucial and extensively discussed development projects, the China Pakistan Economic Corridor (CPEC), considered a game changer due to its promise of bringing economic prosperity to the region. So far, to the best of our knowledge, the theme of CPEC has not been analyzed for sentiment determination through a machine learning (ML) approach. This research aims to demonstrate the use of ML approaches to automatically analyze public sentiment in Twitter tweets, particularly about CPEC. A Support Vector Machine (SVM) is used for the classification task, classifying tweets into positive, negative, and neutral classes. Word2vec and TF-IDF features are used with the SVM model, and a comparison is performed between the model trained on manually labelled tweets and the one based on the automatically generated lexicon. The contributions of this work are: the development of a sentiment analysis system for public tweets on the CPEC subject, the construction of an automatically generated lexicon of public tweets on CPEC, and the identification of different themes among tweets with sentiments assigned to each theme. It is worth noting that applications of web mining that empower e-democracy by improving political transparency and public participation in decision making via social media have not yet been explored and practised in the Pakistan region with respect to CPEC.
Keywords: machine learning, natural language processing, sentiment analysis, support vector machine, Word2vec
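A minimal sketch of one feature/classifier combination named above (TF-IDF with a linear SVM); the example tweets and labels are hypothetical placeholders, not the CPEC corpus collected in the study.

```python
# Sketch: TF-IDF features + linear SVM for three-class tweet sentiment (placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "CPEC will bring jobs and investment to the region",
    "Power projects under CPEC delayed again, very disappointing",
    "Attended a seminar on CPEC routes today",
]
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["CPEC is a game changer for economic prosperity"]))
```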
Procedia PDF Downloads 148

770 Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial
Authors: Shubham Jaiswal
Abstract:
Fractional calculus plays an important role in the modelling of many physical problems and engineering processes, which are well described by fractional differential equations (FDEs). A reliable and efficient technique to solve such FDEs is therefore needed. In this article, a numerical solution of a class of fractional differential equations, namely space fractional order reaction-advection dispersion equations subject to initial and boundary conditions, is derived. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for both linear and nonlinear FDEs. To show the reliability, validity, and high accuracy of the proposed approach, the numerical results of some illustrative examples are reported and compared with existing analytical results already reported in the literature. The error analysis for each case, exhibited through graphs and tables, confirms the exponential convergence rate of the proposed method.
Keywords: space fractional order linear/nonlinear reaction-advection diffusion equation, shifted Jacobi polynomials, operational matrix, collocation method, Caputo derivative
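Since the keywords refer to the Caputo derivative, the standard definition used in such space-fractional models is recalled below for reference.

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha \le n:
{}^{C}\!D^{\alpha}_{t} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}
\int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,\mathrm{d}\tau .
```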
Procedia PDF Downloads 445

769 Extension of Positive Linear Operator
Authors: Manal Azzidani
Abstract:
This research considers the extension of special maps called positive linear operators. A bounded linear operator defined from a normed space into a Banach space is extended to the closure of its domain, and a linear functional defined on a vector subspace is extended by the Hahn-Banach theorem, which can be generalized to positive linear operators.
Keywords: extension, positive operator, Riesz space, sublinear function
Procedia PDF Downloads 517

768 Differential Diagnosis of Malaria and Dengue Fever on the Basis of Clinical Findings and Laboratory Investigations
Authors: Aman Ullah Khan, Muhammad Younus, Aqil Ijaz, Muti-Ur-Rehman Khan, Sayyed Aun Muhammad, Asif Idrees, Sanan Raza, Amar Nasir
Abstract:
Dengue fever and malaria are important vector-borne diseases of public health significance affecting millions of people around the globe. Dengue fever is caused by the dengue virus, while malaria is caused by Plasmodium protozoa. Generally, the consequences of malaria are less severe compared to dengue fever. This study was designed to differentiate dengue fever and malaria on the basis of clinical and laboratory findings and to compare the changes in both diseases, which have different causative agents transmitted by mosquito vectors. A total of 200 patients with suspected dengue viral infection (120 males, 80 females) were included in this prospective descriptive study. The blood samples of the individuals were first screened for malaria by blood smear examination, and the negative samples were then tested with an anti-dengue IgM strip. The strip-positive cases were further screened by IgM capture ELISA, and their complete blood count, including hemoglobin estimation (Hb), total and differential leukocyte counts (TLC and DLC), erythrocyte sedimentation rate (ESR), and platelet count, was performed. On the basis of the severity of signs and symptoms, dengue virus infected patients were subdivided into dengue fever (DF) and dengue hemorrhagic fever (DHF) groups comprising 70 and 100 confirmed patients, respectively. On the other hand, 30 patients were found to be infected with malaria, while overall 120 patients showed thrombocytopenia. The patients with DHF were found to have more leucopenia, raised hemoglobin levels, and thrombocytopenia < 50,000/µl compared to the patients with DF and malaria. On the basis of the outcomes of the study, it was concluded that patients affected by DF were at a lower risk of haematological disturbance than those suffering from DHF, while the patients infected with malaria were found to have no significant change in their blood components.
Keywords: dengue fever, blood, serum, malaria, ELISA
Procedia PDF Downloads 392

767 A New Nonlinear State-Space Model and Its Application
Authors: Abdullah Eqal Al Mazrooei
Abstract:
In this work, a new nonlinear model will be introduced. The model is in state-space form. The nonlinearity of this model is in the state equation, where the state vector is multiplied by itself. This structure allows our model to generalize many famous models, such as the Lotka-Volterra model and the Lorenz model, which have many applications in real life. We apply our new model to estimate wind speed by using a new nonlinear estimator that is suitable for our model.
Keywords: nonlinear systems, state-space model, Kronecker product, nonlinear estimator
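For illustration, a generic discrete-time form of such a model is sketched below; the quadratic state term is written with the Kronecker product, as the keywords suggest, but the exact matrices and structure used by the author are an assumption here.

```latex
% Generic illustration (not the author's exact formulation): the state equation
% is quadratic in the state through the Kronecker product x_k \otimes x_k.
x_{k+1} = A\,x_k + B\,(x_k \otimes x_k) + w_k, \qquad
y_k = C\,x_k + v_k .
```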
Procedia PDF Downloads 691

766 Trajectory Generation Procedure for Unmanned Aerial Vehicles
Authors: Amor Jnifene, Cedric Cocaud
Abstract:
One of the most constraining problems facing the development of autonomous vehicles is the limitation of current technologies. Guidance and navigation controllers need to be faster and more robust. Communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful, and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is first constructed by linking all waypoints with straight segments based on the order in which they are encountered in the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and their preceding transitional waypoint, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories taking into account the aircraft dynamics. In summary, this work presents a method for on-line 3D trajectory generation (TG) of Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each of the waypoints. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints, and assembles these sets of curves to construct a complete trajectory. The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft such that the result translates into a series of dynamically feasible maneuvers.
Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints
Procedia PDF Downloads 409

765 Improving the Teaching of Mathematics at University Using the Inverted Classroom Model: A Case in Greece
Authors: G. S. Androulakis, G. Deli, M. Kaisari, N. Mihos
Abstract:
Teaching practices at the university level have changed and developed during the last decade. The implementation of the inverted classroom method in secondary education provides a well-formed basis for academic teachers. On the other hand, distance learning is a well-known field in education research and widespread as a method of teaching. Nonetheless, the recent pandemic found many universities all over the world unprepared, which made adaptation to new methods of teaching a necessity. In this paper, we analyze a model of an inverted university classroom in a distance learning context. Thus, the main purpose of our research is to investigate students' difficulties as they transit to a new style of teaching and to explore their learning development during a semester totally different from others. Our teaching experiment took place at the Business Administration department of the University of Patras, in the context of two courses: Calculus, a course aimed at first-year students, and Statistics, a course aimed at second-year students. Second-year students had the opportunity to attend courses in the university classroom, while first-year students started their semester with distance learning. Using a comparative study of these two groups, we explored significant differences in students' learning procedures. Focus group interviews, written tests, and analyses of students' dialogues were used in mixed quantitative and qualitative research. Our analysis reveals students' skills and capabilities, but also a difficulty in following a non-traditional style of teaching. The inverted classroom model, according to our findings, offers benefits in the educational procedure, even in a distance learning environment.
Keywords: distance learning, higher education, inverted classroom, mathematics teaching
Procedia PDF Downloads 132

764 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network
Authors: Abdulaziz Alsadhan, Naveed Khan
Abstract:
In recent years, intrusions on computer networks have become a major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool like an Intrusion Detection System (IDS). Therefore, it is imperative to accurately detect network attacks. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. The existing intrusion detection techniques have the limitation of using the raw data set for classification. The classifier may be confused by redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select features based on their sensitivity. Eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification purposes, the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)
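A minimal sketch of the core of such a pipeline (feature scaling, PCA projection, SVM classification); the feature matrix is a random placeholder standing in for KDD'99-style connection records, and the LDA/LBP and PSO stages are omitted.

```python
# Sketch: scale raw features, project onto principal components, classify with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 41))        # 41 features, as in KDD'99 connection records (placeholder data)
y = rng.integers(0, 2, size=500)      # 0 = normal, 1 = attack (placeholder labels)

pipeline = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
pipeline.fit(X[:400], y[:400])
print("held-out accuracy:", pipeline.score(X[400:], y[400:]))
```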
Procedia PDF Downloads 367

763 Assessing the Macroeconomic Effects of Fiscal Policy Changes in Egypt: A Bayesian Structural Vector Autoregression Approach
Authors: Walaa Diab, Baher Atlam, Nadia El Nimer
Abstract:
Egypt faces many obvious economic challenges, and it is clear that a real economic transformation is needed to address those problems, especially after the recent decisions to float the Egyptian pound and the gradual subsidy cuts aimed at meeting the conditions required to obtain IMF support (a £12bn loan) for its economic reform program. The paper follows the post-2008 revival of interest in fiscal policy and its vital role in speeding up or slowing down economic growth; herein lies its value, as it seeks to analyze the macroeconomic effects of fiscal policy in Egypt by applying a Bayesian SVAR approach. The study uses the Bayesian method because it includes prior information, so that no relevant information is omitted, and it is thus well suited for rational, evidence-based decision-making. Since the study aims to define the effects of fiscal policy shocks in Egypt to help decision-makers determine the proper means to correct the structural problems in the Egyptian economy, it has to cover the 1990s economic reform period; unfortunately, the available data for that period are at an annual frequency. Thus, it uses annual time series to study the period 1991-2005 and quarterly data over the period 2006-2016. It uses a set of six main variables, including government expenditure and net tax revenues as fiscal policy arms affecting real GDP, unemployment, inflation, and the interest rate. The study also tries to assess the 'crowding out' effects by considering the effects of government spending and government revenue shocks on the composition of GDP, namely, on private consumption and private investment. Last but not least, the study provides policy implications regarding the needed role of fiscal policy in Egypt in the upcoming economic reform, building on the results it draws from the previous reform program.
Keywords: fiscal policy, government spending, structural vector autoregression, taxation
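For reference, a structural VAR of lag order p for the vector y_t of the six variables listed above can be written as follows; the Bayesian treatment places priors on the coefficient matrices and on the covariance of the structural shocks.

```latex
% Structural VAR of lag order p:
A_0\, y_t = c + \sum_{i=1}^{p} A_i\, y_{t-i} + \varepsilon_t ,
\qquad \varepsilon_t \sim \mathcal{N}(0,\Sigma) .
```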
Procedia PDF Downloads 278

762 Comparative Study and Parallel Implementation of Stochastic Models for Pricing of European Options Portfolios using Monte Carlo Methods
Authors: Vinayak Bassi, Rajpreet Singh
Abstract:
Over the years, with the emergence of sophisticated computers and algorithms, finance has been quantified using computational prowess. Asset valuation has been one of the key components of quantitative finance; in fact, it has become one of the embryonic steps in determining the risk related to a portfolio, the main goal of quantitative finance. This study draws a comparison between the valuation outputs generated by two stochastic dynamic models, namely the Black-Scholes model and Dupire's bi-dimensional model. Both of these models are formulated for computing the valuation function for a portfolio of European options using Monte Carlo simulation methods. Although Monte Carlo algorithms have a slower convergence rate than calculus-based simulation techniques (like FDM), they work quite effectively over high-dimensional dynamic models. A fidelity gap is analyzed between the static (historical) and stochastic inputs for a sample portfolio of underlying assets. In order to enhance the performance efficiency of the model, the study emphasizes the use of variance reduction methods and customized random number generators to implement parallelization. An attempt has been made to further implement Dupire's model on a GPU to achieve higher computational performance. Furthermore, ideas are discussed around performance enhancement and bottleneck identification related to the implementation of options-pricing models on GPUs.
Keywords: Monte Carlo, stochastic models, computational finance, parallel programming, scientific computing
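A minimal sketch of Monte Carlo valuation of a single European call under Black-Scholes dynamics; the parameters are hypothetical, and the study's Dupire model, variance-reduction, and GPU parallelization details are not reproduced here.

```python
# Sketch: plain Monte Carlo price of a European call under geometric Brownian motion.
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)   # terminal prices
    disc = np.exp(-r * t) * np.maximum(st - k, 0.0)                       # discounted payoffs
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)               # price, standard error

price, stderr = mc_european_call(s0=100.0, k=105.0, r=0.03, sigma=0.2, t=1.0)
print(f"call price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```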
Procedia PDF Downloads 161

761 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling
Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari
Abstract:
A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k = 1, ..., K), namely, tk = XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis
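A short numerical sketch of the first Rd-PLS component, following the construction given in the abstract (q as the leading eigenvector of YY'XX'YY', u = YY'q, t = XX'YY'q); the random matrices stand in for a real case study.

```python
# Sketch: first Rd-PLS component from the eigen-problem described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8)); X -= X.mean(axis=0)   # centered predictor block
Y = rng.normal(size=(30, 3)); Y -= Y.mean(axis=0)   # centered response block

M = Y @ Y.T @ X @ X.T @ Y @ Y.T                     # YY'XX'YY' (symmetric n x n matrix)
eigvals, eigvecs = np.linalg.eigh(M)
q = eigvecs[:, -1]                                  # eigenvector of the largest eigenvalue, ||q|| = 1

u = Y @ Y.T @ q                                     # latent variable associated with Y
t = X @ X.T @ u                                     # latent variable t = XX'YY'q
print("cov(t, u):", float(t @ u) / (len(t) - 1))
```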
Procedia PDF Downloads 147

760 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems has to be introduced by the end of 2016. These provisions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree method, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method
Procedia PDF Downloads 249

759 Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming
Authors: Xiaoyang Zhao, Kaiying Wang
Abstract:
The choice of stocking density in poultry farming is a potential way of determining the welfare level of poultry. However, it is difficult to compare stocking densities in poultry farming because of many variables, such as species, age and weight, feeding regime, house structure, and geographical location in different broiler houses. A method is proposed in this paper to measure the differences between young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking densities. Recordings were made continuously for three-week-old chickens in order to evaluate the variation of sounds emitted by the animals from the beginning. The experimental trial was carried out in an indoor broiler farm; the audio recording procedure lasted for 5 days. Broilers were divided into 5 groups with stocking density treatments of 8/m², 10/m², 12/m² (96 birds/pen), 14/m², and 16/m²; all conditions, including ventilation and feed, were kept the same across groups except for stocking density. The recording and analysis of the chickens' sounds were made noninvasively. Sound recordings were manually analysed and labelled using sound analysis software (GoldWave Digital Audio Editor). After the sound acquisition process, Mel Frequency Cepstrum Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted in an indoor broiler farm, shows that the method can be used to classify the sounds of chickens under different densities economically (only a cheap microphone and recorder are needed); the classification accuracy is 85.7%. This method can predict the optimum stocking density of broilers when complemented by animal welfare indicators, animal productivity indicators, and so on.
Keywords: broiler, stocking density, poultry farming, sound monitoring, Mel Frequency Cepstrum Coefficients (MFCC), Support Vector Machine (SVM)
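A minimal sketch of the MFCC-plus-SVM chain described above; the synthetic tones and density-class labels are placeholders for real broiler recordings.

```python
# Sketch: per-recording MFCC means as features, SVM as classifier (placeholder audio).
import numpy as np
import librosa
from sklearn.svm import SVC

sr = 22050

def fake_recording(freq):
    # 1-second synthetic tone with a little noise, standing in for a real recording
    t = np.linspace(0, 1.0, sr, endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.05 * np.random.default_rng(0).normal(size=sr)

def mfcc_features(signal):
    mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)              # one 13-dimensional feature vector per recording

X = np.array([mfcc_features(fake_recording(f)) for f in (300, 320, 900, 950)])
y = np.array([0, 0, 1, 1])                # hypothetical density-class labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([mfcc_features(fake_recording(930))]))
```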
Procedia PDF Downloads 161

758 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous multicopter-based UAV swarm in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its small size and ease of interfacing with the UAV's onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with a fully autonomous flight feature, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries the CZT sensor; the real-time radiation data are used to calculate a bulk heading vector so that the swarm achieves a source-seeking behavior. Also, a spinning formation is studied for both cases to improve gradient estimation near a radiation source. In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated a collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms under field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial system (UAV), source localization, UAV swarm, tetrahedron formation
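One simple way such a gradient estimate could be formed (an assumption for illustration, not necessarily the authors' exact algorithm) is a local least-squares fit of a linear radiation field to the four vertex measurements:

```python
# Sketch: estimate the local radiation gradient from four vertex readings (hypothetical values).
import numpy as np

positions = np.array([            # UAV positions (m), roughly a tetrahedron
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [1.0, 1.7, 0.0],
    [1.0, 0.6, 1.6],
])
counts = np.array([120.0, 180.0, 150.0, 140.0])   # hypothetical CZT count rates (cps)

# Fit counts ~ a + g . x in the least-squares sense; g approximates the gradient vector.
A = np.hstack([np.ones((4, 1)), positions])
coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
gradient = coef[1:]
heading = gradient / np.linalg.norm(gradient)     # unit heading vector toward increasing intensity
print(heading)
```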
Procedia PDF Downloads 99

757 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machine (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Then, optimization of the vessel parameters in the feasibility domain is performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing design time in finding optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
Procedia PDF Downloads 327

756 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body since it is responsible for the majority of actions, such as vision, memory, etc. However, different diseases, such as Alzheimer's disease and tumors, can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors to detect a possible problem early and therefore prescribe the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and most used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and describe the appropriate and needed treatment. Diverse image processing methods are also proposed for helping doctors in identifying and analyzing the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools, including developed image processing algorithms, are proposed and exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. First, we load the brain MRI. Second, a robust technique for brain tumor extraction is proposed; this technique is based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it is applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it necessitates huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB guide user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
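A minimal sketch of the DWT-PCA-SVM feature pipeline named above, written in Python rather than the MATLAB environment used by the authors; the images and labels are random placeholders standing in for brain MRI data.

```python
# Sketch: 2-D DWT approximation coefficients as features, PCA for reduction, SVM for classification.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 64, 64))    # placeholder "MRI slices"
labels = rng.integers(0, 2, size=40)      # 0 = benign, 1 = malignant (placeholder labels)

def dwt_features(img, level=2):
    coeffs = pywt.wavedec2(img, "db4", level=level)
    return coeffs[0].ravel()              # coarsest-level approximation sub-band as the feature vector

X = np.array([dwt_features(img) for img in images])
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```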
Procedia PDF Downloads 249

755 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency, and OpenStreetMap, as well as a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
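A minimal sketch of the thinning step that produces river centre lines from a binary water mask; the mask below is synthetic, not a Sentinel-1 classification output.

```python
# Sketch: thin a binary water mask to one-pixel-wide centre lines.
import numpy as np
from skimage.morphology import skeletonize

water_mask = np.zeros((60, 60), dtype=bool)
water_mask[10:50, 28:33] = True          # a 5-pixel-wide "river" running north-south
water_mask[28:33, 10:50] = True          # a tributary running east-west

centre_lines = skeletonize(water_mask)   # boolean raster of 1-pixel-wide centre lines
print(int(centre_lines.sum()), "centre-line pixels")
```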
Procedia PDF Downloads 139

754 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models
Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti
Abstract:
In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging these data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
Keywords: Indian Premier League (IPL), cricket, score prediction, machine learning, support vector machines (SVM), XGBoost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics
Procedia PDF Downloads 53

753 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
This research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into training, test, and validation sets in several proportions, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques used to determine evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
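For reference, the FAO-56 Penman-Monteith equation mentioned above computes reference evapotranspiration from the listed weather variables:

```latex
% FAO-56 Penman-Monteith reference evapotranspiration (mm day^{-1}):
ET_0 \;=\; \frac{0.408\,\Delta\,(R_n - G) \;+\; \gamma\,\dfrac{900}{T+273}\,u_2\,(e_s - e_a)}
               {\Delta \;+\; \gamma\,\bigl(1 + 0.34\,u_2\bigr)}
% \Delta: slope of the saturation vapour-pressure curve, R_n: net radiation,
% G: soil heat flux, \gamma: psychrometric constant, T: mean air temperature,
% u_2: wind speed at 2 m, (e_s - e_a): vapour-pressure deficit.
```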
Procedia PDF Downloads 69

752 Construction of Genetic Recombinant Yeasts with High Environmental Tolerance by Accumulation of Trehalose and Detoxication of Aldehyde
Authors: Yun-Chin Chung, Nileema Divate, Gen-Hung Chen, Pei-Ru Huang, Rupesh Divate
Abstract:
Many environmental factors, such as glucose concentration, ethanol, temperature, osmotic pressure, and pH, decrease the rate of ethanol production when yeast is used as a starter. Fermentation starters with high tolerance to various stresses are always in demand for the brewing industry. Trehalose, a storage carbohydrate in the yeast cell wall, plays an important role in tolerance of environmental stress by preserving the integrity of the plasma membrane and stabilizing proteins. Furan aldehydes are toxic to yeast, and the growth rate of yeast is significantly reduced if furan aldehydes are present in the fermentation medium. In yeast, aldehyde reductase is involved in the detoxification of reactive aldehydes, and consequently the growth of yeast is improved. The aims of this study were to construct genetic recombinant Saccharomyces cerevisiae or Pichia pastoris strains with furfural- and HMF-degrading capacity and high ethanol tolerance. Yeast strains were engineered by genetic recombination for overexpression of the trehalose-6-phosphate synthase gene (tps1) and the aldehyde reductase gene (ari1). The tps1 gene was cloned from S. cerevisiae by reverse transcription-polymerase chain reaction (RT-PCR) and then ligated with the pGAPZαC vector. The constructed vector, pGAPZC-tps1, was transformed into recombinant yeast strains already overexpressing ari1. The transformants carrying pGAPZC-tps1-ari1, called STA (S. cerevisiae) and PTA (P. pastoris), overexpress tps1 and ari1. PCR with tps1-specific primers and western blotting against the His-tag confirmed the gene insertion and protein expression of tps1 in the transformants, respectively. The neutral trehalase gene (nth1) of STA was successfully deleted, and the novel strain STAΔN will be used for further study, including the measurement of trehalose concentration and ethanol and furfural tolerance assays.
Keywords: genetic recombinant, yeast, ethanol tolerance, trehalase, aldehyde reductase
Procedia PDF Downloads 422

751 Non Linear Stability of Non Newtonian Thin Liquid Film Flowing down an Incline
Authors: Lamia Bourdache, Amar Djema
Abstract:
The effect of the non-Newtonian property (power-law index n) on traveling waves in a thin layer of power-law fluid flowing over an inclined plane is investigated. For this, a simplified second-order two-equation model (SM) is used; the complete model (CM) is a second-order four-equation model. The model is derived by combining the weighted residual integral method and lubrication theory. The simplification is justified by the fact that, at the onset of the instability, only a very small number of waves is observed. Using a suitable set of test functions, second-order terms are eliminated from the calculation so that the model remains accurate to the second-order approximation. Linear, spatial, and temporal stabilities are studied. For travelling waves, a particular type of wave form that is steady in a moving frame, i.e., that travels at a constant celerity without changing its shape, is studied. This type of solution, characterized by its celerity, exists under suitable conditions, when the widening due to dispersion is balanced exactly by the narrowing effect due to the nonlinearity. Changing the celerity parameter within some range allows exploring the entire spectrum of asymptotic behaviors of these traveling waves. The SM is converted into a three-dimensional dynamical system. The result is that the model exhibits bifurcation scenarios such as heteroclinic, homoclinic, Hopf, and period-doubling bifurcations for different values of the power-law index n. The influence of the non-Newtonian parameter on the nonlinear development of these travelling waves is discussed. It is found that the qualitative character of the bifurcation scenarios is insensitive to the variation of the power-law index.
Keywords: inclined plane, nonlinear stability, non-Newtonian, thin film
Procedia PDF Downloads 283

750 Damping and Stability Evaluation for the Dynamical Hunting Motion of the Bullet Train Wheel Axle Equipped with Cylindrical Wheel Treads
Authors: Barenten Suciu
Abstract:
Classical matrix calculus and the Routh-Hurwitz stability conditions, applied to the snake-like motion of a conical wheel axle, lead to the conclusion that the hunting mode is inherently unstable and that its natural frequency is a complex number. In order to solve such a complicated vibration model analytically, either the inertia terms were neglected, in the model designated as geometrical, or restrictions on the creep coefficients and yawing diameter were imposed, in the so-called dynamical model. Here, an alternative solution is proposed for the hunting mode, based on the observation that the bullet train wheel axle is equipped with cylindrical wheels. It is argued that for such wheel treads the geometrical hunting is irrelevant, since its natural frequency becomes nil, but the dynamical hunting is significant, since its natural frequency reduces to a real number. Moreover, it is illustrated that the geometrical simplification of the wheel causes the stabilization of the hunting mode, since the characteristic quartic equation derived for conical wheels reduces, for cylindrical wheels, to a quadratic equation with positive coefficients. Quite simple analytical expressions for the damping ratio and natural frequency are obtained, without imposing restrictions on the contact model. Graphs of the time-dependent hunting lateral perturbation, including the maximal and inflexion points, are presented for both the critically damped and the over-damped wheel axles.
Keywords: bullet train, creep, cylindrical wheels, damping, dynamical hunting, stability, vibration analysis
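For reference, the generic second-order relations behind such expressions: a characteristic equation with positive coefficients (the cylindrical-wheel case) is always stable, with

```latex
a\,s^{2} + b\,s + c = 0, \qquad a,\,b,\,c > 0
\;\Longrightarrow\;
\omega_n = \sqrt{c/a}, \qquad \zeta = \frac{b}{2\sqrt{a\,c}} .
```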
Procedia PDF Downloads 153

749 Lattice Dynamics of (ND4Br)x(KBr)1-x Mixed Crystals
Authors: Alpana Tiwari, N. K. Gaur
Abstract:
We have incorporated translational-rotational (TR) coupling effects into the framework of the three-body force shell model (TSM) to develop an extended TSM (ETSM). The dynamical matrix of the ETSM has been applied to compute the phonon frequencies of the orientationally disordered mixed crystal (ND4Br)x(KBr)1-x in the (q00), (qq0), and (qqq) symmetry directions for compositions 0.10≤x≤0.50 at T = 300 K. These frequencies are plotted as a function of the wave vector k. An unusual acoustic mode softening is found along the symmetry directions (q00) and (qq0) as a result of translation-rotation coupling.
Keywords: orientational glass, phonons, TR-coupling, lattice dynamics
Procedia PDF Downloads 305

748 Numerical Investigation of Hybrid Ferrofluid Unsteady Flow through Porous Channel
Authors: Wajahat Hussain Khan, M. Zubair Akbar Qureshi
Abstract:
The viscous, two-dimensional, incompressible, and laminar time-dependent heat transfer flow of a ferromagnetic fluid is considered in this paper. The flow takes place in a channel between two porous walls under the influence of a magnetic field located beyond the channel. It is assumed that there are no electric field effects and that the variation in the magnetic field vector that could occur within the ferrofluid is negligible.
Keywords: hybrid ferrofluid, heat transfer, magnetic field, porous channel
Procedia PDF Downloads 177