Search results for: estimation algorithms
Paper Count: 3837

2547 Particle Size Distribution Estimation of a Mixture of Regular and Irregular Sized Particles Using Acoustic Emissions

Authors: Ejay Nsugbe, Andrew Starr, Ian Jennions, Cristobal Ruiz-Carcel

Abstract:

This work investigates the possibility of using Acoustic Emissions (AE) to estimate the Particle Size Distribution (PSD) of a mixture comprising particles of different densities and geometries. The experiments involved a mixture of glass and polyethylene particles ranging from 150-212 microns and 150-250 microns respectively, and an experimental rig that allowed the free fall of a continuous stream of particles onto a target plate on which the AE sensor was placed. Using a time-domain-based multiple-threshold method, it was observed that the PSD of the particles in the mixture could be estimated.

Keywords: acoustic emissions, particle sizing, process monitoring, signal processing

Procedia PDF Downloads 353
2546 Financial Ethics: A Review of 2010 Flash Crash

Authors: Omer Farooq, Salman Ahmed Khan, Sadaf Khalid

Abstract:

Modern-day stock markets have become almost entirely automated. Although this means increased profits for investors, with algorithms acting upon the slightest price change within microseconds, it has also given birth to many ethical dilemmas, in the sense that the slightest mistake can cause people to lose their livelihoods. This paper reviews one such event, which happened on May 06, 2010, in which roughly $1 trillion in market value disappeared from the Dow Jones Industrial Average. We discuss its various aspects and the ethical dilemmas that have arisen from it.

Keywords: flash crash, market crash, stock market, stock market crash

Procedia PDF Downloads 520
2545 Design, Construction and Performance Evaluation of a HPGe Detector Shield

Authors: M. Sharifi, M. Mirzaii, F. Bolourinovin, H. Yousefnia, M. Akbari, K. Yousefi-Mojir

Abstract:

A multilayer passive shield composed of low-activity lead (Pb), copper (Cu), tin (Sn) and iron (Fe) was designed and manufactured for a coaxial HPGe detector placed in a surface laboratory, in order to reduce background radiation and the radiation dose to personnel. The performance of the shield was evaluated, and efficiency curves of the detector were plotted using various standard sources at different distances. Monte Carlo simulations and a set of TLD chips were used for dose estimation at two distances, 20 and 40 cm. The results show that the shield reduced the background spectrum and the personnel dose by more than 95%.

Keywords: HPGe shield, background count, personnel dose, efficiency curve

Procedia PDF Downloads 456
2544 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. The interpretation of X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misidentification. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach must employ appropriate radiographic image processing techniques and object detection algorithms that effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from 12 different fracture patterns: avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intra-articular, longitudinal, oblique, pathological, and spiral. The system also generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations, while the classification ensemble model built on ResNet18 and VGG16 achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns that are instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to transform fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
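
A minimal sketch of how a two-network classification ensemble of the kind described here might combine its members, assuming torchvision backbones with their heads resized to the 12 fracture classes and simple softmax averaging; the class count, input size and weighting are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 12  # avulsion, comminuted, ..., spiral (per the abstract)

# Backbones with their final layers resized to the fracture classes (untrained here;
# in practice fine-tuned weights would be loaded).
resnet = models.resnet18(weights=None)
resnet.fc = torch.nn.Linear(resnet.fc.in_features, NUM_CLASSES)

vgg = models.vgg16(weights=None)
vgg.classifier[6] = torch.nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

def ensemble_predict(x: torch.Tensor) -> tuple[int, float]:
    """Average the two softmax outputs and return (class index, confidence score)."""
    resnet.eval(); vgg.eval()
    with torch.no_grad():
        probs = (F.softmax(resnet(x), dim=1) + F.softmax(vgg(x), dim=1)) / 2
    conf, idx = probs.max(dim=1)
    return idx.item(), conf.item()

# Example: one preprocessed 224x224 RGB radiograph crop.
dummy = torch.randn(1, 3, 224, 224)
print(ensemble_predict(dummy))
```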

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 120
2543 Performance Evaluation of Packet Scheduling with Channel Condition Awareness Based on WiMAX Networks

Authors: Elmabruk Laias, Abdalla M. Hanashi, Mohammed Alnas

Abstract:

Scheduling in Worldwide Interoperability for Microwave Access (WiMAX) became one of the most challenging issues, since it is responsible for distributing the available network resources among all users; this has led to the demand for constructing and designing highly efficient scheduling algorithms in order to improve network utilization, increase network throughput, and minimize end-to-end delay. In this study, the proposed algorithm focuses on an efficient mechanism to serve non-real-time traffic in congested networks by considering channel status.

Keywords: WiMAX, Quality of Service (QoS), OPNET, Diff-Serv (DS)

Procedia PDF Downloads 286
2542 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and the experimental results is obtained.

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 363
2541 Influence of the Test Environment on the Dynamic Response of a Composite Beam

Authors: B. Moueddene, B. Labbaci, L. Missoum, R. Abdeldjebar

Abstract:

Quality estimation of the experimental simulation of boundary conditions is one of the problems encountered while performing an experimental program. In fact, it is not easy to estimate directly the effective influence of these simulations on the results of an experimental investigation. The aim of this article is to evaluate the effect of boundary-condition uncertainties on the structural response, using changes in the dynamic characteristics. The experimental models used and the correlation by the Frequency Domain Assurance Criterion (FDAC) allowed an interpretation of the change in the dynamic characteristics. The application of this strategy to stratified composite structures (glass/polyester) has given satisfactory results.

Keywords: vibration, composite, damage, correlation

Procedia PDF Downloads 366
2540 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms

Authors: Naina Mahajan, Bikram Pal Kaur

Abstract:

The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but through suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results, with high accuracy, low computation time, and a low error rate.
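
For readers without WEKA, a minimal sketch of an entropy-based decision tree (information gain is the splitting idea behind ID3 and C4.5) on a tabular accident dataset; the file name and column names are illustrative assumptions, and scikit-learn's CART with the entropy criterion only approximates C4.5/ID3.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical highway-accident table: categorical factors plus a severity label.
df = pd.read_csv("nh_accidents.csv")  # assumed file layout
X = pd.get_dummies(df[["road_surface", "light_condition", "weather", "speed_limit"]])
y = df["severity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# criterion="entropy" uses information gain, the splitting rule of ID3/C4.5.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```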

Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool

Procedia PDF Downloads 338
2539 Assessment of the Quality of Telecommunication Services by a Fuzzy Inference System

Authors: Oktay Nusratov, Ramin Rzaev, Aydin Goyushov

Abstract:

An approach based on the fuzzy inference method is proposed for forming a modular intelligent system for assessing the quality of communication services. The basic fuzzy estimation model developed under this approach takes into account the recommendations of the International Telecommunication Union concerning the operation of packet-switching networks based on the IP protocol. To implement the main features and functions of the fuzzy control system for the quality of telecommunication services, a multilayer feedforward neural network is used.

Keywords: quality of communication, IP-telephony, fuzzy set, fuzzy implication, neural network

Procedia PDF Downloads 469
2538 The Beta-Fisher Snedecor Distribution with Applications to Cancer Remission Data

Authors: K. A. Adepoju, O. I. Shittu, A. U. Chukwu

Abstract:

In this paper, a new four-parameter generalized version of the Fisher-Snedecor distribution, called the Beta-F distribution, is introduced. A comprehensive account of the statistical properties of the new distribution is given. Formal expressions for the cumulative distribution function, moments, moment generating function and maximum likelihood estimators, as well as its Fisher information, were obtained. The flexibility of this distribution, as well as its robustness, was demonstrated using cancer remission time data. The new distribution can be used in many applications where the assumptions underlying the use of other lifetime distributions are violated.
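
The abstract does not give the construction explicitly; a plausible sketch, assuming the standard beta-generated family applied to an F baseline (the added shape parameters a, b and the notation are assumptions), is:

```latex
% Beta-G construction applied to a Fisher-Snedecor baseline with CDF F(x) (assumed form);
% a, b > 0 are the two extra shape parameters and I is the regularized incomplete beta function.
G(x) \;=\; \frac{1}{B(a,b)} \int_{0}^{F(x)} t^{a-1}(1-t)^{b-1}\,dt \;=\; I_{F(x)}(a,b), \qquad x > 0,
\qquad
g(x) \;=\; \frac{f(x)\,F(x)^{a-1}\bigl(1-F(x)\bigr)^{b-1}}{B(a,b)} .
```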

Keywords: fisher-snedecor distribution, beta-f distribution, outlier, maximum likelihood method

Procedia PDF Downloads 347
2537 Performance Comparison of Cooperative Banks in the EU, USA and Canada

Authors: Matěj Kuc

Abstract:

This paper compares different profitability measures of cooperative banks from two developed regions: the European Union, and the United States of America together with Canada. We created a balanced dataset of more than 200 cooperative banks covering the 2011-2016 period. We ran a series of tests and a random effects estimation on the panel data. We found that American and Canadian cooperatives are more profitable in terms of return on assets (ROA) and return on equity (ROE). There is no significant difference in net interest margin (NIM). Our results show that the North American cooperative banks have adapted better to the current market environment.
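
A minimal sketch of a random-intercept panel regression of the kind described, using a bank-level random effect as a stand-in for the random effects estimator; the file name, column names and regressors are illustrative assumptions, not the paper's specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-year panel, one row per bank and year (assumed columns:
# bank_id, year, roa, size, equity_ratio, region).
panel = pd.read_csv("coop_banks_2011_2016.csv")

# Random intercept per bank approximates the random-effects panel estimator.
model = smf.mixedlm("roa ~ size + equity_ratio + C(region)", data=panel, groups=panel["bank_id"])
result = model.fit()
print(result.summary())
```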

Keywords: cooperative banking, panel data, profitability measures, random effects

Procedia PDF Downloads 113
2536 An Extended Inverse Pareto Distribution, with Applications

Authors: Abdel Hadi Ebraheim

Abstract:

This paper introduces a new extension of the Inverse Pareto distribution in the framework of the Marshall-Olkin (1997) family of distributions. This model is capable of modeling various shapes of aging and failure data. The statistical properties of the new model are discussed. Several methods are used to estimate the parameters involved, and explicit expressions are derived for different types of moments of value in reliability analysis. In addition, the order statistics of samples from the proposed model have been studied. Finally, the usefulness of the new model for modeling reliability data is illustrated using two real data sets together with a simulation study.
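
The abstract does not spell out the construction; a sketch of the standard Marshall-Olkin (1997) transformation, applied here to an inverse Pareto baseline with survival function F-bar (the tilt parameter α and notation are assumptions), is:

```latex
% Marshall-Olkin family with tilt parameter \alpha > 0 applied to a baseline
% with density f and survival function \bar{F}:
\bar{G}(x) \;=\; \frac{\alpha\,\bar{F}(x)}{1-(1-\alpha)\,\bar{F}(x)}, \qquad
g(x) \;=\; \frac{\alpha\, f(x)}{\bigl[1-(1-\alpha)\,\bar{F}(x)\bigr]^{2}} .
```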

Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation

Procedia PDF Downloads 82
2535 Key Transfer Protocol Based on Non-invertible Numbers

Authors: Luis A. Lizama-Perez, Manuel J. Linares, Mauricio Lopez

Abstract:

We introduce a method to perform remote user authentication based on what we call non-invertible cryptography. It exploits the fact that the multiplication of an invertible integer and a non-invertible integer in the ring Zn produces a non-invertible integer, making it infeasible to compute the factorization. The protocol requires the smallest key size when compared with the main public-key algorithms such as Diffie-Hellman, Rivest-Shamir-Adleman or Elliptic Curve Cryptography. Since we found that the only opportunity for the eavesdropper is to mount an exhaustive search on the keys, the protocol appears to be post-quantum.
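
A small sketch illustrating the ring fact the abstract relies on (not the authors' protocol itself): an element of Zn is invertible exactly when it is coprime to n, and multiplying an invertible element by a non-invertible one yields a non-invertible element. The modulus and elements below are toy values.

```python
from math import gcd

def is_invertible(a: int, n: int) -> bool:
    """a is a unit of the ring Z_n iff gcd(a, n) == 1."""
    return gcd(a, n) == 1

# Illustrative toy modulus (not the protocol's parameters): n = 61 * 53.
n = 3233
a = 17    # invertible: shares no factor with n
b = 61    # non-invertible: divides n

print(is_invertible(a, n))            # True
print(is_invertible(b, n))            # False
print(is_invertible((a * b) % n, n))  # False: a unit times a non-unit is never a unit
```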

Keywords: invertible, non-invertible, ring, key transfer

Procedia PDF Downloads 179
2534 Eresa, Hospital General Universitario de Elche

Authors: Ashish Kumar Singh, Mehak Gulati, Neelam Verma

Abstract:

Arginine mainly acts as a substrate for the enzyme nitric oxide synthase (NOS) in the production of nitric oxide, a strong vasodilator. The current study demonstrates a novel amperometric approach for the estimation of arginine using nitric oxide synthase. The enzyme was co-immobilized in a carbon paste electrode with NADP+, FAD and BH4 as cofactors. The detection principle of the biosensor is that the enzyme NOS catalyzes the conversion of arginine into nitric oxide. The developed biosensor was able to detect arginine down to 10⁻⁹ M. The oxidation peak of NO was observed at 0.65 V. The developed arginine biosensor was used to monitor the arginine content in fruit juices.

Keywords: arginine, biosensor, carbon paste electrode, nitric oxide

Procedia PDF Downloads 425
2533 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with 'M' and 'W'. The spatial variability of ET0 is generally less over an agricultural farm of 1-5 ha than that of ET, which depends on both surface and atmospheric conditions, while the former depends only on atmospheric conditions. Solutions from surface energy balance (SEB) and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (= ET/ET0) have been estimated over two contrasting western India agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on-board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from a short-range NWP weather forecast. Farm-scale ET estimates at 65 m spatial resolution were found to show a low RMSE of 16.6% to 17.5% with R2 > 0.8 from 18 datasets, as compared to reported errors (25-30%) from coarser-scale ET at 1 to 8 km spatial resolution when compared to in situ measurements from eddy covariance systems. The MAI was found to show lower (<0.25) and higher (>0.5) magnitudes in the contrasting agricultural farms. The study showed the potential need for high-resolution, high-repeat spaceborne multi-band TIR payloads along with optical payloads in estimating farm-scale ET and MAI for estimating consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on-board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG space-borne missions to address sustainable irrigation water management at the farm scale and improve crop water productivity. These will provide precise and fundamental variables of the surface energy balance such as LST (Land Surface Temperature), surface emissivity, albedo and NDVI. A synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize the potential benefits.
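
A one-line summary of the index used above, with the quantities and thresholds taken from the abstract:

```latex
% Moisture adequacy index as defined in the abstract:
\mathrm{MAI} \;=\; \frac{ET}{ET_{0}},
% ET: actual crop evapotranspiration (65 m STIC/ECOSTRESS estimate),
% ET_{0}: reference evapotranspiration (4 km INSAT 3D product);
% the two study farms showed MAI < 0.25 (moisture-limited) and MAI > 0.5 respectively.
```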

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 70
2532 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. Part-of-speech tagging is one of the most fundamental and basic tasks in almost all of natural language processing. In natural language processing, the problem of providing large amounts of manually annotated data is a knowledge-acquisition bottleneck. Since Amharic is an under-resourced language, the availability of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper describes the development of an unsupervised part-of-speech tagger using K-means clustering for the Amharic language, since a large amount of data is produced in day-to-day activities. In the development of the tagger, the following procedures are followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering. Among different clustering algorithms, K-means was selected and implemented in this study; it brings groups of similar words together. The fourth phase is mapping, which involves looking at each cluster carefully and assigning the most common tag to the group. This study identifies two features that are capable of distinguishing one part of speech from another, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features that are not included in this study, such as semantically related information. Finally, based on the experimental results, the system achieves a maximum accuracy of 81%.
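
A minimal sketch of the cluster-then-map idea described here, assuming a tokenized Amharic corpus and only two toy features (a crude morphological cue and normalized sentence position); the feature set, number of clusters and mapping step are illustrative, not the study's exact configuration.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Toy corpus: list of tokenized sentences (real Amharic text would be read from files).
sentences = [["ልጁ", "መጽሐፍ", "አነበበ"], ["እሷ", "ቤት", "ሄደች"]]

tokens, features = [], []
for sent in sentences:
    for pos, word in enumerate(sent):
        tokens.append(word)
        features.append([
            len(word),                    # crude morphological cue: word length
            pos / max(len(sent) - 1, 1),  # positional information within the sentence
        ])

# Cluster words by their features; k would be tuned toward the tagset size.
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(np.array(features))

# Mapping phase: inspect each cluster and assign it the most common (manually judged) tag.
gold = {"አነበበ": "VERB", "ሄደች": "VERB", "ልጁ": "NOUN", "መጽሐፍ": "NOUN", "ቤት": "NOUN", "እሷ": "PRON"}
for c in range(k):
    members = [tokens[i] for i in range(len(tokens)) if labels[i] == c]
    tag = Counter(gold.get(w, "NOUN") for w in members).most_common(1)[0][0]
    print(f"cluster {c}: tag={tag}, words={members}")
```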

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 451
2531 Detection and Classification of Strabismus Using a Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
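
A small sketch of the stage-2 misalignment measurement described above, assuming pupil centers and inner/outer eye-corner landmarks are already available as pixel coordinates; the landmark choice and variable names are illustrative.

```python
import numpy as np

def misalignment(pupil: np.ndarray, inner_corner: np.ndarray, outer_corner: np.ndarray):
    """Return the pupil offset from the eye-corner midpoint and its angle to the horizontal axis."""
    eye_center = (inner_corner + outer_corner) / 2.0
    offset = pupil - eye_center
    distance = float(np.linalg.norm(offset))
    angle_deg = float(np.degrees(np.arctan2(offset[1], offset[0])))  # sign encodes direction of deviation
    return distance, angle_deg

# Illustrative pixel coordinates for one eye region.
print(misalignment(np.array([52.0, 30.0]), np.array([20.0, 32.0]), np.array([80.0, 31.0])))
```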

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 93
2530 Modified Estimating Equations in the Derivation of the Causal Effect on Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies a question of causality. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semi-parametric transformation models. The proposed model intuitively obtains consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance characteristics of the estimators are demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time is estimated after adjusting for biases raised due to the high correlation of the left-truncation and the possibly time-varying covariates. The bias in the covariates was corrected by estimating a density function for the left-truncation. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. To summarize, the ratio of the cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.
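
The closing sentence suggests the causal contrast being targeted; a sketch in assumed notation, with Λ₁ and Λ₀ the cumulative hazard functions of the treated and untreated groups, is:

```latex
% Causal contrast implied by the abstract's closing sentence (notation assumed):
\theta(t) \;=\; \frac{\Lambda_{1}(t)}{\Lambda_{0}(t)}
\;=\; \frac{\int_{0}^{t} \lambda_{1}(s)\,ds}{\int_{0}^{t} \lambda_{0}(s)\,ds},
% where \lambda_{1}, \lambda_{0} are the hazard rates under treatment and control.
```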

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 175
2529 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia

Authors: Rohan Bhasin

Abstract:

Agrammatism in non-fluent aphasia cases can be defined as a language disorder wherein a patient can only use content words (nouns, verbs and adjectives) for communication; their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some order, with conversation analysis used to analyse pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles) around content words (nouns, verbs and adjectives) using a combination of natural language processing and deep learning algorithms. The applications of this approach can be used to assist communication. The approach the paper investigates is LSTMs or Seq2Seq: a sequence-to-sequence (seq2seq) or LSTM model takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example containing a pair such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing the functional words to obtain just the content words. However, this approach requires a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the inputs of both text models are preserved, i.e., they do not change after the functional grammar is slotted in. This is a potential limitation in cases of severe agrammatism where such an order might not be inherently correct. The applications of this approach can be used to assist communication for mild agrammatism in non-fluent aphasia cases. Thus, by generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. Our project thereby translates the use case of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
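
A minimal sketch of the training-pair generation step described above, assuming a plain-text corpus and a small illustrative stop list of functional words; the seq2seq model itself is omitted and a real system would derive the function-word list from POS tags.

```python
import re

# Illustrative set of functional words to strip; assumed, not the paper's exact list.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "in", "on", "to", "is", "are", "was"}

def make_pair(sentence: str) -> tuple[str, str]:
    """Return (content-word sequence, complete sentence) for one training example."""
    tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
    content = " ".join(t for t in tokens if t not in FUNCTION_WORDS)
    return content, sentence

corpus = ["The boy is reading a book in the garden.",
          "She walked to the market and bought fruit."]
pairs = [make_pair(s) for s in corpus]
for src, tgt in pairs:
    print(f"input:  {src}\ntarget: {tgt}\n")
```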

Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM

Procedia PDF Downloads 164
2528 Price Prediction Line, Investment Signals and Limit Conditions Applied to the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool to make a profit by speculation in financial markets. A significant number of traders, private and institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. The trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, limit conditions to build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
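
The abstract does not disclose the Price Prediction Line formula; a generic sketch of the idea it describes (fit a trend line to recent closes, then trigger entries and exits when price crosses limit bands around it) could look like the following, with the window length and band width as illustrative parameters rather than the author's values.

```python
import numpy as np

def trend_line_signal(closes: np.ndarray, window: int = 50, band: float = 0.02) -> str:
    """Fit a least-squares trend line on the last `window` closes and compare price to limit bands."""
    recent = closes[-window:]
    t = np.arange(len(recent))
    slope, intercept = np.polyfit(t, recent, 1)        # the fitted trend ("prediction line")
    predicted = slope * (len(recent) - 1) + intercept  # trend value at the latest bar
    last = recent[-1]
    if last < predicted * (1 - band):
        return "BUY"    # price far below the trend line: long-entry candidate
    if last > predicted * (1 + band):
        return "SELL"   # price far above the trend line: exit / short candidate
    return "HOLD"       # within the limit conditions: no signal

prices = np.cumsum(np.random.default_rng(0).normal(0.1, 1.0, 300)) + 100.0
print(trend_line_signal(prices))
```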

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 130
2527 Adaptive Power Control of the City Bus Integrated Photovoltaic System

Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker

Abstract:

This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have been a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on its buses' roofs. The solar panels turn solar energy into electric energy and are used to power the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological profits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a main role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid changes of irradiation due to the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms is not clarified for vehicle applications, which cause rapid changes of environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle. Some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its outer environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was deliberately kept simple; since such a simple controller, armed only with an ability to learn, achieves this, a more complex algorithm can only improve the result. The presented adaptive control system for the PV system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms over a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving the superiority of the proposed method. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
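
The abstract names a scanning technique for finding the maximum power point; a simplified simulated sketch of that idea is shown below, where the panel power model, duty-cycle range and step count are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def pv_power(duty: float, irradiance: float) -> float:
    """Toy stand-in for the measured panel power at a given converter duty cycle (assumed model)."""
    optimum = 0.35 + 0.3 * irradiance  # where the maximum power point sits for this irradiance
    return max(0.0, 180.0 * irradiance * (1.0 - ((duty - optimum) / 0.4) ** 2))

def scan_for_mpp(irradiance: float, steps: int = 50) -> tuple[float, float]:
    """Scanning technique: sweep the duty cycle, measure power, keep the best operating point."""
    duties = np.linspace(0.05, 0.95, steps)
    powers = np.array([pv_power(d, irradiance) for d in duties])
    best = int(np.argmax(powers))
    return float(duties[best]), float(powers[best])

# Re-scan whenever irradiance changes rapidly (e.g., the bus passes under shade).
for g in (1.0, 0.4):
    duty, power = scan_for_mpp(g)
    print(f"irradiance={g:.1f}: duty={duty:.2f}, power={power:.1f} W")
```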

Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter

Procedia PDF Downloads 211
2526 Fine Characterization of Glucose Modified Human Serum Albumin by Different Biophysical and Biochemical Techniques over a Range of Glucose Concentrations

Authors: Neelofar, Khursheed Alam, Jamal Ahmad

Abstract:

Protein modification in diabetes mellitus may lead to early glycation products (EGPs), or Amadori products, as well as advanced glycation end products (AGEs). Early glycation involves the reaction of glucose with N-terminal and lysyl side-chain amino groups to form a Schiff's base, which undergoes rearrangements to form the more stable early glycation product known as the Amadori product. After the Amadori stage, the reactions become more complicated, leading to the formation of advanced glycation end products (AGEs) that interact with various AGE receptors, thereby playing an important role in the long-term complications of diabetes. The Maillard (nonenzymatic glycation) reaction accelerates in diabetes due to hyperglycemia and alters serum proteins' structure and normal functions, leading to micro- and macrovascular complications in diabetic patients. In this study, Human Serum Albumin (HSA) at a constant concentration was incubated with different concentrations of glucose at 37 °C for a week. By the 4th day, the Amadori product had formed, which was confirmed by the colorimetric NBT and TBA assays, both of which authenticate the early glycation product. Conformational changes in the native protein as well as in all samples of Amadori albumin prepared with different concentrations of glucose were investigated by various biophysical and biochemical techniques. The main biophysical techniques used were hyperchromicity, quenching of fluorescence intensity, FTIR, CD and SDS-PAGE. Further conformational changes were observed by biochemical assays, mainly HMF formation, fructosamine, reduction of fructosamine with NaBH4, carbonyl content estimation, lysine and arginine residue estimation, ANS binding and thiol group estimation. This study finds structural and biochemical changes in Amadori-modified HSA over the normal to hyperchronic range of glucose with respect to native HSA. When the glucose concentration was increased from the normal to the chronic range, the biochemical and structural changes also increased. The highest alteration in secondary and tertiary structure and conformation of the glycated HSA was observed at the hyperchronic concentration (75 mM) of glucose. Although it has been found that Amadori-modified proteins, like AGEs, are also involved in the secondary complications of diabetes, very few studies have analyzed the conformational changes in Amadori-modified proteins due to early glycation. Most studies have examined the structural changes in Amadori protein at a particular glucose concentration, but no study was found that compares the biophysical and biochemical changes in HSA due to early glycation over a range of glucose concentrations at a constant incubation time. This study therefore provides information about the biochemical and biophysical changes occurring in Amadori-modified albumin over the normal to chronic range of glucose in diabetes. Many treatments currently in use, i.e. glycaemic control, insulin treatment and other chemical therapies, can control many aspects of diabetes. However, even with intensive use of current antidiabetic agents, more than 50% of type 2 diabetic patients suffer poor glycaemic control and 18% develop serious complications within six years of diagnosis. Experimental evidence related to diabetes suggests that preventing the nonenzymatic glycation of relevant proteins, or blocking their biological effects, might beneficially influence the evolution of vascular complications in diabetic patients; moreover, quantitation of the Amadori adduct of HSA by antibodies against HSA-EGPs could be used as a marker for the early detection of the initiation/progression of the secondary complications of diabetes. This research work may therefore be helpful for the same.

Keywords: diabetes mellitus, glycation, albumin, amadori, biophysical and biochemical techniques

Procedia PDF Downloads 272
2525 Assessment of Petrophysical Parameters Using Well Log and Core Data

Authors: Khulud M. Rahuma, Ibrahim B. Younis

Abstract:

The assessment of petrophysical parameters is essential for the reservoir engineer. Three techniques can be used to predict reservoir properties: well logging, well testing, and core analysis. The cementation factor and saturation exponent are required for the calculations, and their values have a great effect on the water saturation estimate. In this study, a sensitivity analysis was performed to investigate the influence of cementation factor and saturation exponent variation using log and core-analysis data. The resulting water saturation measurements differed by a maximum of around fifteen percent.
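
The abstract does not name the saturation model, but the cementation factor m and saturation exponent n it varies are the exponents of Archie's equation, which in its usual form reads:

```latex
% Archie's water-saturation relation (standard form; a is the tortuosity factor,
% \phi porosity, R_w formation-water resistivity, R_t true formation resistivity):
S_w \;=\; \left( \frac{a}{\phi^{m}} \cdot \frac{R_w}{R_t} \right)^{1/n},
\qquad F \;=\; \frac{a}{\phi^{m}} \ \ \text{(formation factor)} .
```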

Keywords: porosity, cementation factor, saturation exponent, formation factor, water saturation

Procedia PDF Downloads 693
2524 New Technique of Estimation of Charge Carrier Density of Nanomaterials from Thermionic Emission Data

Authors: Dilip K. De, Olukunle C. Olawole, Emmanuel S. Joel, Moses Emetere

Abstract:

A good number of electronic properties, such as electrical and thermal conductivities, depend on the charge carrier densities of nanomaterials. By controlling the charge carrier densities during the fabrication (or growth) processes, the physical properties can be tuned. In this paper, we discuss a new technique for estimating the charge carrier densities of nanomaterials from thermionic emission data using the newly modified Richardson-Dushman equation. We find that the technique yields excellent results for graphene and carbon nanotubes.
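
The modified Richardson-Dushman equation used in the paper is not reproduced in this abstract; for reference, the classical law it modifies is:

```latex
% Classical Richardson-Dushman law for the thermionic emission current density:
J \;=\; A\,T^{2}\exp\!\left(-\frac{W}{k_{B}T}\right),
% J: emission current density, A: Richardson constant, T: absolute temperature,
% W: work function, k_B: Boltzmann constant.
```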

Keywords: charge carrier density, nano materials, new technique, thermionic emission

Procedia PDF Downloads 321
2523 The Accuracy of Small Firms at Predicting Their Employment

Authors: Javad Nosratabadi

Abstract:

This paper investigates the difference between firms' actual and expected employment in relation to the amount of loans they invest. In addition, it examines the relationship between the amount of loans received by firms and wages. Empirically, using causal effect estimation and firm-level data from a province in Iran between 2004 and 2011, the results show that there is a range of loan amounts for which firms' expected employment matches their actual employment. In contrast, there is a gap between firms' actual and expected employment for any other loan amount. Furthermore, the results show a positive and significant relationship between the amount of loans invested by firms and wages.

Keywords: expected employment, actual employment, wage, loan

Procedia PDF Downloads 161
2522 Unravelling the Knot: Towards a Definition of ‘Digital Labor’

Authors: Marta D'Onofrio

Abstract:

The debate on the digitalization of the economy has raised questions about how both labor and the regulation of work processes are changing due to the introduction of digital technologies in the productive system. Within the literature, the term 'digital labor' is commonly used to identify the impact of digitalization on labor. Despite the wide use of this term, an unambiguous definition of it is still not available, and this can create confusion in the use of terminology and in attempts at classification. As a consequence, the purpose of this paper is to provide a definition and propose a classification of 'digital labor', drawing on the theoretical approach of organizational studies.

Keywords: digital labor, digitalization, data-driven algorithms, big data, organizational studies

Procedia PDF Downloads 153
2521 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry

Authors: Nirmal Yadav, Tanuja Srivastava

Abstract:

Shannon theory is an exact method to recover a band-limited signal from its sampled values in a discrete implementation, using sinc interpolators. However, sinc-based results are not very satisfactory for band-limited calculations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is therefore an approximation algorithm. In this paper, the error arising from the approximate nature of the reconstruction algorithm has been calculated. This result is derived for fan-beam projection data, which is faster than the parallel-beam geometry.
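
For orientation, a sketch of the windowed convolution backprojection operator in its parallel-beam form; the fan-beam case analysed in the paper adds distance weighting and a change of variables that the abstract does not spell out, and the notation below is assumed.

```latex
% Windowed convolution backprojection, parallel-beam form (assumed notation):
% p_\theta is the projection at angle \theta, and k_W is the convolution kernel whose
% Fourier transform is |\sigma|\,W(\sigma), with W a compactly supported window.
f(x,y) \;\approx\; \int_{0}^{\pi} \bigl(p_{\theta} * k_{W}\bigr)\!\bigl(x\cos\theta + y\sin\theta\bigr)\, d\theta .
```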

Keywords: computed tomography, convolution backprojection, radon transform, fan beam

Procedia PDF Downloads 492
2520 Krill-Herd Step-Up Approach Based Energy Efficiency Enhancement Opportunities in the Offshore Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Kinza Qadeer, Muhammad Abdul Qyyum, Moonyong Lee

Abstract:

Natural gas has become an attractive energy source in comparison with other fossil fuels because of its lower CO₂ and other air pollutant emissions. Therefore, compared to the demand for coal and oil, that for natural gas is increasing rapidly world-wide. The transportation of natural gas over long distances as a liquid (LNG) is preferable for several reasons, including economic, technical, political, and safety factors. However, LNG production is an energy-intensive process due to the tremendous power requirements for the compression of refrigerants, which provide sufficient cold energy to liquefy natural gas. Therefore, one of the major issues in the LNG industry is to improve the energy efficiency of existing LNG processes through a cost-effective approach, that is, optimization. In this context, a bio-inspired Krill-herd (KH) step-up approach was examined to enhance the energy efficiency of a single mixed refrigerant (SMR) natural gas liquefaction (LNG) process, which is considered one of the most promising candidates for offshore (FPSO) LNG production. The optimal design of natural gas liquefaction processes involves multivariable non-linear thermodynamic interactions, which lead to exergy destruction and contribute to process irreversibility. As key decision variables, the optimal values of mixed refrigerant flow rates and process operating pressures were determined based on the herding behavior of krill individuals corresponding to the minimum energy consumption for LNG production. To perform the rigorous process analysis, the SMR process was simulated in Aspen Hysys® software and the resulting model was connected with the Krill-herd approach coded in MATLAB. The optimal operating conditions found by the proposed approach significantly reduced the overall energy consumption of the SMR process, by up to 22.5%, and also improved the coefficient of performance in comparison with the base case. The proposed approach was also compared with other well-proven optimization algorithms, such as genetic and particle swarm optimization algorithms, and was found to exhibit superior performance to these existing approaches.
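
The abstract names the krill-herd metaheuristic but does not reproduce its equations; for orientation, a sketch of the standard krill-herd motion model (Gandomi and Alavi's formulation, with notation assumed here) that such an approach would adapt is:

```latex
% Standard krill-herd update; the paper's step-up variant is not detailed in the abstract.
% Each krill i, encoding refrigerant flow rates and operating pressures, moves according to
\frac{dX_i}{dt} \;=\; N_i + F_i + D_i,
% where N_i is the motion induced by neighbouring krill, F_i the foraging motion toward the
% food source (best solution so far), and D_i random physical diffusion; the fitness is the
% SMR compression power evaluated in the process simulator.
```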

Keywords: energy efficiency, Krill-herd, LNG, optimization, single mixed refrigerant

Procedia PDF Downloads 155
2519 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires the estimation of the parameters of the distribution of the covariates. In this paper, we analyze clinical trial data with five covariates, four of which have some missing values, in a setting where the data are censored.
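
The abstract gives no formulas; a sketch of the E-step weight used by the EM algorithm by the method of weights, under assumed notation, is:

```latex
% For subject i with missing categorical covariate z_i, each candidate level z receives weight
w_{iz}^{(t)} \;=\; P\!\left(z_i = z \mid y_i, x_i;\, \theta^{(t)}\right)
\;=\; \frac{f\!\left(y_i \mid x_i, z;\, \beta^{(t)}\right)\, p\!\left(z \mid x_i;\, \alpha^{(t)}\right)}
           {\sum_{z'} f\!\left(y_i \mid x_i, z';\, \beta^{(t)}\right)\, p\!\left(z' \mid x_i;\, \alpha^{(t)}\right)},
% and the M-step maximizes the weighted complete-data log-likelihood using these weights.
```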

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution

Procedia PDF Downloads 406
2518 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function

Authors: S. B. Provost, Hossein Zareamoghaddam

Abstract:

A univariate density approximation technique is introduced whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function. This approach, which extends Pearson's curve system, is based solely on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of 'big data'. Applications to sets of univariate and bivariate observations will be presented.
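
A sketch of the defining relation, using assumed polynomial notation; Pearson's original system is the special case of a linear numerator and quadratic denominator.

```latex
% Generalised Pearson-type relation assumed by the approximation technique:
\frac{d}{dx}\,\ln f(x) \;=\; \frac{f'(x)}{f(x)} \;=\; \frac{P(x)}{Q(x)}
\;=\; \frac{\sum_{i=0}^{p} a_i x^{i}}{\sum_{j=0}^{q} b_j x^{j}} .
% Multiplying through by Q(x) f(x) and integrating against powers of x turns the
% moment conditions into a linear system for the coefficients a_i, b_j.
```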

Keywords: density estimation, log-density, moments, Pearson's curve system

Procedia PDF Downloads 281