Search results for: performance criteria
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6482

3572 Endeavor in Management Process by Executive Dashboards: The Case of the Financial Directorship in Brazilian Navy

Authors: R. S. Quintal, J. L. Tesch Santos, M. D. Davis, E. C. de Santana, M. de F. Bandeira dos Santos

Abstract:

The objective of this study is to identify the contributions of the introduction of a computerized system within the Accounting Department of the Brazilian Navy Financial Directorship and its possible effects on the budgetary and financial execution of the Brazilian Navy. Its relevance lies in the fact that the management process is responsible for the continuous improvement of organizational performance through higher levels of quality in its activities. Improvements in organizational processes have direct effects on cost, quality, reliability, flexibility and speed. The research method is the case study, chosen, among other reasons, for the greater flexibility it affords in studying processes related to a computerized system. The sources of evidence used were the literature, documents and direct observation; direct observation was carried out by monitoring the implementation of the computerized system in the Division of Management Analysis. The main findings point to the fact that the computerized system may contribute significantly to the standardization of information. Internal processes in the Division of Management Analysis improved, which made possible the consolidation of a standard for management and performance analysis that contributes to global homogeneity in the treatment of information essential to the decision-making process. This study is limited by the fact that its results apply exclusively to the case studied and cannot be generalized to other government bodies.

Keywords: Process Management, Management Control, Business Intelligence.

3571 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset, and there is a lack of generic comparison between classifiers that might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.
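
As an illustration of the comparison described above, the following is a minimal sketch using scikit-learn on synthetic data; the dataset shape, the proportion of informative features (the "discriminating biomarkers"), and the classifier settings are illustrative assumptions, not the study's actual configuration.

```python
# Sketch of a generic classifier comparison on mock high-dimensional
# "biological" data: many features, few real discriminators.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=1000,
                           n_informative=20, n_redundant=0,
                           class_sep=0.8, random_state=0)

classifiers = {
    "SVM": SVC(kernel="linear"),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    # Cross-validated AUC, the metric the abstract reports.
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```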

Keywords: Classification, High dimensional data, Machine learning

3570 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases the number of crimes committed in the digital environment, so digital forensics, which deals with such crimes, has become an important research topic. It is within the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data on the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, and because the outcome depends on the examiner's expertise and experience, relevant data may be overlooked and results may vary between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed that aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
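
A minimal sketch of the kind of hash-based block matching the abstract describes, assuming a raw disk image read in fixed-size blocks and a reference list of known hashes; the block size and the file names are hypothetical, not the authors' implementation.

```python
# Hash every fixed-size block of an evidence image and check it
# against a known hash list of crime-related content.
import hashlib

BLOCK_SIZE = 4096  # bytes; assumed sector-aligned block size

def block_hashes(image_path, block_size=BLOCK_SIZE):
    """Yield (offset, sha256 hex digest) for every block of a raw image."""
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

def match_blocks(image_path, known_hashes):
    """Return offsets of blocks whose hash appears in the known list."""
    return [off for off, h in block_hashes(image_path)
            if h in known_hashes]

# Hypothetical usage: the hash list would come from a reference database.
# hits = match_blocks("evidence.dd",
#                     set(open("hashlist.txt").read().split()))
```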

Keywords: Block matching, digital evidence, hash list.

3569 Perceived Risks in Business-to-Consumer Online Contracts: An Empirical Study in Saudi Arabia

Authors: Shaya Alshahrani

Abstract:

Perceived risks play a major role in consumer intentions, behaviors, attitudes, and decisions about online shopping in the KSA. This paper empirically investigates the influence of six perceived risk dimensions on Saudi consumers: product risk, information risk, financial risk, privacy and security risk, delivery risk, and terms and conditions risk. To ensure the success of this study, a random survey was distributed to capture consumers’ perceived risk and to enable the generalization of the results. Data were collected from 323 respondents in the Kingdom of Saudi Arabia (KSA): 50 who had never shopped online and 273 who had done so. The results indicated that all six risks influenced the respondents’ perceptions of online shopping. The non-online shoppers perceived financial and delivery risks as the most significant barriers to online shopping, followed closely by performance, information, and privacy and security risks; terms and conditions were perceived as less significant. The online consumers considered delivery and performance risks to be the most significant influences on internet shopping, followed closely by information and terms and conditions risks; financial and privacy and security risks were perceived as less significant. This paper argues that there is an urgent need for adequate legal solutions to address the problems identified in this study. This may enhance consumer trust in the KSA online market, increase consumers’ intentions regarding online shopping, and improve consumer protection.

Keywords: Perceived risk, consumer protection, online shopping, Saudi Arabia, online contracts, e-commerce.

3568 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare over the past few years. Co-payments lead to a rationing and rationalization of users’ access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis, of hospital practices in particular and co-payment strategies in general, was carried out across all the European regions and identified four reference countries that apply this tool repeatedly, albeit with different approaches. The structure, content and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, with the results expressed in a scorecard. This allows the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) achieve greater compliance and effectiveness, the English models (total score of 50%) are more accessible, and the French models (total score of 50%) are more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance and were therefore not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen as a strategy not only to moderate the consumption of healthcare products and services, but especially to improve them, as well as a strategy to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: Clinical pharmacy, co-payments, healthcare, medicines.

3567 Developing Proof Demonstration Skills in Teaching Mathematics in the Secondary School

Authors: M. Rodionov, Z. Dedovets

Abstract:

The article describes a theoretical concept for teaching secondary school students proof demonstration skills in mathematics. It describes in detail different levels of mastery of the concept of proof, which correspond to Piaget’s idea of three distinct and progressively more complex stages in the development of human reflection. Lessons for each level contain a specific combination of visual-figurative components and deductive reasoning. It is vital at the transition point between levels to carefully and rigorously recalibrate teaching to reflect the development of more complex reflective understanding. This can apply even within the same age range, since students develop at different speeds and to different potentials. The authors argue that this requires an aware and adaptive approach to lessons to reflect this complexity and variation. The authors also contend that effective teaching, which enables students to properly understand the implementation of proof arguments, must develop specific competences: understanding of the importance of completeness and generality in making a valid argument; being task focused; having an internalised locus of control; and being flexible in approach and evaluation. These criteria must be correlated with the systematic application of corresponding methodologies that are most likely to achieve success. The particular pedagogical decisions made to deliver this objective are illustrated by concrete examples from existing secondary school mathematics courses. The proposed theoretical concept formed the basis for the development of methodological materials which have been tested in 47 secondary schools.

Keywords: Education, teaching of mathematics, proof, deductive reasoning, secondary school.

3566 The Relationship between Procurement Strategies and Sustainability Outcomes: A Systematic Literature Review

Authors: Cathy T. Mpanga Kowet, Aghaegbuna Obinna U. Ozumba

Abstract:

This study examined and identified the inconsistencies, relationships, gaps and recurring themes in the literature regarding the relationship between the procurement strategies employed in construction projects for sustainable buildings and the realization of sustainability goals. A systematic literature review of studies on the relationship between various procurement strategies and the attainment of sustainability outcomes was conducted. Using specific terms, papers published between 2002 and 2018 were identified and screened according to inclusion and exclusion criteria. Current findings reveal that, although the attainment of sustainability goals is achievable with both traditional and contemporary procurement strategies, only projects delivered using modern procurement strategies are capable of meeting and exceeding targeted sustainability objectives. However, the traditional procurement strategy remains the preferred method for most green building construction projects. The results suggest implications for decision makers in considering the impact of the selected procurement strategy on targeted sustainability goals in the early stages of sustainable building construction projects. The study shows that there is a gap between the reported appropriate procurement strategies and what is currently practiced. Theoretically, the study expands the literature on the adoption and diffusion of contemporary procurement strategies by consolidating existing studies to highlight the current gaps. While the study is at the literature review stage, its deductions will serve as the basis for field work involving empirical data.

Keywords: Green building, green construction, procurement method, procurement strategy, sustainability objectives, sustainability outcomes.

3565 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller controls and tracks the load power effectively and smoothly compared with the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
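
A hedged sketch of how the four cited error indices (IAE, ITAE, ISE, ITSE) are computed for a closed-loop PID run; the first-order plant and the gains below are placeholders for illustration, not the paper's PWR core model or PSO-tuned values.

```python
# Discrete PID loop on a stand-in plant, then the four integral
# error indices the abstract uses to judge controller performance.
import numpy as np

dt, T = 0.1, 100.0
t = np.arange(0.0, T, dt)
Kp, Ki, Kd = 2.0, 0.5, 0.1          # PID gains (PSO would tune these)
ref = np.ones_like(t)               # unit step power demand

y, integ, prev_e = 0.0, 0.0, 0.0
e_hist = np.zeros_like(t)
for k in range(len(t)):
    e = ref[k] - y
    integ += e * dt
    u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
    prev_e = e
    y += dt * (-y + u) / 5.0        # placeholder plant: 5*dy/dt = -y + u
    e_hist[k] = e

IAE  = np.sum(np.abs(e_hist)) * dt        # integral absolute error
ITAE = np.sum(t * np.abs(e_hist)) * dt    # time-weighted absolute error
ISE  = np.sum(e_hist**2) * dt             # integral square error
ITSE = np.sum(t * e_hist**2) * dt         # time-weighted square error
print(f"IAE={IAE:.3f} ITAE={ITAE:.3f} ISE={ISE:.3f} ITSE={ITSE:.3f}")
```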

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

3564 Performance of BLDC Motor under Kalman Filter Sensorless Drive

Authors: Yuri Boiko, Ci Lin, Iluju Kiringa, Tet Yeap

Abstract:

The performance of a permanent magnet brushless direct current (BLDC) motor controlled by a Kalman filter based position-sensorless drive is studied in terms of its dependence on variations of the system’s parameters. The effects of the system’s parameter changes on the dynamic behavior of the state variables are verified. The closed-loop control scheme with the Kalman filter in the feedback line is simulated. Two separate data sampling modes are distinguished in analyzing the feedback output from the BLDC motor: (1) equal angular separation and (2) equal time intervals. In case (1), the data are collected via equal intervals Δθ of the rotor’s angular position θi, i.e. keeping Δθ = const. In case (2), the data collection time points ti are separated by equal sampling time intervals Δt = const. The effects of the parameter changes on the sensorless control flow are demonstrated, in particular, reduction of the instability torque ripples, switching spikes, and torque load balancing. It is specifically shown that an efficient suppression of commutation-induced instability torque ripples is achievable by selecting the sampling rate in the Kalman filter settings above a certain critical value. The computational cost of such suppression is shown to be higher for motors with lower induction values of the windings.
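
The two sampling modes contrasted above can be sketched as follows; the rotor trajectory and the Δθ and Δt values are illustrative assumptions used only to show how the two sample sets are drawn from the same motion.

```python
# Draw measurement indices from one simulated rotor trajectory under
# (1) equal angular separation and (2) equal time intervals.
import numpy as np

t = np.linspace(0.0, 1.0, 100_000)           # dense simulation time grid
theta = 2 * np.pi * (50 * t + 10 * t**2)     # rotor angle (accelerating)

# Mode 2: equal time intervals, delta-t = const
dt = 1e-3
idx_time = np.searchsorted(t, np.arange(0.0, 1.0, dt))

# Mode 1: equal angular separation, delta-theta = const
dtheta = np.deg2rad(6.0)                     # e.g. one sample per 6 degrees
idx_angle = np.searchsorted(theta, np.arange(theta[0], theta[-1], dtheta))

print("equal-time samples: ", len(idx_time))
print("equal-angle samples:", len(idx_angle))
# The Kalman filter would be fed measurements at t[idx_time] or
# t[idx_angle]; per the abstract, raising the sampling rate above a
# critical value suppresses commutation-induced torque ripples.
```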

Keywords: BLDC motor, Kalman filter, sensorless drive, state variables, instability torque ripples reduction, sampling rate.

3563 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influences of loading conditions on fatigue crack initiation have been studied, and the effects of some artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built up. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of the FE simulations are combined with the critical plane concept to determine the material points with the greatest possibility of fatigue failure. Based on the stress-strain responses, and employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting), a fatigue life analysis is carried out. The results are reported for various loading conditions and different defect sizes. Afterwards, the cyclic mechanism of the test rig is evaluated from the operational viewpoint, and the fatigue life predictions are compared with the number of cycles the rig is expected to deliver by its cyclic nature. Finally, the estimated duration of the experiments until fatigue crack initiation is roughly determined.

Keywords: Fatigue, test rig, crack initiation, life, rail, squats.

3562 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and therefore vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a hypothesis for a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill from the results obtained with the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.

Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill.

3561 Closed-Form Delay Model for On-Chip VLSI RLCG Interconnects for Ramp Input under Different Damping Conditions

Authors: Susmita Sahoo, Madhumanti Datta, Rajib Kar

Abstract:

Fast delay estimation methods, as opposed to simulation techniques, are needed for incremental performance-driven layout synthesis. On-chip inductive effects are becoming predominant in deep submicron interconnects due to increasing clock speed and circuit complexity. Inductance causes noise in signal waveforms, which can adversely affect the performance of the circuit and signal integrity. Several approaches have been put forward that consider inductance in on-chip interconnect modelling. However, at even higher frequencies, of the order of a few GHz, the shunt dielectric lossy component becomes comparable to the other electrical parameters in high-speed VLSI design. In order to cope with this effect, the on-chip interconnect has to be modelled as a distributed RLCG line. Elmore delay based methods, although efficient, cannot accurately estimate the delay for an RLCG interconnect line. In this paper, an accurate analytical delay model is derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We have performed simulations in a 0.18 μm technology node, and an error as low as 5% has been achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling is also discussed, and it is shown that if G is neglected in interconnect line modelling, a delay error as high as 6% results when compared to SPICE.

Keywords: Delay modelling, on-chip interconnect, RLCG interconnect, ramp input, damping, VLSI.

3560 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent and ubiquitous, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual/physical and virtual/cloud multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the data collected in order to determine whether the differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, which were computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.
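
A minimal sketch of the metrics and significance test named above: speedup and efficiency computed from timing data, and Welch's t-test on repeated runs of the two machine types. All timing numbers here are made up for illustration.

```python
# Speedup/efficiency from timings, plus a t-test on actual-vs-virtual
# execution times, as in the evaluation described in the abstract.
import numpy as np
from scipy import stats

# Repeated execution times (seconds) for the same parallel program.
actual  = np.array([12.1, 11.8, 12.4, 12.0, 11.9])   # physical multicore
virtual = np.array([24.3, 25.1, 23.8, 24.7, 24.9])   # cloud VM

t_serial, p = 88.0, 8                 # serial time and core count (assumed)
speedup = t_serial / actual.mean()    # parallel speedup
efficiency = speedup / p              # speedup per core
print(f"speedup={speedup:.2f}  efficiency={efficiency:.2f}")

# Is the actual-vs-virtual difference statistically significant?
t_stat, p_val = stats.ttest_ind(actual, virtual, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")
```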

Keywords: Cloud computing systems, multicore systems, parallel delaunay triangulation, parallel surface modeling and generation.

3559 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.

Keywords: Computer vision, deep learning, object detection, semiconductor.

3558 Urban Air Pollution – Trend and Forecasting of Major Pollutants by Time Series Analysis

Authors: A.L. Seetharam, B.L. Udaya Simha

Abstract:

Bangalore City is facing an acute problem of atmospheric pollution due to the heavy increase in traffic and developmental activities in recent years. The present study is an attempt to assess the ambient air quality trend at three stations, viz. AMCO Batteries Factory, Mysore Road; GRAPHITE INDIA FACTORY, KHB Industrial Area, Whitefield; and Ananda Rao Circle, Gandhinagar, with respect to some of the major criteria pollutants: suspended particulate matter (SPM), oxides of nitrogen (NOx), and oxides of sulphur (SO2). The sites are representative of the various kinds of growth, viz. commercial, residential and industrial, prevailing in Bangalore, which are contributing to air pollution. The concentration of sulphur dioxide (SO2) at all locations showed a falling trend due to the use of refined petrol and diesel in recent years. The concentration of oxides of nitrogen (NOx) showed an increasing trend but was within the permissible limits. The concentration of suspended particulate matter (SPM) showed a mixed trend. The correlation between modelled and observed values is found to vary from 0.4 to 0.7 for SO2, 0.45 to 0.65 for NOx and 0.4 to 0.6 for SPM. About 80% of the data are observed to fall within an error band of ±50%. Forecast tests for the best-fit models showed the same trend as the actual values in most cases. The deviations observed in a few cases could be attributed to changes in the quality of petro products, increase in the volume of traffic, introduction of LPG as a fuel in many types of automobiles, poor condition of roads, prevailing meteorological conditions, etc.
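
A hedged sketch of the kind of time-series fit, forecast test, and error-band check described above, using an ARIMA model from statsmodels on a synthetic monthly SO2 series; the series, the falling trend, and the model order are assumptions for illustration only.

```python
# Fit a time-series model to a pollutant series, forecast a year ahead,
# then compute model-observed correlation and the +/-50% error band.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = 120
# Synthetic SO2 series with a falling trend, as reported in the abstract.
so2 = 40.0 - 0.15 * np.arange(months) + rng.normal(0, 3, months)

train, test = so2[:-12], so2[-12:]
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=12)

corr = np.corrcoef(forecast, test)[0, 1]
within_band = np.mean(np.abs(forecast - test) / np.abs(test) <= 0.5)
print(f"model-observed correlation: {corr:.2f}")
print(f"share of forecasts within +/-50% band: {within_band:.0%}")
```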

Keywords: Bangalore, urban air pollution, time series analysis.

3557 Numerical Simulation of the Dynamic Behavior of a LaNi5 Water Pumping System

Authors: Miled Amel, Ben Maad Hatem, Askri Faouzi, Ben Nasrallah Sassi

Abstract:

A metal hydride water pumping system uses hydrogen as a working fluid to pump water at low head and high discharge. The principal operation of this pump is based on the desorption of hydrogen at high pressure and its absorption at low pressure by a metal hydride. This work is devoted to studying the dynamic behavior of a metal hydride pump (MHP) using an unsteady model and LaNi5 as the hydriding alloy. The study shows that with the MHP it is possible to pump 340 l/kg-cycle of water in 15,000 s using 1 kg of LaNi5 at a desorption temperature of 360 K, a pumping head of 5 m and a desorption gear ratio of 33. The study also reveals that the error given by the steady model using LaNi5 is about 2%. A dimensional mathematical model and the governing equations of the pump are presented to predict the coupled heat and mass transfer within the MHP. Then, a numerical simulation is carried out to present the time evolution of the specific water discharge and to test the effect of different parameters (desorption temperature, absorption temperature, desorption gear ratio) on the performance of the water pumping system (specific water discharge, pumping efficiency and pumping time). In addition, a comparison between results obtained with the steady and unsteady models is performed for different hydride masses. Finally, a geometric configuration of the reactor is simulated to optimize the pumping time.

Keywords: Dynamic behavior, unsteady model, LaNi5, performance of the water pumping system.

3556 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was performed between the scattering coefficients and the soil moisture content to select the incidence angle most suitable for retrieval of the soil moisture content; the 25° incidence angle was found to be the most suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986 and 0.9214, respectively, at HH-polarization, and 3.6186, 0.9373 and 0.9428, respectively, at VV-polarization.
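
A sketch of the retrieval-and-evaluation pipeline described above: support vector regression mapping scattering coefficients to soil moisture, judged with %Bias, RMSE and NSE. The data below are synthetic placeholders, not the X-band measurements, and the SVR hyperparameters are assumptions.

```python
# SVR fit plus the three statistical performance indices named in the
# abstract: %Bias, RMSE, and Nash-Sutcliffe Efficiency (NSE).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
sigma0 = rng.uniform(-25, -5, (200, 1))          # scattering coeff. (dB)
sm = 5.0 + 1.2 * (sigma0[:, 0] + 25) + rng.normal(0, 1.5, 200)  # moisture %

X_tr, X_te, y_tr, y_te = train_test_split(sigma0, sm, random_state=0)
pred = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr).predict(X_te)

bias_pct = 100 * np.sum(pred - y_te) / np.sum(y_te)        # %Bias
rmse = np.sqrt(np.mean((pred - y_te) ** 2))                # RMSE
nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"%Bias={bias_pct:.3f}  RMSE={rmse:.3f}  NSE={nse:.3f}")
```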

Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.

3555 Influence of Dietary Inclusion of Butyric Acids, Calcium Formate, Organic Acids and Their Salts on Rabbits' Productive Performance, Carcass Traits and Meat Quality

Authors: V. Viliene, A. Raceviciute-Stupeliene, V. Sasyte, V. Slausgalvis, R. Gruzauskas, J. Al-Saifi

Abstract:

Animal nutritionists and scientists have searched for alternative measures to improve production. One such alternative is the use of organic acids as feed additives in animal nutrition. This study was conducted to investigate the impact of butyric acids, calcium formate, organic acids, and their salts (BCOS) additives on rabbits’ productive performance, carcass traits and meat quality. The study was conducted with 14 Californian breed rabbits, assigned to two treatment groups (seven rabbits per treatment group). The dietary treatments were 1) a control diet, and 2) a diet supplemented with the BCOS mixture at 2 kg/t of feed. Growth performance characteristics (body weight, daily weight gain, daily feed intake, feed conversion ratio, mortality) were evaluated. The rabbits were slaughtered, and carcass characteristics and meat quality were evaluated. Samples of loin and hind leg meat were analysed to determine carcass characteristics, pH and colour measurements, cholesterol, and malonyldialdehyde (MDA) content. Differences between treatments were significant for body weight (1.30 vs. 1.36 kg; P<0.05), daily weight gain (16.60 vs. 17.85 g; P<0.05), and daily feed intake (78.25 vs. 80.58 g; P<0.05) for the control and experimental groups, respectively, over the entire experimental period (from 28-77 days old). No significant differences were found in feed conversion ratio and mortality. The insertion of the feed additives in the diets did not significantly influence the carcass yield or the proportions of the various carcass parts and organs. Differences between treatments were significant for the pH value after 48 h in loin (5.86 vs. 5.74; P<0.05) and hind leg meat (6.62 vs. 6.65; P<0.05), a more intense colour b* of loin (5.57 vs. 6.06; P<0.05), and a less intense colour a* (14.99 vs. 13.15; P<0.05) in hind leg meat. Cholesterol content in hind leg meat decreased by 17.67 mg/100 g compared to the control group (P<0.05). After storage for three months, the MDA concentration decreased in loin and hind leg meat by 0.3 μmol/kg and 0.26 μmol/kg, respectively, compared to that of the control group (P<0.05). The results of this study suggest that BCOS could potentially be used in rabbit nutrition, with consequent benefits for the rabbits’ productivity and the nutritional quality of rabbit meat for consumers.

Keywords: Butyric acids, calcium formate, meat quality, organic acids salts, rabbits, productivity.

3554 Design and Construction Validation of Pile Performance through High Strain Pile Dynamic Tests for both Contiguous Flight Auger and Drilled Displacement Piles

Authors: S. Pirrello

Abstract:

Sydney’s booming real estate market has pushed property developers to invest in historically “no-go” areas, which were previously too expensive to develop. These areas are usually near rivers, where the sites are underlain by deep alluvial and estuarine sediments. In these ground conditions, conventional bored pile techniques are often not competitive. Contiguous Flight Auger (CFA) and Drilled Displacement (DD) pile techniques, on the other hand, are suitable for these ground conditions. This paper deals with the design and construction challenges encountered with these piling techniques for a series of high-rise towers in Sydney’s West. The advantages of DD over CFA piles, such as reduced overall spoil with substantial cost savings and achievable rock sockets in medium-strength bedrock, are discussed. Design performance was assessed with PIGLET. Pile performance is validated in two stages: during construction, with the interpretation of real-time data from the piling rigs’ on-board computers, and after construction, with analyses of the results from high strain pile dynamic testing (PDA). Results are then presented and discussed. High strain testing data are presented as Case Pile Wave Analysis Program (CAPWAP) analyses.

Keywords: Contiguous flight auger, case pile wave analysis, high strain pile, drilled displacement, pile performance.

3553 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been proven in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for the identification of measurement noise variances is developed and applied to the estimation of the dynamical state and measurement data in a discrete linear dynamical system. At each time step, this algorithm estimates the measurement noise variance and the state of the system with a Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Different simulations are presented to show the influence of the noise variance in the measurement data on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated with a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with its ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data obtained by the algorithm is compared across different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are explained.
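
A simplified scalar sketch of an adaptive Kalman filter that tracks the measurement-noise variance R online. The exponential-forgetting, innovation-based update below is a standard heuristic standing in for the paper's Bayesian fixed-point iteration, which is more elaborate; all numeric values are assumptions.

```python
# Scalar Kalman filter with identity dynamics that adapts its
# measurement-noise variance estimate from innovation statistics.
import numpy as np

rng = np.random.default_rng(2)
n, true_R = 500, 4.0
x_true = np.cumsum(rng.normal(0, 0.1, n))        # random-walk state
z = x_true + rng.normal(0, np.sqrt(true_R), n)   # noisy measurements

Q, R_hat = 0.01, 1.0                 # process noise, initial R guess
x, P, alpha = 0.0, 1.0, 0.02         # state, covariance, forgetting rate
for k in range(n):
    P += Q                           # predict (identity dynamics)
    nu = z[k] - x                    # innovation
    # Adapt R from innovation statistics: E[nu^2] = P + R.
    R_hat = (1 - alpha) * R_hat + alpha * max(nu**2 - P, 1e-6)
    S = P + R_hat                    # innovation variance
    K = P / S                        # Kalman gain
    x += K * nu                      # state update
    P *= (1 - K)                     # covariance update

print(f"estimated R = {R_hat:.2f} (true {true_R})")
```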

Keywords: adaptive filtering, Bayesian approach, Kalman filtering, variance tracking

3552 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations

Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang

Abstract:

Natural gas hydrates (NGHs) have gained increasing attention as promising alternative energy sources. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, being composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation, changing the soil liquid plastic limit index, which significantly affects the formation quality and leads to wellbore instability due to the metastable character of hydrate-bearing sediments. Therefore, controlling filtrate loss into the formation during drilling is essential to protecting the stability of the wellbore. In this study, the temperature-sensitive nanogel P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the amounts of 2-acrylamido-2-methyl-1-propanesulfonic acid (AMPS), tert-butyl acrylate (tBA), and methylene-bis-acrylamide (MBA) added on the microgel synthesis process and temperature-sensitive behavior were investigated. The results showed that, as a reactive emulsifier, AMPS can not only participate in the polymerization reaction but also act as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased as the content of the hydrophobic monomer tBA increased. An increase in the content of the cross-linking agent MBA led to a rise in the coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution range of 100-1000 nm. The temperature-sensitive nanogel can effectively improve the microfiltration performance of the drilling fluid. Since a combination of a series of nanogels can maintain a wide particle size distribution at any temperature, around 200 nm to 800 nm, the self-adaptive plugging capacity of the nanogels for hydrate sediments was demonstrated. The thermosensitive nanogel is a potential intelligent plugging material for drilling operations in NGH-bearing sediments.

Keywords: Temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments.

3551 Membrane Distillation Process Modeling: Dynamical Approach

Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati

Abstract:

This paper presents a complete dynamic model of a membrane distillation process. The model contains two consistent dynamic sub-models: a 2D advection-diffusion equation modeling the whole process, and a modified heat equation modeling the membrane itself. The complete model describes the temperature diffusion phenomenon across the feed container, membrane, permeate container and the boundary layers of the membrane, giving an online and complete temperature profile for each point in the domain. It explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters, and justifies the process behavior during the transient and steady-state phases. The process can be monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining production rates as desired and gives recommendations during membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of the membrane boundary temperature with time, the vapor mass transfer along the process, and the temperature difference between the membrane boundary layers are depicted and included. Simulations were performed on the complete model with real membrane specifications. The plots show consistency between the 2D advection-diffusion model and the expected behavior of the system as well as the literature. The evolution of heat inside the membrane, starting from the transient response until reaching the steady-state response, is illustrated for fixed and varying times.
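
A minimal finite-difference sketch of the 2D advection-diffusion equation dT/dt + u·∇T = a ∇²T that the model above is built on; the grid size, flow velocity, diffusivity, and boundary temperatures are illustrative assumptions, not the paper's membrane specifications.

```python
# Explicit time-stepping of a 2D advection-diffusion temperature field
# with a hot feed-side boundary, the building block of the model above.
import numpy as np

nx, ny, dx, dt = 50, 50, 1e-3, 1e-3
a = 1.4e-7                      # thermal diffusivity (m^2/s), assumed
ux, uy = 1e-4, 0.0              # feed-side flow velocity (m/s), assumed
T = np.full((nx, ny), 300.0)    # initial temperature field (K)
T[0, :] = 330.0                 # hot feed boundary

for _ in range(1000):
    Tn = T.copy()
    # 5-point Laplacian for the diffusion term
    lap = (Tn[2:, 1:-1] + Tn[:-2, 1:-1] + Tn[1:-1, 2:] + Tn[1:-1, :-2]
           - 4 * Tn[1:-1, 1:-1]) / dx**2
    # Upwind differences for the advection term
    adv = (ux * (Tn[1:-1, 1:-1] - Tn[:-2, 1:-1])
           + uy * (Tn[1:-1, 1:-1] - Tn[1:-1, :-2])) / dx
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + dt * (a * lap - adv)

print(f"temperature at mid-domain: {T[nx//2, ny//2]:.2f} K")
```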

Keywords: Membrane distillation, Dynamical modeling, Advection-diffusion equation, Thermal equilibrium, Heat equation.

3550 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution can be found, although optimization algorithms exist, such as the genetic algorithm, ant colony optimization, etc. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually employed in this situation to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling and proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
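
A toy sketch of RL-based resource scheduling to illustrate the decision loop only: tabular Q-learning assigns each incoming job to one of several nodes and is rewarded for low load imbalance. The paper's method replaces the table with a recurrent neural network; node counts, job sizes, and the reward are all assumptions.

```python
# Tabular Q-learning over discretized node loads: pick a node for each
# job, observe the resulting imbalance, and update the Q-table.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_levels = 4, 6          # actions, discretized load levels
Q = np.zeros((n_levels**n_nodes, n_nodes))
alpha, gamma, eps = 0.1, 0.9, 0.1

def state_id(loads):
    """Encode clamped integer load levels as a single base-n_levels index."""
    s = 0
    for lv in np.minimum(loads, n_levels - 1).astype(int):
        s = s * n_levels + lv
    return s

loads = np.zeros(n_nodes)
for step in range(20_000):
    s = state_id(loads)
    a = rng.integers(n_nodes) if rng.random() < eps else int(Q[s].argmax())
    loads[a] += rng.uniform(0.5, 1.5)        # assign job to node a
    loads = np.maximum(loads - 0.25, 0.0)    # nodes drain work each step
    reward = -loads.std()                    # penalize load imbalance
    s2 = state_id(loads)
    Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])

print("greedy action from empty state:",
      int(Q[state_id(np.zeros(n_nodes))].argmax()))
```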

Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.

3549 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupants in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests performed according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts’ opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

3548 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (that represent the bare earth) and non-ground points (that represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point in the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
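
A 1D cross-section sketch of the spline-based ground filtering idea described above: fit a smoothing spline, down-weight points lying far above the fit (canopy returns), and iterate. The asymmetric weight rule and all numeric values are illustrative choices, not the paper's exact weight function.

```python
# Iteratively reweighted smoothing-spline fit separating ground returns
# from canopy returns in a synthetic forested profile.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 100, 400))
ground = 0.05 * x + 2 * np.sin(x / 15)            # true terrain profile
z = ground + rng.normal(0, 0.1, x.size)
canopy = rng.random(x.size) < 0.4                  # 40% vegetation hits
z[canopy] += rng.uniform(2, 15, canopy.sum())

w = np.ones_like(z)
for _ in range(5):
    spl = UnivariateSpline(x, z, w=w, s=len(x))
    resid = z - spl(x)
    # Asymmetric weights: points well above the surface are vegetation.
    w = np.where(resid > 0.3, 0.01, 1.0)

dtm = spl(x)
rmse = np.sqrt(np.mean((dtm - ground) ** 2))
print(f"RMSE against reference terrain: {rmse:.2f} m")
```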

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.

3547 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. The electricity price has a volatile and non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and DNN approaches give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, comparative forecasting studies have been carried out with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with regard to their MAE (Mean Absolute Error) and MSE (Mean Square Error) results. The DNN models give better forecasting performance than the shallow-ANN models; the best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
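
A sketch of the shallow-ANN vs. DNN comparison on a price-forecasting style regression, using scikit-learn MLPs and the MAE/MSE metrics from the abstract; the synthetic hourly inputs (load, temperature, lagged price) and network sizes are assumptions standing in for the Turkish market data and the paper's architectures.

```python
# Compare a one-hidden-layer MLP against a deeper one on synthetic
# day-ahead-style data, reporting MAE and MSE for each.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
load = rng.uniform(20, 40, n)                  # system load (GW), assumed
temp = rng.uniform(-5, 35, n)                  # temperature (deg C), assumed
lag_price = rng.uniform(20, 80, n)             # previous-hour price
price = 0.8 * lag_price + 0.5 * load - 0.2 * temp + rng.normal(0, 3, n)
X = np.column_stack([load, temp, lag_price])

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
models = {
    "shallow-ANN": MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64), max_iter=2000),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MSE={mean_squared_error(y_te, pred):.3f}")
```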

Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.

3546 Study on Mitigation Measures of Gumti Hydro Power Plant Using Analytic Hierarchy Process and Concordance Analysis Techniques

Authors: K. Majumdar, S. Datta

Abstract:

Electricity is recognized as fundamental to industrialization and to improving the quality of life of the people. Harnessing the immense untapped hydropower potential in the Tripura region opens avenues for growth and provides an opportunity to improve the well-being of the people of the region, while making a substantial contribution to the national economy. The Gumti hydro power plant generates power to mitigate the power crisis in Tripura, India. The first unit of the hydro power plant (5 MW) was commissioned in June 1976, and another two units of 5 MW each were commissioned simultaneously. However, out of the 15 MW capacity, at present only 8-9 MW is produced from the Gumti hydro power plant during the rainy season, and during the lean season production falls to 0.5 MW due to shortage of water. It is therefore essential to implement mitigation measures so that further deterioration can be prevented and the original capacity can be restored. The decision-making abilities of the Analytic Hierarchy Process (AHP) and Concordance Analysis Techniques (CAT) are utilized to identify a better decision or solution to the present problem. Related attributes were identified by surveying experts and the available reports and literature. Similar criteria were removed, and ultimately seven relevant ones were identified. All the attributes are compared with each other and rated according to their importance over the others with the help of a pairwise comparison matrix. In the present investigation, different mitigation measures are identified and compared to find the most suitable alternative that can resolve the present uncertainties surrounding the existence of the Gumti Hydro Power Plant.
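
A minimal sketch of the AHP step named above: derive attribute weights from a pairwise comparison matrix via its principal eigenvector and check the consistency ratio. The 3x3 matrix here is an illustration, not the study's actual seven-attribute matrix.

```python
# AHP priority weights from a pairwise comparison matrix, plus the
# standard consistency-ratio check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])    # pairwise importance judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)        # consistency index
RI = 0.58                                   # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR =", round(CI / RI, 3))
```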

Keywords: Concordance Analysis Techniques, Analytic Hierarchy Process, Hydro Power.

3545 The Importance of Zakat in Struggle against Circle of Poverty and Income Redistribution

Authors: Hasan Bulent Kantarcı

Abstract:

This paper examines how zakat provides fair income redistribution and aids the struggle against poverty. Ensuring fair income redistribution and combating poverty are among the fundamental tasks performed by countries all over the world. Each country seeks a solution for these problems according to its political, economic and administrative style through various economic and financial policies. The same ends can be pursued through the zakat institution in Islam. Nowadays, we observe different versions of zakat in developed countries; applications such as the negative income tax denote merely a different form of zakat, applied in almost the same way but under a changed name. However, the minimum values above which zakat is due (e.g. 85 g of gold or 40 animals) have been altered, and various amounts are put into practice. It might be named negative income tax instead of zakat; nonetheless, these applications are based on the Holy Koran and the hadith released 1400 years ago. Besides, considering the savagery and slavery in the world at those times, we can easily recognize the true value of the zakat applied for the first time then in the Islamic system. Through zakat, governments are able to transfer income to the poor as a means of enabling them to achieve the minimum standard of living required. With regard to who benefits from zakat, objective and fair criteria were used, contrary to the notion that it was based on people’s own choices. Since zakat is obligatory, the transfers are not forwarded directly but are distributed via the government, which requires vast governmental organizations. Through the application of zakat, reduced levels of poverty can be achieved and fair income redistribution ensured.

Keywords: Cycle of poverty, Islamic finance, income redistribution, zakat.

3544 Predictor Factors for Treatment Failure among Patients on Second Line Antiretroviral Therapy

Authors: Mohd. A. M. Rahim, Yahaya Hassan, Mathumalar L. Fahrni

Abstract:

A second line antiretroviral therapy (ART) regimen is used when patients fail their first line regimen. Many factors, such as non-adherence and drug resistance, as well as virological and immunological failure, lead to failure of the second line highly active antiretroviral therapy (HAART) regimen. This study was aimed at determining the predictor factors for treatment failure with second line HAART and analyzing median survival times. An observational, retrospective study was conducted in Sungai Buloh Hospital (HSB) to assess the current status of HIV patients treated with second line HAART regimens. Convenience sampling was used, and 104 patients were included based on the study’s inclusion and exclusion criteria. Data were collected for six months, i.e. from July until December 2013, and then analysed using SPSS version 18. Kaplan-Meier and Cox regression analyses were used to measure median survival times and predictor factors for treatment failure. The study population consisted mainly of male subjects, aged 30-45 years, who were heterosexual and had had HIV infection for less than 6 years. The most common second line HAART regimen given was a lopinavir/ritonavir (LPV/r)-based combination. Kaplan-Meier analysis showed that patients on LPV/r demonstrated longer median survival times than patients on indinavir/ritonavir (IDV/r)-based combinations (p<0.001). The commonest reason for treatment failure with second line HAART was non-adherence. Based on the Cox regression analysis, the other predictor factors for treatment failure with the second line HAART regimen were age and mode of HIV transmission.
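
A hedged sketch of the two survival analyses named above, using the lifelines library: Kaplan-Meier median survival by regimen and a Cox proportional hazards model for predictor factors. The toy dataframe (durations, events, covariates) is a randomly generated stand-in for the HSB cohort.

```python
# Kaplan-Meier by regimen group and a Cox regression on a toy cohort.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(6)
n = 104
df = pd.DataFrame({
    "months": rng.exponential(36, n).round(1),   # time on 2nd-line ART
    "failed": rng.integers(0, 2, n),             # treatment failure event
    "age": rng.integers(20, 60, n),
    "lpv_r": rng.integers(0, 2, n),              # 1 = LPV/r, 0 = IDV/r
})

kmf = KaplanMeierFitter()
for grp, sub in df.groupby("lpv_r"):
    label = "LPV/r" if grp else "IDV/r"
    kmf.fit(sub["months"], event_observed=sub["failed"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Cox model: which covariates predict treatment failure?
cph = CoxPHFitter().fit(df, duration_col="months", event_col="failed")
cph.print_summary()
```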

Keywords: Adherence, antiretroviral therapy, second line, treatment failure.

3543 Investigation of I/Q Imbalance in Coherent Optical OFDM System

Authors: R. S. Fyath, Mustafa A. B. Al-Qadi

Abstract:

The in-phase/quadrature (I/Q) amplitude and phase imbalance effects are studied in coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. An analytical model for the I/Q imbalance is developed and supported by simulation results. The results indicate that the I/Q imbalance degrades the BER performance considerably.
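
A sketch of a common baseband I/Q imbalance model consistent with the kind of analysis described: gain imbalance g and phase imbalance φ turn an ideal signal x into y = μx + νx*, leaking a mirror image whose strength is measured by the image rejection ratio (IRR). The imbalance values and the QPSK test signal are illustrative assumptions, not the paper's analytical model.

```python
# Apply gain/phase I/Q imbalance to QPSK symbols and measure the
# resulting image rejection ratio and error vector magnitude.
import numpy as np

g, phi = 1.05, np.deg2rad(3.0)       # 5% gain, 3 degree phase imbalance
mu = 0.5 * (1 + g * np.exp(-1j * phi))
nu = 0.5 * (1 - g * np.exp(+1j * phi))

rng = np.random.default_rng(7)
bits = rng.integers(0, 4, 10_000)
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-power QPSK symbols

y = mu * x + nu * np.conj(x)          # imbalanced received symbols

irr_db = 10 * np.log10(abs(mu) ** 2 / abs(nu) ** 2)
evm = np.sqrt(np.mean(abs(y - x) ** 2))   # error vs. ideal symbols
print(f"IRR = {irr_db:.1f} dB, EVM = {evm:.4f}")
```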

Keywords: Coherent detection, I/Q imbalance, OFDM, optical communications
