Search results for: adaptive filter and average filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6277

5167 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. Although much research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, which makes such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. An internal generative mechanism working model based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human vision. An estimation method for the structure-based distortion visibility thresholds of the color components is then presented in a locally adaptive way to design the quantization process in a wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer requires no side information. Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with an existing color image compression method at perceptually lossless visual quality.
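The locally adaptive quantization idea can be illustrated with a minimal sketch that quantizes each wavelet subband of one color component with a step derived from a subband-level threshold. This is only an illustration of threshold-driven quantization: the wavelet, decomposition depth, and the uniform per-subband steps below are placeholder assumptions, not the authors' IGM/structure-based thresholds.

```python
# Minimal sketch of visibility-threshold-driven wavelet quantization (illustrative only;
# the per-subband steps stand in for the paper's per-coefficient IGM-based thresholds).
import numpy as np
import pywt

def quantize_with_thresholds(channel, wavelet="db4", levels=3, base_threshold=4.0):
    coeffs = pywt.wavedec2(channel.astype(float), wavelet, level=levels)
    # Lowest subband: quantize with the base step (stands in for the spatial assessment).
    out = [np.round(coeffs[0] / base_threshold) * base_threshold]
    for k, details in enumerate(coeffs[1:]):  # details = (cH, cV, cD), coarse to fine
        step = base_threshold * 2 ** k        # hypothetical energy-based allocation
        out.append(tuple(np.round(band / step) * step for band in details))
    return pywt.waverec2(out, wavelet)

channel = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
reconstructed = quantize_with_thresholds(channel)
print("max abs reconstruction error:", np.abs(reconstructed - channel).max())
```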

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 226
5166 Vine Growers' Climate Change Adaptation Strategies in Hungary

Authors: Gabor Kiraly

Abstract:

Wine regions are based on equilibria between climate, soil, grape varieties, and farming expertise that define the special character and quality of local vine farming and wine production. Changes in climate conditions may increase the risk of destabilizing this equilibrium. Adaptation decisions, including adjustments to practices, processes, and capital in response to climate change stresses, may reduce this risk. However, farmers' adaptive behavior is subject to a wide range of factors and forces, such as the links between climate change implications and production, farm-scale adaptive capacity, and other external forces that might hinder them from responding efficiently to climate change challenges. This paper aims to study the climate change adaptation practices and strategies of grape growers by applying a complex, holistic approach involving theories, methods, and tools from both the environmental and social sciences. It introduces the field of adaptation studies as an evidence-based discourse by presenting an overview of examples from wine regions where adaptation studies have already reached an advanced stage. This serves as a theoretical background for preliminary research that examines the feasibility and applicability of such a research approach in the Hungarian context.

Keywords: climate change, adaptation, viticulture, Hungary

Procedia PDF Downloads 217
5165 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, in order to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a peripheral blood mononuclear cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
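A minimal sketch of the idea, shared-nearest-neighbour graph construction, modularity-based community detection, and removal of weakly attached vertices, is given below. The Louvain step, the thresholds, and the toy data are stand-ins for the paper's own procedure, not a reimplementation of it.

```python
# Minimal sketch: shared-nearest-neighbour (SNN) graph, Louvain community detection,
# and removal of weakly attached cells. Thresholds and the Louvain step are stand-ins
# for the paper's own modularity-based procedure.
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sklearn.neighbors import kneighbors_graph

def snn_cluster_filter(X, k=20, min_shared=5, min_within_fraction=0.5):
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")   # binary kNN adjacency
    shared = (A @ A.T).toarray()                                  # shared-neighbour counts
    np.fill_diagonal(shared, 0)
    G = nx.Graph()
    G.add_nodes_from(range(X.shape[0]))
    rows, cols = np.nonzero(shared >= min_shared)
    G.add_weighted_edges_from((i, j, float(shared[i, j])) for i, j in zip(rows, cols) if i < j)
    communities = louvain_communities(G, weight="weight", seed=0)  # modularity optimisation
    labels = np.full(X.shape[0], -1)
    for c, members in enumerate(communities):
        for i in members:
            labels[i] = c
    # Flag cells whose edge weight mostly leaves their own community ("weak belonging").
    noisy = []
    for i in G.nodes:
        w_all = sum(d["weight"] for _, _, d in G.edges(i, data=True))
        w_in = sum(d["weight"] for _, j, d in G.edges(i, data=True) if labels[j] == labels[i])
        if w_all == 0 or w_in / w_all < min_within_fraction:
            noisy.append(i)
    return labels, noisy

# Toy usage on three Gaussian blobs (a stand-in for a cells-by-genes embedding).
from sklearn.datasets import make_blobs
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels, noisy = snn_cluster_filter(X)
print("communities:", len(set(labels)), "cells flagged as noisy:", len(noisy))
```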

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 86
5164 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches

Authors: H. Bonakdari, I. Ebtehaj

Abstract:

The prediction of scour depth around bridge piers is frequently considered in river engineering, and scour depth estimation around bridge piers is a key aspect of efficient and optimal bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters for scour depth prediction are determined via dimensional analysis, and the scour depth is subsequently predicted with the ANN and ANFIS methods. The performance of the methods is compared with that of the nonlinear regression (NLR) method, and the results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number, and the standard deviation of bed grain size as input parameters leads to optimal performance in scour depth estimation.
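To make the regression step concrete, the sketch below fits a small neural network to the five dimensionless groups listed above. The synthetic data, network size, and target relation are placeholders, not the authors' dataset or their ANFIS/ANN configuration.

```python
# Toy sketch: a small neural-network regression on the dimensionless groups named above.
# Synthetic data and the target relation are placeholders, not the authors' dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: [L/h, d50/h, b/h, Froude number, sigma_g]  (placeholder values)
X = rng.uniform(size=(200, 5))
y = 0.8 * X[:, 2] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)  # synthetic relative scour depth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```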

Keywords: adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), bridge pier, scour depth, nonlinear regression (NLR)

Procedia PDF Downloads 202
5163 EWMA and MEWMA Control Charts for Monitoring Mean and Variance in Industrial Processes

Authors: L. A. Toro, N. Prieto, J. J. Vargas

Abstract:

There are many control charts for monitoring mean and variance. Among these, the X-bar and R, X-bar and S, S², Hotelling, and Shewhart control charts, to mention a few, are widely used for monitoring mean and variance in industrial processes. In particular, Shewhart charts are based only on the information about the process contained in the current observation and ignore any information given by the entire sequence of points; in other words, the Shewhart chart is a control chart without memory. Consequently, Shewhart control charts are less sensitive in detecting smaller shifts, particularly shifts smaller than 1.5 times the standard deviation. Such small shifts are important in many industrial applications. In this study, an effective alternative to the Shewhart control chart was implemented: an Exponentially Weighted Moving Average (EWMA) control chart for univariate processes and a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart for multivariate processes. Both of these charts have memory and perform better than the Shewhart chart in detecting smaller shifts. In these charts, information from past samples is accumulated up to the current sample before the decision about the state of process control is taken. This characteristic of the EWMA and MEWMA charts is of paramount importance when controlling industrial processes, because it makes it possible to correct or predict problems in the processes before they reach a dangerous limit.
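For reference, the univariate EWMA chart mentioned above takes the standard textbook form sketched below; the smoothing constant lam = 0.2 and width L = 3 are common defaults, not values taken from the paper.

```python
# Standard EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, with time-varying limits
# mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t))).  Textbook form, not the authors' code.
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    z = np.empty(len(x))
    prev = mu0
    signals = []
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(prev - mu0) > half_width:
            signals.append(t)          # out-of-control signal at observation t
    return z, signals

# Example: a small sustained shift of 1 sigma after observation 25.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 25), rng.normal(1, 1, 25)])
_, alarms = ewma_chart(data, mu0=0.0, sigma=1.0)
print("first out-of-control signal at index:", alarms[0] if alarms else None)
```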

Keywords: control charts, multivariate exponentially weighted moving average (MEWMA), exponentially weighted moving average (EWMA), industrial process control

Procedia PDF Downloads 334
5162 Formation Flying Design Applied for an Aurora Borealis Monitoring Mission

Authors: Thais Cardoso Franco, Caio Nahuel Sousa Fagonde, Willer Gomes dos Santos

Abstract:

Aurora Borealis is an optical phenomenon consisting of luminous events observed in the night sky of the polar regions. It results from disturbances in the magnetosphere caused by the impact of solar wind particles on the Earth's upper atmosphere, channeled by the Earth's magnetic field, which excites atmospheric molecules and makes them emit electromagnetic radiation, leading to the display of lights in the sky. However, several implications of this phenomenon are still under study: high-intensity auroras are often accompanied by geomagnetic storms that cause blackouts on Earth and impair the transmission of signals from Global Navigation Satellite Systems (GNSS). Auroras are also known to occur on other planets and exoplanets, so the activity is an indicator of active space weather conditions that can aid in learning about the planetary environment. In order to improve understanding of the phenomenon, this research aims to design a satellite formation flying solution for collecting and transmitting data to monitor the aurora borealis in the northern hemisphere, an approach that allows the event to be studied with multipoint data collection in a reduced time interval, from the onset of the phenomenon until its decline. To this end, the ideal number of satellites, the spacing between them, and the ideal topology will be analyzed. From an orbital study, approaches with different altitudes, eccentricities, and inclinations will also be considered. Given that controllers tend to fail at large relative distances between satellites in formation, a study on the efficiency of nonlinear adaptive control methods, from the point of view of position maintenance and propellant consumption, will be carried out. The main orbital perturbations considered in the simulation are the Earth's gravitational non-homogeneity, atmospheric drag, the gravitational action of the Sun and the Moon, accelerations due to solar radiation pressure, and relativistic effects.

Keywords: formation flying, nonlinear adaptive control method, aurora borealis, adaptive SDRE method

Procedia PDF Downloads 11
5161 Using Data Mining Techniques to Evaluate the Different Factors Affecting the Academic Performance of Students at the Faculty of Information Technology in Hashemite University in Jordan

Authors: Feras Hanandeh, Majdi Shannag

Abstract:

This research studies the different factors that could affect the accumulative average of students at the Faculty of Information Technology in the Hashemite University. The paper examines student information, background, and academic records, and how this information affects whether students obtain high grades. The student information used in the study is extracted from the students' academic records. Data mining tools and techniques are used to decide which attribute(s) affect the students' accumulative average. The results show that the most important factor affecting the students' accumulative average is the student's acceptance type. We also built a decision tree model and rules to determine how students can obtain high grades in their courses. The overall accuracy of the model is 44%, which is an acceptable rate.
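As an illustration of the classification step, the sketch below trains a small decision tree on a toy table; the column names, values, and the encoding of acceptance type are hypothetical stand-ins, not the actual student records.

```python
# Toy decision-tree sketch (hypothetical columns and values, not the university's records).
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "acceptance_type": ["regular", "parallel", "regular", "international", "parallel", "regular"],
    "high_school_avg": [88, 75, 92, 81, 70, 95],
    "high_grades":     [1, 0, 1, 0, 0, 1],   # target: 1 = accumulative average above a threshold
})
X = data[["acceptance_type", "high_school_avg"]].copy()
X["acceptance_type"] = OrdinalEncoder().fit_transform(X[["acceptance_type"]]).ravel()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, data["high_grades"])
print(export_text(clf, feature_names=["acceptance_type", "high_school_avg"]))
```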

Keywords: data mining, classification, extracting rules, decision tree

Procedia PDF Downloads 398
5160 Performance of PAPR Reduction in OFDM Systems for Wireless Communications

Authors: Alcardo Alex Barakabitze, Saddam Aziz, Muhammad Zubair

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a special form of multicarrier transmission that splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits the collection of bits, called symbols, in parallel over these subcarriers. In this paper, we explore the peak-to-average power ratio (PAPR) problem in OFDM systems and provide a performance analysis of the CCDF and BER through MATLAB simulations.
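The quantity being studied can be made concrete with a short sketch that generates random OFDM symbols, computes the PAPR of each time-domain symbol, and estimates the CCDF at one threshold. The subcarrier count, modulation, and threshold are illustrative choices, not the paper's simulation settings.

```python
# Minimal sketch of the PAPR of plain OFDM symbols (no reduction technique applied).
import numpy as np

N_SUBCARRIERS, N_SYMBOLS = 64, 10_000
rng = np.random.default_rng(0)

# Random QPSK symbols on each subcarrier, converted to the time domain by an IFFT.
bits = rng.integers(0, 4, size=(N_SYMBOLS, N_SUBCARRIERS))
freq_symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
x = np.fft.ifft(freq_symbols, axis=1)

# PAPR per OFDM symbol: peak instantaneous power over average power, in dB.
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2, axis=1) / np.mean(np.abs(x) ** 2, axis=1))
threshold_db = 8.0
ccdf = np.mean(papr_db > threshold_db)   # one point of the CCDF curve
print(f"P(PAPR > {threshold_db} dB) = {ccdf:.4f}")
```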

Keywords: bit error ratio (BER), OFDM, peak-to-average power ratio (PAPR), sub-carriers

Procedia PDF Downloads 522
5159 A Modeling Approach for Blockchain-Oriented Information Systems Design

Authors: Jiaqi Yan, Yani Shi

Abstract:

Blockchain technology is regarded as a highly promising technology with the potential to trigger a technological revolution. However, outside the bitcoin industry, we have not yet seen large-scale applications of blockchain in the domains that are supposed to be impacted, such as supply chains, financial networks, and intelligent manufacturing. The reasons lie not only in the difficulties of blockchain implementation but are also rooted in the challenges of blockchain-oriented information systems design. First, blockchain members are self-interested actors belonging to organizations with different existing information systems; since they expect different information inputs and outputs from the blockchain application, a common language protocol is needed to facilitate communication between blockchain members. Second, considering the decentralization of blockchain organization, there is no central authority to organize and coordinate the business processes, so information systems built on blockchain should support more adaptive business processes. This paper aims to address these difficulties by providing a modeling approach for blockchain-oriented information systems design. We investigate the information structure of distributed-ledger data with conceptual modeling techniques and ontology theories, and build an effective ontology mapping method for inter-organization information flow and blockchain information records. Further, we study distributed-ledger-ontology based business process modeling to support adaptive enterprises on blockchain.

Keywords: blockchain, ontology, information systems modeling, business process

Procedia PDF Downloads 414
5158 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data

Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan

Abstract:

Clinical data analysis and forecasting have made great contributions to disease control, prevention, and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we target binary imbalanced datasets, where the positive samples make up only a minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine both of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach is to process the full dataset as a whole; the other is to split up the dataset and adaptively process it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former method do not scale to larger datasets, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets, and we find it more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, leading to more credible classifier performance and a shorter running time compared with the brute-force method.
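A minimal sketch of the parameter-tuning idea is given below, with a random search standing in for the PSO/bat optimizers used in the paper; the dataset, classifier, and search ranges are illustrative assumptions.

```python
# Sketch of tuning SMOTE's two key parameters (over-sampling ratio and k_neighbors);
# a random search stands in for the swarm optimisers, and the dataset is synthetic.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
rng = np.random.default_rng(0)
best_params, best_score = None, -np.inf
for _ in range(20):                                  # stands in for swarm iterations
    ratio = rng.uniform(0.2, 1.0)                    # minority/majority ratio after SMOTE
    k = int(rng.integers(3, 10))                     # neighbours used to synthesise samples
    pipe = Pipeline([
        ("smote", SMOTE(sampling_strategy=ratio, k_neighbors=k, random_state=0)),
        ("clf", DecisionTreeClassifier(random_state=0)),
    ])
    score = cross_val_score(pipe, X, y, scoring="f1", cv=5).mean()
    if score > best_score:
        best_params, best_score = (ratio, k), score
print("best (ratio, k_neighbors):", best_params, "cross-validated F1:", round(best_score, 3))
```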

Keywords: imbalanced dataset, meta-heuristic algorithm, SMOTE, big data

Procedia PDF Downloads 424
5157 Women Empowerment, Joint Income Ownership and Planning for Building Household Resilience on Climate Change: The Case of Kilimanjaro Region, Tanzania

Authors: S. I. Mwasha, Z. Robinson, M. Musgrave

Abstract:

Communities, especially in the global south, have been reported to have low adaptive capacity to cope with climate change impacts. In attempting to improve adaptive capacity, most studies have focused on understanding household access to resources that can contribute to resilience against change. However, little attention has been paid to uncovering how household resources are used and their implications for resilience against weather-related shocks. Using a qualitative case study, this project analyzed trends in livelihood practices and their implications for social equity. The study was done in three different villages within the Kilimanjaro region, each in a different agro-ecological zone. Two focus group discussions were held in two agro-ecological zones, one for women and another for men, except in the third zone, where focus group participants were combined (due to unforeseen circumstances). In the focus group discussions, several participatory rural appraisal tools were used to understand trends in crop and animal production and the uses to which they are put, as well as climate trends, soil fertility, trees, and other livelihood resources. Data were analyzed using thematic network analysis. Using a combination of magnitude coding (to note whether comments were positive or negative) and descriptive coding (to note the topic), six basic themes were identified under social equity: individual ownership, family ownership, love and respect, women's lack of education, women's access to education, and women's access to loans. The results implied that although both mothers and fathers in a household provide labor for agro-pastoral activities, there were separations in who owns what, as well as in individual obligations within the family. Fathers mostly owned the income-generating crops and mothers the food crops; men therefore controlled the economy, which made some of them become arrogant and spend money to meet their own interests, sometimes not taking care of the family. Separation in ownership was reported to contribute to conflicts in the household and to cause controversy over how income is spent. Men were reported to use income to reinforce the patriarchal system. However, as women were empowered through access to education and loans, they became closer to their husbands and gained the ability to own and plan income together in the interest of the family. Joint ownership and planning of household resources were reported to be important if families are to adapt better to climate change. The aim of this study is not to present women's empowerment and joint ownership and planning as the only remedy for low adaptive capacity; there is a need to understand other practices that directly or indirectly affect environmental integrity, food security, and economic development for household resilience against a changing climate.

Keywords: adaptive capacity, climate change, resilience, women empowerment

Procedia PDF Downloads 149
5156 Epidemiological Profile of Tuberculosis Disease in Meknes, Morocco: Descriptive Analysis, 2016-2020

Authors: A. Lakhal, M. Bahalou, A. Khattabi

Abstract:

Introduction: Tuberculosis is one of the world's deadliest infectious diseases. In Morocco, a total of 30,636 cases of tuberculosis, all forms combined, were reported in 2015, representing an incidence of 89 cases per 100,000 population. The number of deaths from tuberculosis (TB) was 656. In the prefecture of Meknes, the incidence remains high compared to the national level. The objective of this work is to describe the epidemiological profile of tuberculosis in the prefecture of Meknes. Methods: This is a descriptive analysis of TB cases reported between 2016 and 2020 at the regional diagnostic center for tuberculosis and respiratory diseases. We performed the analysis using Microsoft Excel and Epi Info 7. Results: Epidemiological data from 2016 to 2020 report a total of 4,100 new cases of all forms of tuberculosis, with an average of 820 new cases per year. The median age is 32 years. There is a clear male predominance: on average, 58% of cases are male and 42% female. The incidence rate of bacteriologically confirmed tuberculosis increased from 35 cases per 100,000 inhabitants in 2016 to 39.4 cases per 100,000 inhabitants in 2020. The confirmation rate for pulmonary tuberculosis decreased from 84% in 2016 to 75% in 2020. Pulmonary involvement predominates, with an average of 46%, followed by lymph node involvement (29%) and pleural involvement (an average of 10%). Digestive, osteoarticular, genitourinary, and meningeal involvement occurs in 8% of cases. Primary tuberculosis infection occurs in an average of 0.5% of cases. The proportion of HIV-TB co-infection was 2.8% in 2020. Conclusion: The incidence of tuberculosis in Meknes remains high compared to the national level. Thus, it is imperative to reinforce earlier detection, improve contact tracing and case detection methods for confirmation and treatment, and reduce the proportion of patients lost to follow-up.

Keywords: tuberculosis, epidemiological profile, Meknes, Morocco

Procedia PDF Downloads 142
5155 Voluntary Water Intake of Flavored Water in Euhydrated Horses

Authors: Brianna M. Soule, Jesslyn A. Bryk-Lucy, Linda M. Ritchie

Abstract:

Colic, defined as abdominal pain in the horse, has several known predisposing factors. Decreased water intake has been shown to predispose equines to impaction colic. The objective of this study was to determine if offering flavored water (sweet feed or banana extract) would increase voluntary water intake in horses to serve as an assessable, noninvasive method for farm managers, veterinarians, or owners to decrease the risk of impaction colic. An a priori power analysis, which was conducted using G*Power version 3.1.9.7, indicated that the minimum sample size required to achieve 80% power for detecting a large effect at a significance level of α = .05 was 19 horses for a one-way repeated measures ANOVA with three treatment levels and assuming a non-sphericity correction of ε=0.5. After a three-day control period, 21 horses were randomly divided into two sequences and offered either banana or sweet feed flavored water. Horses always had a bucket of unflavored water available. A repeated measure study design was used to measure water consumption of each horse over a 62-hour period. A one-way repeated measures ANOVA was conducted to determine whether there were statistically significant differences among the means for the three-day average water intake (ml/kg). Although not statistically significant (F(2, 38) = 1.28, p = .290, partial η2 = .063), the three-day average water intake was largest for banana flavored water (M = 53.51, SD = 9.25 ml/kg), followed by sweet feed (M = 52.93, SD = 11.99 ml/kg), and, finally, unflavored water (M = 50.40, SD = 10.82 ml/kg). Paired-samples t-tests were used to determine whether there was a statistically significant difference between the three-day average water intake (ml/kg) for flavored versus unflavored water. The average unflavored water intake (M = 29.3 ml/kg, SD = 8.9) over the measurement period was greater than the banana flavored water (M = 27.7 ml/kg, SD = 9.8), but the average consumption of the sweet feed flavored water (M = 30.4 ml/kg, SD = 14.6) was greater than unflavored water (M = 24.3 ml/kg, SD = 11.4). None of these differences in average intake were statistically significant (p > .244). Future research is warranted to determine if other flavors significantly increase voluntary water intake in horses.

Keywords: colic, equine, equine science, water intake, flavored water, horses, equine management, equine health, horse health, horse health care management, colic prevention

Procedia PDF Downloads 123
5154 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach to deal with model uncertainty, which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model average estimator outperforms some other commonly used model selection and model averaging estimators in some situations. The proposed method is further applied to a real data example, where its advantage is demonstrated again.

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 500
5153 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional management models for intersections, such as no-light intersections or signalized intersections, are not the most effective way of passing vehicles through an intersection if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In the AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated. We extended their work and added a potential-based lane organization layer. In order to distribute vehicles evenly across the lanes, this layer triggers vehicles to analyze nearby lanes, and they change lane if another lane offers an advantage. We can observe this behavior in real life: drivers change lanes based on intuition, and the basic intuition for selecting the correct lane is to select a less crowded lane in order to reduce delay. We model that behavior without any change in the AIM workflow. Experiment results show that intersection performance is directly connected to the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it (see the toy sketch below). We observed that regulating the inputs to AIM, which are the vehicles in the lanes, was effective in improving intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% for 4-lane and 6-lane scenarios.
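The lane-choice rule can be illustrated with a toy function; the potential used here (queue length plus a lane-change penalty) is a stand-in, since the abstract does not specify the exact potential function used in PLO-AIM.

```python
# Toy potential-based lane choice (the queue-length potential is a stand-in for PLO-AIM's).
def choose_lane(current_lane, queue_lengths, switch_penalty=1.0):
    """Return the index of the lane with the lowest potential."""
    def potential(lane):
        # Hypothetical potential: vehicles queued ahead plus a cost for changing lanes.
        return queue_lengths[lane] + (0.0 if lane == current_lane else switch_penalty)
    return min(range(len(queue_lengths)), key=potential)

print(choose_lane(0, [7, 3, 5]))  # -> 1: the less crowded neighbouring lane wins
```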

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 123
5152 An Adaptive Application of Emotionally Focused Couple Therapy with Trans and Nonbinary Couples

Authors: Reihaneh Mahdavishahri, Dumayi Gutierrez

Abstract:

Emotionally focused couple therapy (EFCT) is one of the most effective and evidence-based approaches to couple therapy. Yet, literature on its effective application with trans and nonbinary couples is scarce. It is estimated that 1.4 million trans adults live in the United States, with about 40% of these individuals having experienced serious psychological distress within the past month. Trans and nonbinary adults are significantly more likely to experience discrimination, harassment, family rejection, and relationship challenges throughout the course of their lives. For systemic therapists, applying an informed lens when working with trans and nonbinary couples can contribute to providing effective mental health care to these individuals. This paper aims to provide a comprehensive, intersectional, and culturally informed application of EFCT with trans and nonbinary couples. We address the current literature on applications of EFCT with diverse couples, EFCT's strengths and limitations regarding cultural humility, and the gaps within current systems of care for trans and nonbinary couples. We then provide an adaptive application of EFCT to help trans and nonbinary couples recover from potential attachment injuries in their relationships and intersecting gender minority stressors, and achieve healing and restoration in their interpersonal dynamics.

Keywords: attachment, culturally informed care, emotionally focused couple therapy, trans and nonbinary couples

Procedia PDF Downloads 58
5151 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit through speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that build a mathematical filter for investment opportunities, and presents the methodology to integrate all of these into automated investment software. The paper also reports trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over 24 months of investment. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 117
5150 Determination of Natural Gamma Radioactivity in Sand along the Black Sea Coastal Region of Giresun, North Turkey

Authors: A. Karadeniz, Belgin Kucukomeroglu

Abstract:

In this study, natural gamma radioactivity levels were determined in sands along the coastal region of Giresun, Turkey. The coast of Giresun, about 290 km long, was investigated and 101 sand samples were collected. Natural and artificial radioactivity concentrations of the sand samples were measured using HPGe gamma spectrometry. The average activity concentrations of 238U, 232Th, 40K, and 137Cs in the sand samples of Giresun were found to be 10.83±2.92 Bq/kg, 21.28±3.22 Bq/kg, 6.42±1.06 Bq/kg, and 230.94±10.67 Bq/kg, respectively. The average activity concentrations of these radionuclides were compared with reported data from other parts of Turkey and from other countries. The average absorbed dose rate for Giresun was calculated to be 38.68 nGy/h, which is significantly lower than the world average value of 60 nGy/h. The external annual effective dose in Giresun was found to be 0.047 mSv/y, much lower than the recommended limit of 5 mSv/y. The external hazard index for Giresun was calculated to be 0.21, much lower than the recommended limit of 1.0.
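For reference, the quoted annual effective dose follows from the average absorbed dose rate through the standard outdoor conversion (a worked check; the occupancy factor 0.2 and the conversion coefficient 0.7 Sv/Gy are the usual UNSCEAR values and are assumed here, not stated in the abstract):

```latex
E_{\mathrm{out}} = \dot{D} \times T \times OF \times CC
= 38.68\ \mathrm{nGy/h} \times 8760\ \mathrm{h/y} \times 0.2 \times 0.7\ \mathrm{Sv/Gy}
\approx 4.7 \times 10^{4}\ \mathrm{nSv/y} \approx 0.047\ \mathrm{mSv/y}.
```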

Keywords: concentration, radioactivity, Giresun, natural gamma radioactivity

Procedia PDF Downloads 376
5149 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 347
5148 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman

Abstract:

This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar for a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport the system on a small robotic vehicle. We designed our radar system with two different channels: Tx and Rx. The system mainly consists of a Voltage Controlled Oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated, and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation mode switch is 'off', only a CW signal is transmitted for speed measurement. When the switch is 'on', the CW signal is frequency-modulated and range detection becomes possible. In speed detection mode, a high-frequency signal (2.6 GHz) is generated by the VCO and then amplified to reach a reasonable transmit power level. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and produces the IF output, in other words the Doppler frequency information. The IF output is then filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed to the audio input of a computer. After acquisition, the Doppler frequency is displayed as a speed reading in a figure via a MATLAB script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp, which helps to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
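For the speed-detection mode, the standard monostatic Doppler relation (a textbook formula, not taken from the paper) converts the measured Doppler shift into radial target speed; the numbers below are illustrative.

```python
# Standard CW Doppler relation: f_d = 2 * v * f0 / c, hence v = f_d * c / (2 * f0).
C = 3.0e8    # speed of light, m/s
F0 = 2.6e9   # carrier frequency of the prototype, Hz

def doppler_speed(f_doppler_hz: float) -> float:
    """Radial target speed in m/s from the measured Doppler shift in Hz."""
    return f_doppler_hz * C / (2 * F0)

print(doppler_speed(173.0))  # ~10 m/s (36 km/h) for a 173 Hz shift at 2.6 GHz
```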

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 378
5147 Distribution-Free Exponentially Weighted Moving Average Control Charts for Monitoring Process Variability

Authors: Chen-Fang Tsai, Shin-Li Lu

Abstract:

Distribution-free control charts have been an emerging area of statistical process control in recent years. Some researchers have developed various nonparametric control charts and investigated their detection capability. The major advantage of nonparametric control charts is that the underlying process is not required to satisfy the assumption of normality or any specific parametric distribution. In this paper, two nonparametric exponentially weighted moving average (EWMA) control charts based on nonparametric tests, namely the NE-S and NE-M control charts, are proposed for monitoring process variability. They are further extended to generally weighted moving average (GWMA) control charts, namely the NG-S and NG-M control charts, which utilize design and adjustment parameters for monitoring changes in process variability. The statistical performance of the NG-S and NG-M control charts with run rules is also investigated. Moreover, a sensitivity analysis is performed to show the effects of the design parameters on the nonparametric NG-S and NG-M control charts.

Keywords: distribution-free control chart, EWMA control charts, GWMA control charts

Procedia PDF Downloads 251
5146 Radon-222 Concentration and Potential Risk to Workers of Al-Jalamid Phosphate Mines, North Province, Saudi Arabia

Authors: El-Said. I. Shabana, Mohammad S. Tayeb, Maher M. T. Qutub, Abdulraheem A. Kinsara

Abstract:

Usually, phosphate deposits contain 238U and 232Th in addition to their decay products. Due to their different pathways in the environment, the 238U/232Th activity concentration ratio is usually found to be greater than unity in phosphate sediments. The presence of these radionuclides creates a need to control the exposure of workers during the mining and processing of phosphate minerals, in accordance with IAEA safety standards. The greatest dose to workers comes from exposure to radon, especially 222Rn from the uranium series, and this has to be controlled. In this regard, radon (222Rn) was measured in the atmosphere (indoor and outdoor) of the Al-Jalamid phosphate-mine working area using a portable radon-measurement instrument, RAD7, for the purpose of radiation protection. Radon was measured at 61 sites inside the open phosphate mines, in the phosphate upgrading facility (offices and rooms of the workers, and some open-air sites), and in the dwellings of the workers' residence village, which lies about 3 km from the mine working area. The results indicated that the average indoor radon concentration was about 48.4 Bq/m3. Inside the upgrading facility, the average outdoor concentrations were 10.8 and 9.7 Bq/m3 in the concentrate pile and crushing areas, respectively, and 12.3 Bq/m3 in the atmosphere of the open mines. These values are comparable with global average values. Based on the average values, the annual effective dose due to radon inhalation was calculated and risk estimates were made. The average annual effective dose to workers due to radon inhalation was estimated at 1.32 mSv. The potential excess risk of lung cancer mortality that could be attributed to radon, considering lifetime exposure, was estimated at 53.0 × 10⁻⁴. The results are discussed in detail.

Keywords: dosimetry, environmental monitoring, phosphate deposits, radiation protection, radon

Procedia PDF Downloads 253
5145 Rare Earth Element (REE) Geochemistry of Tepeköy Sandstones (Central Anatolia, Turkey)

Authors: Mehmet Yavuz Hüseyinca, Şuayip Küpeli

Abstract:

Sandstones from the Upper Eocene - Oligocene Tepeköy formation (a member of the Mezgit Group) exposed on the eastern edge of Tuz Gölü (Salt Lake) were analyzed for their rare earth element (REE) contents. Average concentrations of ΣREE, ΣLREE (total light rare earth elements), and ΣHREE (total heavy rare earth elements) were determined as 31.37, 26.47, and 4.55 ppm, respectively. These values are lower than those of the UCC (upper continental crust), which indicates a grain-size and/or CaO dilution effect. The chondrite-normalized REE pattern is characterized by average ratios of (La/Yb)cn = 6.20, (La/Sm)cn = 4.06, (Gd/Lu)cn = 1.10, Eu/Eu* = 0.99, and Ce/Ce* = 0.94. The low ΣLREE/ΣHREE ratio (average 5.97) and (La/Yb)cn values suggest low overall REE fractionation. Moreover, the (La/Sm)cn and (Gd/Lu)cn ratios define a less inclined LREE pattern and an almost flat HREE pattern when compared with the UCC. The near-absence of a Ce anomaly (Ce/Ce*) emphasizes that the REE originated from terrigenous material, and the depleted LREE together with the lack of an Eu anomaly (Eu/Eu*) suggest an undifferentiated mafic provenance for the sandstones.
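For reference, the Eu and Ce anomalies quoted above are conventionally computed from chondrite-normalized (cn) concentrations as geometric-mean interpolations of the neighbouring elements; the Ce expression is one common convention (these standard definitions are not restated in the abstract):

```latex
\mathrm{Eu/Eu^{*}} = \frac{\mathrm{Eu_{cn}}}{\sqrt{\mathrm{Sm_{cn}}\,\mathrm{Gd_{cn}}}},
\qquad
\mathrm{Ce/Ce^{*}} = \frac{\mathrm{Ce_{cn}}}{\sqrt{\mathrm{La_{cn}}\,\mathrm{Pr_{cn}}}}
```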

Keywords: central Anatolia, provenance, rare earth elements, REE, Tepeköy sandstone

Procedia PDF Downloads 444
5144 Computational Analyses of Persian Walnut Genetic Data: Notes on Genetic Diversity and Cultivar Phylogeny

Authors: Masoud Sheidaei, Melica Tabasi, Fahimeh Koohdar, Mona Sheidaei

Abstract:

Juglans regia L. is an economically important species of edible nuts. Iran is known as a center of origin of genetically rich walnut germplasm, and a large diversity is expected to be found within Iranian walnut populations. A detailed population genetic study of local populations is useful for developing an optimal strategy for in situ conservation and can assist breeders in crop improvement programs. Different phylogenetic studies have been carried out in this genus, but none has been concerned with the genetic changes associated with geographical divergence and the identification of adaptive SNPs. Therefore, we carried out the present study to identify discriminating ITS nucleotides among Juglans species and also to reveal associations between ITS SNPs and geographical variables. We used different computational approaches, such as DAPC, CCA, and RDA analyses, for the above-mentioned tasks. We also performed population genetic analyses of effective population size changes associated with the species' expansion. The results obtained suggest that latitudinal distribution has a more profound effect on the species' genetic changes. Similarly, the multiple analytical approaches utilized for the identification of discriminating DNA nucleotides/SNPs produced largely congruent results. SNPs with different phylogenetic importance were also identified using a parsimony approach.

Keywords: Persian walnut, adaptive SNPs, data analyses, genetic diversity

Procedia PDF Downloads 108
5143 Inventory Optimization in Restaurant Supply Chain Outlets

Authors: Raja Kannusamy

Abstract:

The research focuses on reducing food waste in the restaurant industry. A study has been conducted on a chain of retail restaurant outlets. It has been observed that food wastage is due to the inefficient inventory management systems practiced in the restaurant outlets. The major food items wasted in the largest quantities were selected across the retail chain outlets. A moving average forecasting method was applied to the selected food items so that their future demand could be predicted accurately and food wastage could be avoided. It was found that the moving average method produces accurate forecasts: the demand values obtained from the moving average method were compared to the actual demand values and found to be similar, with minimal variation. The inventory optimization technique helps in reducing food wastage in restaurant supply chain outlets.
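A minimal sketch of the moving-average forecasting step is shown below; the window length and the demand series are illustrative placeholders, not the study's data.

```python
# Simple moving-average forecast: the next-period forecast is the mean of the last
# WINDOW observations; one-step-ahead forecasts over the history are compared with actuals.
import pandas as pd

demand = pd.Series([120, 135, 128, 140, 150, 138, 145], name="units")  # weekly demand of one item
WINDOW = 3
forecast_next = demand.tail(WINDOW).mean()                  # forecast for the next period
history_forecast = demand.rolling(WINDOW).mean().shift(1)   # one-step-ahead forecasts
mae = (demand - history_forecast).abs().mean()              # deviation from actual demand
print(f"next-period forecast: {forecast_next:.1f} units, in-sample MAE: {mae:.1f}")
```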

Keywords: food wastage, restaurant supply chain, inventory optimisation, demand forecasting

Procedia PDF Downloads 70
5142 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, µ_Com(E), and the photoelectric effect, µ_Ph(E). In terms of the electron density ρe and the effective atomic number Zeff, µ_Com(E) is proportional to ρe·f_KN(E), while µ_Ph(E) is proportional to ρe·Zeff^x/E^y, where f_KN(E) is the Klein-Nishina formula and x and y are the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µ_Com(E) and µ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper, we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness l_Al, with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state detector and of a GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data from solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible; the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
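The two-voltage inversion described above can be written compactly as a 2x2 linear system (notation follows the abstract, with proportionality constants absorbed into the averages; the bracketed quantities denote averaging over S(E,V) and D(E)):

```latex
\langle\mu(V)\rangle = X\,\langle f_{KN}(E)\rangle_{V} + Y\,\langle E^{-y}\rangle_{V},
\qquad X=\rho_e,\quad Y=\rho_e Z_{\mathrm{eff}}^{x},
\qquad
\begin{pmatrix} \langle\mu(V_1)\rangle \\ \langle\mu(V_2)\rangle \end{pmatrix}
=
\begin{pmatrix}
\langle f_{KN}(E)\rangle_{V_1} & \langle E^{-y}\rangle_{V_1}\\
\langle f_{KN}(E)\rangle_{V_2} & \langle E^{-y}\rangle_{V_2}
\end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix}
```

X and Y are then recovered by inverting the 2x2 matrix; the entries of that inverse are the coefficients of inversion, which is why they depend on the source spectrum and the detector efficiency.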

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 383
5141 MARTI and MRSD: Newly Developed Isolation-Damping Devices with Adaptive Hardening for Seismic Protection of Structures

Authors: Murat Dicleli, Ali Salem Milani

Abstract:

In this paper, a summary of analytical and experimental studies into the behavior of a new hysteretic damper designed for seismic protection of structures is presented. The Multi-directional Torsional Hysteretic Damper (MRSD) is a patented invention in which a symmetrical arrangement of identical cylindrical steel cores is configured to yield in torsion while the structure experiences planar movements due to earthquake shaking. The new device has certain desirable properties. Notably, it is characterized by a variable and controllable-via-design post-elastic stiffness. This property is a result of the MRSD's kinematic configuration, which produces geometric hardening, rather than being a secondary large-displacement effect. Additionally, the new system is capable of reaching high force and displacement capacities, shows high levels of damping, and exhibits a very stable cyclic response. The device has gone through many stages of design refinement, multiple prototype verification tests, and the development of design guidelines and computer codes to facilitate its implementation in practice. The practicality of the new device, as an offspring of an academic sphere, is assured through extensive collaboration with industry in its final design stages, prototyping, and verification test programs.

Keywords: seismic, isolation, damper, adaptive stiffness

Procedia PDF Downloads 442
5140 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of the person's latent trait measured by statement A and statement A's utility, versus the sum of the person's latent trait measured by statement B and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., "my life is empty") many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items are frozen when their statements have been administered more than a prespecified number of times. In the second control method, a random component is added to control the contribution of the information at different stages of MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: a work survey with dichotomous MPC items and a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful and that there was no loss in measurement efficiency when the control methods were implemented; among the four methods, the last method performed best.
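In symbols, the RIM decision rule described above can be written as follows (a sketch of the stated log-odds structure; theta denotes the latent traits, delta the statement utilities, and d(A), d(B) the constructs measured by statements A and B, none of which are symbols used in the abstract itself):

```latex
\log\frac{P(\text{prefer } A)}{P(\text{prefer } B)}
= \bigl(\theta_{d(A)} + \delta_{A}\bigr) - \bigl(\theta_{d(B)} + \delta_{B}\bigr)
```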

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 234
5139 Numerical Study of Mixed Convection Coupled to Radiation in a Square Cavity with a Lid-Driven

Authors: Belmiloud Mohamed Amine, Sad Chemloul Nord-Eddine

Abstract:

In this study, we numerically investigated heat transfer by mixed convection coupled with radiation in a square cavity whose upper horizontal wall is movable. The purpose of this study is to examine the influence of the emissivity and of varying the Richardson number on the average Nusselt number. The vertical walls of the cavity are differentially heated: the left wall is maintained at a uniform temperature higher than the right wall, and the two horizontal walls are adiabatic. The finite volume method is used to solve the dimensionless governing equations. Emissivity values used in this study range between 0 and 1, and the Richardson number ranges from 0.1 to 10. The Rayleigh number is fixed at Ra = 10000 and the Prandtl number is maintained constant at Pr = 0.71. Streamlines, isothermal lines, and the average Nusselt number are presented as functions of the surface emissivity. The results of this study show that the Richardson number and the emissivity affect the average Nusselt number.

Keywords: mixed convection, square cavity, wall emissivity, lid-driven, numerical study

Procedia PDF Downloads 316
5138 An Occupational Health Risk Assessment for Exposure to Benzene, Toluene, Ethylbenzene and Xylenes: A Case Study of Informal Traders in a Metro Centre (Taxi Rank) in South Africa

Authors: Makhosazana Dubazana

Abstract:

Many South African commuters use minibus taxis daily and are connected to the informal transport network through metro centres, informally known as taxi ranks. Taxi ranks form part of an economic nexus for many informal traders, connecting them to commuters, their prime clientele. They work in designated areas on the periphery of the taxi rank and in between taxi lanes. Informal traders are therefore at risk of adverse health effects associated with the inhalation of exhaust fumes from minibus taxis. Of the exhaust emissions, benzene, toluene, ethylbenzene, and xylenes (BTEX) have high toxicity. Purpose: The purpose of this study was to conduct a human health risk assessment for informal traders, looking at their exposure to BTEX compounds. Methods: The study was conducted in a subsection of a taxi rank that is representative of the entire taxi rank. This subsection has a daily average of 400 minibus taxis moving through it and an average of 60 informal traders working in it. In the health risk assessment, a questionnaire was administered to understand the occupational behaviour of the informal traders; this was used to deduce the exposure scenarios and sampling locations. Three sampling campaigns were run for an average of 10 hours each, covering the average working hours of traders. A gas chromatograph was used to collect continuous ambient air samples at 15-minute intervals. Results: Over the three sampling days, the average concentrations were 8.46 ppb, 0.63 ppb, 1.27 ppb, and 1.0 ppb for benzene, toluene, ethylbenzene, and xylene, respectively. The average cancer risk is 9.46E-03. In several cases, there were instances of unacceptable risk for the cumulative exposure to all four BTEX compounds. Conclusion: This study adds to the body of knowledge on the human health risk effects of urban BTEX pollution, focusing in particular on the impact of urban BTEX on at-risk persons such as informal traders in Southern Africa.

Keywords: human health risk assessment, informal traders, occupational risk, urban BTEX

Procedia PDF Downloads 205