Search results for: variable step size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9962

9902 System Response of a Variable-Rate Aerial Application System

Authors: Daniel E. Martin, Chenghai Yang

Abstract:

Variable-rate aerial application systems are becoming more readily available; however, aerial applicators typically only use the systems for constant-rate application of materials, allowing the systems to compensate for upwind and downwind ground speed variations. Much of the resistance to variable-rate aerial application system adoption in the U.S. pertains to applicators’ trust in the systems to turn on and off automatically as desired. The objectives of this study were to evaluate a commercially available variable-rate aerial application system under field conditions to demonstrate both the response and accuracy of the system to desired application rate inputs. This study involved planting oats in a 35-acre fallow field during the winter months to establish a uniform green backdrop in early spring. A binary (on/off) prescription application map was generated, and a variable-rate aerial application of glyphosate was made to the field. Airborne multispectral imagery taken before and two weeks after the application documented actual field deposition and efficacy of the glyphosate. When compared to the prescription application map, these data provided application system response and accuracy information. The results of this study quantify and document the response and accuracy of a commercially available variable-rate aerial application system, so that aerial applicators can be more confident in its capabilities and adoption of these systems can increase, taking advantage of all that aerial variable-rate technologies have to offer.

Keywords: variable-rate, aerial application, remote sensing, precision application

Procedia PDF Downloads 439
9901 On a Single Server Queue with Arrivals in Batches of Variable Size, Generalized Coxian-2 Service and Compulsory Server Vacations

Authors: Kailash C. Madan

Abstract:

We study the steady state behaviour of a batch arrival single server queue in which the first service with general service times is compulsory and the second service with general service times is optional. We term such a two-phase service generalized Coxian-2 service. Just after completing a service, the server must take a vacation of random length with generally distributed vacation times. We obtain steady state probability generating functions for the queue size, as well as the steady state mean queue size at a random epoch, in explicit and closed forms. Some particular cases of interest, including some known results, have been derived.
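For orientation, the classical queue-size PGF for a batch-arrival M[X]/G/1 queue without the optional second service phase or vacations has the standard form below; the result derived in this paper generalizes this structure to generalized Coxian-2 service with compulsory vacations. Here X(z) is the batch-size PGF and B̃(s) the Laplace–Stieltjes transform of the service time:

```latex
P(z) = \frac{(1-\rho)\,(1-z)\,\tilde{B}\bigl(\lambda-\lambda X(z)\bigr)}
            {\tilde{B}\bigl(\lambda-\lambda X(z)\bigr) - z},
\qquad \rho = \lambda\,E[X]\,E[B] < 1 .
```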

Keywords: batch arrivals, compound Poisson process, generalized Coxian-2 service, steady state

Procedia PDF Downloads 424
9900 Sensitivity Analysis of Movable Bed Roughness Formula in Sandy Rivers

Authors: Mehdi Fuladipanah

Abstract:

Sensitivity analysis is applied as a technique to determine the influential input factors on model output. Variance-based sensitivity analysis has wider application than other methods because it covers both linear and non-linear models. In this paper, van Rijn’s movable bed roughness formula was selected for evaluation because of its reasonable results in sandy rivers. This equation contains four variables: flow depth, sediment size, bed form height and bed form length. The importance of these variables was determined using the first-order Fourier Amplitude Sensitivity Test (FAST). A sensitivity index was applied to evaluate the importance of each factor. The first-order FAST sensitivity indices explain 90% of the total variance, which satisfies the acceptance criterion for applying FAST. A higher index value indicates a more influential variable. Results show that bed form height, bed form length, sediment size and flow depth are the most influential factors, with sensitivity indices of 32%, 24%, 19% and 15%, respectively.
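As a hedged illustration of this workflow (not the authors’ code), the sketch below runs a first-order FAST analysis with the SALib Python library, using one common form of van Rijn’s roughness predictor, k_s = 3·d90 + 1.1·Δ·(1 − e^(−25Δ/λ)), as the model; the variable ranges are hypothetical placeholders, and flow depth influences the real problem through the flow model rather than this expression directly.

```python
# Minimal first-order FAST sketch with SALib; bounds are illustrative guesses.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 4,
    "names": ["flow_depth", "d90", "bedform_height", "bedform_length"],
    "bounds": [[0.5, 5.0], [0.0002, 0.002], [0.05, 0.5], [1.0, 10.0]],
}

X = fast_sampler.sample(problem, 1000)             # FAST sampling design
h, d90, H, L = X.T
Y = 3 * d90 + 1.1 * H * (1 - np.exp(-25 * H / L))  # van Rijn-type roughness k_s

Si = fast.analyze(problem, Y)                      # first-order indices S1
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name}: S1 = {s1:.2f}")
```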

Keywords: sensitivity analysis, variance, movable bed roughness formula, sandy rivers

Procedia PDF Downloads 225
9899 Pharmaceutical Scale-Up for Solid Dosage Forms

Authors: A. Shashank Tiwari, S. P. Mahapatra

Abstract:

Scale-up is defined as the process of increasing batch size. Scale-up of a process can also be viewed as a procedure for applying the same process to different output volumes. There is a subtle difference between these two definitions: batch size enlargement does not always translate into a size increase of the processing volume. In mixing applications, scale-up is indeed concerned with increasing the linear dimensions from the laboratory to the plant scale. On the other hand, processes exist (e.g., tableting) where the term ‘scale-up’ simply means enlarging the output by increasing the speed. To complete the picture, one should point out special procedures where an increase of scale is counterproductive and ‘scale-down’ is required to improve the quality of the product. In moving from Research and Development (R&D) to production scale, it is sometimes essential to have an intermediate batch scale. This is achieved at the so-called pilot scale, which is defined as the manufacture of drug product by a procedure fully representative of and simulating that used for full manufacturing scale. This scale also makes it possible to produce enough product for clinical testing and to manufacture samples for marketing. However, inserting an intermediate step between R&D and production scales does not, in itself, guarantee a smooth transition. A well-defined process may generate a perfect product both in the laboratory and the pilot plant and then fail quality assurance tests in production.

Keywords: scale up, research, size, batch

Procedia PDF Downloads 373
9898 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training records are fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, associated with sociodemographics, habits, and activities. Some are intentionally included to gain predictive analytics insight into variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically examines whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a part of the observations, such as bootstrap resampling with an appropriate sample size.
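The platform named in the abstract is proprietary, so the sketch below only illustrates the underlying idea with hypothetical basket data: items become nodes, co-occurrences become weighted arcs, and candidate clusters can be ranked by frequency and by surprise (observed vs. expected co-occurrence).

```python
# A toy co-occurrence graph ranking; rows and item names are invented.
from itertools import combinations
from collections import Counter

rows = [
    {"obesity", "hypertension", "low_activity"},
    {"obesity", "hypertension"},
    {"smoking", "hypertension"},
    {"obesity", "low_activity"},
]

n = len(rows)
item_freq = Counter(item for row in rows for item in row)
pair_freq = Counter(frozenset(p) for row in rows for p in combinations(sorted(row), 2))

# Surprise: observed pair frequency vs. frequency expected under independence.
for pair, obs in pair_freq.most_common():
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n
    print(f"{a} & {b}: observed={obs}, surprise={obs / expected:.2f}")
```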

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 242
9897 The Effect of MOOC-Based Distance Education in Academic Engagement and Its Components on Kerman University Students

Authors: Fariba Dortaj, Reza Asadinejad, Akram Dortaj, Atena Baziyar

Abstract:

The aim of this study was to determine the effect of distance education (based on MOOCs) on the components of academic engagement of Kerman PNU students. The research used a quasi-experimental method with single-stage cluster sampling of an appropriate volume (one class in the experimental group and one class in the control group). The statistical population consists of students of Kerman Payam Noor University, from which 40 were selected as the sample (20 students in the control group and 20 in the experimental group). To test the hypothesis, univariate analysis of covariance was used to offset the initial difference between the experimental and control groups. The instrument used in this study is the academic engagement questionnaire of Zerang (2012), which contains cognitive, behavioral and motivational engagement components. The results showed no significant difference between the mean scores of the academic engagement components of the experimental and control groups on the post-test, after eliminating the pre-test effect. The adjusted mean scores of the academic engagement components in the experimental group were higher than the adjusted post-test scores in the control group. The use of technology-based education in distance education was effective in increasing cognitive, motivational and behavioral engagement among students. The experimental variable predicted 26% of the variance of the cognitive engagement component (effect size 0.26), 47% of the variance of the motivational engagement component (effect size 0.47), and 40% of the variance of the behavioral engagement component (effect size 0.40). Thus, teaching with technology (MOOCs) has a positive impact on increasing the academic engagement and academic performance of students in educational technology. The results suggest using this technology to enrich the teaching of other PNU courses.

Keywords: educational technology, distance education, components of academic engagement, MOOC technology

Procedia PDF Downloads 112
9896 Application of Single Subject Experimental Designs in Adapted Physical Activity Research: A Descriptive Analysis

Authors: Jiabei Zhang, Ying Qi

Abstract:

The purpose of this study was to develop a descriptive profile of adapted physical activity research using single-subject experimental designs. All research articles using single-subject experimental designs published in the journal Adapted Physical Activity Quarterly from 1984 to 2013 were employed as the data source. Each article was coded into a subcategory of seven categories: (a) sample size; (b) age of participants; (c) type of disability; (d) type of data analysis; (e) type of design; (f) independent variable; and (g) dependent variable. Frequencies, percentages, and trend inspection were used to analyze the data and develop a profile. The profile shows that a small portion of research articles used single-subject designs, in which most researchers used a small sample size, recruited children as subjects, emphasized learning and behavior impairments, selected visual inspection with descriptive statistics, preferred a multiple-baseline design, focused on the effects of therapy, inclusion, and strategy, and measured desired behaviors most often, with a decreasing trend over the years.

Keywords: adapted physical activity research, single subject experimental designs, physical education, sport science

Procedia PDF Downloads 435
9895 Step into the Escalator’s Fractal Behavior by Using the Poincare Map

Authors: Ali Albadri

Abstract:

The step band in an escalator moves in a cyclic, periodic pattern. Similarly, most if not all of the components and sub-assemblies in the escalator operate in the same way. If you mark one step in the step band of an escalator and stand next to the escalator, on the incline, to watch the marked step as it passes by, you may ask yourself: does the marked step behave exactly the same way during each revolution as it passes you again and again? There is some similarity between this example and an astronomer watching planets in the sky, asking whether each planet intersects the plane of observation in the same position on every planetary rotation. We know the answer in the second example is no, because scientists, astronomers, and mathematicians have proven that planets deviate from their paths to take new paths during their planetary motion, albeit with minimal change. But what about the answer to the question in the first example, considering the increase in wear and tear of components with time in the step, the step band, the tracks, and many other places in the escalator, as well as the accumulation of fatigue in the components and sub-assemblies? This research is part of several studies we are conducting to answer the question in the first example. We have been using the fractal dimension as a quantitative tool and the Poincare map as a qualitative tool. This study has shown that the fractal dimension value and the shape and distribution of the orbits in the Poincare map have a significant correlation with the quality of the mechanical components and sub-assemblies in the escalator.
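As a hedged sketch of how such a Poincare map can be produced, the code below strobes a periodically driven Duffing oscillator once per forcing period; the system and its parameters are illustrative stand-ins for the cyclic step-band motion, not escalator data.

```python
# Stroboscopic Poincare section of a driven oscillator (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.2  # forcing frequency

def duffing(t, y, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

T = 2 * np.pi / omega                  # one forcing period = one "revolution"
t_strobe = np.arange(2000) * T         # sample the state once per cycle
sol = solve_ivp(duffing, [0, t_strobe[-1]], [0.1, 0.0],
                t_eval=t_strobe, rtol=1e-8, max_step=T / 50)

points = sol.y[:, 200:]                # drop the transient; columns are map iterates
# The spread and clustering of these (x, v) points is the qualitative signature
# that the abstract correlates with component condition.
```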

Keywords: fractal dimension, Poincare map, rugby ball orbit, worm orbit

Procedia PDF Downloads 27
9894 Enhancing Throughput for Wireless Multihop Networks

Authors: K. Kalaiarasan, B. Pandeeswari, A. Arockia John Francis

Abstract:

Wireless multi-hop networks consist of one or more intermediate nodes along the path that receive and forward packets via wireless links. The backpressure algorithm provides throughput-optimal routing and scheduling decisions for multi-hop networks with dynamic traffic. Xpress, a cross-layer backpressure architecture, was designed to reach the capacity of wireless multi-hop networks; it provides good coordination between network layers by turning a mesh network into a wireless switch. Transmission over the network is scheduled using a throughput-optimal backpressure algorithm. However, this architecture operates well below capacity due to out-of-order packet delivery and variable packet size. In this paper, we present Xpress-T, a throughput-optimal backpressure architecture with TCP support designed to reach the maximum throughput of wireless multi-hop networks. Xpress-T operates at the IP layer, and therefore any transport protocol, including TCP, can run on top of it. The proposed design not only avoids bottlenecks but also handles out-of-order packet delivery and variable packet size, and optimally load-balances traffic across paths when needed, improving fairness among competing flows. Our simulation results show that Xpress-T gives 65% more throughput than Xpress.
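For readers unfamiliar with backpressure, a minimal sketch of the max-weight decision it makes each slot is shown below on a toy topology; the queues, rates, and single-link schedule are illustrative assumptions, not the Xpress-T implementation.

```python
# Toy backpressure (max-weight) link selection for one flow.
queues = {"A": 12, "B": 7, "C": 3, "D": 0}              # per-node backlogs
links = [("A", "B", 2), ("B", "C", 2), ("C", "D", 1)]   # (src, dst, rate)

def backpressure_weights(queues, links):
    # Link weight = backlog differential x link rate; only positive pressure counts.
    return {(u, v): max(queues[u] - queues[v], 0) * r for u, v, r in links}

w = backpressure_weights(queues, links)
schedule = max(w, key=w.get)    # activate the max-weight link (single-channel case)
print(w, "->", schedule)
```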

Keywords: backpressure scheduling and routing, TCP, congestion control, wireless multihop network

Procedia PDF Downloads 491
9893 Optimization of Gold Mining Parameters by Cyanidation

Authors: Della Saddam Housseyn

Abstract:

Gold, the quintessential noble metal, is one of the most sought-after metals today, given its ever-increasing price on the international market. The Amesmessa gold deposit is one of the gold-producing deposits. The first step in our work was to analyze the ore (considered rich ore). Mineralogical and chemical analysis showed that the general constitution of the ore is quartz, in addition to other phases such as Al2O3, Fe2O3, CaO, and dolomite. The second step consisted of leaching tests carried out in rolling bottles. These tests were carried out on 14 samples to determine the maximum recovery rate and the optimum reagent consumption (NaCN and CaO). Tests carried out at a pulp density of 50% solids, a cyanide concentration of 500 ppm, and a particle size of less than 0.6 mm at alkaline pH gave a recovery rate of 94.37%.

Keywords: cyanide, XRD, XRF, gold, leaching, recovery rate, AAS

Procedia PDF Downloads 144
9892 Adaptive Multipath Mitigation Acquisition Approach for Global Positioning System Software Receivers

Authors: Animut Meseret Simachew

Abstract:

The Parallel Code Phase Search Acquisition (PCSA) algorithm has been considered a promising method in GPS software receivers for detecting and estimating the accurate correlation peak between the received Global Positioning System (GPS) signal and locally generated replicas. GPS signal acquisition in highly dense multipath environments is the main research challenge. In this work, we propose a robust variable step-size (RVSS) PCSA algorithm based on a fast Fourier transform (FFT) filtering technique to mitigate short-time-delay multipath signals. Simulation results reveal the effectiveness of the proposed algorithm over the conventional PCSA algorithm. The proposed RVSS-PCSA algorithm equalizes the received carrier wiped-off signal with the locally generated C/A code.
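The core of PCSA is a circular correlation over all code phases computed in one shot via the FFT. The sketch below shows that step with synthetic placeholder signals; it is the conventional PCSA kernel, not the proposed RVSS extension.

```python
# FFT-based parallel code phase search (conventional PCSA kernel).
import numpy as np

N = 1023
code = np.random.choice([-1.0, 1.0], N)       # stand-in for a C/A code replica
true_shift = 357
received = np.roll(code, true_shift) + 0.5 * np.random.randn(N)

# Correlate against every code phase at once via frequency-domain multiplication.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
peak = int(np.argmax(np.abs(corr)))
print("estimated code phase:", peak)          # matches true_shift in the clean case
```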

Keywords: adaptive PCSA, detection and estimation, GPS signal acquisition, GPS software receiver

Procedia PDF Downloads 93
9891 Modified Step Size Patch Array Antenna for UWB Wireless Applications

Authors: Hamid Aslani, Ahmed Radwan

Abstract:

In this paper, a single-element microstrip antenna is presented for UWB applications, using techniques such as a partial ground plane and a modified patch shape. The antenna is designed to have a compact size and constant gain across frequency. The simulations were performed using two EM software packages and show good agreement with the measured results for the fabricated antenna. A two-element patch antenna array for UWB in the 3.1-10 GHz frequency band is then presented. The array is constructed by feeding two omni-directional modified circular patch elements with a modified power divider. Experimental results show that the array has a stable radiation pattern and low return loss over a broad bandwidth of 64% (3.1-10 GHz). Due to its planar profile, physically compact size, wide impedance bandwidth, and directive performance over a wide bandwidth, the proposed antenna is a good candidate for portable UWB applications and other UWB integrated circuits.

Keywords: ultra wide band, radiation performance, microstrip antenna, size miniaturized antenna

Procedia PDF Downloads 228
9890 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education. It is broadly accepted that open education resources (OER) reduce textbook costs and provide students with high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much is saved. This study presents an approach for calculating OER savings, using SUNY Canton's non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose in collecting these data is to understand how much could potentially be saved by using OER materials and to provide a record for further studies. Literature Review: In past years, researchers have identified that the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018) found that students save $116.94 per course on average when OER is adopted in place of traditional commercial textbooks, using a simple formula. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average cost of a textbook at $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb class schedule. Step two: View the textbook lists for the classes (campus bookstore prices). Step three: Calculate the average textbook price by averaging the new and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies with the size of the college and the number of enrolled students.
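The four methodology steps reduce to simple arithmetic; a minimal sketch with hypothetical bookstore figures (not the study's data) is shown below.

```python
# Steps 3-4 of the savings estimate on two invented courses.
courses = [
    {"new": 150.00, "used": 112.50, "enrolled": 30},
    {"new": 90.00,  "used": 67.50,  "enrolled": 45},
]

total_savings = 0.0
for c in courses:
    avg_price = (c["new"] + c["used"]) / 2        # step 3: average textbook price
    total_savings += avg_price * c["enrolled"]    # step 4: price x enrolled students
print(f"potential savings: ${total_savings:,.2f}")
```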

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 41
9889 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction

Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé

Abstract:

One of the main applications of machine learning is the prediction of time series, but a more accurate prediction requires a more optimal machine learning model. Several optimization techniques have been developed, but without considering the disposition (ordering) of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is twenty-four. Each disposition is used to perform the prediction, with the main criteria being training and prediction performance. The results obtained from a static and a dynamic neural network architecture show that these performances are a function of the input variables' disposition, and in a different way for each architecture. This analysis revealed that the input variables' disposition must be taken into account to develop a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variables disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic-algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in the training and prediction performance of time series, so the proposed optimization approach can be useful in improving the accuracy of machine-learning-based time series prediction.
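A hedged sketch of the disposition search is given below: it enumerates all twenty-four orderings of four inputs, fits an ordinary network on each, and keeps the best. The data and network size are toy assumptions, not the Cameroon wind dataset or the authors' architectures.

```python
# Exhaustive search over input-variable dispositions around a standard fit.
from itertools import permutations
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                 # 4 inputs -> 24 dispositions
y = 0.7 * X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.standard_normal(200)

best = None
for order in permutations(range(4)):
    Xp = X[:, list(order)]                        # re-dispose the input columns
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    score = model.fit(Xp, y).score(Xp, y)         # training R^2 as a toy criterion
    if best is None or score > best[1]:
        best = (order, score)
print("best disposition:", best)
```

With a fixed weight initialization, reordering the columns changes which initial weights meet which inputs, which is (loosely) why the disposition can matter.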

Keywords: input variable disposition, machine learning, optimization, performance, time series prediction

Procedia PDF Downloads 63
9888 The Impact of Size of the Regional Economic Blocs to the Country’s Flows of Trade: Evidence from COMESA, EAC and Tanzania

Authors: Mosses E. Lufuke, Lorna M. Kamau

Abstract:

This paper assessed whether the size of a regional economic bloc has an impact on a country's flow of trade. Two different-sized blocs (COMESA and EAC) and one country (Tanzania) were used as points of reference. Using the results of the analyses, the paper also sought to establish whether it was rational for Tanzania to withdraw its membership from COMESA (the larger bloc) to join the EAC (the smaller one). A gravity model was used to estimate the relationship between the variables, employing the bilateral trade flows between Tanzania and the eighteen member countries of the two blocs (COMESA and EAC) for the period 2000-2013. A dummy variable for the regional bloc to which each of Tanzania's trade partners belongs is also added to the model to establish which bloc exhibits a higher trade flow with Tanzania. The findings show that over the study period (2000-2013), Tanzania recorded more than 257% more trade volume in the EAC than in COMESA. It is concluded that the flow of trade is explained by many variables apart from the size of the regional bloc, and that size by itself offers insufficient evidence of a causal relationship. The paper therefore remains neutral on the staggered switching decision, since more analysis is required to establish the country's trade flows, especially had it held multiple memberships in both COMESA and the EAC.
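The abstract does not print its estimating equation; a typical log-linear gravity specification consistent with the variables described, with a bloc dummy, would be:

```latex
\ln T_{ijt} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 \ln Y_{jt}
            + \beta_3 \ln D_{ij} + \beta_4\, \mathrm{bloc}_{j} + \varepsilon_{ijt},
```

where T_{ijt} is bilateral trade between Tanzania (i) and partner j in year t, Y denotes GDP, D_{ij} is distance, and bloc_j indicates the partner's regional bloc.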

Keywords: economic bloc, flow of trade, size of bloc, switching

Procedia PDF Downloads 221
9887 Transcriptional Profiling of Developing Ovules in Litchi chinensis

Authors: Ashish Kumar Pathak, Ritika Sharma, Vishal Nath, Sudhir Pratap Singh, Rakesh Tuli

Abstract:

Litchi is a sub-tropical fruit crop whose genotypes bear delicious, juicy fruits with variable seed size (from bold to rudimentary). Small seed size is a desirable trait in litchi, as it increases consumer acceptance and eases fruit processing. The biochemical activities in mid-stage ovules (e.g., 16, 20, 24 and 28 days after anthesis) determine the fate of seed and fruit development in litchi. Comprehensive ovule-specific transcriptome analysis was performed in two litchi genotypes with contrasting seed size to gain molecular insight into the determinants of seed fate in litchi fruits. The transcriptomic data were de novo assembled into 139,608 Trinity transcripts, of which 6,325 were differentially expressed between the two contrasting genotypes. Differential transcriptional patterns were found among ovule development stages in the contrasting litchi genotypes. Putative genes for the salicylic acid, jasmonic acid and brassinosteroid pathways were down-regulated in ovules of the small-seeded litchi. Transcripts related to embryogenesis, cell expansion, seed size and stress exhibited altered expression in the small-seeded genotype. Putative regulators of seed maturation and seed storage were down-regulated in the small-seeded genotype.

Keywords: Litchi, seed, transcriptome, defence

Procedia PDF Downloads 207
9886 Variable Mapping: From Bibliometrics to Implications

Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek

Abstract:

Literature review is indispensable in research. One of the key techniques used in it is bibliometric analysis, in which one of the methods is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations, and it is also used for literature reviews in the field of marketing. The development of technology means that researchers and practitioners use the capabilities of commercially available software for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications involves the implementation of a literature review, and it is useful in areas with a relatively high number of publications. Despite this well-grounded science mapping approach having been applied in literature reviews, performing them is a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Keyword mapping was then performed in the VOSviewer science mapping software and compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in both the formulation of the research problem and thematic/content analysis. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story, with an indication of the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, software enabling the automation of the variable mapping process on large data sets could be a breakthrough in the field of literature research.

Keywords: bibliometrics, literature review, science mapping, variable mapping

Procedia PDF Downloads 80
9885 A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification

Authors: Cemil Turan, Mohammad Shukri Salman

Abstract:

The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or the system is sparse. We recently proposed a sparse transform-domain LMS-type algorithm that uses a variable step size for sparse system identification. The proposed algorithm provided high performance even when the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, a convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to different algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations have shown that the proposed algorithm has prominent performance compared to the other algorithms.
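For context, the sketch below implements the classic error-driven variable step-size LMS rule of Kwong and Johnston; the paper's algorithm instead controls the step size with a transform-domain function, so this is orientation only, run on a synthetic sparse system.

```python
# Classic VSSLMS (mu adapted from the squared error) on a sparse FIR system.
import numpy as np

def vsslms(x, d, taps=16, mu0=0.01, alpha=0.97, gamma=4.8e-4,
           mu_min=1e-4, mu_max=0.1):
    w = np.zeros(taps)
    mu = mu0
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent inputs, newest first
        e = d[n] - w @ u                  # a-priori error
        w = w + mu * e * u                # LMS update with the current step size
        mu = float(np.clip(alpha * mu + gamma * e**2, mu_min, mu_max))
    return w

rng = np.random.default_rng(1)
h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]          # sparse true system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(vsslms(x, d), 2))                   # should approximate h
```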

Keywords: adaptive filtering, sparse system identification, TD-LMS algorithm, VSSLMS algorithm

Procedia PDF Downloads 321
9884 Literature Review of Empirical Studies on the Psychological Processes of End-of-Life Cancer Patients

Authors: Kimiyo Shimomai, Mihoko Harada

Abstract:

This study is a literature review of the psychological reactions that occur in end-of-life cancer patients who are nearing death. Electronic databases were searched, and literature related to psychological studies of end-of-life patients was selected. There was no limit on the search period, and the search was conducted until the second week of December 2021. The keywords were "death and dying", "terminal illness", "end-of-life", "palliative care", "psycho-oncology" and "research". To ensure quality, the studies were assessed with reference to Holly (2017): Comprehensive Systematic Review for Advanced Practice Nursing, p. 268, Figure 10.3, and were selected if they scored 4 or 5. The review was conducted in two stages with reference to the procedure of George (2002). First, the databases were searched for the keywords, and then relevant studies were selected from the psychology and nursing research on end-of-life patients. The number of studies analyzed was 76 international and 17 domestic. Among the independent variables, "physical variable" was the most common (36 studies, 66.7%), followed by "psychological variable" (35 studies, 64.8%), "spiritual variable" (21 studies, 38%), "social variable" (17 studies, 31.5%), and "variables related to medical care/treatment" (16 studies, 29.6%). To summarize the relationship between these independent variables and the dependent variable: when the dependent variable is a psychological variable, the independent variables are psychological, social, and physical variables, with physical variables the most common. The psychological responses that occur in end-stage cancer patients nearing death are thus mutually influenced by psychological, social, and physical variables. This supports the concept of "total pain" advocated by Cicely Saunders.

Keywords: cancer patient, end-of-life, literature review, psychological process

Procedia PDF Downloads 97
9883 A Comparison Study: Infant and Children’s Clothing Size Charts in South Korea and UK

Authors: Hye-Won Lim, Tom Cassidy, Tracy Cassidy

Abstract:

Infant and children's body shapes change constantly as they grow into adults, and they also differ physically between countries. For this reason, optimum size charts that can represent the body sizes and shapes of infants and children are required. In this study, current size charts in South Korea and the UK (n=50 each) were investigated to understand the sizing perspectives of clothing manufacturers. The size charts of the two countries were collected randomly from online shopping websites, and the charts' average measurements were compared with the respective national sizing surveys (SizeKorea and Shape GB). The size charts were also classified by age, gender, clothing type, fitting, and other factors. In addition, the key body measurements of each country's size charts were determined; these will be suggested for the development of new size charts and sizing systems.

Keywords: infant clothing, children’s clothing, body shapes, size charts

Procedia PDF Downloads 285
9882 International Financial Reporting Standard Adoption and Value Relevance of Earnings in Listed Consumer Goods Companies in Nigeria

Authors: Muktar Haruna

Abstract:

This research work examines International Financial Reporting Standard (IFRS) adoption and the value relevance of earnings of listed consumer goods companies in Nigeria. The population of the study comprises 22 listed consumer goods companies, out of which 15 were selected as the sample. The scope of the study is the period from 2006 to 2018. Secondary data from the annual reports of the sampled companies were used, consisting of earnings per share (EPS) and book value of equity per share (BVE) as independent variables, firm size (FSZ) as a control variable, and the market share price of the sampled companies from the Nigerian Stock Exchange as the dependent variable. Multiple regression was used to analyze the data. The results showed that IFRS adoption did not improve the value relevance of earnings, which translates to a decrease in the value relevance of accounting numbers in the post-adoption period. The major recommendation is that the Financial Reporting Council of Nigeria should ensure full compliance with all provisions of IFRS and provide uniformity in the presentation of non-current assets in the statement of financial position, where some companies present only net current assets, leaving the individual figures for current assets and liabilities invisible.
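A hedged sketch of the kind of price-level regression implied by these variables is shown below; the column names, file, and interaction form are assumptions for illustration, not the study's exact model or data.

```python
# Ohlson-style value relevance regression with a post-IFRS interaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("consumer_goods_panel.csv")    # hypothetical panel file
df["IFRS"] = (df["year"] >= 2012).astype(int)   # Nigeria's adoption year

model = smf.ols("price ~ EPS + BVE + FSZ + IFRS + IFRS:EPS + IFRS:BVE",
                data=df).fit()
print(model.summary())  # IFRS:EPS / IFRS:BVE capture the change in value relevance
```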

Keywords: IFRS, adoption, value relevance, earnings per share, book value of equity per share

Procedia PDF Downloads 123
9881 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in regression settings. However, most of these methods assume normality of the response variable for implementing the methodology and establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and consider the asymmetric Laplace distribution for the response variable to model different quantiles of its distribution. An efficient Gibbs sampler is developed for the computation. The proposed approach is assessed through extensive simulation studies, and a real application is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan and select the important predictors when the outcome of interest is out-of-pocket medical cost, an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
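The role of the asymmetric Laplace distribution (ALD) here is standard and worth making explicit: for quantile level τ ∈ (0,1),

```latex
f(y \mid \mu, \sigma, \tau)
  = \frac{\tau(1-\tau)}{\sigma}
    \exp\!\left\{-\rho_\tau\!\left(\frac{y-\mu}{\sigma}\right)\right\},
\qquad
\rho_\tau(u) = u\,\bigl(\tau - \mathbb{I}(u < 0)\bigr),
```

so maximizing the ALD likelihood in μ minimizes the usual check loss, and posterior inference under this working likelihood targets the τ-th conditional quantile.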

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 121
9880 Effect of Variable Fluxes on Optimal Flux Distribution in a Metabolic Network

Authors: Ehsan Motamedian

Abstract:

Finding all optimal flux distributions of a metabolic model is an important challenge in systems biology. In this paper, a new algorithm is introduced to identify all alternate optimal solutions of a large-scale metabolic network. The algorithm reduces the model to decrease the computations needed for finding optimal solutions. The algorithm was implemented on the Escherichia coli metabolic model to find all optimal solutions for lactate and acetate production. There were more optimal flux distributions when acetate production was optimized. The model was reduced from 1076 to 80 variable fluxes for lactate, while it was reduced to 91 variable fluxes for acetate. These 11 additional variable fluxes resulted in about three times more optimal flux distributions. The variable fluxes came from 12 different metabolic pathways, and most of them belonged to the nucleotide salvage and extracellular transport pathways.
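A related (simpler) computation can be reproduced with standard tooling: flux variability analysis flags which fluxes remain variable at the optimum. The sketch below uses COBRApy's bundled E. coli core model and illustrates the concept, not the paper's enumeration algorithm.

```python
# Count variable fluxes at the optimum with FVA (COBRApy).
from cobra.io import load_model
from cobra.flux_analysis import flux_variability_analysis

model = load_model("textbook")          # E. coli core model shipped with cobrapy
model.objective = "EX_ac_e"             # maximize acetate exchange

fva = flux_variability_analysis(model, fraction_of_optimum=1.0)
variable = fva[(fva["maximum"] - fva["minimum"]).abs() > 1e-6]
print(f"{len(variable)} fluxes remain variable at the optimum")
```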

Keywords: flux variability, metabolic network, mixed-integer linear programming, multiple optimal solutions

Procedia PDF Downloads 395
9879 Effects of Stirring Time and Reinforcement Preheating on the Porosity of Particulate Periwinkle Shell-Aluminium 6063 Metal Matrix Composite (PPS-ALMMC) Produced by Two-Step Casting

Authors: Reginald Umunakwe, Obinna Chibuzor Okoye, Uzoma Samuel Nwigwe, Damilare John Olaleye, Akinlabi Oyetunji

Abstract:

The potential for the development of PPS-AlMMCs as lightweight materials for industrial applications was investigated. Periwinkle shells were milled and the density of the particles determined. Particulate periwinkle shell of particle size 75 µm was used to reinforce aluminium 6063 alloy at 10 wt% filler loading using a two-step stir casting technique. The composite materials were stirred for five minutes in a semi-solid state, and the stirring time above the liquidus temperature was varied as 3, 6 and 9 minutes. A specimen was also produced with a pre-heated filler. The effect of the variation in stirring time and of reinforcement pre-heating on the porosity of the composite materials was investigated. The results of the analysis show that the combination of reinforcement pre-heating and stirring for 3 minutes produced a composite material with the lowest porosity, 1.05%.

Keywords: composites, periwinkle shell, two-step casting, porosity

Procedia PDF Downloads 320
9878 Effect of Aggregate Size on Mechanical Behavior of Passively Confined Concrete Subjected to 3D Loading

Authors: Ibrahim Ajani Tijani, C. W. Lim

Abstract:

Limited studies have examined the effect of aggregate size on the mechanical behavior of confined concrete subjected to 3-dimensional (3D) tests. Using a novel 3D testing system to produce passive confinement, concrete cubes were tested to examine the effect of size on the stress-strain behavior of the specimens. The effect of size on the 3D stress-strain relationship was scrutinized and compared to the stress-strain relationships available in the literature. It was observed that the ultimate stress and the corresponding strain were related to the confining rigidity and size. The size shows a significant effect on the intersection stress, and a new model for the intersection stress was proposed based on the conceptual design of the confining plates.

Keywords: concrete, aggregate size, size effect, 3D compression, passive confinement

Procedia PDF Downloads 177
9877 Mathematical Reconstruction of an Object Image Using X-Ray Interferometric Fourier Holography Method

Authors: M. K. Balyan

Abstract:

The main principles of the X-ray Fourier interferometric holography method are discussed. The object image is reconstructed by the mathematical method of Fourier transformation. Three methods are presented: the approximation method, the iteration method, and the step-by-step method. As an example, the reconstruction of the complex amplitude transmission coefficient of a beryllium wire is considered. The results reconstructed by the three presented methods are compared. The best results are obtained by means of the step-by-step method.
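The Fourier-transform step itself is compact: with an off-axis reference, a single inverse FFT of the recorded intensity separates the object image from its conjugate twin. The sketch below demonstrates this on a synthetic hologram and is a generic illustration, not the paper's X-ray interferometric setup.

```python
# Synthetic Fourier-holography reconstruction via one inverse FFT.
import numpy as np

N = 256
obj = np.zeros((N, N), complex)
obj[120:136, 124:132] = 0.8            # stand-in for the wire's transmission

ref = np.zeros((N, N), complex)
ref[60, 60] = 1.0                      # off-axis reference point source

hologram = np.abs(np.fft.fft2(obj + ref)) ** 2   # Fourier-plane intensity
recon = np.abs(np.fft.ifft2(hologram))           # autocorrelation field
# `recon` contains the object cross-correlated with the reference point,
# i.e. the image displaced by the reference offset, plus its conjugate twin.
```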

Keywords: dynamical diffraction, hologram, object image, X-ray holography

Procedia PDF Downloads 361
9876 The Effect of Connections Form on Seismic Behavior of Portal Frames

Authors: Kiavash Heidarzadeh

Abstract:

The seismic behavior of portal frames is mainly governed by the shape of their joints. In these structures, vertical and inclined connections are the two general forms of connections, and the shape of the connections can make a difference in the seismic responses of portal frames. Hence, in this paper, as a first step, the non-linear performance of portal frames with vertical and inclined connections is investigated by monotonic analysis, with section sizes also considered. For comparison, hysteresis curves are evaluated for two model frames with different forms of connections. Each model has three different column and beam sizes; the other geometrical parameters are held constant. In the second step, an appropriate section size is selected for each model from the previous step. Next, the seismic behavior of each model is analyzed by the time history method under three near-fault earthquake records. The finite element software ABAQUS is used for the simulation and analysis of the samples. The outputs show that the connection form can affect the reaction forces of portal frames under earthquake loads. It is also found that the load capacity of frames with vertical connections is greater than that of frames with inclined connections.

Keywords: inclined connections, monotonic, portal frames, seismic behavior, time history, vertical connections

Procedia PDF Downloads 195
9875 Size Reduction of Images Using Constraint Optimization Approach for Machine Communications

Authors: Chee Sun Won

Abstract:

This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches around key-points such as corners and blobs. Based on a saliency map built from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency, the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after size reduction than conventional content-aware image size-reduction methods.
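A hedged sketch of the saliency-building stage is shown below: key-points are detected and their patches accumulated into a map whose row/column sums could drive an axis-aligned grid choice. The grid-size optimization itself is the paper's contribution and is not reproduced; the file name is a placeholder.

```python
# Key-point saliency map as input to an axis-aligned size reduction.
import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)          # corner/blob-like key-points
keypoints = orb.detect(img, None)

saliency = np.zeros(img.shape, np.float32)
for kp in keypoints:
    cx, cy, r = int(kp.pt[0]), int(kp.pt[1]), int(kp.size / 2) + 1
    saliency[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r] += kp.response

# Row/column saliency profiles for the axis-aligned grid decision.
col_saliency, row_saliency = saliency.sum(axis=0), saliency.sum(axis=1)
```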

Keywords: image compression, image matching, key-point detection and description, machine-to-machine communication

Procedia PDF Downloads 381
9874 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications related to human experiments are frequently linked to reduced sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, an overly critical step in a data science analysis process. One of the naive approaches usually applied by data scientists consists of transforming the entire database before the resampling phase. However, this can cause the model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explored the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady-State Visually Evoked Potentials). We also studied potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms, and it should be applied after the resampling phase to avoid data leakage and improve results.
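The remedy the abstract points to can be expressed directly with scikit-learn: fit the scaler inside each cross-validation fold via a Pipeline rather than scaling the whole dataset first. The features and labels below are synthetic placeholders for SSVEP epochs.

```python
# Leakage-free scaling: the scaler is refit on each training split only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 32))     # e.g. 120 epochs x 32 features
y = rng.integers(0, 4, 120)            # 4 SSVEP target classes

# Leaky variant: StandardScaler().fit_transform(X) before cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```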

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 119
9873 Deepnic, A Method to Transform Each Variable into Image for Deep Learning

Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.

Abstract:

Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image in which each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and on the range of the input coefficients. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods, which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expression data from an Affymetrix chip.

Keywords: tabular data, deep learning, perfect trees, NICs

Procedia PDF Downloads 51