Search results for: time step size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24426

24276 Some Factors Affecting Farm Size of Duck Farming

Authors: Veronica Sri Lestari, Ahmad Ramadhan Siregar

Abstract:

The purpose of this research was to identify factors affecting the farm size of duck farming (a case study in Pinrang district, South Sulawesi). The research was conducted in 2013. The total sample was 45 duck farmers, selected from 6 regions in Mattiro Sompe sub-district, Pinrang district, South Sulawesi province, through stratified random sampling. Data were collected through interviews using questionnaires and through observation. A multiple regression equation was used to analyze the data, with duck population as the dependent variable and age of respondents, farming experience, land size, education, and income level as independent variables. The research yielded an R² of 0.920. Simultaneously, age of respondents, farming experience, land size, education, and income level significantly influenced the farm size of duck farming (P < 0.01); in the partial tests, only income level was significant (P < 0.01).
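
A sketch of the regression just described can be reproduced with any standard OLS routine. The following is a minimal illustration with invented numbers and hypothetical column names, not the authors' survey data:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per duck farmer (n = 45 in the study).
df = pd.DataFrame({
    "population": [120, 300, 85, 410, 230],    # farm size: number of ducks
    "age":        [41, 35, 52, 29, 47],        # respondent age (years)
    "experience": [10, 5, 20, 3, 15],          # farming experience (years)
    "land_size":  [0.5, 1.2, 0.3, 2.0, 0.8],   # hectares
    "education":  [6, 9, 6, 12, 9],            # years of schooling
    "income":     [2.1, 4.5, 1.8, 6.0, 3.2],   # income level (arbitrary units)
})

X = sm.add_constant(df[["age", "experience", "land_size", "education", "income"]])
model = sm.OLS(df["population"], X).fit()

print(model.rsquared)   # analogue of the reported R^2 = 0.920
print(model.f_pvalue)   # simultaneous (F) test of all predictors
print(model.pvalues)    # partial (t) tests, e.g. only income significant
```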

Keywords: duck, dry system, factors, farm-size

Procedia PDF Downloads 504
24275 Synthesis and Functionalization of Gold Nanostars for ROS Production

Authors: H. D. Duong, J. I. Rhee

Abstract:

In this work, gold nanoparticles in star shape (called gold nanostars, GNS) were synthesized, coated with N-(3-aminopropyl) methacrylamide hydrochloride (PA) and mercaptopropionic acid (MPA) to functionalize their surfaces with amine and carboxyl groups, and then investigated for ROS production. Large GNS with multiple tips appear to be superior in singlet oxygen production compared with small GNS with fewer tips. However, the functionalized GNS of small size could also enhance the efficiency of singlet oxygen production roughly twofold compared with the intact GNS. In combination with methylene blue (MB+), the functionalized GNS could enhance the singlet oxygen production of MB+ after 1 h of LED750 irradiation, and no difference between small and big sizes was observed in this reaction. In combination with 5-aminolevulinic acid (ALA), only the PA-coated GNS could enhance the singlet oxygen production of ALA, and the small PA-coated GNS were slightly more effective than the bigger ones. However, the small MPA-coated GNS had a strong effect on the hydroxyl radical production of ALA.

Keywords: 5-aminolevulinic acid, gold nanostars, methylene blue, ROS production

Procedia PDF Downloads 351
24274 On Four Models of a Three Server Queue with Optional Server Vacations

Authors: Kailash C. Madan

Abstract:

We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady state behavior of the four models and obtain steady state probability generating functions for the queue size at a random point of time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is derived.
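
Model D, with no vacations allowed, is an ordinary M/M/3 queue, so its steady-state queue-size distribution can be cross-checked by simulation. A minimal sketch, with arrival and service rates chosen purely for illustration:

```python
import random

def simulate_mm3(lam=2.0, mu=1.0, servers=3, events=200_000, seed=1):
    """Estimate the steady-state distribution of the number in system
    for an M/M/c queue (model D: no server vacations)."""
    random.seed(seed)
    n, t, counts = 0, 0.0, {}
    for _ in range(events):
        busy = min(n, servers)
        rate = lam + busy * mu                # total event rate in state n
        dt = random.expovariate(rate)
        counts[n] = counts.get(n, 0.0) + dt   # time-weighted state occupancy
        # the next event is an arrival with probability lam / rate
        n += 1 if random.random() < lam / rate else -1
        t += dt
    return {k: v / t for k, v in sorted(counts.items())}

probs = simulate_mm3()
print({k: round(v, 4) for k, v in probs.items() if k <= 8})
```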

Keywords: three-server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state

Procedia PDF Downloads 296
24273 Self-Assembled Tin Particles Made by Plasma-Induced Dewetting

Authors: Han Joo Choe, Soon-Ho Kwon, Jung-Joong Lee

Abstract:

Tin particles of various sizes and distributions were self-assembled by plasma treating tin films deposited on silicon oxide substrates. Plasma treatment was conducted using an inductively coupled plasma (ICP) source. A range of ICP powers and topographically templated substrates were evaluated to observe changes in particle size and particle distribution. Scanning electron microscopy images of the particles were analyzed using computer software. The dewetting of the tin film into particles initiated from hole nucleation at grain boundaries. Increasing the ICP power during plasma treatment produced a larger number of particles per unit area, a smaller particle size, and a narrower particle-size distribution. Topographic templates were also effective in positioning the particles and controlling their size. By combining the effects of ICP power and topographic templates, particles of similar size and well-ordered distribution were obtained.

Keywords: dewetting, particles, plasma, tin

Procedia PDF Downloads 256
24272 Principal Component Analysis Applied to Electric Power Systems: A Practical Guide for Algorithms

Authors: John Morales, Eduardo Orduña

Abstract:

The Principal Component Analysis (PCA) theory is currently used to develop algorithms for Electric Power Systems (EPS). In this context, this paper presents a practical tutorial on the technique, detailing its concept and the on-line and off-line mathematical foundations that are necessary and desirable in EPS algorithms. Features of the eigenvectors that are very useful for real-time processing are explained, showing how these parameters can be selected through a direct optimization. In order to illustrate the application of PCA to off-line and on-line signals, a step-by-step example using Matlab commands is presented. Finally, different approaches using PCA are listed, along with some works that could be analyzed using this tutorial.
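
The paper's step-by-step example uses Matlab commands; the same off-line PCA steps (centering, covariance, eigendecomposition, component selection) can be sketched as follows on synthetic data, which stands in for real EPS signals:

```python
import numpy as np

# Synthetic "measurement" matrix: rows = time samples, columns = signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]    # introduce correlation

# Off-line PCA: center, form the covariance matrix, eigendecompose.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)       # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Select the principal components explaining e.g. 95% of the variance.
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.95)) + 1
scores = Xc @ eigvecs[:, :k]               # projected (reduced) signals
print(k, explained[:k])
```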

Keywords: practical guide, on-line, off-line, algorithms, faults

Procedia PDF Downloads 563
24271 The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates

Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim

Abstract:

The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables, each latent variable having three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes; however, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.
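
The data-generating step described (one endogenous and four exogenous latent variables, three indicators each) was run in R; a sketch of an equivalent generator is shown below, with loadings and path coefficients as arbitrary placeholders rather than the study's values:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_sample(n, loading=0.8, path=0.3, nonnormal=False):
    """One Monte Carlo sample from a simple SEM: eta = sum(path * xi_j) + zeta,
    with each latent variable measured by three indicators."""
    xi = rng.normal(size=(n, 4))             # four exogenous latents
    if nonnormal:
        xi = xi ** 3                         # crude skew/kurtosis injection
    zeta = rng.normal(size=n)
    eta = xi @ np.full(4, path) + zeta       # endogenous latent
    latents = np.column_stack([xi, eta])
    indicators = []
    for j in range(5):                       # 3 indicators per latent
        err = rng.normal(size=(n, 3))
        indicators.append(loading * latents[:, [j]] + err)
    return np.hstack(indicators)             # n x 15 observed data matrix

small = generate_sample(n=30)
large = generate_sample(n=500, nonnormal=True)
print(small.shape, large.shape)
```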

Keywords: CB-SEM, Monte Carlo simulation, normality conditions, non-normality, PLS-SEM

Procedia PDF Downloads 410
24270 Various Modifications of Electrochemical Barrier Layer Thinning of Anodic Aluminum Oxide

Authors: W. J. Stępniowski, W. Florkiewicz, M. Norek, M. Michalska-Domańska, E. Kościuczyk, T. Czujko

Abstract:

In this paper, two options for anodic alumina barrier layer thinning are demonstrated. The approaches varied in the duration of the voltage step. It was found that too long a step in the barrier layer thinning process leads to chemical etching of the nanopores at their tops, while at the bottoms the pores are not fully opened, which is disadvantageous for further applications in nanofabrication. On the other hand, when the duration of the voltage step is controlled by the current density (its value not being allowed to exceed 75% of the value recorded during the previous voltage step), the pores are fully opened. The pores at the bottom obtained with this procedure have smaller diameters; nevertheless, the procedure provides electrical contact between the bare aluminum substrate and the electrolyte, which is suitable for template-assisted electrodeposition, one of the most cost-efficient synthesis methods in nanotechnology.

Keywords: anodic aluminum oxide, anodization, barrier layer thinning, nanopores

Procedia PDF Downloads 322
24269 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown positive performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition. However, water-based activity recognition has largely focused on supporting the elite and competitive swimming population, which already has excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support their individual motions to help them. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time window as input can cause performance issues in the machine learning model: the window's size can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on the change in the sensor value. Our system uses a multi-phase segmentation method that pulls all peaks and valleys for each axis of an accelerometer placed on the swimmer's lower back. This results in high recognition performance using leave-one-subject-out validation on our study with 20 beginner swimmers, with our model optimized on our final dataset achieving an F-score of 0.95.
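
The two-step idea of an initial time window followed by peak/valley splitting can be sketched with SciPy; the window length, smoothing, and thresholds below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic single-axis accelerometer signal: repetitive kicks plus noise.
fs = 50                                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 1.2 * t) \
      + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Step 1: initial fixed time window (here 2 s) as coarse segmentation.
window = 2 * fs
segments = [acc[i:i + window] for i in range(0, acc.size, window)]

# Step 2: within each window, split further at peaks and valleys.
for seg in segments:
    smooth = np.convolve(seg, np.ones(5) / 5, mode="same")  # moving average
    peaks, _ = find_peaks(smooth)
    valleys, _ = find_peaks(-smooth)
    cuts = np.sort(np.concatenate([peaks, valleys]))
    subsegments = np.split(seg, cuts)        # per-motion sub-segments
    # each sub-segment would then be resampled to a fixed length
    # before feature extraction for the classifier
```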

Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition

Procedia PDF Downloads 123
24268 System Identification and Controller Design for a DC Electrical Motor

Authors: Armel Asongu Nkembi, Ahmad Fawad

Abstract:

The aim of this paper is to determine, in a concise way, the transfer function that characterizes a DC electrical motor with a propeller (helix). In practice, it can be obtained by applying a particular input to the system and then, based on the observation of its output, determining an approximation to the transfer function of the system. In our case, we use a step input and find the transfer function parameters that reproduce the simulated first-order time response. The simulation of the system is done using MATLAB/Simulink. In order to determine the parameters, we assume a first-order system, use the Broida approximation to determine its parameters, and then compute the resulting mean square error (MSE). Furthermore, we design a PID controller for the control process, first in the continuous time domain, tuning it using the Ziegler-Nichols open-loop method. We then discretize the controller to obtain a digital controller, since most systems are implemented using computers, which are digital in nature.
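
A compact sketch of the identification-and-tuning pipeline described here, using the Broida two-point approximation (times at 28% and 40% of the final value) and the classic Ziegler-Nichols open-loop PID rules; the plant values are illustrative, not the paper's:

```python
import numpy as np

def broida_identify(t, y, u_step=1.0):
    """First-order-plus-dead-time fit from a recorded step response,
    using Broida's two-point method (28% and 40% of the final value)."""
    y_final = y[-1]
    K = y_final / u_step                      # static gain
    t1 = t[np.searchsorted(y, 0.28 * y_final)]
    t2 = t[np.searchsorted(y, 0.40 * y_final)]
    T = 5.5 * (t2 - t1)                       # time constant
    tau = 2.8 * t1 - 1.8 * t2                 # apparent dead time
    return K, T, tau

def zn_open_loop_pid(K, T, tau):
    """Ziegler-Nichols open-loop (reaction curve) PID settings."""
    Kp = 1.2 * T / (K * tau)
    Ti, Td = 2.0 * tau, 0.5 * tau
    return Kp, Ti, Td

# Illustrative first-order step response with a small transport delay.
t = np.linspace(0, 5, 1000)
y = np.where(t > 0.1, 1.5 * (1 - np.exp(-(t - 0.1) / 0.6)), 0.0)

K, T, tau = broida_identify(t, y)
print(zn_open_loop_pid(K, T, tau))
```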

Keywords: transfer function, step input, MATLAB, Simulink, DC electrical motor, PID controller, open-loop process, mean square error, digital controller, Ziegler-Nichols

Procedia PDF Downloads 55
24267 Studies of Rule Induction by STRIM from the Decision Table with Attribute Values Contaminated by Missing Data and Noise: The Case of Critical Dataset Size

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is regarded as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments with rules specified in advance, and by comparison with conventional methods. However, scope for development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets whose attribute values are contaminated by missing data and noise, since real-world datasets usually contain such contamination. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
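
The core of STRIM, testing whether a candidate rule is statistically significant, can be illustrated with a simple binomial test of whether the instances matching the if-part concentrate on one decision class more than the marginal rate allows. This is a sketch of the principle, not the authors' exact procedure:

```python
from scipy.stats import binomtest

def rule_significance(n_match, n_hit, p_marginal):
    """Test whether instances matching a candidate if-part fall into a given
    decision class more often than the marginal class rate p_marginal."""
    return binomtest(n_hit, n_match, p_marginal, alternative="greater").pvalue

# Illustrative decision-table statistics: 60 of 80 matching instances
# fall into a class whose overall rate is 1/6.
p = rule_significance(n_match=80, n_hit=60, p_marginal=1 / 6)
print(p < 0.01)   # adopt the rule only if it is statistically significant
```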

Keywords: rule induction, decision table, missing data, noise

Procedia PDF Downloads 396
24266 Models to Calculate Lattice Spacing, Melting Point and Lattice Thermal Expansion of Ga₂Se₃ Nanoparticles

Authors: Mustafa Saeed Omar

Abstract:

A formula involving the maximum increase of the mean bond length, the melting entropy, and the critical particle radius is used to calculate the lattice volume of nanoscale crystals of Ga₂Se₃, a compound belonging to the binary III₂VI₃ group. The critical radius is calculated from the height of the first surface atomic layer, which is equal to 0.336 nm. The size-dependent mean bond length is calculated using an equation free from fitting parameters, and the size-dependent lattice parameter is then used to calculate the size-dependent lattice volume. The lattice volume in the nanoscale region increases to about 77.6 Å³, up to four times its bulk value of 19.97 Å³. From the nanoscale size dependence of the lattice volume, the size dependence of the melting temperature is calculated. The melting temperature decreases with decreasing nanoparticle size and becomes zero when the radius reaches its critical value; the bulk melting temperature of Ga₂Se₃, for comparison, is 1293 K. From the size-dependent melting temperature and mean bond length, the size-dependent lattice thermal expansion is calculated. The lattice thermal expansion decreases with decreasing nanoparticle size and reaches its minimum value as the radius drops to about 5 nm.
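
The abstract does not reproduce the formulas, but size-dependent melting-point models of the kind described, built from the melting entropy and a critical radius, are commonly written in the following form; it is quoted here as a representative model of that family, not necessarily the authors' exact expression:

```latex
% A representative size-dependent melting-point model built from the bulk
% melting temperature T_m(\infty), the melting entropy S_m, the gas
% constant R, the particle radius r and the critical radius r_c:
T_m(r) = T_m(\infty)\,\exp\left[-\frac{2S_m}{3R}\cdot\frac{1}{r/r_c - 1}\right],
\qquad T_m(r) \to 0 \quad \text{as} \quad r \to r_c^{+}.
```

This form reproduces the behavior stated in the abstract: the melting temperature recovers its bulk value for large radii and vanishes as the radius approaches the critical value.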

Keywords: Ga₂Se₃, lattice volume, lattice thermal expansion, melting point, nanoparticles

Procedia PDF Downloads 169
24265 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal injuries. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and on the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information, using the INSPIRE data specifications for natural risks and adding specific, detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer; a preliminary phase was the creation of several views in PostGIS to show the information at different visualization scales and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols; queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) at their real size, where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.
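
Once the layers are published, the service can be queried from any OGC-compliant client. A minimal sketch using the OWSLib Python package, where the endpoint path and layer name are hypothetical placeholders:

```python
from owslib.wms import WebMapService

# Hypothetical WMS endpoint on the project's GeoServer instance.
wms = WebMapService("https://rockdb.upc.edu/geoserver/wms", version="1.3.0")

print(list(wms.contents))          # published layers, e.g. rockfall sites

img = wms.getmap(
    layers=["rockfalls:sites"],    # hypothetical layer name
    srs="EPSG:4326",
    bbox=(0.5, 40.5, 3.5, 42.5),   # roughly the northeast of Spain
    size=(800, 600),
    format="image/png",
    transparent=True,
)
with open("rockfalls.png", "wb") as f:
    f.write(img.read())
```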

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 160
24264 Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series

Authors: Mohammad H. Fattahi

Abstract:

Chaotic analysis was performed on river flow time series before and after applying wavelet-based de-noising techniques, in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations are located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of the Gaussian kernel algorithm. This step was found to be crucial in preventing the removal of vital properties such as memory, correlation, and trend from the time series, along with the noise, during the de-noising process.
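
The wavelet-based de-noising step can be sketched with the PyWavelets package; the wavelet family, decomposition level, and threshold rule below are common defaults, not necessarily those used in the study:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients using the universal threshold,
    with noise estimated from the finest-scale coefficients."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Illustrative monthly runoff series: seasonal signal plus noise.
t = np.arange(456)                  # 38 years of monthly data
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) \
       + np.random.default_rng(0).normal(0, 5, t.size)
clean = wavelet_denoise(flow)
```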

Keywords: chaotic behavior, wavelet, noise reduction, river flow

Procedia PDF Downloads 468
24263 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set using a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture datasets (Outex, NewBarktex, Parquet, Stex, and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
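
The three-step pipeline just described (relevance filtering, feature clustering, and a sequential search that prunes whole clusters) can be outlined as follows; the measures and thresholds are generic stand-ins for those in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def select_features(X, y, relevance_thr=0.1, corr_thr=0.8, k_max=10):
    # Step 1: remove irrelevant features via a class-correlation measure.
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    keep = np.where(rel > relevance_thr)[0]

    # Step 2: cluster the surviving features by mutual correlation.
    corr = np.abs(np.corrcoef(X[:, keep], rowvar=False))
    Z = linkage(1 - corr[np.triu_indices_from(corr, k=1)], method="average")
    clusters = fcluster(Z, t=1 - corr_thr, criterion="distance")

    # Step 3: sequential search; picking a feature removes its whole cluster,
    # so redundant features are never reconsidered.
    selected, available = [], set(range(len(keep)))
    while available and len(selected) < k_max:
        best = max(available, key=lambda j: rel[keep[j]])  # separability proxy
        selected.append(keep[best])
        available -= {j for j in available if clusters[j] == clusters[best]}
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(select_features(X, y))
```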

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 137
24262 Wavelets Contribution to Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only those that carry information. It may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
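
For a contingency table, the denoising applies in two dimensions. A brief sketch with PyWavelets' 2-D transform, again with generic wavelet and threshold choices rather than the paper's:

```python
import numpy as np
import pywt

# Illustrative noisy forms-by-documents contingency table.
rng = np.random.default_rng(1)
table = rng.poisson(5, size=(64, 32)).astype(float)

coeffs = pywt.wavedec2(table, "haar", level=2)
thr = 2.0                                    # illustrative threshold
coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(coeffs, "haar")
```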

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 277
24261 Static Priority Approach to Under-Frequency Based Load Shedding Scheme in Islanded Industrial Networks: Using the Case Study of Fatima Fertilizer Company Ltd - FFL

Authors: S. H. Kazmi, T. Ahmed, K. Javed, A. Ghani

Abstract:

In this paper, a static scheme of under-frequency-based load shedding is considered for chemical and petrochemical industries with islanded distribution networks that rely heavily on the primary commodity, to ensure minimum production loss, plant downtime, or critical equipment shutdown. A simple methodology is proposed for in-house implementation of this scheme using under-frequency relays, and a step-by-step guide is provided, including the techniques to calculate maximum percentage overloads, frequency decay rates, time-based frequency response, and frequency-based time response of the system. A case study of the FFL electrical system is utilized, presenting the actual system parameters and the employed load-shedding settings following the same series of steps. The arbitrary settings are then verified for the worst overload condition (loss of a generation source in this case), and the comprehensive system response is investigated.
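
The frequency-decay calculation at the heart of such a guide typically rests on the swing equation, under which the initial rate of change of frequency follows from the per-unit overload and the system inertia. A minimal sketch under that standard assumption, with illustrative parameters rather than FFL's:

```python
def percent_overload(load_mw, generation_mw):
    """Maximum percentage overload, e.g. after loss of a generation source."""
    return 100.0 * (load_mw - generation_mw) / generation_mw

def freq_decay_rate(overload_mw, f_nominal, inertia_h, mva_base):
    """Initial df/dt from the swing equation: df/dt = -dP(pu) * f_n / (2H)."""
    dp_pu = overload_mw / mva_base
    return -dp_pu * f_nominal / (2 * inertia_h)

# Illustrative islanded plant network values.
gen_remaining = 80.0     # MW available after loss of one generator
load = 100.0             # MW
print(percent_overload(load, gen_remaining))                    # 25 %
print(freq_decay_rate(load - gen_remaining, 50.0, 5.0, 100.0))  # -1.0 Hz/s
```

From the decay rate, the time available before each relay setting is reached follows directly, which is the "frequency based time response" the guide refers to.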

Keywords: islanding, under-frequency load shedding, frequency rate of change, static UFLS

Procedia PDF Downloads 486
24260 An Experimental Study on the Effect of Operating Parameters during the Micro-Electro-Discharge Machining of Ni Based Alloy

Authors: Asma Perveen, M. P. Jahan

Abstract:

Ni alloys cover a wide range of applications, including the automotive, oil and gas, and aerospace industries. However, these alloys pose challenges for conventional machining technologies. Micro-electro-discharge machining (micro-EDM), on the other hand, is a non-conventional machining method that uses controlled spark energy to remove material irrespective of the material's hardness. There has always been strong interest from industry in developing optimal methodologies and parameters to enhance the productivity of micro-EDM, in terms of reducing machining time and tool wear, for different alloys. The aims of this study are therefore to investigate the effects of the micro-EDM process parameters and to find their optimal values. The input process parameters include voltage, capacitance, and electrode rotational speed, whereas the output parameters considered are machining time, hole entrance diameter, overcut, tool wear, and crater size. The surface morphology and elemental composition are also investigated using SEM and EDX analysis. The experimental results indicate that machining time decreases as discharge energy increases. Higher discharge energy also enlarges the entrance diameter and the overcut, reduces tool wear, and increases crater size.
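
In RC-type micro-EDM circuits of the kind typically used for such studies, the discharge energy per spark is commonly taken as the capacitor energy, E = ½CV². A small sketch of how the voltage-capacitance input grid maps to that energy, with illustrative values:

```python
# Discharge energy per spark in an RC-pulse micro-EDM circuit: E = 0.5 * C * V^2.
voltages = [80, 100, 120]               # V, illustrative
capacitances = [100e-12, 1e-9, 10e-9]   # F (100 pF to 10 nF), illustrative

for v in voltages:
    for c in capacitances:
        energy = 0.5 * c * v ** 2       # joules per discharge
        print(f"V={v} V, C={c:.0e} F -> E={energy:.2e} J")
```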

Keywords: micro holes, micro EDM, Ni Alloy, discharge energy

Procedia PDF Downloads 274
24259 Analysis of Dust Particles in Snow Cover in the Surroundings of the City of Ostrava: Particle Size Distribution, Zeta Potential and Heavy Metal Content

Authors: Roman Marsalek

Abstract:

In this paper, snow samples containing dust particles from several sampling points around the city of Ostrava were analyzed. The pH values of the sampled snow were measured, and the solid particles were analyzed: particle size, zeta potential, and the content of selected heavy metals were determined. The pH values of most samples lay in the slightly acidic region. Mean particle sizes ranged from 290.5 to 620.5 nm. Zeta potential values varied between -5 and -26.5 mV. The following heavy metal concentration ranges were found: copper 0.08-0.75 mg/g, lead 0.05-0.9 mg/g, manganese 0.45-5.9 mg/g, and iron 25.7-280.46 mg/g. The highest values of copper and lead were found in the vicinity of busy crossroads; conversely, the highest levels of manganese and iron were detected close to a large steelworks. The relationship between pH values, zeta potentials, particle sizes, and heavy metal contents was established: zeta potential decreased with rising pH values while, simultaneously, the heavy metal content in the solid particles increased. At the same time, higher metal content corresponded to smaller particle size.

Keywords: dust, snow, zeta potential, particles size distribution, heavy metals

Procedia PDF Downloads 368
24258 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of quantitative foot-disorder diagnosis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, owing to differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force-plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and the SVM for foot-disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications in foot-disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, models based on the peak plantar pressure distribution were more accurate than those based on the center-of-pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients improve the data pool and enhance foot-disorder prediction with less expense and error, once some restrictions are properly removed.
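
The head-to-head comparison described (a deep network versus an SVM on the same normalized plantar-pressure features) can be outlined with scikit-learn; the feature matrix and labels below are synthetic placeholders for the clinical data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for normalized peak-pressure features (one row per subject).
rng = np.random.default_rng(0)
X = rng.normal(size=(2323, 60))
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # yes/no for one foot disorder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

dnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
svm = SVC(kernel="rbf", C=1.0)

for name, clf in [("DNN", dnn), ("SVM", svm)]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))   # cf. the reported 71% vs 78%
```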

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 358
24257 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of the existing theory which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures are considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, alongside real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for the two measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
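
The inverted-U estimation reduces to fitting a quadratic in the government-size share and reading off the vertex. A worked sketch with made-up observations rather than the Nigerian series:

```python
import numpy as np

# Illustrative annual observations: government share of GDP (%) and growth (%).
g = np.array([8, 10, 12, 15, 18, 22, 26, 30], dtype=float)
growth = np.array([2.1, 3.0, 3.8, 4.1, 3.9, 3.2, 2.4, 1.5])

# Fit growth = a + b*g + c*g^2; the optimum share sits at the vertex -b / (2c).
c, b, a = np.polyfit(g, growth, deg=2)    # highest-degree coefficient first
g_star = -b / (2 * c)
print(round(g_star, 2))    # cf. the reported 19.81%, 10.98% and 12.58% optima
```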

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 420
24256 Quintic Spline Solution of Fourth-Order Parabolic Equations Arising in Beam Theory

Authors: Reza Mohammadi, Mahdieh Sahebi

Abstract:

We develop a method based on a polynomial quintic spline for the numerical solution of a fourth-order non-homogeneous parabolic partial differential equation with a variable coefficient. By using the polynomial quintic spline at off-step points in space and finite differences in the time direction, we obtain two three-level implicit methods. A stability analysis of the presented method has been carried out. We solve four test problems numerically to validate the derived method; numerical comparison with other methods shows the superiority of the presented scheme.
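
The abstract does not state the model problem explicitly; the fourth-order parabolic equation arising in beam theory is usually written as below, quoted here as the standard form rather than the authors' exact statement:

```latex
% Standard fourth-order parabolic model of transverse beam vibration,
% with variable coefficient \mu(x) > 0 and non-homogeneous forcing f(x,t):
\frac{\partial^2 u}{\partial t^2} + \mu(x)\,\frac{\partial^4 u}{\partial x^4}
    = f(x,t), \qquad a < x < b,\ t > 0,
% subject to initial conditions on u(x,0) and u_t(x,0), and boundary
% conditions on u and u_{xx} at x = a and x = b.
```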

Keywords: fourth-order parabolic equation, variable coefficient, polynomial quintic spline, off-step points

Procedia PDF Downloads 352
24255 Determination of Power and Sample Size for the Zero-Inflated Negative Binomial Age-Dependent Death Rate Model (ZINBD): Regression Analysis of Acquired Immune Deficiency Syndrome (AIDS) Mortality

Authors: Mohd Asrul Affendi Bin Abdullah

Abstract:

Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with such models. This paper shows how to obtain power approximations for categorical covariates and then extends them to zero-inflated models. The Wald test was chosen for determining the power and sample size for the AIDS death rate because it is frequently used, owing to its approachability and to several major recent contributions to sample size calculation for this test. Power calculations can be conducted when covariates are used in modeling 'excess zero' data with a categorical covariate. An analysis of an AIDS death rate study is used in this paper; the aim is to determine the power for the sample size (N = 945) of the categorical death rate, based on the parameter estimates in the simulation study.
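
Power for a Wald test in a zero-inflated negative binomial regression is most transparently obtained by simulation: generate data under the alternative, fit the model, and count rejections. A sketch using statsmodels, with illustrative parameters rather than the study's estimates; note that statsmodels orders the fitted parameters with the inflation terms first:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)

def simulate_power(n=945, beta=0.3, pi_zero=0.3, alpha_nb=1.0, reps=200):
    rejections, fits = 0, 0
    for _ in range(reps):
        x = rng.binomial(1, 0.5, size=n)              # categorical covariate
        mu = np.exp(0.5 + beta * x)
        # NB2 draw with mean mu and dispersion alpha_nb.
        nb = rng.negative_binomial(1 / alpha_nb, 1 / (1 + alpha_nb * mu))
        y = np.where(rng.random(n) < pi_zero, 0, nb)  # excess zeros
        X = sm.add_constant(x)
        try:
            res = ZeroInflatedNegativeBinomialP(
                y, X, exog_infl=np.ones((n, 1)), p=2).fit(disp=0, maxiter=200)
            # params: [inflate_const, const, x, alpha] -> Wald p-value at index 2
            rejections += res.pvalues[2] < 0.05
            fits += 1
        except Exception:
            pass                                      # skip non-converged fits
    return rejections / max(fits, 1)

print(simulate_power())
```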

Keywords: power sample size, Wald test, standardized rate, ZINBDR

Procedia PDF Downloads 436
24254 Hydrodynamics Study on Planing Hull with and without Step Using Numerical Solution

Authors: Koe Han Beng, Khoo Boo Cheong

Abstract:

The rising interest in stepped hull design has been driven by the demand for more efficient high-speed boats. At the same time, the need for accurate prediction methods for stepped planing hulls is becoming more important. Since understanding the flow at high Froude numbers is the key to designing a practical stepped hull, studies of stepped hulls have been carried out mainly in towing tanks, which is time-consuming and costly for the initial design phase. Here, the feasibility of predicting the hydrodynamics of high-speed planing hulls, both with and without a step, using computational fluid dynamics (CFD) with the volume of fluid (VOF) methodology is studied. First, the flow around a prismatic body is analyzed; the force generated and its center of pressure are compared with available experimental and empirical data from the literature. The wake behind the transom, on the keel line as well as on the quarter-beam buttock line, is then compared with the available data; this is important since the afterbody flow of a stepped hull is subject to the wake of the forebody. Finally, the calm water performance of a conventional planing hull and its stepped version is predicted. An overset mesh methodology is employed in solving the dynamic equilibrium of the hull. The resistance, trim, and heave are then compared with the experimental data. The resistance is found to be predicted well, and the dynamic equilibrium solved by the numerical method is deemed acceptable. This means that computational fluid dynamics will be very useful in further studies of the complex flow around stepped hulls and has potential use in the design phase.

Keywords: planing hulls, stepped hulls, wake shape, numerical simulation, hydrodynamics

Procedia PDF Downloads 282
24252 Effect of Particle Size and Volume Fraction Concentration on the Thermal Conductivity and Thermal Diffusivity of Al2O3 Nanofluids Measured Using Transient Hot-Wire Laser Beam Deflection Technique

Authors: W. Mahmood Mat Yunus, Faris Mohammed Ali, Zainal Abidin Talib

Abstract:

In this study, we present new data for the thermal conductivity enhancement in four nanofluids containing 11, 25, 50, and 63 nm diameter aluminum oxide (Al2O3) nanoparticles in distilled water. The nanofluids were prepared using the single-step method (i.e., by dispersing the nanoparticles directly in the base fluid) with ultrasonication for approximately 7 hours. The transient hot-wire laser beam deflection technique was used to measure the thermal conductivity and thermal diffusivity of the prepared nanofluids. These properties were obtained by fitting the experimental data to numerical data simulated for aluminum oxide in distilled water. The results show that the thermal conductivity and thermal diffusivity of the nanofluids increase non-linearly as the particle size increases, while they increase linearly with the volume fraction concentration. We believe that the interfacial layer between solid and fluid is the main factor in the enhancement of the thermal conductivity and thermal diffusivity of the Al2O3 nanofluids in the present work.
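
In the ideal transient hot-wire analysis, the thermal conductivity follows from the slope of the temperature rise plotted against ln(t), k = q/(4π·slope). A short sketch of that reduction step on synthetic data; the laser-beam deflection readout itself is not modeled here:

```python
import numpy as np

def hot_wire_conductivity(t, delta_T, q_per_length):
    """k from the ideal transient hot-wire relation
    dT = (q / 4 pi k) * ln(t) + const  =>  k = q / (4 pi slope)."""
    slope, _ = np.polyfit(np.log(t), delta_T, 1)
    return q_per_length / (4 * np.pi * slope)

# Synthetic record: water-like conductivity k ~ 0.6 W/mK, q = 1.5 W/m.
t = np.linspace(0.5, 10, 200)                 # s
k_true, q = 0.6, 1.5
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.05

print(round(hot_wire_conductivity(t, dT, q), 3))   # ~0.6 W/mK
```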

Keywords: transient hot wire-laser beam technique, Al2O3 nanofluid, particle size, volume fraction concentration

Procedia PDF Downloads 553
24252 Analysis and Comparison of Prototypes of an Ergometric Step in a Multidisciplinary Design Process

Authors: M. B. Ricardo De Oliveira, A. Borghi-Silva, L. Di Thommazo, D. Braatz

Abstract:

Prototypes can be understood as representations of a product concept. Furthermore, prototyping constitutes an important stage in product development and results in better team communication, decision making, testing, and problem solving through feedback. Although recent studies suggest several prototyping methods for designers to choose from, the methods present different advantages, such as cost and time reduction, performance, and fidelity, which should be taken into account during a product development project. In this multidisciplinary study, involving the areas of physiotherapy, engineering, and computer science (hardware and software), we compared four prototypes of an ergometric step: a virtual prototype, a 3D-printed prototype, a bricolage prototype, and a prototype manufactured by a third-party company. These prototypes were evaluated in a comparative-qualitative approach for their contribution to the maturation of the product concept, the different prototyping methods used, and the advantages and disadvantages of each, based on the product's design specifications (performance, safety, materials, cost, maintenance, usability, ergonomics, and portability). Our results indicated that although prototypes show overall advantages, all of them have limitations, making it crucial to have different methods of testing and interacting with the product. Additionally, the virtual and 3D-printed prototypes were essential at early stages of the project due to their low cost and high-fidelity representation of the product, while the prototype manufactured by the third-party company and the bricolage prototype introduced functional tests in real scenarios, allowing more detailed evaluations. This study also resulted in a patent for an ergometric step.

Keywords: product design, product development, prototypes, step

Procedia PDF Downloads 117
24251 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of a dynamic connectivity analysis between different brain regions are used for the classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed, based on the same analysis method. As far as we know, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
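
The imaginary part of the complex Pearson correlation between two channels can be computed from their analytic signals. A minimal sketch over a single fixed window, without the adaptive RICI rule that the paper adds on top:

```python
import numpy as np
from scipy.signal import hilbert

def im_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient
    between the analytic signals of two real channels."""
    zx, zy = hilbert(x), hilbert(y)
    zx -= zx.mean()
    zy -= zy.mean()
    c = np.sum(zx * np.conj(zy)) / (
        np.sqrt(np.sum(np.abs(zx) ** 2)) * np.sqrt(np.sum(np.abs(zy) ** 2)))
    return c.imag

# Two synthetic EEG-like channels with a 90-degree phase lag at 10 Hz.
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(round(im_cpcc(x, y), 3))    # near 1 for a quarter-period lag
```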

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 214
24250 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system that can identify and sort peaberries automatically, at low cost, for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but neither is it a normal bean: it forms as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise, the tastes of the beans will be mixed, degrading the result. During roasting, all the beans' shapes, sizes, and weights should be uniform; otherwise, the larger beans will take more time to roast through. A peaberry has a different size and a different shape even when it has the same weight as a normal bean, and it roasts more slowly than normal beans; therefore, neither size nor weight alone provides a good criterion for selecting the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult, even for trained specialists, because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) as machine learning techniques to discriminate peaberries from normal beans. As a result, better performance was obtained with the CNN than with the SVM for discriminating peaberries. The trained artificial neural network, built with a high-performance CPU and GPU, will simply be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
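
The CNN-versus-SVM comparison can be outlined compactly: a small Keras network against an SVM baseline on flattened pixels, with synthetic stand-ins for the green-bean images, which are not public here:

```python
import numpy as np
from sklearn.svm import SVC
from tensorflow import keras

# Synthetic stand-ins for 64x64 grayscale green-bean images, 2 classes.
rng = np.random.default_rng(0)
X = rng.random((600, 64, 64, 1)).astype("float32")
y = rng.integers(0, 2, 600)

# SVM baseline on flattened pixel vectors.
svm = SVC(kernel="rbf").fit(X[:500].reshape(500, -1), y[:500])
print("SVM acc:", svm.score(X[500:].reshape(100, -1), y[500:]))

# Small CNN: two convolutional blocks and a dense classifier head.
cnn = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X[:500], y[:500], epochs=3, batch_size=32, verbose=0)
print("CNN acc:", cnn.evaluate(X[500:], y[500:], verbose=0)[1])
```

A network of this size also fits comfortably within the memory and compute budget of the Raspberry Pi deployment target the abstract mentions.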

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 144
24249 The Magic Bullet in Africa: Exploring an Alternative Theoretical Model

Authors: Daniel Nkrumah

Abstract:

The Magic Bullet theory was a popular media effects theory that defined the power of the mass media to alter the beliefs and perceptions of its audiences. However, following the People's Choice study, the theory was said to have been disproved and was supplanted by the Two-Step Flow theory. This paper examines the relevance of the Magic Bullet theory in Africa and establishes whether it is still relevant in Africa's media spaces and societies. Using selected cases on the continent, it adopts a grounded theory approach and explores a new theoretical model, advancing the argument that the Two-Step Flow theory, though important and valid, was ill-conceived as a direct replacement for the Magic Bullet theory.

Keywords: magic bullet theory, two-step flow theory, media effects, African media

Procedia PDF Downloads 127
24248 Role of Cellulose Fibers in Tuning the Microstructure and Crystallographic Phase of α-Fe₂O₃ and α-FeOOH Nanoparticles

Authors: Indu Chauhan, Bhupendra S. Butola, Paritosh Mohanty

Abstract:

It is well known that the properties of materials change as their size approaches the nanoscale, owing to the high surface-area-to-volume ratio. In the last few decades, however, the tenet 'structure dictates function' has quickly been adopted by researchers working with nanomaterials. The design and exploitation of nanoparticles with tailored shapes and sizes has become one of the primary goals of materials science researchers seeking to expose the properties of nanostructures. To date, various methods, including soft/hard-template- and surfactant-assisted hydrothermal reactions, seed-mediated growth, capping-molecule-assisted synthesis, the polyol process, etc., have been adopted to synthesize nanostructures with controlled size, shape, and monodispersity. However, controlling the shape and size of nanoparticles remains an ultimate challenge of modern materials research; in particular, many efforts have been devoted to the rational and skillful control of hierarchical and complex nanostructures. In this work, the role of cellulose in manipulating nanostructures is discussed. Nanoparticles of α-Fe₂O₃ (diameter ca. 15 to 130 nm) were immobilized on the cellulose fiber surface by a single-step in-situ hydrothermal method, whereas nanoflakes of α-FeOOH with thickness ca. 25 nm and length ca. 250 nm were obtained by the same method in the absence of cellulose fibers. A possible nucleation and growth mechanism for the formation of nanostructures on cellulose fibers is proposed. The covalent bond formation between the cellulose fibers and the nanostructures is discussed, with supporting evidence from spectroscopic and other analytical studies such as Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy.

Keywords: cellulose fibers, α-Fe₂O₃, α-FeOOH, hydrothermal, nanoflakes, nanoparticles

Procedia PDF Downloads 150
24247 Language Shapes Thought: An Experimental Study on English and Mandarin Native Speakers' Sequencing of Size

Authors: Hsi Wei

Abstract:

Does the language we speak affect the way we think? This question has long been discussed from different perspectives. In this article, the issue is examined with an experiment on how speakers of different languages tend to sequence the sizes of general objects. An essential difference between English and Mandarin usage is the order in which we sequence the sizes of places or objects. In English, when describing the location of something, we may say, for example, 'The pen is inside the trashcan next to the tree at the park.' In Mandarin, however, we would say, 'The pen is at the park next to the tree inside the trashcan.' Clearly, English generally sequences from small to big, while Mandarin does the opposite. The experiment was therefore conducted to test whether this difference between the languages affects speakers' ability to perform the two orderings. There were two groups of subjects: one consisted of native English speakers, the other of native Mandarin speakers. Within the experiment, nouns were shown to the subjects in groups of three, in their native languages. Before seeing each group of nouns, subjects first received an instruction: 'big to small', 'small to big', or 'repeat'. The subjects then had to sequence the group of nouns according to the instruction they received, or simply repeat the nouns. After completing each sequencing or repetition in their minds, they pushed a button as a reaction. The repetition condition was designed to capture each person's mere reading time. As the results showed, native English speakers reacted more quickly to the 'small to big' sequencing; native Mandarin speakers, on the other hand, reacted more quickly to the 'big to small' sequence. To conclude, this study may be of importance as support for linguistic relativism, suggesting that the language we speak does shape the way we think.

Keywords: language, linguistic relativism, size, sequencing

Procedia PDF Downloads 281