Search results for: time step size

23537 Some Factors Affecting Farm Size of Duck Farming

Authors: Veronica Sri Lestari, Ahmad Ramadhan Siregar

Abstract:

The purpose of this research was to identify factors affecting the farm size of duck farming (a case study in Pinrang district, South Sulawesi). The research was conducted in 2013. The total sample was 45 duck farmers, selected from 6 regions in Mattiro Sompe sub-district, Pinrang district, South Sulawesi province, through stratified random sampling. Data were collected through interviews using questionnaires and through observation. A multiple regression equation was used to analyze the data. The dependent variable was duck population, while age of respondents, farming experience, land size, education, and income level served as independent variables. The research yielded an R² of 0.920. Taken simultaneously, age of respondents, farming experience, land size, education, and income level significantly influenced the farm size of duck farming (P < 1%); taken individually, only income level significantly influenced farm size (P < 1%).
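
For readers who want to reproduce this kind of analysis, a minimal sketch of the regression step is shown below. It is not the authors' code; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch of the multiple regression described above (statsmodels).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("duck_survey.csv")           # hypothetical file with the 45 respondents
y = df["duck_population"]                     # dependent variable: farm size (duck population)
X = df[["age", "experience", "land_size", "education", "income"]]
X = sm.add_constant(X)                        # add intercept term

model = sm.OLS(y, X).fit()
print(model.rsquared)                         # should be close to the reported R^2 = 0.920
print(model.summary())                        # joint F-test and per-variable t-tests
```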

Keywords: duck, dry system, factors, farm-size

Procedia PDF Downloads 465
23536 Synthesis and Functionalization of Gold Nanostars for ROS Production

Authors: H. D. Duong, J. I. Rhee

Abstract:

In this work, star-shaped gold nanoparticles (gold nanostars, GNS) were synthesized and coated with N-(3-aminopropyl) methacrylamide hydrochloride (PA) and mercaptopropionic acid (MPA) to functionalize their surface with amine and carboxyl groups, and were then investigated for ROS production. Large GNS with multiple tips appear to be superior in singlet oxygen production compared with small GNS with fewer tips. However, the functionalized GNS of small size could also roughly double the efficiency of singlet oxygen production compared with the intact GNS. In combination with methylene blue (MB+), the functionalized GNS could enhance the singlet oxygen production of MB+ after 1 h of LED750 irradiation, and no difference between small and large sizes was observed in this reaction. In combination with 5-aminolevulinic acid (ALA), only the PA-coated GNS could enhance the singlet oxygen production of ALA, with small PA-coated GNS having a slightly larger effect than the bigger ones. However, small MPA-coated GNS had a strong effect on the hydroxyl radical production of ALA.

Keywords: 5-aminolevulinic acid, gold nanostars, methylene blue, ROS production

Procedia PDF Downloads 320
23535 Principal Component Analysis Applied to Electric Power Systems: A Practical Guide for Algorithms

Authors: John Morales, Eduardo Orduña

Abstract:

Principal Component Analysis (PCA) theory is currently used to develop algorithms for Electric Power Systems (EPS). In this context, this paper presents a practical tutorial of the technique, detailing its concept and its on-line and off-line mathematical foundations, which are necessary and desirable in EPS algorithms. Features of the eigenvectors that are very useful for real-time processing are explained, showing how it is possible to select these parameters through a direct optimization. In addition, in order to show the application of PCA to off-line and on-line signals, a step-by-step example using Matlab commands is presented. Finally, a list of different approaches using PCA is given, together with some works that could be analyzed using this tutorial.
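
Although the tutorial itself uses Matlab, the off-line core of PCA can be sketched in a few lines. The snippet below is an illustrative NumPy version, with a random placeholder matrix standing in for recorded EPS signals.

```python
# Minimal off-line PCA sketch: eigen-decomposition of the signal covariance matrix.
import numpy as np

X = np.random.randn(2000, 6)                  # placeholder for recorded EPS signals (samples x channels)
Xc = X - X.mean(axis=0)                       # center each channel
C = np.cov(Xc, rowvar=False)                  # covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)          # eigen-decomposition (ascending order)
order = np.argsort(eigvals)[::-1]             # reorder from largest to smallest eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1   # keep 95% of the variance
scores = Xc @ eigvecs[:, :k]                  # projection onto the retained principal components
print(f"{k} components retained, score matrix shape: {scores.shape}")
```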

Keywords: practical guide, on-line, off-line, algorithms, faults

Procedia PDF Downloads 531
23534 Self-Assembled Tin Particles Made by Plasma-Induced Dewetting

Authors: Han Joo Choe, Soon-Ho Kwon, Jung-Joong Lee

Abstract:

Tin particles of various sizes and distributions were self-assembled by plasma treating tin films deposited on silicon oxide substrates. Plasma treatment was conducted using an inductively coupled plasma (ICP) source. A range of ICP powers and topographically templated substrates was evaluated to observe changes in particle size and particle distribution. Scanning electron microscopy images of the particles were analyzed using computer software. The evolution of tin film dewetting into particles initiated from hole nucleation at grain boundaries. Increasing the ICP power during plasma treatment produced a larger number of particles per area, a smaller particle size, and a narrower particle-size distribution. Topographic templates were also effective in positioning and controlling the size of the particles. By combining the effects of ICP power and topographic templates, particles of similar size and well-ordered distribution were obtained.

Keywords: dewetting, particles, plasma, tin

Procedia PDF Downloads 226
23533 The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates

Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim

Abstract:

The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables. Each latent variable has three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than those of PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes. However, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.
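
As an illustration of the simulation setup (not the authors' R code), the sketch below generates data from a comparable latent-variable model in Python: four exogenous latent variables, one endogenous latent variable, and three indicators per latent variable. The loadings and path coefficients are arbitrary illustration values.

```python
# Data-generating step of a Monte Carlo study for a 5-latent-variable SEM.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, loading=0.7, paths=(0.3, 0.3, 0.2, 0.2)):
    exo = rng.normal(size=(n, 4))                        # four exogenous latent variables
    endo = exo @ np.array(paths) + rng.normal(scale=0.5, size=n)   # one endogenous latent variable
    latents = np.column_stack([exo, endo])
    indicators = []
    for j in range(latents.shape[1]):                    # three indicators per latent variable
        for _ in range(3):
            indicators.append(loading * latents[:, j] + rng.normal(scale=0.5, size=n))
    return np.column_stack(indicators)                   # n x 15 observed indicator matrix

data_small = simulate(50)    # small-sample condition
data_large = simulate(500)   # large-sample condition
```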

Keywords: CB-SEM, Monte Carlo simulation, normality conditions, non-normality, PLS-SEM

Procedia PDF Downloads 371
23532 An Experimental Study on the Effect of Operating Parameters during the Micro-Electro-Discharge Machining of Ni Based Alloy

Authors: Asma Perveen, M. P. Jahan

Abstract:

Ni alloys cover a wide range of applications, such as the automotive, oil and gas, and aerospace industries. However, these alloys pose challenges for conventional machining technologies. On the other hand, micro-electro-discharge machining (micro-EDM) is a non-conventional machining method that uses controlled spark energy to remove material irrespective of the material's hardness. There has always been strong interest from industry in developing optimum methodologies and parameters in order to enhance the productivity of micro-EDM in terms of reducing machining time and tool wear for different alloys. Therefore, the aim of this study is to investigate the effects of the micro-EDM process parameters in order to find their optimal values. The input process parameters include voltage, capacitance, and electrode rotational speed, whereas the output parameters considered are machining time, entrance diameter of the hole, overcut, tool wear, and crater size. The surface morphology and element characterization are also investigated with the use of SEM and EDX analysis. The experimental results indicate a reduction of machining time with increasing discharge energy. Discharge energy also contributes to the enlargement of the entrance diameter as well as the overcut. In addition, tool wear shows a reduction with increasing discharge energy. Moreover, crater size is found to increase along with discharge energy.

Keywords: micro holes, micro EDM, Ni Alloy, discharge energy

Procedia PDF Downloads 245
23531 Static Priority Approach to Under-Frequency Based Load Shedding Scheme in Islanded Industrial Networks: Using the Case Study of Fatima Fertilizer Company Ltd - FFL

Authors: S. H. Kazmi, T. Ahmed, K. Javed, A. Ghani

Abstract:

In this paper, a static scheme of under-frequency-based load shedding is considered for chemical and petrochemical industries with islanded distribution networks that rely heavily on the primary commodity, in order to ensure minimum production loss, plant downtime, or critical equipment shutdown. A simple methodology is proposed for in-house implementation of this scheme using under-frequency relays, and a step-by-step guide is provided, including the techniques to calculate maximum percentage overloads, frequency decay rates, time-based frequency response, and frequency-based time response of the system. A case study of the FFL electrical system is utilized, presenting the actual system parameters and the employed load shedding settings following the same series of steps. The arbitrary settings are then verified for worst overload conditions (loss of a generation source in this case), and the comprehensive system response is investigated.
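
As context for the frequency-decay calculation mentioned above, the sketch below uses the standard swing-equation approximation df/dt ≈ -ΔP·f_n/(2H), neglecting load damping. All numbers are illustrative assumptions, not FFL's actual parameters or relay settings.

```python
# Textbook-style estimate of initial frequency decay after loss of generation in an island.
f_nominal = 50.0      # Hz, nominal system frequency
H = 4.0               # system inertia constant, s (on the common MVA base) - assumed value
overload = 0.25       # per-unit generation deficit after loss of a source - assumed value

decay_rate = -overload * f_nominal / (2.0 * H)        # Hz/s, swing-equation approximation
print(f"Initial frequency decay rate: {decay_rate:.2f} Hz/s")

# Time for frequency to fall from nominal to a relay pickup, assuming constant decay.
pickup = 48.8         # Hz, hypothetical first load-shedding stage
time_to_pickup = (pickup - f_nominal) / decay_rate
print(f"Time to reach {pickup} Hz: {time_to_pickup:.2f} s")
```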

Keywords: islanding, under-frequency load shedding, frequency rate of change, static UFLS

Procedia PDF Downloads 456
23530 Models to Calculate Lattice Spacing, Melting Point and Lattice Thermal Expansion of Ga₂Se₃ Nanoparticles

Authors: Mustafa Saeed Omar

Abstract:

The formula which contains the maximum increase of mean bond length, the melting entropy, and the critical particle radius is used to calculate the lattice volume of nanoscale crystals of Ga₂Se₃. This compound belongs to the binary group III₂VI₃. The critical radius is calculated from the value of the first surface atomic layer height, which is equal to 0.336 nm. The size-dependent mean bond length is calculated by using an equation free from fitting parameters. The size-dependent lattice parameter is then used accordingly to calculate the size-dependent lattice volume. The lattice volume in the nanoscale region increases to about 77.6 Å³, which is up to four times its bulk value of 19.97 Å³. From the values of the nanoscale size dependence of the lattice volume, the size dependence of the melting temperature is calculated. The melting temperature decreases as the nanoparticle size is reduced and becomes zero when the radius reaches its critical value. The bulk melting temperature of Ga₂Se₃, for example, is 1293 K. From the size-dependent melting temperature and mean bond length, the size-dependent lattice thermal expansion is calculated. The lattice thermal expansion decreases with decreasing nanoparticle size and reaches its minimum value as the radius drops to about 5 nm.

Keywords: Ga₂Se₃, lattice volume, lattice thermal expansion, melting point, nanoparticles

Procedia PDF Downloads 141
23529 Analysis of Dust Particles in Snow Cover in the Surroundings of the City of Ostrava: Particle Size Distribution, Zeta Potential and Heavy Metal Content

Authors: Roman Marsalek

Abstract:

In this paper, snow samples containing dust particles from several sampling points around the city of Ostrava were analyzed. The pH values of sampled snow were measured and solid particles analyzed. Particle size, zeta potential and content of selected heavy metals were determined in solid particles. The pH values of most samples lay in the slightly acid region. Mean values of particle size ranged from 290.5 to 620.5 nm. Zeta potential values varied between -5 and -26.5 mV. The following heavy metal concentration ranges were found: copper 0.08-0.75 mg/g, lead 0.05-0.9 mg/g, manganese 0.45-5.9 mg/g and iron 25.7-280.46 mg/g. The highest values of copper and lead were found in the vicinity of busy crossroads, and on the contrary, the highest levels of manganese and iron were detected close to a large steelworks. The proportion between pH values, zeta potentials, particle sizes and heavy metal contents was established. Zeta potential decreased with rising pH values and, simultaneously, heavy metal content in solid particles increased. At the same time, higher metal content corresponded to lower particle size.

Keywords: dust, snow, zeta potential, particle size distribution, heavy metals

Procedia PDF Downloads 340
23528 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force-plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications for foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than working with the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, once some restrictions are properly removed.
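
To make the two-classifier comparison concrete, the hedged sketch below contrasts a small neural network and an SVM on one yes/no foot-disorder label. It is not the authors' pipeline, and the feature and label files are hypothetical placeholders.

```python
# Illustrative DNN-vs-SVM comparison on plantar-pressure features (scikit-learn).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.load("plantar_features.npy")   # hypothetical normalized force-plate features
y = np.load("disorder_labels.npy")    # hypothetical 0/1 labels for one foot disorder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("DNN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```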

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 316
23527 Various Modification of Electrochemical Barrier Layer Thinning of Anodic Aluminum Oxide

Authors: W. J. Stępniowski, W. Florkiewicz, M. Norek, M. Michalska-Domańska, E. Kościuczyk, T. Czujko

Abstract:

In this paper, two options for anodic alumina barrier layer thinning are demonstrated. The approaches differ in the duration of the voltage step. It was found that too long a step in the barrier layer thinning process leads to chemical etching of the nanopores at their tops. At the bottoms, the pores are not fully opened, which is disadvantageous for further applications in nanofabrication. On the other hand, when the duration of the voltage step is controlled by the current density (the current density cannot exceed 75% of the value recorded during the previous voltage step), the pores are fully opened. However, the pores at the bottom obtained with this procedure have a smaller diameter; nevertheless, this procedure provides electrical contact between the bare aluminum substrate and the electrolyte, which is suitable for template-assisted electrodeposition, one of the most cost-efficient synthesis methods in nanotechnology.

Keywords: anodic aluminum oxide, anodization, barrier layer thinning, nanopores

Procedia PDF Downloads 301
23526 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of existing theory which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures are considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component), as a variant measure of economic growth in addition to real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for the two measures of government size, and the estimated optimum shares are 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
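
The "optimum share" in studies of this kind is typically the vertex of a fitted inverted-U (quadratic) relationship. The sketch below illustrates that step on synthetic data; the series is a placeholder, not the Nigerian data used in the paper.

```python
# Inverted-U growth regression: growth = a*size^2 + b*size + c, optimum at -b/(2a).
import numpy as np

size = np.linspace(5, 35, 43)                              # government size, % of GDP (synthetic)
growth = -0.02 * (size - 13.0) ** 2 + 4.0 + np.random.normal(0, 0.3, size.size)

a, b, c = np.polyfit(size, growth, deg=2)                  # fit the quadratic
optimum = -b / (2 * a)                                     # vertex of the inverted U
print(f"Estimated optimal government size: {optimum:.2f}% of GDP")
```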

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 388
23525 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with contaminated attribute values created by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.

Keywords: rule induction, decision table, missing data, noise

Procedia PDF Downloads 366
23524 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex, and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
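
A condensed, hedged sketch of the three-step idea follows. It is not the authors' exact algorithm: mutual information stands in for the class-correlation measure, hierarchical clustering of the feature correlation matrix stands in for the automatic clustering step, and the feature matrix is a random placeholder.

```python
# (1) drop features weakly related to the class, (2) cluster remaining features,
# (3) keep one representative per cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.feature_selection import mutual_info_classif

X = np.random.rand(500, 60)              # placeholder color-texture feature matrix
y = np.random.randint(0, 5, 500)         # placeholder texture class labels

# Step 1: remove irrelevant features using a class-relevance measure
relevance = mutual_info_classif(X, y, random_state=0)
keep = np.where(relevance > np.median(relevance))[0]

# Step 2: cluster the retained features on their correlation structure
corr = np.corrcoef(X[:, keep], rowvar=False)
dist = squareform(1 - np.abs(corr), checks=False)
clusters = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")

# Step 3: keep the most class-relevant feature of each cluster
selected = [keep[np.argmax(np.where(clusters == c, relevance[keep], -np.inf))]
            for c in np.unique(clusters)]
print("Selected feature indices:", sorted(selected))
```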

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 100
23523 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on the adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamical connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with wide window size and the high susceptibility to noise encountered in constant sliding window analysis with narrow window size. This method overcomes these shortcomings by dynamically adjusting the window size using the RICI rule. This method extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also done and based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
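
The central quantity here, the imaginary part of the complex Pearson correlation between two analytic signals, can be sketched compactly. The snippet below is illustrative only (synthetic signals, one fixed window) and does not reproduce the adaptive RICI window rule.

```python
# Imaginary part of the complex Pearson correlation of two analytic signals.
import numpy as np
from scipy.signal import hilbert

fs = 256
t = np.arange(0, 2, 1 / fs)
x1 = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)   # synthetic channel 1
x2 = np.cos(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)   # synthetic channel 2

def imag_cpcc(a, b):
    """Imaginary part of the complex Pearson correlation of two analytic signals."""
    za, zb = hilbert(a), hilbert(b)
    za = (za - za.mean()) / za.std()
    zb = (zb - zb.mean()) / zb.std()
    return np.imag(np.mean(za * np.conj(zb)))

print(imag_cpcc(x1, x2))   # magnitude close to 1 here: the two signals are ~90 degrees out of phase
```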

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 170
23522 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and on the island of Mallorca. These inventories contain general information about the events, but most importantly they contain detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from drone or TLS (Terrestrial Laser Scanner) surveys, and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, Geoserver, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with Geoserver. A previous phase was the creation of several views in PostGIS to show the information at different visualization scales and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) in their real size, where possible, and at their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 133
23521 Hydrodynamics Study on Planing Hull with and without Step Using Numerical Solution

Authors: Koe Han Beng, Khoo Boo Cheong

Abstract:

The rising interest in stepped hull design has been driven by the demand for more efficient high-speed boats. At the same time, the need for an accurate prediction method for stepped planing hulls is becoming more important. Although understanding the flow at high Froude numbers is the key to designing a practical stepped hull, studies of stepped hulls have been carried out mainly in towing tanks, which is time-consuming and costly for the initial design phase. Here, the feasibility of predicting the hydrodynamics of high-speed planing hulls, both with and without a step, using computational fluid dynamics (CFD) with the volume-of-fluid (VOF) methodology is studied. First, the flow around the prismatic body is analyzed, and the force generated and its center of pressure are compared with available experimental and empirical data from the literature. The wake behind the transom on the keel line as well as on the quarter-beam buttock line is then compared with the available data; this is important since the afterbody flow of a stepped hull is subjected to the wake of the forebody. Finally, the calm-water performance prediction of a conventional planing hull and its stepped version is analyzed. Overset mesh methodology is employed in solving for the dynamic equilibrium of the hull. The resistance, trim, and heave are then compared with the experimental data. The resistance is found to be predicted well, and the dynamic equilibrium solved by the numerical method is deemed acceptable. This means that computational fluid dynamics will be very useful in further studies of the complex flow around stepped hulls and has potential usage in the design phase.

Keywords: planing hulls, stepped hulls, wake shape, numerical simulation, hydrodynamics

Procedia PDF Downloads 260
23520 Quintic Spline Solution of Fourth-Order Parabolic Equations Arising in Beam Theory

Authors: Reza Mohammadi, Mahdieh Sahebi

Abstract:

We develop a method based on polynomial quintic spline for numerical solution of fourth-order non-homogeneous parabolic partial differential equation with variable coefficient. By using polynomial quintic spline in off-step points in space and finite difference in time directions, we obtained two three level implicit methods. Stability analysis of the presented method has been carried out. We solve four test problems numerically to validate the derived method. Numerical comparison with other methods shows the superiority of presented scheme.

Keywords: fourth-order parabolic equation, variable coefficient, polynomial quintic spline, off-step points

Procedia PDF Downloads 320
23519 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but it is not a normal bean either. The peaberry forms as a single, relatively round seed in a coffee cherry, instead of the usual flat-sided pair of beans. It has a different value and flavor. To make the taste of the coffee better, it is necessary to separate the peaberries and normal beans before roasting the green coffee beans. Otherwise, the tastes of the beans will be mixed, and the result will be bad. During roasting, the shape, size, and weight of all the beans should be uniform; otherwise, the larger beans will take more time to roast through. The peaberry has a different size and a different shape even though it has the same weight as normal beans. The peaberry also roasts more slowly than normal beans. Therefore, neither technique provides a good option for selecting the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) as machine learning techniques to discriminate peaberries and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained in this work on a high-performance CPU and GPU will simply be installed on an inexpensive, computationally limited Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 116
23518 Determination Power and Sample Size Zero-Inflated Negative Binomial Dependent Death Rate of Age Model (ZINBD): Regression Analysis Mortality Acquired Immune Deficiency Syndrome (AIDS)

Authors: Mohd Asrul Affendi Bin Abdullah

Abstract:

Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with this type of model. This paper shows how to obtain a power approximation for a categorical covariate and then extends it to zero-inflated models. The Wald test was chosen for determining the power and sample size for the AIDS death rate because it is frequently used, owing to its approachability and to several major recent contributions to sample size calculation for this test. Power calculation can be conducted when covariates are used in modeling 'excess zero' data with a categorical covariate. An analysis of an AIDS death rate study is used in this paper. The aim of this study is to determine the power for the sample size (N = 945) of the categorical death rate, based on the parameter estimates in the simulation study.

Keywords: power sample size, Wald test, standardize rate, ZINBDR

Procedia PDF Downloads 409
23517 Language Shapes Thought: An Experimental Study on English and Mandarin Native Speakers' Sequencing of Size

Authors: Hsi Wei

Abstract:

Does the language we speak affect the way we think? This question has been discussed for a long time from different aspects. In this article, the issue is examined with an experiment on how speakers of different languages tend to sequence the sizes of general objects. An essential difference between the usage of English and Mandarin is the way we sequence the size of places or objects. In English, when describing the location of something we may say, for example, ‘The pen is inside the trashcan next to the tree at the park.’ In Mandarin, however, we would say, ‘The pen is at the park next to the tree inside the trashcan.’ It is clear that English generally uses the small-to-big sequence while Mandarin uses the opposite. Therefore, an experiment was conducted to test whether this difference between the languages affects the speakers’ ability to perform the two kinds of sequencing. There were two groups of subjects; one consisted of English native speakers, the other of Mandarin native speakers. Within the experiment, three nouns were shown as a group to the subjects in their native language. Before they saw the nouns, they first received an instruction of ‘big to small’, ‘small to big’, or ‘repeat’. The subjects then had to sequence the following group of nouns according to the instruction they received, or simply repeat the nouns. After completing each sequencing or repetition in their minds, they pushed a button as a response. The repetition condition was designed to measure each person’s baseline reading time. As the results of the experiment showed, English native speakers reacted more quickly to the ‘small to big’ sequencing; on the other hand, Mandarin native speakers reacted more quickly to the ‘big to small’ sequence. To conclude, this study may be of importance as support for linguistic relativism, in that the language we speak does shape the way we think.

Keywords: language, linguistic relativism, size, sequencing

Procedia PDF Downloads 250
23516 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data is what has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only the information carriers. This step may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
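
As an illustration of the denoising step (not the paper's exact settings: the wavelet, decomposition level, and threshold are assumptions), the sketch below applies soft wavelet thresholding to one row of a word-frequency contingency table with PyWavelets.

```python
# Soft wavelet thresholding of a noisy contingency-table row.
import numpy as np
import pywt

row = np.random.poisson(lam=3.0, size=128).astype(float)   # placeholder word-frequency row

coeffs = pywt.wavedec(row, "db4", level=3)                  # wavelet decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise estimate from the finest scale
thresh = sigma * np.sqrt(2 * np.log(row.size))              # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised_row = pywt.waverec(denoised, "db4")[: row.size]    # reconstruct the denoised row
```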

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 255
23515 A CMOS Capacitor Array for ESPAR with Fast Switching Time

Authors: Jin-Sup Kim, Se-Hwan Choi, Jae-Young Lee

Abstract:

An 8-bit CMOS capacitor array is designed for use in an electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows fast response times in its rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it shows a comparable tuning range and switching time with low power consumption. Using a 0.18 um CMOS process, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4 GHz. Including the 2x4 decoder for the control interface, the chip size is 350 um x 145 um. Current consumption is about 80 nA at 1.8 V operation.

Keywords: CMOS capacitor array, ESPAR, SOI, SOS, switching time

Procedia PDF Downloads 565
23514 Analysis and Comparison of Prototypes of an Ergometric Step in a Multidisciplinary Design Process

Authors: M. B. Ricardo De Oliveira, A. Borghi-Silva, L. Di Thommazo, D. Braatz

Abstract:

Prototypes can be understood as representations of a product concept. Prototyping is an important stage in product development and results in better team communication, decision making, testing, and problem solving through feedback. Although recent studies suggest several prototyping methods for designers to choose from, these methods present different advantages, such as cost and time reduction, performance, and fidelity, which should be taken into account during a product development project. In this multidisciplinary study, involving the areas of physiotherapy, engineering, and computer science (hardware and software), we compared four developed prototypes of an ergometric step: a virtual prototype, a 3D-printed prototype, a bricolage prototype, and a prototype manufactured by a third-party company. These prototypes were evaluated in a comparative-qualitative approach for their contribution to the maturation of the product concept, the different prototyping methods used, and the advantages and disadvantages of each one based on the product's design specifications (performance, safety, materials, cost, maintenance, usability, ergonomics, and portability). Our results indicated that although prototypes show overall advantages, all of them have limitations, making it crucial to have different methods of testing and interacting with the product. Additionally, the virtual and 3D-printed prototypes were essential at early stages of the project due to their low cost and high-fidelity representation of the product, while the prototype manufactured by a third-party company and the bricolage prototype introduced functional tests in real scenarios, allowing more detailed evaluations. This study also resulted in a patent for an ergometric step.

Keywords: product design, product development, prototypes, step

Procedia PDF Downloads 88
23513 Effect of Particles Size and Volume Fraction Concentration on the Thermal Conductivity and Thermal Diffusivity of Al2O3 Nanofluids Measured Using Transient Hot–Wire Laser Beam Deflection Technique

Authors: W. Mahmood Mat Yunus, Faris Mohammed Ali, Zainal Abidin Talib

Abstract:

In this study, we present new data for the thermal conductivity enhancement in four nanofluids containing 11, 25, 50, and 63 nm diameter aluminum oxide (Al2O3) nanoparticles in distilled water. The nanofluids were prepared using the single-step method (i.e., by dispersing nanoparticles directly in the base fluid) and were treated in an ultrasonic device for approximately 7 hours. The transient hot-wire laser beam displacement technique was used to measure the thermal conductivity and thermal diffusivity of the prepared nanofluids. The thermal conductivity and thermal diffusivity were obtained by fitting the experimental data to the numerical data simulated for aluminum oxide in distilled water. The results show that the thermal conductivity and thermal diffusivity of the nanofluids increase non-linearly as the particle size increases, while the thermal conductivity and thermal diffusivity of the Al2O3 nanofluids were observed to increase linearly with the volume fraction concentration. We believe that the interfacial layer between solid and fluid is the main factor behind the enhancement of the thermal conductivity and thermal diffusivity of the Al2O3 nanofluids in the present work.

Keywords: transient hot wire-laser beam technique, Al2O3 nanofluid, particle size, volume fraction concentration

Procedia PDF Downloads 519
23512 Removal of Copper from Wastewaters by Nano-Micro Bubble Ion Flotation

Authors: R. Ahmadi, A. Khodadadi, M. Abdollahi

Abstract:

The removal of copper from a dilute synthetic wastewater (10 mg/L) was studied by ion flotation at laboratory scale. Anionic sodium dodecyl sulfate (SDS) was used as a collector and ethanol as a frother. Different parameters such as pH, collector and frother concentrations, foam height, and bubble size distribution (multi-bubble ion flotation) were tested to determine the optimum flotation conditions in a Denver-type flotation machine. To investigate the effect of bubble size distribution, a nano-micro bubble generator was designed. The nano- and microbubbles generated in this way were combined with normal-size bubbles generated mechanically. Under the optimum conditions (SDS concentration: 192 mg/L, ethanol: 0.5% v/v, pH: 4, and froth height: 12.5 cm), the best removal obtained for the Cu/SDS system with a dry foam (water recovery: 15.5%) was 85.6%. Coalescence of the nano-microbubbles with the normal-size bubbles of the mechanical flotation cell improved the removal of Cu to a maximum floatability of 92.8% and reduced the water recovery to 13.1%. The flotation time decreased considerably, by 37.5%, when multi-bubble ion flotation was used.

Keywords: froth flotation, copper, water treatment, optimization, recycling

Procedia PDF Downloads 471
23511 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction

Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé

Abstract:

One of the main applications of machine learning is the prediction of time series, but more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is determined, i.e., twenty-four. Each of the dispositions is used to perform the prediction, with the main criteria being the training and prediction performances. The results obtained from a static architecture and a dynamic architecture of neural networks have shown that these performances are a function of the disposition of the input variables, and in a different way for each architecture. This analysis revealed that it is necessary to take the disposition of the input variables into account when developing a more optimal neural network model. Thus, a new neural network training algorithm is proposed, introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, showing that the proposed optimization approach can be useful in improving the accuracy of machine-learning-based time series prediction.
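
The exhaustive part of this idea (scoring all 4! = 24 input dispositions and keeping the best) can be sketched as below. The model and the wind data are placeholders, not the authors' networks or dataset, and for a plain fully connected network the score differences mainly reflect how a fixed initialization interacts with the column order.

```python
# Evaluate every ordering (disposition) of four input variables and keep the best.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.rand(1000, 4)                    # placeholder: four lagged wind-speed inputs
y = X @ np.array([0.5, 0.2, 0.2, 0.1]) + 0.05 * np.random.randn(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for perm in itertools.permutations(range(4)):  # 4! = 24 input dispositions
    idx = list(perm)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X_tr[:, idx], y_tr)
    results[perm] = mean_squared_error(y_te, model.predict(X_te[:, idx]))

best = min(results, key=results.get)
print("Best disposition:", best, "test MSE:", round(results[best], 5))
```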

Keywords: input variable disposition, machine learning, optimization, performance, time series prediction

Procedia PDF Downloads 67
23510 A Study of Microglitches in Hartebeesthoek Radio Pulsars

Authors: Onuchukwu Chika Christian, Chukwude Augustine Ejike

Abstract:

We carried out a statistical analysis of microglitch events in a sample of radio pulsars. The distribution of microglitch events in frequency (ν) and first frequency derivative ν˙ indicates that the size of a microglitch and the sign combinations of events in ν and ν˙ are purely random. Assuming that the probability of a given size of microglitch event occurring scales inversely with the absolute size of the event in both ν and ν˙, we constructed a cumulative distribution function (CDF) for the absolute sizes of microglitches. In most of the pulsars, the theoretical CDF matched the observed values. This is an indication that microglitches in pulsars may be interpreted as an avalanche process in which angular momentum is transferred erratically from the flywheel-like superfluid interior to the slowly decelerating solid crust. Analysis of the waiting times indicates that they are Poisson distributed, with mean microglitch rate <γ> ∼ 0.98 year^−1 for all the pulsars in our sample and <γ> / <∆T> ∼ 1. Correlation analysis showed that the relative absolute size of a microglitch event correlates strongly with the rotation period of the pulsar, with correlation coefficients r ∼ 0.7 and r ∼ 0.5 for events in ν and ν˙, respectively. The mean glitch rate and the number of microglitches (Ng) showed some dependence on the spin-down rate (r ∼ −0.6) and on the characteristic age of the pulsar (τ), with r ∼ −0.4/−0.5.

Keywords: methods: data analysis, stars: neutron, pulsars: general

Procedia PDF Downloads 427
23509 Invention of Novel Technique of Process Scale Up by Using Solid Dosage Form

Authors: Shashank Tiwari, S. P. Mahapatra

Abstract:

The aim of this technique is to reduce the number of process scale-up steps and to save time and cost for the industry. The technique minimises the steps of process scale-up. The new steps are: Novel Lab Scale, Novel Lab Scale Trials, Novel Trial Batches, Novel Exhibit Batches, and Novel Validation Batches. In these steps, the validation batches are not divided into three parts; instead, the data from the trial batches, exhibit batches, and validation batches are used and compiled for production and for validation. The technique also increases the batch size of the trial and exhibit batches. The new size of the trial batches is not less than fifty thousand, the exhibit batches increase to up to two lakh, and the validation batches to up to five lakh. After the batches are prepared, all their data and drug product are used for stability studies, the validation record is maintained, and the data are compiled for technology transfer to the production department for preparing market-size batches.

Keywords: batches, technique, preparation, scale up, validation

Procedia PDF Downloads 322
23508 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step stress test where underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten failure time of products and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte-Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 306