Search results for: number of order
21861 Effect of the Average Kits Birth Weight and of the Number of Born Alive per Litter on the Milk Production of Algerian Rabbit Raised in Aures Area
Abstract:
In order to characterize rabbit does of an Aures local population raised in Algeria, a study of their milk yield was realized in the experimental rabbitry of El Hadj Lakhdhar University. Milk production of does was measured every day during the days following 215 parturitions. It was estimated by weighing the female before and after the single daily suckling (10-15 min between the 2 weighing operations). The various calculated parameters were the quantity of milk produced per day, per week and the total quantity produced in 21 days, as well as the intake of milk by young rabbits. The analysis concerned the effects of the number of successive litters (3 classes: 1 to 3 and more) and of the average number of young rabbits suckled per litter (6 classes: from 1-2 kits to more than 6). During the 21 days of controlled lactation, the average litter size was 6±3. The rabbits of the Aures area produced on average 2544.34±747 g in 21 days, that is, 121 g of milk/day or 21 g of milk/kit/day. The milk yield increased from 526, 1035, 1240 and 2801 g to 760, 1365, 1715 and 3840 g for weeks 1, 2, 3 and the total period of lactation, respectively. Nevertheless, the milk production available per kit and per day decreased linearly with the number of kits in the litter for each of the 3 weeks considered. On the other hand, the milk yield was not affected by the weight at birth of kits.
Keywords: milk production, litter size, rabbit, Aures area, Algeria
Procedia PDF Downloads 521
21860 Number Sense Proficiency and Problem Solving Performance of Grade Seven Students
Authors: Laissa Mae Francisco, John Rolex Ingreso, Anna Krizel Menguito, Criselda Robrigado, Rej Maegan Tuazon
Abstract:
This study aims to determine and describe the existing relationship between the number sense proficiency and problem-solving performance of grade seven students from Victorino Mapa High School, Manila. A paper-and-pencil exam consisting of a 50-item number sense test and a 5-item problem-solving test, which measure their number sense proficiency and problem-solving performance, adapted from McIntosh, Reys, and Bana, was used as the research instrument. The data obtained from this study were interpreted and analyzed using the Pearson product-moment coefficient of correlation to determine the relationship between the two variables. It was found that students who were low in number sense proficiency tend to be the students with poor problem-solving performance, and students with medium number sense proficiency are most likely to have an average problem-solving performance. Likewise, students with high number sense proficiency are those who do excellently in problem-solving performance.
Keywords: number sense, performance, problem solving, proficiency
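For readers who want to reproduce this kind of analysis, the following is a minimal sketch of a Pearson product-moment correlation computed in Python; the score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (not the study's data or analysis code): Pearson product-moment
# correlation between hypothetical number-sense scores (out of 50) and
# problem-solving scores (out of 5).
import numpy as np
from scipy.stats import pearsonr

number_sense = np.array([12, 25, 33, 41, 18, 29, 45, 38, 22, 35])
problem_solving = np.array([1, 2, 3, 4, 1, 2, 5, 4, 2, 3])

r, p_value = pearsonr(number_sense, problem_solving)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```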
Procedia PDF Downloads 437
21859 Construction and Analysis of Partially Balanced Sudoku Design of Prime Order
Authors: Abubakar Danbaba
Abstract:
Sudoku squares have been widely used to design experiments where each treatment occurs exactly once in each row, column or sub-block. For some experiments, the size of a row (or column or sub-block) may be larger than the number of treatments. Since each treatment appears only once in each row (column or sub-block) with an additional empty cell, such designs are partially balanced Sudoku designs (PBSD) with NP-complete structures. This paper proposes methods for constructing PBSD of prime order of treatments by a modified Kronecker product and swaps of matrix rows (or columns) in cyclic order. In addition, a linear model and a procedure for the analysis of data for PBSD are proposed.
Keywords: Sudoku design, partial Sudoku, NP-complete, Kronecker product, row and column swap
Procedia PDF Downloads 272
21858 Entropy Production in Mixed Convection in a Horizontal Porous Channel Using Darcy-Brinkman Formulation
Authors: Amel Tayari, Atef Eljerry, Mourad Magherbi
Abstract:
The paper reports a numerical investigation of the entropy generation analysis due to mixed convection in laminar flow through a channel filled with porous media. The second law of thermodynamics is applied to investigate the entropy generation rate. The Darcy-Brinkman model is employed. The entropy generation due to heat transfer and friction dissipation has been determined in mixed convection by solving numerically the continuity, momentum and energy equations, using a control volume finite element method. The effects of the Darcy number, the modified Brinkman number and the Rayleigh number on the averaged entropy generation and the averaged Nusselt number are investigated. The Rayleigh number varied between 10³ ≤ Ra ≤ 10⁵ and the modified Brinkman number ranged between 10⁻⁵ ≤ Br ≤ 10⁻¹, with fixed values of porosity and Reynolds number at 0.5 and 10, respectively. The Darcy number varied between 10⁻⁶ ≤ Da ≤ 10.
Keywords: entropy generation, porous media, heat transfer, mixed convection, numerical methods, Darcy, Brinkman
Procedia PDF Downloads 410
21857 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels
Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina
Abstract:
The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, diffusing upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is ReMa⁄We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (ReMa⁄We small) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (ratio of the kinematic viscosity and the gas diffusivity in water) and the surface divergence β, i.e. K_L ∝ √(β⁄Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination. However, it will break down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for ReMa⁄We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger ReMa⁄We. This is an important result that indicates that - for various levels of contamination - the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that - with increasing levels of ReMa⁄We - the dependency of KL on β begins to break down as the increased damping of near-surface fluctuations results in an increased damping of β. Especially for large levels of contamination, this damping is so severe that KL is found to be underestimated significantly.
Keywords: contamination, gas transfer, surfactants, turbulence
Procedia PDF Downloads 300
21856 Optimization Method of the Number of Berth at Bus Rapid Transit Stations Based on Passenger Flow Demand
Authors: Wei Kunkun, Cao Wanyang, Xu Yujie, Qiao Yuzhi, Liu Yingning
Abstract:
The reasonable design of bus parking spaces can improve the traffic capacity of a station and reduce traffic congestion. In order to reasonably determine the number of berths at BRT (Bus Rapid Transit) stops, this study is based on actual bus rapid transit station observation data, scheduling data, and passenger flow data, and optimizes the number of station berths from the perspective of balancing supply and demand at the site. Combined with the classical capacity calculation model, this paper first analyzes the important factors affecting the traffic capacity of BRT stops by using SPSS PRO and MATLAB programming software, namely the distribution of BRT stops and the distribution of BRT stop time. Secondly, the calculation method of the classic Highway Capacity Manual (HCM) model is optimized based on the actual passenger demand of the station, and a method applicable to the actual number of station berths is proposed. Taking Gangding Station of the Zhongshan Avenue Bus Rapid Transit Corridor in Guangzhou as an example, based on the calculation method proposed in this paper, the number of berths at sub-station 1, sub-station 2 and sub-station 3 is 2, which reduces the road space of the station by 33.3% compared with the previous 3 berths per sub-station and returns the space to social vehicles. Therefore, under the condition of ensuring the passenger flow demand of BRT stations, the road space of the station is reduced, the road is returned to social vehicles, the traffic capacity of social vehicles is improved, and the traffic capacity and efficiency of the BRT corridor system are improved as a whole.
Keywords: urban transportation, bus rapid transit station, HCM model, capacity, number of berths
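As a hedged illustration of the kind of classical capacity calculation the abstract starts from, the sketch below sizes a berth count with the loading-area capacity relation used in the HCM/TCQSM family of methods; the formula's applicability here and every input value are assumptions rather than the paper's calibrated model, and in practice the manuals also apply berth-efficiency factors, so capacity does not scale linearly with the number of berths.

```python
# Illustrative sketch only (not the paper's optimization model): a commonly cited
# loading-area capacity relation, used to size berths for a hypothetical BRT stop.
# All input values below are assumptions.
import math

def loading_area_capacity(g_over_c, t_c, t_d, z_a, c_v):
    """Buses per hour that one loading area (berth) can serve."""
    return 3600.0 * g_over_c / (t_c + t_d * g_over_c + z_a * c_v * t_d)

g_over_c = 1.0   # green-time ratio (1.0: no traffic signal at the stop)
t_c = 10.0       # clearance time between buses (s)
t_d = 30.0       # mean dwell time (s)
z_a = 1.28       # standard normal variate for ~10% design failure rate
c_v = 0.6        # coefficient of variation of dwell times

cap_per_berth = loading_area_capacity(g_over_c, t_c, t_d, z_a, c_v)
demand = 120.0   # scheduled buses per hour (assumed)
berths = math.ceil(demand / cap_per_berth)
print(f"capacity per berth = {cap_per_berth:.1f} bus/h -> berths needed: {berths}")
```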
Procedia PDF Downloads 95
21855 X-Ray Dynamical Diffraction 'Third Order Nonlinear Renninger Effect'
Authors: Minas Balyan
Abstract:
Nowadays, X-ray nonlinear diffraction and nonlinear effects are investigated due to the presence of third-generation synchrotron sources and XFELs. X-ray third-order nonlinear dynamical diffraction is considered as well. Using the nonlinear model of usual visible light optics, the third-order nonlinear Takagi’s equations for monochromatic waves and the third-order nonlinear time-dependent dynamical diffraction equations for X-ray pulses were obtained by the author in previous papers. The obtained equations show that, even if the Fourier coefficients of the linear and the third-order nonlinear susceptibilities are zero (forbidden reflection), the dynamical diffraction in the nonlinear case is related to the presence in the nonlinear equations of terms proportional to the zero-order and the second-order nonzero Fourier coefficients of the third-order nonlinear susceptibility. Thus, in the third-order nonlinear Bragg diffraction case, a nonlinear analogue of the well-known Renninger effect takes place. In this work, the 'third order nonlinear Renninger effect' is considered theoretically.
Keywords: Bragg diffraction, nonlinear Takagi’s equations, nonlinear Renninger effect, third order nonlinearity
Procedia PDF Downloads 385
21854 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming
Authors: William Chung
Abstract:
This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately by exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP highly depends upon the number of decomposition steps. If the decomposition steps can be greatly reduced, the performance of DW-LP can be improved significantly. To reduce the number of decomposition steps, one of the methods is to increase the number of proposals from the subproblem to the master problem. To do so, we propose to add a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem. The number of decomposition steps can thus be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the LP subproblem must be faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP is a tradeoff between the number of approximate-LP subproblems being formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems
Procedia PDF Downloads 166
21853 Evaluation of Calendula officinalis L. Flower Dry Weight, Flower Diameter, and Number of Flower in Plant Variabilities under Effect of Compost and Nitrogen Different Levels in Four Harvest
Authors: Amin Rezazadeh, Parisa Farahpour, Arezoo Rezazadeh, Morteza Sam Deliri
Abstract:
In order to investigate the effects of different nitrogen and compost levels on the qualitative and quantitative performance of the Calendula officinalis L. herb, an experiment was carried out in the research field of Chalous Azad University in 2011-2012. The experiment was done in factorial form as a randomized complete block design, in three replicates. Treatments consisted of nitrogen and compost. The considered nitrogen levels were N0=0, N1=50, N2=100 kg/ha, and the compost levels were C0=0, C1=6, C2=12 ton/ha. The investigated characteristics were flower dry weight, number of flowers per plant and flower diameter. The results showed that the nitrogen and compost treatments had a statistically significant influence (p ≤ 0.01) on the studied characteristics. Flower dry weight, flower diameter and number of flowers per plant were studied over four harvests; the performance of these characteristics increased from the first harvest up to the fourth harvest, where it reached its maximum level. Up to the fourth harvest, the maximum flower dry weight, flower diameter and number of flowers per plant were obtained by the C1×N2 treatment (C1=6 ton/ha compost and N2=100 kg/ha nitrogen).
Keywords: calendula, compost, nitrogen, flavonoid
Procedia PDF Downloads 386
21852 Kinematic Hardening Parameters Identification with Respect to Objective Function
Authors: Marina Franulovic, Robert Basan, Bozidar Krizan
Abstract:
Constitutive modelling of material behaviour is becoming increasingly important in the prediction of possible failures in highly loaded engineering components and, consequently, in the optimization of their design. In order to account for the large number of phenomena that occur in the material during operation, such as the kinematic hardening effect in the low cycle fatigue behaviour of steels, complex nonlinear material models are used ever more frequently, despite the complexity of determining their parameters. As a method for the determination of these parameters, the genetic algorithm is a good choice because of its capability to provide a very good approximation of the solution in systems with a large number of unknown variables. For the application of the genetic algorithm to parameter identification, the inverse analysis must first be defined. It is used as a tool to fine-tune calculated stress-strain values with experimental ones. In order to choose a proper objective function for the inverse analysis among already existing and newly developed functions, research is performed to investigate its influence on material behaviour modelling.
Keywords: genetic algorithm, kinematic hardening, material model, objective function
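To make the identification loop concrete, here is a minimal sketch under assumed simplifications: a toy genetic algorithm fits two kinematic-hardening parameters of a simple Armstrong-Frederick-type saturating backstress curve to synthetic "experimental" data. It does not reproduce the authors' constitutive model, objective functions, or data.

```python
# Toy GA sketch (assumed setup, not the authors' model): identify (C, gamma) in a
# saturating backstress curve X = (C/gamma) * (1 - exp(-gamma * strain)) by
# minimizing a least-squares objective against noisy synthetic data.
import numpy as np

rng = np.random.default_rng(4)
strain = np.linspace(0.0, 0.02, 50)

def backstress(params, strain):
    C, gamma = params
    return (C / gamma) * (1.0 - np.exp(-gamma * strain))

true_params = (60_000.0, 300.0)                     # used only to fabricate data
observed = backstress(true_params, strain) + rng.normal(0, 1.0, strain.size)

def objective(params):
    return np.sum((backstress(params, strain) - observed) ** 2)

# Population of candidate (C, gamma) pairs; rank selection, blend crossover, mutation.
pop = np.column_stack([rng.uniform(1e4, 1e5, 40), rng.uniform(50, 600, 40)])
for generation in range(60):
    fitness = np.array([objective(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]          # keep the better half
    alpha = rng.uniform(0, 1, (20, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
    children += rng.normal(0, [500.0, 5.0], children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([objective(p) for p in pop])]
print("identified (C, gamma) ~", np.round(best, 1))
```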
Procedia PDF Downloads 332
21851 Training a Neural Network to Segment, Detect and Recognize Numbers
Authors: Abhisek Dash
Abstract:
This study had three neural networks, one for number segmentation, one for number detection and one for number recognition, all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a window (7x7) over that pixel as focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels. To this end the segmentation network had 16 outputs. They were arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south” and so on w.r.t. the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered black pixels. During testing the network scans and looks for the first dark pixel. From here on the network predicts which neighborhoods to consider and segments the image. After this step the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network was trained to output “number not found” until the bounds were met, and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of numbers from 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters as well.
Keywords: convolutional neural networks, OCR, text detection, text segmentation
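As an illustrative sketch only (the abstract does not specify a framework or layer sizes; PyTorch and the architecture below are assumptions), the recognition and detection networks described could look like this:

```python
# Minimal sketch: a 28x28 -> 10-class recognition CNN plus a 2-output detection head
# ("number detected" / "not detected"). Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_outputs: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_outputs),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

recognition_net = SmallCNN(num_outputs=10)   # digits 0-9
detection_net = SmallCNN(num_outputs=2)      # number detected / not detected

x = torch.rand(1, 1, 28, 28)                 # one grayscale 28x28 patch
print(recognition_net(x).shape)              # torch.Size([1, 10])
```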
Procedia PDF Downloads 161
21850 Teaching the Binary System via Beautiful Facts from the Real Life
Authors: Salem Ben Said
Abstract:
In recent times the decimal number system to which we are accustomed has received serious competition from the binary number system. In this note, an approach is suggested for teaching and learning the binary number system using examples from the real world. More precisely, we will demonstrate the utility of the binary system in describing the optimal strategy to win the Chinese Nim game, and in telegraphy by decoding the hidden message on Perseverance’s Mars parachute written in the language of the binary system. Finally, we will answer the question, “why do modern computers prefer the ternary number system instead of the binary system?”. All materials are provided in a format that is conducive to classroom presentation and discussion.
Keywords: binary number system, Nim game, telegraphy, computers prefer the ternary system
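The binary-system content of the Nim example can be made explicit: the player to move can force a win exactly when the bitwise XOR (the "nim-sum") of the pile sizes is non-zero. A short sketch with arbitrary example piles:

```python
# Sketch of the binary idea behind optimal Nim play: the nim-sum (bitwise XOR of the
# pile sizes, computed on their binary representations) decides who can force a win.
def winning_move(piles):
    """Return (pile_index, new_size) for a winning move, or None if none exists."""
    nim_sum = 0
    for p in piles:
        nim_sum ^= p                      # XOR of binary representations
    if nim_sum == 0:
        return None                       # every move hands control to the opponent
    for i, p in enumerate(piles):
        target = p ^ nim_sum
        if target < p:                    # shrink this pile so the nim-sum becomes 0
            return i, target
    return None

print(winning_move([3, 4, 5]))            # (0, 1): take 2 from the first pile
```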
Procedia PDF Downloads 186
21849 Estimating Current Suicide Rates Using Google Trends
Authors: Ladislav Kristoufek, Helen Susannah Moat, Tobias Preis
Abstract:
Data on the number of people who have committed suicide tends to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms “depression” and “suicide” relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term “depression” is related to fewer suicides, whereas a greater number of searches for the term “suicide” is related to more suicides. Data on suicide related search behaviour can be used to improve current estimates of the number of suicide occurrences.
Keywords: nowcasting, search data, Google Trends, official statistics
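A hedged sketch of the nowcasting idea (not the authors' exact model or data): compare an estimate built from lagged suicide counts alone with one that also uses search volumes for "depression" and "suicide". All series below are synthetic placeholders.

```python
# Illustrative nowcasting sketch with synthetic data (not the study's dataset):
# does adding search-volume regressors improve out-of-sample estimates?
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120                                         # e.g. monthly observations
suicides = 400 + 20 * rng.standard_normal(n)    # synthetic target series
searches_depression = rng.uniform(40, 100, n)   # synthetic Google Trends indices
searches_suicide = rng.uniform(40, 100, n)

y = suicides[1:]
baseline_X = suicides[:-1].reshape(-1, 1)                        # lagged counts only
augmented_X = np.column_stack([suicides[:-1],
                               searches_depression[1:],
                               searches_suicide[1:]])

for name, X in [("baseline", baseline_X), ("with search data", augmented_X)]:
    model = LinearRegression().fit(X[:100], y[:100])             # fit on early period
    err = np.mean((model.predict(X[100:]) - y[100:]) ** 2)       # out-of-sample MSE
    print(f"{name}: out-of-sample MSE = {err:.1f}")
```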
Procedia PDF Downloads 357
21848 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies
Authors: Manel Hammami, Gabriele Grandi
Abstract:
In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems, due to their known advantages over conventional H-bridge pulse width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), higher output voltages and others. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC) and cascaded H-bridge (CHB) converters. In both NPC and FC configurations, the number of components drastically increases with the number of levels, which leads to complexity of the control strategy, high volume, and cost. Increasing the number of levels in the case of the cascaded H-bridge configuration is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric topologies of multilevel inverters have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six), and just one capacitor (as the FCAH). CAH is becoming popular due to its simple, modular and reliable structure, and it can be considered as a retrofit which can be added in series to an existing H-bridge configuration in order to double the output voltage levels. In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverters have been considered, and the input voltage and current are analytically determined and numerically verified by Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and the CAH configurations is done on the basis of the analysis of the DC current and voltage ripple for the DC source (i.e., the PV system). The peak-to-peak DC current and voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in the case of most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all the operating conditions.
Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter
Procedia PDF Downloads 207
21847 Numerical Study of Mixed Convection Coupled to Radiation in a Square Cavity with a Lid-Driven
Authors: Belmiloud Mohamed Amine, Sad Chemloul Nord-Eddine
Abstract:
In this study we numerically investigated heat transfer by mixed convection coupled to radiation in a square cavity whose upper horizontal wall is movable. The purpose of this study is to see the influence of the emissivity and of varying the Richardson number on the variation of the average Nusselt number. The vertical walls of the cavity are differentially heated: the left wall is maintained at a uniform temperature higher than the right wall, and the two horizontal walls are adiabatic. The finite volume method is used for solving the dimensionless governing equations. Emissivity values used in this study range between 0 and 1, and the Richardson number ranges from 0.1 to 10. The Rayleigh number is fixed at Ra = 10000 and the Prandtl number is maintained constant at Pr = 0.71. Streamlines, isothermal lines and the average Nusselt number are presented according to the surface emissivity. The results of this study show that the Richardson number and the emissivity affect the average Nusselt number.
Keywords: mixed convection, square cavity, wall emissivity, lid-driven, numerical study
Procedia PDF Downloads 346
21846 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number
Authors: Younes El Khchine, Mohammed Sriti
Abstract:
A parametric study has been conducted to analyse the flow around the S809 wind turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds number by solving the Unsteady Reynolds Averaged Navier-Stokes (URANS) equations on a C-type structured mesh and using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack (AoA) were compared with XFoil results. A sensitivity study was performed to examine the effects of the Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of the wind turbine. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil. The increase in Reynolds number leads to an accelerated transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer. This leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. An increase in the level of free-stream turbulence intensity leads to a decrease in the separation bubble length and an increase in the lift coefficient, while having negligible effects on the stall angle. When the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream towards the leading edge of the airfoil, causing earlier laminar separation.
Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number
Procedia PDF Downloads 85
21845 Number of Necessary Parameters for Parametrization of Stabilizing Controllers for two times two RHinf Systems
Authors: Kazuyoshi Mori
Abstract:
In this paper, we consider the number of parameters for the parametrization of stabilizing controllers for RHinf systems of size 2 × 2. Fortunately, any plant of this model admits a doubly coprime factorization. Thus we can use the Youla parameterization to parametrize the stabilizing controllers. However, the Youla parameterization does not itself give the minimal number of parameters. This paper shows that the minimal number of parameters is four. As a result, we show that the Youla parametrization naturally gives the parameterization of stabilizing controllers with the minimal number of parameters.
Keywords: RHinf, parameterization, number of parameters, multi-input, multi-output systems
Procedia PDF Downloads 407
21844 Ferromagnetic Potts Models with Multi Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1,2,...,q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The transition nature of the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zero-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS, using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
Procedia PDF Downloads 160
21843 Impact of Working Capital Management Strategies on Firm's Value and Profitability
Authors: Jonghae Park, Daesung Kim
Abstract:
The impact of aggressive and conservative working capital strategies on the value and profitability of firms has been evaluated by applying panel data regression analysis. The control variables used in the regression models are the natural log of firm size, sales growth, and debt. We collected a panel of 13,988 companies listed on the Korea stock market covering the period 2000-2016. The major findings of this study are as follows: 1) We find a significant negative correlation between firm profitability and the number of days inventory (INV) and days accounts payable (AP). The firm’s profitability can also be improved by reducing the number of days of inventory and days accounts payable. 2) We also find a significant positive correlation between firm profitability and the number of days accounts receivable (AR) and cash ratios (CR). In other words, cash is associated with high corporate profitability. 3) Tobin's Q analysis showed that only the number of days accounts receivable (AR) and cash ratios (CR) had a significant relationship. In conclusion, companies can increase profitability by reducing INV and increasing AP, but INV and AP did not affect corporate value. In particular, it is necessary to increase CA and decrease AR in order to increase the firm’s profitability and value.
Keywords: working capital, working capital management, firm value, profitability
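As an illustrative sketch only (a pooled OLS on synthetic firm-year data; the study's panel estimator, fixed effects, and actual dataset are not reproduced), the regression described could be set up as follows, with the variable names taken from the abstract:

```python
# Illustrative pooled OLS sketch with synthetic data: profitability regressed on
# INV, AP, AR, CR and the abstract's control variables. Not the study's estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "INV": rng.uniform(10, 120, n),           # days inventory
    "AP": rng.uniform(10, 90, n),             # days accounts payable
    "AR": rng.uniform(10, 90, n),             # days accounts receivable
    "CR": rng.uniform(0.01, 0.30, n),         # cash ratio
    "log_size": rng.normal(12, 1, n),         # control: ln(firm size)
    "sales_growth": rng.normal(0.05, 0.1, n), # control
    "debt": rng.uniform(0.1, 0.7, n),         # control
})
# Fabricated dependent variable with signs mimicking the reported findings.
df["profitability"] = (-0.001 * df.INV - 0.001 * df.AP + 0.002 * df.AR
                       + 0.2 * df.CR + rng.normal(0, 0.02, n))

X = sm.add_constant(df.drop(columns="profitability"))
result = sm.OLS(df["profitability"], X).fit()
print(result.params.round(4))
```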
Procedia PDF Downloads 189
21842 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that might be of irregular, digit or character shape. Objects and internal objects are quite difficult to identify and extract when the structure of the image contains a bulk of clusters. The estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The focus is mainly on recognizing the number of internal objects that exist in a given image, so that the result is shadow-free and error-free. The hard clustering and density clustering processes of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction and, finally, applying the hull detection system. By detecting the sub-regional hulls, the approach can increase the machine learning capability in the detection of characters, and it can also be extended to hull recognition even in irregularly shaped objects, such as black holes in space exploration, using their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in the decision process (to clear the traffic, or to identify the number of persons on the opponent’s side in a war).
Keywords: chain code, Hull regions, Hough transform, Hull recognition, Layered Outline Extraction, SASK algorithm
Procedia PDF Downloads 348
21841 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the beginning of the design stage thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer to investigate the crashing performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs, identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness and efficiency compared to other ones when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to some design constraints are the design variables. LS-DYNA codes were interfaced with the LS-OPT tool in order to find the optimized solution, through the use of a domain reduction strategy. With the use of the Kriging meta-model the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
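A minimal sketch of the Kriging meta-model idea follows, assuming a scikit-learn Gaussian-process surrogate and a cheap stand-in function in place of the expensive LS-DYNA crash simulations; it does not reproduce the LS-OPT domain reduction strategy or the paper's design variables.

```python
# Kriging (Gaussian-process) surrogate sketch: fit a small DOE, then query the cheap
# surrogate for a promising design instead of running more simulations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def sea_simulation(x):
    # Stand-in for one expensive crash run: x = (wall thickness mm, taper angle deg).
    t, a = x
    return -(t - 2.0) ** 2 - 0.1 * (a - 10.0) ** 2 + 5.0

rng = np.random.default_rng(2)
doe = np.column_stack([rng.uniform(1.0, 3.0, 12),      # thickness samples
                       rng.uniform(5.0, 15.0, 12)])    # taper angle samples
sea = np.array([sea_simulation(x) for x in doe])       # "specific energy absorption"

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[1.0, 5.0]),
    normalize_y=True).fit(doe, sea)

# Evaluate the surrogate on a dense grid of candidate designs.
grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(1, 3, 50),
                                                       np.linspace(5, 15, 50))])
pred = kriging.predict(grid)
print("surrogate optimum near:", grid[np.argmax(pred)])
```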
Procedia PDF Downloads 256
21840 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce
Authors: Heiko Diefenbach, Christoph H. Glock
Abstract:
Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process for many logistic systems. Despite the ongoing trend of automation, most order picking systems are still manual picker-to-parts systems, where human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or the incorrect number of items. Errors can cause additional costs and significant correction efforts. Moreover, age might increase a person’s likelihood to make mistakes. Hence, the negative impact of picking errors might increase for the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization. Usually, the objective is to assign items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of pick errors. For example, storing similar-looking items apart from one another might reduce confusion. Moreover, storing items that are hard to count or require a lot of counting at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches with respective benefits and shortcomings are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases with pickers being more prone to errors due to age, for example. The results suggest that considering error-prevention measures for storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results might be especially relevant for an aging workforce.
Keywords: an aging workforce, error prevention, order picking, storage assignment
Procedia PDF Downloads 204
21839 Cavitating Jet Design for Enhanced Drilling Performance
Authors: Abdullah Ababtain, Mouhammad El Hassan, Hassan Assoum, Anas Sakout
Abstract:
In this paper, a brief literature review on cavitating jets is presented in order to introduce the cavitation mechanism, strategies to assess when cavitation occurs, and the factors that influence cavitation in cavitating jets. The objectivity of the cavitation number often used to predict cavitation is also discussed. The results show that cavitation cannot be foreseen using the cavitation number alone. Therefore, more efforts are needed to innovate and develop a self-resonating jet geometry that would maintain the flow and the pressure in the cavitation condition just before the flow acts on the target under such operating conditions. This study focused on a particular aspect related to improving drilling efficiency and the rate of penetration (ROP). In addition, the methods used to measure cavitation and the factors that affect cavitation occurrence are discussed. Two different types of cavitation nozzles were designed and tested. It has been shown that the self-resonating cavitation nozzle presents greater performance than the standard non-resonating nozzle. It is thus concluded that a self-resonating cavitation jet presents high potential for improving drilling performance.
Keywords: cavitating jet, erosion, cavitation number, rate of penetration (ROP)
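For reference, the cavitation number mentioned above is commonly written as sigma = (p_ref - p_vapor) / (0.5 * rho * v^2); the short sketch below evaluates it for illustrative water-jet values (all inputs are assumptions), keeping in mind the paper's point that a low cavitation number by itself does not guarantee cavitation.

```python
# Short sketch: the cavitation number commonly used to anticipate cavitation in jets.
def cavitation_number(p_ref, p_vapor, rho, velocity):
    return (p_ref - p_vapor) / (0.5 * rho * velocity ** 2)

sigma = cavitation_number(p_ref=101_325.0,   # downstream/ambient pressure (Pa)
                          p_vapor=2_339.0,   # water vapour pressure at ~20 C (Pa)
                          rho=998.0,         # water density (kg/m^3)
                          velocity=150.0)    # jet velocity (m/s)
print(f"sigma = {sigma:.4f}")  # a low sigma favours cavitation but is not sufficient alone
```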
Procedia PDF Downloads 195
21838 Measuring Parliamentarian: Towards Analysing Members of Parliaments in Malaysia
Authors: Rosyidah Muhamad
Abstract:
Democracies are premised on the idea that citizens can hold their leaders accountable for their actions by voting for or against them in regular elections. However, in order for this ideal to be realized, citizens must possess a minimum amount of information about their leaders’ performance. Citizens should be made aware of the performance of their elected representatives. This study seeks to analyse this critical information with special reference to Malaysian Members of Parliament (MPs). We adopted several existing parliamentary performance models, with special reference to MPs’ performance inside the parliament. Among the indicators used by scholars for analysing this performance are the number of bills proposed by a parliamentarian, the number of proposals that would benefit their constituency, the number of speeches made by the parliamentarian during plenary sessions, and the percentage of laws passed among the proposals made by a given parliamentarian. The broad goals of the study include the analysis of the capacity of a representative body to accommodate the diverse claims and demands that are made on it. We find that the overall performance of MPs is average. This is due not only to the background characteristics of individual MPs but also to the limitations of the mechanisms provided in the Parliament itself.
Keywords: member of parliament, democracy, evaluation, Malaysia
Procedia PDF Downloads 224
21837 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, semantic interpretation is a challenging task of HSI analysis. We focus in this paper on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. Therefore, the high number of spectral bands and the low number of training samples pose the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce a dimensionality reduction process in an attempt to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model to represent higher-order relationships with different weights of the spatial neighbors corresponding to the centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.
Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph
Procedia PDF Downloads 306
21836 Evaluation of Milk Production of an Algerian Rabbit Population Raised in Aures Area
Authors: Moumen Souad, Melizi Mohamed
Abstract:
In order to characterize rabbit does of an Aures local population raised in Algeria, a study of their milk yield was realized in the experimental rabbitry of El Hadj Lakhdhar University. Milk production of does was measured every day during the days following 215 parturitions. It was estimated by weighing the female before and after the single daily suckling (10–15 min between the two weighing operations). The various calculated parameters were the quantity of milk produced per day, per week and the total quantity produced in 21 days, as well as the intake of milk by young rabbits. The analysis concerned the effects of the number of successive litters (3 classes: 1 to 3 and more) and of the average number of young rabbits suckled per litter (6 classes: from 1-2 kits to more than 6). During the 21 days of controlled lactation, the average litter size was 6±3. The rabbits of the Aures area produced on average 2544.34±747 g in 21 days, that is, 121 g of milk/day or 21 g of milk/kit/day. The milk yield increased from 526, 1035, 1240 and 2801 g to 760, 1365, 1715 and 3840 g for weeks 1, 2, 3 and the total period of lactation, respectively. Nevertheless, the milk production available per kit and per day decreased linearly with the number of kits in the litter for each of the 3 weeks considered. On the other hand, the milk yield was not affected by the weight at birth of kits.
Keywords: milk production, litter size, rabbit, Aures area, Algeria
Procedia PDF Downloads 263
21835 Fractional Order Differentiator Using Chebyshev Polynomials
Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh Kumar Pandey
Abstract:
A discrete-time fractional order differentiator has been modeled for estimating the fractional order derivatives of a contaminated signal. The proposed approach is based on Chebyshev polynomials. We use the Riemann-Liouville fractional order derivative definition for designing the fractional order SG differentiator. In the first step, we calculate the window weights corresponding to the required fractional order. Then the signal is convolved with these calculated window weights to find the fractional order derivatives of the signal. Several signals are considered for evaluating the accuracy of the proposed method.
Keywords: fractional order derivative, Chebyshev polynomials, signals, S-G differentiator
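For reference, a standard textbook form of the Riemann-Liouville fractional derivative the abstract refers to is:

```latex
% Riemann-Liouville fractional derivative of order \alpha, with n-1 < \alpha < n:
D^{\alpha}_{a} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
\int_{a}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau ,
\qquad n-1 < \alpha < n .
```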
Procedia PDF Downloads 648
21834 A Stable Method for Determination of the Number of Independent Components
Authors: Yuyan Yi, Jingyi Zheng, Nedret Billor
Abstract:
Independent component analysis (ICA) is one of the most commonly used blind source separation (BSS) techniques for signal pre-processing, such as noise reduction and feature extraction. The main parameter in the ICA method is the number of independent components (ICs). Although there have been several methods for the determination of the number of ICs, sufficient attention has not been given to this important parameter. In this study, we review the most used methods for determining the number of ICs and provide their advantages and disadvantages. Further, we propose an improved version of the column-wise ICAByBlock method for the determination of the number of ICs. To assess the performance of the proposed method, we compare the column-wise ICAByBlock with several existing methods through different ICA methods by using simulated and real signal data. Results show that the proposed column-wise ICAByBlock is an effective and stable method for determining the optimal number of components in ICA. This method is simple, and results can be demonstrated intuitively with good visualizations.
Keywords: independent component analysis, optimal number, column-wise, correlation coefficient, cross-validation, ICAByBlock
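As a generic illustration of the selection problem (this run-to-run stability check is not the proposed column-wise ICAByBlock method; the data are synthetic), one can compare how reproducible the recovered components are for different candidate numbers of ICs:

```python
# Generic stability check for choosing the number of ICs (not the ICAByBlock method):
# run FastICA twice with different seeds and compare the recovered components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t)), t % 1.0]   # 3 true sources
X = sources @ rng.random((3, 6)) + 0.05 * rng.standard_normal((2000, 6))  # 6 channels

def run_to_run_similarity(X, n_components):
    """Average best |correlation| between components from two differently seeded runs."""
    s1 = FastICA(n_components=n_components, random_state=0, max_iter=1000).fit_transform(X)
    s2 = FastICA(n_components=n_components, random_state=1, max_iter=1000).fit_transform(X)
    corr = np.abs(np.corrcoef(s1.T, s2.T)[:n_components, n_components:])
    return corr.max(axis=1).mean()

for k in range(1, 6):
    print(k, round(run_to_run_similarity(X, k), 3))   # compare stability across candidates
```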
Procedia PDF Downloads 99
21833 A Concept of Data Mining with XML Document
Authors: Akshay Agrawal, Anand K. Srivastava
Abstract:
The increasing amount of XML datasets available to casual users increases the necessity of investigating techniques to extract knowledge from these data. Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years, due to the importance of managing these resources and extracting knowledge from them, many methods have been proposed in order to represent and cluster them in different ways.
Keywords: XML, similarity measure, clustering, cluster quality, semantic clustering
Procedia PDF Downloads 379
21832 Detection of PCD-Related Transcription Factors for Improving Salt Tolerance in Plant
Authors: A. Bahieldin, A. Atef, S. Edris, N. O. Gadalla, S. M. Hassan, M. A. Al-Kordy, A. M. Ramadan, A. S. M. Al- Hajar, F. M. El-Domyati
Abstract:
The idea of this work is based on a natural, exciting phenomenon suggesting that suppression of genes related to the programmed cell death (PCD) mechanism might help plant cells to efficiently tolerate abiotic stresses. The scope of this work was the detection of PCD-related transcription factors (TFs) that might also be related to salt stress tolerance in plants. Two model plants, i.e., tobacco and Arabidopsis, were utilized in order to investigate this phenomenon. Occurrence of PCD was first proven by Evans blue staining and DNA laddering after tobacco leaf discs were treated with oxalic acid (OA, 20 mM) for 24 h. A total of 31 TFs upregulated after 2 h and co-expressed with genes harboring PCD-related domains were detected via RNA-Seq analysis and annotation. These TFs were knocked down via virus-induced gene silencing (VIGS), an RNA interference (RNAi) approach, and tested for their influence on triggering the PCD machinery. Then, Arabidopsis SALK T-DNA insertion knockout mutants in selected TFs analogous to those in tobacco were tested under salt stress (up to 250 mM NaCl) in order to detect the influence of the different TFs on conferring salt tolerance in Arabidopsis. The involvement of a number of candidate abiotic stress-related TFs was investigated.
Keywords: VIGS, PCD, RNA-Seq, transcription factors
Procedia PDF Downloads 274