Search results for: Rayleigh number.

2879 Project Complexity Indices based on Topology Features

Authors: Amer A. Boushaala

Abstract:

The heuristic decision rules used for project scheduling vary depending upon the project's size, complexity, duration, personnel, and owner requirements. The concept of project complexity has received little detailed attention. The need to differentiate between easy and hard problem instances, and the interest in isolating the fundamental factors that determine the computing effort required by scheduling procedures, inspired a number of researchers to develop various complexity measures. In this study, the most common measures of project complexity are presented and a new measure is developed. The main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demands and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed greater sensitivity to changes in the network data and gives accurate quantified results when comparing the complexities of networks.
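
A minimal sketch of how a topology-based index can be computed from an activity-on-node network is given below. It uses the classical coefficient of network complexity (arcs divided by nodes) rather than the composite measure proposed in the paper, and the example network is hypothetical.

```python
# Illustrative only: a classical topology-based index (coefficient of network
# complexity), not the composite measure proposed in the paper.
import networkx as nx

# Hypothetical activity-on-node project network (edges are precedence relations)
G = nx.DiGraph()
G.add_edges_from([
    ("Start", "A"), ("Start", "B"),
    ("A", "C"), ("B", "C"), ("B", "D"),
    ("C", "End"), ("D", "End"),
])

cnc = G.number_of_edges() / G.number_of_nodes()   # coefficient of network complexity
longest_chain = nx.dag_longest_path(G)            # proxy for the critical path by activity count

print(f"arcs={G.number_of_edges()}, nodes={G.number_of_nodes()}, CNC={cnc:.2f}")
print("longest precedence chain:", longest_chain)
```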

Keywords: Activity networks, Complexity index, Network complexity measure, Network topology, Project network.

PDF Downloads: 1681
2878 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

Antenna arrays are widely used in modern radio systems, in sonar and in communications. The detection of a useful signal against a noise background is based on the GLRT method. There is a large number of detection problems, distinguished by the a priori information that is assumed known. In this work, in contrast to the majority of previously solved problems, only the difference in the spatial properties of the signal and noise is used for detection. We analyze the influence of the degree of signal non-coherence and of noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by spatial covariance matrices C for different amounts of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence relative to the normal. Background noise is simulated as a random process with a uniform distribution function in each element. The results of the investigation of the degradation of the performance characteristics for the different cases are presented in this work.
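
The specific GLRT statistics analyzed in the paper are not reproduced here; as a hedged illustration, the sketch below computes a classical sphericity-type GLR statistic from the eigenvalues of the sample spatial covariance matrix of a multielement array, which tests whether the received snapshots contain spatial structure beyond white noise. All array dimensions and signal parameters are hypothetical.

```python
# Sketch of a sphericity-type GLR statistic for spatial detection (illustrative).
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200            # array elements, snapshots (hypothetical)

# Hypothetical data: partially coherent plane wave plus white noise
angle = np.deg2rad(20)
steering = np.exp(1j * np.pi * np.arange(M) * np.sin(angle))
signal = np.outer(steering, rng.standard_normal(N) + 1j * rng.standard_normal(N))
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = 0.5 * signal + noise

C = X @ X.conj().T / N                            # sample spatial covariance matrix
eig = np.linalg.eigvalsh(C).real
glr = eig.mean() / np.exp(np.log(eig).mean())     # arithmetic/geometric mean ratio of eigenvalues
print(f"sphericity GLR statistic: {glr:.3f} (close to 1 under white noise only)")
```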

Keywords: GLRT, Neyman-Pearson criterion, test-statistics, degradation, spatial processing, multielement antenna array

PDF Downloads: 1806
2877 Identification of Non-Lexicon Non-Slang Unigrams in Body-enhancement Medicinal UBE

Authors: Jatinderkumar R. Saini, Apurva A. Desai

Abstract:

Email has become a fast and cheap means of online communication. The main threat to email is Unsolicited Bulk Email (UBE), commonly called spam email. The current work aims at identifying unigrams in more than 2700 UBE messages that advertise body-enhancement drugs. The identification is based on the requirement that the unigram is neither present in a dictionary nor a slang term. The motives of the paper are manifold: it is an attempt to analyze spamming behaviour and the use of word-mutation techniques, and along the way we have attempted to better understand spam, slang and their interplay. The problem has been addressed by employing a tokenization technique and a unigram bag-of-words (BOW) model. We found that non-lexicon words constitute nearly 66% of the total lexis of the corpus, whereas non-slang words constitute nearly 2.4% of the non-lexicon words. Further, non-lexicon non-slang unigrams composed of two lexicon words form more than 71% of the total number of such unigrams. To the best of our knowledge, this is the first attempt to analyze the usage of non-lexicon non-slang unigrams in any kind of UBE.
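
A minimal sketch of the filtering step is shown below: tokenize the UBE text into unigrams and keep those that appear neither in a lexicon nor in a slang list. The tiny word lists and the sample message are hypothetical stand-ins for the full dictionary and slang resources used in the paper.

```python
# Sketch: identify non-lexicon, non-slang unigrams in UBE text (toy word lists).
import re

LEXICON = {"buy", "now", "cheap", "pills", "enhance", "your", "body", "best", "price"}
SLANG = {"viagra", "lol", "gr8"}

def non_lexicon_non_slang_unigrams(text):
    tokens = re.findall(r"[a-z0-9]+", text.lower())     # tokenization
    unigrams = set(tokens)                              # unigram bag-of-words
    return sorted(w for w in unigrams if w not in LEXICON and w not in SLANG)

msg = "Buy CHEAP BodyEnhancePills now!!! Best pr1ce, enh4nce your b0dy"
print(non_lexicon_non_slang_unigrams(msg))
# word-mutated tokens such as 'pr1ce', 'enh4nce', 'b0dy' survive both filters
```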

Keywords: Body Enhancement, Lexicon, Medicinal, Slang, Unigram, Unsolicited Bulk e-mail (UBE)

PDF Downloads: 1820
2876 Finite Element Analysis and Feasibility of Simple Stochastic Modeling in the Analysis of Fissuring in Grains during Soaking

Authors: Jonathan H. Perez, Fumihiko Tanaka, Daisuke Hamanaka, Toshitaka Uchino

Abstract:

A finite element analysis was conducted to determine the effect of moisture diffusion and hygroscopic swelling in rice. In parallel, simple stochastic modeling was performed to predict the number of grains cracked as a result of moisture absorption and hygroscopic swelling. Rice grains were soaked in thermally controlled water (25 °C) and then tested for compressive stress. The destructive compressive tests revealed that the peak force required to cause cracking in soaked grains decreased as the soaking duration was extended. The results showed that several grains had a predicted compressive stress below the von Mises stress and were interpreted as grains that cracked and/or broke during soaking. The technique developed in this experiment facilitates approximating the number of grains that will crack during soaking.
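
A hedged sketch of the parallel stochastic step is given below: grain strength is sampled from an assumed distribution and compared with the von Mises stress obtained from the finite element analysis, and the fraction of grains below that stress is counted as cracked. The distribution parameters and the stress value are hypothetical placeholders, not values from the study.

```python
# Simple stochastic sketch: count grains predicted to crack during soaking.
import numpy as np

rng = np.random.default_rng(42)
n_grains = 1000
von_mises_stress = 18.0          # MPa, hypothetical value from the FE analysis

# Hypothetical lognormal distribution of grain compressive strength after soaking
strength = rng.lognormal(mean=np.log(25.0), sigma=0.35, size=n_grains)   # MPa

cracked = strength < von_mises_stress
print(f"predicted cracked grains: {cracked.sum()} of {n_grains} "
      f"({100 * cracked.mean():.1f}%)")
```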

Keywords: Cracking, Finite element analysis, hygroscopic swelling, moisture diffusion, von Mises stress.

PDF Downloads: 1918
2875 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016

Authors: Dimitra Alexiou

Abstract:

During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore tourists' preferences with regard to the month of travel and the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and forecasting with exponential smoothing, useful conclusions are drawn that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes in the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises investing in Greece. With regard to the statistical methods, simple exponential smoothing of the time series data is employed, and the search for the best forecast for 2017 and 2018 determines the value of the smoothing coefficient. Microsoft Excel is used for all statistical computations and graphics.
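
The forecasting step can be sketched in a few lines: simple exponential smoothing with a grid search over the smoothing coefficient alpha, choosing the value that minimizes the one-step-ahead squared error. The arrival figures below are hypothetical placeholders for the study's actual 2005-2016 series.

```python
# Simple exponential smoothing with a grid search for the smoothing coefficient.
import numpy as np

arrivals = np.array([14.8, 15.9, 16.2, 15.0, 14.9, 15.0, 16.4,
                     15.5, 17.9, 22.0, 23.6, 24.8])   # hypothetical, millions/year

def ses(series, alpha):
    level = series[0]
    fitted = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        fitted.append(level)
    return np.array(fitted)

best = min(
    ((alpha, np.mean((arrivals[1:] - ses(arrivals, alpha)[:-1]) ** 2))
     for alpha in np.arange(0.05, 1.0, 0.05)),
    key=lambda t: t[1],
)
alpha, mse = best
forecast = ses(arrivals, alpha)[-1]      # flat forecast for the following year(s)
print(f"alpha={alpha:.2f}, one-step MSE={mse:.2f}, forecast={forecast:.1f}")
```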

Keywords: Tourism, statistical methods, exponential smoothing, land spatial planning, economy, Microsoft Excel.

PDF Downloads: 707
2874 Physical and Mechanical Phenomena Associated with Rock Failure in Brazilian Disc Specimens

Authors: Hamid Reza Nejati, Amin Nazerigivi, Ahmad Reza Sayadi

Abstract:

The failure mechanism of rocks is one of the fundamental aspects of studying stability in rock engineering. Rock is a material that contains flaws, initial damage, micro-cracks, etc. Failure of a rock structure is largely due to tensile stress and is influenced by various parameters. In the present study, the effect of brittleness and loading rate on the physical and mechanical phenomena produced in rock during loading is considered. For this purpose, the Acoustic Emission (AE) technique is used to monitor the fracturing process of three rock types (onyx marble, sandstone and soft limestone) with different brittleness, and of sandstone samples under different loading rates. The results of the experimental tests revealed that brittleness and loading rate have a significant effect on the mode and number of induced fractures in rocks. An increase in rock brittleness increases the frequency of induced cracks, and the number of tensile fractures decreases when the loading rate increases.

Keywords: Brittleness, loading rate, acoustic emission, tensile fracture, shear fracture.

PDF Downloads: 1420
2873 Mutation Rate for Evolvable Hardware

Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert

Abstract:

Evolvable hardware (EHW) refers to a self-reconfiguring hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area, and several different EAs have been introduced. Every time a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades, a lot of research has been carried out to identify the best parameters of the EA's components for different test problems; however, different researchers propose different solutions. In this paper, the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been analyzed before, is studied in depth. The mutation rate for an EHW system modifies the values of the logic cell inputs, the cell type (for example from AND to NOR) and the circuit output. The behaviour of the mutation has been analyzed based on the number of generations, genotype redundancy and the number of logic gates used in the evolved circuits. The experimental results indicate the mutation rate to be used during evolution for the design and optimization of logic circuits. Research on the best mutation rate during the last 40 years is also summarized.
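
A hedged sketch of the (1+λ) ES loop is given below. The circuit genotype, fitness function, and parameter values are toy stand-ins (a fixed-length list of two-input gates evaluated against a target truth table), intended only to show where the mutation rate acts on cell inputs, cell types, and the circuit output; they are not the paper's setup.

```python
# Toy (1+lambda) evolution strategy for a gate-array genotype (illustrative only).
import random

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
         "XOR": lambda a, b: a ^ b, "NOR": lambda a, b: 1 - (a | b)}
N_IN, N_CELLS, LAMBDA, MUT_RATE = 3, 8, 4, 0.1
TARGET = lambda a, b, c: a ^ b ^ c                      # target truth table: 3-input parity

def random_cell(i):
    return [random.choice(list(GATES)), random.randrange(N_IN + i), random.randrange(N_IN + i)]

def evaluate(genome):
    score = 0
    for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
        values = list(bits)
        for g, i1, i2 in genome:
            values.append(GATES[g](values[i1], values[i2]))
        score += int(values[-1] == TARGET(*bits))       # last cell is the circuit output
    return score

def mutate(genome):
    child = [cell[:] for cell in genome]
    for i, cell in enumerate(child):
        if random.random() < MUT_RATE:                  # mutate cell type or one of its inputs
            field = random.randrange(3)
            cell[field] = random.choice(list(GATES)) if field == 0 else random.randrange(N_IN + i)
    return child

parent = [random_cell(i) for i in range(N_CELLS)]
for gen in range(2000):
    offspring = [mutate(parent) for _ in range(LAMBDA)]
    best = max(offspring, key=evaluate)
    if evaluate(best) >= evaluate(parent):              # (1+lambda) replacement
        parent = best
    if evaluate(parent) == 8:
        print("perfect circuit found at generation", gen)
        break
```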

Keywords: Evolvable hardware, mutation rate, evolutionary computation, design of logic circuits.

PDF Downloads: 1501
2872 Determination of Agricultural Characteristics of Smooth Bromegrass (Bromus inermis Leyss) Lines under Konya Regional Conditions

Authors: Abdullah Özköse, Ahmet Tamkoç

Abstract:

The present study was conducted to determine the yield and yield components of smooth bromegrass lines under the environmental conditions of the Konya region during the growing seasons between 2011 and 2013. The experiment was performed in a randomized complete block design (RCBD) with four replications. It was found that the selected lines had a statistically significant effect on all the investigated traits, except for the main stem length and the number of nodes in the main stem. According to the two-year averages of the parameters examined in the smooth bromegrass lines, the main stem length ranged from 71.6 cm to 79.1 cm, the main stem diameter from 2.12 mm to 2.70 mm, the number of nodes in the main stem from 3.2 to 3.7, the internode length from 11.6 cm to 18.9 cm, flag leaf length from 9.7 cm to 12.7 cm, flag leaf width from 3.58 mm to 6.04 mm, herbage yield from 221.3 kg da⁻¹ to 354.7 kg da⁻¹ and hay yield from 100.4 kg da⁻¹ to 190.1 kg da⁻¹. The study concluded that the smooth bromegrass lines differ in terms of yield and yield components. Therefore, it is very crucial to select suitable varieties of smooth bromegrass to obtain optimum yield.

Keywords: Semiarid region, smooth bromegrass, yield, yield components.

PDF Downloads: 1227
2871 Dimension Reduction of Microarray Data Based on Local Principal Component

Authors: Ali Anaissi, Paul J. Kennedy, Madhu Goyal

Abstract:

Analysis and visualization of microarray data is very helpful for biologists and clinicians in the diagnosis and treatment of patients. It allows clinicians to better understand the structure of microarray data and facilitates understanding of gene expression in cells. However, a microarray dataset is a complex data set with thousands of features and a very small number of observations. Such very high dimensional data often contain noise, non-useful information, and only a small number of features relevant to the disease or genotype. This paper proposes a non-linear dimensionality reduction algorithm, Local Principal Component (LPC), which aims to map high-dimensional data to a lower-dimensional space. The reduced data represent the most important variables underlying the original data. Experimental results and comparisons are presented to show the quality of the proposed algorithm. Moreover, experiments also show how this algorithm reduces high-dimensional data whilst preserving the neighbourhoods of the points in the low-dimensional space as in the high-dimensional space.
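
The exact LPC algorithm is not detailed in the abstract; as a hedged illustration, the sketch below implements one common "local PCA" scheme: partition the samples into local groups with k-means, fit PCA within each group, and embed each sample with its own group's components. The synthetic data dimensions are placeholders for a real microarray matrix.

```python
# Hedged sketch of a local-PCA style reduction (not necessarily the paper's LPC).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2000))        # 60 samples x 2000 genes (synthetic stand-in)
n_local, n_components = 3, 5

labels = KMeans(n_clusters=n_local, n_init=10, random_state=0).fit_predict(X)

embedding = np.zeros((X.shape[0], n_components))
for c in range(n_local):
    idx = np.where(labels == c)[0]
    k = min(n_components, len(idx))                           # guard tiny neighbourhoods
    local_pca = PCA(n_components=k).fit(X[idx])               # PCA on the local neighbourhood
    embedding[idx, :k] = local_pca.transform(X[idx])

print("reduced shape:", embedding.shape)
```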

Keywords: Linear Dimension Reduction, Non-Linear Dimension Reduction, Principal Component Analysis, Biologists.

PDF Downloads: 1574
2870 An Experimental Investigation on the Droplet Behavior Impacting a Hot Surface above the Leidenfrost Temperature

Authors: Khaleel Sami Hamdan, Dong-Eok Kim, Sang-Ki Moon

Abstract:

An appropriate model for predicting the size of the droplets resulting from break-up with structures will help in better understanding and modeling the two-phase flow calculations in the simulation of a reactor core loss-of-coolant accident (LOCA). The behavior of droplets impacting a hot surface above the Leidenfrost temperature was investigated. Droplets of known size and velocity impacted an inclined hot plate, and their behavior was observed with a high-speed camera. It was found that, for droplets with a Weber number higher than a certain value, the higher the Weber number of the droplet the smaller the secondary droplets. The COBRA-TF model over-predicted the secondary droplet sizes measured in the present experiment. A simple model for the secondary droplet size was proposed using the mass conservation equation. The maximum spreading diameter of the droplets was also compared to previous correlations and fairly good agreement was found. A better prediction of the heat transfer in the case of a LOCA can be obtained with the presented model.
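
The abstract does not give the proposed model's final form; the sketch below only illustrates the mass-conservation backbone such a model rests on: if a parent droplet of diameter D breaks into N equal secondary droplets, volume conservation gives d = D·N^(-1/3). The numbers are hypothetical.

```python
# Mass (volume) conservation for droplet break-up into N equal secondary droplets.
def secondary_diameter(parent_diameter_mm, n_secondary):
    """d = D * N**(-1/3), from (pi/6) D^3 = N * (pi/6) d^3."""
    return parent_diameter_mm * n_secondary ** (-1.0 / 3.0)

D = 2.5                       # hypothetical parent droplet diameter, mm
for n in (2, 8, 27):
    print(f"N={n:3d}  ->  d = {secondary_diameter(D, n):.2f} mm")
```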

Keywords: Break-up, droplet, impact, inclined hot plate, Leidenfrost temperature, LOCA.

PDF Downloads: 2371
2869 Deterministic Random Number Generator Algorithm for Cryptosystem Keys

Authors: Adi A. Maaita, Hamza A. A. Al_Sewadi

Abstract:

One of the crucial parameters of digital cryptographic systems is the selection of the keys used and their distribution. The randomness of the keys has a strong impact on the system's security strength, as random keys are difficult to predict, guess, reproduce, or discover by a cryptanalyst. Therefore, adequate key randomness generation is still sought for the benefit of stronger cryptosystems. This paper suggests an algorithm designed to generate and test pseudorandom number sequences intended for cryptographic applications. The algorithm is based on mathematically manipulating information publicly agreed upon between sender and receiver over a public channel. This information is used as a seed for performing mathematical functions that generate a sequence of pseudorandom numbers to be used for encryption/decryption purposes. The manipulation involves permutations and substitutions that fulfill Shannon's principle of "confusion and diffusion". ASCII code characters are utilized in the generation process instead of bit strings, which adds flexibility in testing different seed values. Finally, the obtained results indicate that guessing the keys would be soundly difficult for attackers.
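
The paper's actual generator is not specified in the abstract; the sketch below is only a toy illustration of the general idea: a shared ASCII seed drives repeated substitution (modular mixing of character codes) and permutation (rotation of a working buffer) to emit a pseudorandom byte stream. It is not cryptographically secure and is not the authors' algorithm.

```python
# Toy seeded substitution/permutation keystream (illustrative, NOT secure,
# and not the algorithm proposed in the paper).
def keystream(seed: str, n_bytes: int):
    state = [ord(c) for c in seed]              # ASCII codes of the shared seed
    out = []
    while len(out) < n_bytes:
        # substitution: mix each code with its neighbour and the output length
        state = [(v * 31 + state[(i + 1) % len(state)] + len(out)) % 256
                 for i, v in enumerate(state)]
        # permutation: rotate the working buffer by an amount derived from the state
        rot = state[0] % len(state)
        state = state[rot:] + state[:rot]
        out.append(state[-1])
    return bytes(out[:n_bytes])

shared_seed = "publicly agreed value"           # hypothetical seed
print(keystream(shared_seed, 16).hex())
# XOR-ing plaintext bytes with such a keystream would give a simple stream cipher.
```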

Keywords: Cryptosystems, Information Security agreement, Key distribution, Random numbers.

PDF Downloads: 3432
2868 Low Molecular Weight Heparin during Pregnancy

Authors: Sihana Ahmeti Lika, Merita Dauti, Ledjan Malaj

Abstract:

The objective of this study is to analyze the prophylactic use of low molecular weight heparin (LMWH) during pregnancy and the correlation between its use and the month/week of pregnancy, in the Department of Gynecology and Obstetrics at the Clinical Hospital in Tetovo. A retrospective study was undertaken from 1 January to 31 December 2012. Over the one-year period, the total number of patients was 4636. Among the 1447 (32.21%) pregnant women, 298 (20.59%) were prescribed LMWH. The largest group of patients given LMWH, 119 (39.93%), were diagnosed as hypercoagulable. The age group with the highest attendance was 25-35 years, with 141 patients (47.32%). For 195 (65.44%) patients, this was their first pregnancy. The earliest stage of LMWH use was the second month of pregnancy, with 4 (1.34%) cases. The most frequent stage was the seventh month, with 70 women (23.49%), followed by the ninth month with 68 women (22.81%). Women in the 28th gestational week were found to be the most affected: a total of 55 (78.57%) were in that week. Clexane 2000 and Fraxiparine 0.3 were the most commonly prescribed LMWH preparations; 84 patients (28.19%) received Clexane 2000, followed by 81 (27.18%) with Fraxiparine 0.3. The administration of LMWH is associated with long hospitalization (median 14.6 days).

Keywords: Hypercoagulable state, low molecular weight heparin, month of pregnancy, pregnant women.

PDF Downloads: 2642
2867 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials

Authors: Ergin Kosa, Ali Göksenli

Abstract:

Erosion and abrasion are wear mechanisms reducing the lifetime of machine elements like valves, pumps and pipe systems. Both wear mechanisms act at the same time, causing a "synergy" effect that leads to rapid damage of the surface. Different parameters affect the erosive-abrasive wear rate. In this study, the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand with a particle size ranging between 200 and 500 μm was used as the abrasive. All tests were carried out in a sand-water mixture of 20% concentration for four hours, with a particle impact velocity of 4.76 m/s. Steel St 37 with a Vickers Hardness Number (VHN) of 245 was used as the ductile material and quenched St 37 with 510 VHN as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated using a scanning electron microscope for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. The maximum wear rate for the ductile material was observed at a particle impact angle of 30° and decreased with further increase in attack angle, while the maximum wear rate for the brittle material occurred at an impact angle of 45° and decreased further up to 90°. Ploughing was the dominant wear mechanism for the ductile material. Microcracks, which are nucleation centers for crater formation, were detected on the surface of the ductile material; the number of craters decreased and their depth increased at attack angles higher than 30°. A deformation wear mechanism was observed for the brittle material, and the number and depth of pits decreased at impact angles higher than 45°. It is concluded that the wear rate cannot be directly related to the particle impact angle, owing to the different responses of ductile and brittle materials.

Keywords: Erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material.

PDF Downloads: 3023
2866 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time and season. Image segmentation using deep learning, which has recently evolved rapidly, provides stable, high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time so as to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the lack of information caused by fixing the number of channels is resolved by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on the memory access overhead; thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data, and it uses two ASPPs to obtain high-quality contexts from the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image of 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks, on the Cityscapes validation set.

Keywords: Edge network, embedded network, MMA, matrix multiplication accelerator and semantic segmentation network.

PDF Downloads: 467
2865 Speckle Reducing Contourlet Transform for Medical Ultrasound Images

Authors: P.S. Hiremath, Prema T. Akkasaligar, Sharan Badiger

Abstract:

Speckle noise affects all coherent imaging systems, including medical ultrasound. In medical images, noise suppression is a particularly delicate and difficult task: a tradeoff between noise reduction and the preservation of actual image features has to be made in a way that enhances the diagnostically relevant image content. Even though wavelets have been extensively used for denoising speckle images, we have found that denoising using contourlets gives much better performance in terms of SNR, PSNR, MSE, variance and correlation coefficient. The objective of the paper is to determine the number of levels of Laplacian pyramidal decomposition, the number of directional decompositions to perform on each pyramidal level, and the thresholding schemes that yield optimal despeckling of medical ultrasound images in particular. In the proposed method, the log-transformed original ultrasound image is subjected to the contourlet transform to obtain contourlet coefficients. The transformed image is denoised by applying thresholding techniques on the individual band-pass subbands using a Bayes shrinkage rule. We quantify the achieved performance improvement.
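
The contourlet transform itself is outside the scope of a short sketch, but the Bayes shrinkage step applied to a band-pass subband can be illustrated as below: estimate the noise standard deviation from the subband via the median absolute deviation, estimate the signal standard deviation, and soft-threshold the coefficients with T = σ_n²/σ_x. The synthetic subband stands in for actual contourlet coefficients of a log-transformed ultrasound image.

```python
# BayesShrink-style soft thresholding of one subband (illustrative stand-in data).
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros((64, 64))
signal[::8, ::8] = 3.0                                 # sparse strong coefficients ("edges")
subband = signal + rng.normal(0.0, 0.2, (64, 64))      # plus speckle-like noise

sigma_n = np.median(np.abs(subband)) / 0.6745          # robust (MAD) noise estimate
sigma_x = np.sqrt(max(subband.var() - sigma_n ** 2, 1e-12))
T = sigma_n ** 2 / sigma_x                             # BayesShrink threshold

denoised = np.sign(subband) * np.maximum(np.abs(subband) - T, 0.0)   # soft threshold
print(f"sigma_n={sigma_n:.3f}, sigma_x={sigma_x:.3f}, T={T:.3f}")
```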

Keywords: Contourlet transform, Despeckling, Pyramidal directional filter bank, Thresholding.

PDF Downloads: 2446
2864 Prevalence and Fungicidal Activity of Endophytic Micromycetes of Plants in Kazakhstan

Authors: L. V. Ignatova, Y. V. Brazhnikova, T. D. Mukasheva, R. Zh. Berzhanova, A. A. Omirbekova

Abstract:

Endophytic microorganisms are present in plants of different families growing in the foothills and piedmont plains of the Trans-Ili Alatau. It was found that the maximum number of endophytic micromycetes is typical of the Fabaceae family. The number of microscopic fungi in the roots reached (145.9±5.9)×10³ CFU/g of plant tissue, and of yeasts (79.8±3.5)×10² CFU/g of plant tissue. Endophytic microscopic fungi are mainly typical of the underground parts of plants, whereas yeasts more often infected the aboveground parts. Only small amounts of micromycetes are typical of inflorescences and fruits. Antagonistic activity of selected micromycetes against the phytopathogens Fusarium graminearum, Cladosporium sp., Phytophthora infestans and Botrytis cinerea was detected. Strains with broad, narrow and limited ranges of action were identified. For further investigation, strains Rh2 and T7 were selected; they are characterized by a broad spectrum of fungicidal activity and formed large inhibition zones against the phytopathogens. The active antagonists are attributed to the species Rhodotorula mucilaginosa and Beauveria bassiana.

Keywords: Endophytic micromycetes, fungicidal activity, prevalence.

PDF Downloads: 2633
2863 Improved Network Construction Methods Based on Virtual Rails for Mobile Sensor Network

Authors: Noritaka Shigei, Kazuto Matsumoto, Yoshiki Nakashima, Hiromi Miyajima

Abstract:

Although Mobile Wireless Sensor Networks (MWSNs), which consist of mobile sensor nodes (MSNs), can cover a wide observation region with a small number of sensor nodes, they need to construct a network that collects the sensing data at the base station by moving the MSNs. As an effective method, the network construction method based on Virtual Rails (VRs), referred to as the VR method, has been proposed. In this paper, we propose two effective techniques for the VR method. They can prolong the operation time of the network, which is limited by the battery capacity and energy consumption of the MSNs. The first technique, an effective arrangement of VRs, almost equalizes the number of MSNs belonging to each VR. The second technique, an adaptive movement method for MSNs, takes into account the residual battery energy. In the simulation, we demonstrate that each technique can improve the network lifetime and that the combination of both techniques is the most effective.

Keywords: Wireless sensor network, mobile sensor node, relay of sensing data, virtual rail, residual energy.

PDF Downloads: 1754
2862 Household Level Determinants of Rural-Urban Migration in Bangladesh

Authors: Shamima Akhter, Siegfried Bauer

Abstract:

The aim of this study is to analyze the migration process of the rural population of Bangladesh. A Heckman probit model with sample selection was applied to explore the determinants of migration and the intensity of migration at the farm household level. The farm survey was conducted in the central part of Bangladesh on 160 farm households with migrants and 154 farm households without migrants, a total of 316 farm households. The results from the applied model revealed that the main determinants of migration at the farm household level are household age, the number of economically active males and females, the number of young and old dependent members in the household, and agricultural land holding. On the other hand, the main determinants of the intensity of migration are the availability of economically active adult males in the household, the number of young dependents, and agricultural land holding.
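
A hedged sketch of a two-step Heckman-type estimation is shown below: a probit selection equation for whether the household has a migrant, then an outcome equation for migration intensity on the selected households augmented with the inverse Mills ratio. The covariates, simulated data, and the OLS second stage are illustrative simplifications, not the paper's exact specification.

```python
# Minimal two-step Heckman-style selection sketch (illustrative only).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 316
# Hypothetical household covariates: head age, active males, young dependents, land
X = np.column_stack([
    rng.normal(45, 10, n),      # household head age
    rng.poisson(2, n),          # economically active males
    rng.poisson(1, n),          # young dependents
    rng.gamma(2.0, 0.5, n),     # agricultural land holding
])
Xc = sm.add_constant(X)

# Hypothetical outcomes: selection (has a migrant) and intensity (number of migrants)
migrant = (Xc @ np.array([-2.0, 0.02, 0.4, 0.3, -0.5]) + rng.normal(size=n) > 0).astype(int)
intensity = np.where(migrant == 1, rng.poisson(1.5, n) + 1, 0)

# Step 1: probit selection equation
probit_res = sm.Probit(migrant, Xc).fit(disp=False)
xb = Xc @ probit_res.params
imr = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

# Step 2: outcome equation on the selected sample, augmented with the IMR
sel = migrant == 1
X2 = np.column_stack([Xc[sel], imr[sel]])
ols_res = sm.OLS(intensity[sel], X2).fit()
print(probit_res.params)
print(ols_res.params)
```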

Keywords: Determinants, Heckman Probit Model, Migration, Rural-Urban.

PDF Downloads: 3062
2861 Prediction of Dissolved Oxygen in Rivers Using a Wang-Mendel Method – Case Study of Au Sable River

Authors: Mahmoud R. Shaghaghian

Abstract:

The amount of dissolved oxygen in a river has a great direct effect on aquatic macroinvertebrates, and this indirectly influences the ecosystem of the region. In this paper, we try to predict dissolved oxygen in rivers by employing a simple fuzzy logic modeling approach, the Wang-Mendel method. This model uses only previous records to estimate upcoming values. For this purpose, daily and hourly records from eight stations in the Au Sable watershed in Michigan, United States, are employed over periods of 12 years and 50 days, respectively. Calculations indicate that for long-period prediction it is better to increase the input intervals, but for filling missing data it is advisable to decrease the interval. Increasing the partitioning of the input and output features has little influence on accuracy but makes the model very time consuming. Increasing the number of input data acts similarly to increasing the number of partitions. A large amount of training data does not essentially change the accuracy, so an optimal training length should be selected.
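
A hedged, one-input sketch of the Wang-Mendel rule-generation procedure is shown below: partition the input and output ranges into fuzzy regions, generate one rule per training pair, keep the highest-degree rule per antecedent, and predict by weighted-average defuzzification. The region count and the short dissolved-oxygen series are hypothetical, and the real model would use more inputs.

```python
# Minimal Wang-Mendel sketch (illustrative; region counts and data are hypothetical).
import numpy as np

def tri_memberships(x, centers):
    """Triangular membership of x in regions defined by evenly spaced centers."""
    step = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / step)

def wang_mendel_fit(X, y, n_regions=5):
    """Generate one rule per sample; keep the highest-degree rule per antecedent."""
    x_centers = np.linspace(X.min(), X.max(), n_regions)
    y_centers = np.linspace(y.min(), y.max(), n_regions)
    rules = {}  # antecedent region index -> (consequent region index, degree)
    for xi, yi in zip(X, y):
        mx, my = tri_memberships(xi, x_centers), tri_memberships(yi, y_centers)
        a, b = int(mx.argmax()), int(my.argmax())
        degree = mx[a] * my[b]
        if a not in rules or degree > rules[a][1]:
            rules[a] = (b, degree)
    return rules, x_centers, y_centers

def wang_mendel_predict(x, rules, x_centers, y_centers):
    """Weighted-average defuzzification over the fired rules."""
    mx = tri_memberships(x, x_centers)
    num = sum(mx[a] * y_centers[b] for a, (b, _) in rules.items())
    den = sum(mx[a] for a in rules) + 1e-12
    return num / den

# Toy usage: predict the next dissolved-oxygen value from the previous one.
do_series = np.array([8.1, 8.3, 8.0, 7.8, 8.2, 8.5, 8.4, 8.1, 7.9, 8.0])  # mg/L
X, y = do_series[:-1], do_series[1:]
rules, xc, yc = wang_mendel_fit(X, y)
print(wang_mendel_predict(8.2, rules, xc, yc))
```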

Keywords: Dissolved oxygen, Au Sable, fuzzy logic modeling, Wang Mendel.

PDF Downloads: 1891
2860 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments

Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing

Abstract:

Simulation modeling can be used to solve real-world problems and provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples helps capture surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
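
A minimal sketch of the sampling-and-fitting idea is given below: the standard two-factor CCD points are augmented with Latin hypercube samples, and a second-order polynomial is fitted by least squares to responses from a stand-in "simulator". Factor ranges, sample counts, and the test function are hypothetical, not the liquefaction case study.

```python
# Sketch: augment a two-factor CCD with LHS points and fit a quadratic surrogate.
import numpy as np
from scipy.stats import qmc

# Central composite design in coded units: factorial corners, axial points, center
alpha = np.sqrt(2)
ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
                [0, 0]])

# Extra Latin hypercube samples over the same coded region
lhs = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(8),
                [-alpha, -alpha], [alpha, alpha])
X = np.vstack([ccd, lhs])

# Stand-in for the process simulator (e.g., energy demand per unit CO2)
def simulate(x):
    return 5.0 + 1.2 * x[0] - 0.8 * x[1] + 0.9 * x[0] ** 2 + 0.5 * x[1] ** 2 + 0.3 * x[0] * x[1]

y = np.array([simulate(x) for x in X])

# Second-order (quadratic) response surface fitted by least squares
def quad_terms(x):
    return [1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]]

A = np.array([quad_terms(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```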

Keywords: Central composite design, CO2 liquefaction, Latin Hypercube Sampling, simulation-based optimization.

PDF Downloads: 742
2859 Evaluation of Service Continuity in a Self-organizing IMS

Authors: Satoshi Komorita, Tsunehiko Chiba, Hidetoshi Yokota, Ashutosh Dutta, Christian Makaya, Subir Das, Dana Chee, F. Joe Lin, Henning Schulzrinne

Abstract:

The NGN (Next Generation Network), which can provide advanced multimedia services over an all-IP based network, has been the subject of much attention for years. While there have been tremendous efforts to develop its architecture and protocols, especially for IMS, which is a key technology of the NGN, it is far from being widely deployed. Efforts to create an advanced signaling infrastructure realizing many requirements have resulted in a large number of functional components and interactions between those components. Thus, carriers are trying to explore effective ways to deploy IMS while offering value-added services. As one such approach, we have proposed a self-organizing IMS. A self-organizing IMS enables IMS functional components and the corresponding physical nodes to adapt dynamically and automatically to situations such as network load and available system resources while continuing IMS operation. To realize this, service continuity for users is an important requirement when a reconfiguration occurs during operation. In this paper, we propose a mechanism that provides service continuity to users, focus on its implementation, and describe a performance evaluation in terms of the number of control signaling messages and the processing time during reconfiguration.

Keywords: IMS, SIP, Service Continuity, Self-organizing, Performance.

PDF Downloads: 1598
2858 Network Coding with Buffer Scheme in Multicast for Broadband Wireless Network

Authors: Gunasekaran Raja, Ramkumar Jayaraman, Rajakumar Arul, Kottilingam Kottursamy

Abstract:

Broadband Wireless Network (BWN) is a promising technology nowadays due to the increasing number of smartphones. A buffering scheme using network coding considers reliability and a proper degree distribution in a Worldwide Interoperability for Microwave Access (WiMAX) multi-hop network. Using network coding, transmission is performed in a secure way, which helps improve throughput and reduces packet loss in the multicast network. At the outset, improved network coding is proposed for the multicast wireless mesh network. Considering the problem of performance overhead, the degree distribution drives the buffering decision in the encoding/decoding process. Consequently, BuS (Buffer Scheme), based on network coding, is proposed for the multi-hop network. Here, the encoding process introduces a buffer for temporary storage so that packets are transmitted with a proper degree distribution. The simulation results report the number of packets received in the encoding/decoding process with a proper degree distribution using the buffering scheme.

Keywords: Encoding and decoding, buffer, network coding, degree distribution, broadband wireless networks, multicast.

PDF Downloads: 1741
2857 Software Reliability Prediction Model Analysis

Authors: L. Mirtskhulava, M. Khunjgurua, N. Lomineishvili, K. Bakuria

Abstract:

Software reliability prediction gives a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. We focus on a software reliability model in this article, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures following an exponential distribution.

Keywords: Exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability.

PDF Downloads: 1681
2856 Analyzing the Market Growth in API Economy Using Time-Evolving Model

Authors: Hiroki Yoshikai, Shin’ichi Arakawa, Tetsuya Takine, Masayuki Murata

Abstract:

The API (Application Programming Interface) economy is expected to create new value by converting corporate services such as information processing and data provision into APIs and using these APIs to connect services. Understanding the dynamics of an API economy market under the strategies of its participants is crucial to fully maximizing the value of the API economy. To capture the behavior of a market in which the number of participants changes over time, we present a time-evolving market model for a platform in which API providers, who provide APIs to service providers, participate in addition to service providers and consumers. We then use the market model to clarify the role API providers play in expanding market participation and forming ecosystems. The results show that the platform with API providers increased the number of market participants by 67% and decreased the cost of developing services by 25% compared to the platform without API providers. Furthermore, during the expansion phase of the market, the profits of the participants are found to be mostly the same when 70% of the revenue from consumers is distributed to service providers and API providers. It is also found that, when the market is mature, the profits of the service providers and API providers decrease significantly due to competition among them, and the profit of the platform increases.

Keywords: API Economy, ecosystem, platform, API providers.

PDF Downloads: 244
2855 Study on Electrohydrodynamic Capillary Instability with Heat and Mass Transfer

Authors: D. K. Tiwari, Mukesh Kumar Awasthi, G. S. Agrawal

Abstract:

The effect of an axial electric field on the capillary instability of a cylindrical interface in the presence of heat and mass transfer has been investigated using viscous potential flow theory. In viscous potential flow, the viscous term in the Navier-Stokes equation vanishes because the vorticity is zero, but the viscosity is not zero. Viscosity enters through the normal stress balance in viscous potential flow theory, and tangential stresses are not considered. A dispersion relation that accounts for the growth of axisymmetric waves is derived, and stability is discussed theoretically as well as numerically. The stability criterion is given by a critical value of the applied electric field as well as a critical wave number. Various graphs have been drawn to show the effect of physical parameters such as the electric field, heat transfer capillary number, conductivity ratio and permittivity ratio on the stability of the system. It has been observed that the axial electric field and the heat and mass transfer both have a stabilizing effect on the system.

Keywords: Capillary instability, Viscous potential flow, Heat and mass transfer, Axial electric field.

PDF Downloads: 1966
2854 Connected Vertex Cover in 2-Connected Planar Graph with Maximum Degree 4 is NP-complete

Authors: Priyadarsini P. L. K, Hemalatha T.

Abstract:

This paper proves that the problem of finding a connected vertex cover in a 2-connected planar graph with maximum degree 4 (CVC-2) is NP-complete. The motivation for proving this result is to give a shorter and simpler proof of the NP-completeness of TRA-MLC (the Top Right Access point Minimum-Length Corridor) problem [1], by finding a reduction from CVC-2. TRA-MLC has many applications in laying optical fibre cables for data communication and electrical wiring in floor plans. The problem of finding a connected vertex cover in any planar graph with maximum degree 4 (CVC) is NP-complete [2]. We first show that CVC-2 belongs to NP, and then we find a polynomial reduction from CVC to CVC-2. Let a graph G0 and an integer K form an instance of CVC, where G0 is a planar graph and K is an upper bound on the size of the connected vertex cover in G0. We construct a 2-connected planar graph G by identifying the blocks and cut vertices of G0 and then finding the planar representation of all the blocks of G0, leading to a plane graph G1. We replace the cut vertices with cycles in such a way that the resulting graph G is a 2-connected planar graph with maximum degree 4. We consider L = K - 2t + 3 Σ_{i=1}^{t} d_i, where t is the number of cut vertices in G1 and d_i is the number of blocks for which the i-th cut vertex is common. We prove that G has a connected vertex cover of size at most L if and only if G0 has a connected vertex cover of size at most K.

Keywords: NP-complete, 2-Connected planar graph, block, cut vertex

PDF Downloads: 2004
2853 Preferred Character Size for Oblique Angles

Authors: Photjanat Phimnom, Haruetai Lohasiriwat

Abstract:

In today's world, LED displays are used for presenting visual information under various circumstances. Such information is an important intermediary in human information processing, and researchers have investigated diverse factors that influence the effectiveness of this process. Letter size is undoubtedly one major factor that has been tested and recommended by many standards and guidelines. However, viewing the display from a directly perpendicular position is a typical assumption, whereas many actual situations require viewing from an angle. The current research aims to study the effect of oblique viewing angle and viewing distance on the ability to recognize alphabets, numbers, and English words. A total of ten participants volunteered for our 3 x 4 x 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabet, number, short word, and long word). Following the method of constant stimuli, our study suggests that a larger oblique angle, ranging from 0 to 75 degrees from the line of sight, results in a significantly higher legibility threshold, i.e., a larger required font size (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value < 0.05); however, the effect of the distance factor is expected to be confounded by the quality of the screen used in our experiment. Lastly, our results show that single alphabets as well as single numbers are recognized at a significantly lower threshold (smaller font size) than both short and long words (p-value < 0.05). Therefore, it is recommended that, when designing information to be presented on an LED display, the full range of possible oblique angles be taken into account in order to specify the preferred letter size. Additionally, a recommendation of letter sizes for 100% legibility under the tested conditions is provided in the paper.

Keywords: Letter Size, Oblique Angle, Viewing Distance, Legibility Threshold.

PDF Downloads: 1330
2852 Influence of Mass Flow Rate on Forced Convective Heat Transfer through a Nanofluid Filled Direct Absorption Solar Collector

Authors: Salma Parvin, M. A. Alim

Abstract:

The convective and radiative heat transfer performance and the entropy generation of forced convection through a direct absorption solar collector (DASC) are investigated numerically. Four different fluids, namely Cu-water nanofluid, Al2O3-water nanofluid, TiO2-water nanofluid, and pure water, are used as the working fluid. Entropy production has been taken into account in addition to the collector efficiency and heat transfer enhancement. A penalty finite element method with Galerkin's weighted residual technique is used to solve the governing non-linear partial differential equations. Numerical simulations are performed for variations of the mass flow rate. The outcomes are presented in the form of isotherms, average output temperature, average Nusselt number, collector efficiency, average entropy generation, and Bejan number. The results show that the rate of heat transfer and the collector efficiency increase significantly as the mass flow rate is raised, up to a certain range.

Keywords: DASC, forced convection, mass flow rate, nanofluid.

PDF Downloads: 857
2851 Production of the Protein-Vitamin Complex from Wheat Germ

Authors: Gulmira Kenenbay, Urishbay Chomanov, Tamara Tultabayeva, Aruzhan Shoman

Abstract:

Wheat germ has a balanced amino acid composition of its protein, which is well digested by enzymes in the human gastrointestinal tract, and a high content of vitamins, minerals and unsaturated acids. Introducing grain components into food products will enrich them with biologically important substances, giving these products a number of valuable properties and reducing their caloric content. A complex natural system of substances in foods helps to meet the body's need for essential nutrients, increases its resistance to the harmful effects of the environment, and prolongs life. In this regard, there is a need to develop a production technology for protein complexes from wheat germ and to apply them in foods, particularly in the dairy industry. Experimental studies were conducted to determine the effect of the amount of herbal supplements on the sensory characteristics of the product. Studies were also conducted to determine the optimal process parameters of water activity and moisture content of the investigated product.

Keywords: Wheat germ, sensory characteristics of the product, water activity.

PDF Downloads: 1995
2850 Attribute Weighted Class Complexity: A New Metric for Measuring Cognitive Complexity of OO Systems

Authors: Dr. L. Arockiam, A. Aloysius

Abstract:

In general, class complexity is measured based on one of several factors, such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), Number of Attributes (NOA), and so on. Several new techniques, methods and metrics based on different factors have been developed by researchers for calculating the complexity of a class in Object-Oriented (OO) software. Earlier, Arockiam et al. proposed a complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of the classes derived from it. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute holds the same weight, whereas in general the cognitive load in understanding different types of attributes cannot be the same. Here, we propose a new metric, Attribute Weighted Class Complexity (AWCC). In AWCC, cognitive weights are assigned to the attributes based on the effort needed to understand their data types. The proposed metric has been shown, through case studies and experiments, to be a better measure of the complexity of classes with attributes.
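
The abstract does not list the paper's cognitive weight table; the sketch below only illustrates the mechanics of an attribute-weighted class complexity: each attribute contributes a weight derived from its data type, and method cognitive weights are added on top. The per-type weights and the example class are hypothetical.

```python
# Illustrative attribute-weighted class complexity (hypothetical weight table).
TYPE_WEIGHTS = {           # assumed cognitive weights per attribute data type
    "int": 1, "float": 1, "bool": 1,
    "string": 2, "array": 3, "object": 4,
}

def awcc(attributes, method_weights):
    """Sum of attribute weights (by data type) plus cognitive weights of methods."""
    attr_part = sum(TYPE_WEIGHTS.get(t, 1) for t in attributes.values())
    return attr_part + sum(method_weights)

customer_class = {"id": "int", "name": "string", "orders": "array", "address": "object"}
methods = [2, 3, 5]        # e.g. weights derived from sequence/branch/loop structures
print("AWCC =", awcc(customer_class, methods))
```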

Keywords: Software Complexity, Attribute Weighted Class Complexity, Weighted Class Complexity, Data Type

PDF Downloads: 2121