Search results for: Distance Snake
110 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function
Authors: Anupama Pande, Vishik Goel
Abstract:
A complex-valued neural network is a neural network whose inputs, weights, thresholds and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks, where they may replace the sigmoid transfer characteristic of the hidden layer in a multi-layer perceptron. This paper presents exhaustive results of using RBF units in a complex-valued neural network model that learns with the back-propagation algorithm (called 'Complex-BP'). Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition, compared with a real-valued neural network. We report observations on the effect of learning rates, the ranges from which initial weights are randomly selected, the error functions used, and the number of iterations required for the error to converge. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex-valued neural network, Radial basis function, Image recognition.
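A minimal sketch of the kind of complex-valued RBF unit the abstract describes, assuming a Gaussian radial response to the modulus-based distance from a complex centre and complex output weights; the names, shapes and toy data are illustrative, not the authors' Complex-BP implementation:

```python
import numpy as np

def complex_rbf_layer(x, centers, widths, w_out):
    """Forward pass of a hypothetical complex-valued RBF hidden layer.

    x:       complex input vector, shape (d,)
    centers: complex RBF centres, shape (m, d)
    widths:  real spreads, shape (m,)
    w_out:   complex output weights, shape (m,)
    """
    # Distance criterion with respect to each centre (modulus in C^d)
    dist2 = np.sum(np.abs(x - centers) ** 2, axis=1)
    # Gaussian radial response driven by that distance
    phi = np.exp(-dist2 / (2.0 * widths ** 2))
    # Complex-weighted combination of the (real) radial activations
    return np.dot(w_out, phi)

# toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
centers = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
print(complex_rbf_layer(x, centers, np.ones(3), rng.normal(size=3) + 0j))
```

Note that the radial activation itself is real-valued; the complex arithmetic enters through the output weighting, which is one common way to build such units.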
109 A Hybridization of Constructive Beam Search with Local Search for Far From Most Strings Problem
Authors: Sayyed R Mousavi
Abstract:
The Far From Most Strings Problem (FFMSP) is to obtain a string that is far from as many as possible of a given set of strings. All input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard and does not admit a constant-ratio approximation unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. On the other hand, hybrid algorithms have recently been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm called Constructive Beam and Local Search (CBLS) is investigated for the problem; it is a hybridization of constructive beam search and local search. More specifically, the proposed algorithm consists of two phases: the first obtains several candidate solutions via constructive beam search, and the second applies local search to the candidate solutions obtained by the first. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recently published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
Keywords: Bioinformatics, Far From Most Strings Problem, Hybrid metaheuristics, Matheuristics, Sequences consensus problems.
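Since the problem statement is fully specified by the Hamming distance and the farness threshold, the FFMSP objective that both phases of CBLS would maximize can be sketched directly (a toy evaluation, not the paper's CBLS code):

```python
def hamming(a: str, b: str) -> int:
    # Number of positions at which two equal-length strings differ
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def ffmsp_objective(candidate: str, strings: list[str], t: int) -> int:
    # FFMSP objective: how many input strings the candidate is "far" from,
    # i.e. at Hamming distance >= t
    return sum(hamming(candidate, s) >= t for s in strings)

strings = ["ACGT", "AGGT", "TTTT"]
print(ffmsp_objective("CCCC", strings, t=4))  # 2: far from the last two
```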
108 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
Authors: Deng Zengming, Wang Mingjiang
Abstract:
As 3D video has been explored as a hot research topic in the last few decades, free-viewpoint TV (FTV) is no doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV; it enables images to be rendered at an unlimited number of virtual viewpoints using information from a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into account. The temporal correlations are exploited to produce fine synthesis results, even near foreground boundaries. As for blending priority, the scheme selects one of the two reference views as the main reference view, based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint and merely assists in filling hole pixels with the help of background information. A significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, while objective PSNR gains average 0.5 to 1.3 dB and SSIM gains average 0.01 to 0.05.
Keywords: View synthesis, Gaussian mixture model, hybrid framework, fusion method.
107 Interaction Effect of Feed Rate and Cutting Speed in CNC-Turning on Chip Micro-Hardness of 304-Austenitic Stainless Steel
Authors: G. H. Senussi
Abstract:
The present work is concerned with the effect of turning process parameters (cutting speed, feed rate, and depth of cut) and distance from the centre of the workpiece, as input variables, on chip micro-hardness as the response or output. Three experiments were conducted to investigate chip micro-hardness behaviour at workpiece diameters of 30 mm, 40 mm, and 50 mm. Response surface methodology (RSM) is used to determine and present the cause-and-effect relationship between the true mean response and the input control variables as a two- or three-dimensional hyper-surface. RSM was used to design a three-factor, five-level central composite rotatable design in order to construct statistical models capable of accurately predicting responses. The results obtained showed that RSM can predict the effect of machining parameters on chip micro-hardness. Five-level factorial designs can easily be employed to develop statistical models that predict chip micro-hardness from controllable machining parameters. The results also showed that the combined effect of cutting speed at its lower level, feed rate and depth of cut at their higher values, and a larger workpiece diameter can increase chip micro-hardness.
Keywords: Machining Parameters, Chip Micro-Hardness, CNC Machining, 304-Austenitic Stainless Steel.
106 Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison
Authors: Frank Emmert-Streib, Matthias Dehmer, Jing Liu, Max Muhlhauser
Abstract:
The major objective of this paper is to introduce a new method for selecting genes from DNA microarray data. As the criterion for selecting genes, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level contains the n-nearest-neighbor genes, measured by the Dijkstra (shortest-path) distance, and, hence, it gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene, from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result, we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
Keywords: Graph similarity, generalized trees, graph alignment, DNA microarray data, cervical cancer.
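A hedged sketch of the local decomposition step, assuming the correlation network is built by thresholding absolute pairwise correlations (the threshold value is illustrative) and that unit edge weights make the Dijkstra distance a plain shortest-path level:

```python
import numpy as np
import networkx as nx

def local_tree(corr, root, threshold=0.3, depth=2):
    """Sketch: extract the local 'tree' of a root gene from a correlation
    network; level n holds the genes at shortest-path distance n."""
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(i, j)
    # shortest-path (Dijkstra with unit weights) distances from the root
    dist = nx.single_source_shortest_path_length(g, root, cutoff=depth)
    levels = {}
    for node, d in dist.items():
        levels.setdefault(d, []).append(node)
    return levels  # {0: [root], 1: nearest neighbours, 2: ...}

rng = np.random.default_rng(1)
data = rng.normal(size=(20, 8))        # 20 samples x 8 genes (toy data)
corr = np.corrcoef(data, rowvar=False)
print(local_tree(corr, root=0))
```

Comparing such level structures for the same root gene across tumor stages is then a tree-similarity problem, as the abstract describes.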
105 Unsteady Laminar Boundary Layer Forced Flow in the Region of the Stagnation Point on a Stretching Flat Sheet
Authors: A. T. Eswara
Abstract:
This paper analyses the unsteady, two-dimensional stagnation point flow of an incompressible viscous fluid over a flat sheet when the flow is started impulsively from rest and, at the same time, the sheet is suddenly stretched in its own plane with a velocity proportional to the distance from the stagnation point. The partial differential equations governing the laminar boundary layer forced convection flow are non-dimensionalised using semi-similar transformations and then solved numerically using an implicit finite-difference scheme known as the Keller-box method. Results pertaining to the flow and heat transfer characteristics are computed for all dimensionless times, uniformly valid in the whole spatial region, without any numerical difficulties. Analytical solutions are also obtained for both small and large times, respectively representing the initial unsteady and the final steady-state flow and heat transfer. Numerical results indicate that the velocity ratio parameter has a significant effect on skin friction and the heat transfer rate at the surface. Furthermore, it is shown that there is a smooth transition from the initial unsteady-state flow (small-time solution) to the final steady state (large-time solution).
Keywords: Forced flow, Keller-box method, Stagnation point, Stretching flat sheet, Unsteady laminar boundary layer, Velocity ratio parameter.
104 Outsourcing the Front End of Innovation
Abstract:
The paper presents a new method for efficient innovation process management. Even though innovation management methods, tools and knowledge are well established and documented in the literature, most companies still do not manage innovation efficiently. Especially in SMEs, the front end of innovation (problem identification, idea creation and selection) is often not optimally performed. Our eMIPS methodology represents a sort of "umbrella methodology": a well-defined set of procedures which can be dynamically adapted to the concrete case in a company. In daily practice, various methods (e.g. for problem identification and idea creation) can be applied, depending on the company's needs. It is based on the proactive involvement of the company's employees, supported by the appropriate methodology and external experts. The presented phases are performed via a mixture of face-to-face activities (workshops) and online (eLearning) activities taking place in the Moodle eLearning environment and using other e-communication channels. One part of the outcome is an identified set of opportunities and concrete solutions ready for implementation. The other, equally important, result concerns innovation competences for the participating employees, related to concrete tools and methods for idea management. In addition, the employees gain solid experience in dynamic, efficient and solution-oriented management of the invention process. eMIPS also represents a way of establishing or improving the innovation culture in an organization. The first trial in a pilot company showed excellent results, both regarding the motivation of the participants and the outcomes achieved.
Keywords: Creativity, distance learning, front end, innovation, problem.
103 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using Approach Vague Goal Programming
Authors: Hadi Gholizadeh, Ali Tajdin
Abstract:
To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment. The vagueness expresses uncertainty, since measurement error is always present in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable essentials process at Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the average weights, heights, crater diameters, and volumes of disposable glasses. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which accounts for the uncertainty of the initial information when some of the model parameters are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the average weights, heights, crater diameters and volumes of the disposable glasses were improved. This increases product quality and reduces waste, which lowers the cost of the finished product and ultimately brings customer satisfaction; this satisfaction, in turn, means increased sales.
Keywords: Goal programming, quality control, vague environment, disposable glasses' optimization, fuzzy regression.
102 Using Environmental Sensitivity Index (ESI) to Assess and Manage Environmental Risks of Pipelines in GIS Environment: A Case Study of a Near Coastline and Fragile Ecosystem Located Pipeline
Authors: Jahangir Jafari, Nematollah Khorasani, Afshin Danehkar
Abstract:
With a great many pipelines all over the country, Iran comprises various ecosystems with variable degrees of fragility and robustness, as well as diverse geographical conditions. This study presents a state-of-the-art method to estimate the environmental risks of pipelines by recommending rational equations, including FES, URAS, SRS, RRS, DRS, LURS and IRS, as well as FRS, to calculate the risks. The study was carried out using a relative, semi-quantitative approach based on land uses and HVAs (High-Value Areas). GIS was used as a tool to create appropriate maps of the environmental risks, land uses and distances. The main logic behind the formulas was distance-based approaches and the ESI, together with intersections. Summarizing the results of the study, a geographical risk map based on the ESIs and the final risk score (FRS) was created. The results showed that the most sensitive, and thus highest-risk, area is an area of mangrove forest in the pipeline's neighborhood, while salty lands were the most robust land-use units under pipeline failure circumstances. Moreover, the study showed that mapping pipeline risks with the applied method is more reliable, more convenient and comparatively more comprehensive than present non-holistic methods for assessing the environmental risks of pipelines. The focus of the present study is assessment rather than management. It is suggested that new policies be implemented to reduce the negative effects of the pipeline, which has not yet been completely constructed.
Keywords: ERM, ESI, ERA, Pipeline, Assalouyeh.
101 Graph-based High Level Motion Segmentation using Normalized Cuts
Authors: Sungju Yun, Anjin Park, Keechul Jung
Abstract:
Motion capture devices have been utilized in producing several kinds of content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content, and the motions have generally been segmented manually by content producers. Therefore, automatic motion segmentation has recently been attracting much attention. Previous approaches are divided into online and offline: online approaches segment motions based on similarities between neighboring frames, and offline approaches segment motions by capturing global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of several frames repeated within a temporal distance, we consider all similarities among all frames within the temporal distance. This is achieved by constructing a graph where each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method (online) and a GMM-based method (offline), as the proposed method segments motions globally from a graph constructed from similarities between neighboring frames as well as similarities among all frames within the temporal distance.
Keywords: Capture Devices, High-Level Motion, Motion Segmentation, Normalized Cuts.
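The graph construction and partitioning step can be sketched as follows; scikit-learn's SpectralClustering with a precomputed affinity matrix stands in for the normalized cuts algorithm, and the pose descriptors, window size and sigma are assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_motion(frames, window=30, n_segments=4, sigma=1.0):
    """Sketch: frame-similarity graph + normalized-cuts style partitioning.
    frames: (n_frames, n_features) pose descriptors (assumed given)."""
    n = len(frames)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - window), min(n, i + window + 1)):
            d2 = np.sum((frames[i] - frames[j]) ** 2)
            w[i, j] = np.exp(-d2 / (2 * sigma ** 2))  # edge weight = similarity
    # spectral clustering on a precomputed affinity approximates normalized cuts
    labels = SpectralClustering(n_clusters=n_segments,
                                affinity="precomputed",
                                random_state=0).fit_predict(w)
    return labels

frames = np.concatenate([np.zeros((40, 5)), np.ones((40, 5))]) \
         + np.random.default_rng(2).normal(scale=0.1, size=(80, 5))
print(segment_motion(frames, n_segments=2)[::10])  # two clean segments
```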
100 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling
Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee
Abstract:
We developed a new method based on quasi-molecular modeling to simulate cavity flow in three cavity shapes, rectangular, half-circular and beer-bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained by using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s, and then decreased as time increased further. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energy of the rectangular, beer-bucket and half-circular cavities fluctuated about stable average values of 35.62 x 10³, 38.04 x 10³ and 40.80 x 10³ ergs/particle, respectively. This indicated that the half-circular shape is the most suitable for a shrimp pond, because the water flows best in it compared with the rectangular and beer-bucket shapes.
Keywords: Quasi-molecular modelling, particle modelling, lid-driven cavity flow.
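The inverse distance weighted interpolation used to recover the instantaneous velocity field from particle data can be sketched as below; the power parameter and the toy data are assumptions:

```python
import numpy as np

def idw_velocity(grid_pts, particle_pts, particle_vel, power=2.0, eps=1e-12):
    """Sketch: inverse distance weighted (IDW) interpolation of particle
    velocities onto grid points, to build a velocity vector field.
    grid_pts: (g, 2), particle_pts: (p, 2), particle_vel: (p, 2)."""
    diff = grid_pts[:, None, :] - particle_pts[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=2)) + eps   # (g, p) distances
    w = 1.0 / dist ** power                           # closer particles weigh more
    w /= w.sum(axis=1, keepdims=True)                 # normalise weights per grid pt
    return w @ particle_vel                           # (g, 2) interpolated field

rng = np.random.default_rng(3)
pts = rng.uniform(size=(50, 2))
vel = rng.normal(size=(50, 2))
grid = np.array([[0.5, 0.5], [0.1, 0.9]])
print(idw_velocity(grid, pts, vel))
```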
99 Development of Rock Engineering System-Based Models for Tunneling Progress Analysis and Evaluation: Case Study of Tailrace Tunnel of Azad Power Plant Project
Authors: S. Golmohammadi, M. Noorian Bidgoli
Abstract:
Tunneling progress is a key parameter in the blasting method of tunneling. Taking measures to enhance the tunneling advance can limit the progress distance achieved without a supporting system, subsequently reducing or eliminating the risk of damage. This paper focuses on modeling tunneling progress using three main groups of parameters (tunneling geometry, blasting pattern, and rock mass specifications) based on the Rock Engineering Systems (RES) methodology. In the proposed models, four main parameters affecting tunneling progress are considered as inputs (RMR, Q-system, specific charge of blasting, and area), with progress as the output. Data from 86 blasts conducted at the tailrace tunnel of the Azad Dam, western Iran, were used to evaluate the progress value for each blast. The results indicated that, for the 86 blasts, the progress estimated by the model mostly aligns with the measured progress. This paper presents a method for building the interaction matrix (statistical base) of the RES model. Additionally, a comparison was made between the results of the new RES-based model and a Multiple Linear Regression (MLR) model. In the RES-based model, the weights of the effective parameters are RMR (35.62%), Q (28.6%), q (specific charge of blasting) (20.35%), and A (15.42%), whereas the MLR analysis uses the same main parameters RMR, Q, q, and A. These findings confirm the superior performance of the RES-based model over the other proposed models.
Keywords: Rock Engineering Systems, tunneling progress, Multiple Linear Regression, Specific charge of blasting.
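The abstract reports the RES weights but not the coding of the interaction matrix, so the following is only a guessed weighted-sum index with hypothetical normalisation ranges, showing how the four inputs and their reported weights could combine into a single progress score:

```python
# Hypothetical normalisation ranges; the paper does not list them.
RANGES = {"RMR": (0, 100), "Q": (0.01, 40), "q": (0.5, 2.5), "A": (10, 120)}
WEIGHTS = {"RMR": 0.3562, "Q": 0.286, "q": 0.2035, "A": 0.1542}  # from the abstract

def res_progress_index(blast: dict) -> float:
    """Weighted-sum index of tunneling progress potential, in [0, 1]."""
    score = 0.0
    for name, w in WEIGHTS.items():
        lo, hi = RANGES[name]
        x = min(max(blast[name], lo), hi)
        score += w * (x - lo) / (hi - lo)  # normalise each parameter to [0, 1]
    return score

print(res_progress_index({"RMR": 55, "Q": 4.0, "q": 1.2, "A": 60}))
```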
98 Design and Performance Analysis of One Dimensional Zero Cross-Correlation Coding Technique for a Fixed Wavelength Hopping SAC-OCDMA
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
This paper presents a SAC-OCDMA code with the zero cross-correlation property, which minimizes Multiple Access Interference (MAI): the New Zero Cross-Correlation code (NZCC), which is found to be more scalable than the other existing SAC-OCDMA codes. The NZCC code is constructed using an address segment and a data segment. In this work, the proposed NZCC code is implemented in an optical system using the OptiSystem software for the spectral-amplitude-coded optical code-division multiple-access (SAC-OCDMA) scheme. The main contribution of the proposed NZCC code is its zero cross-correlation, which reduces both MAI and Phase-Induced Intensity Noise (PIIN). The proposed NZCC code exhibits minimum cross-correlation and flexibility in the choice of code parameters, and supports a large number of users, combined with a high data rate and longer fiber length. Simulation results reveal that the optical code-division multiple-access system based on the proposed NZCC code accommodates the maximum number of simultaneous users with higher data-rate transmission, lower Bit Error Rates (BER) and longer travelling distance without any signal quality degradation, as compared to the existing SAC-OCDMA codes.
Keywords: Cross Correlation, Optical Code Division Multiple Access, Spectral Amplitude Coding Optical Code Division Multiple Access, Multiple Access Interference, Phase Induced Intensity Noise, New Zero Cross Correlation code.
97 An Adaptive Memetic Algorithm with Dynamic Population Management for Designing HIV Multidrug Therapies
Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati
Abstract:
In this paper, a mathematical model of the human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART) are considered: reverse transcriptase inhibitors (RTI) and protease inhibitors (PI). To solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators diversify the population simultaneously, and the progress of each crossover operator is used to set the number of times it is applied per generation. To escape local optima and to introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random but depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM to find optimal solutions is compared with three popular metaheuristics.
Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.
96 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality with respect to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on: 1) luminance masking and contrast masking; 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting; and 3) the Wavelet Error Sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
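A sketch of the subband-weighting step, assuming PyWavelets for the DWT; the weight values are illustrative placeholders for the CSF/WES-derived perceptual weights, and the actual SPIHT encoding stage is omitted:

```python
import numpy as np
import pywt

def weight_subbands(img, csf_weights, wavelet="bior4.4", level=3):
    """Sketch: scale each wavelet subband by a perceptual weight before
    handing the coefficients to a zerotree encoder such as SPIHT.
    csf_weights[k] holds three detail weights for decomposition level k,
    coarsest first (illustrative values, not the paper's calibrated model)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                       # approximation band left unweighted here
    for k, (ch, cv, cd) in enumerate(coeffs[1:]):
        w_h, w_v, w_d = csf_weights[k]
        out.append((ch * w_h, cv * w_v, cd * w_d))
    return out

img = np.random.default_rng(4).normal(size=(64, 64))
weights = [(0.6, 0.6, 0.4), (0.8, 0.8, 0.6), (1.0, 1.0, 0.9)]  # coarse -> fine
weighted = weight_subbands(img, weights)
print(len(weighted), weighted[1][0].shape)
```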
95 An Investigation of Surface Texturing by Ultrasonic Impingement of Micro-Particles
Authors: Nagalingam Arun Prasanth, Ahmed Syed Adnan, S. H. Yeo
Abstract:
Surface topography plays a significant role in the functional performance of engineered parts. It is important to have control over the surface geometry and an understanding of the surface details to obtain the desired performance. Hence, in the current research contribution, a non-contact micro-texturing technique has been explored and developed. The technique involves ultrasonic excitation of a tool as the prime source of surface texturing for aluminum alloy workpieces. The specimen surface is polished first and is then immersed in a liquid bath containing a 10% weight concentration of Ti6Al4V grade 5 spherical powder. A submerged slurry jet is used to recirculate the spherical powder under the ultrasonic horn, which is excited at an ultrasonic frequency of 40 kHz and an amplitude of 70 µm. The distance between the horn and the workpiece surface was kept fixed at 200 µm using a precision control stage. Texturing effects were investigated for process times of 1, 3 and 5 s. Thereafter, the specimens were cleaned in an ultrasonic bath for 5 min to remove loose debris from the surface. The developed surfaces were characterized by optical and contact surface profilers. The optical microscopic images show a texture of circular spots on the workpiece surface indented by the titanium spherical balls. Waviness patterns obtained with the contact surface profiler support the texturing effect produced by the proposed technique. Furthermore, water droplet tests were performed to show the efficacy of the proposed technique in developing hydrophilic surfaces and to quantify the texturing effect produced.
Keywords: Surface texturing, surface modification, topography, ultrasonic.
94 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement for a huge amount of training data with pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, limiting the accuracy of the similarity measure. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques when evaluated on finding exact items. The proposed method achieves an accuracy of 86.5%, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
Keywords: Visual search, deep learning, convolutional neural network, machine learning.
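A sketch of the feature-extraction and distance computation, assuming a torchvision ResNet backbone (untrained here, and using the torchvision >= 0.13 `weights=` API) tapped after global average pooling; the paper's specific network, pooling variants and training on clustered pairs are not reproduced:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Backbone truncated before the classifier head; any intermediate layer
# could be tapped instead, as the paper does with various pooling methods.
backbone = models.resnet18(weights=None)          # untrained, for illustration
feature_net = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_net.eval()

def similarity_distance(img_a, img_b):
    """Distance between two images in CNN feature space (lower = more
    similar); training would push this toward 0 for same-cluster pairs."""
    with torch.no_grad():
        fa = feature_net(img_a).flatten(1)
        fb = feature_net(img_b).flatten(1)
    return 1.0 - F.cosine_similarity(fa, fb).item()

a = torch.randn(1, 3, 224, 224)
b = torch.randn(1, 3, 224, 224)
print(similarity_distance(a, b))
```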
93 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that will predict the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive and prediction costs. The cost was computed using a combination of the loss matrix and the confusion matrix. The designed prediction models show that the status of traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
Keywords: Bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow.
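A minimal sketch of an MLP bagging ensemble with cross-validation in scikit-learn (the `estimator=` keyword assumes scikit-learn 1.2 or later); the data below are synthetic stand-ins for the MTM variables named in the abstract, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: travel time, average speed, traffic volume, day of month.
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(20, 90, 600),     # travel time (min)
                     rng.uniform(30, 120, 600),    # average speed (km/h)
                     rng.uniform(500, 6000, 600),  # traffic volume (veh/h)
                     rng.integers(1, 31, 600)])    # day of month
y = (X[:, 1] < 60).astype(int)                     # toy congestion label

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
bagged = BaggingClassifier(estimator=mlp, n_estimators=10, random_state=0)

# Cross-validation, as used in the paper to evaluate the models
print(cross_val_score(bagged, X, y, cv=5).mean())
```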
92 Morphological and Electrical Characterization of Polyacrylonitrile Nanofibers Synthesized Using Electrospinning Method for Electrical Application
Authors: Divyanka Sontakke, Arpit Thakre, D. K Shinde, Sujata Parmeshwaran
Abstract:
Electrospinning is the most widely utilized method for creating nanofibers because of its straightforward setup, its capacity to mass-produce continuous nanofibers from various polymers, and its ability to produce ultrathin fibers with controllable diameters. Smooth and well-aligned ultrafine polyacrylonitrile (PAN) nanofibers with diameters ranging from the submicron to the nanometer scale were produced using the electrospinning technique. PAN powder was used as a precursor to prepare the solution utilized in this process. When the electrostatic repulsion overcame the surface tension, a charged stream of polymer solution was ejected from the tip of the spinneret, and ultrathin nonwoven fibers were thereby produced. The effects of electrospinning parameters such as applied voltage, feed rate, concentration of polymer solution and tip-to-collector distance on the morphology of the electrospun PAN nanofibers were investigated. The nanofibers were heat treated for carbonization to examine the changes in properties and composition for electrical applications. Scanning Electron Microscopy (SEM) was performed before and after carbonization for morphological characterization, and the electrical conductivity was studied. The SEM images show uniform fiber diameters and no bead formation. The average diameter of the PAN fibers was 365 nm for the flat-plate collector and 280 nm for the rotating-drum collector. The four-probe method was utilized to inspect the electrical conductivity of the nanofibers, and the electrical conductivity improved significantly with an increase in the oxidation temperature applied.
Keywords: Electrospinning, polyacrylonitrile carbon nanofibres, heat treatment, electrical conductivity.
91 Thermal Treatments and Characteristics Study on Unalloyed Structural (AISI 1140) Steel
Authors: S. S. Sharma, P. R. Prabhu, Rajagopal Chadaga
Abstract:
The main emphasis of metallurgists has been on processing materials to obtain balanced mechanical properties for a given application. One of the processing routes used to alter the properties is heat treatment. Nearly 90% of structural applications are related to medium-carbon and alloyed steels, which are hence regarded as structural steels. The major requirement for conventional steel is to improve workability, toughness, hardness and grain refinement. In this view, it is proposed to study the mechanical and tribological properties of unalloyed structural (AISI 1140) steel after different thermal (heat) treatments, namely annealing, normalizing, tempering and hardening, compared with the as-received (cold-worked) specimen. All heat treatments were carried out under atmospheric conditions. The hardening treatment improves the hardness of the material; a marginal decrease in hardness with improved ductility is observed after tempering. Annealing and normalizing improve the ductility of the specimen, with the normalized specimen showing the highest ductility. The hardened specimen shows the highest wear resistance in the initial period of sliding wear, whereas beyond 25 km of sliding distance, the as-received steel outperforms the hardened specimen. Both mild and severe wear regions are observed. Microstructural analysis shows a pearlitic structure in the normalized specimen, a lath martensitic structure in the hardened specimen, and a pearlitic-ferritic structure in the annealed specimen.
Keywords: Annealing, hardness, heat treatment, normalizing, wear.
90 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records
Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar
Abstract:
Most reinforced concrete structures designed only for heavy loads have large transverse reinforcement spacing values, and therefore suffer severe failure after intense ground motions. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran using Incremental Dynamic Analysis (IDA) under near-field and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was performed under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the transverse reinforcement spacing can increase the maximum inter-story drift ratio values by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicated that under seismic excitation involving directivity or fling-step, the failure probabilities and the rates at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes. However, for near-fault records, the probability of exceedance occurs at lower seismic intensities compared to far-fault records.
Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.
89 Influence of Combined Drill Coulters on Seedbed Compaction under Conservation Tillage Technologies
Authors: E. Šarauskis, L. Masilionyte, Z. Kriaučiūniene, K. Romaneckas
Abstract:
All over the world, including the Central and East European countries, sustainable tillage and sowing technologies are being applied increasingly broadly with a view to optimising soil resources, mitigating soil degradation processes, saving energy resources, preserving biological diversity, etc. As a result, the conditions of tillage and sowing technological processes are inevitably altered. The purpose of this study is to determine seedbed topsoil hardness when using a combined sowing coulter under different sustainable tillage technologies. The research involved a combined coulter consisting of two dissected blade discs and a shoe coulter. A multipenetrometer was used to determine soil hardness in the seedbed area. Experimental studies found that in loosened soil, the combined sowing coulter compresses the furrow bottom, walls and soil near the furrow equally; soil hardness was therefore similar at all investigated depths and no significant differences were established. In loosened and compacted (double-rolled) soil, the impact of the combined coulter on the hardness of the seedbed soil surface was more considerable at a depth of 2 mm. Soil hardness at the furrow bottom and walls, to a distance of up to 26 mm, was 1.1 MPa. At a depth of 10 mm, the greatest hardness was established at the furrow bottom. In loosened and heavily compacted (rolled six times) soil, at depths of 2 and 10 mm, the combined coulter compacted the furrow bottom most of all, to a hardness of 1.8 MPa. At a depth of 20 mm, soil hardness within the whole investigated area varied insignificantly, fluctuating around 2.0 MPa. The hardness of the furrow walls and the soil near the furrow was approximately 1.0 MPa lower than that at the furrow bottom.
Keywords: Coulter design, seedbed, soil hardness, combined coulters, soil compaction.
88 The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion
Authors: Saeed Khorasanizadeh
Abstract:
The adhesion strength of the exterior or interior coating of steel pipes is very important. Increasing coating adhesion can increase the lifetime of the coating and the safety factor of a transmission pipeline, while decreasing the corrosion rate and costs. Steel pipe surfaces are prepared before coating by shot and grit blasting, a mechanical method. Effective parameters of this process include abrasive particle size, distance to the surface, abrasive flow rate, the physical properties and shapes of the abrasive, selection of the abrasive, the kind of machine and its power, the standard of surface cleanness degree, roughness, blasting time and air humidity. This research was intended to find conditions that improve surface preparation, adhesion strength and corrosion resistance of the coating. This paper therefore studies the effects of varying the abrasive flow rate, the abrasive particle size and the surface blasting time on steel surface roughness, as well as the effect of over-blasting, using a centrifugal blasting machine. Several steel samples (according to API 5L X52) were prepared and coated with epoxy powder, and the coating adhesion strengths were compared using the pull-off test. The results have shown that increasing the abrasive particle size and flow rate can increase steel surface roughness and coating adhesion strength, but increasing the blasting time can over-blast the surface, increasing the surface temperature and hardness while decreasing steel surface roughness and coating adhesion strength.
Keywords: surface preparation, abrasive particles, adhesion strength.
87 Tom Stoppard: The Amorality of the Artist
Authors: Majeed Mohammed Midhin, Clare Finburgh
Abstract:
Maintaining a healthy, balanced loyalty, whether to art or to society, is a debatable issue. The artist is always on the lookout for the potential tension between those two realms. Therefore, one of the most painful dilemmas the artist faces is how to function in a society without sacrificing the aesthetic values of his or her work. In other words, the life-long awareness of failure that derives from the concept of the artist as caught between unflattering social realities and the need to invent genuine art forms becomes fertile soil for artists to work. Within the framework of this dilemma, the questions of the responsibility of the artist and of the relationship of art to politics are illuminating. To a large extent, however, in drama this dilemma is represented by the fictional characters of the play. The present paper tackles the idea of the amorality of the artist in selected plays by Tom Stoppard. Stoppard's awareness of his situation as a refugee led him to keep his distance from politics. He tried hard to avoid any intervention into the realm of political debate, especially in his earliest work. This does not mean that he was uninterested in politics as such, but rather that he preferred to question it rather than adopt a fixed ideological position. Moreover, Stoppard's refusal to intervene in politics can be ascribed to his feeling of gratitude to Britain, where he settled. As a result, Stoppard has frequently been criticized for a lack of political engagement, and also for not leaning far enough to the left when he does engage. His reaction to these public criticisms finds expression in his self-conscious statements, which defensively stress the artifice of his work. Like Oscar Wilde, he thinks that the responsibility of the artist is devoted to the realm of his or her art. Consequently, his consciousness of the role of the artist is truly reflected in two of his plays, Artist Descending a Staircase (1972) and Travesties (1974).
Keywords: Amorality, responsibility, politics, ideology.
86 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on: 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
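A sketch of an eccentricity-based foveation weight map that decays away from the fixation point; the falloff constant and viewing geometry are illustrative assumptions, not the calibrated foveation masking model of the paper:

```python
import numpy as np

def foveation_weights(shape, fixation, viewing_distance=3.0, alpha=0.106):
    """Weight map in [0, 1]: ~1 at the fixation point, decaying with
    eccentricity. viewing_distance is in image heights; alpha is an
    illustrative falloff constant."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    r = np.hypot(yy - fy, xx - fx)                       # pixel distance
    ecc = np.degrees(np.arctan(r / (viewing_distance * h)))  # visual angle (deg)
    return np.exp(-alpha * ecc)

wmap = foveation_weights((256, 256), fixation=(128, 128))
print(wmap[128, 128], wmap[0, 0])   # ~1.0 at fixation, smaller at the corner
```

Multiplying subband coefficients by such a map (resampled per decomposition level) is one plausible way to realize the foveation masking step before SPIHT encoding.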
85 Fault Classification of Double Circuit Transmission Line Using Artificial Neural Network
Authors: Anamika Jain, A. S. Thoke, R. N. Patel
Abstract:
This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally as a result of the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme based on an artificial neural network (ANN) is proposed for such lines. An ANN has the ability to classify nonlinear relationships between measured signals by identifying different patterns of the associated signals. One of the key points of the present work is that only current signals measured at the local end have been used to detect and classify faults in a double-circuit transmission line with double-end infeed. The adaptive protection scheme is tested under specific fault types with varying fault location, fault resistance, fault inception angle, and remote-end infeed. An improved performance is achieved once the neural network is trained adequately; it then performs precisely when faced with different system parameters and conditions. The test results clearly show that faults are detected and classified within a quarter cycle; thus the proposed adaptive protection technique is well suited for double-circuit transmission line fault detection and classification. The performance studies show that the proposed neural-network-based module can improve the performance of conventional fault selection algorithms.
Keywords: Double circuit transmission line, Fault detection and classification, High impedance fault, Artificial Neural Network.
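A toy sketch of the classification stage, assuming six local-end current magnitudes (two circuits, three phases) as features and four fault classes; the data generation below is a crude stand-in for simulated fault records, not the paper's training set or network:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in: features are current magnitudes of the six phases at the
# local end; labels are fault types.
rng = np.random.default_rng(6)
n = 400
X = rng.uniform(0.5, 1.5, size=(n, 6))       # pre-fault-like currents (p.u.)
y = rng.integers(0, 4, size=n)               # 0: none, 1: 1-ph, 2: 2-ph, 3: 3-ph
for i, label in enumerate(y):                # inject a crude class signature:
    X[i, :label] *= 5.0                      # faulted phases draw higher current

clf = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
print("holdout accuracy:", clf.score(X[300:], y[300:]))
```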
84 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways
Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi
Abstract:
The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM for different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance between the rear bumpers of two consecutive vehicles in a traffic stream) measured from field traffic data. The study used data from four locations on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks on basic urban freeways (level terrain) were 1.35 and 1.60, respectively; for rural freeways they were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. The study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
Keywords: Level of Service, Capacity Analysis, Lagging Headway.
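One simple headway-based convention for PCE is the ratio of mean spatial lagging headways, which reproduces the reported rural SUT value on made-up numbers; this is an illustrative assumption, since the paper estimates headways with 3SLS regression models rather than raw means:

```python
import numpy as np

def pce_from_headways(h_car, h_truck):
    """PCE as the ratio of mean lagging headways (one common convention,
    used here only as a stand-in for the paper's regression-based estimate)."""
    return np.mean(h_truck) / np.mean(h_car)

h_car = np.array([48.0, 52.0, 50.0])   # spatial lagging headways (m), cars
h_sut = np.array([63.0, 66.0, 66.0])   # single-unit trucks
print(round(pce_from_headways(h_car, h_sut), 2))  # -> 1.3, the rural SUT value
```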
83 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets
Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira
Abstract:
We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of relativistic magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to their ejection. Initially, we carried out a study of one-dimensional finite volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods, as well as Godunov methods dependent on Riemann problems, applied to the Euler equations, in order to verify their main features and make comparisons among them. We then implemented the central finite volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and requires no dimensional splitting, even in two or more spatial dimensions, and applied it to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions, without spurious oscillations or excessive dissipation, of a magnetized accretion disk rotating around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record in terms of astrophysical simulations of this kind. In our simulations we also obtained jet substructures. A great advantage is that, with our code, we can simulate the GRMHD equations on a simple personal computer.
Keywords: Finite Volume Methods, Central Schemes, Fortran 90, Relativistic Astrophysics, Jet.
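A minimal one-dimensional Nessyahu-Tadmor step, written here in Python rather than the authors' Fortran 90 and applied to Burgers' equation as a standard scalar test; minmod-limited slopes, a midpoint predictor and a staggered corrector are the scheme's defining ingredients:

```python
import numpy as np

def minmod(a, b):
    # limited slope: 0 at extrema, smallest-magnitude difference otherwise
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def nt_step(u, lam, flux):
    """One staggered Nessyahu-Tadmor step (periodic boundaries).
    lam = dt/dx; the result lives on the staggered grid x_{j+1/2},
    so the grid alternates by dx/2 each step."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)     # u'_j
    f = flux(u)
    df = minmod(f - np.roll(f, 1), np.roll(f, -1) - f)     # f'_j
    umid = u - 0.5 * lam * df                              # predictor at t + dt/2
    fmid = flux(umid)
    return 0.5 * (u + np.roll(u, -1)) + 0.125 * (du - np.roll(du, -1)) \
           - lam * (np.roll(fmid, -1) - fmid)              # staggered corrector

# Burgers' equation f(u) = u^2/2 on [0, 2*pi), a standard 1D test
nx, cfl = 200, 0.4
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
u = np.sin(x) + 0.5
dx = x[1] - x[0]
for _ in range(100):
    dt = cfl * dx / np.max(np.abs(u))   # CFL-limited time step
    u = nt_step(u, dt / dx, lambda v: 0.5 * v * v)
print(u.min(), u.max())   # shock forms without spurious oscillations
```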
82 Pesticides Use in Rural Settings in Romania
Authors: Anca E. Gurzau, Alexandru Coman, Eugen S. Gurzau, Marinela Penes, Daniela Dumitrescu, DorinMarchean, Ioan Chera
Abstract:
Environmental pollution with pesticides and heavy metals is a recognized problem nowadays, with a tendency to extend to the global scale and to amplify. Even with all the progress in the environmental field, including in demonstrating the effects of pollutants upon health, linked environment-health studies are insufficient, not only in Romania but all over the world. We aim to describe the particular situation in Romania regarding the uncontrolled use of pesticides, and to identify and evaluate the risk zones for health and the environment in Romania, with the final goal of designing adequate programs for the reduction and control of the risk sources. An exploratory study was conducted to determine the magnitude of the pesticide use problem in a population living in Saliste, a rural setting in Transylvania, Romania. The significant stakeholders in the Saliste region were interviewed, and a sample of the population living in the Saliste area was selected to fill in a designed questionnaire. All the selected participants declared that they used pesticides in their activities, for more than one purpose. They declared that they had applied pesticides annually for between 11 and 30 years, on 5 to 9 days per year on average, mainly on crops situated at some distance from their houses; however, high-risk behavior was identified, as the volunteers declared the use of pesticides in backyard gardens, near their homes, where children were playing. The pesticide applicators did not have the necessary knowledge about safety and exposure. The health data must be correlated with exposure biomarkers in an attempt to identify the possible health effects of pesticide exposure. Future plans include educational campaigns to raise the awareness of the population of the dangers of the uncontrolled use of pesticides.
Keywords: Pesticides, health effects, Romania, Saliste.
81 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data
Authors: Chen Chou, Feng-Tyan Lin
Abstract:
Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of source data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are numerous, complete and quickly updated. One may recall that living areas have traditionally been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's actual movement paths in daily life. In this study, the concept of a "Living Area" is replaced by an "Influence Range" to capture the dynamics and variation with time and purpose of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan city and the purposes of trips, and to discuss the living areas as currently delimited. It creates a dialogue between the concepts of Central Place Theory and Living Area, presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.
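A sketch of the trip-aggregation step with pandas; the records and field names below are hypothetical stand-ins for Taiwan's ETC open data fields (start point, end point, distance, travel time):

```python
import pandas as pd

# Hypothetical gantry-to-gantry trip records
trips = pd.DataFrame({
    "origin":      ["Tainan", "Tainan", "Kaohsiung", "Chiayi", "Tainan"],
    "destination": ["Chiayi", "Kaohsiung", "Tainan", "Tainan", "Chiayi"],
    "distance_km": [58.7, 45.2, 45.2, 58.7, 58.7],
    "minutes":     [42, 35, 37, 45, 40],
})

# Trip counts per origin-destination pair: the raw material for an
# "influence range" map once joined to GIS zone geometries.
od = (trips.groupby(["origin", "destination"])
           .agg(trips=("distance_km", "size"),
                mean_minutes=("minutes", "mean"))
           .reset_index())
print(od)
```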