Search results for: Minkowski distance function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2958

348 A Survey of Various Algorithms for VLSI Physical Design

Authors: Rajine Swetha R, B. Shekar Babu, Sumithra Devi K.A

Abstract:

Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) integrated circuits, and electronic design automation is concerned with the design and production of VLSI systems. An important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design; the output is the layout of a physical package that optimally or near-optimally realizes the logical representation. Physical design problems are combinatorial in nature and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less-fit individuals tend to become extinct in the competition for basic necessities; this survival-of-the-fittest principle leads to evolution in species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but they are able to find acceptable solutions for a wide range of problems. This survey studies efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.
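
As a concrete illustration of the GA loop sketched above, the following Python fragment evolves a cell ordering against a placeholder placement cost; the permutation encoding, operators and cost function are illustrative assumptions, not taken from any of the surveyed papers.

    import random

    def cost(layout):
        # Placeholder placement cost (a displacement/wirelength proxy);
        # real physical design uses problem-specific cost models.
        return sum(abs(gene - slot) for slot, gene in enumerate(layout))

    def genetic_algorithm(n_cells=20, pop_size=50, generations=200, mutation_rate=0.1):
        pop = [random.sample(range(n_cells), n_cells) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)                      # fitter (lower-cost) layouts first
            survivors = pop[:pop_size // 2]         # "survival of the fittest"
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)
                cut = random.randrange(1, n_cells)  # order crossover keeps a permutation
                child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
                if random.random() < mutation_rate: # swap mutation introduces variation
                    i, j = random.sample(range(n_cells), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

    print(cost(genetic_algorithm()))                # approaches 0, the optimum here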

Keywords: Genetic Algorithms, Physical Design, VLSI.

347 Integration of Image and Patient Data, Software and International Coding Systems for Use in a Mammography Research Project

Authors: V. Balanica, W. I. D. Rae, M. Caramihai, S. Acho, C. P. Herbst

Abstract:

Analysis of mammographic images and patient data to facilitate modelling or computer aided diagnostic (CAD) software development is best done using a common database that can handle various mammographic image file formats and relate these to other patient information. This optimizes the use of the data, as both primary reporting and enhanced information extraction for research can be performed on a single dataset. One desired improvement is the integration of DICOM file header information into the database, as an efficient and reliable source of supplementary patient information intrinsically available in the images. The purpose of this paper was to design a suitable database to link and integrate different types of image files and gather common information that can be further used for research purposes. An interface was developed for accessing, adding, updating, modifying and extracting data from the common database, enhancing the future application of the data in CAD processing. Future developments envisaged include an advanced search function to select image files based on descriptor combinations. Results can then be used for specific CAD processing and other research. A user-friendly configuration utility for importing the required fields from the DICOM files must also be designed.
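
The paper does not publish its database schema; the following is a hedged Python sketch of the DICOM-header ingestion step it describes, assuming the third-party pydicom library, the standard-library sqlite3 module, and an illustrative four-column table.

    import sqlite3
    import pydicom  # third-party DICOM reader

    def ingest(db_path, dicom_paths):
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS images (
                           path TEXT PRIMARY KEY,
                           patient_id TEXT, study_date TEXT, modality TEXT)""")
        for p in dicom_paths:
            ds = pydicom.dcmread(p, stop_before_pixels=True)  # header only, no pixels
            con.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?)",
                        (p, str(ds.get("PatientID", "")),
                            str(ds.get("StudyDate", "")),
                            str(ds.get("Modality", ""))))
        con.commit()
        con.close()

Queries joining these header fields with other patient tables would then drive the descriptor-based search function envisaged in the abstract.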

Keywords: Database Integration, Mammogram Classification, Tumour Classification, Computer Aided Diagnosis.

346 Hepatoprotective Effect of Oleuropein against Cisplatin-Induced Liver Damage in Rat

Authors: Salim Cerig, Fatime Geyikoglu, Murat Bakir, Suat Colak, Merve Sonmez, Kubra Koc

Abstract:

Cisplatin (CIS) is one of the most effective anticancer drugs, but it is also toxic to cells through the activation of oxidative stress. Oleuropein (OLE) has a key role against oxidative stress in mammalian cells, but the role of this antioxidant in the toxicity of CIS remains unknown. The aim of the present study was to investigate the efficacy of OLE against CIS-induced liver damage in male rats. With this aim, male Sprague Dawley rats were randomly assigned to one of eight groups: a control group; a group treated with 7 mg/kg/day CIS; groups treated with 50, 100 and 200 mg/kg/day OLE (i.p.); and groups treated with OLE for three days starting 24 h after CIS injection. After 4 days of injections, serum was collected to assess blood AST, ALT and LDH values. The liver tissues were removed for histological, biochemical (TAC, TOS and MDA) and genotoxic evaluations. In the CIS-treated group, the whole liver tissue showed significant histological changes. CIS also significantly increased both the incidence of oxidative stress and the induction of 8-hydroxy-deoxyguanosine (8-OH-dG). Moreover, the rats receiving CIS had abnormal results on liver function tests. However, these parameters returned to the normal range after administration of OLE for 3 days. Finally, OLE demonstrated high protective potential and was effective in attenuating CIS-induced liver injury. In this trial, the 200 mg/kg dose of OLE appeared for the first time to induce the optimal protective response.

Keywords: Antioxidant response, cisplatin, histology, liver, oleuropein, 8-OH-dG.

345 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced-order model if the original high-order model is stable. The PSO method is based on minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
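
A minimal Python sketch of the ISE objective and a bare-bones PSO loop, assuming SciPy; the third-order plant, second-order reduced model and PSO constants are illustrative assumptions, not the paper's test system.

    import numpy as np
    from scipy import signal

    t = np.linspace(0.0, 10.0, 500)
    _, y_full = signal.step(([1.0, 4.0], [1.0, 3.0, 2.0, 1.0]), T=t)  # high-order model

    def ise(params):
        # Integral squared error between the unit-step responses of the two models.
        b0, a1, a0 = params
        if a1 <= 0.0 or a0 <= 0.0:          # reject unstable second-order candidates
            return np.inf
        _, y_red = signal.step(([b0], [1.0, a1, a0]), T=t)
        return float(np.sum((y_full - y_red) ** 2) * (t[1] - t[0]))

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.1, 5.0, size=(30, 3))   # 30 particles over (b0, a1, a0)
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([ise(p) for p in pos])
    for _ in range(100):
        gbest = pbest[np.argmin(pbest_f)]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([ise(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    print(pbest[np.argmin(pbest_f)], pbest_f.min())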

Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.

344 Comparison of Particle Swarm Optimization and Genetic Algorithm for TCSC-based Controller Design

Authors: Sidhartha Panda, N. P. Padhy

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of PSO and GA optimization techniques for Thyristor Controlled Series Compensator (TCSC)-based controller design. The design objective is to enhance power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem, and both the PSO and GA techniques are employed to search for optimal controller parameters. The performance of both optimization techniques in terms of computational time and convergence rate is compared. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with the conventional power system stabilizer (CPSS). Eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both techniques in designing a TCSC-based controller to enhance power system stability.
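
The paper's exact objective function is not reproduced here; one common eigenvalue-based choice for such tuning problems is the worst-case damping ratio, which the optimizer would maximize. A sketch under that assumption, with a toy state matrix rather than the Phillips-Heffron model:

    import numpy as np

    def min_damping_ratio(A):
        # Damping ratio of each closed-loop eigenvalue; tuning raises the worst one.
        lam = np.linalg.eigvals(A)
        zeta = -lam.real / np.abs(lam)
        return zeta.min()

    A = np.array([[0.0, 1.0], [-4.0, -1.0]])  # illustrative stable 2x2 state matrix
    print(min_damping_ratio(A))               # 0.25; PSO/GA would tune gains to raise this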

Keywords: Thyristor Controlled Series Compensator, genetic algorithm, particle swarm optimization, Phillips-Heffron model, power system stability.

343 Comparison of Router Intelligent and Cooperative Host Intelligent Algorithms in a Continuous Model of Fixed Telecommunication Networks

Authors: Dávid Csercsik, Sándor Imre

Abstract:

The performance of state-of-the-art worldwide telecommunication networks strongly depends on the efficiency of the applied routing mechanism. Game-theoretical approaches to this problem offer new solutions. In this paper, a new continuous network routing model is defined to describe data transfer in fixed telecommunication networks with multiple hosts. The nodes of the network correspond to routers, whose latency is assumed to be traffic dependent. We propose that the whole traffic of the network can be decomposed into a finite number of tasks, which belong to various hosts. To describe their differing latency sensitivity, a utility function is defined for each task. The model is used to compare router-intelligent and host-intelligent types of routing methods, corresponding to various data transfer protocols. We analyze host-intelligent routing as a transferable utility cooperative game with externalities. The main aim of the paper is to provide a framework in which the efficiency of various routing algorithms can be compared and the transferable utility game arising in the cooperative case can be analyzed.

Keywords: Routing, Telecommunication networks, Performance evaluation, Cooperative game theory, Partition function form games.

342 An Archetype to Sustain Knowledge Management Systems through Intranet

Authors: B. T. Sayed, Nafaâ Jabeur, M. Aref

Abstract:

Creation and maintenance of knowledge management systems has been recognized as an important research area. However, a lack of accurate results from knowledge management systems limits organizations in applying their knowledge management processes. This leads to a failure in getting the right information to the right people at the right time, followed by deficiencies in decision-making processes. An intranet offers a powerful tool for communication and collaboration, presenting data and information, and the means to create and share knowledge, all in one easily accessible place. This paper proposes an archetype describing how a knowledge management system, with the support of intranet capabilities, could greatly increase the accuracy of capturing, storing and retrieving knowledge-based processes, thereby increasing the efficiency of the system. The system requires a critical mass of usage by the users for the intranet to function as a knowledge management system. The prototype would lead to the design of an application that enforces the creation and maintenance of an effective knowledge management system through the intranet. The aim of this paper is to introduce an effective system to capture, store and distribute knowledge in a form that avoids the failures seen in most existing systems. The methodology requires all employees in the organization to contribute fully in order to make the system a success. The system is still at an initial stage, and the authors are in the process of implementing the ideas presented here in practice to produce satisfactory results.

Keywords: Knowledge Management Systems, Intranet, Methodology.

341 Investigation of Bubble Growth during Nucleate Boiling Using CFD

Authors: K. Jagannath, Akhilesh Kotian, S. S. Sharma, Achutha Kini U., P. R. Prabhu

Abstract:

The boiling process is characterized by the rapid formation of vapour bubbles at the solid-liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool to study bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and its terminal velocity. The volume-of-fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each fluid throughout the domain. In the simulation, a bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. After the bubble is fully formed, it detaches from the surface and rises, accelerating under the net effect of the buoyancy force and viscous drag. Finally, when these forces exactly balance each other, it attains a constant terminal velocity. The bubble detachment diameter and the terminal velocity of the bubble are captured by the monitor function provided in FLUENT. The detachment diameter and terminal velocity obtained are compared with established results based on the shape of the bubble, and good agreement is found between the simulation results and the established correlations.
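
At terminal velocity, the buoyancy force on a spherical bubble is exactly balanced by the drag force; a back-of-the-envelope Python check of that balance (the property values and drag coefficient are illustrative assumptions, not the paper's FLUENT setup):

    import math

    rho_l, rho_v = 958.4, 0.6      # water / vapour density near saturation, kg/m^3 (assumed)
    g, d, Cd = 9.81, 2.0e-3, 0.95  # gravity, bubble diameter (m), assumed drag coefficient

    # Buoyancy (rho_l - rho_v) * g * (pi/6) * d^3 equals drag 0.5 * Cd * rho_l * V^2 * (pi/4) * d^2:
    V_t = math.sqrt(4.0 * g * d * (rho_l - rho_v) / (3.0 * Cd * rho_l))
    print(f"terminal velocity ~ {V_t:.3f} m/s")   # ~0.17 m/s for these assumed values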

Keywords: Bubble growth, computational fluid dynamics, detachment diameter, terminal velocity.

340 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time-history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions with a wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, although each approximation introduces more error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analyses of sample shear and moment frames are carried out. The error associated with each wavelet is computed by comparing the dynamic responses of the sample structures with the exact responses, which are obtained by dynamic analysis of the structures using the non-filtered strong ground motion.
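
A minimal sketch of the one-level discrete-wavelet filtering step described above, assuming the PyWavelets package and a synthetic accelerogram standing in for the Northridge record:

    import numpy as np
    import pywt  # PyWavelets

    dt = 0.02
    t = np.arange(0.0, 20.0, dt)
    accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)  # synthetic record

    # One-level DWT: keep the approximation (low-frequency) part, zero the detail part.
    cA, cD = pywt.dwt(accel, 'db4')
    filtered = pywt.idwt(cA, np.zeros_like(cD), 'db4')[:accel.size]

    # 'filtered' has half the effective bandwidth; dynamic analysis with it is cheaper,
    # at the cost of the approximation error the paper quantifies per wavelet.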

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

339 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of realizing the QSC protocol in practice. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation for a QSC protocol is proposed as the minimum taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
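
In code, the proposed evaluation reduces to a minimum followed by an average; the Python sketch below assumes all three parameters are already normalized to [0, 1], and the numbers are illustrative rather than the paper's worked example.

    def optimality(efficiency, practicality, attack_error_rates):
        # Security is the minimum error rate induced across the considered attacks;
        # optimality is the plain average of the three parameters.
        security = min(attack_error_rates.values())
        return (efficiency + security + practicality) / 3.0

    score = optimality(
        efficiency=0.5,
        practicality=0.75,
        attack_error_rates={"intercept-resend": 0.25, "measure-resend": 0.25,
                            "probe": 0.17, "entanglement-swap": 0.25},
    )
    print(score)  # ~0.473 with these illustrative numbers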

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

338 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network

Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon

Abstract:

In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, based on the spectroscopy approach and a Neural Network (NN). Batavia pineapples were used in the experiments, yielding 100 samples. Juice extracted from each sample was used to determine the Soluble Solid Content (SSC), which labels the samples into the sweet and unsweet classes. In terms of experimental equipment, the sensor cover was specifically designed to hold the sensor and light source so that the reflectance is read at a depth of 5 mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features of the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. In summary, the low-cost compact spectroscopy sensor achieved favorable results in classifying the sweetness of the two classes of pineapples.
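
The paper's exact architecture is not given beyond "sequential model with ReLU"; a hedged scikit-learn sketch of the described pipeline (shuffling, standardization, a small ReLU network on the two reflectance features), with synthetic stand-in data:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(0.1, 0.9, size=(100, 2))         # stand-in 510 nm / 900 nm reflectance
    y = (X[:, 0] - 0.4 * X[:, 1] > 0.3).astype(int)  # stand-in sweet/unsweet labels

    # Shuffling and standardization, as in the paper's preprocessing step.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=True,
                                              random_state=0)
    scaler = StandardScaler().fit(X_tr)

    clf = MLPClassifier(hidden_layer_sizes=(8,), activation="relu", max_iter=2000,
                        random_state=0).fit(scaler.transform(X_tr), y_tr)
    print("test accuracy:", clf.score(scaler.transform(X_te), y_te))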

Keywords: Spectroscopy, soluble solid content, pineapple, neural network.

337 Practical Design Procedures of 3D Reinforced Concrete Shear Wall-Frame Structure Based on Structural Optimization Method

Authors: H. Nikzad, S. Yoshitomi

Abstract:

This study investigates and develops a structural optimization method and examines the effect of size constraints on practical solutions for reinforced concrete (RC) building structures with shear walls. The cross-sections of beams and columns and the thickness of the shear wall are considered as design variables. The objective function to be minimized is the total cost of the structure, using a simple and efficient automated structural optimization methodology implemented on the MATLAB platform. With a modification of the mathematical formulation, the result is compared with the optimal solution obtained without size constraints. The most suitable combination of section sizes is selected for the final design based on linear static analysis. The findings of this study show that defining a higher value for the upper bound of the sectional sizes significantly affects the optimal solution, and that defining size constraints plays a vital role in finding a global and practical solution during the optimization procedure. The results confirm the ability and efficiency of the proposed method to find optimal, practical solutions for 3D RC shear wall-frame structures.
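
In optimization terms, the size constraints act as bounds on the design variables; a minimal SciPy sketch of a bound-constrained cost minimization (the variables, bounds and cost model below are illustrative assumptions, not the paper's formulation):

    from scipy.optimize import minimize

    # Design variables (m): beam width, beam depth, column size, wall thickness.
    x0 = [0.30, 0.50, 0.45, 0.25]
    bounds = [(0.25, 0.60), (0.40, 0.90), (0.35, 1.00), (0.20, 0.40)]  # size constraints

    def total_cost(x):
        b, d, c, tw = x
        # Placeholder cost proxy (concrete-volume terms); the paper's cost model is
        # richer and is checked against linear static analysis of the structure.
        return 120.0 * b * d + 200.0 * c ** 2 + 90.0 * tw

    res = minimize(total_cost, x0, bounds=bounds, method="L-BFGS-B")
    print(res.x)  # widening or tightening 'bounds' shifts the optimum, as the paper reports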

Keywords: Structural optimization, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures.

336 The Optimal Placement of Capacitor in Order to Reduce Losses and the Profile of Distribution Network Voltage with GA, SA

Authors: Limouzade E., Joorabian M.

Abstract:

Most of the losses in a power system arise in the distribution sector, which has therefore always received attention. One of the important factors that increases losses in a distribution system is the presence of reactive power flows. The most common way to compensate reactive power in the system is to use shunt (parallel) capacitors. In addition to reducing losses, the advantages of capacitor placement include the release of network capacity at peak load and improvement of the voltage profile. The key issue in capacitor placement is the optimal location and sizing of the capacitors so as to maximize these advantages. In this paper, a new technique is offered for the placement and sizing of fixed capacitors in a radial distribution network on the basis of a Genetic Algorithm (GA). Existing methods for optimal capacitor placement mostly reduce the losses and improve the voltage profile simultaneously, but the compensation cost and load changes have not been considered as influential in the objective function. In this article, a holistic approach is taken to this problem which includes all the relevant parameters of the distribution network: cost, phase voltage and load changes. A vast search over all possible solutions is therefore required, so we use the Genetic Algorithm (GA) as a powerful method for this optimal search.

Keywords: Genetic Algorithm (GA), capacitor placement, voltage profile, network losses, Simulated Annealing (SA), distribution network.

335 Effect of On-Demand Cueing on Freezing of Gait in Parkinson’s Patients

Authors: Rosemarie Velik

Abstract:

Gait disturbance, particularly freezing of gait (FOG), is a phenomenon that is common in Parkinson's patients and significantly contributes to a loss of function and independence. Walking performance and the number of freezing episodes are known to respond favorably to sensory cues of different modalities. However, a topic that has so far barely been touched is how to resolve freezing episodes via sensory cues once they have appeared. In this study, we analyze the effect of five different sensory cues on the duration of freezing episodes: (1) vibratory alert, (2) auditory alert, (3) vibratory rhythm, (4) auditory rhythm, and (5) a visual cue in the form of parallel lines projected onto the floor. The motivation for this study is to inform the design of a gait-assistive device for Parkinson's patients. The test subjects were 7 Parkinson's patients regularly suffering from FOG. The patients had to repeatedly walk a pre-defined course, and cues were always triggered 2 s after freezing onset. The effect was analyzed via experimental measurements and patient interviews. The measurements showed that all 5 sensory cues led to a decrease in the average duration of freezing: baseline (7.9 s), vibratory alert (7.1 s), auditory alert (6.7 s), auditory rhythm (6.4 s), vibratory rhythm (6.3 s), and visual cue (5.3 s). Nevertheless, interestingly, patients subjectively evaluated the auditory alert and vibratory signals as having a significantly better effect in reducing their freezing duration than the visual cue.

Keywords: Auditory cueing, freezing of gait, gait assistance, Parkinson's disease, vibratory cueing, visual cueing.

334 Aerodynamic Interaction between Two Speed Skaters Measured in a Closed Wind Tunnel

Authors: Ola Elfmark, Lars M. Bardal, Luca Oggiano, Håvard Myklebust

Abstract:

Team pursuit is a relatively new event in international long-track speed skating. For a single speed skater, aerodynamic drag accounts for up to 80% of the braking force, so reducing the drag can greatly improve performance. In a team pursuit, the interactions between athletes in near proximity are also essential, but they are not well studied. In this study, systematic measurements of the aerodynamic drag, body posture and relative positioning of speed skaters have been performed in the low-speed wind tunnel at the Norwegian University of Science and Technology, in order to investigate the aerodynamic interaction between two speed skaters. Drag measurements were made of static speed skaters drafting, leading and side-by-side, and dynamic drag measurements were made during synchronized and unsynchronized movement at different distances. The projected frontal area was measured for all postures and movements, and a blockage correction was applied, as the blockage ratio ranged from 5-15% across the different setups. The static drag measurements were performed on two test subjects in two different postures, a low posture and a high posture, and at two different separations between the test subjects, 1.5T and 3T, where T is the length of the torso (T = 0.63 m). A drag reduction was observed for the drafting test subject at all distances and configurations, ranging from 39% down to 11.4%. The drag of the leading test subject was only influenced at -1.5T, with a maximum drag reduction of 5.6%. An increase in drag was seen for all side-by-side measurements; the largest increase, 25.7%, was observed at the closest distance between the test subjects, and the smallest, 2.7%, with ∼0.7 m between the test subjects. A clear aerodynamic interaction between the test subjects and their postures was observed for most static measurements, with results corresponding well to recent studies. For the dynamic measurements, the leading test subject had a drag reduction of 3% even at -3T. The drafting test subject showed a drag reduction of 15% when in synchronized (sync) motion with the leading test subject at 4.5T. The maximal drag reduction for both the leading and the drafting test subject was observed when skating as close as possible in sync, with drag reductions of 8.5% and 25.7% respectively. This study emphasizes the importance of keeping a synchronized movement by showing that the maximal gain for the leading and drafting test subjects dropped to 3.2% and 3.3% respectively when the skaters were in opposite phase. Individual differences in technique also appear to influence the drag of the other test subject.

Keywords: Aerodynamic interaction, drag cycle, drag force, frontal area, speed skating.

333 Jeffreys Prior for Unknown Sinusoidal Noise Model via Cramér-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffreys prior technique for estimating the periodogram and frequency of a sinusoidal model for unknown noisy time-varying or oscillating events (data) in a Bayesian setting. The non-informative Jeffreys prior was adopted for the posterior trigonometric function of the sinusoidal model, and Cramér-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time observations and repeated circular patterns. An average monthly oscillating temperature series, measured in degrees Celsius (°C) from 1901 to 2014, was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required before completing a cycle of changing temperature from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffreys prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate compared to known noisy events.
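
Leaving the Bayesian machinery aside, the frequency of a noisy sinusoid is commonly located at the peak of its periodogram; a minimal SciPy sketch with a synthetic monthly series standing in for the 1901-2014 temperature data:

    import numpy as np
    from scipy.signal import periodogram

    fs = 12.0                        # 12 samples per year for a monthly series
    t = np.arange(0.0, 50.0, 1 / fs) # 50 years of synthetic data
    x = 10 + 8 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(t.size)  # annual cycle + noise

    f, pxx = periodogram(x, fs=fs)
    # Skip the DC bin (the series mean) and report the dominant frequency.
    print("dominant frequency:", f[np.argmax(pxx[1:]) + 1], "cycles/year")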

Keywords: Cramér-Rao Lower Bound (CRLB), Jeffreys prior, sinusoidal model, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), periodograms.

332 A Modularized Design for Multi-Drivers Off-Road Vehicle Driving-Line and its Performance Assessment

Authors: Yi Jianjun, Sun Yingce, Hu Diqing, Li Chenggang

Abstract:

A modularized design approach can facilitate the modeling of complex systems and support behavior analysis and simulation in an iterative, and thus complex, engineering process by using encapsulated submodels of components and of their interfaces. It can therefore improve design efficiency and simplify the solution of complicated problems. A multi-driver off-road vehicle is comparatively complicated. The driving-line is a core part of a vehicle and contributes significantly to its performance. Multi-driver off-road vehicles have complex driving-lines, so their performance is heavily dependent on the driving-line. A typical off-road vehicle's driving-line system consists of the torque converter, transmission, transfer case and driving axles, which transfer the power generated by the engine and distribute it effectively to the driving wheels according to the road condition. Starting from this main function, this paper puts forward a modularized approach for the design and evaluation of a vehicle's driving-line. It can be used to effectively estimate the performance of the driving-line during the concept design stage. Through an appropriate analysis and assessment method, an optimal design can be reached. This method has been applied to a practical vehicle design; it improves design efficiency and makes it convenient to assess and validate the performance of a vehicle, especially of a multi-driver off-road vehicle.

Keywords: Heavy-loaded Off-road Vehicle, Power Driving-line, Modularized Design, Performance Assessment.

331 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification

Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka

Abstract:

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural-network-based approach, is its superior generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In effect, the neuro-fuzzy network is a neural-network-based multi-model machine learning scheme that defines a set of local models emulating the complex and non-linear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
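
A minimal Python sketch of the classify-then-predict structure described above: a crisp classifier stands in for the paper's fuzzy rule base, and the three local RBF networks carry untrained, illustrative parameters.

    import numpy as np

    def rbf_predict(x, centers, widths, weights):
        # Radial Basis Function network output: weighted sum of Gaussian kernels.
        phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
        return float(phi @ weights)

    def classify(irradiance, temperature):
        # Crisp stand-in for the fuzzy rule-based classifier (an assumption here):
        # route the operating point to one of three local RBF models.
        if irradiance < 300.0:
            return 0      # low-irradiance model
        if temperature > 45.0:
            return 1      # high-temperature model
        return 2          # nominal model

    rng = np.random.default_rng(0)
    models = [(rng.random((4, 2)), np.full(4, 0.5), rng.random(4)) for _ in range(3)]

    x = np.array([650.0, 30.0])                      # irradiance (W/m^2), temperature (degC)
    x_n = x / np.array([1000.0, 60.0])               # normalize inputs to [0, 1]
    v_ref = rbf_predict(x_n, *models[classify(*x)])  # reference voltage from selected RBFNN
    print(v_ref)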

Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.

330 Wet Flue Gas Desulfurization Using a New O-Element Design Which Replaces the Venturi Scrubber

Authors: P. Lestinsky, D. Jecha, V. Brummer, P. Stehlik

Abstract:

Scrubbing by liquid spraying is one of the most effective processes for removing fine particles and soluble gas pollutants (such as SO2, HCl, HF) from flue gas. There are many scrubber configurations designed to provide contact between the liquid and the gas stream for effectively capturing particles or soluble gas pollutants, such as spray plates, packed-bed towers, jet scrubbers, cyclones, and vortex and venturi scrubbers. The primary function of a venturi scrubber is the capture of fine particles, as well as HCl, HF or SO2 removal, with the side effect of decreasing the flue gas temperature before input to the absorption column. In this paper, sulfur dioxide (SO2) was captured from flue gas using a new design that replaces the venturi scrubber (first stage of wet scrubbing). The flue gas was prepared by combustion of a solution of carbon disulfide in toluene (1:1 vol.) in the flame in the reactor. The flue gas thus prepared, with a temperature of around 150°C, was processed in the designed laboratory O-element scrubber, with water as the absorbent liquid. The efficiency of SO2 removal, the pressure drop and the temperature drop were measured on our experimental device, and their dependence on the liquid-gas ratio was observed. The average temperature drop was in the range from 150°C to 40°C. The pressure drop increased with increasing liquid-gas ratio, but not as much as in common venturi scrubber designs. The efficiency of SO2 removal was up to 70%. The pressure drop of the newly designed wet scrubber is similar to that of commonly used venturi scrubbers; nevertheless, the influence of the amount of liquid on the pressure drop is not as significant.

Keywords: Desulphurization, absorption, flue gas, modeling.

329 Numerical Solution of Transient Natural Convection in Vertical Heated Rectangular Channel between Two Vertical Parallel MTR-Type Fuel Plates

Authors: Djalal Hamed

Abstract:

The aim of this paper is to perform, by means of the finite volume method, a numerical solution of the transient natural convection in a narrow rectangular channel between two vertical parallel Material Testing Reactor (MTR)-type fuel plates subjected to a heat flux with a cosine shape, in order to determine the margin of nuclear core power at which the natural convection cooling mode can ensure safe core cooling, with the cladding temperature not exceeding a specified safety limit (90 °C). For this purpose, a computer program was developed to determine the principal parameters related to nuclear core safety, such as the temperature distribution in the fuel plate and in the coolant (light water), as a function of the reactor core power. The results show that the core power should not exceed 400 kW in order to ensure safe passive removal of residual heat from the nuclear core by the upward natural convection cooling mode.

Keywords: Buoyancy force, friction force, friction factor, finite volume method, transient natural convection, thermal hydraulic analysis, vertical heated rectangular channel.

328 Evaluation of Heterogeneity of Paint Coating on Metal Substrate Using Laser Infrared Thermography and Eddy Current

Authors: S. Mezghani, E. Perrin, J. L. Bodnar, J. Marthe, B. Cauwe, V. Vrabie

Abstract:

Non-contact evaluation of the thickness of paint coatings can be attempted by different destructive and nondestructive methods, such as cross-section microscopy, gravimetric mass measurement, magnetic gauges, eddy current, ultrasound or terahertz techniques. Infrared thermography is a nondestructive and non-invasive method that can be envisaged as a useful tool to measure surface thickness variations by analyzing the temperature response. In this paper, the thermal quadrupole method for two-layered samples heated by a pulsed excitation is first used. By analyzing the thermal responses as a function of the thermal properties and thicknesses of both layers, optimal parameters for the excitation source can be identified. Simulations show that a pulsed excitation with a duration of ten milliseconds yields a substrate-independent thermal response. Based on this result, an experimental setup consisting of a near-infrared laser diode and an infrared camera was then used to evaluate the variation of paint coating thickness between 60 μm and 130 μm on two samples. The results show that the parameters extracted from the thermal images correlate with the thicknesses estimated by the eddy current method. Pulsed laser thermography is thus an interesting alternative nondestructive method that can, moreover, be used for nonconductive substrates.

Keywords: Nondestructive, paint coating, thickness, infrared thermography, laser, heterogeneity.

327 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential technology that is widely used nowadays, popular due to its convenience for use with mobile devices. This is especially true for Internet users worldwide who rely on WiFi connections. Many location-based services available nowadays use Wireless Fidelity (WiFi) signal fingerprinting; a common example gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of the location estimate. Still, the inconsistency of the WiFi signal makes the estimates differ at different time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Due to these factors, this work reduces the signal noise and estimates the location using the Nearest Neighbour method based on past activity of the signal, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching; it acts as the server supporting the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but these were mostly static. In this work, we highlight proposed solutions for how the adaptive method matches the received signal to the data in the repository, allowing more accurate location estimation. The adaptive update allows the latest location fingerprint to be stored in the repository; furthermore, redundant location fingerprints are removed so that only the updated version of each fingerprint is kept. How the user's location is estimated is described further in the proposed solution section. After a study of previous works, the Artificial Neural Network was found to be the most feasible method for updating the repository and making it adaptive. The Artificial Neural Network's function is to perform the pattern matching of the WiFi signal against the existing data in the repository.
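
A minimal sketch of the Nearest Neighbour step named in the keywords: Euclidean distance in RSSI space against a toy fingerprint repository (the access points, locations and values are illustrative assumptions).

    import numpy as np

    # Fingerprint repository: location label -> mean RSSI (dBm) from three access points.
    repository = {
        "lobby":   np.array([-45.0, -70.0, -80.0]),
        "lab":     np.array([-72.0, -48.0, -66.0]),
        "canteen": np.array([-81.0, -69.0, -50.0]),
    }

    def locate(rssi):
        # Nearest Neighbour on Euclidean distance in RSSI space.
        return min(repository, key=lambda loc: np.linalg.norm(repository[loc] - rssi))

    print(locate(np.array([-70.0, -50.0, -68.0])))  # -> "lab"

In the adaptive scheme described above, the repository entries themselves would be updated over time rather than held static.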

Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.

326 Stochastic Subspace Modelling of Turbulence

Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen

Abstract:

Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available, from which time series can be generated for dynamic response and structural safety analysis. In this paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the subsequent state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed, and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtering of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process, incorporating its phase spectrum, in one stage; its results are compared with a conventional ARMA modelling method.
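
One simple treatment for such an indefinite Hermitian cross-spectral density matrix, sketched here as an assumption rather than the paper's specific adjustment, is to clip its negative eigenvalues:

    import numpy as np

    def nearest_psd(S, eps=0.0):
        # Clip negative eigenvalues of a Hermitian cross-spectral density matrix.
        S = 0.5 * (S + S.conj().T)            # enforce Hermitian symmetry
        w, V = np.linalg.eigh(S)
        w = np.clip(w, eps, None)
        return (V * w) @ V.conj().T           # V diag(w) V^H

    S = np.array([[1.0, 0.9 + 0.3j], [0.9 - 0.3j, 0.8]])  # illustrative, indefinite
    print(np.linalg.eigvalsh(nearest_psd(S)))             # all eigenvalues now >= 0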

Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.

325 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, remaining ~93% at 0 dB SNR.
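
A minimal sketch of the matching-pursuit step: greedily pick the dictionary atom best correlated with the residual. The random dictionary below is a stand-in for the learned Gabor/autoencoder dictionary of the paper.

    import numpy as np

    def matching_pursuit(x, D, n_atoms):
        # Greedy atomic decomposition: repeatedly select the unit-norm atom (column of D)
        # with the largest correlation to the residual; returns (index, weight) pairs.
        residual = x.astype(float).copy()
        picks = []
        for _ in range(n_atoms):
            corr = D.T @ residual
            k = int(np.argmax(np.abs(corr)))
            w = corr[k]
            picks.append((k, w))
            residual -= w * D[:, k]
        return picks, residual

    rng = np.random.default_rng(1)
    D = rng.standard_normal((256, 512))
    D /= np.linalg.norm(D, axis=0)            # normalize atoms
    x = 3.0 * D[:, 7] - 2.0 * D[:, 99]        # sparse synthetic "signal"
    picks, r = matching_pursuit(x, D, 5)
    print(picks[:2])                          # indices 7 and 99 dominate the weight space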

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

324 Self-Adaptive Differential Evolution Based Power Economic Dispatch of Generators with Valve-Point Effects and Multiple Fuel Options

Authors: R. Balamurugan, S. Subramanian

Abstract:

This paper presents the solution of the power economic dispatch (PED) problem for generating units with valve-point effects and multiple fuel options using a Self-Adaptive Differential Evolution (SDE) algorithm. Obtaining the global optimal solution by mathematical approaches is difficult for realistic PED problems in power systems. The Differential Evolution (DE) algorithm has proven to be a powerful evolutionary algorithm for global optimization in many real problems. In this paper, the key control parameters of the DE algorithm, namely the crossover constant CR and the weight F applied to the random differential, are self-adapted. The PED problem formulation takes into consideration the non-smooth fuel cost function due to valve-point effects and the multiple fuel options of generators. The proposed approach has been examined and tested on PED problems with thirteen generating units including valve-point effects, ten generating units with multiple fuel options neglecting valve-point effects, and ten generating units including both valve-point effects and multiple fuel options. The test results are promising and show the effectiveness of the proposed approach for solving PED problems.
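
The non-smooth cost the paper refers to is the standard valve-point model from the PED literature: a quadratic plus a rectified sinusoid from steam-valve openings. A one-unit Python sketch with illustrative coefficients (not the paper's test data):

    import numpy as np

    def fuel_cost(P, a, b, c, e, f, P_min):
        # F(P) = a + b*P + c*P^2 + |e * sin(f * (P_min - P))|
        # The absolute-value sine term makes the cost non-smooth and multimodal,
        # which is why heuristic methods such as SDE are used.
        return a + b * P + c * P ** 2 + np.abs(e * np.sin(f * (P_min - P)))

    print(fuel_cost(P=300.0, a=550.0, b=8.1, c=0.00028, e=300.0, f=0.035, P_min=0.0))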

Keywords: Multiple fuels, power economic dispatch, self-adaptive differential evolution, valve-point effects.

323 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data, but due to constrained bandwidth and storage capacity, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression, the original image is retrieved without any distortion; in lossy compression, the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of the image quality. Here, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to images of different modalities: CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of the compression ratio and Peak Signal-to-Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is judged by the PSNR values. Based on the results, it can be concluded that DCT yields higher PSNR values while FIC yields higher compression ratios. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression of images for storage and transmission is the priority, without losing diagnostic picture quality.
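
A minimal sketch of the two quantities compared above: PSNR, and a crude DCT-based compression that keeps only low-frequency coefficients. The random image and the 16x16 cutoff are illustrative assumptions, not the paper's method.

    import numpy as np
    from scipy.fft import dctn, idctn

    def psnr(original, reconstructed, peak=255.0):
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return 20 * np.log10(peak / np.sqrt(mse))

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in for a medical image

    # Keep only the top-left (low-frequency) 16x16 block of DCT coefficients.
    C = dctn(img, norm="ortho")
    mask = np.zeros_like(C)
    mask[:16, :16] = 1.0
    rec = idctn(C * mask, norm="ortho")
    print("PSNR:", psnr(img, rec), "dB")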

Keywords: DCT, FIC, PIFS, PSNR.

322 Higher Frequency Modeling of Synchronous Exciter Machines by Equivalent Circuits and Transfer Functions

Authors: Marcus Banda

Abstract:

In this article, the influence of higher-frequency effects, in addition to a special damper design, on the electrical behavior of a synchronous generator main exciter machine is investigated. On the one hand, these machines are often highly stressed by harmonics from the bridge rectifier and thus face additional eddy current losses. On the other hand, the switching may excite dangerous voltage peaks in resonant circuits formed by the diodes of the rectifier and the commutation reactance of the machine. Therefore, modern rotating exciters are treated like synchronous generators, usually modeled with a second-order equivalent circuit. Hence, the well-known Standstill Frequency Response (SSFR) test method is applied to a test machine in order to determine parameters for the simulation. With these results, it is clearly shown that higher frequencies have a strong impact on the conventional equivalent circuit model. Because of increasing field displacement effects in the stranded armature winding, the sub-transient reactance is even smaller than the armature leakage reactance at high frequencies. As a matter of fact, this prevents the algorithm from finding an equivalent scheme. This issue is finally solved using Laplace transfer functions that fully describe the transient behavior at the model ports.

Keywords: Synchronous exciter machine, Linear transfer function, SSFR, Equivalent circuit.

321 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow a mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit within a mainshock-aftershock sequence. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models are available that predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks are significant. This paper extends the empirical sliding displacement models to flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
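
Sliding-displacement models of this kind trace back to Newmark's rigid-block analysis; a minimal sketch of that integration on a synthetic mainshock-aftershock sequence (the records and yield acceleration are illustrative assumptions, not the PEER data or the paper's flexible-slope model):

    import numpy as np

    def newmark_displacement(accel, dt, a_yield):
        # Rigid-block integration: the block slides while its velocity, accumulated
        # from acceleration in excess of the yield value, remains positive.
        v, d = 0.0, 0.0
        for a in accel:
            v += (a - a_yield) * dt if (a > a_yield or v > 0.0) else 0.0
            v = max(v, 0.0)
            d += v * dt
        return d

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    mainshock = 0.40 * 9.81 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
    aftershock = 0.25 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
    sequence = np.concatenate([mainshock, aftershock])

    ky = 0.1 * 9.81  # assumed yield acceleration
    print(newmark_displacement(mainshock, dt, ky), "m vs",
          newmark_displacement(sequence, dt, ky), "m")  # sequence demand is larger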

Keywords: Seismic slope stability, sliding displacement, mainshock, aftershock, landslide, earthquake.

320 Supramolecular Cocrystal of 2-Amino-4-Chloro-6-Methylpyrimidine with 4-Methylbenzoic Acid: Synthesis, Structural Determinations and Quantum Chemical Investigations

Authors: Nuridayanti Che Khalib, Kaliyaperumal Thanigaimani, Suhana Arshad, Ibrahim Abdul Razak

Abstract:

The 1:1 cocrystal of 2-amino-4-chloro-6-methylpyrimidine (2A4C6MP) with 4-methylbenzoic acid (4MBA) (I) was prepared by the slow evaporation method in methanol, and it crystallized in the monoclinic C2/c space group, with Z = 8, a = 28.431 (2) Å, b = 7.3098 (5) Å, c = 14.2622 (10) Å and β = 109.618 (3)°. The presence of the un-ionized –COOH functional group in cocrystal I was identified both by spectral methods (1H and 13C NMR, FTIR) and by X-ray diffraction structural analysis. The 2A4C6MP molecule interacts with the carboxylic group of the respective 4MBA molecule through N—H⋯O and O—H⋯N hydrogen bonds, forming a cyclic hydrogen-bonded motif R²₂(8). The crystal structure is stabilized by Npyrimidine—H⋯O=C and C=O—H⋯Npyrimidine type hydrogen-bonding interactions. Theoretical investigations were computed by the HF and density functional (B3LYP) methods with the 6-311+G(d,p) basis set. The vibrational frequencies together with the 1H and 13C NMR chemical shifts were calculated on the fully optimized geometry of cocrystal I. The theoretical calculations are in good agreement with the experimental results. Solvent-free formation of this cocrystal I is confirmed by powder X-ray diffraction analysis.

Keywords: Supramolecular cocrystal, 2-amino-4-chloro-6-methylpyrimidine, Hartree-Fock and DFT studies, spectroscopic analysis.

319 Ethereum Based Smart Contracts for Trade and Finance

Authors: Rishabh Garg

Abstract:

Traditionally, business parties build trust with a centralized operating mechanism, such as payment by letter of credit. However, the increase in cyber-attacks and malicious hacking has jeopardized business operations and finance practices. Emerging markets, with their high banking risks and large presence of digital financing, are looking for technology that enables transparency and traceability of any transaction in trade, finance or supply chain management. Blockchain systems, in the absence of any central authority, enable transactions across the globe with the help of decentralized applications (DApps). A DApp consists of a front-end, a blockchain back-end, and middleware, that is, the code that connects the two. The front-end can be a sophisticated web or mobile app, which is used to invoke the functions/methods on the smart contract. Web apps can employ technologies such as HTML, CSS, React and Express. In this wake, fintech and blockchain products are appearing in brokerages, digital wallets, exchanges, post-trade clearance, settlement, middleware, infrastructure and base protocols. The present paper provides a technology-driven solution, financial inclusion and an innovative working paradigm for business and finance.
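
A minimal sketch of the middleware layer described above, calling a read-only smart contract method from Python; it assumes the third-party web3.py library, and the RPC endpoint, contract address and ABI are hypothetical placeholders, not from the paper.

    from web3 import Web3  # third-party Ethereum client library

    w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # hypothetical RPC endpoint

    # Hypothetical escrow contract: the address and single-function ABI are placeholders.
    address = "0x0000000000000000000000000000000000000000"
    abi = [{"name": "amountHeld", "type": "function", "stateMutability": "view",
            "inputs": [], "outputs": [{"name": "", "type": "uint256"}]}]

    escrow = w3.eth.contract(address=address, abi=abi)
    held = escrow.functions.amountHeld().call()  # read-only call, no gas spent
    print("wei held in escrow:", held)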

Keywords: Authentication, blockchain, channel, cryptography, DApps, data portability, Decentralized Public Key Infrastructure, Ethereum, hash function, Hashgraph, Privilege creep, Proof of Work algorithm, revocation, storage variables, Zero Knowledge Proof.
