Search results for: Function optimization.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3687

387 Optimizing of Fuzzy C-Means Clustering Algorithm Using GA

Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini

Abstract:

Fuzzy C-means clustering (FCM) is a method that is frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. In order to get an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The approach is tested on data generated from an original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
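The gist of the second step, an iterative search for a weighting exponent m that minimizes a least-squares error, can be illustrated with a short sketch. The FCM implementation, the toy data and the reconstruction-error criterion below are illustrative stand-ins; the paper measures error against a Sugeno fuzzy model and also uses a genetic algorithm, neither of which is reproduced here.

```python
import numpy as np

def fcm(X, c, m, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means; X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                       # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # cluster centres
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V

def reconstruction_error(X, U, V, m):
    """MSE between the data and its fuzzy reconstruction - a simple stand-in
    for the paper's least-squares error against the Sugeno model."""
    Um = U ** m
    X_hat = (Um @ V) / Um.sum(axis=1, keepdims=True)
    return float(np.mean((X - X_hat) ** 2))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in (0.0, 2.0, 4.0)])  # toy data, 3 clusters

candidates = np.arange(1.2, 3.01, 0.1)                      # iterative search over m
errors = [reconstruction_error(X, *fcm(X, 3, m), m) for m in candidates]
print("best weighting exponent m ≈", round(float(candidates[int(np.argmin(errors))]), 2))
```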

Keywords: Fuzzy clustering, Fuzzy C-Means, Genetic Algorithm, Sugeno fuzzy systems.

386 Kinetic Theory Based CFD Modeling of Particulate Flows in Horizontal Pipes

Authors: Pandaba Patro, Brundaban Patro

Abstract:

The numerical simulation of fully developed gas–solid flow in a horizontal pipe is carried out using the Eulerian-Eulerian approach, also known as two-fluid modeling, since both phases are treated as interpenetrating continua. The solid-phase stresses are modeled using the kinetic theory of granular flow (KTGF). The computed results for velocity profiles and pressure drop are compared with experimental data. We observe that the convection and diffusion terms in the granular temperature equation cannot be neglected in gas–solid flow simulation along a horizontal pipe. Particle-wall collisions and lift also play an important role in Eulerian modeling. We also investigated the effect of flow parameters such as gas velocity, particle properties and particle loading on pressure drop prediction in different pipe diameters. Pressure drop increases with gas velocity and particle loading. Gas velocity has the same effect (proportional to U²) on the pressure drop prediction as in single-phase flow. With respect to particle diameter, pressure drop first increases, reaches a peak and then decreases. The peak is a strong function of pipe bore.

Keywords: CFD, Eulerian modeling, gas solid flow, KTGF.

385 ANN Based Currency Recognition System using Compressed Gray Scale and Application for Sri Lankan Currency Notes - SLCRec

Authors: D. A. K. S. Gunaratna, N. D. Kodikara, H. L. Premaratne

Abstract:

Automatic currency note recognition invariably depends on the currency note characteristics of a particular country, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in research or implementation of this kind. The proposed system "SLCRec" offers a solution focused on minimizing false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality during usage. Hence, a special linear transformation function is adopted to wipe out noise patterns from backgrounds without affecting the notes' characteristic images, and to recover the images of interest. The transformation maps the original gray-scale range into a smaller range of 0 to 125. Applying edge detection after the transformation provides better robustness to noise and a fair representation of edges for both new and old damaged notes. A three-layer back-propagation neural network is fed with the number of edges detected in row order of the notes, and classification is made into the four classes of interest: 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology is capable of separating the classes properly under varying image conditions.
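As a rough illustration of the pre-processing pipeline described above (linear compression of the gray-scale range to 0-125, edge detection, then row-wise edge counts as the ANN input), the following NumPy/SciPy sketch can be used; the transformation form, the threshold and the placeholder image are assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def compress_gray(img, out_max=125):
    """Map the original gray-scale range linearly onto [0, out_max]
    (illustrative stand-in for the paper's noise-suppressing transformation)."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1e-9) * out_max

def edge_count_per_row(img, threshold=10.0):
    """Sobel gradient magnitude, then count edge pixels row by row -
    roughly the feature vector fed to the neural network."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    return (mag > threshold).sum(axis=1)

note = np.random.default_rng(0).integers(0, 256, (60, 140)).astype(float)  # placeholder image
features = edge_count_per_row(compress_gray(note))
print(features.shape)   # one edge count per row, used as the ANN input vector
```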

Keywords: Artificial intelligence, linear transformation and pattern recognition.

384 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata are generated from each related image to support its identification. This study presents the use of decision trees for optimizing the search for diagnostic images hosted on a cloud server. To analyze server performance, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, search times are substantially improved, computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, since false positives are avoided in the management and acquisition of the information when a diagnostic image is downloaded. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
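A minimal sketch of the idea of routing a metadata query through a decision tree instead of scanning the image store sequentially is shown below; the metadata fields, shard labels and scikit-learn usage are hypothetical, chosen only to illustrate the technique.

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical metadata records: (modality, body part, study year) -> storage partition.
records = [
    ("CR", "BREAST", 2016, "shard_A"),
    ("MG", "BREAST", 2017, "shard_B"),
    ("CT", "CHEST",  2016, "shard_C"),
    ("MG", "BREAST", 2016, "shard_B"),
    ("CT", "HEAD",   2017, "shard_C"),
]
X_raw = [[r[0], r[1], str(r[2])] for r in records]
y = [r[3] for r in records]

enc = OrdinalEncoder()
X = enc.fit_transform(X_raw)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# A query is routed by the tree instead of comparing it against every stored image in turn.
query = enc.transform([["MG", "BREAST", "2016"]])
print(tree.predict(query))   # -> predicted partition holding the requested DICOM study
```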

Keywords: Cloud storage, decision trees, diagnostic image, search, telemedicine.

383 Hydrogeological Risk and Mining Tunnels: The Fontane-Rodoretto Mine, Turin (Italy)

Authors: Paola Gattinoni, Laura Scesi, Elena Cerino Adbin, Daniele Cremonesi

Abstract:

The interaction of tunneling or mining with groundwater has become a very relevant problem, not only due to the need to guarantee the safety of workers and assure the efficiency of tunnel drainage systems, but also to safeguard water resources from impoverishment and pollution risk. It is therefore very important to forecast the drainage processes, i.e., to evaluate the drained discharge and the drawdown caused by the excavation. The aim of this study was to better understand the system and to quantify the flow drained from the Fontane mines, located in Val Germanasca (Turin, Italy). This made it possible to understand the local hydrogeological changes over time. The work was therefore structured as follows: reconstruction of the conceptual model through geological, hydrogeological and geological-structural studies; calculation of the tunnel inflows (using structural methods) and comparison with the measured flow rates; and a water balance at the basin scale. In this way it was possible to understand the relationships between rainfall, groundwater level variations and the effect of the tunnels acting as drains. Subsequently, the effects produced by the excavation of the mining tunnels were quantified through numerical modeling. In particular, the modeling made it possible to observe the drawdown variation as a function of the number, excavation depth and lining of the mine tunnels.

Keywords: Groundwater, Italy, numerical model, tunneling.

382 Carvacrol Attenuates Lung Injury in Rats with Severe Acute Pancreatitis

Authors: Salim Cerig, Fatime Geyikoglu, Pınar Akpulat, Suat Colak, Hasan Turkez, Murat Bakir, Mirkhalil Hosseinigouzdagani, Kubra Koc

Abstract:

This study was designed to evaluate whether carvacrol (CAR) could provide protection against lung injury caused by the development of acute pancreatitis (AP). The rats were randomized into groups to receive (I) no therapy; (II) 50 μg/kg cerulein at 1 h intervals by four intraperitoneal (i.p.) injections; (III) 50, 100 or 200 mg/kg CAR by a single i.p. injection; and (IV) cerulein plus CAR given 2 h after the cerulein injection. Twelve hours later, serum samples were obtained to assess pancreatic function through lipase and amylase values. The animals were euthanized and lung samples were excised. The specimens were stained with hematoxylin-eosin (H&E), periodic acid-Schiff (PAS), Mallory's trichrome and amyloid stains. Additionally, oxidative DNA damage was determined by measuring increases in 8-hydroxy-deoxyguanosine (8-OH-dG) adducts. The results showed that the serum activities of lipase and amylase in AP rats were significantly reduced after the therapy (p<0.05). We also found that the 100 mg/kg dose of CAR significantly decreased 8-OH-dG levels. Moreover, severe pathological findings in the lung such as necrosis, inflammation, congestion, fibrosis and thickened alveolar septa were attenuated in the AP+CAR groups compared with the AP group. Finally, the magnitude of the protective effect on the lung is clear, and CAR is an effective therapy for lung injury caused by AP.

Keywords: Antioxidant activity, carvacrol, experimental acute pancreatitis, lung injury, oxidative DNA damage.

381 Numerical Solution of Manning's Equation in Rectangular Channels

Authors: Abdulrahman Abdulrahman

Abstract:

When the Manning equation is used, a unique value of normal depth in uniform flow exists for a given channel geometry, discharge, roughness and slope. Depending on the value of the normal depth relative to the critical depth, the flow type (supercritical or subcritical) for a given set of channel conditions is determined, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for determining the flow depth for a given flow rate, because the cross-sectional area and the hydraulic radius produce a complicated function of depth. The familiar solutions of normal depth for a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; or 3) preparing tables involving non-dimensional parameters. In this paper, the author derives a semi-analytical solution to Manning's equation for determining the flow depth for a given flow rate in a rectangular open channel. The solution was derived by expressing Manning's equation in non-dimensional form and then expanding this form using Maclaurin's series. In order to simplify the solution, terms containing powers up to 4 were considered. The resulting equation is a quartic equation in standard form, whose solution was obtained by resolving it into two quadratic factors. The proposed solution for Manning's equation is valid over a large range of parameters, and its maximum error is within -1.586%.
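For comparison, the normal depth of a rectangular channel can also be obtained from Manning's equation by a simple bracketing root search; the sketch below (SI units, illustrative input values) is not the paper's quartic approximation, but it can serve as a numerical cross-check of the semi-analytical result.

```python
from scipy.optimize import brentq

def normal_depth(Q, b, n, S):
    """Normal depth y for a rectangular channel from Manning's equation
    Q = (1/n) * A * R^(2/3) * sqrt(S), solved numerically (SI units)."""
    def residual(y):
        A = b * y                       # flow area
        R = A / (b + 2.0 * y)           # hydraulic radius A / wetted perimeter
        return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5 - Q
    return brentq(residual, 1e-6, 100.0)  # residual changes sign over this bracket

# Illustrative values: Q in m^3/s, width b in m, Manning n, slope S.
y_n = normal_depth(Q=10.0, b=4.0, n=0.013, S=0.001)
print(f"normal depth ≈ {y_n:.3f} m")
```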

Keywords: Channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow.

380 Investigating the Dynamic Response of the Ballast

Authors: Osama Brinji, Wing Kong Chiu, Graham Tew

Abstract:

Understanding the stability of rail ballast is one of the most important aspects of railway engineering. An unstable track may cause issues such as unnecessary vibration and, ultimately, loss of track quality. The track foundation plays an important role in the stabilization of the railway. The dynamic response of rail ballast in the vicinity of the rail sleeper can affect the stability of the rail track, and this has not been studied in detail. A review of the literature showed that most works focused on the area under the concrete sleeper. Although there are some theories about the shear (longitudinal) effect of the rail ballast, these have not been properly studied and hence are not well understood. The stability of a rail track will depend on the compactness of the ballast in its vicinity. This paper attempts to determine the dynamic response of the ballast in order to identify its resonant behaviour. This preliminary research is one of several studies that examine the vibration response of granular materials. The main aim is to use this information in the future design of sleepers to ensure that any dynamic response of the sleeper will not compromise the state of compactness of the ballast. The paper reports on the dependence of damping and the natural frequency of the ballast as a function of depth and distance from the point of excitation, introduced through a concrete block. The concrete block is used to simulate a sleeper, and the ballast is simulated with gravel. In spite of these approximations, the results presented in the paper show agreement with the theories and assumptions used in studying the mechanical behaviour of rail ballast.

Keywords: Ballast, dynamic response, sleeper, stability.

379 A Novel Approach to Allocate Channels Dynamically in Wireless Mesh Networks

Authors: Y. Harold Robinson, M. Rajaram

Abstract:

Wireless mesh networking is rapidly gaining popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This increasing popularity is based on two basic facts: ease of deployment and increased network capacity expressed in bandwidth per footage; WMNs do not rely on any fixed infrastructure. Many efforts have been made to maximize the throughput of multi-channel multi-radio wireless mesh networks. Current approaches are based purely on either static or dynamic channel allocation. In this paper, we use a hybrid multi-channel multi-radio wireless mesh networking architecture in which both static and dynamic interfaces are built into the nodes. The Dynamic Adaptive Channel Allocation protocol (DACA) considers optimization of both throughput and delay in the channel allocation. Channel assignment is made codependent with the routing problem in the wireless mesh network and is based on the traffic flow on every link. Temporal and spatial relationships make it necessary to recompute the channel assignment every time the traffic pattern in the mesh network changes. In this paper, a path metric that captures the available path bandwidth is proposed, together with an efficient routing protocol based on this metric which provides both static and dynamic links. The consistency property guarantees that each node makes an appropriate packet forwarding decision and balances the control usage of the network, so that a data packet will traverse the right path.

Keywords: Wireless mesh network, spatial time division multiple access, hybrid topology, timeslot allocation.

378 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia

Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden

Abstract:

The proposed stringent mitigation targets require an immediate start for a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal and gas-fired plants dominating total electricity production in the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system to ensure the security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate system integration of high shares of VRES is to increase cross-sectoral integration through coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model for Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.

Keywords: Decarbonization, energy system modeling, sector coupling, variable renewable energies.

377 An Insurer’s Investment Model with Reinsurance Strategy under the Modified Constant Elasticity of Variance Process

Authors: K. N. C. Njoku, Chinwendu Best Eleje, Christian Chukwuemeka Nwandu

Abstract:

One of the problems facing most insurance companies is how best to manage the burden of paying claims to policy holders whenever the need arises. Hence, there is a need for the insurer to buy a reinsurance contract in order to reduce risk, enabling the insurer to share the financial burden with the reinsurer. In this paper, the insurer's and reinsurer's strategies are investigated under the modified constant elasticity of variance (M-CEV) process and proportional administrative charges. The insurer considers investment in one risky asset and one risk-free asset, where the risky asset is modeled by the M-CEV process, an extension of the constant elasticity of variance (CEV) process. Next, a nonlinear partial differential equation in the form of the Hamilton-Jacobi-Bellman equation is obtained by a dynamic programming approach. Using a power transformation technique and a change of variable, explicit solutions for the optimal investment strategy and optimal reinsurance strategy are obtained. Finally, some numerical simulations of sensitive parameters are presented and discussed in detail, where we observed that the modification factor affects only the optimal investment strategy and not the reinsurance strategy for an insurer with an exponential utility function.
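The risky-asset dynamics can be illustrated with a plain CEV path simulated by the Euler-Maruyama scheme; the exact form of the modification factor in the M-CEV process is not reproduced here, so the drift, volatility and elasticity values below are placeholders for illustration only.

```python
import numpy as np

def simulate_cev(S0=1.0, mu=0.08, sigma=0.3, gamma=0.7, T=1.0, steps=252, seed=0):
    """Euler-Maruyama path of a CEV-type risky asset dS = mu*S dt + sigma*S**gamma dW.
    The paper's modified CEV adds a modification factor, omitted in this baseline sketch."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S = np.empty(steps + 1)
    S[0] = S0
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        # keep the path positive so S**gamma stays well defined
        S[k + 1] = max(S[k] + mu * S[k] * dt + sigma * S[k] ** gamma * dW, 1e-12)
    return S

path = simulate_cev()
print(f"simulated terminal asset value: {path[-1]:.4f}")
```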

Keywords: Reinsurance strategy, Hamilton Jacobi Bellman equation, power transformation, M-CEV process, exponential utility.

376 Solving Transient Conduction and Radiation Using Finite Volume Method

Authors: Ashok K. Satapathy, Prerana Nashine

Abstract:

Radiative heat transfer in a participating medium is analyzed using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting and anisotropically scattering medium. The solution strategy is discussed, and the conditions for computational stability are given. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction. Results have been obtained for the irradiation and the corresponding heat fluxes for both cases. The solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a slab heated by conduction and by thermal radiation. The effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution. The solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient radiation equation. The medium in the enclosure absorbs, emits and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiative heat transfer when the boundary condition is non-symmetric.
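The conduction part of such a transient solution can be sketched with an explicit finite-volume march on a 1-D slab; the radiative source term, the variable space/time increments and the actual boundary data of the paper are omitted, and all values below are illustrative. Boundary cells treat the wall temperature as the neighbouring value, a half-cell approximation kept for brevity.

```python
import numpy as np

def transient_conduction_fv(n=50, L=0.1, alpha=1e-5, T_left=500.0, T_right=300.0,
                            T_init=300.0, t_end=60.0):
    """Explicit finite-volume march for 1-D transient conduction in a slab
    (radiation coupling omitted - conduction only, as a minimal sketch)."""
    dx = L / n
    dt = 0.4 * dx * dx / alpha                    # respect explicit stability limit (Fo < 0.5)
    T = np.full(n, T_init)
    for _ in range(int(t_end / dt)):
        Tw = np.concatenate(([T_left], T[:-1]))   # west neighbour (Dirichlet wall at x = 0)
        Te = np.concatenate((T[1:], [T_right]))   # east neighbour (Dirichlet wall at x = L)
        T = T + alpha * dt / dx**2 * (Tw - 2.0 * T + Te)
    return T

print(transient_conduction_fv()[:5])              # temperatures of the first few cells
```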

Keywords: Participating media, finite volume method, radiation coupled with conduction, heat transfer.

375 Speaker Identification using Neural Networks

Authors: R.V Pawar, P.P.Kajave, S.N.Mali

Abstract:

The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions and information retrieval from speech databases, it becomes useful to identify a speaker automatically based solely on vocal characteristics. This paper emphasizes text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance. It identifies the user by comparing the codebook of the speech utterance with those stored in the database and lists the most likely speakers who could have given that utterance. The speech signal is recorded for N speakers, and the features are then extracted. Feature extraction is done by means of LPC coefficients, calculation of the AMDF, and the DFT. The neural network is trained by applying these features as input parameters. The features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the back-propagation algorithm. Here, the trained network corresponds to the output, and the input is the extracted features of the speaker to be identified. The network performs the weight adjustment, and the best match is found to identify the speaker. The number of epochs required to reach the target determines the network performance.
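The classification stage can be sketched with scikit-learn's multilayer perceptron, which is trained by backpropagation; the pooled-spectrum features and the two synthetic "speakers" below are toy stand-ins for the paper's LPC/AMDF/DFT features and recorded utterances.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def toy_features(signal):
    """Crude pooled-spectrum features standing in for LPC/AMDF/DFT features."""
    spectrum = np.abs(np.fft.rfft(signal))[:256]       # 1 Hz bins for a 1 s, 8 kHz signal
    pooled = spectrum.reshape(16, 16).mean(axis=1)     # 16 coarse frequency bands
    return pooled / (pooled.max() + 1e-12)

rng = np.random.default_rng(0)
signals, labels = [], []
# Hypothetical "speakers": sinusoids with different pitch plus noise.
for speaker, f0 in enumerate((120.0, 210.0)):
    for _ in range(20):
        t = np.linspace(0.0, 1.0, 8000)
        signals.append(np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size))
        labels.append(speaker)
X = np.array([toy_features(s) for s in signals])

# Multilayer perceptron trained by backpropagation, as in the paper.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```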

Keywords: Average Mean Distance function, Backpropagation, Linear Predictive Coding, Multilayered Perceptron.

374 Agreement Options in Multi-person Decision on Optimizing High-Rise Building Columns

Authors: Christiono Utomo, Arazi Idrus, Madzlan Napiah, Mohd. Faris Khamidi

Abstract:

This paper presents a conceptual model of agreement options for negotiation support in multi-person decisions on optimizing high-rise building columns. The decision is complicated because many parties are involved in choosing a single alternative from a set of solutions, and there are different concerns caused by differing preferences, experiences and backgrounds. The building columns serving as alternatives are referred to as agreement options, which are determined by identifying the possible decision-maker groups and then determining the optimal solution for each group. The group in this paper is based on the preferences of three decision makers: the designer, the programmer, and the construction manager. Decision techniques are applied to determine the relative value of the alternative solutions for performing the function. The Analytical Hierarchy Process (AHP) is applied for the decision process, and a game theory based agent system is used for coalition formation. An n-person cooperative game is represented by the set of all players. The proposed coalition formation model enables each agent to individually select its allies or coalition. It further emphasizes the importance of performance evaluation in the design process and of value-based decision making.
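The AHP step can be illustrated by extracting a priority vector from a pairwise-comparison matrix; the matrix below is a hypothetical judgment by one decision maker (e.g. the designer) over three column alternatives, not data from the paper.

```python
import numpy as np

# Hypothetical pairwise comparisons on Saaty's 1-9 scale, reciprocal by construction.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priorities = principal eigenvector of the comparison matrix, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3).
lam_max = np.real(eigvals).max()
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.58
print("priorities:", np.round(w, 3), " consistency ratio:", round(CR, 3))
```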

Keywords: Agreement options, coalition, group choice, game theory, building columns selection.

373 Application of Stabilized Polyaniline Microparticles for Better Protective Ability of Zinc Coatings

Authors: N. Boshkova, K. Kamburova, N. Tabakova, N. Boshkov, Ts. Radeva

Abstract:

Coatings based on polyaniline (PANI) can improve the resistance of steel against corrosion. In this work, the preparation of stable suspensions of colloidal PANI-SiO2 particles, suitable for obtaining composite anticorrosive coatings on steel, is described. Electrokinetic data as a function of pH are presented, showing that the zeta potentials of the PANI-SiO2 particles are governed primarily by the charged groups at the silica surface. Electrosteric stabilization of the PANI-SiO2 particle suspension against aggregation is achieved at pH > 5.5 (EB form of PANI) by adsorption of positively charged polyelectrolyte molecules onto the negatively charged PANI-SiO2 particles. The PANI-SiO2 particles are incorporated by electrodeposition into a zinc metal matrix in order to obtain composite (hybrid) coatings. The latter are intended to ensure sacrificial protection of steel, mainly in aggressive media leading to local corrosion damage. The surface morphology of the composite zinc coatings is investigated with SEM. The influence of the PANI-SiO2 particles on the cathodic and anodic processes occurring in the starting electrolyte for obtaining the coatings is followed with cyclic voltammetry. The electrochemical and corrosion behavior is evaluated with potentiodynamic polarization curves and polarization resistance measurements. The beneficial effect of the stabilized PANI-SiO2 particles on the increased protective ability of the composites is discussed.

Keywords: Corrosion, polyaniline particles, zinc, protective ability.

372 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work deals with online Arabic character recognition, and the principal motivation is to study Arabic manuscripts with online technology.

The system is a Markovian system, which can be seen as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in training the complete models (topology and parameters) from training data.

Our approach is based on the dynamic Bayesian network formalism. DBN theory is a generalization of Bayesian networks to dynamic processes. One of our objectives is to find better parameters representing the links (dependencies) between the variables of the dynamic network.

In pattern recognition applications, the structure is usually fixed, which obliges us to adopt some strong assumptions (for example, independence between some variables). Our application concerns online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.

The DBN and mixed-DBN scores are 70.24% and 62.50% respectively, which suggests room for further development; other approaches taking time into account were considered and implemented, eventually achieving a significant recognition rate of 94.79%.

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.

371 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations have a negative effect on the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of metal due to welding heating and subsequent cooling. However, the dependences for determining structural deformations arising as a result of volumetric changes of metal in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. In the case of complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite elements method to solve the deformation problem. Here, one first calculates the longitudinal and transverse shortenings of the welding joints using the method of analytic dependences and then, with the obtained shortenings, calculates forces whose action is equivalent to the action of the active welding stresses. A finite-element model of the structure is then developed and the equivalent forces are added to this model. Using the results of the calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken.

Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.

370 Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity

Authors: Phool Singh, Ashok Jangid, N. S. Tomer, Deepa Sinha

Abstract:

The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is considered under the influence of a transverse magnetic field in the presence of radiation effects. The Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained by using the Runge-Kutta-Fehlberg method with the help of a Newton-Raphson shooting technique. In the present work, the effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter and Prandtl number on the flow and heat transfer characteristics are discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with work published in the literature by other researchers.
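The numerical strategy (integrate the similarity ODEs and adjust the unknown wall condition until the far-field condition is met) can be shown on the classical Blasius equation; this stand-in omits the magnetic, radiation and unsteadiness terms of the paper and uses SciPy's default RK45 integrator rather than Runge-Kutta-Fehlberg.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting illustration on f''' + 0.5*f*f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_error(s, eta_max=10.0):
    """Miss distance in f'(eta_max) for a guessed wall shear s = f''(0)."""
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

s_star = brentq(far_field_error, 0.1, 1.0)   # Newton/bisection step of the shooting method
print(f"f''(0) ≈ {s_star:.5f}")              # classical Blasius value ≈ 0.33206
```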

Keywords: Magneto hydrodynamics, stretching sheet, thermal radiation, unsteady flow.

369 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, and ongoing efforts are being made to further improve the model's accuracy.

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

368 Network Reconfiguration of Distribution System Using Artificial Bee Colony Algorithm

Authors: S. Ganesh

Abstract:

Power distribution systems typically have tie and sectionalizing switches whose states determine the topological configuration of the network. The aim of network reconfiguration of the distribution network is to minimize the losses for a given load arrangement at a particular time. Thus, the objective is to minimize the losses of the network while satisfying the distribution network constraints: radiality, voltage limits and the power balance condition. In this paper, the status of the switches is obtained by using the Artificial Bee Colony (ABC) algorithm. ABC is based on the intelligent foraging behavior of honeybee swarms, and was developed by observing how real bees find nectar and share the information about food sources with the bees in the hive. The proposed methodology has three stages. In stage one, ABC is used to find the tie switches; in stage two, the identified tie switches are checked against the radiality constraint, and if the radiality constraint is satisfied, the procedure proceeds to stage three, otherwise the process is repeated. In stage three, load flow analysis is performed. The process is repeated until the losses are minimized. ABC is implemented to find the power flow path, and the Forward Sweep algorithm is used to calculate the power flow parameters. The proposed methodology is applied to a 33-bus single feeder distribution network using MATLAB.
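A bare-bones continuous ABC loop conveys the employed/onlooker/scout structure used for the switch search; the quadratic "loss" below is only a placeholder for the feeder power-loss evaluation, and the discrete radiality handling of the paper is not reproduced.

```python
import numpy as np

def abc_minimize(loss, bounds, n_bees=20, n_iter=100, limit=20, seed=0):
    """Bare-bones Artificial Bee Colony minimizer for a continuous loss function."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    food = rng.uniform(lo, hi, (n_bees, dim))            # food sources = candidate solutions
    fit = np.array([loss(x) for x in food])
    trials = np.zeros(n_bees)
    for _ in range(n_iter):
        for phase in ("employed", "onlooker"):
            if phase == "onlooker":                       # onlookers favour better sources
                probs = fit.max() - fit + 1e-12
                probs /= probs.sum()
                order = rng.choice(n_bees, n_bees, p=probs)
            else:
                order = np.arange(n_bees)
            for i in order:
                k, j = rng.integers(n_bees), rng.integers(dim)
                cand = food[i].copy()
                cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                cand = np.clip(cand, lo, hi)
                f = loss(cand)
                if f < fit[i]:
                    food[i], fit[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
        scouts = trials > limit                           # abandoned sources are re-scouted
        food[scouts] = rng.uniform(lo, hi, (int(scouts.sum()), dim))
        fit[scouts] = [loss(x) for x in food[scouts]]
        trials[scouts] = 0
    best = int(np.argmin(fit))
    return food[best], fit[best]

# Example: minimise a simple quadratic "loss" standing in for feeder power loss.
x, f = abc_minimize(lambda x: float(np.sum((x - 0.3) ** 2)), bounds=[(-1, 1)] * 4)
print(np.round(x, 3), round(f, 6))
```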

Keywords: Artificial Bee Colony (ABC) algorithm, Distribution system, Loss reduction, Network reconfiguration.

367 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper is an attempt to describe some of the results obtained through a course of study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.

The first part of the paper concerns the results of the analysis of a data sample containing 8120 ee candidates used to reconstruct the mass of the Z particle, where each event has an ee pair with pT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.

The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd:YAG, λ = 266 nm) with LAr, through the study of the multi-photon ionization process as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10 (stat) ± 0.30 (sys)) × 10^-56 cm^4.

Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.

366 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings

Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez

Abstract:

Optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the characteristics of the seismic ground motions to which they are subjected, which are closely related to the type of soil where the structures are located. The influence of the type of soil on those load factors is analyzed in the present study. A methodology is employed that is useful for establishing optimal load factors that minimize the cost over the life cycle of the structure; as a restriction, it is required that the probability of structural failure be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7- and 10-story buildings) is located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12- and 16-story buildings) on the soft soil (Ts = 1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm zone are analyzed under the action of 10 strong seismic records, and those in the soft zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitude M = 6.9. The structural damage and the expected total costs corresponding to each group of buildings are then estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground is different from that for buildings located on soft soil.

Keywords: Life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil.

365 Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

Authors: Eggertson, E.C. Kapulla, R, Fokken, J, Prasser, H.M.

Abstract:

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles, these conditions lead to thermal fatigue. The resulting aging effect prompts investigation of how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A wire mesh sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed based on traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue and based on mixedness is devised. This definition shows why the thermal fatigue risk assessed using simple mixing layer growth can be misleading, and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.

Keywords: Concentration measurements, Mixedness, Stably stratified turbulent isokinetic mixing layer, Wire mesh sensor.

364 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal and a higher pH level, without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99%, and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
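The RSM workflow (fit a second-order polynomial to designed experiments, then run a response optimizer within the variable ranges) can be sketched as follows; the data are synthetic and the fitted surface is illustrative, not the paper's model.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Synthetic design points: [gas flow (L/min), temperature (°C), buffer (%), salinity (g/L)]
# with a made-up capture-efficiency response, used only to show the workflow.
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 10, 0.2, 25], [1.5, 50, 2.6, 197], size=(30, 4))
y = (90 - 5 * (X[:, 0] - 1.0) ** 2 - 0.002 * (X[:, 1] - 20) ** 2
     + 2 * X[:, 2] - 0.01 * X[:, 3] + rng.normal(0, 0.5, 30))

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)      # second-order response surface

# Response optimiser: maximise the fitted surface inside the experimental ranges.
neg_pred = lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0]
res = minimize(neg_pred, x0=[1.0, 25.0, 1.0, 100.0],
               bounds=[(0.5, 1.5), (10, 50), (0.2, 2.6), (25, 197)])
print("optimum settings:", np.round(res.x, 2), " predicted efficiency:", round(-res.fun, 1))
```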

Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.

363 A Survey of Various Algorithms for VLSI Physical Design

Authors: Rajine Swetha R, B. Shekar Babu, Sumithra Devi K.A

Abstract:

Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) integrated circuits. Electronic design automation is concerned with the design and production of VLSI systems. An important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design; the output of this step is the layout of a physical package that optimally, or near-optimally, realizes the logical representation. Physical design problems are combinatorial in nature and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less fit individuals tend to die out in the competition for basic necessities. This survival-of-the-fittest principle leads to evolution in species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but are able to find acceptable solutions for a wide range of problems. This survey paper presents a study of efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.

Keywords: Genetic Algorithms, Physical Design, VLSI.

362 Integration of Image and Patient Data, Software and International Coding Systems for Use in a Mammography Research Project

Authors: V. Balanica, W. I. D. Rae, M. Caramihai, S. Acho, C. P. Herbst

Abstract:

Mammographic images and data analysis to facilitate modelling or computer aided diagnostic (CAD) software development is best done using a common database that can handle various mammographic image file formats and relate these to other patient information. This would optimize the use of the data, as both primary reporting and enhanced information extraction for research could be performed from a single dataset. One desired improvement is the integration of DICOM file header information into the database, as an efficient and reliable source of supplementary patient information intrinsically available in the images. The purpose of this paper was to design a suitable database to link and integrate different types of image files and gather common information that can be further used for research purposes. An interface was developed for accessing, adding, updating, modifying and extracting data from the common database, enhancing the possible future application of the data in CAD processing. Future developments envisaged include the creation of an advanced search function to select image files based on descriptor combinations. Results can be further used for specific CAD processing and other research. A user-friendly configuration utility for importing the required fields from the DICOM files still needs to be designed.
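A minimal sketch of pulling DICOM header fields into a relational table so that image files can be linked to other patient data; the chosen tags, the file layout and the use of pydicom/SQLite are assumptions for illustration, not the interface described in the paper.

```python
import sqlite3
from pathlib import Path
import pydicom  # assumes pydicom is available

def index_dicom_folder(folder, db_path="mammo_index.db"):
    """Read a few header fields from every DICOM file in a folder into a table
    (hypothetical field choice: PatientID, Modality, StudyDate, ImageLaterality)."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS images
                   (path TEXT PRIMARY KEY, patient_id TEXT, modality TEXT,
                    study_date TEXT, laterality TEXT)""")
    for f in Path(folder).rglob("*.dcm"):
        ds = pydicom.dcmread(f, stop_before_pixels=True)   # header only, no pixel data
        con.execute("INSERT OR REPLACE INTO images VALUES (?,?,?,?,?)",
                    (str(f),
                     getattr(ds, "PatientID", None),
                     getattr(ds, "Modality", None),
                     getattr(ds, "StudyDate", None),
                     getattr(ds, "ImageLaterality", None)))
    con.commit()
    con.close()

# index_dicom_folder("/data/mammograms")   # hypothetical folder path
```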

Keywords: Database Integration, Mammogram Classification, Tumour Classification, Computer Aided Diagnosis.

361 Hepatoprotective Effect of Oleuropein against Cisplatin-Induced Liver Damage in Rat

Authors: Salim Cerig, Fatime Geyikoglu, Murat Bakir, Suat Colak, Merve Sonmez, Kubra Koc

Abstract:

Cisplatin (CIS) is one of the most effective anticancer drugs, but it is also toxic to cells through the activation of oxidative stress. Oleuropein (OLE) plays a key role against oxidative stress in mammalian cells, but the role of this antioxidant in the toxicity of CIS remains unknown. The aim of the present study was to investigate the efficacy of OLE against CIS-induced liver damage in male rats. With this aim, male Sprague Dawley rats were randomly assigned to one of eight groups: a control group; a group treated with 7 mg/kg/day CIS; groups treated with 50, 100 or 200 mg/kg/day OLE (i.p.); and groups treated with OLE for three days starting 24 h after the CIS injection. After 4 days of injections, serum was collected to assess the blood AST, ALT and LDH values. The liver tissues were removed for histological, biochemical (TAC, TOS and MDA) and genotoxic evaluations. In the CIS-treated group, the whole liver tissue showed significant histological changes. CIS also significantly increased both the incidence of oxidative stress and the induction of 8-hydroxy-deoxyguanosine (8-OH-dG). Moreover, the rats receiving CIS had abnormal liver function test results. However, these parameters returned to the normal range after administration of OLE for 3 days. Finally, OLE demonstrated acceptably high potential and was effective in attenuating CIS-induced liver injury. In this trial, the 200 mg/kg dose of OLE appeared to induce the optimal protective response.

Keywords: Antioxidant response, cisplatin, histology, liver, oleuropein, 8-OhdG.

360 Comparison of Router Intelligent and Cooperative Host Intelligent Algorithms in a Continuous Model of Fixed Telecommunication Networks

Authors: Dávid Csercsik, Sándor Imre

Abstract:

The performance of state-of-the-art worldwide telecommunication networks strongly depends on the efficiency of the applied routing mechanism. Game theoretical approaches to this problem offer new solutions. In this paper, a new continuous network routing model is defined to describe data transfer in fixed telecommunication networks with multiple hosts. The nodes of the network correspond to routers whose latency is assumed to be traffic dependent. We propose that the whole traffic of the network can be decomposed into a finite number of tasks, which belong to various hosts. To describe the different latency sensitivities, utility functions are defined for each task. The model is used to compare router intelligent and host intelligent routing methods, corresponding to various data transfer protocols. We analyze host intelligent routing as a transferable utility cooperative game with externalities. The main aim of the paper is to provide a framework in which the efficiency of various routing algorithms can be compared and the transferable utility game arising in the cooperative case can be analyzed.

Keywords: Routing, Telecommunication networks, Performance evaluation, Cooperative game theory, Partition function form games

359 An Archetype to Sustain Knowledge Management Systems through Intranet

Authors: B. T. Sayed, Nafaâ Jabeur, M. Aref

Abstract:

Creation and maintenance of knowledge management systems has been recognized as an important research area. A lack of accurate results from knowledge management systems limits an organization's ability to apply its knowledge management processes. This leads to a failure in getting the right information to the right people at the right time, followed by deficiencies in decision making processes. An intranet offers a powerful tool for communication and collaboration, presenting data and information, and the means to create and share knowledge, all in one easily accessible place. This paper proposes an archetype describing how a knowledge management system, with the support of intranet capabilities, could greatly increase the accuracy of capturing, storing and retrieving knowledge-based processes, thereby increasing the efficiency of the system. The system requires a critical mass of usage by its users for the intranet to function as a knowledge management system. This prototype would lead to the design of an application that supports the creation and maintenance of an effective knowledge management system through an intranet. The aim of this paper is to introduce an effective system to capture, store and distribute knowledge in a form that avoids the failures present in most existing systems. The methodology used in the system requires all employees in the organization to contribute fully in order to make the system a success. The system is still at an initial stage, and the authors are in the process of practically implementing the ideas mentioned here in order to produce satisfactory results.

Keywords: Knowledge Management Systems, Intranet, Methodology.

358 Investigation of Bubble Growth during Nucleate Boiling Using CFD

Authors: K. Jagannath, Akhilesh Kotian, S. S. Sharma, Achutha Kini U., P. R. Prabhu

Abstract:

The boiling process is characterized by the rapid formation of vapour bubbles at the solid-liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool for studying bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and its terminal velocity. The volume of fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each of the fluids throughout the domain. In the simulation, the bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. After the bubble is fully formed, it detaches from the surface and rises, accelerating due to the net balance between the buoyancy force and viscous drag. Finally, when these forces exactly balance each other, it attains a constant terminal velocity. The bubble detachment diameter and the terminal velocity of the bubble are captured by the monitor function provided in FLUENT. The detachment diameter and terminal velocity obtained are compared with established results based on the shape of the bubble. Good agreement is obtained between the simulation results and the established correlations.
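The terminal-velocity force balance mentioned above can be checked independently of the VOF simulation with a drag correlation; the sketch below assumes a rigid spherical bubble and the Schiller-Naumann drag law, with illustrative property values, and is not part of the paper's methodology.

```python
import numpy as np

def terminal_velocity(d=2e-3, rho_l=998.0, rho_g=0.6, mu_l=1.0e-3, g=9.81):
    """Terminal rise velocity from the buoyancy/drag balance
    (4/3)*g*d*(rho_l - rho_g) = Cd*rho_l*u^2, iterated because Cd depends on Re."""
    u = 0.1                                          # initial guess, m/s
    for _ in range(100):
        Re = rho_l * u * d / mu_l
        Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000 else 0.44
        u_new = np.sqrt(4.0 * g * d * (rho_l - rho_g) / (3.0 * Cd * rho_l))
        if abs(u_new - u) < 1e-8:                    # fixed-point iteration converged
            break
        u = u_new
    return u

print(f"terminal velocity ≈ {terminal_velocity():.3f} m/s")  # ~0.2 m/s for a 2 mm bubble
```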

Keywords: Bubble growth, computational fluid dynamics, detachment diameter, terminal velocity.
