Search results for: large array
Paper Count: 2566

2176 An Intelligent Approach of Rough Set in Knowledge Discovery Databases

Authors: Hrudaya Ku. Tripathy, B. K. Tripathy, Pradip K. Das

Abstract:

Knowledge Discovery in Databases (KDD) has evolved into an important and active area of research because of the theoretical challenges and practical applications associated with the problem of discovering (or extracting) interesting and previously unknown knowledge from very large real-world databases. Rough Set Theory (RST) is a mathematical formalism for representing uncertainty that can be considered an extension of classical set theory. It has been used in many different research areas, including those related to inductive machine learning and the reduction of knowledge in knowledge-based systems. One important concept related to RST is that of a rough relation. In this paper we present the current status of research on applying rough set theory to KDD, which will be helpful for handling the characteristics of real-world databases. The main aim is to show how rough sets and rough set analysis can be effectively used to extract knowledge from large databases.

Keywords: Data mining, Data tables, Knowledge discovery in database (KDD), Rough sets.
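
The core rough-set machinery such a survey builds on can be stated compactly: partition the records into indiscernibility classes over the chosen attributes, then form the lower and upper approximations of a target set. A minimal sketch follows, using a hypothetical decision table; it illustrates the standard definitions, not code from the paper.

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Group rows that are indistinguishable on the chosen attributes."""
    classes = defaultdict(set)
    for row_id, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(row_id)
    return list(classes.values())

def approximations(table, attributes, target):
    """Rough-set lower and upper approximations of the set `target`."""
    lower, upper = set(), set()
    for eq_class in indiscernibility_classes(table, attributes):
        if eq_class <= target:      # class lies entirely inside the target
            lower |= eq_class
        if eq_class & target:       # class overlaps the target
            upper |= eq_class
    return lower, upper

# Hypothetical decision table: records with two condition attributes.
table = {
    1: {"fever": "yes", "cough": "yes"},
    2: {"fever": "yes", "cough": "yes"},
    3: {"fever": "yes", "cough": "no"},
    4: {"fever": "no", "cough": "no"},
}
flu = {1, 3}                        # rows carrying the target decision
low, up = approximations(table, ["fever", "cough"], flu)
print(low, up)                      # {3} {1, 2, 3}; up - low is the boundary
```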

2175 Identification of Wideband Sources Using Higher Order Statistics in Noisy Environment

Authors: S. Bourennane, A. Bendjama

Abstract:

This paper deals with the localization of wideband sources. We develop a new approach for estimating the wideband source parameters. The method is based on the higher-order statistics of the recorded data, in order to eliminate the Gaussian components from the signals received on the various hydrophones; the sea-bottom noise is in fact regarded as Gaussian. Thanks to the coherent signal subspace algorithm, based on the cumulant matrix of the received data instead of the cross-spectral matrix, the wideband correlated sources are accurately located in a very noisy environment. We demonstrate the performance of the proposed algorithm on real data recorded during an underwater acoustics experiment.

Keywords: Higher-order statistics, high resolution array processing techniques, localization of acoustics sources, wide band sources.
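
The key step is replacing second-order statistics with fourth-order cumulants, which vanish for Gaussian processes and therefore suppress the Gaussian sea-bottom noise. Below is a minimal narrowband sketch, assuming zero-mean complex snapshots: it estimates one common "quadricovariance" slice of the fourth-order cumulant and runs a MUSIC-style scan on it. The wideband coherent-subspace focusing of the paper is not reproduced; the array model, reference-sensor choice, and synthetic BPSK sources are illustrative assumptions.

```python
import numpy as np

def quadricovariance(X):
    """Fourth-order cumulant slice C[i, j] = cum(x_i, x_j*, x_r, x_r*) with
    reference sensor r = 0, for zero-mean complex snapshots X (M x N).
    cum(a, b*, c, d*) = E[a b* c d*] - E[a b*]E[c d*]
                        - E[a d*]E[c b*] - E[a c]E[b* d*]."""
    M, N = X.shape
    r = X[0]
    R = X @ X.conj().T / N                            # E[x x^H]
    C = np.zeros((M, M), dtype=complex)
    for i in range(M):
        for j in range(M):
            m4 = np.mean(X[i] * X[j].conj() * r * r.conj())
            C[i, j] = (m4 - R[i, j] * R[0, 0] - R[i, 0] * R[0, j]
                       - np.mean(X[i] * r) * np.mean(X[j].conj() * r.conj()))
    return C

def music_on_cumulants(C, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC-style spatial spectrum computed from the cumulant matrix."""
    M = C.shape[0]
    _, vecs = np.linalg.eigh(C @ C.conj().T)          # ascending eigenvalues
    En = vecs[:, : M - n_sources]                     # noise subspace
    A = np.exp(2j * np.pi * d_over_lambda *
               np.outer(np.arange(M), np.sin(np.deg2rad(angles_deg))))
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

# Synthetic check: two non-Gaussian (BPSK) sources in Gaussian noise.
rng = np.random.default_rng(0)
M, N = 8, 4000
A = np.exp(2j * np.pi * 0.5 *
           np.outer(np.arange(M), np.sin(np.deg2rad([-20, 30]))))
X = A @ np.sign(rng.standard_normal((2, N))) + 0.5 * (
    rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
angles = np.arange(-90.0, 90.5, 1.0)
spec = music_on_cumulants(quadricovariance(X), 2, angles)
print(angles[np.argsort(spec)[-2:]])                  # peaks near -20 and 30
```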

2174 Improving Convergence of Parameter Tuning Process of the Additive Fuzzy System by New Learning Strategy

Authors: Thi Nguyen, Lee Gordon-Brown, Jim Peterson, Peter Wheeler

Abstract:

An additive fuzzy system comprising m rules with n inputs and p outputs in each rule has at least m(2n + 2p + 1) parameters to be tuned. The system consists of a large number of if-then fuzzy rules and takes a long time to tune its parameters, especially when the number of training data samples is large. In this paper, a new learning strategy is investigated to cope with this obstacle. Parameters that tend toward constant values during the learning process are fixed and are no longer tuned for the remainder of the learning time. Experiments based on applications of the additive fuzzy system to function approximation demonstrate that the proposed approach reduces the learning time and hence improves convergence speed considerably.

Keywords: Additive fuzzy system, improving convergence, parameter learning process, unsupervised learning.
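
The strategy itself is simple to state: monitor each parameter's update magnitude, and once it has stayed below a tolerance for several consecutive epochs, freeze that parameter and exclude it from further tuning. A minimal gradient-descent sketch follows; the learning rate, tolerance, patience, and the toy objective are hypothetical stand-ins for the fuzzy system's actual parameter-tuning loop.

```python
import numpy as np

def train_with_freezing(params, grad_fn, lr=0.01, epochs=200,
                        tol=1e-4, patience=5):
    """Gradient descent that stops tuning parameters which have stabilised:
    once a parameter's update stays below `tol` for `patience` consecutive
    epochs, it is frozen for the rest of training."""
    params = params.astype(float).copy()
    frozen = np.zeros(params.size, dtype=bool)
    stable = np.zeros(params.size, dtype=int)
    for _ in range(epochs):
        step = lr * grad_fn(params)
        step[frozen] = 0.0                  # frozen parameters keep their value
        params -= step
        stable = np.where(np.abs(step) < tol, stable + 1, 0)
        frozen |= stable >= patience
        if frozen.all():
            break
    return params, frozen

# Toy usage: minimise a separable quadratic whose coordinates converge at
# different speeds, so the fast ones freeze early.
scales = np.array([10.0, 1.0, 0.1])
grad = lambda p: 2 * scales * p
p_opt, frz = train_with_freezing(np.ones(3), grad)
print(p_opt, frz)
```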

2173 Multi-Objective Optimization in End Milling of Al-6061 Using Taguchi Based G-PCA

Authors: M. K. Pradhan, Mayank Meena, Shubham Sen, Arvind Singh

Abstract:

In this study, a multi-objective optimization for end milling of Al 6061 alloy is presented to provide better surface quality and a higher Material Removal Rate (MRR). The input parameters considered for the analysis are spindle speed, depth of cut and feed. The experiments were planned as per Taguchi's design of experiments, with an L27 orthogonal array. Grey Relational Analysis (GRA) has been used for transforming the multiple quality responses into a single response, and the weight of each performance characteristic is determined by employing Principal Component Analysis (PCA), so that their relative importance can be properly and objectively described. The results reveal that Taguchi-based G-PCA can effectively acquire the optimal combination of cutting parameters.

Keywords: Material Removal Rate, Surface Roughness, Taguchi Method, Grey Relational Analysis, Principal Component Analysis.
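
The G-PCA combination reduces to two steps: compute grey relational coefficients for each normalised response, then weight them by the squared loadings of the first principal component to obtain a single grade per run. A minimal sketch follows with a hypothetical nine-run, two-response data set (the study itself uses an L27 array); it illustrates the standard GRA and PCA formulas, not the paper's exact procedure.

```python
import numpy as np

def grey_relational_coefficients(Y, larger_better, zeta=0.5):
    """Y: (runs, responses). Normalise each response to [0, 1], then compute
    grey relational coefficients against the ideal sequence (all ones)."""
    Z = np.empty_like(Y, dtype=float)
    for j, lb in enumerate(larger_better):
        y = Y[:, j]
        span = y.max() - y.min()
        Z[:, j] = (y - y.min()) / span if lb else (y.max() - y) / span
    delta = np.abs(1.0 - Z)                      # distance to the ideal value
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

def pca_weights(G):
    """Weights from the first principal component: squared loadings sum to 1."""
    _, _, Vt = np.linalg.svd(G - G.mean(axis=0), full_matrices=False)
    w = Vt[0] ** 2
    return w / w.sum()

# Hypothetical runs: MRR (larger is better), roughness (smaller is better).
Y = np.array([[120, 1.8], [150, 2.1], [135, 1.6],
              [160, 2.4], [142, 1.9], [155, 2.0],
              [128, 1.5], [165, 2.6], [148, 2.2]])
G = grey_relational_coefficients(Y, larger_better=[True, False])
grade = G @ pca_weights(G)                       # single grey relational grade
print("best run:", int(np.argmax(grade)) + 1)
```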

2172 Multi-Element Synthetic Transmit Aperture Method in Medical Ultrasound Imaging

Authors: Ihor Trots, Yuriy Tasinkevych, Andrzej Nowicki, Marcin Lewandowski

Abstract:

The paper presents the multi-element synthetic transmit aperture (MSTA) method, in which a small number of elements transmit and the full array aperture receives, for medical ultrasound imaging. Compared to other methods, MSTA allows the system frame rate to be increased and provides the best compromise between penetration depth and lateral resolution. In the experiments, a 128-element linear transducer array with 0.3 mm pitch, excited by a burst pulse of 125 ns duration, was used. A comparison of 2D ultrasound images of a tissue-mimicking phantom obtained using the STA and the MSTA methods is presented to demonstrate the benefits of the latter approach. The results were obtained using an SA algorithm with transmit and receive signal correction based on a single-element directivity function.

Keywords: Beamforming, frame rate, synthetic aperture, ultrasound imaging
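
The reconstruction underlying both STA and MSTA is delay-and-sum: each transmit event yields a low-resolution image, and coherent summation over the transmit events restores resolution. The sketch below simulates a single point scatterer and beamforms it with a small transmit subset of a 128-element array; the pulse model, geometry, and image grid are hypothetical simplifications, not the paper's SA algorithm with directivity correction.

```python
import numpy as np

c, fs, f0 = 1540.0, 50e6, 4e6                     # sound speed, sampling, centre freq
pitch, n_elem = 0.3e-3, 128
x_el = (np.arange(n_elem) - n_elem / 2) * pitch   # element x-positions at z = 0
scat = np.array([0.0, 20e-3])                     # point target at 20 mm depth
tx_set = x_el[::16]                               # small transmit subset (MSTA)

def echo(tx_x, rx_x, t):
    """Echo of the point target for one tx/rx pair (Gaussian-cosine pulse)."""
    tof = (np.hypot(tx_x - scat[0], scat[1]) +
           np.hypot(rx_x - scat[0], scat[1])) / c
    return np.exp(-((t - tof) * 2 * f0) ** 2) * np.cos(2 * np.pi * f0 * (t - tof))

t = np.arange(0, 50e-6, 1 / fs)
xs = np.linspace(-5e-3, 5e-3, 41)
zs = np.linspace(15e-3, 25e-3, 81)
img = np.zeros((zs.size, xs.size))
for tx in tx_set:                                 # one low-res image per firing
    rf = np.array([echo(tx, rx, t) for rx in x_el])
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            tof = (np.hypot(tx - x, z) + np.hypot(x_el - x, z)) / c
            idx = np.clip((tof * fs).astype(int), 0, t.size - 1)
            img[iz, ix] += rf[np.arange(n_elem), idx].sum()   # coherent sum
print("peak depth: %.1f mm" % (zs[np.abs(img).max(axis=1).argmax()] * 1e3))
```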

2171 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems

Authors: Semih Demir, Anil Celebi

Abstract:

Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate at the same clock frequency. In this study, the aim was to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using the time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and comparing the results with the software model. Field tests are also performed using a real-time TDMoIP system.

Keywords: Clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization.
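
Adaptive clock recovery of this kind can be illustrated in a few lines: a PI controller steers the local clock frequency so that the locally accumulated timestamp tracks the timestamps carried in arriving packets, despite network jitter. The sketch below is a software stand-in for the FPGA design, with hypothetical packet rate, jitter level, and controller gains.

```python
import random

random.seed(7)

def recover_clock(n_packets=20000, f_nom=8000.0, f_remote=8000.8,
                  kp=20.0, ki=0.1):
    period = 1e-3                             # one packet per millisecond
    f_local, f_avg, local_ts, integ = f_nom, f_nom, 0.0, 0.0
    prev_arrival = 0.0
    for k in range(1, n_packets + 1):
        arrival = k * period + random.gauss(0.0, 20e-6)   # PSN delay jitter
        local_ts += f_local * (arrival - prev_arrival)    # local counter ticks
        prev_arrival = arrival
        err = f_remote * k * period - local_ts            # timestamp offset
        integ += err
        f_local = f_nom + kp * err + ki * integ           # PI correction
        f_avg += 0.001 * (f_local - f_avg)                # smoothed estimate
    return f_avg

print("recovered: %.2f Hz" % recover_clock())             # ~ 8000.8 Hz
```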

2170 Using Submerged Fermentation Method for Production of Extracellular Lipase by Aspergillus niger

Authors: Masoumeh Ghasemi, Afshin Farahbakhsh, Arman Farahbakhsh, Ali Asghar Safari

Abstract:

In this study, lipase production by Aspergillus niger has been investigated using submerged fermentation, with Kilka fish oil as the main substrate. The Taguchi method with an L9 orthogonal array design was used to investigate the effect of the parameters and their levels on lipase productivity. The optimum conditions for Kilka fish oil concentration, incubation temperature and pH were found to be 3 g/ml, 35°C and 7, respectively. The lipase activity obtained under the optimum conditions was 4.59 IU/ml. Comparing this with the productivity in an olive oil medium on the basis of the cost of each medium, using Kilka fish oil was found to be 84% more economical. Therefore, Kilka fish oil can be used as an economical and suitable substrate in lipase production and industrial applications.

Keywords: Lipase, Aspergillus niger, Kilka Fish oil, Submerge Fermentation method.

2169 Classification Based on Deep Neural Cellular Automata Model

Authors: Yasser F. Hassan

Abstract:

Deep learning is a branch of machine learning science with great achievements in research and applications. Cellular neural networks are regarded as an array of nonlinear analog processors, called cells, connected in a way that allows parallel computation. The paper discusses how to use a deep learning structure to represent a neural cellular automata model. The proposed learning technique in the cellular automata model is examined from the perspective of deep learning structure. A deep neural cellular automata system modifies each neuron based on the behavior of the individual cell and its decision, as a result of multi-level deep structure learning. The paper presents the architecture of the model and gives simulation results for the approach. Results from the implementation enrich the deep neural cellular automata system and shed light on the concept formulation of the model and its learning.

Keywords: Cellular automata, neural cellular automata, deep learning, classification.
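
The basic building block can be sketched directly: every cell applies one shared, tiny neural network to its neighbourhood state, and stacking several such update steps yields the multi-level "deep" structure discussed above. The sketch below uses random placeholder weights on a toroidal grid; in the paper's setting the weights would be learned for classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-cell network: 3x3 neighbourhood -> hidden layer -> new state.
W1 = rng.normal(size=(9, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))

def step(grid):
    """One neural-cellular-automaton update over the whole grid."""
    padded = np.pad(grid, 1, mode="wrap")              # toroidal neighbourhood
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            nb = padded[i:i + 3, j:j + 3].reshape(9)   # 3x3 neighbourhood
            h = np.tanh(nb @ W1 + b1)                  # shared hidden layer
            out[i, j] = np.tanh(h @ W2)[0]             # new cell state
    return out

grid = rng.normal(size=(16, 16))
for _ in range(4):                                     # four stacked updates
    grid = step(grid)
print(grid.shape, float(grid.mean()))
```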

2168 Design of the Large Dimension Cold Shield Cooled by G-M Cryocooler

Authors: Gong Jie, Yu Qianxu, Liu Min, Shan Weiwei

Abstract:

The design methods of a 20 K large-dimension cold shield, used for infrared radiation demarcating in space environment simulation tests, are introduced in this paper. The cold shield is cooled by five G-M cryocoolers, and its dimensions are the largest in our country. The cold shield installation and layout, and the compensator for contraction on cooling, are described in detail. The temperature distribution and cool-down time of the cold shield surface are also calculated and analysed. The design successfully resolves the difficulty of compensating for contraction on cooling. Test results show that the actual technical performance indicators of the cold shield met or exceeded the design requirements.

Keywords: Cold shield, G-M cryocooler, infrared radiometer demarcating, satellite, space environment simulation equipment.

2167 Mathematical Model for Progressive Phase Distribution of Ku-band Reflectarray Antennas

Authors: M. Y. Ismail, M. Inam, A. F. M. Zain, N. Misran

Abstract:

Progressive phase distribution is an important consideration in reflectarray antenna design, being required to form a planar wave in front of the reflectarray aperture. This paper presents a detailed mathematical model for determining the required reflection phase values of the individual elements of a reflectarray designed in the Ku-band frequency range. The proposed technique for obtaining the reflection phase can be applied to any geometrical design of the elements and is independent of the number of array elements. Moreover, the model also deals with the solution of reflectarray antenna designs with both centre and offset feed configurations. The theoretical modeling has also been implemented for reflectarrays constructed on 0.508 mm thick substrates of different dielectric materials. The results show an increase in the slope of the phase curve from 4.61°/mm to 22.35°/mm by varying the material properties.

Keywords: Mathematical modeling, Progressive phase distribution, Reflectarray antenna, Reflection phase.
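
In the standard formulation, the phase required at element i compensates the path from the feed, k0·d_i, minus the progressive term that steers the collimated beam toward (θ_b, φ_b). The sketch below evaluates this textbook formula on a hypothetical centre-fed 20 × 20 Ku-band aperture; the paper's element-level model relating geometry to achievable reflection phase is not reproduced.

```python
import numpy as np

c = 3e8
f = 12e9                                    # Ku-band design frequency
k0 = 2 * np.pi * f / c                      # free-space wavenumber

n, dx = 20, 0.0125                          # 20 x 20 elements, lambda/2 spacing
xi = (np.arange(n) - (n - 1) / 2) * dx
X, Y = np.meshgrid(xi, xi)
feed = np.array([0.0, 0.0, 0.2])            # centre feed 0.2 m above aperture

theta_b, phi_b = np.deg2rad(0.0), np.deg2rad(0.0)   # broadside beam
d = np.sqrt((X - feed[0]) ** 2 + (Y - feed[1]) ** 2 + feed[2] ** 2)
phase = k0 * (d - (X * np.cos(phi_b) + Y * np.sin(phi_b)) * np.sin(theta_b))
phase = np.mod(phase, 2 * np.pi)            # required reflection phase (rad)
print(np.rad2deg(phase[:3, :3]).round(1))   # corner of the phase map (deg)
```

An offset feed is handled by the same formula: only the feed position vector changes, which alters the path lengths d.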

2166 Research on the Protection and Reuse Model of Historical Buildings in Chinese Airports

Authors: Jie Ouyang, Chen Nie

Abstract:

China constructed a large number of military and civilian airports before and after World War II, and after the ravages of the two World Wars began large-scale repair, reconstruction or relocation of airports. The airports' historical areas and their historical buildings, such as terminals, hangars, and towers, have been given different protection strategies and reuse strategies. Based on a judgment of the value of historical airport buildings, this paper studies these different protection and reuse strategies. The protection and reuse models of historical buildings are classified along three dimensions: the airport historical area, the airport historical building complex, and its individual buildings. These are combined with specific examples to discuss and summarize the technical characteristics, protection strategies and successful experiences of the different modes of protection and reuse of historical airport areas and buildings.

Keywords: Airport, airport area, historic airport building, protection, reuse model.

2165 Reducing the Number of Constraints in Non-Safe Petri Net

Authors: M. Zareiee, A. Dideban

Abstract:

This paper addresses the problem of forbidden states in non-safe Petri nets. To prevent the system from entering the forbidden states, some linear constraints can be assigned to them; these constraints can then be enforced on the system using control places. But when the number of constraints is large, a large number of control places must be added to the model of the system, which complicates it. There are methods for reducing the number of constraints in safe Petri nets, but there is no systematic method for non-safe Petri nets. In this paper we propose a method for reducing the number of constraints in non-safe Petri nets which is based on solving an integer linear programming problem.

Keywords: Discrete event system, supervisory control, Petri net, constraint.
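
The flavour of the reduction can be shown with a small integer linear program: find one weighted constraint w·m ≤ k that every authorized marking satisfies and every forbidden marking violates, so that a single control place can replace several. The sketch below uses the PuLP modelling library with hypothetical marking sets of a three-place non-safe net (token counts above 1 are allowed); it illustrates the idea, not the paper's exact formulation.

```python
import pulp

# Hypothetical authorized and forbidden markings of a three-place net.
authorized = [(2, 0, 1), (1, 1, 0), (0, 2, 1)]
forbidden = [(2, 2, 0), (0, 3, 1)]

prob = pulp.LpProblem("single_replacement_constraint", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{i}", lowBound=0, upBound=10, cat="Integer")
     for i in range(3)]
k = pulp.LpVariable("k", lowBound=0, upBound=30, cat="Integer")

prob += pulp.lpSum(w) + k                        # prefer small coefficients
for m in authorized:                             # must remain permitted
    prob += pulp.lpSum(wi * mi for wi, mi in zip(w, m)) <= k
for m in forbidden:                              # must be excluded
    prob += pulp.lpSum(wi * mi for wi, mi in zip(w, m)) >= k + 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in w], k.value())         # one replacement constraint
```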

2164 Development of Position Changing System for Obstructive Sleep Apnea Patient using HRV

Authors: Soo-Young Ye, Dong-Hyun Kim

Abstract:

Obstructive sleep apnea can be cured in 70 to 80 percent of patients with just a posture correction. The most important prerequisite for this is the detection of obstructive sleep apnea, which can be performed through heart rate variability (HRV) analysis using power spectral density (PSD) analysis. After the HRV analysis, the current position information is needed in order to correct the posture. An array of pressure sensors was used to obtain this position information from the subject. In addition, an air cylinder corrected the position of the subject by lifting the bed. With this system, the subject's position can be changed without waking them during sleep. Polysomnographic recordings were obtained from 10 patients. The HRV analysis showed that the NLF and the LF/HF ratio increased, while the NHF decreased, during OSA; position changes had to be performed during these periods.

Keywords: Obstructive sleep apnea, Heart rate variability, Air cylinder, PSD, RR interval, ANS
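
The frequency-domain HRV features used above come from a standard pipeline: resample the RR-interval series onto an even time grid, estimate the power spectral density, and integrate the LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands to obtain the normalised powers and the LF/HF ratio. A minimal sketch follows, with synthetic RR data standing in for the polysomnographic recordings.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rng = np.random.default_rng(1)
rr = 0.9 + 0.05 * rng.standard_normal(300)        # RR intervals (s), synthetic
t = np.cumsum(rr)                                 # beat occurrence times

fs = 4.0                                          # even resampling rate (Hz)
tt = np.arange(t[0], t[-1], 1 / fs)
tachogram = interp1d(t, rr, kind="cubic")(tt)     # evenly sampled RR series

f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
df = f[1] - f[0]
lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df     # low-frequency power
hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df     # high-frequency power
nlf, nhf = lf / (lf + hf), hf / (lf + hf)         # normalised LF and HF
print("nLF %.2f  nHF %.2f  LF/HF %.2f" % (nlf, nhf, lf / hf))
```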

2163 Planar Plasmonic Terahertz Waveguides for Sensor Applications

Authors: Maidul Islam, Dibakar Roy Chowdhury, Gagan Kumar

Abstract:

We investigate the sensing capabilities of a planar plasmonic THz waveguide. The waveguide comprises a one-dimensional array of periodically arranged subwavelength-scale corrugations, in the form of rectangular dimples, in order to ensure the plasmonic response. The THz transmission of the waveguide is observed with a polyimide thin film filling the dimples. The refractive index of the polyimide film is varied to examine various sensing parameters, such as the frequency shift, sensitivity and Figure of Merit (FoM) of the fundamental plasmonic resonance supported by the waveguide. In an effort to improve the sensing characteristics, we also examine the sensing capabilities of a plasmonic waveguide having V-shaped corrugations and compare the results with those of the rectangular dimples. The proposed study could be significant in developing new terahertz sensors with improved sensitivity utilizing plasmonic waveguides.

Keywords: Terahertz, plasmonic, sensor, sub-wavelength structures.
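
The three quantities being compared have compact standard definitions: the resonance shift Δf for a given index change Δn, the sensitivity S = Δf/Δn, and FoM = S/FWHM of the resonance. A tiny sketch with illustrative numbers (not the paper's measured values):

```python
def sensitivity(f_shift_ghz, dn):
    """Resonance shift per refractive-index unit (GHz/RIU)."""
    return f_shift_ghz / dn

def figure_of_merit(s_ghz_per_riu, fwhm_ghz):
    """FoM normalises sensitivity by the resonance linewidth (1/RIU)."""
    return s_ghz_per_riu / fwhm_ghz

s = sensitivity(f_shift_ghz=18.0, dn=0.4)    # hypothetical shift for dn = 0.4
print(s, figure_of_merit(s, fwhm_ghz=25.0))
```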

2162 Enhancement of MIMO H2S Gas Sweetening Separator Tower Using Fuzzy Logic Controller Array

Authors: Muhammad M. A. S. Mahmoud

Abstract:

The natural gas sweetening process is a controlled process that must be carried out at maximum efficiency and with the highest quality. In this work, due to the complexity and non-linearity of the process, the H2S gas separation and the intelligent fuzzy controller used to enhance the process are simulated in MATLAB/Simulink. A new fuzzy control design for the gas separator is discussed in this paper. The design is based on the utilization of linear state estimation to generate the internal knowledge base that stores input-output pairs. The obtained input/output pairs are then used to design a feedback fuzzy controller. The proposed closed-loop fuzzy control system maintains the asymptotic stability of the system while enhancing its time response, to achieve better control of the concentration of the output gas from the tower. Simulation studies are carried out to illustrate the gas separator system performance.

Keywords: Gas separator, gas sweetening, intelligent controller, fuzzy control.

2161 Multidimensional Data Mining by Means of Randomly Travelling Hyper-Ellipsoids

Authors: Pavel Y. Tabakov, Kevin Duffy

Abstract:

This study presents a new approach to automatic data clustering and classification problems in large and complex databases which, at the same time, derives specific types of explicit rules describing each cluster. The method works well in both sparse and dense multidimensional data spaces. The members of the data space can be of the same nature or represent different classes. A number of N-dimensional ellipsoids are used for enclosing the data clouds. Due to the geometry of an ellipsoid and its free rotation in space, the detection of clusters becomes very efficient. The method is based on genetic algorithms that are used for the optimization of the location, orientation and geometric characteristics of the hyper-ellipsoids. The proposed approach can serve as a basis for the development of general knowledge systems for discovering hidden knowledge and unexpected patterns and rules in various large databases.

Keywords: Classification, clustering, data mining, genetic algorithms.
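
A 2-D sketch of the enclosing-ellipsoid idea: an ellipsoid is parameterised by centre c, semi-axes (a, b) and rotation angle t, and a point x lies inside when the rotated, axis-scaled quadratic form is at most 1. A toy evolutionary loop tunes these parameters to enclose many points with a small area; the real method is N-dimensional and uses a full genetic algorithm, so the data, fitness weighting, and selection scheme here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def inside(points, c, a, b, t):
    """Membership test for a rotated 2-D ellipse."""
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    q = (points - c) @ R                        # rotate into the axis frame
    return (q[:, 0] / a) ** 2 + (q[:, 1] / b) ** 2 <= 1.0

def fitness(genome, points):
    """Reward enclosed points, penalise ellipse area."""
    c, (a, b, t) = genome[:2], genome[2:]
    covered = inside(points, c, abs(a) + 1e-9, abs(b) + 1e-9, t).sum()
    return covered - 2.0 * abs(a) * abs(b)

pts = rng.normal([3, 1], [1.0, 0.4], size=(200, 2))    # one elongated cloud
pop = rng.normal(0, 2, size=(60, 5))                   # genomes: c, a, b, t
for _ in range(150):
    scores = np.array([fitness(g, pts) for g in pop])
    parents = pop[np.argsort(scores)[-15:]]            # truncation selection
    pop = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.1, (60, 5))
best = pop[np.array([fitness(g, pts) for g in pop]).argmax()]
print(best.round(2))                                   # centre near (3, 1)
```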

2160 Particle Size Effect on Shear Strength of Granular Materials in Direct Shear Test

Authors: R. Alias, A. Kasa, M. R. Taha

Abstract:

The effect of particle size on the shear strength of granular materials is investigated using direct shear tests. Small direct shear tests (60 mm by 60 mm by 24 mm deep) were conducted on particles passing a sieve with an opening size of 2.36 mm, while particles passing the standard 20 mm sieve were tested using large direct shear tests (300 mm by 300 mm by 200 mm deep). The large and small direct shear tests were carried out using the same shearing rate of 0.09 mm/min and the same normal stresses of 100, 200 and 300 kPa. The results show that the peak and residual shear strengths increase as the particle size increases.

Keywords: Particle size, shear strength, granular material, direct shear test.
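
Direct shear results of this kind are normally summarised by fitting the Mohr-Coulomb envelope τ = c + σ·tan(φ) to the peak shear stresses measured at each normal stress. A minimal sketch follows; the stress values are illustrative placeholders, not the paper's data.

```python
import numpy as np

sigma = np.array([100.0, 200.0, 300.0])          # normal stresses (kPa)
tau = np.array([78.0, 149.0, 221.0])             # peak shear stresses (kPa)

slope, intercept = np.polyfit(sigma, tau, 1)     # linear envelope fit
phi = np.degrees(np.arctan(slope))               # friction angle
print("c = %.1f kPa, phi = %.1f deg" % (intercept, phi))
```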

2159 A Genetic Algorithm for Clustering on Image Data

Authors: Qin Ding, Jim Gasvoda

Abstract:

Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique normally has a long running time in terms of input set size. This paper proposes an efficient genetic algorithm for clustering on very large data sets, especially on image data sets. The genetic algorithm uses the most time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.

Keywords: Clustering, data mining, genetic algorithm, image data.
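
A compact sketch of genetic clustering: each chromosome encodes k cluster centres, fitness is the negative within-cluster sum of squared errors, and evolution proceeds by truncation selection, uniform crossover and Gaussian mutation. The data and GA settings below are hypothetical; the paper's preprocessing and time-efficiency techniques for large image data sets are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def sse(centres, X):
    """Within-cluster sum of squared errors for one centre set."""
    d = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def ga_cluster(X, k=3, pop_size=40, gens=100, sigma=0.05):
    dim = X.shape[1]
    pop = X[rng.integers(0, len(X), (pop_size, k))]       # seed from data
    for _ in range(gens):
        fit = np.array([-sse(c, X) for c in pop])
        keep = pop[np.argsort(fit)[-pop_size // 2:]]       # truncation selection
        a = keep[rng.integers(0, len(keep), pop_size)]
        b = keep[rng.integers(0, len(keep), pop_size)]
        mask = rng.random((pop_size, k, 1)) < 0.5          # uniform crossover
        pop = np.where(mask, a, b) + rng.normal(0, sigma, (pop_size, k, dim))
    return pop[np.array([-sse(c, X) for c in pop]).argmax()]

X = np.concatenate([rng.normal(m, 0.1, (100, 2))           # three blobs
                    for m in ([0, 0], [1, 1], [0, 1])])
print(ga_cluster(X).round(2))                              # recovered centres
```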

2158 Lightweight Mirrors for Space X-Ray Telescopes

Authors: M. Mika, L. Pina, M. Landova, L. Sveda, R. Havlikova, V. Marsikova, R. Hudec, A. Inneman

Abstract:

Future astronomical projects on large space x-ray imaging telescopes require novel substrates and technologies for the construction of their reflecting mirrors. The mirrors must be lightweight and precisely shaped to achieve a large collecting area with high angular resolution, and the new materials and technologies must be cost-effective. Currently, the most promising materials are glass or silicon foils. We focused on precisely shaping these foils by a thermal forming process. We studied free and forced slumping in the temperature region of hot plastic deformation and compared the shapes obtained by the different slumping processes. We measured the shapes and the surface quality of the foils. In the experiments, we varied both the heat-treatment temperature and time, following our experimental design. The obtained data and relations can be used for modeling and optimizing the thermal forming procedure.

Keywords: Glass, silicon, thermal forming, x-ray

2157 Optimal Performance of Plastic Extrusion Process Using Fuzzy Goal Programming

Authors: Abbas Al-Refaie

Abstract:

This study optimized the performance of the plastic extrusion process for drip irrigation pipes using fuzzy goal programming. Two responses were of main interest: roll thickness and hardness. Four main process factors were studied, and the L18 array was used for the experimental design. Individuals and moving-range control charts were used to assess the stability of the process, while the process capability index was used to assess process performance. Confirmation experiments were conducted at the combination of optimal factor settings obtained by fuzzy goal programming. The results revealed that the process capability was improved significantly, from -1.129 to 0.8148 for roll thickness and from 0.0965 to 0.714 for hardness. Such improvement results in considerable savings in production and quality costs.

Keywords: Fuzzy goal programming, extrusion process, process capability, irrigation plastic pipes.
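
The process capability index referenced above has the usual form Cpk = min((USL − μ)/(3σ), (μ − LSL)/(3σ)), which is negative when the process mean falls outside the specification limits (as with the initial -1.129). A tiny sketch with illustrative limits and statistics, not the study's data:

```python
def cpk(mean, std, lsl, usl):
    """Process capability index for two-sided specification limits."""
    return min((usl - mean) / (3 * std), (mean - lsl) / (3 * std))

print(cpk(mean=1.02, std=0.01, lsl=0.97, usl=1.05))   # hypothetical thickness
```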

2156 Optimization of Asphalt Binder Modified with PP/SBS/Nanoclay Nanocomposite using Taguchi Method

Authors: Abolghasem Yazdani, Sara Pourjafar

Abstract:

This study applied the L16 orthogonal array of the Taguchi method to determine the optimized polymeric nanocomposite asphalt binder. Three control factors are defined: polypropylene plastomer (PP), styrene-butadiene-styrene elastomer (SBS) and Nanoclay. Four concentration levels are introduced for the prepared asphalt binder samples, and all samples were prepared with 4.5% bitumen 60/70 content. Compressive strength tests were carried out to define the optimized sample via the QUALITEK-4 software. Concentrations of 3% SBS, 5% PP and 1.5% Nanoclay define the optimized nanocomposite asphalt binder. The confirmation compressive strength and softening point tests showed that modification of asphalt binders with this method improved their compressive strength and softening point by up to 55%.

Keywords: modified asphalt, Polypropylene, SBS, Nanoclay, Taguchi method
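
Taguchi analysis of this kind ranks factor levels by the signal-to-noise ratio of the response; for compressive strength the larger-the-better form applies, S/N = −10·log10(mean(1/y²)). A tiny sketch with placeholder replicate values, not the study's measurements:

```python
import numpy as np

def sn_larger_better(y):
    """Larger-the-better Taguchi signal-to-noise ratio (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

print(sn_larger_better([41.2, 39.8, 42.5]))   # S/N for one L16 run
```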

2155 Wavelet-Based Despeckling of Synthetic Aperture Radar Images Using Adaptive and Mean Filters

Authors: Syed Musharaf Ali, Muhammad Younus Javed, Naveed Sarfraz Khattak

Abstract:

In this paper we introduce a new wavelet-based algorithm for speckle reduction in synthetic aperture radar images, which uses a combination of the undecimated wavelet transformation, a Wiener filter (an adaptive filter) and a mean filter. Furthermore, instead of using existing thresholding techniques such as SURE shrinkage, Bayesian shrinkage, universal thresholding, normal thresholding, visu thresholding, and soft and hard thresholding, we use brute-force thresholding, which iteratively runs the whole algorithm for each possible candidate threshold value, saves each result in an array, and finally selects the threshold value that gives the best result. This makes it slow compared to the existing thresholding techniques, but it gives the best results obtainable under the given speckle-reduction algorithm.

Keywords: Brute force thresholding, directional smoothing, direction dependent mask, undecimated wavelet transformation.
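
A minimal sketch of the brute-force idea: run the same wavelet denoiser for every candidate threshold, store the results, and keep the threshold that scores best. Here the score is MSE against a known clean image on synthetic multiplicative noise; on real SAR images a no-reference quality metric would be needed. A single-level DWT from the PyWavelets package stands in for the paper's undecimated transform and its adaptive/mean filtering stages.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
clean = np.kron(rng.random((8, 8)), np.ones((16, 16)))        # blocky test image
noisy = clean * (1 + 0.3 * rng.standard_normal(clean.shape))  # speckle-like noise

def denoise(img, thr):
    """One-level DWT, soft-threshold the detail subbands, reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "db4")
    cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), "db4")

candidates = np.linspace(0.0, 1.0, 50)
results = [denoise(noisy, thr) for thr in candidates]         # one run per threshold
mses = [np.mean((r[:128, :128] - clean) ** 2) for r in results]
best = int(np.argmin(mses))
print("best threshold %.3f, MSE %.5f" % (candidates[best], mses[best]))
```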

2154 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena for mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, numerical average molecular weight and gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.
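
The adaptive sampling idea can be illustrated in one dimension: start from a coarse uniform design, then repeatedly add a point inside the interval whose endpoints show the largest output variation, so that sharp regions (such as the gel effect) are refined. The target function below is a hypothetical stand-in for the process response.

```python
import numpy as np

f = lambda x: np.tanh(10 * (x - 0.6)) + 0.1 * x   # sharp transition near 0.6

x = list(np.linspace(0, 1, 8))                    # coarse initial design
for _ in range(24):
    xs = np.sort(np.array(x))
    gaps = np.abs(np.diff(f(xs)))                 # output variation per interval
    i = int(np.argmax(gaps))
    x.append(0.5 * (xs[i] + xs[i + 1]))           # refine the steepest region
print(np.round(np.sort(x), 3))                    # points cluster near 0.6
```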

2153 Maximum Power Point Tracking by ANN Controller for a Standalone Photovoltaic System

Authors: K. Ranjani, M. Raja, B. Anitha

Abstract:

In this paper, an ANN controller for maximum power point tracking of photovoltaic (PV) systems is proposed, and PV modeling is discussed. Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by continuously tracking the maximum power point. The ANN controller with a hill-climbing algorithm offers fast and accurate convergence to the maximum operating point during steady-state and varying weather conditions, compared to conventional hill-climbing. The proposed algorithm gives good maximum-power operation of the PV system. The simulation results obtained are presented and compared with those of the conventional hill-climbing algorithm, and show the effectiveness of the proposed technique.

Keywords: Artificial neural network (ANN), hill-climbing, maximum power-point tracking (MPPT), photovoltaic.
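
The conventional hill-climbing (perturb-and-observe) baseline that the ANN controller is compared against fits in a few lines: perturb the operating voltage and keep moving in the direction that increases power, reversing when power drops. The PV curve below is a simple placeholder model, not a full module characteristic.

```python
def pv_power(v):
    """Toy PV power curve: current falls off sharply near open circuit."""
    i = max(0.0, 5.0 * (1.0 - (v / 42.0) ** 7))
    return v * i

def hill_climb_mppt(v=20.0, dv=0.5, steps=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse perturbation
            direction = -direction
        p_prev = p
    return v, p_prev                   # oscillates around the true MPP

print("V_mpp ~ %.1f V, P ~ %.1f W" % hill_climb_mppt())
```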

2152 Gravitino Dark Matter in (nearly) SLagy D3/D7 m-Split SUSY

Authors: Mansi Dhuria, Aalok Misra

Abstract:

In the context of large volume Big Divisor (nearly) SLagy D3/D7 μ-split SUSY [1], after an explicit identification of the first generation of SM leptons and quarks with the fermionic superpartners of four Wilson line moduli, we discuss the identification of the gravitino as a potential dark matter candidate: we explicitly calculate the decay lifetimes of the gravitino (LSP) to be greater than the age of the universe, and the lifetimes of the decays of the co-NLSPs (the first-generation squark/slepton and a neutralino) to the LSP (the gravitino) to be small enough to respect BBN constraints. Interested in a non-thermal production mechanism for the gravitino, we evaluate the relic abundance of the gravitino LSP in terms of that of the co-NLSPs by evaluating their (co-)annihilation cross sections, and hence show that the former satisfies the requirement for a potential dark matter candidate. We also show that it is possible to obtain a 125 GeV light Higgs in our setup.

Keywords: Split Supersymmetry, Large Volume Swiss-Cheese Calabi-Yau's, Dark Matter, (N)LSP decays, relic abundance.

2151 MONARC: A Case Study on Simulation Analysis for LHC Activities

Authors: Ciprian Dobre

Abstract:

The scale, complexity and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing this data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework, as a design and modeling tool for large-scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capabilities of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis within the CMS experiment.

Keywords: Modeling and simulation, evaluation, large scale distributed systems, LHC experiments, CMS.

2150 Strengthening of RC Beams with Large Openings in Shear by CFRP Laminates: 2D Nonlinear FE Analysis

Authors: S.C. Chin, N. Shafiq, M.F. Nuruddin

Abstract:

To date, theoretical studies concerning the Carbon Fiber Reinforced Polymer (CFRP) strengthening of RC beams with openings have been rather limited. In addition, the various numerical analyses presented so far have effectively simulated the behaviour of solid beams strengthened by FRP material. In this paper, a two-dimensional nonlinear finite element analysis is presented and validated against the laboratory test results of six RC beams. All beams had the same rectangular cross-section geometry and were loaded under four-point bending. The crack patterns of the finite element models show good agreement with the crack patterns of the experimental beams. The load-midspan deflection curves of the finite element models were stiffer than those of the experimental beams; a possible reason is the perfect-bond assumption used between the concrete and the steel reinforcement.

Keywords: CFRP, large opening, RC beam, strengthening

2149 Resource-Constrained Heterogeneous Workflow Scheduling Algorithm for Heterogeneous Computing Clusters

Authors: Lei Wang, Jiahao Zhou

Abstract:

The development of heterogeneous computing clusters provides robust computational support for large-scale workflows, commonly seen in domains such as scientific computing and artificial intelligence. However, the tasks within these large-scale workflows are increasingly heterogeneous, exhibiting varying demands on computing resources. This shift necessitates the integration of resource-constrained considerations into the workflow scheduling problem on heterogeneous computing platforms. In this study, we propose a scheduling algorithm designed to minimize the makespan under heterogeneous constraints, employing a greedy strategy to effectively address the scheduling challenges posed by heterogeneous workflows. We evaluate the performance of the proposed algorithm using randomly generated heterogeneous workflows and a corresponding heterogeneous computing platform. The experimental results demonstrate a 15.2% improvement in performance compared to existing state-of-the-art methods.

Keywords: Heterogeneous Computing, Workflow Scheduling, Constrained Resources, Minimal Makespan.
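
The greedy strategy can be condensed to: process tasks in a dependency-respecting order and place each one on the machine that satisfies its resource requirement and yields the earliest finish time. The sketch below uses a hypothetical four-task workflow on two heterogeneous machines with a memory constraint; it illustrates the scheduling pattern, not the paper's algorithm or its 15.2% result.

```python
# task: (work, memory needed, dependencies)
tasks = {
    "a": (4.0, 2, []), "b": (3.0, 8, ["a"]),
    "c": (6.0, 2, ["a"]), "d": (2.0, 4, ["b", "c"]),
}
machines = {"m1": (2.0, 16), "m2": (1.0, 4)}     # machine: (speed, memory)

finish, free_at, placed = {}, {m: 0.0 for m in machines}, {}
for t in ["a", "b", "c", "d"]:                   # a valid topological order
    work, mem, deps = tasks[t]
    ready = max([finish[d] for d in deps], default=0.0)
    best = None
    for m, (speed, cap) in machines.items():
        if mem > cap:                            # resource constraint
            continue
        f = max(ready, free_at[m]) + work / speed
        if best is None or f < best[1]:
            best = (m, f)                        # earliest-finish machine
    placed[t], finish[t] = best
    free_at[best[0]] = best[1]
print(placed, "makespan = %.1f" % max(finish.values()))
```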

2148 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject the incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which is integrated into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC.

2147 Learning Monte Carlo Data for Circuit Path Length

Authors: Namal A. Senanayake, A. Beg, Withana C. Prasad

Abstract:

This paper analyzes the patterns of Monte Carlo data for a large number of variables and minterms, in order to characterize circuit path length behavior. We propose models that are determined by a training process on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model was created using a feed-forward neural network (NN) modeling methodology. Experimental results for ISCAS benchmark circuits show an RMS error of 0.102 for the shortest-path-length complexity estimation predicted by the NN model (NNM). Use of such a model can help reduce the time complexity of very large scale integrated (VLSI) circuits and the related computer-aided design (CAD) tools that use BDDs.

Keywords: Monte Carlo data, Binary decision diagrams, Neural network modeling, Shortest path length estimation.
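
The modelling step is a plain feed-forward regression: train a network on (circuit features → shortest path length) pairs and report RMS error on held-out data. A minimal stand-in follows using scikit-learn, with hypothetical features (variable count, minterm count) and a synthetic target in place of the paper's Monte Carlo BDD data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

X = rng.integers(4, 64, size=(500, 2)).astype(float)   # [n_vars, n_minterms]
y = 0.8 * np.log2(X[:, 0] * X[:, 1]) + 0.05 * rng.standard_normal(500)

# Feed-forward NN regression with a train/test split.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                     random_state=0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("RMS error: %.3f" % np.sqrt(np.mean((pred - y[400:]) ** 2)))
```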
