Search results for: Kernel Principal Component Analysis
9013 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. The paper discusses a comprehensive development framework that covers the SSD end to end, from design to assembly, in-line inspection and in-line testing, and is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation focused on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, reliability prediction, specifically a solder joint simulation, is established. The SSDs are stratified into Non-Operating and Operating tests with focus on solder joint reliability and connectivity/component latent failures, prevented through design intervention and contained through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it enters the monitor phase, in which Design for Assembly (DFA) rules are updated. At this stage, the design changes, process, and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing product margin, which increases customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.
9012 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Efficient matrix-vector multiplication with diagonal sparse matrices is pivotal in a multitude of computational domains, ranging from scientific simulations to machine learning workloads. When encoded in the conventional Diagonal (DIA) format, these matrices often induce computational overheads due to extensive zero-padding and non-linear memory accesses, which can hamper the computational throughput, and elevate the usage of precious compute and memory resources beyond necessity. The ’DIA-Adaptive’ approach, a methodological enhancement introduced in this paper, confronts these challenges head-on by leveraging the advanced parallel instruction sets embedded within Machine Learning Units (MLUs). This research presents a thorough analysis of the DIA-Adaptive scheme’s efficacy in optimizing Sparse Matrix-Vector Multiplication (SpMV) operations. The scope of the evaluation extends to a variety of hardware architectures, examining the repercussions of distinct thread allocation strategies and cluster configurations across multiple storage formats. A dedicated computational kernel, intrinsic to the DIA-Adaptive approach, has been meticulously developed to synchronize with the nuanced performance characteristics of MLUs. Empirical results, derived from rigorous experimentation, reveal that the DIA-Adaptive methodology not only diminishes the performance bottlenecks associated with the DIA format but also exhibits pronounced enhancements in execution speed and resource utilization. The analysis delineates a marked improvement in parallelism, showcasing the DIA-Adaptive scheme’s ability to adeptly manage the interplay between storage formats, hardware capabilities, and algorithmic design. The findings suggest that this approach could set a precedent for accelerating SpMV tasks, thereby contributing significantly to the broader domain of high-performance computing and data-intensive applications.
Keywords: Adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication.
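The DIA storage scheme discussed in the abstract above can be illustrated with a minimal, generic sketch (not taken from the paper): each stored diagonal is kept as an offset plus the values lying on it, and the multiply walks the diagonals instead of individual non-zeros. All names and the tridiagonal example are illustrative.

```python
import numpy as np

def dia_spmv(n, diagonals, x):
    """Multiply an n x n diagonal-sparse matrix by vector x.

    `diagonals` maps an offset k to a 1-D array `vals`, where
    vals[i] holds A[i, i + k] for every row i with 0 <= i + k < n.
    Unused entries of `vals` are the zero-padding the DIA format carries.
    """
    y = np.zeros(n)
    for k, vals in diagonals.items():
        # Row range for which the column index i + k stays inside the matrix.
        lo, hi = max(0, -k), min(n, n - k)
        y[lo:hi] += vals[lo:hi] * x[lo + k:hi + k]
    return y

# Tridiagonal example: main diagonal and its two neighbours.
n = 5
diags = {
    -1: np.full(n, -1.0),   # sub-diagonal (entry 0 unused / padded)
     0: np.full(n,  2.0),   # main diagonal
     1: np.full(n, -1.0),   # super-diagonal (last entry unused / padded)
}
x = np.arange(1.0, n + 1.0)
print(dia_spmv(n, diags, x))
```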
9011 Template-Based Object Detection through Partial Shape Matching and Boundary Verification
Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl
Abstract:
This paper presents a novel template-based method to detect objects of interest from real images by shape matching. To locate a target object that has a similar shape to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search for matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that transforms the whole template into a hypothetic boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothetic boundaries by checking each of them against image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of our proposed method.
Keywords: Object detection, shape matching, contour grouping.
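The step in which a matched contour fraction is used to estimate an affine transform for the whole template can be sketched generically as a least-squares fit over landmark correspondences. The code below is only an illustration of that idea, not the authors' implementation; the landmark coordinates are made up.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of matched landmark coordinates, N >= 3.
    Returns a 2x3 matrix M so that dst ~= [x, y, 1] @ M.T.
    """
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])                # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                # (2, 3)

def apply_affine(M, pts):
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ M.T

# Tiny example: template landmarks and their (noisy) matches in the image.
template = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
matched  = np.array([[2.0, 1.0], [4.0, 1.1], [4.1, 3.0], [2.1, 3.1]])
M = fit_affine(template, matched)
hypothetic_boundary = apply_affine(M, template)   # transform the whole template
print(hypothetic_boundary)
```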
9010 Fatigue Life of an Anti-Roll Bar of a Passenger Vehicle
Authors: J. Marzbanrad, A. Yadollahi
Abstract:
In the present paper, the fatigue life of an anti-roll bar component of a passenger vehicle is investigated using the ANSYS 11 software. A stress analysis is also carried out by the finite element technique to determine the highly stressed regions on the bar. An anti-roll bar is a suspension element used at the front, rear, or both ends of a car that reduces body roll by resisting any unequal vertical motion between the pair of wheels to which it is connected. As a first stage, fatigue damage models proposed by some well-known references and the corresponding assumptions are discussed and some enhancements are proposed. Then, fracture analysis of an anti-roll bar of an automobile is carried out. The analysed type of anti-roll bar is especially important, as many cases of fracture after about 100,000 km of travel under fatigue and fracture conditions have been reported. This paper determines the fatigue life of an anti-roll bar, which is then evaluated against experimental and analytical results from other researchers.
Keywords: Anti-roll bar, fracture, fatigue life, random loading.
9009 A Study on the Modeling and Analysis of an Electro-Hydraulic Power Steering System
Authors: Ji-Hye Kim, Sung-Gaun Kim
Abstract:
An electro-hydraulic power steering (EHPS) system for fuel consumption rate reduction and steering feel improvement is comprised of an ECU, which contains the logic that controls the steering system and the BLDC motor and produces the best-suited cornering force, the BLDC motor itself, a high-pressure pump integrated module, and the basic oil-hydraulic circuit of a commercial HPS system. An electro-hydraulic system can be studied in two ways: experimentally and by computer simulation. To obtain accurate results in an experimental study of an EHPS system, proper management of the real boundary conditions is necessary, which is a difficult task, and the accuracy of the experimental results depends on the preparation of the experimental setup and the accuracy of the data collection. Computer simulation gives accurate and reliable results if it is carried out with proper boundary conditions. Therefore, in this paper, each component of the EHPS was modeled, and the model-based analysis and control logic were designed using AMESim.
Keywords: Power steering system, electro-hydraulic power steering (EHPS) system, modeling of EHPS system, analysis modeling.
9008 Dynamic Response of Wind Turbines to Theoretical 3D Seismic Motions Taking into Account the Rotational Component
Authors: L. Hermanns, M.A. Santoyo, L. E. Quirós, J. Vega, J. M. Gaspar-Escribano, B. Benito
Abstract:
We study the dynamic response of a wind turbine structure subjected to theoretical seismic motions, taking into account the rotational component of ground shaking. Models are generated for a shallow moderate crustal earthquake in the Madrid Region (Spain). Synthetic translational and rotational time histories are computed using the Discrete Wavenumber Method, assuming a point source and a horizontal layered earth structure. These are used to analyze the dynamic response of a wind turbine, represented by a simple finite element model. Von Mises stress values at different heights of the tower are used to study the dynamical structural response to a set of synthetic ground motion time histories.
Keywords: Synthetic seismograms, rotations, wind turbine, dynamic structural response.
9007 Modeling the Transport of Charge Carriers in the Active Devices MESFET, Based of GaInP by the Monte Carlo Method
Authors: N. Massoum, A. Guen. Bouazza, B. Bouazza, A. El Ouchdi
Abstract:
The progress of the integrated circuit industry in recent years has been driven by the continuous miniaturization of transistors. With the reduction of component dimensions to 0.1 micron and below, new physical effects come into play that standard two-dimensional (2D) simulators do not consider. In fact, the third dimension comes into play because the transverse and longitudinal dimensions of the components are of the same order of magnitude. To describe the operation of such components with greater fidelity, we must refine the simulation tools and adapt them to take these phenomena into account. After an analytical study of the static characteristics of the component, according to the different operating modes, a numerical simulation is performed of a field-effect transistor with a submicron gate (GaInP MESFET). The influence of the gate length is studied. The results are used to determine the optimal geometric and physical parameters of the component for specific applications and uses.
Keywords: Monte Carlo simulation, transient electron transport, MESFET device.
9006 Stress Analysis of the Ceramics Heads with Different Sizes under the Destruction Tests
Authors: V. Fuis, P. Janicek, T. Navrat
Abstract:
The global problem addressed is the calculation of the parameters of ceramic material from a set of destruction tests of ceramic heads of total hip joint endoprostheses. The standard way of calculating the material parameters consists in carrying out a set of 3- or 4-point bending tests on specimens cut out from parts of the ceramic material to be analysed. In the case of ceramic heads, it is not possible to cut out specimens of the required dimensions because the heads are too small (if the cut-out specimens were smaller than the standardised ones, the material parameters derived from them would exhibit higher strength values than the given ceramic material really has). A special destruction device for head destruction was designed, and the local problem addressed is the modification of this destructive device based on the analysis of tensile stress in the head for two different values of the depth of the conical hole in the head. The goal of the device modification is to shift the location with the extreme value of σ1max from the region of the hole bottom of the head to its opening. This modification will increase the credibility of the obtained material properties of the bioceramics, which will be determined from a set of head destructions using the Weibull weakest link theory.
Keywords: Ceramic heads, depth of the conical hole, destruction test, material parameters, principal stress, total hip joint endoprosthesis.
9005 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied to two different settings, namely the single-participant level and the group level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction was 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.
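A compact, generic outline of the two-stage pipeline described above (learn an ICA unmixing plus a sparse classifier, then reuse the same unmixing on new data) might look as follows. Scikit-learn's FastICA and an L1-penalised logistic regression are used here only as stand-ins for the authors' ICA and SLR tools; the data, channel counts, and the list of "brain" ICs are simulated placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

# Stage 1: learn the unmixing matrix and a classifier on training EEG.
# X_train: (n_samples, n_channels) EEG, y_train: finger-movement direction (0..7).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((800, 32))
y_train = rng.integers(0, 8, 800)

ica = FastICA(n_components=16, random_state=0)
S_train = ica.fit_transform(X_train)             # independent components (ICs)
keep = [0, 2, 5, 7]                               # ICs judged to reflect brain activity
clf = LogisticRegression(penalty="l1", solver="saga", max_iter=5000)
clf.fit(S_train[:, keep], y_train)                # sparse classifier stand-in for SLR

# Stage 2: reuse the previously learned unmixing on a different EEG dataset.
X_new = rng.standard_normal((200, 32))
S_new = ica.transform(X_new)                      # apply the stored unmixing matrix
pred = clf.predict(S_new[:, keep])
```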
9004 Prediction of Writer Using Tamil Handwritten Document Image Based on Pooled Features
Authors: T. Thendral, M. S. Vijaya, S. Karpagavalli
Abstract:
A Tamil handwritten document is taken as the key source of data to identify the writer. Tamil is a classical language which has 247 characters, including compound characters, consonants, vowels, and a special character. Most Tamil characters are multifaceted in nature. Handwriting is a unique feature of an individual. Writers may change their handwriting according to their frame of mind, and this poses a serious challenge in identifying the writer. A new discriminative model with pooled features of handwriting is proposed and implemented using a support vector machine. Prediction accuracy of 100% has been reported by the RBF and polynomial kernel based classification models.
Keywords: Classification, Feature extraction, Support vector machine, Training, Writer.
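A minimal stand-in for the reported SVM classifiers with RBF and polynomial kernels could look like the sketch below. It is not the authors' code: the pooled handwriting features and writer labels are simulated, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder for pooled handwriting features: rows = document images, labels = writers.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 64))
y = rng.integers(0, 10, 300)                   # 10 hypothetical writers
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for kernel in ("rbf", "poly"):
    model = SVC(kernel=kernel, C=10.0, gamma="scale")
    model.fit(X_tr, y_tr)
    print(kernel, accuracy_score(y_te, model.predict(X_te)))
```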
9002 Land Use around Metro Stations: A Case Study
Authors: A. Roukouni, S. Basbas, M. Giannopoulou
Abstract:
Transport and land use are two systems that are mutually influenced. Their interaction is a complex process associated with continuous feedback. The paper examines the existing land use around an under-construction metro station of the new metro network of Thessaloniki, Greece, through field investigations around the station's predefined location. Moreover, apart from the analytical land use recording, a sampling questionnaire survey is addressed to several selected enterprises in the study area. The survey aims to specify the characteristics of the enterprises, the trip patterns of their employees and clients, as well as the stated preferences towards the changes the new metro station is considered to bring to the area. The interpretation of the interrelationships among selected data from the questionnaire survey takes place using the method of Principal Components Analysis for Categorical Data. The followed methodology and the survey's results contribute to the enrichment of the relevant bibliography concerning the way the creation of a new metro station can have an impact on the land use pattern of an area, by examining the situation before the operation of the station.
Keywords: Land use, metro station, questionnaire survey.
9001 Application of Staining Intensity Correlation Analysis to Visualize Protein Colocalization at a Cellular Level
Authors: Permphan Dharmasaroja
Abstract:
Mutations of the telomeric copy of the survival motor neuron 1 (SMN1) gene cause spinal muscular atrophy. A deletion of the Eef1a2 gene leads to lower motor neuron degeneration in wasted mice. Indirect evidence has suggested that the eEF1A protein family may interact with SMN, and our previous study showed that abnormalities of neuromuscular junctions in wasted mice were similar to those of Smn mutant mice. To determine potential colocalization between SMN and the tissue-specific translation elongation factor 1A2 (eEF1A2), an immunochemical analysis of HeLa cells transfected with the plasmid pcDNA3.1(+)C-hEEF1A2-myc and a new quantitative test of colocalization by intensity correlation analysis (ICA) were used to explore the association of SMN and eEF1A2. Here the results showed that eEF1A2 redistributed from the cytoplasm to the nucleus in response to serum and epidermal growth factor. In the cytoplasm, compelling evidence showed that staining for myc-tagged eEF1A2 varied in synchrony with that for SMN, consistent with the formation of an SMN-eEF1A2 complex in the cytoplasm of HeLa cells. These findings suggest that eEF1A2 may colocalize with SMN in the cytoplasm and may be a component of the SMN complex. However, a limitation of the ICA method is its inability to resolve colocalization in components of small organelles such as the nucleus.
Keywords: Intensity correlation analysis, intensity correlation quotient.
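The intensity correlation analysis used above can be summarised by the intensity correlation quotient (ICQ): for each pixel the product of the two channels' mean-subtracted intensities is computed, and the ICQ is the fraction of positive products minus 0.5 (values near +0.5 suggest synchronous staining, values near −0.5 segregated staining). The sketch below is a generic illustration of that calculation on synthetic images, not the tool used in the study.

```python
import numpy as np

def intensity_correlation_quotient(channel_a, channel_b):
    """ICQ of two equally sized intensity images (e.g. SMN and eEF1A2 staining)."""
    a = channel_a.astype(float).ravel()
    b = channel_b.astype(float).ravel()
    pdm = (a - a.mean()) * (b - b.mean())      # product of the differences from the mean
    return np.count_nonzero(pdm > 0) / pdm.size - 0.5

# Synthetic example: channel_b varies in synchrony with channel_a plus noise.
rng = np.random.default_rng(0)
channel_a = rng.random((128, 128))
channel_b = 0.8 * channel_a + 0.2 * rng.random((128, 128))
print(intensity_correlation_quotient(channel_a, channel_b))   # close to +0.5
```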
9000 Evaluating the Effectiveness of Memory Overcommit Techniques on KVM-based Hosting Platform
Authors: Chin-Hung Li
Abstract:
Determining how many virtual machines a Linux host can run can be a challenge. One of the tough tasks is to find the balance among performance, density, and usability. The KVM hypervisor has become the most popular open-source full virtualization solution. It supports several ways of running guests with more memory than the host really has. Given the large differences between minimum and maximum guest memory requirements, this paper presents initial results on same-page merging, ballooning, and live migration techniques that aim at optimum memory usage on a KVM-based cloud platform. Given the design of the initial experiments, the resulting data are a useful reference for system administrators. The results from these experiments concluded that each method offers a different reliability tradeoff.
Keywords: Kernel-based Virtual Machine, overcommit, virtualization.
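One of the overcommit techniques named above, same-page merging, is exposed on a Linux/KVM host through the kernel's KSM counters under /sys/kernel/mm/ksm. A small monitoring sketch is shown below; it assumes a Linux host with KSM enabled and 4 KiB pages, and the "saving" figure is only a rough estimate, not a measurement method from the paper.

```python
from pathlib import Path

KSM_DIR = Path("/sys/kernel/mm/ksm")

def read_ksm(name):
    """Read one KSM counter from sysfs; returns None if KSM is unavailable."""
    path = KSM_DIR / name
    return int(path.read_text()) if path.exists() else None

shared = read_ksm("pages_shared")      # de-duplicated pages kept in memory
sharing = read_ksm("pages_sharing")    # guest pages mapped onto those shared pages
if shared and sharing:
    # Kernel docs describe pages_sharing as a measure of how much is saved;
    # assume 4 KiB pages for a rough MiB figure.
    saved_mib = sharing * 4096 / 2**20
    print(f"KSM is saving roughly {saved_mib:.1f} MiB across guests")
else:
    print("KSM statistics not available on this host")
```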
8999 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity
Authors: Dmitriy Yu. Korulkin, Raissa A. Muzychkina
Abstract:
The article presents the results of the isolation and component chromatographic analysis of essential oils of Polygonum L. plants growing in commercial reserves at the territory of Kazakhstan, together with the results of research on the antibacterial and antifungal activity of the isolated compounds.
Keywords: Antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.
8998 Identification of Reusable Software Modules in Function Oriented Software Systems using Neural Network Based Technique
Authors: Sonia Manhas, Parvinder S. Sandhu, Vinay Chopra, Nirvair Neeru
Abstract:
The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems [6]. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric-based approach for characterizing a software module. In the present work, the values of McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, the Halstead Software Science Indicator for volume indication, the Reuse Frequency metric, and the Coupling Metric of the software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE).
Keywords: Software reusability, neural networks, MAE, RMSE, accuracy.
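A generic miniature of the metric-based approach above, five module metrics in, a neural network out, scored with MAE and RMSE, is sketched below with scikit-learn. The metric values and the target reusability score are simulated; nothing here reproduces the authors' models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Columns: cyclomatic complexity, regularity, Halstead volume, reuse frequency, coupling.
rng = np.random.default_rng(0)
X = rng.random((400, 5))
y = X @ np.array([0.3, 0.2, 0.1, 0.3, -0.2]) + 0.05 * rng.standard_normal(400)  # reusability score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
pred = net.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```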
8997 An Exploration on Competency-Based Curricula in Integrated Circuit Design
Authors: Chih Chin Yang, Chung Shan Sun
Abstract:
In this paper, the relationships between professional competences and school curricula in the IC design industry are explored. The research methods are a semi-structured questionnaire survey and focus group interviews. Study participants are graduates of microelectronics engineering departments who are currently employed in the IC industry. The IC industries are defined as the electronic component manufacturing industry and the optical-electronic component manufacturing industry within the semiconductor and optical-electronic material device sectors, respectively. Study participants selected from the IC design industry include IC engineering and electronic & semiconductor engineering. The training of people with IC design professional competence in microelectronics engineering departments is explored in this research. The IC professional competences of human resources in the IC design industry include general intelligence and professional intelligence.
Keywords: IC design, curricula, competence, task, duty.
8996 Structural Reliability of Existing Structures: A Case Study
Authors: Z. Sakka, I. Assakkaf, T. Al-Yaqoub, J. Parol
Abstract:
A reliability-based methodology for the assessment and evaluation of reinforced concrete (R/C) structural elements of concrete structures is presented herein. The results of the reliability analysis and assessment for R/C structural elements were verified against the results obtained through deterministic methods. The outcomes of the reliability-based analysis were compared against currently adopted safety limits that are incorporated in the reliability indices β, according to international standards and codes. The methodology is based on probabilistic analysis using reliability concepts and the statistics of the main random variables that are relevant to the subject matter and that are used in the performance-function equation(s) associated with the structural elements under study. These techniques yield a reliability index β, commonly known as the reliability measure, that can be utilized to assess and evaluate the safety, human risk, and functionality of the structural component. They can also yield revised partial safety factors for certain target reliability indices that can be used for redesigning the R/C elements of the building and that could assist in considering other remedial actions to improve the safety and functionality of the member.
Keywords: Concrete Structures, FORM, Monte Carlo Simulation, Structural Reliability.
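The reliability index β mentioned above relates directly to the probability of failure through β = −Φ⁻¹(P_f). A minimal Monte Carlo illustration is given below, with an assumed limit-state g = R − S and arbitrary resistance/load statistics chosen purely for demonstration; it is not the case-study data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed statistics for a single R/C element: resistance R and load effect S.
R = rng.normal(loc=300.0, scale=30.0, size=n)   # kN·m, illustrative
S = rng.normal(loc=180.0, scale=40.0, size=n)   # kN·m, illustrative

g = R - S                                       # limit-state (performance) function
p_f = np.mean(g < 0.0)                          # Monte Carlo failure probability
beta = -norm.ppf(p_f)                           # reliability index

print(f"P_f ~ {p_f:.2e}, beta ~ {beta:.2f}")
```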
8995 A Hybrid CamShift and l1-Minimization Video Tracking Algorithm
Authors: Clark Van Dam, Gagan Mirchandani
Abstract:
The Continuously Adaptive Mean-Shift (CamShift) algorithm, incorporating scene depth information, is combined with the l1-minimization sparse representation based method to form a hybrid kernel and state space-based tracking algorithm. We take advantage of the increased efficiency of the former with the robustness to occlusion property of the latter. A simple interchange scheme transfers control between algorithms based upon drift and occlusion likelihood. It is quantified by the projection of target candidates onto a depth map of the 2D scene obtained with a low cost stereo vision webcam. Results are improved tracking in terms of drift over each algorithm individually, in a challenging practical outdoor multiple occlusion test case.
Keywords: CamShift, l1-minimization, particle filter, stereo vision, video tracking.
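The CamShift half of the hybrid tracker can be prototyped with OpenCV's built-in routine. The sketch below shows only the usual back-projection/CamShift loop; the l1-minimization branch and the depth-based interchange scheme are not reproduced, and the video file and initial window are placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("outdoor_scene.mp4")        # placeholder video
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 120                     # placeholder initial target window
roi = frame[y:y + h, x:x + w]

hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, window = cv2.CamShift(back_proj, window, criteria)
    # `window` is where a hybrid scheme would hand over to the l1-minimization
    # tracker when drift or occlusion is suspected.
    box = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [box], True, (0, 255, 0), 2)
cap.release()
```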
8994 A Serial Hierarchical Support Vector Machine and 2D Feature Sets Act for Brain DTI Segmentation
Authors: Mohammad Javadi
Abstract:
A serial hierarchical support vector machine (SHSVM) is proposed to discriminate three brain tissues: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). SHSVM has a novel classification approach, repeating the hierarchical classification on the data set iteratively. It uses a Radial Basis Function (RBF) kernel with different tunings to obtain accurate results. As a second approach, segmentation is also performed with the DAGSVM method. In this article, eight univariate features are extracted from the raw DTI data, and all the possible 2D feature sets are examined within the segmentation process. SHSVM succeeds in obtaining DSI values higher than 0.95 for all three tissues, which are higher than the DAGSVM results.
Keywords: Brain segmentation, DTI, hierarchical, SVM.
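The serial hierarchical idea, classifying one tissue first and then separating the remaining two, can be illustrated with two stacked RBF-kernel SVMs. The feature matrix below is simulated, and the tissue ordering (CSF first, then GM vs. WM) is only an assumption for the sketch, not the paper's exact hierarchy.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 2))          # a 2D DTI feature set per voxel (simulated)
y = rng.integers(0, 3, 600)                # 0 = CSF, 1 = GM, 2 = WM

# Stage 1: CSF vs. the rest.
stage1 = SVC(kernel="rbf", gamma="scale").fit(X, (y == 0).astype(int))
# Stage 2: GM vs. WM, trained only on non-CSF voxels.
mask = y != 0
stage2 = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])

def predict_tissue(features):
    """Serial hierarchical prediction for a batch of voxels."""
    features = np.atleast_2d(features)
    out = np.zeros(len(features), dtype=int)        # default: CSF (label 0)
    non_csf = stage1.predict(features) == 0         # stage 1 says "not CSF"
    if non_csf.any():
        out[non_csf] = stage2.predict(features[non_csf])
    return out

print(predict_tissue(X[:10]))
```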
8993 Color and Layout-based Identification of Documents Captured from Handheld Devices
Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold
Abstract:
This paper proposes a method, combining color and layout features, for identifying documents captured from low-resolution handheld devices. On one hand, the document image color density surface is estimated and represented with an equivalent ellipse, and on the other hand, the document's shallow layout structure is computed and hierarchically represented. Our identification method first uses the color information in the documents in order to focus the search space on documents having a similar color distribution, and finally selects the document having the most similar layout structure in the remainder of the search space.
Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.
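The color component described above, estimating a document's color density and summarising it with an equivalent ellipse, can be sketched as a kernel density estimate over chromaticity plus the covariance ellipse of that distribution. The code is a generic illustration under those assumptions, not the authors' pipeline, and the image data is random.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Placeholder for a low-resolution capture: (H, W, 3) RGB values in [0, 1].
image = rng.random((60, 80, 3))

rgb = image.reshape(-1, 3)
s = rgb.sum(axis=1) + 1e-9
chroma = (rgb[:, :2] / s[:, None]).T           # (2, N) r-g chromaticity samples

density = gaussian_kde(chroma)                 # color density surface estimate
grid = np.mgrid[0:1:50j, 0:1:50j].reshape(2, -1)
surface = density(grid).reshape(50, 50)        # evaluate on a 50x50 chromaticity grid

# Equivalent ellipse: mean and covariance eigen-structure of the chromaticity cloud.
mean = chroma.mean(axis=1)
eigval, eigvec = np.linalg.eigh(np.cov(chroma))
axes = 2.0 * np.sqrt(eigval)                   # ellipse semi-axes (2-sigma, illustrative)
print("ellipse centre", mean, "semi-axes", axes)
```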
8992 Shear Modulus Degradation of a Liquefiable Sand Deposit by Shaking Table Tests
Authors: Henry Munoz, Muhammad Mohsan, Takashi Kiyota
Abstract:
Strength and deformability characteristics of a liquefiable sand deposit, including the development of earthquake-induced shear stress and shear strain as well as soil softening via the progressive degradation of shear modulus, were studied via shaking table experiments. To do so, a model of a liquefiable sand deposit was constructed and densely instrumented, and accelerations, pressures, and displacements at different locations were continuously monitored. Furthermore, the confinement effects on the strength and deformation characteristics of the liquefiable sand deposit due to an external surcharge, applied by placing a heavy concrete slab (i.e. the model of an actual structural rigid pavement) on the ground surface, were examined. The results indicate that as the number of seismic-loading cycles increases, the sand deposit softens progressively as large shear strains take place in different sand elements. The liquefaction state is reached after the combined effects of the progressive degradation of the initial shear modulus, associated with the continuous decrease in the mean principal stress, and the buildup of excess pore pressure take place in the sand deposit. Finally, the confinement effects given by a concrete slab placed on the surface of the sand deposit resulted in a favorable increase in the initial shear modulus, an increase in the mean principal stress, and a decrease in the softening rate (i.e. the decreasing rate of shear modulus) of the sand, thus making the onset of liquefaction take place at a later stage. That is, liquefaction took place only after the sand deposit with a concrete slab had experienced a higher number of seismic loading cycles, in contrast to an ordinary sand deposit with no concrete slab.
Keywords: Liquefaction, shaking table, shear modulus degradation, earthquake.
8991 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy
Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko
Abstract:
In this study, a comparative analysis of the approaches associated with the use of neural network algorithms for effective solution of a complex inverse problem – the problem of identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions by the spectra of Raman scattering of light – is performed. It is shown that application of artificial neural networks provides the average accuracy of determination of concentration of each salt no worse than 0.025 M. The results of comparative analysis of input data compression methods are presented. It is demonstrated that use of uniform aggregation of input features allows decreasing the error of determination of individual concentrations of components by 16-18% on the average.
Keywords: Inverse problems, multi-component solutions, neural networks, Raman spectroscopy.
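As a generic illustration of solving this kind of inverse problem with a neural network (mapping a Raman spectrum to several salt concentrations), the sketch below trains a multi-output MLP on simulated spectra. It is not the architecture, data, or compression scheme from the study; the band positions and noise level are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_samples, n_wavenumbers, n_salts = 500, 200, 4

# Simulated spectra: each salt adds a Gaussian band scaled by its concentration (in M).
centers = rng.integers(20, 180, n_salts)
bands = np.exp(-0.5 * ((np.arange(n_wavenumbers)[None, :] - centers[:, None]) / 5.0) ** 2)
C = rng.uniform(0.0, 2.0, (n_samples, n_salts))              # true concentrations
spectra = C @ bands + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

X_tr, X_te, C_tr, C_te = train_test_split(spectra, C, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=4000, random_state=0)
net.fit(X_tr, C_tr)                                          # multi-output regression
print("mean abs. error per salt, M:", mean_absolute_error(C_te, net.predict(X_te)))
```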
8990 The Underestimation of Cultural Risk in the Execution of Megaprojects
Authors: Alan Walsh, Peter Walker, Michael Ellis
Abstract:
There is a real danger that both practitioners and researchers considering risks associated with megaprojects ignore or underestimate the impacts of cultural risk. The paper investigates the potential impacts of a failure to achieve cultural unity between the principal actors executing a megaproject. The principal relationships include those between the principal contractors and the project stakeholders, or between the project stakeholders and their principal advisors, Western Consultants. This study confirms that cultural dissonance between these parties can delay or disrupt megaproject execution and examines why cultural issues should be prioritized as a significant risk factor in megaproject delivery. This paper addresses the practical impacts and potential mitigation measures, which may reduce cultural dissonance for a megaproject's delivery. This information is retrieved from on-going case studies in live infrastructure megaprojects in Europe and the Middle East's GCC states, from Western Consultants' perspective. The collaborating researchers each have at least 30 years of construction experience and are engaged in architecture, project management and contracts management, dealing with megaprojects in Europe or the GCC. After examining the cultural interfaces they have observed during the execution of megaprojects, they conclude that globally, culture significantly influences their efficient delivery. The study finds that cultural risk is ever-present where different nationalities co-manage megaprojects and that cultural conflict poses a real threat to the timely delivery of megaprojects. The study indicates that the higher the cultural distance between the principal actors, the more pronounced the risk, with the risk of cultural dissonance more prominent in GCC megaprojects. The findings support a more culturally aware and cohesive team approach and recommend cross-cultural training to mitigate the effects of cultural disparity.
Keywords: Cultural risk underestimation, cultural distance, megaproject characteristics, megaproject execution.
8989 Teager-Huang Analysis Applied to Sonar Target Recognition
Authors: J.-C. Cexus, A.O. Boudraa
Abstract:
In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for nonstationary signal analysis. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs) with well-defined Instantaneous Frequency (IF) and Instantaneous Amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager Energy Operator (TEO). IF and IA, useful to describe the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.
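The Teager-Kaiser energy operator at the heart of the analysis above has a simple discrete form, Ψ[x](n) = x(n)² − x(n−1)x(n+1), from which instantaneous amplitude and frequency can be separated. The sketch below implements the operator together with one common separation variant (DESA-2) as a generic illustration; it is not the authors' exact ESA code, and the AM-FM test signal is invented.

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def desa2(x):
    """Energy separation (DESA-2 variant): instantaneous amplitude and frequency."""
    psi_x = teager(x)
    z = np.zeros_like(x)
    z[1:-1] = x[2:] - x[:-2]                   # symmetric difference
    psi_z = teager(z)
    with np.errstate(divide="ignore", invalid="ignore"):
        omega = 0.5 * np.arccos(np.clip(1.0 - psi_z / (2.0 * psi_x), -1.0, 1.0))
        amp = 2.0 * psi_x / np.sqrt(psi_z)
    return amp, omega                           # omega in radians/sample

# AM-FM test signal, similar in spirit to one IMF of an echo.
n = np.arange(2048)
x = (1.0 + 0.3 * np.cos(2 * np.pi * 0.002 * n)) * np.cos(2 * np.pi * 0.05 * n)
amp, omega = desa2(x)
print("median IF (cycles/sample):", np.median(omega[10:-10]) / (2 * np.pi))  # ~0.05
```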
8988 Implementation of TinyHash based on Hash Algorithm for Sensor Network
Authors: HangRok Lee, YongJe Choi, HoWon Kim
Abstract:
In recent years, security architectures for sensor networks have been proposed [2][4]. One of these, TinySec by Chris Kalof, Naveen Sastry, and David Wagner, proposed a link-layer security architecture that considers some constraints of sensor networks (i.e., energy, bandwidth, computation capability, etc.). TinySec employs CBC mode of encryption and CBC-MAC for authentication based on the SkipJack block cipher. Currently, TinySec is incorporated in TinyOS for sensor network security. This paper introduces TinyHash, based on a general hash algorithm. TinyHash is the module intended to replace the authentication and integrity parts of TinySec; that is, it applies a hash algorithm on the TinySec architecture. For compatibility with TinySec, the components in TinyHash are constructed with a structure similar to that of TinySec. TinyHash implements the HMAC component for authentication and the Digest component for message integrity. Additionally, we define some interfaces for services associated with the hash algorithm.
Keywords: Sensor network security, nesC, TinySec, TinyOS, hash, HMAC, integrity.
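The HMAC component described above (authentication via a keyed hash) is easy to illustrate outside nesC. The Python sketch below shows the same keyed-hash construction with the standard library, purely as a reference for the idea; it is not the TinyHash code, real sensor nodes would use a far lighter hash, and the key, payload, and 4-byte truncation are illustrative choices.

```python
import hmac
import hashlib

key = b"shared-link-layer-key"                 # placeholder key shared by the nodes
packet = b"node-17|seq=42|temperature=21.5"    # placeholder message payload

# Sender: append a truncated MAC (link-layer MACs are kept short, e.g. 4 bytes).
tag = hmac.new(key, packet, hashlib.sha256).digest()[:4]
frame = packet + tag

# Receiver: recompute the MAC and compare in constant time.
received_payload, received_tag = frame[:-4], frame[-4:]
expected = hmac.new(key, received_payload, hashlib.sha256).digest()[:4]
print("authentic:", hmac.compare_digest(received_tag, expected))
```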
8987 The Laser Line Detection for Autonomous Mapping Based on Color Segmentation
Authors: Pavel Chmelar, Martin Dobrovolny
Abstract:
Laser projection, or laser footprint detection, is today widely used in many fields of robotics, measurement, and electronics. The system accuracy strictly depends on precise laser footprint detection on target objects. This article deals with laser line detection based on RGB segmentation and component labeling. The measurement device used was the developed optical rangefinder, which is equipped with vertical sweeping of the laser beam and a high-quality camera. This system was developed mainly for automatic exploration and mapping of unknown spaces. The first section presents a new detection algorithm. The second section presents measurement results. The measurements were performed under variable light conditions in interiors. The last part of the article presents the achieved results and the differences between day and night measurements.
Keywords: Automatic mapping, color segmentation, component labeling, distance measurement, laser line detection, vector map.
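The detection chain sketched in the abstract, colour thresholding to isolate the laser footprint, connected-component labelling to keep the dominant blob, and then a line fit, can be outlined with OpenCV as below. The thresholds, the assumed red laser colour, and the input file are placeholders, not values from the paper.

```python
import cv2
import numpy as np

frame = cv2.imread("rangefinder_frame.png")                  # placeholder camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Colour segmentation: keep pixels close to the (assumed red) laser footprint.
mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))

# Component labelling: keep the largest non-background component.
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    ys, xs = np.nonzero(labels == largest)

    # Fit the laser line through the component's pixels.
    vx, vy, x0, y0 = cv2.fitLine(
        np.column_stack([xs, ys]).astype(np.float32), cv2.DIST_L2, 0, 0.01, 0.01
    ).ravel()
    print(f"laser line direction ({vx:.3f}, {vy:.3f}) through ({x0:.1f}, {y0:.1f})")
else:
    print("no laser footprint detected")
```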
8986 Uncertainty Analysis of ROSA/LSTF Test on Pressurized Water Reactor Cold Leg Small-Break Loss-of-Coolant Accident without Scram
Authors: Takeshi Takeda
Abstract:
The author conducted post-test analysis with the RELAP5/MOD3.3 code for an experiment using the ROSA/LSTF (rig of safety assessment/large-scale test facility) that simulated a 1% cold leg small-break loss-of-coolant accident under the failure of scram in a pressurized water reactor. The LSTF test assumed total failure of high-pressure injection system of emergency core cooling system. In the LSTF test, natural circulation contributed to maintain core cooling effect for a relatively long time until core uncovery occurred. The post-test analysis result confirmed inadequate prediction of the primary coolant distribution. The author created the phenomena identification and ranking table (PIRT) for each component. The author investigated the influences of uncertain parameters determined by the PIRT on the cladding surface temperature at a certain time during core uncovery within the defined uncertain ranges.
Keywords: LSTF, LOCA, scram, RELAP5.
8985 Thermodynamic Analysis of R507A-R23 Cascade Refrigeration System
Authors: A. D. Parekh, P. R. Tailor
Abstract:
The present work deals with the thermodynamic analysis of a cascade refrigeration system using the ozone-friendly refrigerant pair R507A and R23. R507A is an azeotropic mixture composed of the HFC refrigerants R125/R143a (50%/50% wt.). R23 is a single-component HFC refrigerant used as a replacement for the CFC refrigerant R13 in low-temperature applications. These refrigerants have zero ozone depletion potential and are non-flammable, and since R507A is an azeotropic mixture there is no problem of temperature glide. This study thermodynamically analyses the R507A-R23 cascade refrigeration system to optimize the design and operating parameters of the system. The design and operating parameters include the condensing, evaporating, subcooling and superheating temperatures in the high-temperature circuit, the temperature difference in the cascade heat exchanger, and the condensing, evaporating, subcooling and superheating temperatures in the low-temperature circuit.
Keywords: COP, R507A, R23, cascade refrigeration system.
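A useful relation for this kind of two-stage system is that, given the COPs of the high- and low-temperature circuits taken separately, the overall cascade COP follows from an energy balance across the cascade heat exchanger: COP = COP_L·COP_H / (COP_L + COP_H + 1). The sketch below only demonstrates that bookkeeping with illustrative numbers; it does not use R507A/R23 property data or the paper's operating conditions.

```python
def cascade_cop(cop_low, cop_high):
    """Overall COP of a two-stage cascade from the per-circuit COPs.

    Energy balance: W_L = Q_e / COP_L, the cascade heat exchanger passes
    Q_e + W_L to the high-temperature circuit, and W_H = (Q_e + W_L) / COP_H,
    so COP = Q_e / (W_L + W_H) = COP_L * COP_H / (COP_L + COP_H + 1).
    """
    return (cop_low * cop_high) / (cop_low + cop_high + 1.0)

# Illustrative per-circuit values (not computed from R23/R507A properties).
print(cascade_cop(cop_low=2.5, cop_high=3.5))   # ~1.25
```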
8984 Solving Partially Monotone Problems with Neural Networks
Authors: Marina Velikova, Hennie Daniels, Ad Feelders
Abstract:
In many applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. Here we consider partially monotone problems, where the target variable depends monotonically on some of the predictor variables but not all. We propose an approach to build partially monotone models based on the convolution of monotone neural networks and kernel functions. The results from simulations and a real case study on house pricing show that our approach has significantly better performance than partially monotone linear models. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also reduces considerably the model variance in comparison to standard neural networks with weight decay.
Keywords: Mixture models, monotone neural networks, partially monotone models, partially monotone problems.
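One standard way to make a network monotone non-decreasing in selected inputs, and a rough stand-in for the monotone half of the construction above, is to constrain every weight on the path from those inputs to the output to be non-negative while using increasing activations; free inputs keep unconstrained weights. The forward pass below illustrates that idea only (no training loop, no kernel convolution, and all names are illustrative); it is not the authors' model.

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)   # numerically stable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def partially_monotone_forward(x_mono, x_free, params):
    """One hidden layer; output is non-decreasing in every column of x_mono."""
    W_mono = softplus(params["V_mono"])      # weights on monotone inputs forced >= 0
    W_free = params["W_free"]                # weights on free inputs are unconstrained
    w_out = softplus(params["v_out"])        # output weights forced >= 0
    hidden = sigmoid(x_mono @ W_mono + x_free @ W_free + params["b"])
    return hidden @ w_out

rng = np.random.default_rng(0)
params = {
    "V_mono": rng.standard_normal((2, 8)),   # 2 monotone predictors (e.g. floor area, rooms)
    "W_free": rng.standard_normal((3, 8)),   # 3 free predictors
    "b": rng.standard_normal(8),
    "v_out": rng.standard_normal(8),
}
x_mono, x_free = rng.random((5, 2)), rng.random((5, 3))
y = partially_monotone_forward(x_mono, x_free, params)
# Increasing a monotone input can never decrease the prediction:
y_up = partially_monotone_forward(x_mono + 0.5, x_free, params)
assert np.all(y_up >= y)
```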