Search results for: solvent casting method
18179 An Accelerated Stochastic Gradient Method with Momentum
Authors: Liang Liu, Xiaopeng Luo
Abstract:
In this paper, we propose an accelerated stochastic gradient method with momentum. The momentum term is a weighted average of the generated gradients, and the weights decay inversely proportionally with the number of iterations. Stochastic gradient descent with momentum (SGDM) uses weights that decay exponentially with the number of iterations to generate the momentum term. Using exponential decay weights, variants of SGDM with complicated and hard-to-interpret formulations have been proposed to achieve better performance. In contrast, the momentum update rules of our method are as simple as those of SGDM. We provide theoretical convergence analyses, which show that both the exponential decay weights and our inverse proportional decay weights can limit the variance of the parameter update direction to a region. Experimental results show that our method works well on many practical problems and outperforms SGDM.
Keywords: exponential decay rate weight, gradient descent, inverse proportional decay rate weight, momentum
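A minimal sketch of the momentum idea described above, assuming one plausible reading in which the running momentum is a weighted average of past gradients whose mixing weight decays as 1/t rather than by a fixed exponential factor; the paper's exact update rule, step size, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sgd_inverse_decay_momentum(grad_fn, x0, lr=0.1, n_iters=100):
    """Gradient descent whose momentum is a weighted average of past gradients
    with a 1/t mixing weight (a hedged sketch of the abstract's idea; classical
    SGDM would instead use a fixed mixing constant beta)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        g = grad_fn(x)
        w = 1.0 / t                   # inverse-proportional decay weight
        m = (1.0 - w) * m + w * g     # weighted average of generated gradients
        x = x - lr * m
    return x

# toy usage: minimise f(x) = ||x||^2 / 2, whose gradient is x
print(sgd_inverse_decay_momentum(lambda x: x, x0=[3.0, -2.0]))  # approaches the origin
```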
Procedia PDF Downloads 162
18178 Nonuniformity Correction Technique in Infrared Video Using Feedback Recursive Least Square Algorithm
Authors: Flavio O. Torres, Maria J. Castilla, Rodrigo A. Augsburger, Pedro I. Cachana, Katherine S. Reyes
Abstract:
In this paper, we present a scene-based nonuniformity correction method using a modified recursive least square algorithm with a feedback system on the updates. The feedback is designed to remove the impulsive noise contamination produced by a recursive least square algorithm by monitoring the output of the proposed algorithm. The key advantage of the method is its capacity to estimate the detector parameters and then compensate for impulsive noise contamination of the image on a frame-by-frame basis. We define the algorithm and present several experimental results to demonstrate the efficacy of the proposed method in comparison with several previously published recursive least square-based methods. We show that the proposed method removes impulsive noise contamination from the image.
Keywords: infrared focal plane arrays, infrared imaging, least mean square, nonuniformity correction
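A hedged sketch of a single-detector recursive least square update with a simple feedback gate, in the spirit of the abstract. The gain/offset detector model, the use of a reference value such as a local spatial mean, and the innovation threshold standing in for the feedback test are all assumptions; the paper's actual model and feedback rule are not reproduced here.

```python
import numpy as np

def rls_pixel_update(x, desired, theta, P, lam=0.99, gate=3.0):
    """One recursive least square update of a detector's (gain, offset) pair.
    theta = [gain, offset], P is the 2x2 inverse-correlation matrix, and the
    update is skipped when the innovation looks impulsive (the feedback gate)."""
    phi = np.array([x, 1.0])                        # regressor for y = gain*x + offset
    e = desired - phi @ theta                       # innovation
    if abs(e) > gate * np.sqrt(phi @ P @ phi):      # feedback: reject impulsive update
        return theta, P
    K = P @ phi / (lam + phi @ P @ phi)             # RLS gain
    theta = theta + K * e
    P = (P - np.outer(K, phi) @ P) / lam
    return theta, P

# toy usage: one pixel whose reference value is assumed to be a local mean
theta, P = np.array([1.0, 0.0]), 100.0 * np.eye(2)
theta, P = rls_pixel_update(x=0.82, desired=0.90, theta=theta, P=P)
print(theta)
```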
Procedia PDF Downloads 143
18177 Failure Simulation of Small-Scale Walls with Chases Using the Lattice Discrete Element Method
Authors: Karina C. Azzolin, Luis E. Kosteski, Alisson S. Milani, Raquel C. Zydeck
Abstract:
This work aims to numerically reproduce tests developed experimentally on reduced-scale walls with horizontal and inclined chases, using the Lattice Discrete Element Method (LDEM) implemented in the Abaqus/Explicit environment. The chases were cut at depths of 20%, 30%, and 50% in walls subjected to centered and eccentric loading. The parameters used to evaluate the numerical model are its strength, the failure mode, and the in-plane and out-of-plane displacements.
Keywords: structural masonry, wall chases, small scale, numerical model, lattice discrete element method
Procedia PDF Downloads 177
18176 BTEX (Benzene, Toluene, Ethylbenzene and Xylene) Degradation by Cold Plasma
Authors: Anelise Leal Vieira Cubas, Marina de Medeiros Machado, Marília de Medeiros Machado
Abstract:
The volatile organic compounds BTEX (benzene, toluene, ethylbenzene, and xylene), petroleum derivatives, are highly toxic and may have consequences for human health, biota, and the environment. In this direction, this paper proposes a method for treating these compounds using corona discharge plasma technology. The efficiency of the method was tested by analyzing samples of BTEX by gas chromatography after they passed through a plasma reactor. The results show that the optimal residence time of the sample in the reactor was 8 minutes.
Keywords: BTEX, degradation, cold plasma, ecological sciences
Procedia PDF Downloads 317
18175 Determine the Optimal Path of Content Adaptation Services with Max Heap Tree
Authors: Shilan Rahmani Azr, Siavash Emtiyaz
Abstract:
Recent developments in computing and communication technologies make mobile access to information much easier. Users can access information in different places using various devices with a wide variety of capabilities. Meanwhile, the format and details of electronic documents change every day. In these cases, a mismatch is created between the content and the client's capabilities. Recently, service-oriented content adaptation has been developed, in which the adaptation tasks are delegated to a set of dedicated services. In this approach, the main problem is to choose the most appropriate service among the accessible, distributed services. In this paper, a method for determining the optimal path through the best services, based on quality control parameters and user preferences, is proposed using a max heap tree. Compared with previous content adaptation methods, the efficiency of this method lies in determining the optimal path through the best measured services. The results show the advantages and improvements of this method compared with the others.
Keywords: service-oriented content adaptation, QoS, max heap tree, web services
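A minimal sketch of the max heap selection step described above. The scoring rule (a weighted sum of QoS parameters with user-preference weights) and the service names are assumptions for illustration; Python's heapq is a min-heap, so scores are negated to obtain max-heap ordering.

```python
import heapq

def rank_services(services, preference_weights):
    """Rank candidate adaptation services with a max heap.  Each service is
    scored by a weighted sum of its QoS parameters (an assumed scoring rule)."""
    heap = []
    for name, qos in services.items():
        score = sum(preference_weights[k] * qos[k] for k in preference_weights)
        heapq.heappush(heap, (-score, name))        # negate score for max-heap order
    ranked = []
    while heap:
        neg_score, name = heapq.heappop(heap)
        ranked.append((name, -neg_score))
    return ranked                                   # best service first

services = {
    "image_transcoder": {"availability": 0.9, "response_quality": 0.8},
    "text_summarizer":  {"availability": 0.7, "response_quality": 0.9},
}
print(rank_services(services, {"availability": 0.5, "response_quality": 0.5}))
```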
Procedia PDF Downloads 259
18174 Investigation of the Effect of Excavation Step in NATM on Surface Settlement by Finite Element Method
Authors: Seyed Mehrdad Gholami
Abstract:
Nowadays, the use of rail transport systems (metro) is increasing in most cities of the world, so the need for safe and economical ways of building tunnels and subway stations is felt more and more. One of the most commonly used methods for constructing underground structures in urban areas is NATM (New Austrian Tunneling Method). In this method, there are some key parameters, such as the excavation step and cross-sectional area, that have a significant effect on the surface settlement. Settlement is a very important control factor related to safe excavation. In this paper, the Finite Element Method is applied using Abaqus. Station R6 of Tehran Metro Line 6 is built by NATM, and its construction is studied and analyzed. Based on the outcomes of the numerical modeling and their comparison with the results of field instrumentation and monitoring, an excavation step of 1 meter and a longitudinal distance of 14 meters between side drifts are suggested to achieve safe tunneling with allowable settlement.
Keywords: excavation step, NATM, numerical modeling, settlement
Procedia PDF Downloads 139
18173 Solving SPDEs by Least Squares Method
Authors: Hassan Manouzi
Abstract:
We present in this paper a useful strategy to solve stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the Wick product of higher order and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
Keywords: least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method
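A minimal illustration of the least squares step only, under the assumption that the chaos basis evaluations and solution samples are already available as arrays; the Wick product formulation and the decomposition-coordination scheme from the abstract are not reproduced, and the data are synthetic.

```python
import numpy as np

# Fit chaos coefficients c so that Psi @ c approximates the solution samples u
# in the least squares sense, i.e. minimise ||Psi c - u||_2.
rng = np.random.default_rng(0)
n_samples, n_basis = 200, 5
Psi = rng.standard_normal((n_samples, n_basis))   # chaos basis evaluations (assumed given)
c_true = np.array([1.0, -0.5, 0.25, 0.0, 0.1])
u = Psi @ c_true + 0.01 * rng.standard_normal(n_samples)

c_hat, *_ = np.linalg.lstsq(Psi, u, rcond=None)
print(np.round(c_hat, 3))                         # close to c_true
```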
Procedia PDF Downloads 419
18172 Ground Deformation Module for the New Laboratory Methods
Authors: O. Giorgishvili
Abstract:
One of the important characteristics for the calculation of foundations is the modulus of deformation (E0). The main goal of the deformation-based calculation of building foundations is to keep the base settlement and the differential settlements within limits that do not cause cracks or changes in design levels dangerous to the normal operation of the buildings and their individual structures. As is known from the literature and from practice, the modulus of deformation is determined by two basic methods: the laboratory method, a compression soil test without lateral expansion, and the soil test in field conditions. The deformation modulus determined by the field method is closer to the actual modulus of the soil, but the complexity and cost of the tests often prevent its determination in the field. Therefore, the ground modulus of deformation is usually determined by the compression method without lateral expansion. In this regard, we introduce a new laboratory procedure for determining the ground modulus of deformation that allows lateral expansion and therefore reflects the actual modulus of deformation more accurately, giving values closer to those determined by the field method. The tests and their results showed that the proposed method yields a deformation modulus closer to the values obtained in the field and reflects the real behavior of the foundation more accurately than the standard compression test.
Keywords: build, deformation modulus, foundations, ground, laboratory research
Procedia PDF Downloads 369
18171 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
The blast furnace is a counter-current process in which the burden descends from the top and hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and a high coal injection rate with adverse raw materials such as high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, values that are close to world benchmarks and expand profitability. To sustain this regime, elimination of blast furnace irregularities such as hanging, channeling, and scaffolding is essential. In this paper, the sustaining of a zero hanging spell for three consecutive years under low coke rate operation, achieved through improvements in burden characteristics and burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation, is illustrated. Models have been created to comprehend and upgrade understanding of the blast furnace process. A model has been developed to maintain the slag viscosity in the desired range so as to attain proper burden permeability. A channeling prediction model has also been developed to recognize channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace of Tata Steel, Jamshedpur, and thus in achieving process stability over the last three years.
Keywords: hanging, channeling, blast furnace, coke
Procedia PDF Downloads 195
18170 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to obtain results. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms more practical. Approximate methods produce some amount of error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements calculated for the basic structure. Sampled shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (classified in view of the time of peak ground acceleration).
Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
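A minimal sketch of the SRSS combination mentioned above, which estimates a peak response by combining the peak modal responses; the toy values are illustrative, and the CQC rule (which additionally weights cross terms by modal correlation coefficients) is not sketched.

```python
import numpy as np

def srss(peak_modal_responses):
    """Square Root of the Sum of Squares combination of peak modal responses:
    R = sqrt(R_1^2 + R_2^2 + ... + R_n^2)."""
    r = np.asarray(peak_modal_responses, dtype=float)
    return np.sqrt(np.sum(r ** 2))

# toy usage: peak displacements of three modes (arbitrary units)
print(srss([12.0, 4.5, 1.8]))
```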
Procedia PDF Downloads 355
18169 Descent Algorithms for Optimization Algorithms Using q-Derivative
Authors: Geetanjali Panda, Suvrakanti Chakraborty
Abstract:
In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, which use q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and then the method is extended to a global scheme. Moreover, a variant of the practical Newton scheme is also developed by introducing a real sequence. Global convergence of these schemes is proved under some mild conditions. Numerical experiments and graphical illustrations are provided. Finally, the performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method
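For readers unfamiliar with q-calculus, the sketch below illustrates the Jackson q-derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), driving a plain descent iteration. This is only an illustration of the q-derivative idea; the paper's Newton-like local and global schemes, the choice of q, and the line search are not reproduced, and the step size below is an arbitrary assumption.

```python
def q_derivative(f, x, q=0.9, eps=1e-12):
    """Jackson q-derivative D_q f(x) = (f(qx) - f(x)) / ((q - 1) x); near x = 0
    it is replaced by an ordinary forward difference to avoid division by zero."""
    if abs(x) < eps:
        h = 1e-8
        return (f(x + h) - f(x)) / h
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

def q_descent(f, x0, q=0.9, lr=0.1, n_iters=100):
    """Plain descent driven by the q-derivative (illustrative only)."""
    x = x0
    for _ in range(n_iters):
        x = x - lr * q_derivative(f, x, q)
    return x

# For f(x) = (x - 2)^2 the q-derivative vanishes at x = 4/(1+q) ~ 2.105 rather
# than at the true minimum x = 2; the iterate converges to that q-stationary point.
print(q_descent(lambda x: (x - 2.0) ** 2, x0=5.0))
```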
Procedia PDF Downloads 398
18168 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction
Authors: Omer Cahana, Ofer Levi, Maya Herman
Abstract:
Magnetic Resonance Imaging (MRI) is a lengthy medical scan because of its long acquisition time, which is mainly due to the traditional sampling theorem defining a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have led to tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning
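A small sketch of the classical sensitivity-weighted coil combination that a sensitivity map estimation stage feeds into, x = sum_c conj(S_c) * IFFT2(k_c). The pyramid reconstruction network itself is not sketched, and the array shapes and random data below are toy assumptions.

```python
import numpy as np

def coil_combine(kspace, sens_maps):
    """Adjoint (sensitivity-weighted) combination of multi-coil k-space data:
    transform each coil's k-space to image space and sum with conjugate maps."""
    coil_images = np.fft.ifft2(kspace, axes=(-2, -1))
    return np.sum(np.conj(sens_maps) * coil_images, axis=0)

# toy usage: 4 coils, 64x64 k-space with random sensitivity maps
rng = np.random.default_rng(4)
k = rng.standard_normal((4, 64, 64)) + 1j * rng.standard_normal((4, 64, 64))
S = rng.standard_normal((4, 64, 64)) + 1j * rng.standard_normal((4, 64, 64))
print(coil_combine(k, S).shape)   # (64, 64)
```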
Procedia PDF Downloads 91
18167 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution
Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi
Abstract:
Lead (Pb²⁺) is a prime constituent of the effluents of many industries, such as mining, smelting and coal combustion, Pb-based painting and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosive manufacturing, storage batteries, and alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). Beyond this acceptable 'safe' level of lead(II) ions, water becomes unfit for human use and consumption and can lead to health problems and epidemics involving kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with an aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are hydrogels capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shapes of ordinary hydrogels do not change extensively during swelling, the shapes of superabsorbents change broadly because of their tremendous swelling capacity. Because of their strong response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have been attracting numerous industrial applications, for instance those exploiting their water retention property. Natural-based superabsorbent hydrogels have attracted much attention in medical and pharmaceutical products, baby diapers, agriculture, and horticulture because of their non-toxicity, biocompatibility, and biodegradability. Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) as a crosslinking agent and potassium persulfate as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was studied. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effects of the clay content of the hydrogel on its metal ion uptake behavior were studied. The nanocomposite hydrogels may be considered good candidates for environmental applications to retain water and to remove heavy metals.
Keywords: adsorption, hydrogel, nanocomposite, super adsorbent
Procedia PDF Downloads 187
18166 An Event Relationship Extraction Method Incorporating Deep Feedback Recurrent Neural Network and Bidirectional Long Short-Term Memory
Authors: Yin Yuanling
Abstract:
A Deep Feedback Recurrent Neural Network (DFRNN) and a Bidirectional Long Short-Term Memory network (BiLSTM) are designed to address the low accuracy of traditional relationship extraction models. The method combines DFRNN, which extracts local features of the text based on a deep feedback recurrent mechanism; BiLSTM, which better extracts global features of the text; and self-attention, which extracts semantic information. Experiments show that the method achieves an F1 value of 76.69% on the CEC dataset, which is 0.0652 higher than that of the BiLSTM+Self-ATT model, thus improving the performance of deep learning methods on the event relationship extraction task.
Keywords: event relations, deep learning, DFRNN models, bi-directional long and short-term memory networks
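A hedged sketch in PyTorch of the BiLSTM + self-attention part of such a pipeline. The deep feedback recurrent branch (DFRNN) is not reproduced, and the pooling strategy, vocabulary size, and all layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class BiLSTMSelfAttnClassifier(nn.Module):
    """BiLSTM for global text features, self-attention for semantic information,
    and a linear layer producing relation logits (a sketch only)."""
    def __init__(self, vocab_size=5000, embed_dim=128, hidden=64, n_relations=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, embed_dim)
        h, _ = self.bilstm(x)              # global features, (batch, seq, 2*hidden)
        a, _ = self.attn(h, h, h)          # self-attention over the sequence
        pooled = a.mean(dim=1)             # simple mean pooling (an assumption)
        return self.classifier(pooled)     # relation logits

# toy usage: a batch of 2 token sequences of length 12
model = BiLSTMSelfAttnClassifier()
print(model(torch.randint(0, 5000, (2, 12))).shape)   # torch.Size([2, 5])
```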
Procedia PDF Downloads 144
18165 Bi-Directional Evolutionary Topology Optimization Based on Critical Fatigue Constraint
Authors: Khodamorad Nabaki, Jianhu Shen, Xiaodong Huang
Abstract:
This paper develops a method for considering the critical fatigue stress as a constraint in the Bi-directional Evolutionary Structural Optimization (BESO) method. Our aim is to reach an optimal design in which high-cycle fatigue failure does not occur for a specified lifetime. The critical fatigue stress is calculated based on the modified Goodman criterion and used as a stress constraint in our topology optimization problem. Since fatigue generally does not occur under compressive stresses, we use a p-norm approach to the stress measurement that takes the highest tensile principal stress at each point as the stress measure to calculate the sensitivity numbers. The BESO method has been extended to minimize the volume of an object subjected to the critical fatigue stress constraint. The optimization results are compared with the results of the compliance minimization problem, which clearly shows the merits of our newly developed approach.
Keywords: topology optimization, BESO method, p-norm, fatigue constraint
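A minimal sketch of the two ingredients named above: a textbook form of the modified Goodman line and a p-norm aggregate of the tensile principal stresses. The paper's exact constraint formulation, safety factor, and sensitivity expressions may differ; the stress values are toy data.

```python
import numpy as np

def goodman_allowable_amplitude(sigma_m, s_e, s_u):
    """Allowable alternating stress from the modified Goodman line,
    sigma_a / S_e + sigma_m / S_u = 1  ->  sigma_a = S_e * (1 - sigma_m / S_u)."""
    return s_e * (1.0 - sigma_m / s_u)

def p_norm_stress(max_tensile_principal, p=8.0):
    """Smooth p-norm aggregate of the per-element maximum tensile principal
    stress; larger p approaches the true maximum but is less smooth."""
    s = np.maximum(np.asarray(max_tensile_principal, dtype=float), 0.0)
    return np.sum(s ** p) ** (1.0 / p)

# toy usage: element stresses in MPa, endurance limit 180 MPa, ultimate 400 MPa
print(p_norm_stress([120.0, 95.0, 60.0, 30.0], p=8.0))
print(goodman_allowable_amplitude(sigma_m=40.0, s_e=180.0, s_u=400.0))
```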
Procedia PDF Downloads 295
18164 Study of Natural Convection Heat Transfer of Plate-Fin Heat Sink
Authors: Han-Taw Chen, Tzu-Hsiang Lin, Chung-Hou Lai
Abstract:
This study applies the inverse method and three-dimensional commercial CFD software, in conjunction with experimental temperature data, to investigate the heat transfer and fluid flow characteristics of a plate-fin heat sink in a rectangular closed enclosure. The inverse method, together with the finite difference method and the experimental temperature data, is applied to determine the approximate heat transfer coefficient. Then, based on the obtained results, the zero-equation turbulence model is used to obtain the heat transfer and fluid flow characteristics between two fins. To validate the accuracy of the results obtained, a comparison of the heat transfer coefficients is made. The temperature obtained at selected measurement locations on the fin is also compared with experimental data. The effect of the height of the rectangular enclosure on the obtained results is discussed.
Keywords: inverse method, Fluent, heat transfer characteristics, plate-fin heat sink
Procedia PDF Downloads 389
18163 Least Squares Method Identification of Corona Current-Voltage Characteristics and Electromagnetic Field in Electrostatic Precipitator
Authors: H. Nouri, I. E. Achouri, A. Grimes, H. Ait Said, M. Aissou, Y. Zebboudj
Abstract:
This paper aims to analyze the behaviour of DC corona discharge in wire-to-plate electrostatic precipitators (ESP). Current-voltage curves are particularly analysed. Experimental results show that the discharge current is strongly affected by the applied voltage. The proposed approach to current identification is to use the method of least squares. Least squares problems fall into two categories: linear (ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. A closed-form solution (or closed-form expression) is any formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved iteratively.
Keywords: electrostatic precipitator, current-voltage characteristics, least squares method, electric field, magnetic field
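A minimal sketch of the linear least squares identification step, assuming the classical quadratic corona law I = k·V·(V - V0) as the current-voltage model; the paper identifies its own I-V characteristics, so this form and the synthetic data are only illustrative. Writing I = a·V² + b·V makes the fit linear in (a, b), with k = a and V0 = -b/a, so the closed-form solution applies.

```python
import numpy as np

# synthetic current-voltage measurements (kV, A), generated from a known law
V = np.array([18.0, 20.0, 22.0, 24.0, 26.0, 28.0])
I = 2.1e-6 * V * (V - 15.0) + 1e-7 * np.random.default_rng(1).standard_normal(V.size)

A = np.column_stack([V ** 2, V])                  # design matrix for I = a*V^2 + b*V
(a, b), *_ = np.linalg.lstsq(A, I, rcond=None)    # linear least squares, closed form
k, V0 = a, -b / a
print(f"k = {k:.3e}, onset voltage V0 = {V0:.2f} kV")
```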
Procedia PDF Downloads 431
18162 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics
Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty
Abstract:
Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that the reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly, and laborious. The present work aims to develop a simple, highly sensitive, precise, and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium, and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast, high-throughput sample analysis with a lower analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision, and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC
Procedia PDF Downloads 222
18161 Online Estimation of Clutch Drag Torque in Wet Dual Clutch Transmission Based on Recursive Least Squares
Authors: Hongkui Li, Tongli Lu, Jianwu Zhang
Abstract:
This paper focuses on developing a method for estimating the clutch drag torque in a wet DCT. The modelling of clutch drag torque is investigated. The dynamic viscosity of the oil, as the main factor affecting the clutch drag torque, is discussed. The paper proposes an estimation method for clutch drag torque based on recursive least squares, utilizing the dynamic equations of the gear shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
Keywords: clutch drag torque, wet DCT, dynamic viscosity, recursive least squares
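A generic sketch of a recursive least squares estimator with a forgetting factor, the kind of online update used for this sort of identification. The regressor and the assumed drag model tau = c1*omega + c0 in the toy usage are placeholders; the paper builds its regressors from the synchronization dynamics, which are not reproduced here.

```python
import numpy as np

class RecursiveLeastSquares:
    """Recursive least squares with forgetting factor lam for y = phi^T theta."""
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta = self.theta + K * (y - phi @ self.theta)
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
        return self.theta

# toy usage: identify an assumed drag model tau = c1*omega + c0 from noisy data
rls = RecursiveLeastSquares(n_params=2)
rng = np.random.default_rng(2)
for _ in range(500):
    omega = rng.uniform(5.0, 50.0)                       # relative speed, rad/s
    tau = 0.04 * omega + 1.5 + 0.05 * rng.standard_normal()
    rls.update([omega, 1.0], tau)
print(rls.theta)   # should approach [0.04, 1.5]
```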
Procedia PDF Downloads 318
18160 Design Systems and the Need for a Usability Method: Assessing the Fitness of Components and Interaction Patterns in Design Systems Using Atmosphere Methodology
Authors: Patrik Johansson, Selina Mardh
Abstract:
The present study proposes a usability test method, Atmosphere, to assess the fitness of components and interaction patterns of design systems. The method covers the user's perception of the components of the system, the efficiency of the logic of the interaction patterns, and perceived ease of use, as well as the user's understanding of the intended outcome of interactions. These aspects are assessed by combining measures of first impression, visual affordance, and expectancy. The method was applied to a design system developed for the design of an electronic health record system. The study was conducted with 15 healthcare personnel. It could be concluded that the Atmosphere method provides tangible data that enable human-computer interaction practitioners to analyze and categorize components and patterns based on perceived usability, the success rate of identifying interactive components, and the success rate of understanding the intended outcome of components and interaction patterns.
Keywords: atomic design, atmosphere methodology, design system, expectancy testing, first impression testing, usability testing, visual affordance testing
Procedia PDF Downloads 180
18159 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes
Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun
Abstract:
The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving measurement accuracy and saving calibration time. To reduce the time consumed by calibration, we should choose proper calibration gauges and develop a reasonable calibration method. In addition, we should obtain the exact optimal solution by accurately removing the gross errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we construct a mathematical model of the calibration algorithm and present a new mathematical method to remove the gross errors from the calibration data. Finally, we find the optimal solution to identify the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration
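A small sketch of the final Levenberg-Marquardt fitting step using scipy. The residual model here (a simple scale/offset correction of measured bar-gauge lengths) and the numbers are placeholder assumptions; the real AACMM kinematic model maps joint readings through the arm's kinematic parameters and is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, measured, nominal_length):
    """Placeholder residual model: corrected measurements should match the
    calibrated bar-gauge length (hypothetical scale/offset parameters only)."""
    scale, offset = params
    return scale * measured + offset - nominal_length

measured = np.array([1199.82, 1200.15, 1199.91, 1200.07])   # mm, toy data
fit = least_squares(residuals, x0=[1.0, 0.0], args=(measured, 1200.0), method="lm")
print(fit.x)   # identified scale and offset errors
```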
Procedia PDF Downloads 83
18158 The Employees' Classification Method in the Space of Their Job Satisfaction, Loyalty and Involvement
Authors: Svetlana Ignatjeva, Jelena Slesareva
Abstract:
The aim of the study is the development and adaptation of a method to analyze and quantify the indicators characterizing the relationship between a company and its employees. Diagnostics of such indicators is one of the most complex and topical issues in the psychology of labour. The proposed method is based on a questionnaire; its indicators reflect the cognitive, affective, and conative components of the socio-psychological attitude of employees towards being as efficient as possible in their professional activities. This approach allows measuring not only the selected factors but also such parameters as cognitive and behavioural dissonances. Adaptation of the questionnaire includes analysis of the factor structure and of the suitability of the phenomena indicators, measured in terms of the internal consistency of individual factors. The structural validity of the questionnaire was tested by exploratory factor analysis (extraction method: principal component analysis; rotation method: varimax with Kaiser normalization). Factor analysis allows reducing the dimensionality of the phenomena, moving from the indicators to aggregative indexes and latent variables. Aggregative indexes are obtained as the sum of the relevant indicators followed by standardization. Cronbach's alpha coefficient was used to assess the reliability and consistency of the questionnaire items. Two-step cluster analysis in the space of the allocated factors allows classifying employees according to their attitude to work in the company. The results of psychometric testing indicate the possibility of using the developed technique to analyze employees' attitudes towards their work in companies and to develop recommendations on their optimization.
Keywords: involvement in the organization, loyalty, organizations, method
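A minimal sketch of the Cronbach's alpha computation used for the reliability check, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The questionnaire scores below are toy data, not from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# toy usage: 5 respondents answering a 4-item questionnaire block
scores = [[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```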
Procedia PDF Downloads 356
18157 A Safety Analysis Method for Multi-Agent Systems
Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller
Abstract:
Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an 'Interaction Map', a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore 'what if' scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
Keywords: multi-agent system, safety analysis, safety model, interaction map
Procedia PDF Downloads 417
18156 A Holistic Workflow Modeling Method for Business Process Redesign
Authors: Heejung Lee
Abstract:
In a highly competitive environment, it becomes more important to shorten the whole business process while delivering or even enhancing business value to customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve short lead times. This paper presents a holistic workflow structuring method that can reduce process complexity using activity needs and formal concept analysis, which eventually enhances key performance measures such as quality, delivery, and cost in the business process.
Keywords: workflow management, re-engineering, formal concept analysis, business process
Procedia PDF Downloads 409
18155 Fire Smoke Removal over Cu-Mn-Ce Oxide Catalyst with CO₂ Sorbent Addition: CO Oxidation and In-Situ CO₂ Sorption
Authors: Jin Lin, Shouxiang Lu, Kim Meow Liew
Abstract:
In a fire accident, smoke often poses a serious threat to human safety, especially in enclosed spaces such as submarine and spacecraft environments. Efficient removal of the hazardous gas products, particularly the large amounts of CO and CO₂, from these confined spaces is critical for the safety of the crew and necessary for post-fire environment recovery. In this work, Cu-Mn-Ce composite oxide catalysts coupled with CO₂ sorbents were prepared using the wet impregnation method, the solid-state impregnation method, and the wet/solid-state impregnation method. The as-prepared samples were tested dynamically and isothermally for CO oxidation and CO₂ sorption and further characterized by X-ray diffraction (XRD), nitrogen adsorption and desorption, and field emission scanning electron microscopy (FE-SEM). The results showed that all the samples were able to catalyze CO into CO₂ and capture CO₂ in situ by chemisorption. Among all the samples, the sample synthesized by the wet/solid-state impregnation method showed the highest catalytic activity toward CO oxidation and good CO₂ sorption ability. The sample prepared by the solid-state impregnation method showed the second-best CO oxidation performance, while the sample prepared using the wet impregnation method exhibited much poorer CO oxidation activity. The different CO oxidation and CO₂ sorption properties of the samples might arise from the different dispersion states of the CO₂ sorbent in the CO oxidation catalyst, owing to the different preparation methods. XRD results confirmed a highly dispersed sorbent phase in the samples prepared by the wet and solid-state impregnation methods, while the sample prepared by the wet/solid-state impregnation method showed a larger bulk phase, as indicated by the high-intensity diffraction peaks. Nitrogen adsorption and desorption results further revealed that the latter sample had a higher surface area and pore volume, which were beneficial for CO oxidation over the catalyst. Hence, the Cu-Mn-Ce oxide catalyst coupled with a CO₂ sorbent by the wet/solid-state impregnation method could be a good choice for fire smoke removal in enclosed spaces.
Keywords: CO oxidation, CO₂ sorption, preparation methods, smoke removal
Procedia PDF Downloads 139
18154 K-Means Clustering-Based Infinite Feature Selection Method
Authors: Seyyedeh Faezeh Hassani Ziabari, Sadegh Eskandari, Maziar Salahi
Abstract:
The Infinite Feature Selection (IFS) algorithm is an efficient feature selection algorithm that considers subsets of features of all sizes (including infinity). In this paper, we present an improved version of it, called clustering IFS (CIFS), obtained by clustering the dataset in advance. To do so, we first apply the K-means algorithm to cluster the dataset, and then we apply IFS. In the CIFS method, the spatial and temporal complexities are reduced compared to the IFS method. Experimental results on 6 datasets show the superiority of CIFS over IFS in terms of accuracy, running time, and memory consumption.
Keywords: feature selection, infinite feature selection, clustering, graph
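A loosely hedged sketch of the cluster-then-select idea, assuming one plausible reading in which K-means partitions the samples and a per-cluster feature score is merged across clusters. A plain variance score stands in for the IFS ranking, which is not reimplemented here, the data are random, and the paper's exact combination of K-means and IFS may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def cifs_sketch(X, n_clusters=3, n_keep=5):
    """Cluster the samples with K-means, score features inside each cluster
    (variance as a stand-in for the IFS ranking), average the scores across
    clusters, and keep the top-ranked features."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    scores = np.zeros(X.shape[1])
    for c in range(n_clusters):
        scores += X[labels == c].var(axis=0)
    scores /= n_clusters
    return np.argsort(scores)[::-1][:n_keep]     # indices of the kept features

X = np.random.default_rng(3).standard_normal((300, 40))
print(cifs_sketch(X))
```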
Procedia PDF Downloads 128
18153 An Optimal Control Model to Determine Body Forces of Stokes Flow
Authors: Yuanhao Gao, Pin Lin, Kees Weijer
Abstract:
In this paper, we determine the external body force distribution for Stokes fluid motion using mathematical modelling and numerical approximation. The body force distribution is regarded as the unknown variable and is determined using ideas from optimal control theory. The Stokes flow and its velocity are generated by given forces in a unit square domain. A regularized objective functional is built to match the numerically computed flow velocity with the generated velocity data, so that the force distribution can be determined by minimizing the value of the objective functional, which measures the difference between the numerical and experimental velocities. After applying the Lagrange multiplier method, a set of partial differential equations constituting the optimal control system is formulated and solved. The finite element method and the conjugate gradient method are used to discretize the equations and to derive the iterative expression of the target body force, from which the velocity and the body force distribution are computed numerically. The programming environment FreeFEM++ supports the implementation of this model.
Keywords: optimal control model, Stokes equation, finite element method, conjugate gradient method
Procedia PDF Downloads 405
18152 Fluorination Renders the Wood Surface Hydrophobic without Any Loss of Physical and Mechanical Properties
Authors: Martial Pouzet, Marc Dubois, Karine Charlet, Alexis Béakou
Abstract:
The availability and the ecological and economic characteristics of wood are advantages that explain the very wide scope of applications of this material in several domains, such as the paper industry, furniture, carpentry, and building. However, wood is a hygroscopic material highly sensitive to ambient humidity and temperature. The swelling and shrinking caused by water absorption and desorption cycles lead to cracking and deformation in the wood volume, making it unsuitable for such applications. In this study, dynamic fluorination using F2 gas was applied to wood samples (Douglas fir and silver fir species) to decrease their hydrophilic character. The covalent grafting of fluorine atoms onto the wood surface through conversion of C-OH groups into C-F was validated by Fourier-transform infrared spectroscopy and 19F solid-state nuclear magnetic resonance. It revealed that the wood, which is initially hydrophilic, acquired a hydrophobic character comparable to that of Teflon, thanks to fluorination. Good durability of this treatment was also determined by aging tests under ambient atmosphere and under UV irradiation. Moreover, this treatment produced the hydrophobic character without major structural (morphology, density, and colour) or mechanical changes. The preservation of these properties after fluorination, which requires neither toxic solvents nor heating, appears to be a remarkable advantage over other, more traditional physical and chemical wood treatments.
Keywords: cellulose, spectroscopy, surface treatment, water absorption
Procedia PDF Downloads 202
18151 Structural Properties of Surface Modified PVA: Zn97Pr3O Polymer Nanocomposite Free Standing Films
Authors: Pandiyarajan Thangaraj, Mangalaraja Ramalinga Viswanathan, Karthikeyan Balasubramanian, Héctor D. Mansilla, José Ruiz
Abstract:
Rare-earth-ion-doped semiconductor nanostructures have gained much attention due to their novel physical and chemical properties, which lead to potential applications in laser technology as inexpensive luminescent materials. Doping of rare earth ions into the ZnO semiconductor alters its electronic structure and emission properties. Surface modification (polymer covering) is one of the simplest techniques to modify the emission characteristics of host materials. The present work reports the synthesis and structural properties of PVA:Zn97Pr3O polymer nanocomposite free-standing films. To prepare Pr3+ doped ZnO nanostructures and PVA:Zn97Pr3O polymer nanocomposite free-standing films, the colloidal chemical and solution casting techniques were adopted, respectively. The formation of the PVA:Zn97Pr3O films was confirmed through X-ray diffraction (XRD), absorption, and Fourier transform infrared (FTIR) spectroscopy analyses. XRD measurements confirm that the prepared materials are crystalline, having the hexagonal wurtzite structure. The polymer composite film exhibits the diffraction peaks of both PVA and ZnO structures. TEM images reveal that the pure and Pr3+ doped ZnO nanostructures exhibit a sheet-like morphology. Optical absorption spectra show the free excitonic absorption band of ZnO at 370 nm, while the PVA:Zn97Pr3O polymer film shows absorption bands at ~282 and 368 nm; these arise from carbonyl-containing structures connected to the PVA polymeric chains, mainly at the chain ends, and from the free excitonic absorption of the ZnO nanostructures, respectively. The transmission spectrum of the as-prepared film shows 57 to 69% transparency in the visible and near-IR region. FTIR spectral studies confirm the presence of the A1 (TO) and E1 (TO) modes of Zn-O bond vibration and the formation of the polymer composite materials.
Keywords: rare earth doped ZnO, polymer composites, structural characterization, surface modification
Procedia PDF Downloads 362
18150 Influence of Recycled Polymer-Based Aggregates on Mechanical Properties of Polymer Concrete
Authors: Ahmet Kurklu, Abdussamed Sarp, Gokmen Arikan, Akin Eren, Arif Ulu, Ferit Cakir
Abstract:
Our natural resources are diminishing day by day with the needs of the growing world population. There is a danger that these resources will be depleted if they are not used in a controlled manner. As a result of the rapid increase in the consumption of limited natural resources, recycling has become one of the issues in which research has gained importance. Many countries have carried out various research and development activities on recycling and reuse to prevent the wastage of resources. For sustainable and healthy living, the limited raw material resources in nature should be consumed consciously, and the necessary awareness should be raised about recycling activities. One of the sectors where the consumption of raw materials is high is the construction sector. With the changing consumption habits and evolving technology in the construction sector, the need to use special concretes alongside normal concrete has arisen. With the increasing need for specialty concretes, polymer concrete, which was discovered in the early 1900s, has evolved to the present day. Polymer concretes are special concretes with high strength, water impermeability, resistance to chemical attack, and low surface roughness. Thanks to these properties, they find wide application in many fields such as swimming pools, drainage systems, and repair works. In this study, the effect of using recycled aggregates instead of natural aggregates in the production of polymer concrete on the performance of the polymer concrete is investigated. In the experiments conducted for this purpose, the use of natural aggregate is reduced at certain rates, and recycled aggregate is added instead at the same rates. The recycled aggregate used in the study is obtained from the polymer concrete drainage channel production facility of Mert Casting Co., Istanbul, Turkey. In order to clearly observe the effect of the recycled material on the product, the other components (resin, hardener, accelerator, and additive) are kept constant in the concrete mix. Fresh and hardened concrete tests are carried out on the prepared mixes.
Keywords: concrete, mechanical properties, polymer concrete, recycle aggregate
Procedia PDF Downloads 144