Search results for: linear acceleration method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21196

18856 BTEX (Benzene, Toluene, Ethylbenzene and Xylene) Degradation by Cold Plasma

Authors: Anelise Leal Vieira Cubas, Marina de Medeiros Machado, Marília de Medeiros Machado

Abstract:

The volatile organic compounds BTEX (benzene, toluene, ethylbenzene, and xylene) are petroleum derivatives with high toxicity, which may have consequences for human health, biota, and the environment. In this direction, this paper proposes a method for treating these compounds using corona discharge plasma technology. The efficiency of the method was tested by analyzing BTEX samples by gas chromatography after they had passed through a plasma reactor. The results show that the optimal residence time of the sample in the reactor was 8 minutes.

Keywords: BTEX, degradation, cold plasma, ecological sciences

Procedia PDF Downloads 309
18855 Optimization the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrode to Consider Applications in Electro-Optical Sensors

Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar

Abstract:

Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15000 cm²/(V·s)) at room temperature. Recently, several advances have been achieved in the fabrication of single- or multilayer GS for functional device applications in optoelectronics, such as field-effect transistors, ultrasensitive sensors, and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films. It is readily applied to any powdered solid that forms a stable suspension, and the deposition parameters can be controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. The results were obtained from SEM, ohmic resistance measurements, and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at 2 V for 10 s, followed by annealing at 200 °C for 1 minute.

Keywords: electrophoretic deposition (EPD), graphene oxide (GO), electrical conductivity, electro-optical devices

Procedia PDF Downloads 180
18854 Determine the Optimal Path of Content Adaptation Services with Max Heap Tree

Authors: Shilan Rahmani Azr, Siavash Emtiyaz

Abstract:

Recent developments in computing and communication technologies make mobile access to information much easier. Users can access information from different places using various devices with widely varying capabilities, while the format and detail of electronic documents change every day. In these cases, a mismatch arises between the content and the client's capabilities. Recently, service-oriented content adaptation has been developed, in which the adaptation tasks are delegated to a set of external services. In this approach, the main problem is to choose the most appropriate service among the accessible, distributed services. In this paper, a method for determining the optimal path to the best services, based on quality-of-service parameters and user preferences, is proposed using a max heap tree. The efficiency of this method, in contrast to previous content adaptation methods, lies in determining the optimal path through the services with the best measured quality. The results show the advantages of this method compared with the others.
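As a sketch of the selection step described above, each candidate service's quality parameters can be combined with user-preference weights into a score and kept in a max heap. This is a generic illustration, not the paper's exact algorithm; the service names, QoS fields, and weights below are hypothetical.

```python
import heapq

def rank_services(services, weights):
    """Rank candidate adaptation services by a weighted QoS score using
    a max heap. heapq is a min-heap, so scores are pushed negated."""
    heap = []
    for name, qos in services.items():
        # Weighted sum of quality parameters (higher is better).
        score = sum(weights[k] * qos[k] for k in weights)
        heapq.heappush(heap, (-score, name))
    ranked = []
    while heap:
        neg_score, name = heapq.heappop(heap)
        ranked.append((name, -neg_score))
    return ranked  # best service first

# Hypothetical services with hypothetical QoS fields and user weights.
services = {
    "svc_a": {"availability": 0.99, "speed": 0.6},
    "svc_b": {"availability": 0.95, "speed": 0.9},
    "svc_c": {"availability": 0.90, "speed": 0.7},
}
weights = {"availability": 0.7, "speed": 0.3}
print(rank_services(services, weights))
```

Popping the heap repeatedly yields the services in descending order of score, so the head of the list is the best next hop along the adaptation path.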

Keywords: service-oriented content adaption, QoS, max heap tree, web services

Procedia PDF Downloads 250
18853 Investigation of the Effect of Excavation Step in NATM on Surface Settlement by Finite Element Method

Authors: Seyed Mehrdad Gholami

Abstract:

Nowadays, the use of rail transport systems (metro) is increasing in most cities of the world, so the need for safe and economical ways of building tunnels and subway stations is felt more and more. One of the most commonly used methods for constructing underground structures in urban areas is NATM (the New Austrian Tunneling Method). In this method, key parameters such as the excavation step and cross-sectional area have a significant effect on surface settlement, which is a very important control factor for safe excavation. In this paper, the finite element method is applied using Abaqus. The construction of the R6 station of Tehran Metro Line 6, built by NATM, is studied and analyzed. Considering the outcomes obtained from numerical modeling and comparison with the results of field instrumentation and monitoring, an excavation step of 1 meter and a longitudinal distance of 14 meters between side drifts are finally suggested to achieve safe tunneling with allowable settlement.

Keywords: excavation step, NATM, numerical modeling, settlement

Procedia PDF Downloads 130
18852 Solving SPDEs by Least Squares Method

Authors: Hassan Manouzi

Abstract:

We present in this paper a useful strategy for solving stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the Wick product of higher order and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
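The least squares determination of chaos coefficients can be illustrated on a toy expansion. The response function, sample count, and Hermite basis below are illustrative assumptions, not the paper's actual SPDE system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples of a standard Gaussian random variable standing in for the
# stochastic input, and a hypothetical response that is exactly a
# second-order chaos expansion.
xi = rng.standard_normal(200)
u = 1.0 + 0.5 * xi + 0.25 * (xi**2 - 1.0)

# Probabilists' Hermite basis up to order 2: H0 = 1, H1 = xi, H2 = xi^2 - 1.
Psi = np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0])

# Least squares determination of the chaos coefficients.
coeffs, *_ = np.linalg.lstsq(Psi, u, rcond=None)

# Statistics follow directly from the coefficients
# (E[H1^2] = 1, E[H2^2] = 2 for standard Gaussian input).
mean = coeffs[0]
variance = coeffs[1] ** 2 + 2.0 * coeffs[2] ** 2
print(coeffs, mean, variance)
```

Because the orthogonality of the basis carries the statistics, the mean and variance of the solution drop out of the fitted coefficients with no further sampling, which is the "easily evaluated" step mentioned in the abstract.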

Keywords: least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method

Procedia PDF Downloads 412
18851 Ground Deformation Module for the New Laboratory Methods

Authors: O. Giorgishvili

Abstract:

One of the important characteristics for the calculation of foundations is the modulus of deformation (E0). The main goal of calculating building foundations for deformation is to keep the base settlement, and the differences in settlement, within limits that do not cause cracks or changes in design levels that would endanger the normal operation of the buildings and their individual structures. As is known from the literature and practical application, the modulus of deformation is determined by two basic methods: the laboratory method, a soil compression test without lateral expansion, and soil testing in field conditions. The deformation modulus determined by the field method is closer to the actual modulus of the soil, but the complexity and cost of the tests often rule out its use, so the modulus is usually determined by the compression method without lateral expansion. In this regard, we introduce a new laboratory procedure for determining the ground modulus of deformation that allows lateral expansion. The tests and their results showed that the modulus obtained by the proposed method is closer to the values obtained in the field, and thus reflects the real behavior of the foundation more accurately than the conventional compression test.

Keywords: build, deformation modulus, foundations, ground, laboratory research

Procedia PDF Downloads 365
18850 ‘An Invisible Labyrinth of Time’: Temporal Disjunction in J.M. Coetzee’s Dusklands

Authors: Barbara Janari

Abstract:

This paper focuses on temporality in J.M. Coetzee’s first novel, Dusklands, to argue that the novel’s fractured, disjointed temporality is intricately linked to the representations of the war in Vietnam and the colonial project in South Africa. The disrupted temporalities in the novel eschew chronological plots and linear time in favour of narratives that subvert the notion of historical progress to suggest instead the coextensive, multivalent ways in which the past and present interpenetrate one another. The disruption of temporal flow in the novel is evident in its form – the novel comprises two novellas that are juxtaposed, with the first part (‘The Vietnam War’) set centuries after the second part (‘The Narrative of Jacobus Coetzee’). The juxtaposition of the two novellas suggests history’s sometimes overlapping and lateral, rather than linear, movement. This is carried further in the novel’s montage narrative structure, which extends its temporal range. The temporal disjunction is reinforced, firstly, by Coetzee’s textual strategies, which include the subversion and critique of realism, parody, repetition, and the narrative technique of montage, and secondly, by the novel’s thematic concerns, which focus on the ways in which American domination can be linked to the colonial quest of earlier times. The complex structure of various strands and levels of authorship slows the narrative’s temporal flow, requiring the reader to spend a fair amount of time unraveling the various parts of the narrative and relating them to each other. The structure epitomizes reflexive referencing, in which the reader can only make sense of the narrative by going back and forth, connecting its various parts. The narrative structure also emphasizes the underlying similarities in the brutality that marked these two distinct historical events, epitomized by the drive towards subjection and domination of the novel’s two protagonists, Eugene Dawn and Jacobus Coetzee.
The links and overlapping strands between the two novellas emphasize the ways in which the historical truth of colonial discourse becomes as much a myth as the propaganda program in Vietnam.

Keywords: disjunction, juxtaposition, montage, temporality

Procedia PDF Downloads 61
18849 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and sometimes the chemical and physical phenomena of polymer mixtures are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of the monomer conversion, the number-average molecular weight, and the weight-average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
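The adaptive sampling idea — placing more samples where the values vary more — can be sketched as follows. The coarse-scan allocation scheme and the steep test function are a generic illustration, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_sample(f, lo, hi, n_coarse=16, n_total=100):
    """Place more samples of f where it varies faster.

    A coarse uniform scan estimates the local variation |df| on each
    interval; the remaining sample budget is then allocated to the
    intervals in proportion to that variation.
    """
    x = np.linspace(lo, hi, n_coarse)
    y = f(x)
    var = np.abs(np.diff(y))
    total = var.sum()
    var = var / total if total > 0 else np.full(n_coarse - 1, 1.0 / (n_coarse - 1))
    extra = np.round(var * (n_total - n_coarse)).astype(int)
    samples = list(x)
    for i, k in enumerate(extra):
        # k interior points inside interval [x[i], x[i+1]].
        samples.extend(np.linspace(x[i], x[i + 1], k + 2)[1:-1])
    return np.sort(np.array(samples))

# Steep sigmoid: most variation near x = 0, so samples cluster there,
# much like the gel-effect region concentrates samples in the paper's setting.
f = lambda x: 1.0 / (1.0 + np.exp(-20 * x))
s = adaptive_sample(f, -1.0, 1.0)
print(len(s[np.abs(s) < 0.25]), "of", len(s), "samples fall near the steep region")
```

The same budget-allocation principle generalizes to higher dimensions by ranking candidate regions by their estimated output variation.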

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 298
18848 Descent Algorithms for Optimization Algorithms Using q-Derivative

Authors: Geetanjali Panda, Suvrakanti Chakraborty

Abstract:

In this paper, Newton-like descent methods are proposed for unconstrained optimization problems; these methods use q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and then the method is extended to a global scheme. Moreover, a variant of the practical Newton scheme is also developed by introducing a real sequence. Global convergence of these schemes is proved under some mild conditions. Numerical experiments and graphical illustrations are provided. Finally, performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
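The Jackson q-derivative underlying such schemes is easy to state. The following minimal q-gradient descent is only a simplified first-order illustration of the idea (the paper's actual schemes are Newton-like), and the test function is hypothetical.

```python
def q_derivative(f, x, q=0.999, h=1e-6):
    """Jackson q-derivative D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
    Falls back to a central difference at x = 0, where D_q is undefined."""
    if abs(x) < 1e-12:
        return (f(x + h) - f(x - h)) / (2 * h)
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_gradient_descent(f, x0, q=0.999, lr=0.1, iters=200):
    """Descend along the q-derivative. As q -> 1 the q-derivative
    approaches the classical derivative, so the iterates approach the
    ordinary stationary point."""
    x = x0
    for _ in range(iters):
        x -= lr * q_derivative(f, x, q)
    return x

# Minimise the hypothetical test function f(x) = (x - 2)^2.
f = lambda x: (x - 2.0) ** 2
x_star = q_gradient_descent(f, x0=5.0)
print(x_star)
```

Note that for q fixed away from 1 the iterates converge to the zero of the q-derivative, which differs slightly from the classical minimiser; choosing q near 1 (here 0.999) keeps that bias small.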

Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method

Procedia PDF Downloads 392
18847 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan; its duration is mainly due to the traditional sampling theorem, which defines a lower bound on the sampling rate. However, it is still possible to accelerate the scan by using a different approach, such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have led to tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 84
18846 Size Effects on Structural Performance of Concrete Gravity Dams

Authors: Mehmet Akköse

Abstract:

Concern about the seismic safety of concrete dams has been growing around the world, partly because the population at risk downstream of major dams continues to expand and also because it is increasingly evident that the seismic design concepts in use at the time most existing dams were built were inadequate. Most investigations in the past have been conducted on large dams, typically above 100 m high, while a large number of concrete dams in our country and in other parts of the world are less than 50 m high. Most of these dams were designed using pseudo-static methods, ignoring the dynamic characteristics of the structure as well as the characteristics of the ground motion. Therefore, it is important to investigate the seismic behavior of this category of dam in order to assess and evaluate the safety of existing dams and to improve the knowledge base for high dams to be constructed in the future. In this study, size effects on the structural performance of concrete gravity dams subjected to near- and far-fault ground motions are investigated, including dam-water-foundation interaction. For this purpose, a benchmark problem proposed by ICOLD (the International Commission on Large Dams) is chosen as a numerical application. The structural performance of the dam at five different heights is evaluated according to the damage criteria of USACE (U.S. Army Corps of Engineers), and it is decided from this performance whether non-linear analysis of the dams is required. The linear elastic dynamic analyses of the dams under near- and far-fault ground motions are performed using step-by-step integration with a time step of 0.0025 s. The Rayleigh damping constants are calculated assuming a 5% damping ratio. The program NONSAP, modified for fluid-structure systems with the Lagrangian fluid finite element, is employed in the response calculations.
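The Rayleigh damping constants mentioned above follow from enforcing the target damping ratio at two frequencies. A minimal sketch, with hypothetical mode frequencies standing in for the dam's actual modes:

```python
import math

def rayleigh_constants(f1_hz, f2_hz, zeta=0.05):
    """Mass- and stiffness-proportional Rayleigh constants (alpha, beta)
    giving damping ratio `zeta` at two target frequencies.

    From zeta(w) = alpha / (2 w) + beta w / 2 enforced at w1 and w2:
        alpha = 2 zeta w1 w2 / (w1 + w2),  beta = 2 zeta / (w1 + w2).
    """
    w1 = 2 * math.pi * f1_hz
    w2 = 2 * math.pi * f2_hz
    alpha = 2 * zeta * w1 * w2 / (w1 + w2)
    beta = 2 * zeta / (w1 + w2)
    return alpha, beta

# Hypothetical first and second mode frequencies of a dam monolith (Hz).
alpha, beta = rayleigh_constants(3.0, 10.0, zeta=0.05)

# Check: the resulting curve reproduces 5% damping at both targets.
zeta_at = lambda w: alpha / (2 * w) + beta * w / 2
print(alpha, beta, zeta_at(2 * math.pi * 3.0), zeta_at(2 * math.pi * 10.0))
```

Between the two target frequencies the effective damping dips slightly below 5% and rises above it outside that band, which is why the anchor frequencies are usually chosen to bracket the modes that dominate the response.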

Keywords: concrete gravity dams, Lagrangian approach, near and far-fault ground motion, USACE damage criteria

Procedia PDF Downloads 267
18845 Bisphenol-A Concentrations in Urine and Drinking Water Samples of Adults Living in Ankara

Authors: Hasan Atakan Sengul, Nergis Canturk, Bahar Erbas

Abstract:

Drinking water is indispensable for life. With the increasing awareness of communities, the content of drinking water and tap water has become a matter of curiosity, and the presence of Bisphenol-A is at the top of the list of concerns. Bisphenol-A is the chemical most used worldwide for the production of polycarbonate plastics and epoxy resins. People are exposed to Bisphenol-A, a chemical which disrupts the endocrine system, almost every day; an average of 5.4 billion kilograms of Bisphenol-A is manufactured each year. The linear formula of Bisphenol-A is (CH₃)₂C(C₆H₄OH)₂, its molecular weight is 228.29, and its CAS number is 80-05-7. Bisphenol-A is known to be used in the manufacture of plastics, along with various other chemicals. As an industrial chemical, it is used as a raw material of packaging, in the monomers of polycarbonate and epoxy resins. Bisphenol-A migrates into food from the packaging, and the contaminated food enters the body when consumed. International research shows that BPA is transported through body fluids, leading to hormonal disorders in animals. Experimental studies on animals report that BPA exposure also affects the sex of the newborn and its time to reach adolescence. The extent to which similar endocrine-disrupting effects occur in humans is a topic of debate in many studies. In our country, detailed studies on BPA have not been done. However, 'BPA-free' labels are beginning to appear on plastic packaging such as baby products and water carboys. This situation increases society's interest in the subject, yet it also causes misinformation. All national and international studies on BPA exposure were examined, and the province of Ankara was designated as the testing region.
To assess the effects of plastic use in people's daily habits and the amounts of BPA removed from the body, samples from volunteers living in Ankara were analyzed by LC-MS/MS on a Sciex instrument, and the amounts of BPA exposure and excretion were determined by comparison with previously reported results. The results were compared with similar international studies and the relationships between them were examined. For BPA in drinking water, a minimum of 0.028 µg/L, a maximum of 1.136 µg/L, a mean of 0.29194 µg/L, and SD (standard deviation) = 0.199 were found. For BPA in urine, a minimum of 0.028 µg/L, a maximum of 0.48 µg/L, a mean of 0.19181 µg/L, and SD = 0.099 were found. In conclusion, no linear correlation was found between the amount of BPA in drinking water and the amount of BPA in urine (r = -0.151). The p value for the comparison of BPA amounts in drinking water and urine is 0.004, which indicates a statistically significant difference between the two (p < 0.05). This reveals that environmental exposure and daily plastic-use habits also have direct effects on the human body.

Keywords: analyze of bisphenol-A, BPA, BPA in drinking water, BPA in urine

Procedia PDF Downloads 125
18844 Limited Ventilation Efficacy of Prehospital I-Gel Insertion in Out-of-Hospital Cardiac Arrest Patients

Authors: Eunhye Cho, Hyuk-Hoon Kim, Sieun Lee, Minjung Kathy Chae

Abstract:

Introduction: The i-gel is a commonly used supraglottic advanced airway device in prehospital out-of-hospital cardiac arrest (OHCA), allowing for minimal interruption of continuous chest compressions. However, previous studies have shown that a prehospital supraglottic airway had inferior neurologic outcomes and survival compared to no advanced prehospital airway with conventional bag-valve-mask (BVM) ventilation. We hypothesized that continuous compression with an i-gel as an advanced airway may cause insufficient ventilation compared to 30:2 chest compressions with conventional BVM ventilation. We therefore investigated the ventilation efficacy of the i-gel using the initial arterial blood gas analysis of OHCA patients visiting our ER. Material and Method: Demographics, arrest parameters including i-gel insertion, and the initial arterial blood gas analysis were retrospectively analysed for 119 transported OHCA patients who visited our ER. Linear regression was performed to investigate the association between i-gel insertion and initial pCO2 as a surrogate of prehospital ventilation. Result: A total of 52 patients were included in the analysis. Of the patients who arrived at the ER during OHCA, 24 had i-gel insertion and 28 had BVM as prehospital airway management. Prehospital i-gel insertion was associated with the initial pCO2 level (B coefficient 29.9, SE 10.1, p < 0.01) after adjusting for bystander CPR, cardiogenic cause of arrest, and EMS call-to-arrival time. Conclusion: Despite the study's many limitations, prehospital insertion of an i-gel was associated with high initial pCO2 values in OHCA patients visiting our ER, possibly indicating insufficient ventilation with a prehospital i-gel as an advanced airway under continuous chest compressions.
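A multivariable linear model of the kind used here (pCO2 regressed on i-gel insertion with adjustment covariates) can be sketched on synthetic data. All values below are simulated stand-ins, not study data, and the covariate scales are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 52  # size of the analysed cohort in the abstract

# Simulated stand-ins for the study variables (illustration only).
igel = rng.integers(0, 2, n)               # 1 = prehospital i-gel inserted
bystander_cpr = rng.integers(0, 2, n)      # adjustment covariate
call_to_arrival = rng.normal(8.0, 2.0, n)  # minutes, hypothetical scale
pco2 = (60.0 + 29.9 * igel - 5.0 * bystander_cpr
        + 1.5 * call_to_arrival + rng.normal(0.0, 3.0, n))

# Multivariable linear model: pCO2 ~ intercept + i-gel + adjusters.
X = np.column_stack([np.ones(n), igel, bystander_cpr, call_to_arrival])
coef, *_ = np.linalg.lstsq(X, pco2, rcond=None)
print("adjusted i-gel coefficient (B):", coef[1])
```

The coefficient on the i-gel indicator is the adjusted B reported in such analyses; on this synthetic data it recovers the value built into the simulation.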

Keywords: arrest, I-gel, prehospital, ventilation

Procedia PDF Downloads 332
18843 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for monitoring the Earth's surface. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to that of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims to take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 83
18842 An Event Relationship Extraction Method Incorporating Deep Feedback Recurrent Neural Network and Bidirectional Long Short-Term Memory

Authors: Yin Yuanling

Abstract:

To address the low accuracy of traditional relation extraction models, this paper combines a Deep Feedback Recurrent Neural Network (DFRNN) with Bidirectional Long Short-Term Memory (BiLSTM). The method combines DFRNN, which extracts local features of the text based on a deep feedback recurrent mechanism; BiLSTM, which better extracts global features of the text; and Self-Attention, which extracts semantic information. Experiments show that the method achieves an F1 value of 76.69% on the CEC dataset, 0.0652 better than the BiLSTM+Self-ATT model, thus improving the performance of deep learning methods on the event relation extraction task.

Keywords: event relations, deep learning, DFRNN models, bi-directional long and short-term memory networks

Procedia PDF Downloads 135
18841 Bi-Directional Evolutionary Topology Optimization Based on Critical Fatigue Constraint

Authors: Khodamorad Nabaki, Jianhu Shen, Xiaodong Huang

Abstract:

This paper develops a method for considering the critical fatigue stress as a constraint in the Bi-directional Evolutionary Structural Optimization (BESO) method. Our aim is to reach an optimal design in which high-cycle fatigue failure does not occur for a specified lifetime. The critical fatigue stress is calculated based on the modified Goodman criterion and used as a stress constraint in our topology optimization problem. Since fatigue generally does not occur under compressive stresses, we use the p-norm approach to stress measurement, which considers the highest tensile principal stress at each point as the stress measure for calculating the sensitivity numbers. The BESO method has been extended to minimize the volume of an object subjected to the critical fatigue stress constraint. The optimization results are compared with results from the compliance minimization problem, which clearly shows the merits of our newly developed approach.
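The p-norm stress measure mentioned above can be sketched as follows; the element stresses and exponent are hypothetical values for illustration.

```python
import numpy as np

def p_norm_stress(principal_stresses, p=8):
    """Aggregate element-wise maximum tensile principal stresses into a
    single smooth measure (a sketch of the p-norm idea in the abstract).

    Compressive (negative) principal stresses are clipped to zero, since
    fatigue is driven by tension; a larger p approaches the true maximum
    while keeping the measure differentiable for sensitivity analysis.
    """
    tensile = np.maximum(principal_stresses, 0.0)
    return np.sum(tensile ** p) ** (1.0 / p)

# Hypothetical highest tensile principal stress per element (MPa).
sigma = np.array([120.0, 80.0, -50.0, 95.0])
print(p_norm_stress(sigma, p=8))  # slightly above the true max of 120 MPa
```

The smooth over-estimate of the peak stress is what gets compared against the critical fatigue stress in the constraint, and its gradient with respect to element densities feeds the sensitivity numbers.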

Keywords: topology optimization, BESO method, p-norm, fatigue constraint

Procedia PDF Downloads 289
18840 Study of Natural Convection Heat Transfer of Plate-Fin Heat Sink

Authors: Han-Taw Chen, Tzu-Hsiang Lin, Chung-Hou Lai

Abstract:

This study applies the inverse method and three-dimensional commercial CFD software, in conjunction with experimental temperature data, to investigate the heat transfer and fluid flow characteristics of a plate-fin heat sink in a rectangular closed enclosure. The inverse method, together with the finite difference method and the experimental temperature data, is applied to determine the approximate heat transfer coefficient. Based on the obtained results, the zero-equation turbulence model is then used to obtain the heat transfer and fluid flow characteristics between two fins. To validate the accuracy of the results, the heat transfer coefficients are compared, and the obtained temperatures at selected measurement locations on the fin are also compared with experimental data. The effect of the height of the rectangular enclosure on the results is discussed.

Keywords: inverse method, Fluent, heat transfer characteristics, plate-fin heat sink

Procedia PDF Downloads 387
18839 Influence of Molecular and Supramolecular Structure on Thermally Stimulated Short-Circuit Currents in Polyvinylidene Fluoride Films

Authors: Temnov D., Volgina E., Gerasimov D.

Abstract:

Relaxation processes in polyvinylidene fluoride (PVDF) films were studied by the method of thermally stimulated fractional polarization currents (TSTF). The films were obtained by extrusion of a polymer melt followed by isometric annealing. PVDF granules of the Kynar-720 brand (Atofina Chemicals, USA) with a molecular weight of Mw = 190,000 g·mol⁻¹ were used for the manufacture of the films. The annealing temperature was varied from 120 °C to 170 °C in increments of 10 °C. The dependences of the degree of crystallinity of the films (χ) and of the intensity of the thermally stimulated depolarization currents on the annealing temperature were investigated. The TSTF spectra were obtained on a TSC II instrument (Setaram, France). Measurements were carried out in a helium atmosphere, and the current values were recorded with a Keithley electrometer. The annealed PVDF films were polarized at an electric field strength of 100 V/mm at a temperature of 31 °C, after which they were cooled to 26 °C and kept there for 1 minute. During depolarization, the external field was removed and the short-circuited sample was cooled to 0 °C. The thermally stimulated short-circuit current was then recorded during linear heating. Relaxation processes in the PVDF films were studied in the temperature range 0-70 °C. It is shown that the intensity of the TSTF peaks follows a course that is the reverse of the dependence of the degree of crystallinity on the annealing temperature. This allows us to conclude that the relaxation processes occurring in PVDF around 35 °C are associated with the amorphous part of the film structure between the layers of the spherulitic crystalline phase.

Keywords: molecular and supramolecular structure, thermally stimulated currents, polyvinylidene fluoride films, relaxation processes

Procedia PDF Downloads 42
18838 Online Estimation of Clutch Drag Torque in Wet Dual Clutch Transmission Based on Recursive Least Squares

Authors: Hongkui Li, Tongli Lu, Jianwu Zhang

Abstract:

This paper focuses on developing a method for estimating the clutch drag torque in a wet DCT. The modelling of the clutch drag torque is investigated, and the dynamic viscosity of the oil, the main factor affecting the drag torque, is discussed. The paper proposes an estimation method based on recursive least squares that utilizes the dynamic equations of the gear-shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
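
A minimal sketch of the recursive-least-squares idea, assuming a scalar drag-torque model T_drag = θ·ω_rel in which ω_rel is the clutch relative speed; the model form, the forgetting factor and all numbers are illustrative assumptions, not the authors' exact formulation:

```python
# Hedged sketch: scalar recursive least squares (RLS) identifying a
# drag-torque coefficient theta in the assumed model T_drag = theta * omega_rel.

def rls_update(theta, P, phi, y, lam=0.98):
    """One RLS step with forgetting factor lam (0 < lam <= 1)."""
    K = P * phi / (lam + phi * P * phi)      # gain
    theta = theta + K * (y - phi * theta)    # parameter update
    P = (P - K * phi * P) / lam              # covariance update
    return theta, P

# Toy run: noiseless measurements generated with theta_true = 0.8
theta, P = 0.0, 1e3
for omega_rel in [10.0, 25.0, 40.0, 60.0]:
    T_drag = 0.8 * omega_rel                 # synthetic measurement
    theta, P = rls_update(theta, P, omega_rel, T_drag)
print(round(theta, 3))  # → 0.8
```

The forgetting factor below 1 lets the estimate track the slow change of oil viscosity with temperature, which the abstract identifies as the dominant effect.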

Keywords: clutch drag torque, wet DCT, dynamic viscosity, recursive least squares

Procedia PDF Downloads 315
18837 Design Systems and the Need for a Usability Method: Assessing the Fitness of Components and Interaction Patterns in Design Systems Using Atmosphere Methodology

Authors: Patrik Johansson, Selina Mardh

Abstract:

The present study proposes a usability test method, Atmosphere, to assess the fitness of components and interaction patterns of design systems. The method covers the user's perception of the components of the system, the efficiency of the logic of the interaction patterns, perceived ease of use, and the user's understanding of the intended outcome of interactions. These aspects are assessed by combining measures of first impression, visual affordance and expectancy. The method was applied to a design system developed for the design of an electronic health record system, in a study involving 15 healthcare personnel. It could be concluded that the Atmosphere method provides tangible data that enable human-computer interaction practitioners to analyze and categorize components and patterns based on perceived usability, the success rate of identifying interactive components, and the success rate of understanding the intended outcome of components and interaction patterns.

Keywords: atomic design, atmosphere methodology, design system, expectancy testing, first impression testing, usability testing, visual affordance testing

Procedia PDF Downloads 171
18836 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes

Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun

Abstract:

The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving calibration accuracy and saving calibration time. To reduce the time consumed by calibration, we should choose proper calibration gauges and develop a reasonable calibration method; in addition, we should obtain the exact optimal solution by accurately removing gross errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove gross errors from the calibration data. Finally, we find the optimal solution identifying the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
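
The combination of gross-error removal and parameter identification can be illustrated on a toy problem. The paper identifies many kinematic parameters of a nonlinear arm model with Levenberg-Marquardt; this sketch reduces the idea to a single scale-factor error identified in closed form after a simple median-based gross-error screen (a stand-in for the paper's own rejection method; all numbers are invented):

```python
# Hedged sketch: reject gross errors, then identify a scale factor k
# in the assumed relation measured ≈ k * nominal (toy stand-in for the
# full PCMA kinematic parameter identification).
import statistics

def identify_scale(nominal, measured, tol=0.05):
    """Median-based gross-error screen followed by least-squares fit of k."""
    ratios = [m / n for m, n in zip(measured, nominal)]
    med = statistics.median(ratios)
    # keep only points whose ratio is close to the median (gross-error removal)
    kept = [(n, m) for n, m, r in zip(nominal, measured, ratios)
            if abs(r - med) <= tol]
    # closed-form least squares: k = sum(m*n) / sum(n*n) over kept points
    k = sum(m * n for n, m in kept) / sum(n * n for n, _ in kept)
    return k, len(nominal) - len(kept)

nominal  = [100.0, 200.0, 300.0, 400.0, 500.0]   # bar-gauge lengths, mm
measured = [100.1, 200.2, 300.3, 400.4, 900.0]   # last point is a gross error
k, removed = identify_scale(nominal, measured)
print(k, removed)  # ≈ 1.001, 1
```

Without the screen, the single outlier would drag the fitted scale far from the true value, which is why the abstract stresses accurate removal of gross errors before optimization.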

Keywords: AACMM, kinematic model, parameter identify, measurement accuracy, calibration

Procedia PDF Downloads 70
18835 Management of Urban Watering: A Study of the Application of Technologies and Legislation in Goiania, Brazil

Authors: Vinicius Marzall, Jussanã Milograna

Abstract:

Urban stormwater drainage remains a major challenge for most Brazilian cities. Like most of them, Goiania, a state capital located in the Midwest of the country, has little legislation on the subject and only one registered compensatory-technique solution for stormwater. This paper aims to present solutions adopted in other Brazilian cities with consolidated legislation, suggesting techniques based on detention tanks at the building site. The study analyzed and compared the legislation of Curitiba, Porto Alegre and Sao Paulo with the current legislation and policies of Goiania. Models were then created, with adopted data, for sizing detention tanks using the envelope curve method, considering synthetic series of intense precipitation and building sites between 250 m² and 600 m² with an imperviousness rate of 50%. The results showed great differences between the legislation of Goiania and the documentation of the other cities analyzed, for example in the number of drainage techniques suited to the reality of the cities, in the educational actions to raise the population's awareness of caring for watercourses, and in political management through dedicated funds for drainage matters. Besides, the use of a detention tank proved practicable, given that the tank occupies less than 3% of the building site, whatever the size of the terrain, restoring the outflow to pre-occupation rates in extreme rainfall events. A linear equation was also developed to size the detention tank from the building-site area in Goiania, simplifying the calculation and implementation for non-specialists.
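
The envelope-curve sizing idea can be sketched in a few lines: for each storm duration, the required storage is the inflow volume minus the allowed pre-development outflow, and the design volume is the maximum over durations. The IDF coefficients, runoff coefficient and site numbers below are illustrative assumptions, not Goiania design values:

```python
# Hedged sketch of the envelope curve method for detention-tank sizing,
# with an assumed intensity-duration curve i(d) = a / (d + b) in mm/h.

def detention_volume(area_m2, runoff_coef, q_out_m3s, a=3000.0, b=20.0):
    """Maximum of (inflow volume - allowed outflow volume) over durations."""
    best = 0.0
    for d_min in range(5, 241, 5):                  # storm durations, 5..240 min
        i_mm_h = a / (d_min + b)                    # rainfall intensity, mm/h
        inflow = runoff_coef * (i_mm_h / 1000 / 3600) * area_m2 * d_min * 60
        outflow = q_out_m3s * d_min * 60            # pre-occupation release
        best = max(best, inflow - outflow)
    return best                                     # m³

v = detention_volume(area_m2=600.0, runoff_coef=0.5, q_out_m3s=0.002)
print(round(v, 2))  # ≈ 5.4 m³, well under 3% of a 600 m² site footprint
```

With these assumed inputs the critical duration is an intermediate one, not the shortest storm, which is the point of sweeping durations rather than using a single design event.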

Keywords: clean technology, legislation, rainwater management, urban drainage

Procedia PDF Downloads 148
18834 Performance Enhancement of Hybrid Racing Car by Design Optimization

Authors: Tarang Varmora, Krupa Shah, Karan Patel

Abstract:

Environmental pollution and the shortage of conventional fuel are the main concerns in the transportation sector. Most vehicles use an internal combustion engine (ICE) powered by gasoline fuels, which results in the emission of toxic gases. A hybrid electric vehicle (HEV), powered by an electric machine and an ICE, is capable of reducing both the emission of toxic gases and fuel consumption. However, building an HEV requires accommodating a motor and batteries in the vehicle along with the engine and fuel tank, which increases the overall weight of the vehicle. To improve fuel economy and acceleration, the weight of the HEV can be minimized. In this paper, a design methodology to reduce the weight of a hybrid racing car is proposed. To this end, the chassis design is optimized, and an attempt is made to obtain maximum strength with minimum material weight. The best configuration out of the three main configurations, series, parallel and dual-mode (series-parallel), is chosen. Moreover, the most suitable type of motor, battery, braking system, steering system and suspension system are identified. The racing car is designed and analyzed in simulation software, and the safety of the vehicle is assured by performing static and dynamic analyses on the chassis frame. From the results, it is observed that the weight of the racing car is reduced by 11% without compromising safety or cost. It is believed that the proposed design and specifications can be implemented practically for manufacturing a hybrid racing car.

Keywords: design optimization, hybrid racing car, simulation, vehicle, weight reduction

Procedia PDF Downloads 289
18833 The Employees' Classification Method in the Space of Their Job Satisfaction, Loyalty and Involvement

Authors: Svetlana Ignatjeva, Jelena Slesareva

Abstract:

The aim of the study is the development and adaptation of a method to analyze and quantify indicators characterizing the relationship between a company and its employees. Diagnostics of such indicators is one of the most complex and topical issues in the psychology of labour. The proposed method is based on a questionnaire whose indicators reflect the cognitive, affective and conative components of the socio-psychological attitude of employees towards being as efficient as possible in their professional activities. This approach allows measuring not only the selected factors but also such parameters as cognitive and behavioural dissonance. Adaptation of the questionnaire includes analysis of its factor structure and of the suitability of the measured indicators in terms of the internal consistency of individual factors. The structural validity of the questionnaire was tested by exploratory factor analysis (extraction method: principal component analysis; rotation method: varimax with Kaiser normalization). Factor analysis allows reducing the dimensionality of the phenomena, moving from indicators to aggregate indexes and latent variables. Aggregate indexes are obtained as the sum of the relevant indicators followed by standardization. Cronbach's alpha was used to assess the reliability and consistency of the questionnaire items. Two-step cluster analysis in the space of the extracted factors allows classifying employees according to their attitude to work in the company. The results of psychometric testing indicate the possibility of using the developed technique to analyze employees' attitudes towards their work in companies and to develop recommendations for their optimization.
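
The internal-consistency check mentioned above is a small computation. A minimal sketch of Cronbach's alpha, alpha = k/(k-1) · (1 - Σ item variances / variance of total score), on an invented toy response matrix (the real questionnaire and data are not reproduced here):

```python
# Hedged sketch: Cronbach's alpha for k questionnaire items.

def cronbach_alpha(items):
    """items: list of per-item score lists, one score per respondent."""
    k = len(items)
    n = len(items[0])
    def pvar(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(pvar(it) for it in items) / pvar(totals))

# Three items answered by five respondents on a 1..5 Likert scale (invented)
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # ≈ 0.864
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the criterion such adaptation studies typically apply factor by factor.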

Keywords: involvement in the organization, loyalty, organizations, method

Procedia PDF Downloads 351
18832 Design Optimization of a Micro Compressor for Micro Gas Turbine Using Computational Fluid Dynamics

Authors: Kamran Siddique, Hiroyuki Asada, Yoshifumi Ogami

Abstract:

The use of the Micro Gas Turbine (MGT) as the engine in Unmanned Aerial Vehicles (UAVs) and as a power source in robotics is widespread these days. Research has been conducted over the past decade or so to improve the performance of the different components of the MGT. This type of engine has interrelated components with non-linear characteristics, so the overall engine performance depends on each individual element's performance. Computational Fluid Dynamics (CFD) is one of the simulation tools used to analyze and even optimize MGT system performance. In this study, the compressor of the MGT is designed and its performance optimized using CFD, since improving the micro compressor raises the overall performance of the MGT. A high pressure ratio is to be achieved by studying the effect of changes in operating parameters, such as mass flow rate and revolutions per minute (RPM), and in aerodynamic and geometrical parameters on the pressure ratio of the compressor. Two types of compressor designs are considered in this study: a 3D centrifugal design and a ‘planar’ design. For a 10 mm impeller, the planar model is the simplest compressor model and the easiest to manufacture. The 3D centrifugal model, although more efficient, is very difficult to manufacture using current microfabrication resources; therefore, the planar model is the best-suited model for a micro compressor. Accordingly, a planar micro compressor has been designed that has a good pressure ratio and is easy to manufacture using current microfabrication technologies. Future work is to fabricate the compressor, obtain experimental results and validate the theoretical model.
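
Before a full CFD run, the stage pressure ratio is often bounded with a zeroth-order Euler-work estimate, PR = (1 + η·U₂·Cθ₂/(cp·T₀₁))^(γ/(γ-1)). A sketch of that estimate; the RPM, slip and efficiency values for a 10 mm impeller are illustrative assumptions, not the paper's design point:

```python
# Hedged sketch: Euler-work estimate of a centrifugal stage pressure ratio.
import math

def pressure_ratio(rpm, d2_m, slip=0.9, eta=0.70,
                   cp=1005.0, gamma=1.4, t01=288.0):
    u2 = math.pi * d2_m * rpm / 60.0        # impeller tip speed, m/s
    c_theta2 = slip * u2                    # tangential exit velocity (slip model)
    work = eta * u2 * c_theta2              # specific work actually converted
    return (1.0 + work / (cp * t01)) ** (gamma / (gamma - 1.0))

pr = pressure_ratio(rpm=500_000, d2_m=0.010)   # assumed 500 kRPM, 10 mm impeller
print(round(pr, 2))  # ≈ 1.63
```

The estimate shows why RPM is a dominant operating parameter in the study: the delivered work, and hence the pressure ratio, grows with the square of tip speed.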

Keywords: computational fluid dynamics, microfabrication, MEMS, unmanned aerial vehicles

Procedia PDF Downloads 137
18831 Shear Stress and Effective Structural Stress Fields of an Atherosclerotic Coronary Artery

Authors: Alireza Gholipour, Mergen H. Ghayesh, Anthony Zander, Stephen J. Nicholls, Peter J. Psaltis

Abstract:

A three-dimensional numerical model of an atherosclerotic coronary artery is developed for the determination of high-risk situations and hence for heart attack prediction. Employing the finite element method (FEM) in ANSYS, a fluid-structure interaction (FSI) model of the artery is constructed to determine the shear stress distribution as well as the von Mises stress field. A flexible model of an atherosclerotic coronary artery conveying pulsatile blood is developed incorporating three-dimensionality, the artery's tapered shape via a linear function for the artery wall distribution, motion of the artery, blood viscosity via non-Newtonian flow theory, blood pulsation via a one-period heartbeat, hyperelasticity via the Mooney-Rivlin model, viscoelasticity via a Prony-series shear relaxation scheme, and micro-calcification inside the plaque. The material properties used to relate the stress field to the strain field have been extracted from clinical data from previous in-vitro studies. The determined stress fields have the potential to be used as a predictive tool for plaque rupture and dissection. The results show that the stress concentration due to micro-calcification increases the von Mises stress significantly, so the chance of developing a crack inside the plaque increases. Moreover, blood pulsation varies the stress distribution substantially in some cases.

Keywords: atherosclerosis, fluid-structure interaction‎, coronary arteries‎, pulsatile flow

Procedia PDF Downloads 168
18830 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems; a feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an “Interaction Map”, a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore “what if” scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
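
A minimal sketch of the kind of tiny Bayesian network such a method produces, hand-coded as conditional probability tables. The two-step chain (Interaction → Hazard → Accident) and every probability below are invented for illustration; in the paper they would come from accident records of similar systems and HAZOP-labeled events:

```python
# Hedged sketch: marginal accident probability by enumeration over a
# toy Interaction -> Hazard -> Accident network.

p_interaction = 0.30                       # P(risky agent interaction), invented
p_hazard = {True: 0.40, False: 0.05}       # P(hazard | interaction), invented
p_accident = {True: 0.25, False: 0.01}     # P(accident | hazard), invented

def p_accident_total():
    """Marginal P(accident), summing over both hidden variables."""
    total = 0.0
    for inter in (True, False):
        pi = p_interaction if inter else 1.0 - p_interaction
        for haz in (True, False):
            ph = p_hazard[inter] if haz else 1.0 - p_hazard[inter]
            total += pi * ph * p_accident[haz]
    return total

p = p_accident_total()
print(round(p, 4))  # → 0.0472
```

A "what if" trade-off is then just re-running the enumeration with an edited table, e.g. lowering p_interaction to model a design that restricts a risky interaction pattern.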

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 413
18829 A Holistic Workflow Modeling Method for Business Process Redesign

Authors: Heejung Lee

Abstract:

In a highly competitive environment, it becomes more important to shorten the whole business process while delivering, or even enhancing, the business value to customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve short lead times. This paper presents a holistic workflow structuring method that can reduce process complexity using activity needs and formal concept analysis, which eventually enhances key performance measures such as quality, delivery, and cost in the business process.
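
The formal-concept-analysis step the method relies on can be sketched on a toy activity/needs context. Rows are workflow activities, columns the information "needs" they share; a formal concept is a maximal (activities, needs) pair. The context itself is invented, not taken from the paper:

```python
# Hedged sketch: brute-force formal concept analysis on a tiny
# activity-needs context (real FCA tools use smarter closure algorithms).
from itertools import combinations

context = {                                  # activity -> set of needs (invented)
    "approve": {"order", "credit"},
    "invoice": {"order", "credit", "tax"},
    "ship":    {"order"},
}
attrs = sorted(set().union(*context.values()))

def extent(needs):                           # activities having all given needs
    return {a for a, ns in context.items() if needs <= ns}

def intent(acts):                            # needs shared by all given activities
    shared = set(attrs)
    for a in acts:
        shared &= context[a]
    return shared

concepts = set()
for r in range(len(attrs) + 1):
    for combo in combinations(attrs, r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

print(len(concepts))  # → 3 concepts for this context
```

Each concept groups activities that can be structured together because they share exactly the same needs, which is how the lattice suggests a less complex workflow decomposition.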

Keywords: workflow management, re-engineering, formal concept analysis, business process

Procedia PDF Downloads 405
18828 Fire Smoke Removal over Cu-Mn-Ce Oxide Catalyst with CO₂ Sorbent Addition: CO Oxidation and in-situ CO₂ Sorption

Authors: Jin Lin, Shouxiang Lu, Kim Meow Liew

Abstract:

In a fire accident, fire smoke often poses a serious threat to human safety, especially in enclosed spaces such as submarine and spacecraft environments. Efficient removal of the hazardous gas products, particularly the large amounts of CO and CO₂, from these confined spaces is critical for the safety of the staff and necessary for post-fire environment recovery. In this work, Cu-Mn-Ce composite oxide catalysts coupled with CO₂ sorbents were prepared using a wet impregnation method, a solid-state impregnation method and a combined wet/solid-state impregnation method. The as-prepared samples were tested dynamically and isothermally for CO oxidation and CO₂ sorption and further characterized by X-ray diffraction (XRD), nitrogen adsorption-desorption, and field emission scanning electron microscopy (FE-SEM). The results showed that all the samples were able to oxidize CO to CO₂ and to capture CO₂ in situ by chemisorption. Among all the samples, the one synthesized by the wet/solid-state impregnation method showed the highest catalytic activity toward CO oxidation and a fine CO₂ sorption ability. The sample prepared by the solid-state impregnation method showed the second-best CO oxidation performance, while the sample coupled using the wet impregnation method exhibited much poorer CO oxidation activity. The differing CO oxidation and CO₂ sorption properties of the samples might arise from the different dispersed states of the CO₂ sorbent in the CO oxidation catalyst, owing to the different preparation methods. XRD results confirmed the highly dispersed sorbent phase in the samples prepared by the wet and solid-state impregnation methods, while the sample prepared by the wet/solid-state impregnation method showed a larger bulk phase, as indicated by its high-intensity diffraction peaks. Nitrogen adsorption and desorption results further revealed that the latter sample had a higher surface area and pore volume, which were beneficial for CO oxidation over the catalyst. Hence, the Cu-Mn-Ce oxide catalyst coupled with a CO₂ sorbent using the wet/solid-state impregnation method could be a good choice for fire smoke removal in enclosed spaces.

Keywords: CO oxidation, CO₂ sorption, preparation methods, smoke removal

Procedia PDF Downloads 136
18827 K-Means Clustering-Based Infinite Feature Selection Method

Authors: Seyyedeh Faezeh Hassani Ziabari, Sadegh Eskandari, Maziar Salahi

Abstract:

The Infinite Feature Selection (IFS) algorithm is an efficient feature selection algorithm that selects features by considering subsets of all sizes (including infinity). In this paper, we present an improved version of it, called clustering IFS (CIFS), obtained by clustering the dataset in advance: first, we apply the K-means algorithm to cluster the dataset, then we apply IFS. In the CIFS method, the space and time complexities are reduced compared to the IFS method. Experimental results on six datasets show the superiority of CIFS over IFS in terms of accuracy, running time, and memory consumption.
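
Only the K-means pre-clustering stage of CIFS is sketched below; the graph-based IFS ranking that then runs on each cluster is not reproduced. The data, k and seed are illustrative assumptions:

```python
# Hedged sketch: the K-means step that CIFS applies before feature
# selection, written out in plain Python (Lloyd's algorithm).
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                 # random initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                            # assign to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[i]                  # keep empty-cluster center
                   for i, cl in enumerate(clusters)]
    return clusters

pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),      # samples near the origin
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]      # samples near (5, 5)
clusters = kmeans(pts, k=2)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # → [3, 3]
```

Running IFS separately on each of the two resulting clusters works with smaller similarity graphs than one global run, which is the source of the reduced space and time complexity claimed above.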

Keywords: feature selection, infinite feature selection, clustering, graph

Procedia PDF Downloads 121