Search results for: Initial modes selection.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2419

169 Removal of Elemental Mercury from Dry Methane Gas with Manganese Oxides

Authors: Junya Takenami, Md. Azhar Uddin, Eiji Sasaoka, Yasushi Shioya, Tsuneyoshi Takase

Abstract:

In this study, we sought to investigate the mercury removal efficiency of manganese oxides from natural gas. Fundamental studies on mercury removal with manganese oxide sorbents were carried out in a laboratory-scale fixed bed reactor at 30 °C with a mixture of methane (20%) and nitrogen gas laden with 4.8 ppb of elemental mercury. Manganese oxides with varying surface area and crystalline phase were prepared by a conventional precipitation method. The effects of surface area, crystallinity and other metal oxides on mercury removal efficiency were investigated, as was the effect of Ag impregnation. Ag supported on metal oxides such as titania and zirconia was also used as a reference material for comparison. The characteristics of the mercury removal reaction with manganese oxide were investigated using a temperature programmed desorption (TPD) technique. Manganese oxides showed very high Hg removal activity (about 73-93% Hg removal) on first use. The surface area of the manganese oxide samples decreased after heat treatment, which resulted in a complete loss of Hg removal ability on repeated use after Hg desorption in the case of amorphous MnO2, and a 75% loss of the initial Hg removal activity for crystalline MnO2. The mercury desorption efficiency of crystalline MnO2 was very low (37%) after first use and high (98%) after second use. Residual potassium content in MnO2 may have some effect on the thermal stability of the adsorbed Hg species. Desorption of Hg from manganese oxides occurs at much higher temperatures (with a peak at 400 °C) than from Ag/TiO2 or Ag/ZrO2. Mercury may be captured on manganese oxides in the form of a mercury manganese oxide.

Keywords: Mercury removal, Metal and metal oxide sorbents, Methane, Natural gas.

168 An Embedded System for Artificial Intelligence Applications

Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou

Abstract:

Conventional approaches to implementing logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed onto the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.

Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.

167 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information on Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and creation of objects. The selection among these methods depends on the desired Level of Development (LOD), Level of Information (LOI) and Grade of Generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using the Recap Pro, Revit and Dynamo interfaces, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, together with the use of only one BIM software package with its respective plugin for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.

166 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured on dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on data emerging from sled tests performed according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and is validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, with the aim of reducing the number of tests.
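
As an aside, a minimal sketch of the kind of missing-data handling step described above is given below, assuming the sled-test parameters are loaded into a pandas DataFrame; the file name, the 40% missing-ratio cut-off and the per-parameter median imputation are illustrative assumptions, not the validated procedure from the study.

    import pandas as pd

    def screen_and_impute(df, max_missing_ratio=0.4):
        """Drop sparsely observed test parameters, then impute the remaining gaps."""
        keep = [c for c in df.columns if df[c].isna().mean() <= max_missing_ratio]
        cleaned = df[keep].copy()
        numeric = cleaned.select_dtypes("number").columns
        cleaned[numeric] = cleaned[numeric].fillna(cleaned[numeric].median())  # per-parameter median
        return cleaned

    # usage: sled = pd.read_csv("sled_tests.csv"); sled_clean = screen_and_impute(sled)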

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

165 Cold Flow Investigation of Primary Zone Characteristics in Combustor Utilizing Axial Air Swirler

Authors: Yehia A. Eldrainy, Mohammad Nazri Mohd. Jaafar, Tholudin Mat Lazim

Abstract:

This paper presents a cold flow simulation study of a small gas turbine combustor, modelled on a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex responsible for the efficient mixing of fuel and air. Such models are necessary for prediction and optimization of real gas turbine combustors. An air swirler can control combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influences residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used, based on their vane angles, i.e., 30°, 45°, and 60°. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created using a solid modeller, and the meshing was done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for design and analysis of real combustors. The effects of the swirlers and of mass flow rate were examined. Details of the complex flow structure such as vortices and recirculation zones were obtained from the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flowfield as well as on pressure losses.
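
For orientation, the geometric swirl number commonly quoted for flat-vaned axial swirlers can be estimated as below; the hub and tip radii are illustrative values, and this estimate is independent of the RANS simulations described above.

    import math

    def axial_swirl_number(vane_angle_deg, hub_radius, tip_radius):
        """Geometric swirl number S = (2/3)*((1 - (Rh/Rt)^3)/(1 - (Rh/Rt)^2))*tan(theta)."""
        r = hub_radius / tip_radius
        return (2.0 / 3.0) * ((1.0 - r**3) / (1.0 - r**2)) * math.tan(math.radians(vane_angle_deg))

    for angle in (30, 45, 60):  # the three vane angles studied
        print(angle, "deg ->", round(axial_swirl_number(angle, hub_radius=0.010, tip_radius=0.025), 2))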

Keywords: cold flow, numerical simulation, combustor, turbulence, axial swirler.

164 Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout Fish in Comparison With Astaxanthin

Authors: M. Ghotbi, M. Ghotbi, Gh. Azari Takami

Abstract:

In this study, Vitaton (an organic supplement which contains fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the Rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on rearing factors, survival rate and visual flesh color of Rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5 g. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene and astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated from 12 to 15 °C and the dissolved oxygen content was between 7 and 7.5 mg/l during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by dietary treatments (p>0.05). Also, there was no significant difference in carcass yield between treatments (p>0.05). No significant difference was recognized between the visual flesh color (SalmoFan score) of fish fed the Vitaton-containing diets. On the contrary, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from <20 (<1 mg/kg) to 23.33 (2.03 mg/kg) and 27.67 (5.74 mg/kg), respectively. Ultimately, a significant difference was seen between the flesh carotenoid concentrations of fish fed the astaxanthin-containing treatments and the control treatment (p<0.05). It should be mentioned that only the raw fillet color of fish belonging to the 80 ppm astaxanthin treatment was close to the color targets (SalmoFan scores) adopted for harvest-size fish.

Keywords: Astaxanthin, Flesh color, Rainbow trout, Vitaton, β-carotene.

163 Physicochemical and Thermal Characterization of Starch from Three Different Plantain Cultivars in Puerto Rico

Authors: Carmen E. Pérez-Donado, Fernando Pérez-Muñoz, Rosa N. Chávez-Jáuregui

Abstract:

Plantain contains starch as its main component and represents a relevant source of this carbohydrate. Starches from different cultivars of plantain and banana have been studied for industrialization purposes due to their morphological and thermal characteristics and their influence on food products. This study aimed to characterize the physical, chemical, and thermal properties of starch from three different plantain cultivars grown in Puerto Rico: Maricongo, Maiden and FHIA 20. Amylose and amylopectin content, color, granular size, morphology, and thermal properties were determined. FHIA 20 presented the lowest amylose content of the three cultivars studied. In terms of color, Maiden and FHIA 20 starches exhibited significantly higher whiteness indexes compared to Maricongo starch. Starches of the three cultivars had an elongated-ovoid morphology, with a smooth surface and a non-porous appearance. Despite similarities in morphology, FHIA 20 exhibited a lower aspect ratio, since its granules tended to be more elongated. Comparison of the thermal properties of the starches showed that the initial starch gelatinization temperature was similar among cultivars. However, FHIA 20 starch presented a noticeably higher final gelatinization temperature (87.95 °C) and transition enthalpy than Maricongo (79.69 °C) and Maiden (77.40 °C). Despite these similarities, starches from the plantain cultivars showed differences in their composition and thermal behavior. This represents an opportunity to diversify the use of plantain starch in food-related applications.

Keywords: aspect ratio, morphology, Musa spp., starch, thermal properties, amylose content

162 Effect of Two Different Biochars on Germination and Seedlings Growth of Salad, Cress and Barley

Authors: L. Bouqbis, H.W. Koyro, M. C. Harrouni, S. Daoud, L. F. Z. Ainlhout, C. I. Kammann

Abstract:

The application of biochar to soils is becoming more and more common. Its application, which is generally reported to improve the physical, chemical, and biological properties of soils, has an indirect effect on soil health and increases crop yields. However, many of the previous results are highly variable and depend mainly on the initial soil properties, biochar characteristics, and production conditions. In this study, two biochars were used: biochar II (BC II), derived from a blend of paper sludge and wheat husks, and biochar 005 (BC 005), derived from sewage sludge with a KCl additive; the physical and chemical properties of BC II were characterized. To determine the potential impact of salt stress and of toxic and volatile substances, the second part of this study focused on the effect the biochars have on the germination of salad (Lactuca sativa L.), barley (Hordeum vulgare), and cress (Lepidium sativum), respectively. Our results indicate that biochar II showed some unique properties compared to the soil, such as high EC, high content of K, Na and Mg, and low content of heavy metals. Concerning the salad and barley germination tests, no negative effect of BC II or BC 005 was observed, except for a negative effect of BC 005 at the 8% level. The test of the effect of volatile substances on the germination of cress revealed a positive effect of BC II, while a negative effect was observed for BC 005. Moreover, the water holding capacities of the biochar-sand mixtures increased with increasing biochar application. Collectively, BC II could be safely used for agriculture and could provide the potential for better plant growth.

Keywords: Biochar, phytotoxic tests, seedlings growth, water holding capacity.

161 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing subtraction of spurious signals and non-biological fluorescent substrata. The result is a robust and user-friendly approach which enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescence images for quantitative analysis of biofilm heterogeneity.
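
The detection pipeline itself is not reproduced here; as a rough illustration of the thresholding idea only, a generic scikit-image sketch is shown below, assuming the counterstain channel is available as a grayscale image file (the file name and minimum object size are placeholders).

    from skimage import filters, io, morphology

    def biofilm_mask(image_path, min_size=64):
        """Segment candidate biofilm regions from a counterstained channel."""
        img = io.imread(image_path, as_gray=True).astype(float)
        mask = img > filters.threshold_otsu(img)                          # automatic global threshold
        return morphology.remove_small_objects(mask, min_size=min_size)   # drop spurious specks

    # usage: mask = biofilm_mask("fish_field_01.tif")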

Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.

160 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with specific focus on Infrared (IR) and Visible Image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can be from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although the high computational cost and complex processing steps of such algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore aim to offer good results with minimum time complexity.
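
The study implements MST-based fusion with a region-based rule and consistency verification in MATLAB; purely as a much-simplified illustration of the multi-scale idea, a coefficient-level max-absolute rule with PyWavelets is sketched below for two registered, equal-sized grayscale inputs (the wavelet name and level count are arbitrary choices).

    import numpy as np
    import pywt

    def fuse_mst_max(visible, infrared, wavelet="db2", levels=3):
        """Fuse two registered grayscale images with a simple multi-scale max-abs rule."""
        cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
        ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
        fused = [(cv[0] + ci[0]) / 2.0]                       # average the approximation band
        for bands_v, bands_i in zip(cv[1:], ci[1:]):
            fused.append(tuple(np.where(np.abs(bv) >= np.abs(bi), bv, bi)  # keep stronger detail
                               for bv, bi in zip(bands_v, bands_i)))
        return pywt.waverec2(fused, wavelet)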

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

159 In-situ Chemical Oxidation of Residual TCE by Permanganate in Epikarst

Authors: Nihat Hakan Akyol, Irfan Yolcubal

Abstract:

In-situ chemical oxidation (ISCO) has been widely used for source zone remediation of Dense Non-Aqueous Phase Liquids (DNAPLs) in subsurface environments. DNAPL source zones in karst aquifers are generally located in the epikarst, where the DNAPL mass is trapped either in karst soil or at the regolith contact with the carbonate bedrock. This study aims to investigate the performance of oxidation of residual trichloroethylene (TCE) found in such environments by potassium permanganate. Batch and flow cell experiments were conducted to determine the kinetics and the mass removal rate of TCE. pH change, Cl production, and the depletion of TCE and MnO4 were monitored routinely during the experiments. Nonreactive tracer tests were also conducted prior to and after the oxidation process to determine the influence of oxidation on flow conditions. The results show that the oxidant consumption rate of the calcareous epikarst soil was significant, and the oxidant demand was determined to be 20 g KMnO4/kg soil. The oxidation rate of residual TCE (1.26x10-3 s-1) was faster than the oxidant consumption rate of the soil (2.54-2.92x10-4 s-1) only at high oxidant concentrations (>40 mM KMnO4). The half life of TCE oxidation ranged from 7.9 to 10.7 min. Although a highly significant fraction of the residual TCE mass in the system was destroyed by permanganate oxidation, the TCE concentration in the effluent remained above its MCL. Flow interruption tests indicate that the efficiency of ISCO was limited by the rate of TCE dissolution and the rate-limited desorption of TCE. The residence time and the initial concentration of the oxidant in the source zone also controlled the efficiency of ISCO in the epikarst.
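
For reference, the quoted half-lives follow directly from the first-order rate constants, e.g. for the reported TCE oxidation rate:

    import math

    k_tce = 1.26e-3                                        # reported first-order rate constant, 1/s
    print(f"t1/2 = {math.log(2) / k_tce / 60:.1f} min")    # about 9.2 min, within the reported 7.9-10.7 min range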

Keywords: Epikarst, in-situ chemical oxidation, permanganate.

158 Rigorous Electromagnetic Model of Fourier Transform Infrared (FT-IR) Spectroscopic Imaging Applied to Automated Histology of Prostate Tissue Specimens

Authors: Rohith K. Reddy, David Mayerich, Michael Walsh, P. Scott Carney, Rohit Bhargava

Abstract:

Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that provides both chemically and spatially resolved information. The rich chemical content of data may be utilized for computer-aided determinations of structure and pathologic state (cancer diagnosis) in histological tissue sections for prostate cancer. FT-IR spectroscopic imaging of prostate tissue has shown that tissue type (histological) classification can be performed to a high degree of accuracy [1] and cancer diagnosis can be performed with an accuracy of about 80% [2] on a microscopic (≈ 6μm) length scale. In performing these analyses, it has been observed that there is large variability (more than 60%) between spectra from different points on tissue that is expected to consist of the same essential chemical constituents. Spectra at the edges of tissues are characteristically and consistently different from chemically similar tissue in the middle of the same sample. Here, we explain these differences using a rigorous electromagnetic model for light-sample interaction. Spectra from FT-IR spectroscopic imaging of chemically heterogeneous samples are different from bulk spectra of individual chemical constituents of the sample. This is because spectra not only depend on chemistry, but also on the shape of the sample. Using coupled wave analysis, we characterize and quantify the nature of spectral distortions at the edges of tissues. Furthermore, we present a method of performing histological classification of tissue samples. Since the mid-infrared spectrum is typically assumed to be a quantitative measure of chemical composition, classification results can vary widely due to spectral distortions. However, we demonstrate that the selection of localized metrics based on chemical information can make our data robust to the spectral distortions caused by scattering at the tissue boundary.

Keywords: Infrared, Spectroscopy, Imaging, Tissue classification

157 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

Taguchi Robust Design is a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation rather than by controlling those sources; it entails designing ideal goods by developing a product that has minimal variance in its characteristics and also meets the desired performance exactly. This paper examines the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, which is based on the four parameters and the three levels of variation for each parameter, revealed, with a range of 2.17, that waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce set-up time, which leads to unnecessary waiting.
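
For readers unfamiliar with the calculation, the smaller-the-better signal-to-noise ratio used in a Taguchi analysis of wastes can be sketched as below; the response values and level means are placeholders, not the company's data.

    import numpy as np

    def sn_smaller_is_better(responses):
        """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
        y = np.asarray(responses, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    print(round(sn_smaller_is_better([4.2, 3.9, 4.5]), 2))   # one hypothetical L9 trial

    # a factor's "range" is max - min of its mean S/N ratio across the three levels
    level_means = [-33.8, -34.9, -36.0]                      # illustrative values only
    print(round(max(level_means) - min(level_means), 2))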

Keywords: Taguchi Robust Design, signal to noise ratio, Single Minute Exchange of Dies, lean production system, waste.

156 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision <decision>, If <condition>, Generality <generality information>, Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy, using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
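
Since the rules are quantified with Dempster-Shafer theory, a compact sketch of Dempster's rule of combination for two mass functions over a common frame of discernment may help; the masses and hypothesis names below are illustrative, not taken from the paper.

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions given as {frozenset: mass}."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                      # mass falling on the empty set
        k = 1.0 - conflict                               # normalisation factor
        return {s: w / k for s, w in combined.items()}

    m1 = {frozenset({"rule_applies"}): 0.7, frozenset({"rule_applies", "rule_fails"}): 0.3}
    m2 = {frozenset({"rule_applies"}): 0.6, frozenset({"rule_applies", "rule_fails"}): 0.4}
    print(combine(m1, m2))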

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

155 Probabilistic Damage Tolerance Methodology for Solid Fan Blades and Discs

Authors: Andrej Golowin, Viktor Denk, Axel Riepe

Abstract:

Solid fan blades and discs in aero engines are subjected to high combined low and high cycle fatigue loads especially around the contact areas between blade and disc. Therefore, special coatings (e.g. dry film lubricant) and surface treatments (e.g. shot peening or laser shock peening) are applied to increase the strength with respect to combined cyclic fatigue and fretting fatigue, but also to improve damage tolerance capability. The traditional deterministic damage tolerance assessment based on fracture mechanics analysis, which treats service damage as an initial crack, often gives overly conservative results especially in the presence of vibratory stresses. A probabilistic damage tolerance methodology using crack initiation data has been developed for fan discs exposed to relatively high vibratory stresses in cross- and tail-wind conditions at certain resonance speeds for limited time periods. This Monte-Carlo based method uses a damage databank from similar designs, measured vibration levels at typical aircraft operations and wind conditions and experimental crack initiation data derived from testing of artificially damaged specimens with representative surface treatment under combined fatigue conditions. The proposed methodology leads to a more realistic prediction of the minimum damage tolerance life for the most critical locations applicable to modern fan disc designs.
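
The damage databank, vibration surveys and specimen test data behind the method are proprietary; purely as a schematic of the Monte-Carlo step, the sketch below samples a hypothetical vibratory stress distribution and a hypothetical lognormal crack-initiation life model and reads off a lower percentile of the resulting life.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # hypothetical inputs: vibratory stress (MPa) at resonance and exposure per flight (cycles)
    vib_stress = rng.normal(120.0, 15.0, n)
    exposure = rng.uniform(0.5e4, 2.0e4, n)

    # hypothetical crack-initiation life model: median life falls with a power of stress,
    # with lognormal scatter (stand-in for the fitted specimen data)
    median_life = 1.0e12 * vib_stress ** -2.5
    life = median_life * rng.lognormal(mean=0.0, sigma=0.4, size=n)

    flights_to_initiation = life / exposure
    print("1st percentile of flights to crack initiation:",
          int(np.percentile(flights_to_initiation, 1)))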

Keywords: Damage tolerance, Monte-Carlo method, fan blade and disc, laser shock peening.

154 Entrepreneurial Characteristics and Attitude of Pineapple Growers

Authors: Kaushal Kumar Jha

Abstract:

Nagaland, the 16th state of India in order of statehood, is situated between 25° 6' and 27° 4' north latitude and between 93° 20' E and 95° 15' E longitude in the north-eastern part of India. Endowed with varied topography, soil and agro-climatic conditions, it is known for its potential to grow almost all kinds of horticultural crops. Pineapple, which has long been grown organically by default, is one of the most promising crops of the state, with emphasis being laid on its commercialization by the government of Nagaland. In light of commercialization, globalization and the scope for setting up small-scale industries, a research study was undertaken to examine the socio-economic and personal characteristics, entrepreneurial characteristics and attitude of pineapple growers towards the improved package of practices for pineapple cultivation. The study was conducted in the Medziphema block of the Dimapur district of Nagaland, India, following an ex post facto research design. Ninety pineapple growers were selected from four different villages of Medziphema block based on a proportionate random selection procedure. Findings of the study revealed that the majority of respondents had a medium level of entrepreneurial characteristics in terms of knowledge level, risk orientation, self-confidence, management orientation, farm decision making ability and leadership ability, and most of them had a favourable attitude towards the improved package of practices for pineapple cultivation. The variables age, education, farm size, risk orientation, management orientation and sources of information utilized were found to be important influences on the attitude of the respondents. The study revealed that the favourable attitude and entrepreneurial characteristics of the pineapple cultivators might be harnessed for increased production of pineapple in the state, thereby bringing socio-economic upliftment to marginal and small-scale farmers.

Keywords: Attitude, Entrepreneurial characteristics, Pineapple, Socio economic upliftment.

153 Corporate Social Responsibility Reporting, State Ownership, and Corporate Performance in China: Proof from Longitudinal Data of Publicly Traded Enterprises from 2006 to 2020

Authors: Wanda Luen-Wun Siu, Xiaowen Zhang

Abstract:

This paper offers primary systematic evidence on how Corporate Social Responsibility (CSR) reporting relates to enterprise earnings in listed firms in China, given that most existing evidence focuses on cross-sectional data or short time spans. Using full economic and business panel data on China's publicly listed enterprises from 2006 to 2020 in the China Stock Market & Accounting Research database, we found initial evidence of significant direct relations between CSR reporting and corporate performance in both state-owned and privately-owned firms over this period, supporting stakeholder theory. Results also revealed that state-owned enterprises performed as well as private enterprises in the current period, but private enterprises performed better than state-owned enterprises in subsequent years. Moreover, the release of social responsibility reports had a more significant impact on the financial performance of state-owned and private enterprises in the current period than in subsequent periods. Specifically, CSR release was not significantly associated with the financial performance of state-owned enterprises at the first, second, and third lags, but it did have an impact at the first, second, and third lags among private enterprises. These findings suggest that CSR reporting helped improve the corporate financial performance of both state-owned and private enterprises in the current period, but that this effect was more significant among private enterprises in the lag periods.

Keywords: China’s Listed Firm, CSR reporting, financial performance, panel analysis.

152 Buckling Optimization of Radially-Graded, Thin-Walled, Long Cylinders under External Pressure

Authors: Karam Y. Maalawi

Abstract:

This paper presents a generalized formulation of the problem of buckling optimization of anisotropic, radially graded, thin-walled, long cylinders subject to external hydrostatic pressure. The main structure to be analyzed is built of multi-angle fibrous laminated composite lay-ups having different volume fractions of the constituent materials within the individual plies. This yields a piecewise grading of the material in the radial direction; that is, the physical and mechanical properties of the composite material are allowed to vary radially. The objective function is measured by maximizing the critical buckling pressure while preserving the total structural mass at a constant value equal to that of a baseline reference design. In the selection of the significant optimization variables, the fiber volume fractions join the standard design variables, which include fiber orientation angles and ply thicknesses. The mathematical formulation employs classical lamination theory, and an analytical solution is presented that accounts for the effective axial and flexural stiffness separately, as well as including the coupling stiffness terms. The proposed model deals with dimensionless quantities in order to be valid for thin shells having arbitrary thickness-to-radius ratios. The critical buckling pressure level curves, augmented with the mass equality constraint, are given for several types of cylinders, showing the functional dependence of the constrained objective function on the selected design variables. It was shown that material grading can make a significant contribution to the whole optimization process in achieving the required structural designs with enhanced stability limits.
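
For orientation only, the classical critical external pressure of a long, thin, isotropic cylinder (the ring-buckling mode) is p_cr = E/(4(1 - v^2))*(t/R)^3; the laminated, radially graded case treated above generalises this through effective stiffness terms. A quick isotropic check with illustrative values:

    def p_critical_long_cylinder(E, nu, t, R):
        """Classical buckling pressure of a long, thin, isotropic cylinder under external pressure."""
        return E / (4.0 * (1.0 - nu ** 2)) * (t / R) ** 3

    # illustrative values: aluminium-like material, 2 mm wall, 100 mm mean radius
    print(p_critical_long_cylinder(E=70e9, nu=0.33, t=0.002, R=0.100) / 1e6, "MPa")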

Keywords: Buckling instability, structural optimization, functionally graded material, laminated cylindrical shells, external hydrostatic pressure.

151 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method by comparing it with actual birth weight. The sample group was 126 infants delivered in Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and then analyzed using frequency, percentage, mean, and standard deviation; the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3,455 ± 454.55 g; on average, the estimated weight was 384.09 g higher than the actual birth weight. When the infants were classified according to birth weight, the actual birth weight was less than the estimated fetal weight in the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups, whereas in the high birth weight (>4000 g) group the actual birth weight was greater than the estimated fetal weight. The difference between actual birth weight and estimated fetal weight was smallest in the high birth weight (>4000 g) group, followed by the appropriate birth weight (2500-3999 g) and low birth weight (<2500 g) groups, respectively. The rate of fetal weight estimates falling within 10% of the actual birth weight was 35.7%. When actual birth weights were compared with the estimates, the difference was statistically significant (p<.000). Employing Johnson's method can provide an initial fetal weight estimate before resorting to special examinations, which may be excessively costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help in planning care for the mother's and infant's safety.
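
For readers unfamiliar with the method, Johnson's formula is commonly written as estimated fetal weight (g) = (fundal height in cm - n) x 155, where n depends on the station of the presenting part (the exact n values vary slightly between textbooks); the sketch below uses one common convention and an illustrative input.

    def johnson_fetal_weight(fundal_height_cm, engaged):
        """Estimate fetal weight (g) by Johnson's formula; n = 11 if the vertex is engaged, else 12."""
        n = 11 if engaged else 12
        return (fundal_height_cm - n) * 155

    print(johnson_fetal_weight(34, engaged=False))   # (34 - 12) * 155 = 3410 g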

Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.

150 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence

Authors: Francesca Radice

Abstract:

It has been observed that violent or coercive behaviour can be apparent from initial conversations on dating apps like Tinder. Child pornography, stalking, and coercive control are among the criminal offences linked to dating apps, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used in a privacy-preserving way to characterise users in terms of their potential for violent crime. Using the COPS database of NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user. It is then up to the judgement of each subscriber what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research into natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret interchanges between dating app users to detect manipulative or coercive sentiments, which can be used to alert law enforcement if they continue for a defined number of communications. One of the potential problems of this approach is the prejudice a categorisation can cause. Another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will be thoroughly tested with cross-checks by human readers, who verify both the level of danger predicted from the interpretation of the criminal record and the sentiment detected from personal messages. Even if only a few violent crimes can be prevented, the approach will have tangible value for real people.
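
A minimal sketch of the message-screening idea is given below, using NLTK's VADER sentiment scorer as a stand-in for the purpose-built models such research would require; the threshold, window size and example messages are illustrative only.

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    def flag_conversation(messages, threshold=-0.5, window=3):
        """Flag a conversation if `window` consecutive messages score below `threshold`."""
        run = 0
        for msg in messages:
            run = run + 1 if sia.polarity_scores(msg)["compound"] <= threshold else 0
            if run >= window:
                return True
        return False

    print(flag_conversation(["You never listen.", "Do what I say or else.", "I will hurt you."]))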

Keywords: Sentiment Analysis, data mining, predictive policing, virtual manipulation.

149 Navigation and Guidance System Architectures for Small Unmanned Aircraft Applications

Authors: Roberto Sabatini, Celia Bartel, Anish Kaharkar, Tesheen Shaid, Subramanian Ramasamy

Abstract:

Two multisensor system architectures for navigation and guidance of small Unmanned Aircraft (UA) are presented and compared. The main objective of our research is to design a compact, light and relatively inexpensive system capable of providing the required navigation performance in all phases of flight of small UA, with a special focus on precision approach and landing, where Vision Based Navigation (VBN) techniques can be fully exploited in a multisensor integrated architecture. Various existing techniques for VBN are compared and the Appearance-Based Navigation (ABN) approach is selected for implementation. Feature extraction and optical flow techniques are employed to estimate flight parameters such as roll angle, pitch angle, deviation from the runway centreline and body rates. Additionally, we address the possible synergies of VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU (Micro-Electromechanical System Inertial Measurement Unit) sensors, and the use of an Aircraft Dynamics Model (ADM) to provide additional information suitable to compensate for the shortcomings of VBN and MEMS-IMU sensors in high-dynamics attitude determination tasks. An Extended Kalman Filter (EKF) is developed to fuse the information provided by the different sensors and to provide estimates of position, velocity and attitude of the UA platform in real time. The key mathematical models describing the two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the VIG-ADM (VIGA) system, are introduced. The first architecture uses VBN and GNSS to augment the MEMS-IMU. The second mode also includes the ADM to provide augmentation of the attitude channel. Simulation of these two modes is carried out and the performances of the two schemes are compared in a small UA integration scheme (i.e., the AEROSONDE UA platform), exploring a representative cross-section of this UA's operational flight envelope, including high-dynamics manoeuvres and CAT-I to CAT-III precision approach tasks. Simulation of the first system architecture (i.e., the VIG system) shows that the integrated system can reach position, velocity and attitude accuracies compatible with the Required Navigation Performance (RNP) requirements. Simulation of the VIGA system also shows promising results, since the achieved attitude accuracy is higher using VBN-IMU-ADM than using VBN-IMU only. A comparison of the VIG and VIGA systems is also performed, and it shows that the position and attitude accuracy of the proposed VIG and VIGA systems are both compatible with the RNP specified for the various UA flight phases, including precision approach down to CAT-II.
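
The sensor-fusion core of both architectures is an EKF; a generic predict/update skeleton of the kind involved is sketched below, with the process model, measurement model, their Jacobians and the noise covariances left as placeholders for the VIG/VIGA-specific models.

    import numpy as np

    def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
        """One EKF predict/update cycle.

        x, P : state estimate and covariance
        u, z : control input (e.g. IMU increments) and measurement (e.g. GNSS/VBN fix)
        f, h : nonlinear process and measurement models; F_jac, H_jac their Jacobians
        Q, R : process and measurement noise covariances
        """
        # predict
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # update
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new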

Keywords: Global Navigation Satellite System (GNSS), Low-cost Navigation Sensors, MEMS Inertial Measurement Unit (IMU), Unmanned Aerial Vehicle, Vision Based Navigation.

148 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material

Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

The purpose of this study was to prepare time- and pH-dependent release tablets of an Ayurvedic Churna formulation and to evaluate their advantages as a colon targeted drug delivery system. Vidangadi Churna, which contains Embelin and Gallic acid, was selected for this study. Embelin is used as a therapeutic agent in Helminthiasis. Embelin is insoluble in water and unstable in the gastric environment, so it was formulated into time- and pH-dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets of 150 mg containing the dried extract and lactose were prepared by the wet granulation method. Compression coating with 150 mg of polymer for each of the upper and lower coating layers was investigated. The results showed that no release occurred in 0.1 N HCl and pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer over a total of 17 hours. The in vitro release profile of the drug from the formulation was best expressed by first-order kinetics, which showed the highest linearity (r2 = 0.9943). The results of the present study demonstrate that the time- and pH-dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of Embelin and Gallic acid for the treatment of Helminthiasis.
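
The first-order fit mentioned above amounts to a linear regression of the logarithm of drug remaining against time; a short sketch with purely illustrative dissolution data (not the study's measurements) is given below.

    import numpy as np

    t = np.array([6, 8, 10, 12, 14, 17], dtype=float)           # hours
    released = np.array([5, 25, 50, 70, 85, 99], dtype=float)   # cumulative % released (illustrative)

    slope, intercept = np.polyfit(t, np.log(100.0 - released), 1)   # ln(remaining) = ln(Q0) - k*t
    r = np.corrcoef(t, np.log(100.0 - released))[0, 1]
    print(f"first-order rate constant k = {-slope:.3f} 1/h, r^2 = {r**2:.4f}")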

Keywords: Embelin, Gallic acid, Vidangadi Churna, Colon targeted drug delivery.

147 A Challenge to Acquire Serious Victims’ Locations during Acute Period of Giant Disasters

Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki

Abstract:

In this paper, we report how to acquire the locations of seriously injured victims during the acute stage of large-scale disasters using an emergency information network system designed by us. The background of our concept is the Great East Japan Earthquake, which occurred on March 11th, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, Japan has established advanced communication systems and advanced disaster medical response systems. However, the country was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. Therefore, we recognized the need for interdisciplinary collaboration between disaster medicine, regional administrative sociology, satellite communication technology and systems engineering experts. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations and status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism or decision support system existed to allocate the appropriate number of doctors and locate disaster victims. Even though the doctors and rescue workers from multiple government organizations have their own dedicated communication systems, those systems are not interoperable.

Keywords: Crisis management, disaster mitigation, messing, MGRS, Satellite communication system.

146 Rolling Element Bearing Diagnosis by Improved Envelope Spectrum: Optimal Frequency Band Selection

Authors: Juan David Arango, Alejandro Restrepo-Martinez

Abstract:

Rolling Element Bearing (REB) vibration diagnosis is of special interest owing to the variety of REBs and the wide need for these elements in industrial applications. The presence of a localized fault in a REB gives rise to a vibrational response characterized by the modulation of a carrier signal. The frequency content of the carrier signal (spectral frequency, f) is mainly related to the resonance frequencies of the REB. This carrier signal is modulated by another signal governed by the periodicity of the fault impacts (cyclic frequency, α). In this sense, the REB fault vibration response gives rise to a second-order cyclostationary signal. Second-order cyclostationary signals can be represented in a bi-spectral map, where the Spectral Coherence (SCoh) is plotted against f and α. The Improved Envelope Spectrum (IES) is a useful approach for REB fault diagnosis; it can be obtained by integrating the SCoh over a predefined bandwidth on the f axis. Approaches to select the f-bandwidth have recently been proposed based on the definition of a metric which evaluates the magnitude of the IES at the fault characteristic frequencies. This metric is represented in a 1/3-binary tree as a function of the frequency bandwidth and centre, and based on this binary tree the optimal frequency band is selected. However, advantages have been observed when the metric is changed, which in fact tends to dictate a different optimal f-bandwidth and so improves the IES representation. This paper evaluates the behaviour of the IES obtained from a different metric optimization. The metric is based on the sample correlation coefficient, rewarding high peaks at the selected frequencies while penalizing high peaks in the neighbourhood of the selected frequencies. Preliminary results indicate an improvement in the signal-to-noise ratio (SNR) in around 86% of the samples analysed, which belong to the IMS database.
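
The exact metric definition is not reproduced here; as a rough sketch of a correlation-style band score, the function below correlates an averaged envelope spectrum with an indicator comb at the fault harmonics and subtracts a crude penalty for energy in the neighbouring bins (the tolerances, harmonic count and penalty form are all assumptions, not the authors' definition).

    import numpy as np

    def correlation_metric(freqs, ies, fault_freq, n_harmonics=3, tol=0.5, guard=2.0):
        """Score an IES by correlation with fault harmonics, penalising near-miss peaks."""
        target = np.zeros_like(ies)
        neighbour = np.zeros_like(ies, dtype=bool)
        for k in range(1, n_harmonics + 1):
            d = np.abs(freqs - k * fault_freq)
            target[d <= tol] = 1.0                       # expected peak locations
            neighbour |= (d > tol) & (d <= guard)        # near-miss region to penalise
        score = np.corrcoef(ies, target)[0, 1]
        penalty = ies[neighbour].mean() / (ies.mean() + 1e-12) if neighbour.any() else 0.0
        return score - penalty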

Keywords: Sample Correlation IESFOgram, cyclostationary analysis, improved envelope spectrum, IES, rolling element bearing diagnosis, spectral coherence.

145 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector

Authors: S. Jeyarajan

Abstract:

A concept of knowledge is derived from a study conducted on successful competition among Sri Lankan non-state higher educational institutes. The concept is built from knowledge management practices collected from reputed literature, such as Emerald Insight, and from the non-state higher education sector itself. A test was conducted to reveal the existence of these practices, and the reasons behind them, in Sri Lankan non-state higher education institutes. Further, the absence of such prior studies and uncertainty about the number of participants available for data collection in the Sri Lankan context contributed to the selection of a qualitative research method, which used attributes of the Delphi method to manage this uncertainty. Data were collected under the dramaturgical method, which supports efficient use of the Delphi method. Grounded theory was selected as the data analysis technique and was conducted in intermixed discourses to manage the different perspectives of the data, which were collected systematically through perspective and modified snowball sampling techniques. Consequently, the analysis revealed agreement between the results of the grounded theories and the findings of a foreign study, even though the present study was conducted as qualitative research and the foreign study as quantitative research. As such, the present study widens the discovery made in the foreign study. Further, having discovered the reasons behind the existence of these practices, the present results offer a concept of knowledge from the Sri Lankan non-state sector for managing higher educational institutes successfully.

Keywords: Adherence of snowball sampling into perspective sampling, Delphi method in qualitative method, grounded theory development in intermix discourses of analysis, knowledge management for success of higher educational institutes.

144 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and used to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
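
As a small illustration of the outlier-screening step, the sketch below applies 1.5xIQR fences to an energy-use-intensity column of a pandas DataFrame; the file and column names and the fence factor are illustrative choices, not the exact criteria used in the study.

    import pandas as pd

    def iqr_screen(df, col, k=1.5):
        """Split records into those inside and outside the k*IQR fences for one column."""
        q1, q3 = df[col].quantile([0.25, 0.75])
        lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
        inside = df[col].between(lo, hi)
        return df[inside], df[~inside]

    # usage: clean, outliers = iqr_screen(pd.read_csv("ontario_psi_residences.csv"), "ekWh_per_m2")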

Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.

143 Performance Analysis of Organic Rankine Cycle Technology to Exploit Low-Grade Waste Heat to Power Generation in Indian Industry

Authors: Bipul Krishna Saha, Basab Chakraborty, Ashish Alex Sam, Parthasarathi Ghosh

Abstract:

The demand for energy is increasing steadily with time. Since the availability of conventional energy resources is gradually declining, significant interest is being placed on searching for alternative energy resources and minimizing the wastage of energy in various fields. From this perspective, low-grade waste heat from several industrial sources can be reused to generate electricity. The present work furthers the adoption of Organic Rankine Cycle (ORC) technology in the Indian industrial sector. This paper extends a previously reported idea to the next level through a comparative review of three different working fluids, using practical data from an Indian industrial plant. For a comprehensive study in the Aspen HYSYS® v8.6 simulation platform, waste heat data were collected from an operating coke oven gas plant in India. A parametric analysis of non-regenerative and regenerative ORCs was executed using the working fluids R-123, R-11 and R-21 for a subcritical ORC system. The primary goal is to determine the optimal working fluid considering various system parameters such as turbine work output, system efficiency, irreversibility rate and second-law efficiency under the applied heat source temperatures (160 °C-180 °C). Selection of the turbo-expander is one of the most crucial tasks for low-temperature applications in an ORC system, and the present work attempts to make suitable recommendations for the appropriate configuration of the turbine. In a nutshell, this study justifies the feasibility of integrating ORC technology in the Indian context and also identifies the appropriate parameters of all components integrated in the ORC system for building an ORC prototype.
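
A back-of-the-envelope first-law efficiency estimate for a simple subcritical ORC on one of the candidate fluids can be made with CoolProp, as sketched below; the saturation temperatures and component efficiencies are illustrative assumptions and are far cruder than the Aspen HYSYS models used in the study.

    from CoolProp.CoolProp import PropsSI

    wf = "R123"
    T_evap, T_cond = 273.15 + 120.0, 273.15 + 40.0            # assumed saturation temperatures, K
    p_evap = PropsSI("P", "T", T_evap, "Q", 1, wf)
    p_cond = PropsSI("P", "T", T_cond, "Q", 1, wf)

    h1 = PropsSI("H", "P", p_cond, "Q", 0, wf)                # saturated liquid leaving condenser
    s1 = PropsSI("S", "P", p_cond, "Q", 0, wf)
    h2s = PropsSI("H", "P", p_evap, "S", s1, wf)              # isentropic pump outlet
    h3 = PropsSI("H", "P", p_evap, "Q", 1, wf)                # saturated vapour leaving evaporator
    s3 = PropsSI("S", "P", p_evap, "Q", 1, wf)
    h4s = PropsSI("H", "P", p_cond, "S", s3, wf)              # isentropic turbine outlet

    eta_turbine, eta_pump = 0.75, 0.65                        # assumed isentropic efficiencies
    w_net = eta_turbine * (h3 - h4s) - (h2s - h1) / eta_pump
    q_in = h3 - (h1 + (h2s - h1) / eta_pump)
    print(f"cycle thermal efficiency ~ {w_net / q_in:.3f}")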

Keywords: Organic Rankine cycle, regenerative organic Rankine cycle, waste heat recovery, Indian industry.

142 Assessing Storage Stability and Mercury Reduction of Freeze-Dried Pseudomonas putida within Different Types of Lyoprotectant

Authors: A. A. M. Azoddein, Y. Nuratri, A. B. Bustary, F. A. M. Azli, S. C. Sayuti

Abstract:

Pseudomonas putida is a potential strain for the biological treatment of mercury contained in the effluent of the petrochemical industry, owing to its mercury reductase enzyme, which is able to reduce ionic mercury to elemental mercury. Freeze-dried P. putida allows easy, inexpensive shipping and handling and high product stability. This study aimed to freeze-dry P. putida cells with the addition of a lyoprotectant. The lyoprotectant was added to the cell suspension prior to freezing, and the dried P. putida obtained was then mixed with synthetic mercury. The viability of the recovered P. putida after freeze-drying was significantly influenced by the type of lyoprotectant. Among the lyoprotectants, Tween 80/sucrose was found to be the best. Sucrose was able to recover more than 78% (6.2E+09 CFU/ml) of the original cells (7.90E+09 CFU/ml) after freeze-drying and retained 5.40E+05 viable cells after 4 weeks of storage at 4 °C without vacuum. Polyethylene glycol (PEG) pre-treated and broth pre-treated cells recovered more than 64% (5.0E+09 CFU/ml) and >0.1% (5.60E+07 CFU/ml), respectively, after freeze-drying, and could not survive after 4 weeks of storage. Freeze-drying did not substantially change the growth pattern of P. putida, but an extension of the lag time by 1 hour was found after 3 weeks of storage. Additional time is therefore required for freeze-dried P. putida cells to recover before introducing them to more demanding conditions such as mercury solution. The maximum mercury reduction of the PEG pre-treated freeze-dried cells was 56.78% immediately after freeze-drying and 17.91% after 3 weeks of storage. The maximum mercury reduction of the Tween 80/sucrose pre-treated freeze-dried cells was 26.35% immediately after freeze-drying and 25.03% after 3 weeks of storage. Freeze-dried P. putida was found to have lower mercury reduction compared to fresh P. putida grown on agar. Results from this study may be a beneficial and useful initial reference before commercializing freeze-dried P. putida.

Keywords: Pseudomonas putida, freeze-drying, PEG, Tween 80/sucrose, mercury, cell viability.

141 A Refined Application of QFD in SCM, A New Approach

Authors: Nooshin La'l Mohamadi

Abstract:

Because customers in the new century tend to express globally increasing demands, networks of interconnected businesses have been established, and the management of such networks appears to be a major key to gaining competitive advantage. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool which serves the purpose not only of bringing quality to the ultimate provision of products or service packages required by the end customer or the retailer, but also of initiating a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation is considerably based on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, it is undeniable that, for any company, each wholesaler possesses a specific importance ratio which can heavily influence the figures calculated in the House of Quality (HOQ) in QFD. Moreover, due to the competitiveness of today's marketplace, it has been widely recognized that consumers' expressed demands are highly volatile over production periods. Such instability and proneness to change must be taken into account during the analysis of the HOQ. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process to consider the wholesalers' reputation, and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. Following this, the paper elaborates on the relevant contributory factors and approaches through the calculation of such coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.

Keywords: Analytic Network Process, Quality Function Deployment, QFD flaws, Supply Chain Management

140 The Fracture Resistance of Zirconia Based Dental Crowns from Cyclic Loading: A Function of Relative Wear Depth

Authors: T. Qasim, B. El Masoud, D. Ailabouni

Abstract:

This in vitro study investigated the fatigue resistance of veneered zirconia molar crowns with different veneering ceramic thicknesses, simulating relative wear depths under simulated cyclic loading. A mandibular first molar was prepared and then scanned using computer-aided design/computer-aided manufacturing (CAD/CAM) technology to fabricate 32 zirconia copings of uniform 0.5 mm thickness. The manufactured copings were then veneered with 1.5 mm, 1.0 mm, 0.5 mm, and 0.0 mm layers, representing 0%, 33%, 66%, and 100% relative wear of a normal ceramic thickness of 1.5 mm. All samples were thermally aged through 6000 thermo-cycles of 2 minutes each in distilled water between 5 °C and 55 °C. The samples were then subjected to cyclic fatigue and fracture testing using an SD Mechatronik chewing simulator, loaded up to 1.25x10⁶ cycles or until failure. During fatigue testing, extensive cracks were observed in samples with 0.5 mm veneering layer thickness. The 1.5 mm and 1.0 mm veneering layer thickness groups did not differ in terms of the loads necessary to cause an initial crack or final failure. All-ceramic zirconia-based crown restorations with varying occlusal veneering layer thicknesses appeared to be fatigue resistant. Fracture load measurements for all tested groups before and after fatigue loading exceeded the clinical chewing forces in the posterior region. In general, the fracture loads increased after fatigue loading and with the increase in thickness of the occlusal layering ceramic.

Keywords: All ceramic, dental crowns, relative wear, chewing simulator, cyclic loading, thermal ageing.
