Search results for: Curve Length Method
4452 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desired Type-I error for models estimated using PQL2 and failed for almost all combinations of MQL. The power of the test was adequate for most combinations in all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.
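A minimal sketch of how the empirical Type-I error of such a GOF test can be checked by simulation. The p-value generator below is a stand-in for the MLwiN model fits and GOF test runs, which are not reproduced here; under a correctly specified null, p-values are uniform on (0, 1):

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_type1_error(p_values, alpha=0.05):
    """Fraction of simulated null datasets on which the GOF test rejects."""
    p_values = np.asarray(p_values)
    rate = np.mean(p_values < alpha)
    # Monte Carlo standard error of the rejection rate
    se = np.sqrt(rate * (1.0 - rate) / p_values.size)
    return rate, se

# Stand-in for p-values returned by the GOF test on data simulated
# under a correctly specified model (uniform under the null).
p_vals = rng.uniform(0.0, 1.0, size=1000)

rate, se = empirical_type1_error(p_vals)
print(f"empirical Type-I error: {rate:.3f} +/- {1.96 * se:.3f}")
# A test "maintains" the nominal level if this interval covers 0.05.
```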
4451 Judicial Review of Indonesia's Position as the First Archipelagic State to Implement the Traffic Separation Scheme to Establish Maritime Safety and Security
Authors: Rosmini Yanti, Safira Aviolita, Marsetio
Abstract:
Indonesia has several straits that are very important as shipping lanes, including the Sunda Strait and the Lombok Strait, which are part of the Indonesian Archipelagic Sea Lane (IASL). An increase in traffic along the archipelagic sea lanes makes the task of monitoring sea routes increasingly difficult. Indonesia has proposed the establishment of a Traffic Separation Scheme (TSS) in the Sunda Strait and the Lombok Strait, and the country now has the right to conceptualize the TSS as well as the obligation to regulate it. Indonesia has the right to maintain national safety and sovereignty. In setting the TSS, Indonesia needs to issue national regulations that are in accordance with international law; the general provisions of the IMO (International Maritime Organization) can then be used as guidelines for maritime safety and security in the Sunda Strait and the Lombok Strait. The research method used is a qualitative method with the concept of linguistic and visual data collection. The sources of data are the analysis of documents and regulations. The results show that the determination of a TSS is justified by international law, in accordance with Articles 22, 41, and 53 of the United Nations Convention on the Law of the Sea (UNCLOS) 1982. The determination of the TSS by the Indonesian government would be in accordance with COLREG (Convention on the International Regulations for Preventing Collisions at Sea) Rule 10, which has been designed to follow the IASL. Thus, the TSS can serve as a safety and monitoring medium to minimize ship accidents or collisions, including those involving the warships and aircraft of other countries that cross the IASL.
Keywords: Archipelago State, maritime law, maritime security, traffic separation scheme.
4450 Investigation of the Operational Principle and Flow Analysis of a Newly Developed Dry Separator
Authors: Sung Uk Park, Young Su Kang, Sangmo Kang, Yong Kweon Suh
Abstract:
Mineral processing, waste concrete (fine aggregate) recycling, and the optical, industrial, and construction fields employ separators to separate solids and classify them according to their size. Various sorting machines are used in industry, operating on electrical properties, centrifugal force, wind power, vibration, or magnetic force. This study on separators was carried out to contribute to the environmental industry. We perform CFD analysis to understand the basic mechanism of the separation of waste concrete (fine aggregate) particles from air with a machine built around a bladed rotor. In the CFD work, we first performed two-dimensional particle tracking for various particle sizes for models with 1°, 1.5°, and 2° angles between adjacent blades, to verify the boundary conditions and the rotating-domain method to be used in 3D. We then developed a 3D numerical model with ANSYS CFX to calculate the air flow and track the particles. We judged the capability of particle separation for a given size by counting the number of particles escaping from the domain toward the exit among 10 particles issued at the inlet. We confirm that particles exhibit stagnant behavior near the exit of the rotating blades, where the centrifugal force acting on the particles is in balance with the air drag force. It was also found that the minimum particle size that can be separated by the machine is determined by the particles' capability to stay at the outlet of the rotor channels.
Keywords: Environmental industry, separator, CFD, fine aggregate.
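The stagnation condition described above — centrifugal force balancing air drag at the rotor exit — admits a back-of-the-envelope estimate of the cut diameter. The sketch below assumes Stokes drag on a spherical particle and uses illustrative values for viscosity, density, rotor speed and radius; it is not the paper's CFD model:

```python
import math

# Illustrative parameters (assumptions, not values from the study)
mu = 1.8e-5                 # air dynamic viscosity [Pa.s]
rho_p = 2500.0              # particle density, fine aggregate [kg/m^3]
omega = 2 * math.pi * 50.0  # rotor angular speed, 50 rev/s [rad/s]
r = 0.15                    # radial position at the blade exit [m]
u = 2.0                     # inward radial air speed in the channel [m/s]

# Stokes drag on a sphere:  F_d = 3*pi*mu*d*u
# Centrifugal force:        F_c = (pi/6)*d^3*rho_p*omega^2*r
# Setting F_c = F_d gives the stagnation (cut) diameter:
d_cut = math.sqrt(18.0 * mu * u / (rho_p * omega**2 * r))
print(f"estimated cut diameter: {d_cut * 1e6:.1f} micrometres")
```

Particles larger than this diameter are flung back by the rotor; smaller ones follow the air toward the exit, which is the separation criterion the abstract describes.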
4449 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure
Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar
Abstract:
This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined through which the effects of the spectral ordinates up to the effective period (2T1) on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in probabilistic and deterministic forms. A MATLAB code involving OpenSees was developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field set presented by FEMA is used in a series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, with correlation coefficients up to 0.93. A near-field collapse capacity prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, with good agreement. Implementation of the proposed equation on the four archetype RC structures demonstrated different collapse capacities at near-field sites compared to those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.
Keywords: Collapse capacity, fragility analysis, spectral shape effects, IDA method.
4448 Moderation in Temperature Dependence on Counter Frictional Coefficient and Prevention of Wear of C/C Composites by Synthesizing SiC around Surface and Internal Vacancies
Authors: Noboru Wakamoto, Kiyotaka Obunai, Kazuya Okubo, Toru Fujii
Abstract:
The aim of this study is to moderate the dependence of the counter frictional coefficient on temperature between counter surfaces and to reduce the wear of C/C composites at low temperature. To modify the C/C composites, silica (SiO2) powder was added into the phenolic resin used as the carbon precursor. The preform plate of the precursor of the C/C composites was prepared by the conventional filament winding method. The C/C composite plates were obtained by carbonizing the preform plate at 2200 °C under an argon atmosphere. At that time, silicon carbide (SiC) was synthesized around the surfaces and the internal vacancies of the C/C composites. The frictional coefficient on the counter surfaces and the specific wear volumes of the C/C composites were measured with our pin-on-disk-type friction test machine. XRD indicated that SiC was synthesized in the body of the C/C composite fabricated by the current method. The friction tests showed that the friction coefficient of the unmodified C/C composites depends on temperature when the test condition is changed. In contrast, the frictional coefficient of the C/C composite modified with SiO2 powder was almost constant at about 0.27 when the temperature condition was changed from room temperature (RT) to 300 °C. The specific wear rate decreased from 25×10⁻⁶ mm²/N to 0.1×10⁻⁶ mm²/N. Observation of the surfaces after the friction tests showed that the frictional surface of the modified C/C composites was covered with a film produced by the friction. This study found that synthesizing SiC around the surface and internal vacancies of C/C composites is effective in moderating the temperature dependence of the frictional coefficient and reducing the wear of C/C composites.
Keywords: C/C composites, frictional coefficient, SiC, wear.
4447 An Approach to Solving the Non-Steiner Minimum Link Path Problem
Authors: V. Tereshchenko, A. Tregubenko
Abstract:
In this study, we survey a method for quickly finding a minimum link path between two arbitrary points within a simple polygon, where the path may pass only through the polygon's vertices, using preprocessing.
Keywords: Minimum link path, simple polygon, Steiner points, optimal algorithm.
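Once vertex-to-vertex visibility inside the polygon is known, a vertex-restricted minimum link path reduces to a breadth-first search: each BFS layer adds one link. The sketch below assumes the visibility graph has already been precomputed (the preprocessing step the abstract refers to) and only counts links:

```python
from collections import deque

def min_link_path(visibility, start, goal):
    """BFS over a precomputed visibility graph of polygon vertices.

    visibility: dict mapping vertex -> iterable of vertices it can see.
    Returns the path with the fewest links (edges), or None.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:       # walk parents back to the start
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in visibility[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None

# Toy visibility graph (assumed, not derived from a real polygon)
vis = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(min_link_path(vis, 0, 4))  # e.g. [0, 1, 3, 4] -> 3 links
```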
4446 Synthesis and Applications of Heteronanostructured ZnO Nanowires Array
Authors: Minsu Seol, Youngjo Tak, Guenjai Kwak, Kijung Yong
Abstract:
ZnO heteronanostructured nanowire arrays have been fabricated by a low-temperature solution method. Various heterostructures were synthesized, including CdS/ZnO, CdSe/CdS/ZnO nanowires and Co3O4/ZnO, ZnO/SiC nanowires. These multifunctional heterostructure nanowires show important applications in photocatalysis, sensors, wettability control and solar energy conversion.
Keywords: ZnO nanowires, heterostructure nanowires, solar energy conversion, photocatalysis.
4445 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana
Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta
Abstract:
Over the last decade, due to climate change and a strategy of natural resource preservation, interest in aquatic biomass has dramatically increased. Along with mitigation of environmental pressure and the connection of waste streams (including CO2 and heat emissions), the microalgae bioeconomy can supply the food, feed, pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems such as a raceway pond (5000 L) and flat-panel reactors (8 × 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the further extraction of value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. Metabolic needs for nutrients were met by the addition of micro- and macro-nutrients into the medium to ensure autotrophic growth conditions for the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction was conducted in a repeated-batch semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol were used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation between 8.1% and 13.9%. The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods: Total Kjeldahl Nitrogen (TKN), which was further converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The results showed a good correlation between both methods, with the protein content in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues were used as substrate for anaerobic digestion at the laboratory scale. The methane concentration, measured on a daily basis, showed some variation between samples after the extraction steps but was in the range of 48% to 55%. The CO2 formed during the fermentation process and after combustion in the combined heat and power unit can potentially be reused within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
Keywords: Bioeconomy, lipids, microalgae, proteins, saccharides.
4444 Effect of Three Drying Methods on Antioxidant Efficiency and Vitamin C Content of Moringa oleifera Leaf Extract
Authors: Kenia Martínez, Geniel Talavera, Juan Alonso
Abstract:
Moringa oleifera is a plant containing many nutrients that are mostly concentrated within the leaves. Commonly, the separation process of these nutrients involves solid-liquid extraction followed by evaporation and drying to obtain a concentrated extract, which is rich in proteins, vitamins, carbohydrates, and other essential nutrients that can be used in the food industry. In this work, three drying methods were used, which involved very different temperature and pressure conditions, to evaluate the effect of each method on the vitamin C content and the antioxidant efficiency of the extracts. Solid-liquid extractions of Moringa leaf (LE) were carried out by employing an ethanol solution (35% v/v) at 50 °C for 2 hours. The resulting extracts were then dried i) in a convective oven (CO) at 100 °C and at an atmospheric pressure of 750 mbar for 8 hours, ii) in a vacuum evaporator (VE) at 50 °C and at 300 mbar for 2 hours, and iii) in a freeze-drier (FD) at -40 °C and at 0.050 mbar for 36 hours. The antioxidant capacity (EC50, mg solids/g DPPH) of the dry solids was calculated by the free radical inhibition method employing DPPH˙ at 517 nm, resulting in a value of 2902.5 ± 14.8 for LE, 3433.1 ± 85.2 for FD, 3980.1 ± 37.2 for VE, and 8123.5 ± 263.3 for CO. The calculated antioxidant efficiency (AE, g DPPH/(mg solids·min)) was 2.920 × 10⁻⁵ for LE, 2.884 × 10⁻⁵ for FD, 2.512 × 10⁻⁵ for VE, and 1.009 × 10⁻⁵ for CO. Further, the content of vitamin C (mg/L) determined by HPLC was 59.0 ± 0.3 for LE, 49.7 ± 0.6 for FD, 45.0 ± 0.4 for VE, and 23.6 ± 0.7 for CO. The results indicate that convective drying preserves vitamin C and antioxidant efficiency to 40% and 34% of the initial value, respectively, while vacuum drying preserves them to 76% and 86%, and freeze-drying to 84% and 98%, respectively.
Keywords: Antioxidant efficiency, convective drying, freeze-drying, Moringa oleifera, vacuum drying, vitamin C content.
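The retention percentages quoted above follow directly from the reported means; a short check of that arithmetic using only the values in the abstract:

```python
# Mean values from the abstract (uncertainties omitted)
vitamin_c = {"LE": 59.0, "FD": 49.7, "VE": 45.0, "CO": 23.6}       # mg/L
ae = {"LE": 2.920e-5, "FD": 2.884e-5, "VE": 2.512e-5, "CO": 1.009e-5}

for method in ("CO", "VE", "FD"):
    vc_ret = 100.0 * vitamin_c[method] / vitamin_c["LE"]
    ae_ret = 100.0 * ae[method] / ae["LE"]
    print(f"{method}: vitamin C retained {vc_ret:.0f}%, "
          f"antioxidant efficiency retained {ae_ret:.0f}%")
# CO ~40%/35%, VE ~76%/86%, FD ~84%/99% -- matching the abstract's
# 40%/34%, 76%/86% and 84%/98% figures up to rounding.
```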
4443 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method
Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma
Abstract:
The plastic industry plays a very important role in the economy of any country and generally accounts for a leading share of it. Since metals and their alloys are only rarely available in the earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial. About 50% of plastic products are manufactured by the injection moulding process. For the production of better quality products, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastics; hence, control of the process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing plastic products. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene was studied. Tensile strength tests of the material produced by the injection moulding machine were carried out on a universal testing machine. By using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of a plastic product with good tensile strength.
Keywords: Injection moulding, tensile strength, Taguchi method, polypropylene.
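For tensile strength, the usual Taguchi criterion is the larger-the-better signal-to-noise ratio. A minimal sketch of the S/N computation and factor-level ranking over an L9-style orthogonal array; the strength data and factor assignment are made up for illustration, not the paper's measurements:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-the-better response (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical L9(3^4) layout: columns = levels (1..3) for injection
# pressure, melt temperature, packing pressure, packing time.
design = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
# Two replicated tensile strengths [MPa] per run (illustrative numbers)
y = [[31, 32], [33, 34], [35, 35], [32, 33], [36, 35],
     [30, 31], [34, 34], [31, 30], [33, 32]]
sn = np.array([sn_larger_is_better(run) for run in y])

factors = ["inj. pressure", "melt temp.", "packing pressure", "packing time"]
for j, name in enumerate(factors):
    means = [sn[design[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    delta = max(means) - min(means)   # larger delta => larger contribution
    print(f"{name:18s} mean S/N by level: "
          + ", ".join(f"{m:.2f}" for m in means) + f"   delta={delta:.2f}")
```

Ranking the factors by delta is the usual way to read off which parameter (packing pressure, in the paper's finding) contributes most.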
4442 Actual Nursing Competency among Nurses in Hospital in Vietnam
Authors: Do Thi Ha, Khanitta Nuntaboot
Abstract:
Background: The competency of nurses is vital to safe nursing practice, as well as an essential component in driving the quality of nursing services. Little up-to-date information exists concerning the actual competency of Vietnamese nurses. Purpose: The purpose of this study is to identify the actual nursing competency of nurses in clinical settings in Vietnam. Methods: A qualitative, ethnographic study comprising participant observation, in-depth interviews, and focus group discussions with multidisciplinary groups of nurses employed at Cho Ray Hospital, Vietnam, together with managers/administrators, nurse teachers, medical doctors, other health care providers, patients and family members, recruited through a purposeful sampling technique. Content analysis was used for data analysis. Results: Five essential themes of nursing competency were identified: (1) knowledge, (2) skills, (3) attitude and value-based nursing practice, (4) legal and ethical competencies, and (5) transcultural competencies. Basic and advanced knowledge were identified as two further dimensions of knowledge. Five sub-themes were identified as further dimensions of skills: technical skills, communication skills, organizing and management skills, teamwork and interrelationship, and critical thinking skills. Conclusions: The findings from this study provide valuable information on and understanding of the actual competency of nurses in clinical settings in Vietnam. It is expected that this understanding will assist in developing a guide to nursing education and training, nursing practice and relevant policy regulation used for promoting nursing competency among nurses.
Keywords: Nursing competency, qualitative design, ethnographic method, Vietnam.
4441 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam
Authors: N. Chegeni, M. J. Tahmasebi Birgani
Abstract:
Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use and low-cost dose distribution algorithms are desirable to test and verify the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and the depth dose on the central axis based on the equivalent square concept. This method was then generalized to calculate the profiles at any depth and for several regular or irregular field shapes under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for the irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies, and for 45° wedges, which were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields. The calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points. However, the maximum γ observed was about 1.10, occurring in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm for calculating the dose distribution for some treatment fields in conventional radiotherapy.
Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.
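The γ-index comparison described above can be sketched for 1-D profiles. This toy version uses a brute-force search with the paper's 3%/3 mm criteria; global dose normalization, evenly spaced points and the synthetic profiles are simplifying assumptions:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_pct=3.0):
    """Brute-force 1-D gamma index with global dose normalization."""
    d_max = np.max(d_ref)
    gammas = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = (100.0 * (d_eval - dr) / (dd_pct * d_max)) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Synthetic "measured" and "calculated" profiles (illustrative only):
# flat top with sigmoid penumbrae, slightly shifted between the two
x = np.linspace(-50, 50, 201)                       # position [mm]
measured = 100.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 3.0))
calculated = 100.0 / (1.0 + np.exp((np.abs(x) - 30.5) / 3.2))

g = gamma_1d(x, measured, x, calculated)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
print(f"max gamma: {g.max():.2f}")   # typically largest in the penumbra
```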
4440 Combined Sewer Overflow forecasting with Feed-forward Back-propagation Artificial Neural Network
Authors: Achela K. Fernando, Xiujuan Zhang, Peter F. Kinley
Abstract:
A feed-forward, back-propagation Artificial Neural Network (ANN) model has been used to forecast the occurrence of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for a priori understanding and representation of the underlying hydrological-hydraulic phenomena in mathematical terms, but enables learning the characteristics of a sewer overflow from historical data. The performance of the standard feed-forward, back-propagation-of-error algorithm was enhanced by a modified data normalizing technique that enabled the ANN model to extrapolate into territory unseen by the training data. The algorithm and the data normalizing method are presented along with the ANN model output results, which indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to forecast the overflow rates without the antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.
Keywords: Artificial Neural Networks, back-propagation learning, combined sewer overflows, forecasting.
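The abstract does not spell out the modified normalizing technique. One common way to give a network extrapolation headroom is to map the training range into an interior sub-interval such as [0.1, 0.9] instead of [0, 1], so that values moderately beyond the training maximum still fall inside a sigmoid unit's representable band. The sketch below illustrates that idea only, as an assumption, not the authors' exact formula:

```python
import numpy as np

class HeadroomScaler:
    """Min-max scaling into [lo, hi] (inside (0, 1)) so that values a
    little outside the training range still map below the saturation
    of a sigmoid output unit -- a hypothetical stand-in for the paper's
    modified normalizing technique."""

    def __init__(self, lo=0.1, hi=0.9):
        self.lo, self.hi = lo, hi

    def fit(self, x):
        self.xmin, self.xmax = float(np.min(x)), float(np.max(x))
        return self

    def transform(self, x):
        frac = (np.asarray(x) - self.xmin) / (self.xmax - self.xmin)
        return self.lo + frac * (self.hi - self.lo)

    def inverse(self, y):
        frac = (np.asarray(y) - self.lo) / (self.hi - self.lo)
        return self.xmin + frac * (self.xmax - self.xmin)

flows = np.array([0.0, 5.0, 12.0, 30.0])   # training flow rates (toy)
scaler = HeadroomScaler().fit(flows)
print(scaler.transform([30.0, 33.0]))
# 33 exceeds the training maximum by 10% yet maps to ~0.98 < 1.0,
# so the network can still represent (extrapolate to) it.
```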
4439 Service Flow in Multilayer Networks: A Method for Evaluating the Layout of Urban Medical Resources
Authors: Guanglin Song
Abstract:
Situated within the context of China's tiered medical treatment system, this study analyzes the spatial causes of urban healthcare access difficulties from the perspective of the configuration of healthcare facilities. A social network analysis approach is employed to construct a healthcare demand and supply flow network between major residential clusters and the various tiers of hospitals in the city. The findings reveal that: 1) There is an overall maldistribution and over-concentration of healthcare resources in the study area, characterized by structural imbalance. 2) The low rate of primary care utilization in the study area is a key factor contributing to congestion at higher-tier hospitals, as excessive reliance on these institutions by neighboring communities exacerbates the problem. 3) Gradual optimization of the healthcare facility layout in the study area, encompassing the holistic, local, and individual institutional levels, can enhance systemic efficiency and resource balance. This research proposes a method for evaluating urban healthcare resource distribution structures based on service flows within hierarchical networks. It offers spatially targeted optimization suggestions for promoting the implementation of the tiered healthcare system and alleviating challenges related to accessibility and congestion in seeking medical care. In addition, the study provides new ideas for researchers and healthcare managers in countries and cities around the world facing similar challenges.
Keywords: Flow of public services, healthcare facilities, spatial planning, urban networks.
4438 Entropy Generation and Heat Transfer of Cu–Water Nanofluid Mixed Convection in a Cavity
Authors: Mliki Bouchmel, Belgacem Nabil, Abbassi Mohamed Ammar, Geudri Kamel, Omri Ahmed
Abstract:
In this numerical work, the mixed convection and entropy generation of a Cu–water nanofluid in a lid-driven square cavity have been investigated numerically using the Lattice Boltzmann Method. The horizontal walls of the cavity are adiabatic, and the vertical walls are held at constant but different temperatures. The top wall is considered to move from left to right at a constant speed, U0. The effects of different parameters such as nanoparticle volume concentration (0–0.05), Rayleigh number (10⁴–10⁶) and Reynolds number (1, 10 and 100) on the entropy generation, flow and temperature fields are studied. The results show that the addition of nanoparticles to the base fluid affects the entropy generation, flow pattern and thermal behavior, especially at higher Rayleigh and low Reynolds numbers. For the pure fluid as well as the nanofluid, increasing the Reynolds number increases the average Nusselt number and the total entropy generation linearly. The maximum entropy generation occurs in the nanofluid at low Rayleigh number and high Reynolds number. The minimum entropy generation occurs in the pure fluid at low Rayleigh and Reynolds numbers. Also, at higher Reynolds number, the effect of the Cu nanoparticles on heat transfer enhancement decreases because the effect of the lid-driven flow increases. The present results are validated by favorable comparisons with previously published results. The results of the problem are presented in graphical and tabular forms and discussed.
Keywords: Entropy generation, mixed convection, nanofluid, lattice Boltzmann method.
4437 Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph
Authors: A. Badoud, M. Khemliche, S. Latreche
Abstract:
The main objective of this paper is to find a graphical technique for the modeling, simulation and diagnosis of industrial systems. Its importance is most apparent for a complex system such as a pressurized-water nuclear reactor, with its several nonlinearities and time scales, where the analytical approach is heavy and does not give a quick picture of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphic model, and the simulation software SYMBOLS 2000, specific to Bond Graphs, made it possible to validate and obtain the results given by the technical specifications. We introduce the analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis, to determine the severity of a detected fault. We show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very involved as soon as the systems considered are no longer elementary. Faced with this complexity, we chose to apply the Fault Detection and Isolation (FDI) method through analysis of the control problem, and to design a reliable diagnosis system capable of handling spatially distributed complex dynamic systems, applied to the standard pressurized-water nuclear reactor.
Keywords: Bond Graph, modeling, simulation, monitoring, analytical redundancy relations, pressurized water reactor, directed graph.
4436 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain
Authors: Bita Payami-Shabestari, Dariush Eslami
Abstract:
The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave in an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared in terms of the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and the CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.
4435 Existence of Solution for Boundary Value Problems of Differential Equations with Delay
Authors: Xiguang Li
Abstract:
In this paper, by using a fixed point theorem, the upper and lower solutions method, and the monotone iterative technique, we prove the existence of maximal and minimal solutions of differential equations with delay, which improves and generalizes the results of related papers.
Keywords: Banach space, boundary value problem, differential equation, delay.
4434 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy
Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki
Abstract:
Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varying from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique is applied to study the processing mechanism in depth without separation of the solvent. It has been found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm⁻¹) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl groups in the solvent-soluble fraction suggests a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm⁻¹) revealed the presence of ester groups, carboxylic acid and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of carboxylic acid and the conversion of esters into carbon dioxide indicate that a decarboxylation reaction occurred during the extraction process.
Keywords: Biomass upgrading, liquid membrane-Fourier transform infrared spectroscopy, FTIR, degradative solvent extraction, mechanism.
4433 Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology
Authors: Jagadish H. Pujar, Pallavi S. Gurjal, Shambhavi D. S, Kiran S. Kunnur
Abstract:
Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, by identifying the type of noise, and an edge detection ideology, which seems to be a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of normalized-cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast and considerable noise, and that it has potential for practical use in medical image diagnosis.
Keywords: Image Segmentation, Image smoothing, Edge Detection, Impulsive noise, Gaussian noise, Median filter, Canny edge, Eigen values, Eigen vector.
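A compact sketch of the first two stages of such a pipeline — a crude noise-type check followed by the matching filter and Canny edges — using OpenCV in Python rather than the paper's MATLAB. The salt-and-pepper heuristic, thresholds and filename are assumptions for illustration, and the Frost filter (for multiplicative speckle) has no off-the-shelf OpenCV call, so it is omitted here:

```python
import cv2
import numpy as np

def denoise_by_noise_type(gray):
    """Pick a smoothing filter from a crude local-histogram noise check."""
    # A high fraction of extreme pixel values suggests impulsive
    # (salt-and-pepper) noise; otherwise assume additive Gaussian noise.
    extreme = np.mean((gray <= 5) | (gray >= 250))
    if extreme > 0.02:
        return cv2.medianBlur(gray, 5)           # impulsive -> median
    return cv2.GaussianBlur(gray, (5, 5), 0)     # additive  -> Gaussian

# Hypothetical input file; any grayscale medical image works
img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
smoothed = denoise_by_noise_type(img)
edges = cv2.Canny(smoothed, 50, 150)   # hysteresis thresholds (assumed)
cv2.imwrite("edges.png", edges)
```

The normalized-cut eigenvector segmentation stage would then operate on the edge-detected image; it is not reproduced in this sketch.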
4432 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example
Authors: D. Jayalakshmi, S. Bhosale
Abstract:
This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic-reinforced unpaved roads. The literature starting from 1970, together with the critical appraisals of flexible pavement design by Giroud and Han (2004) and Fannin (2006), is presented. The design example is illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method with the Indian Roads Congress guidelines IRC SP 72-2015. The input data considered are related to the subgrade soil conditions of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive with a California bearing ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, varying the rut depth from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogs is in good agreement with the Giroud and Han (2004) approach for rut depths in the range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, the base thickness for the reinforced case was obtained for Indian conditions using the same data, the appropriate Nc factor and the same rut depth. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and validation will be carried out with the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.
Keywords: Base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition.
4431 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is divided into two parts. First, a clustering algorithm is used to estimate the mixing matrix from the observed signals. Then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper pursues the idea of an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in the traditional clustering algorithm, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
Keywords: Clustering algorithm, potential function, speech signal, UBSS model.
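A minimal sketch of the clustering stage for the 4-source, 2-channel case: observation vectors are normalized to half the unit circle (folding the sign ambiguity) and their angles are clustered; the cluster centers give the mixing-matrix columns. Plain 1-D k-means stands in for the paper's improved potential-function method, which is not reproduced here, and the sparse Laplacian sources are a toy substitute for speech:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_mixing_matrix(X, n_sources, n_iter=50):
    """X: 2 x T observations. Returns a 2 x n_sources estimate of A."""
    X = X[:, np.linalg.norm(X, axis=0) > 1e-3]     # drop low-energy frames
    U = X / np.linalg.norm(X, axis=0)              # project to unit circle
    U[:, U[0] < 0] *= -1.0                         # fold sign ambiguity
    theta = np.arctan2(U[1], U[0])                 # angle of each frame
    centers = np.quantile(theta, np.linspace(0.1, 0.9, n_sources))
    for _ in range(n_iter):                        # 1-D k-means on angles
        labels = np.argmin(np.abs(theta[:, None] - centers[None, :]), axis=1)
        centers = np.array([theta[labels == k].mean()
                            if np.any(labels == k) else centers[k]
                            for k in range(n_sources)])
    return np.vstack([np.cos(centers), np.sin(centers)])

# Toy test: 4 sparse sources mixed into 2 channels
A = np.array([[np.cos(a), np.sin(a)] for a in (0.2, 0.7, 1.2, 1.7)]).T
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) > 0.9)  # sparse
A_hat = estimate_mixing_matrix(A @ S, n_sources=4)
print(np.round(A_hat, 2))   # columns ~ columns of A (up to order/sign)
```

The sparsity assumption is what makes this work: when at most one source is active in a frame, the observation points along one column of A.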
4430 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analyses in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology to obtain the nonlinear frequency response of structures whose nonlinearities can be considered localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: Frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber.
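The core of the linear SDMM step — updating a receptance FRF matrix for a lumped stiffness modification without re-running the FE solver — can be written with the Sherman–Morrison identity. The nonlinear version would iterate this update with a response-dependent (e.g. harmonically linearized) stiffness; the sketch shows the linear update only, on an assumed 2-DOF system, not the paper's MATLAB implementation:

```python
import numpy as np

def frf_matrix(K, M, C, w):
    """Receptance FRF H(w) = (K - w^2 M + i w C)^-1."""
    return np.linalg.inv(K - w**2 * M + 1j * w * C)

def add_grounded_spring(H, k, dof):
    """Sherman-Morrison update of H for a spring k from `dof` to ground:
    H_new = H - k * H[:, dof] H[dof, :] / (1 + k * H[dof, dof])."""
    col = H[:, dof][:, None]
    row = H[dof, :][None, :]
    return H - (k * col @ row) / (1.0 + k * H[dof, dof])

# Assumed 2-DOF spring-mass-damper chain
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
C = 0.002 * K                        # light proportional damping
w = 25.0                             # excitation frequency [rad/s]

H = frf_matrix(K, M, C, w)
H_mod = add_grounded_spring(H, k=500.0, dof=1)

# Cross-check against direct re-assembly of the modified system
K2 = K.copy()
K2[1, 1] += 500.0
assert np.allclose(H_mod, frf_matrix(K2, M, C, w))
print("rank-one FRF update matches direct inversion")
```

This is why the FEM only needs to be run once: the baseline H carries all the linear information, and each local modification is a cheap low-rank correction.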
4429 Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition
Authors: H K Lakshminarayana, J S Bhat, H M Mahesh
Abstract:
A new estimator for the evolutionary spectrum (ES) based on the short-time Fourier transform (STFT) and the modified group delay function (MGDF) with signal decomposition (SD) is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window with a reduction in the Gibbs ripple. The present work overcomes the magnitude distortion observed in multi-component non-stationary signals with STFT and MGDF estimation of the ES by using SD. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for cross-chirp and frequency shift keying (FSK) signals, and the results indicate that it performs better than STFT-MGDF (STFT-GD) alone. Further, its noise immunity is better than that of the STFT. The SD-based methods, however, cannot bring out the frequency transition path from band to band clearly, as there will be a gap in the contour plot at the transition. The PRFB-based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Keywords: Evolutionary spectrum, modified group delay, discrete cosine transform, harmonic wavelet transform, perfect reconstruction filter banks, short-time Fourier transform.
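The cross-chirp test signal used above is easy to reproduce. A minimal sketch of the plain STFT stage with SciPy, as a baseline ES estimate; the MGDF and SD refinements of the paper are not reproduced here:

```python
import numpy as np
from scipy.signal import chirp, stft

fs = 1000.0                          # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)

# Two chirps whose instantaneous frequencies cross mid-signal
x = chirp(t, f0=50, t1=1.0, f1=400) + chirp(t, f0=400, t1=1.0, f1=50)

f, tau, Z = stft(x, fs=fs, nperseg=128, noverlap=96)
S = np.abs(Z) ** 2                   # spectrogram: a basic ES estimate

# Ridge of the dominant component in each frame
print("dominant frequency per frame (Hz):", f[np.argmax(S, axis=0)][:10])
```

Near the crossing point the two components interfere within one analysis window, which is exactly where the SD step (separating the components into sub-bands first) is meant to help.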
4428 Continuous FAQ Updating for Service Incident Ticket Resolution
Authors: Kohtaroh Miyamoto
Abstract:
As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, a major problem is that over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed, in order to maintain its usefulness for problem resolution. These updates require a systematic approach to defining the exact portion of the FAQ to update and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text, together with the timelines of the information in the service incident tickets. We cluster the tickets by structured category information, and by keywords and keyword modifiers for the unstructured text. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year period. After the first 6 months, we started to create FAQs and confirmed that they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average reduction in ticket resolution time was between 31.6% and 43.9%. Subjective analysis showed that more than 75% of respondents reported the FAQ system as useful in reducing ticket resolution times.
Keywords: FAQ System, Resolution Time, Service Incident Tickets, IT System Maintenance.
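The abstract combines trends, resolution times and priorities into an urgency score without giving the formula. The sketch below is one plausible weighted-normalized form, stated purely as an assumption (the weights, normalization constants and field names are all hypothetical), to show where such a score slots into the update loop:

```python
from dataclasses import dataclass

@dataclass
class ClusterStats:
    trend: float             # ticket arrival growth rate for the cluster
    avg_resolution_h: float  # mean resolution time in hours
    priority: int            # 1 (highest) .. 4 (lowest)

def urgency(c: ClusterStats, w=(0.5, 0.3, 0.2)) -> float:
    """Hypothetical weighted urgency in [0, 1]; weights are assumptions."""
    trend_n = min(max(c.trend, 0.0), 1.0)           # clamp growth to [0, 1]
    resol_n = min(c.avg_resolution_h / 72.0, 1.0)   # 3 days -> 1.0
    prio_n = (4 - c.priority) / 3.0                 # priority 1 -> 1.0
    return w[0] * trend_n + w[1] * resol_n + w[2] * prio_n

clusters = {
    "vpn-timeouts": ClusterStats(trend=0.8, avg_resolution_h=40, priority=2),
    "printer-jobs": ClusterStats(trend=0.1, avg_resolution_h=6, priority=4),
}
# Update the FAQ entries for the most urgent ticket clusters first
for name, c in sorted(clusters.items(), key=lambda kv: -urgency(kv[1])):
    print(f"{name}: urgency {urgency(c):.2f}")
```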
4427 Studying the Temperature Field of Hypersonic Vehicle Structure with Aero-Thermo-Elasticity Deformation
Authors: Geng Xiangren, Liu Lei, Gui Ye-Wei, Tang Wei, Wang An-ling
Abstract:
Malfunction of the thermal protection system (TPS) caused by aerodynamic heating is a latent threat to aircraft structural safety. Accurately predicting the structural temperature field is therefore quite important for the TPS design of a hypersonic vehicle. Since Thornton's work in 1988, coupled methods for aerodynamic heating and heat transfer have developed rapidly. However, little attention has been paid to the influence of structural deformation on aerodynamic heating and the structural temperature field. In flight, especially long-endurance flight, the structural deformation caused by aerodynamic heating and temperature rise has a direct impact on the aerodynamic heating and the structural temperature field, so the coupled interaction cannot be neglected. In this paper, based on the method of static aero-thermo-elasticity and considering the influence of aero-thermo-elastic deformation, the coupled aerodynamic heating and heat transfer results of a hypersonic vehicle wing model were calculated. The results show that for low-curvature regions, such as the fuselage or the center-section wing, structural deformation has little effect on the temperature field. However, for stagnation regions with high curvature, the coupled effect is not negligible. Thus, it is quite important to take the effect of elastic deformation into account in structural temperature prediction. This work lays a solid foundation for improving the prediction accuracy of the temperature distribution of aircraft structures and the capacity to evaluate structural performance.
Keywords: Aero-thermo-elasticity, elastic deformation, structural temperature, multi-field coupling.
4426 Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience
Authors: Ashish R. Tanna, Hiren H. Joshi
Abstract:
The mineral with the chemical composition MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferro-spinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among the tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis has been developed in the C language and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method, and powder X-ray diffraction patterns were recorded. The authenticity of the program was thus checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach, to explain the "recovery" of the collinear spin structure due to Al3+ substitution in Mg-Zn ferrites, which otherwise exhibit A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role with regard to their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
Keywords: Spinel ferrites, Localized canting of spins, X-ray diffraction, Programming in Borland C.
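The sensitivity of the Bragg intensities to the cation distribution enters through the structure factor F(hkl) = Σⱼ mⱼ fⱼ exp[2πi(hxⱼ + kyⱼ + lzⱼ)]. The sketch below (in Python rather than the paper's C) shows that calculation for a deliberately reduced toy cell — one representative A site and B site with a spinel-like 1:2 multiplicity ratio and constant scattering factors — as a stand-in for the full 56-atom spinel cell and angle-dependent f(θ) tables a real program must use:

```python
import numpy as np

def structure_factor(hkl, sites):
    """F(hkl) = sum_j m_j * f_j * exp[2*pi*i (h,k,l).r_j]."""
    h = np.asarray(hkl, dtype=float)
    return sum(m * f * np.exp(2j * np.pi * np.dot(h, r))
               for m, f, r in sites)

def toy_intensity(hkl, x):
    """|F|^2 for a reduced two-site cell as the A-site Fe fraction x varies.
    Site positions, the 1:2 multiplicity ratio and the constant scattering
    factors (f_Fe = 26, f_Mg = 12) are simplifying assumptions."""
    f_A = x * 26 + (1 - x) * 12          # mean scattering factor, A site
    f_B = (1 - x) * 26 + x * 12          # mean scattering factor, B sites
    sites = [(1, f_A, np.array([0.125, 0.125, 0.125])),   # tetrahedral A
             (2, f_B, np.array([0.5, 0.5, 0.5]))]         # octahedral B
    return abs(structure_factor(hkl, sites)) ** 2

for x in (0.0, 0.5, 1.0):   # normal -> random -> inverse occupation
    print(f"x={x:.1f}  I(220)={toy_intensity((2, 2, 0), x):7.1f}  "
          f"I(311)={toy_intensity((3, 1, 1), x):7.1f}")
```

Even in this toy model, the relative (220)/(311) intensities shift as cations migrate between the A and B sites, which is the effect the fitting program exploits.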
4425 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir
Authors: Ahmad Fahim Nasiry, Shigeo Honma
Abstract:
We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study the fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in sand containing heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, according to the finite-difference approximation. To better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution for the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program was developed to solve the equations for pressure and water saturation and to draw the pressure and water distribution contour lines for the reservoir.
Keywords: Numerical simulation, immiscible, finite difference, IADI, IADE, waterflooding.
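Before the 2-D IADI/IADE solution, the essence of the saturation equation can be seen in a 1-D explicit upwind Buckley–Leverett sketch. The quadratic Corey-type fractional flow and all parameter values are illustrative assumptions, not the paper's reservoir data:

```python
import numpy as np

def fractional_flow(S, mu_w=1.0, mu_o=10.0):
    """Water fractional flow with quadratic relative permeabilities."""
    krw, kro = S**2, (1.0 - S) ** 2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

nx, L, phi, u = 200, 1.0, 0.2, 1e-5   # cells, length [m], porosity, flux
dx = L / nx
dt = 0.2 * phi * dx / u               # CFL-limited time step
S = np.zeros(nx)
S[0] = 1.0                            # water injected at the left face

for _ in range(300):                  # march the explicit upwind scheme
    f = fractional_flow(S)
    S[1:] -= (u * dt / (phi * dx)) * (f[1:] - f[:-1])
    S[0] = 1.0                        # injection boundary condition
print("front position ~", dx * np.argmax(S < 0.05), "m")
```

The sharp saturation front this produces is the Buckley–Leverett shock; the 2-D five-spot solution traces the same front as a contour of the water distribution.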
4424 Vibration Analysis of Magnetostrictive Nano-Plate by Using Modified Couple Stress and Nonlocal Elasticity Theories
Authors: Hamed Khani Arani, Mohammad Shariyat, Armaghan Mohammadian
Abstract:
In the present study, the free vibration of a magnetostrictive nano-plate (MsNP) resting on a Pasternak foundation is investigated. First, the modified couple stress (MCS) and nonlocal elasticity theories are compared and used to take the small-scale effects into account; not only are the two theories analyzed, but it is also shown that the MCS theory is more accurate than the nonlocal elasticity theory in such problems. A feedback control system is utilized to investigate the effects of a magnetic field. First-order shear deformation theory (FSDT), Hamilton's principle and the energy method are utilized to derive the equations of motion, and these equations are solved by the differential quadrature method (DQM) for simply supported boundary conditions. The MsNP undergoes in-plane forces in the x and y directions. In this regard, the dimensionless frequency is plotted to study the effects of the small-scale parameter, magnetic field, aspect ratio, thickness ratio, and compression and tension loads. The results indicate that these parameters play a key role in the natural frequency. Accordingly, MsNPs can be used in communications equipment and for smart vibration control of nanostructures, especially in sensors and actuators such as wireless linear micro-motors and smart nano-valves in injectors.
Keywords: Feedback control system, magnetostrictive nano-plate, modified couple stress theory, nonlocal elasticity theory, vibration analysis.
4423 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression, images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and in applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both color components are downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled in JPEG compression, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual difference with either method.
Keywords: JPEG, discrete cosine transform, quantization, color space conversion, image compression, peak signal-to-noise ratio, compression ratio.
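The comparison described above — downsampling only Cb versus only Cr and measuring PSNR — can be prototyped in a few lines. The YCbCr matrices are the standard full-range JPEG (BT.601) conversion; the random test image is a stand-in for hats.jpg, so its PSNR numbers will not match those of a natural image:

```python
import numpy as np

def rgb_to_ycbcr(img):
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    Y  = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B
    return np.stack([Y, Cb, Cr], axis=-1)

def ycbcr_to_rgb(img):
    Y, Cb, Cr = img[..., 0], img[..., 1] - 128, img[..., 2] - 128
    R = Y + 1.402 * Cr
    G = Y - 0.344136 * Cb - 0.714136 * Cr
    B = Y + 1.772 * Cb
    return np.clip(np.stack([R, G, B], axis=-1), 0, 255)

def down_up_2x(ch):
    """Average 2x2 blocks, then replicate back (4:2:0-style)."""
    small = ch.reshape(ch.shape[0] // 2, 2,
                       ch.shape[1] // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0**2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (256, 256, 3)).astype(float)  # stand-in image

for idx, name in ((1, "Cb only"), (2, "Cr only")):
    ycc = rgb_to_ycbcr(img)
    ycc[..., idx] = down_up_2x(ycc[..., idx])
    print(f"{name} downsampled: PSNR = {psnr(img, ycbcr_to_rgb(ycc)):.2f} dB")
```

Replacing the random array with a real photograph loaded as an RGB array reproduces the paper's experiment; the smoother channel compresses away with less PSNR loss.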