Search results for: analytical hierarchy process

16362 Transitivity Analysis in Reading Passage of English Text Book for Senior High School

Authors: Elitaria Bestri Agustina Siregar, Boni Fasius Siregar

Abstract:

This paper is concerned with transitivity in the reading passages of an English textbook for Senior High School. Six types of process occurred in the passages, with counts and percentages as follows: Material Process 166 (42%), Relational Process 155 (39%), Mental Process 39 (10%), Verbal Process 21 (5%), Existential Process 13 (3%), and Behavioral Process 5 (1%). Material processes were found to be the most frequently used process type in the samples in our corpus (41.60%). This indicates that the twenty reading passages are centrally concerned with actions and events. In relation to developmental psychology theory, this book fits the needs of students of this age.

Keywords: transitivity, types of processes, reading passages, developmental psychology

Procedia PDF Downloads 375
16361 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle

Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad

Abstract:

Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval period, and likewise in geometry. Geometric patterns are prevalent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square and the hexagon. Owing to its quiddity, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. Therefore, in order to explain the principles of geometric interaction between the square and the equal triangle, the first step illustrates all types of their linear forces individually, and the second step illustrates the forces between them. In this analysis, angles are created from the intersection of their directions. All angles are categorized into groups, and the mathematical expressions among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent the unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing the plan design.

Keywords: angle, equal triangle, square, structural hierarchy

Procedia PDF Downloads 167
16360 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging in the sense that the temperature inside the material before loading is not uniform, yet a constant temperature is used in the simulation because it is assumed that the temperature has homogenized after some holding time. Therefore, to be close to the experiment, the real distribution of temperature through the specimen is needed before mechanical loading. Thus, we present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a real temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena such as recrystallization can occur, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique allowing the prediction of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm giving the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented into the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
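
As an aside on the temperature-gradient step described above, a minimal sketch of how such a pre-loading temperature field can be computed is given below. It uses an explicit finite-difference solution of 1D radial heat conduction in a cylindrical specimen; it is not the authors' UAMP implementation, and the material properties, geometry and boundary conditions are assumed placeholder values.

```python
import numpy as np

# Illustrative sketch (not the authors' UAMP implementation): explicit
# finite-difference solution of 1D radial heat conduction in a cylindrical
# specimen, giving a non-uniform temperature field before loading.
# Material values below are assumed, generic steel-like properties.

R, nr = 0.05, 51                 # radius [m], number of radial nodes
alpha = 1.2e-5                   # thermal diffusivity [m^2/s] (assumed)
h, k = 200.0, 40.0               # convection coeff. [W/m^2K], conductivity [W/mK] (assumed)
T_init, T_env = 1200.0, 900.0    # initial core and furnace/ambient temperature [degC]

r = np.linspace(0.0, R, nr)
dr = r[1] - r[0]
dt = 0.4 * dr**2 / alpha         # stable explicit time step
T = np.full(nr, T_init)

def step(T):
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * (T'' + T'/r)
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
    )
    Tn[0] = Tn[1]                                          # symmetry at the axis
    Tn[-1] = (k * Tn[-2] / dr + h * T_env) / (k / dr + h)  # convective surface
    return Tn

for _ in range(int(60.0 / dt)):  # 60 s of holding before loading
    T = step(T)

print(f"core {T[0]:.1f} degC, surface {T[-1]:.1f} degC")  # surface cooler than core
```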

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 57
16359 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance handled by the laboratory personnel, to ease instrumentation adjustability within a glove-box environment and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500 nm-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in under one second, making the gradient generation more time-efficient than in other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
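
For orientation, the sketch below works through the Taylor-Aris effective dispersion coefficient that governs gradient formation in such a channel. It assumes the 200 μm dimension is the channel diameter, and the flow velocity and molecular diffusivity are assumed example values rather than figures from the paper.

```python
# Illustrative back-of-the-envelope Taylor-Aris estimate for the micro-channel
# described above (assuming 200 um is the channel diameter; flow speed and
# molecular diffusivity are assumed values, not from the paper).
a = 100e-6          # channel radius [m]
L = 5e-2            # channel length [m]
D_m = 1e-9          # molecular diffusivity of the titrant [m^2/s] (assumed)
U = 1e-3            # mean flow velocity [m/s] (assumed)

K = D_m + (a**2 * U**2) / (48 * D_m)   # Taylor-Aris effective dispersion coefficient
t_transit = L / U                      # time for a solute plug to traverse the channel
print(f"K = {K:.3e} m^2/s, transit time = {t_transit:.1f} s")

# Axial spreading over the transit, sigma ~ sqrt(2*K*t), sets the length scale of
# the concentration gradient created along the channel.
sigma = (2 * K * t_transit) ** 0.5
print(f"gradient length scale ~ {sigma * 1e3:.1f} mm")
```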

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 364
16358 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment

Authors: Pranjal Srivastava, Piyali Sengupta

Abstract:

The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations on marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. In this study, the marine drilling riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the riser is determined by Morison’s equations. By considering the equilibrium of forces on the riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the Finite Difference Method are employed for solving the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results of the proposed approach, the critical design parameters (peak displacements, upper and lower flex joint rotations, and von Mises stresses) of marine drilling risers are determined. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed and platform drift on the critical design parameters of the marine drilling riser. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading. The proposed methodology can also be adopted for downtime estimation of marine drilling risers incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
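
The hydrodynamic loading step mentioned above can be illustrated with a minimal sketch of Morison's equation for the in-line force per unit length on a riser section; the diameter, hydrodynamic coefficients and flow kinematics below are assumed example values, not the study's inputs.

```python
import numpy as np

# Minimal sketch of the hydrodynamic load model referenced above: Morison's
# equation for the in-line force per unit length on a riser section.
# The riser diameter, coefficients and current/wave kinematics are assumed values.
rho = 1025.0        # seawater density [kg/m^3]
D = 0.53            # riser outer diameter [m] (assumed, typical 21-in riser)
Cd, Cm = 1.0, 2.0   # drag and inertia coefficients (assumed)

def morison_force(u, du_dt):
    """In-line force per unit length [N/m] for water particle velocity u [m/s]
    and acceleration du_dt [m/s^2] relative to the riser."""
    drag = 0.5 * rho * Cd * D * u * np.abs(u)
    inertia = rho * Cm * (np.pi * D**2 / 4.0) * du_dt
    return drag + inertia

# Example: a 1 m/s current with a small oscillatory wave component
t = np.linspace(0, 10, 200)
u = 1.0 + 0.3 * np.sin(2 * np.pi * t / 8.0)
du = 0.3 * (2 * np.pi / 8.0) * np.cos(2 * np.pi * t / 8.0)
print(f"peak force per unit length: {morison_force(u, du).max():.1f} N/m")
```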

Keywords: drilling riser, marine, analytical model, fragility

Procedia PDF Downloads 123
16357 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications

Authors: Sung-Won Seo, Jang-Young Choi

Abstract:

The rare-earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows direct drive without the need for a mechanical conversion device. Therefore, the PMLSG is well suited to translational applications, such as wave energy converters and free-piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher air-gap flux density than the Vertical array, and its advantages in performance and efficiency are widely known. To verify the advantages of the Halbach array, we apply the finite element method (FEM) and an analytical method. In general, both the FEM and analytical methods are used in electromagnetic analysis for determining model characteristics, and the FEM is generally preferred for magnetic field analysis. However, the FEM is often slow and inflexible. On the other hand, the analytical method requires little time and produces an accurate analysis of the magnetic field. Therefore, the air-gap flux density and the back-EMF can be obtained by FEM, and the results from the analytical method correspond well with the FEM results. The Halbach array model exhibits lower copper loss than the Vertical array model because of the Halbach array’s high output power density. The Vertical array model has lower core loss than the Halbach array model because of its lower air-gap flux density; therefore, the current density in the Vertical model is higher for identical power output. The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results, and a specific comparative analysis will be presented to determine the best model for application in a wave energy conversion system.

Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method

Procedia PDF Downloads 272
16356 Knowledge Discovery from Production Databases for Hierarchical Process Control

Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata

Abstract:

This paper presents the results of a project oriented toward the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for defined process control problems. The gained knowledge was applied to a real production system, and the proposed solution has thus been verified. The paper documents how newly discovered knowledge can be applied in real hierarchical process control, and it specifies the opportunities for applying the proposed knowledge discovery model to hierarchical process control.

Keywords: hierarchical process control, knowledge discovery from databases, neural network, process control

Procedia PDF Downloads 452
16355 Preservation of High Quality Fruit Products: Microwave Freeze Drying as a Substitute for the Conventional Freeze Drying Process

Authors: Sabine Ambros, Ulrich Kulozik

Abstract:

Berries such as blueberries and raspberries belong to the most valuable fruits. To preserve the characteristic flavor and the high contents of vitamins and anthocyanins, the very sensitive berries are usually dried by lyophilization. As this method is very time- and energy-consuming, the dried fruit is extremely expensive. However, healthy snack foods are growing in popularity, and dried fruit free of any additives or added sugar is in particular demand. To make these products affordable, the fruits have to be dried by a method that is more energy-efficient than freeze drying but delivers the same high product quality. The addition of microwaves to a freeze drying process was examined in this work to overcome the drawbacks of conventional freeze drying. As microwaves penetrate the product volumetrically, sublimation takes place simultaneously throughout the product and leads to a many times shorter process duration. A range of microwave and pressure settings was applied to find the optimum drying condition. The influence of the process parameters microwave power and chamber pressure on drying kinetics, product temperature and product quality was investigated to find the best condition for an energy-efficient process with high product quality. Product quality was evaluated by rehydration capacity, crispiness, shrinkage, color, vitamin C content and antioxidative capacity. The conclusion could be drawn that microwave freeze-dried berries were almost equal to freeze-dried fruit in all measured quality parameters or could even exceed it. Additionally, sensory evaluations confirmed the analytical studies. Drying time could be reduced by more than 75% at much lower energy consumption rates. Thus, an energy-efficient and cost-saving method, compared to the conventional freeze drying technique, for the gentle production of tasty fruit or vegetable snacks has been found. This technique will make dried high-quality snacks available to many consumers.

Keywords: blueberries, freeze drying, microwave freeze drying, process parameters, product quality

Procedia PDF Downloads 212
16354 Research on the Teaching Quality Evaluation of China’s Network Music Education APP

Authors: Guangzhuang Yu, Chun-Chu Liu

Abstract:

With the advent of the Internet era in recent years, social music education has gradually shifted from the original in-person mode to a combination of in-person and network teaching. Whether for school music education, professional music education or social music education, teaching quality is the most important evaluation index. Regarding research on teaching quality evaluation, scholars at home and abroad have contributed many research results based on multiple methods and evaluation subjects. However, to the best of our knowledge, a complete evaluation model for the virtual teaching interaction mode of the emerging network music education Application (APP) has not been established. This research first identified the basic dimensions that accord with the teaching quality required by the three parties, constructing the quality evaluation index system; then, on the basis of expounding the connotation of each index, it determined the weight of each index using the fuzzy analytic hierarchy process, providing ideas and methods for a scientific, objective and comprehensive evaluation of the teaching quality of network education APPs.
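
As a minimal illustration of the weighting step, the sketch below computes crisp AHP priority weights and a consistency ratio from a pairwise comparison matrix; the study itself uses a fuzzy AHP variant, and the matrix and criterion names here are made-up examples.

```python
import numpy as np

# Sketch of the crisp AHP weighting step (the study uses a fuzzy AHP variant;
# the pairwise comparison matrix below is a made-up example, not the paper's data).
A = np.array([
    [1.0,  3.0, 5.0],   # e.g. teaching content vs. interaction vs. platform usability
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# Priority weights via the geometric-mean (row) method
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w = gm / gm.sum()

# Consistency check: CR < 0.1 is conventionally acceptable
lam_max = (A @ w / w).mean()
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random index
CR = CI / RI
print("weights:", np.round(w, 3), " CR =", round(CR, 3))
```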

Keywords: network music education APP, teaching quality evaluation, index and connotation

Procedia PDF Downloads 97
16353 Decoding Democracy's Notion in Aung San Suu Kyi's Speeches

Authors: Woraya Som-Indra

Abstract:

This article aims to decode the notion of democracy embedded in the political speeches of Aung San Suu Kyi by adopting a critical discourse analysis approach, using Systemic Functional Linguistics (SFL) and transitivity as vital analytical tools. The two main objectives of the study are 1) to analyze the linguistic strategies that constitute the crucial characteristics of Suu Kyi's political speeches by employing SFL and transitivity, and 2) to examine the ideology that manifests the notion of democracy behind Suu Kyi’s political speeches. The data consist of four speeches Suu Kyi delivered in different places within the year 2011, broadcast through the website of the US Campaign for Burma. By employing this linguistic tool and the concept of ideology as an analytical frame, the word choice selection found in the speeches assists in explaining the manifestation of Suu Kyi’s ideology toward democracy and power struggle. The findings revealed eight characteristics of word choice projected from Suu Kyi’s political speeches, as follows: 1) support, hope and encouragement, which urge the recipients to uphold the mutual aim of fighting for democracy together and moving forward for change and solutions in the future; 2) aim and achievement, which evoke the recipients' attachment to the purpose of fighting for democracy; 3) challenge and change, which release energy to challenge the present political regime of Burma and change to a new democratic regime; 4) action, doing and taking, which signify the action and practical process of calling for a new political regime; 5) struggle, which represents the power struggle during the process of demanding democracy and could refer to her long period of house arrest in Burma; 6) freedom, which implies what she has long been fighting for: to be released from house arrest, to have freedom of speech related to political ideology, and moreover, to speak out for the people of Burma about their desired political regime and political participation; 7) share and sacrifice, which call on the recipients to hold the spirit of shared value in the process of acquiring democracy; and 8) solution and achievement, which remind her recipients of what they have long been fighting for and what could lead them to reach the mutual achievement of a new political regime, i.e. democracy. These word choice selections are plausible representations of the notion of democracy in Suu Kyi’s terms. Owing to her long journey of fighting for democracy in Burma, Suu Kyi’s political speeches always possess a tremendously strong leadership character, using words of wisdom; moreover, they are encoded with a wide range of words related to democracy ideology in order to push forward future change in Burma’s political regime.

Keywords: Aung San Suu Kyi’s speeches, critical discourse analysis, democracy ideology, systemic functional linguistics, transitivity

Procedia PDF Downloads 247
16352 Covariance of the Queue Process Fed by Isonormal Gaussian Input Process

Authors: Samaneh Rahimirshnani, Hossein Jafari

Abstract:

In this paper, we consider fluid queueing processes fed by an isonormal Gaussian process. We study the correlation structure of the queueing process and the rate of convergence of the running supremum in the queueing process. The Malliavin calculus techniques are applied to obtain relations that show the workload process inherits the dependence properties of the input process. As examples, we consider two isonormal Gaussian processes, the sub-fractional Brownian motion (SFBM) and the fractional Brownian motion (FBM). For these examples, we obtain upper bounds for the covariance function of the queueing process and its rate of convergence to zero. We also discover that the rate of convergence of the queueing process is related to the structure of the covariance function of the input process.
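
A purely numerical illustration of such a queue is sketched below: fractional Gaussian noise is simulated via a Cholesky factorization of its covariance, fed through a discrete Lindley recursion, and the empirical covariance of the workload is inspected. This is only a simulation aid, not the paper's Malliavin-calculus analysis, and all parameters are assumed.

```python
import numpy as np

# Numerical illustration only (the paper's analysis is analytical, via Malliavin
# calculus): simulate a discrete fluid queue fed by fractional Gaussian noise and
# look at the empirical covariance of the workload. Parameters are assumed.
rng = np.random.default_rng(0)
H, n, c = 0.75, 2000, 0.2        # Hurst index, sample length, constant drain rate

# Exact fGn covariance (unit variance) and Cholesky-based simulation
k = np.arange(n)
gamma = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
cov = gamma[np.abs(k[:, None] - k[None, :])]
increments = np.linalg.cholesky(cov) @ rng.standard_normal(n)

# Lindley recursion for the workload (queue) process Q_t
Q = np.zeros(n)
for t in range(1, n):
    Q[t] = max(0.0, Q[t - 1] + increments[t] - c)

lag = 50
print("empirical cov(Q_t, Q_{t+50}) =", np.cov(Q[:-lag], Q[lag:])[0, 1].round(3))
```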

Keywords: queue length process, Malliavin calculus, covariance function, fractional Brownian motion, sub-fractional Brownian motion

Procedia PDF Downloads 31
16351 A Nonlinear Stochastic Differential Equation Model for Financial Bubbles and Crashes with Finite-Time Singularities

Authors: Haowen Xi

Abstract:

We propose and exactly solve a class of non-linear generalizations of the Black-Scholes process of stochastic differential equations describing price bubble and crash dynamics. As a result of nonlinear positive feedback, the faster-than-exponential positive price growth (bubble formation) and negative price growth (crash formation) are found to follow a power-law finite-time singularity, with bubble and crash price formation ending at a finite critical time tc. While most literature on the market bubble and crash process focuses on the nonlinear positive feedback mechanism, very few studies concern the influence of the noise level on the same process. The present work adds to the bubble and crash literature by studying the influence of external noise sources on the critical time tc of bubble and crash formation. Two main results are discussed: (1) the analytical expression for the expected value of the critical time is found, and an unexpected critical slowing down due to the coupled external noise is predicted; (2) numerical simulations of the nonlinear stochastic equation are presented, and the probability distribution Prob(tc) is found to follow an inverse gamma distribution.
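
The finite-time singularity can be illustrated numerically with the sketch below, which applies Euler-Maruyama to a generic super-linear SDE with multiplicative noise and records the empirical distribution of the blow-up time tc; the equation and parameters are illustrative stand-ins, not the model solved in the paper.

```python
import numpy as np

# Illustrative Monte-Carlo sketch (not the paper's exact model): Euler-Maruyama
# simulation of a super-linear SDE dp = p^m dt + sigma*p dW with m > 1, whose
# deterministic part blows up in finite time, and an empirical summary of the
# critical (blow-up) time tc.
rng = np.random.default_rng(1)
m, sigma, p0, dt = 2.0, 0.3, 1.0, 2e-4
n_paths, p_max = 500, 1e6          # a path is declared "bubbled/crashed" at p_max

tc = []
for _ in range(n_paths):
    p, t = p0, 0.0
    while p < p_max and t < 5.0:
        p += p**m * dt + sigma * p * np.sqrt(dt) * rng.standard_normal()
        t += dt
    if p >= p_max:
        tc.append(t)

tc = np.array(tc)
# For sigma = 0 the deterministic blow-up time is tc = p0^(1-m)/(m-1) = 1.0 here;
# noise shifts and spreads this critical time.
print(f"mean tc = {tc.mean():.3f}, std = {tc.std():.3f}, "
      f"fraction blown up = {len(tc)/n_paths:.2f}")
```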

Keywords: bubble, crash, finite-time-singular, numerical simulation, price dynamics, stochastic differential equations

Procedia PDF Downloads 106
16350 Principal Component Analysis in Drug-Excipient Interactions

Authors: Farzad Khajavi

Abstract:

Studies of the interaction between active pharmaceutical ingredients (APIs) and excipients are very important in the pre-formulation stage of development of all dosage forms. Analytical techniques such as differential scanning calorimetry (DSC), thermal gravimetry (TG), and Fourier transform infrared spectroscopy (FTIR) are commonly used tools for investigating the compatibility or incompatibility of APIs with excipients. Sometimes the interpretation of data obtained from these techniques is difficult because of severe overlapping of the API spectrum with the excipient spectra in their mixtures. Principal component analysis (PCA), as a powerful factor-analytical method, is used in these situations to resolve the data matrices acquired from these analytical techniques. Binary mixtures of the API and the excipients of interest are considered and produced. The FTIR, DSC, or TG peaks of the pure API, the pure excipient and their mixtures at different mole ratios construct the rows of the data matrix. By applying PCA to the data matrix, the number of principal components (PCs) is determined so that they capture the total variance of the data matrix. The PCs (factors) obtained from the score matrix are then plotted in two-dimensional space: if the pure API, its mixture with the excipient at a high API content, and the 1:1 mixture form one cluster, while the other cluster comprises the pure excipient and its blend with the API at a high excipient content, this confirms the existence of compatibility between the API and the excipient of interest. Otherwise, incompatibility prevails in the mixture of API and excipient.
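
A minimal sketch of this PCA step is given below, using synthetic stand-in "spectra" for the pure API, the pure excipient and their binary mixtures; the clustering check described above is then read off the two-dimensional score plot.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of the PCA step described above. X is a (samples x variables)
# data matrix whose rows are spectra/thermograms of the pure API, the pure
# excipient and their binary mixtures at different mole ratios; the data here
# are synthetic stand-ins, not measurements.
rng = np.random.default_rng(0)
api = rng.random(200)                 # "spectrum" of pure API
excipient = rng.random(200)           # "spectrum" of pure excipient
ratios = [1.0, 0.75, 0.5, 0.25, 0.0]  # API mole fraction in each row

X = np.array([r * api + (1 - r) * excipient + 0.01 * rng.standard_normal(200)
              for r in ratios])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)         # rows projected onto the first two PCs
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
print("scores:\n", scores.round(2))
# In the compatibility test described above, one then checks whether the pure API,
# the API-rich mixture and the 1:1 mixture cluster together in this score plot,
# separately from the excipient-rich cluster.
```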

Keywords: API, compatibility, DSC, TG, interactions

Procedia PDF Downloads 100
16349 Design and Characterization of a CMOS Process Sensor Utilizing Vth Extractor Circuit

Authors: Rohana Musa, Yuzman Yusoff, Chia Chieu Yin, Hanif Che Lah

Abstract:

This paper presents the design and characterization of a low-power Complementary Metal Oxide Semiconductor (CMOS) process sensor. The design is targeted for implementation using Silterra’s 180 nm CMOS process technology. The proposed process sensor employs a voltage threshold (Vth) extractor architecture for the detection of variations in the fabrication process. The process sensor generates output voltages in the range of 401 mV (fast-fast corner) to 443 mV (slow-slow corner) at nominal conditions. The power dissipation of this process sensor is 6.3 µW at a supply voltage of 1.8 V, with a silicon area of 190 µm × 60 µm. Preliminary results from the fabricated process sensor indicate a close resemblance between test and simulated results.

Keywords: CMOS process sensor, PVT sensor, threshold extractor circuit, Vth extractor circuit

Procedia PDF Downloads 154
16348 Degradation of Emerging Pharmaceuticals by Gamma Irradiation Process

Authors: W. Jahouach-Rabai, J. Aribi, Z. Azzouz-Berriche, R. Lahsni, F. Hosni

Abstract:

Gamma irradiation applied to the removal of pharmaceutical contaminants from wastewater is an effective advanced oxidation process (AOP), considered an alternative to conventional water treatment technologies. For this purpose, the degradation efficiency of several detected contaminants under gamma irradiation was evaluated. In fact, radiolysis of organic pollutants in aqueous solutions produces powerful reactive species, essentially the hydroxyl radical (·OH), able to destroy recalcitrant pollutants in water. The pharmaceuticals considered in this study are aqueous solutions of paracetamol, ibuprofen, and diclofenac at concentrations of 0.1-1 mmol/L, which were treated with irradiation doses from 3 to 15 kGy. The catalytic oxidation of these compounds under gamma irradiation was investigated using hydrogen peroxide (H₂O₂) as a convenient oxidant. The main parameters influencing the irradiation process, namely the irradiation dose, initial concentration and oxidant (H₂O₂) volume, were optimized with the aim of achieving high degradation efficiency for the considered pharmaceuticals. Significant changes attributed to these parameters appeared in the degradation efficiency, the chemical oxygen demand (COD) removal and the concentration of radio-induced radicals, confirming their synergistic effect toward total mineralization. Pseudo-first-order reaction kinetics could be used to depict the degradation process of these compounds. A detailed analytical study was carried out to quantify the detected radio-induced radicals (electron paramagnetic resonance (EPR) spectroscopy and high-performance liquid chromatography (HPLC)). All results showed that this process is effective for the degradation of many pharmaceutical products in aqueous solutions due to the strong oxidative properties of the generated radicals, mainly the hydroxyl radical. Furthermore, the addition of an optimal amount of H₂O₂ was efficient in improving the oxidative degradation and contributed to the high performance of this process at very low doses (0.5 and 1 kGy).
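
The pseudo-first-order description mentioned above can be sketched as a simple dose-response fit, C(D) = C0·exp(-k·D); the data points below are made-up placeholders rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the pseudo-first-order dose-response fit mentioned above:
# C(D) = C0 * exp(-k * D), where D is the absorbed dose. The data points are
# made-up placeholders, not the study's measurements.
dose = np.array([0.0, 3.0, 5.0, 10.0, 15.0])          # kGy
conc = np.array([1.0, 0.55, 0.38, 0.14, 0.05])        # mmol/L (illustrative)

model = lambda D, C0, k: C0 * np.exp(-k * D)
(C0, k), _ = curve_fit(model, dose, conc, p0=(1.0, 0.2))

print(f"dose constant k = {k:.3f} kGy^-1")
print(f"dose required for 90% degradation: {np.log(10) / k:.1f} kGy")
```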

Keywords: AOP, COD, hydroxyl radical, EPR, gamma irradiation, HPLC, pharmaceuticals

Procedia PDF Downloads 145
16347 Business Process Mashup

Authors: Fethia Zenak, Salima Benbernou, Linda Zaoui

Abstract:

Recently, many companies have relied on developing processes from scratch to achieve their business goals. Process development is not trivial, and the main objective of enterprises managing processes is to decrease software development time. Several concepts have been proposed in the field of reuse-based business process development, known as BP Mashup. This concept consists of reusing existing business processes which have been modeled in order to respond to a particular goal. To meet user process requirements, our contribution is to mix parts of processes, as 'process fragment' components, to build a new process (i.e., a process mashup). The main idea of our paper is to offer a graphical framework tool for both creating and running process mashups. It allows users to perform a mixture of fragments using a simple interface with a set of graphical mixture operators based on a proposed formal model. Process mashup and mixture behavior are described within a new specification of a high-level language, a language for process mashup (BPML).

Keywords: business process, mashup, fragments, bp mashup

Procedia PDF Downloads 598
16346 Prioritizing Quality Dimensions in ‘Servitised’ Business through AHP

Authors: Mohita Gangwar Sharma

Abstract:

Different factors are compelling manufacturers to move towards the phenomenon of servitization, i.e., when firms go beyond giving support to customers in operating the equipment. The challenges being faced in this transition by manufacturing firms, from being a product provider to a product-service provider, are multipronged. Product-Service Systems (PSS) lie in between the pure-product and pure-service continuum. Through this study, we wish to understand the dimensions of ‘PSS quality’. We draw upon the quality literature for both products and services and, through an expert survey for a specific transportation sector using the analytic hierarchy process (AHP), derive a conceptual model that can be used as a comprehensive measurement tool for PSS offerings.

Keywords: servitisation, quality, product-service system, AHP

Procedia PDF Downloads 281
16345 Analytical Method for Seismic Analysis of Shaft-Tunnel Junction under Longitudinal Excitations

Authors: Jinghua Zhang

Abstract:

The shaft-tunnel junction is a typical case of structural nonuniformity in underground structures. The shaft and the tunnel possess greatly different structural features, and even under uniform excitations they tend to behave discrepantly. Studies on shaft-tunnel junctions are mainly performed numerically, and shaking table tests are also conducted. Although much numerical and experimental data has been obtained, an analytical solution still has great merit in providing more insight into the shaft-tunnel problem. This paper tries to remedy the situation. Since the seismic responses of shaft-tunnel junctions are strongly related to the direction of excitation, they are studied in two scenarios: the longitudinal-excitation scenario and the transverse-excitation scenario. The former scenario is addressed in this paper. Given that the responses of the tunnel are highly dependent on the shaft, the analytical solutions are developed first for the vertical shaft; then, the seismic responses of the tunnel are discussed. Since vertical shafts bear a resemblance to rigid caissons, the solution proposed in this paper is derived by introducing terms for shaft-tunnel and soil-tunnel interactions into equations originally developed for rigid caissons. The validity of the solution is examined against a validation model computed by the finite element method. The mutual influence between the shaft and the tunnel is introduced, and the soil-structure interactions are discussed parametrically based on the proposed equations. The shaft-tunnel relative displacement and the soil-tunnel relative stiffness are found to be the most important parameters affecting the magnitudes and distributions of the internal forces of the tunnel. A hinge joint at the shaft-tunnel junction could significantly reduce the degree of stress concentration compared with a rigid joint.

Keywords: analytical solution, longitudinal excitation, numerical validation, shaft-tunnel junction

Procedia PDF Downloads 131
16344 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of irregular geometry and domain, like bridge pavements, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem, which has an analytical solution given in the literature, is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the ANSYS results is compared with the analytical results to indicate the influence of the method used and the time step size.
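
For reference, the Prony-series form of the relaxation modulus used by the generalized Maxwell model can be evaluated with the short sketch below; the moduli and relaxation times are assumed example values, not data from the verification problem.

```python
import numpy as np

# Minimal sketch of the Prony-series representation of the relaxation modulus
# used by the generalized Maxwell model:
#   G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
# The moduli and relaxation times below are assumed example values.
G_inf = 1.0e6                          # long-term shear modulus [Pa]
G_i = np.array([4.0e6, 2.0e6, 1.0e6])  # Prony moduli [Pa]
tau_i = np.array([0.1, 1.0, 10.0])     # relaxation times [s]

def relaxation_modulus(t):
    return G_inf + np.sum(G_i * np.exp(-t[:, None] / tau_i), axis=1)

t = np.array([0.0, 0.1, 1.0, 10.0, 100.0])
for ti, Gi in zip(t, relaxation_modulus(t)):
    print(f"t = {ti:6.1f} s  ->  G(t) = {Gi:.3e} Pa")

# In ANSYS the same data are typically entered as normalized ratios g_i = G_i / G0,
# where G0 = G_inf + sum(G_i) is the instantaneous modulus.
```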

Keywords: generalized Maxwell model, finite element method, prony series, time step size, viscoelasticity

Procedia PDF Downloads 343
16343 Application of Failure Mode and Effects Analysis (FMEA) on the Virtual Process Hazard Analysis of Acetone Production Process

Authors: Princes Ann E. Prieto, Denise F. Alpuerto, John Rafael C. Unlayao, Neil Concibido, Monet Concepcion Maguyon-Detras

Abstract:

Failure Mode and Effects Analysis (FMEA) has been used in the virtual Process Hazard Analysis (PHA) of the acetone production process through the dehydrogenation of isopropyl alcohol, for which very limited process risk assessment has been published. In this study, the potential failure modes, effects, and possible causes of selected major equipment in the process were identified. During the virtual FMEA mock sessions, the risks in the process were evaluated, and recommendations to reduce and/or mitigate the process risks were formulated. The risk was estimated using the calculated risk priority number (RPN) and was classified into four (4) levels according to its effects on acetone production. The results of this study were also used to rank the criticality of equipment in the process based on the calculated criticality rating (CR). Bow-tie diagrams were also created for the critical hazard scenarios identified in the study.
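
The RPN arithmetic underlying the ranking can be sketched as follows; the failure modes and severity/occurrence/detection ratings are placeholders, not the scores assigned in the study's sessions.

```python
# Sketch of the risk-ranking arithmetic used in FMEA sessions like those described
# above. Severity (S), Occurrence (O) and Detection (D) ratings are placeholders,
# not the study's actual scores for the acetone process equipment.
failure_modes = [
    # (equipment / failure mode, S, O, D)
    ("Reactor - loss of cooling", 9, 3, 4),
    ("Distillation column - flooding", 7, 4, 3),
    ("Vaporizer - tube rupture", 10, 2, 5),
    ("Pump - seal leak", 6, 5, 4),
]

ranked = sorted(
    ((name, s, o, d, s * o * d) for name, s, o, d in failure_modes),
    key=lambda row: row[-1],
    reverse=True,
)

for name, s, o, d, rpn in ranked:
    print(f"RPN = {rpn:4d}  (S={s}, O={o}, D={d})  {name}")
# Higher RPN -> higher priority for risk-reduction recommendations; the study
# additionally aggregates equipment-level criticality ratings (CR) from such scores.
```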

Keywords: chemical process safety, failure mode and effects analysis (FMEA), process hazard analysis (PHA), process safety management (PSM)

Procedia PDF Downloads 106
16342 Total-Reflection X-Ray Spectroscopy as a Tool for Element Screening in Food Samples

Authors: Hagen Stosnach

Abstract:

The analytical demands on modern instruments for element analysis in food samples include the analysis of major, trace and ultra-trace essential elements as well as potentially toxic trace elements. In this study, total-reflection X-ray fluorescence analysis (TXRF) is presented as an analytical technique which meets the requirements defined by the Association of Official Agricultural Chemists (AOAC) regarding the limit of quantification, repeatability, reproducibility and recovery for most of the target elements. The advantages of TXRF are the small sample mass required, the broad linear range from µg/kg up to wt.-% values, no consumption of gases or cooling water, and the flexible and easy sample preparation. Liquid samples like alcoholic or non-alcoholic beverages can be analyzed without any preparation. For solid food samples, the most common sample pre-treatment methods are mineralization and direct deposition of the sample onto the reflector without or with minimal treatment, mainly as solid suspensions or after extraction. The main disadvantages are possible peak overlaps, which may lower the accuracy of quantitative analysis and limit element identification. This analytical technique is presented through several application examples covering a broad range of liquid and solid food types.

Keywords: essential elements, toxic metals, XRF, spectroscopy

Procedia PDF Downloads 110
16341 A Holistic Workflow Modeling Method for Business Process Redesign

Authors: Heejung Lee

Abstract:

In a highly competitive environment, it becomes more important to shorten the whole business process while delivering or even enhancing the business value to customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve a short lead time. This paper presents a holistic workflow structuring method that can reduce process complexity using activity needs and formal concept analysis, which eventually enhances key performance measures such as quality, delivery, and cost in the business process.

Keywords: workflow management, re-engineering, formal concept analysis, business process

Procedia PDF Downloads 384
16340 Structural Behavior of Laterally Loaded Precast Foamed Concrete Sandwich Panel

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to investigate the structural behavior of six precast foamed concrete sandwich panels (PFCSP) acting as one-way slabs tested under lateral load. The details of the test setup and procedures are illustrated. The results obtained from the experimental tests are discussed, including the observed cracking patterns and the influence of aspect ratio (L/b). An analytical study using finite element analysis was implemented, and the degree of composite action of the test panels was also examined in both the experimental and analytical studies. Results show that crack patterns appeared in only one direction, similar to reports on solid slabs, particularly when both concrete wythes act in a composite manner. Foamed concrete is briefly reviewed, and the experimental results are compared with the finite element analysis data, which show a reasonable degree of accuracy. Therefore, based on the results obtained, the PFCSP slab can be used as an alternative to conventional flooring systems.

Keywords: aspect ratio (L/b), finite element analyses (FEA), foamed concrete (FC), precast foamed concrete sandwich panel (PFCSP), ultimate flexural strength capacity

Procedia PDF Downloads 291
16339 Theoretical Modeling of Self-Healing Polymers Crosslinked by Dynamic Bonds

Authors: Qiming Wang

Abstract:

Dynamic polymer networks (DPNs) crosslinked by dynamic bonds have received intensive attention because of their special crack-healing capability. Diverse DPNs have been synthesized using a number of dynamic bonds, including dynamic covalent bonds, hydrogen bonds, ionic bonds, metal-ligand coordination, hydrophobic interactions, and others. Despite the promising success in polymer synthesis, the fundamental understanding of their self-healing mechanics is still at the very beginning. In particular, a general analytical model to understand the interfacial self-healing behaviors of DPNs has not been established. Here, we develop polymer-network-based analytical theories that can mechanistically model the constitutive behaviors and interfacial self-healing behaviors of DPNs. We consider the DPN to be composed of interpenetrating networks crosslinked by dynamic bonds that obey force-dependent chemical kinetics. The network chains follow inhomogeneous chain-length distributions, and during the self-healing process the dynamic polymer chains diffuse across the interface to re-form the dynamic bonds, modeled by a diffusion-reaction theory. The theories can predict the stress-stretch behaviors of original and self-healed DPNs, as well as the healing strength as a function of healing time. We show that the theoretically predicted healing behaviors consistently match the documented experimental results of DPNs with various dynamic bonds, including dynamic covalent bonds (diarylbibenzofuranone and olefin metathesis), hydrogen bonds, and ionic bonds. We expect our model to be a powerful tool for the self-healing community to invent, design, understand, and optimize self-healing DPNs with various dynamic bonds.

Keywords: self-healing polymers, dynamic covalent bonds, hydrogen bonds, ionic bonds

Procedia PDF Downloads 151
16338 Iron Recovery from Red Mud as Zero-Valent Iron Metal Powder Using Direct Electrochemical Reduction Method

Authors: Franky Michael Hamonangan Siagian, Affan Maulana, Himawan Tri Bayu Murti Petrus, Widi Astuti

Abstract:

In this study, the feasibility of the direct electrowinning method for producing zero-valent iron from red mud was investigated. The bauxite residue sample came from the Tayan mine, Indonesia, and contains a high proportion of hematite (Fe₂O₃). Before electrolysis, the samples were characterized by various analytical techniques (ICP-AES, SEM, XRD) to determine their chemical composition and mineralogy. Direct electrowinning of red mud suspended in NaOH was carried out at low temperatures ranging from 30 to 110 °C. The current density, red mud:NaOH ratio and temperature were varied to determine the optimum operation of the direct electrowinning process. The cathode deposits and residues in the electrochemical cells were analyzed using XRD, XRF, and SEM to determine the chemical composition and current recovery. The low-temperature electrolysis of red mud can reach 20% recovery at a current density of 920,945 A/m². The moderate performance of the process with red mud was attributed to the troublesome adsorption of red mud particles on the cathode, making the reduction far less efficient than that with hematite.
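
A back-of-the-envelope sketch of the current-efficiency calculation implied by such a result is given below, based on Faraday's law for the three-electron reduction of Fe(III) to Fe(0); the current, time and deposit mass are assumed example values chosen only to land near the reported 20% figure.

```python
# Back-of-the-envelope sketch of the current-efficiency calculation implied above
# (Faraday's law for Fe(III) -> Fe(0)); the charge and deposit mass are assumed
# example values, not the study's measurements.
F = 96485.0        # Faraday constant [C/mol]
M_Fe = 55.845      # molar mass of iron [g/mol]
n_e = 3            # electrons transferred per Fe(III) reduced to Fe(0)

I = 5.0            # cell current [A] (assumed)
t = 3600.0         # electrolysis time [s] (assumed)
m_deposit = 0.70   # iron mass recovered at the cathode [g] (assumed)

Q = I * t                                   # total charge passed [C]
m_theoretical = Q * M_Fe / (n_e * F)        # mass expected at 100% efficiency
efficiency = 100.0 * m_deposit / m_theoretical
print(f"theoretical mass = {m_theoretical:.2f} g, current efficiency = {efficiency:.1f}%")
```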

Keywords: red mud, electrochemical reduction, Iron production, hematite

Procedia PDF Downloads 48
16337 Defining Priority Areas for Biodiversity Conservation to Support for Zoning Protected Areas: A Case Study from Vietnam

Authors: Xuan Dinh Vu, Elmar Csaplovics

Abstract:

There has been an increasing need for methods to define priority areas for biodiversity conservation, since the effectiveness of biodiversity conservation in protected areas largely depends on the availability of material resources. The identification of priority areas requires the integration of biodiversity data together with social data on human pressures and responses. However, the deficit of comprehensive data and reliable methods becomes a key challenge in zoning where the demand for conservation is most urgent and where the outcomes of conservation strategies can be maximized. In order to fill this gap, the study applied the Condition–Pressure–Response environmental model to suggest a set of criteria for identifying priority areas for biodiversity conservation. Our empirical data were compiled from 185 respondents, categorized into three main groups: governmental administration, research institutions, and protected areas in Vietnam, using a well-designed questionnaire. The Analytic Hierarchy Process (AHP) was then used to identify the weights of all criteria. Our results show that the priority level for biodiversity conservation can be identified by three main indicators: condition, pressure, and response, with weights of 26%, 41%, and 33%, respectively. Based on these three indicators, 7 criteria and 15 sub-criteria were developed to support the definition of priority areas for biodiversity conservation and the zoning of protected areas. In addition, our study also revealed that the governmental administration and protected area groups focused on the 'Pressure' indicator, while the research institution group emphasized the importance of the 'Response' indicator in the evaluation process. Our results provide recommendations for applying the developed criteria to identify priority areas for biodiversity conservation in Vietnam.

Keywords: biodiversity conservation, condition–pressure–response model, criteria, priority areas, protected areas

Procedia PDF Downloads 135
16336 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs

Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili

Abstract:

OBJECTIVE/SCOPE: Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multi-arm mechanical caliper logs are often run on wireline, which can be time-consuming, costly, and/or challenging to run in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform a statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS: Caliper logs and LWD data from eleven wells, with a total of more than 80,000 data points, were obtained and imported into data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data from the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson’s correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS: The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson’s correlation coefficients in the range of 0.39 to 0.57, which were relatively high compared to the correlation coefficients of the caliper data with other parameters. Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data are also presented to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION: This study offers a unique and novel look into the relative importance and correlation between different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling using LWD data.
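
A minimal sketch of this correlation screening is shown below using pandas; the DataFrame is synthetic stand-in data, and the LWD/caliper column names are hypothetical placeholders for the eleven-well dataset.

```python
import numpy as np
import pandas as pd

# Minimal sketch of the correlation screening described above: Pearson correlation
# coefficients between LWD curves and wireline caliper readings. The DataFrame is
# synthetic, depth-aligned stand-in data with hypothetical column names; in practice
# it would be the merged eleven-well log set.
rng = np.random.default_rng(0)
n = 1000
rhob = rng.normal(2.45, 0.1, n)                      # bulk density [g/cc]
df = pd.DataFrame({
    "GR": rng.normal(60, 15, n),                     # gamma ray [gAPI]
    "RHOB": rhob,
    "DTC": rng.normal(80, 8, n),                     # compressional slowness [us/ft]
    "NPHI": rng.normal(0.2, 0.05, n),                # neutron porosity [v/v]
    # toy caliper loosely coupled to density so a moderate correlation appears
    "CALIPER_MAX": 8.5 - 1.5 * (rhob - 2.45) + rng.normal(0, 0.15, n),
})

corr = df.corr(method="pearson")
print(corr["CALIPER_MAX"].drop("CALIPER_MAX").round(2))
# Curves with relatively high |r| (bulk and azimuthal densities in the study,
# r ~ 0.39-0.57) are the natural candidates for a caliper-estimation model.
```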

Keywords: LWD measurements, caliper log, correlations, analysis

Procedia PDF Downloads 92
16335 Commissioning of a Flattening Filter Free (FFF) Beam Using an Anisotropic Analytical Algorithm (AAA)

Authors: Safiqul Islam, Anamul Haque, Mohammad Amran Hossain

Abstract:

Aim: To compare the dosimetric parameters of flattened and flattening filter free (FFF) beams and to validate the beam data using the anisotropic analytical algorithm (AAA). Materials and Methods: All the dosimetric data (i.e., depth dose profiles, profile curves, output factors, penumbra, etc.) required for the AAA beam modeling were acquired using the Blue Phantom RFA for 6 MV, 6 FFF, 10 MV and 10 FFF beams. The Progressive Resolution Optimizer and Dose Volume Optimizer algorithms for VMAT and IMRT were also configured in the beam model. The AAA beam model was compared with the measured data sets. Results: Due to the larger low-energy component in the 6 FFF and 10 FFF beams, the surface doses are 10 to 15% higher compared to the flattened 6 MV and 10 MV beams. An FFF beam has a lower mean energy than the flattened beam, and the beam quality indices were 0.667 (6 MV), 0.629 (6 FFF), 0.74 (10 MV) and 0.695 (10 FFF), respectively. Gamma evaluation with 2% dose and 2 mm distance criteria for the open beam, IMRT and VMAT plans was also performed, and good agreement was found between the modeled and measured data. Conclusion: We have successfully modeled the AAA algorithm for the flattened and FFF beams and achieved good agreement between the calculated and measured values.
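
For illustration, a simplified 1D global gamma-index calculation with the 2%/2 mm criteria is sketched below on two synthetic dose profiles; it is not the evaluation software used in the commissioning.

```python
import numpy as np

# Simplified 1D global gamma-index sketch (2% dose difference / 2 mm distance to
# agreement), illustrating the pass/fail criterion used above. The two dose
# profiles are synthetic, not the commissioning measurements.
def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """Return the gamma value at each reference point (global normalization)."""
    d_norm = dd * d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g2 = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

x = np.linspace(-50, 50, 201)                          # off-axis distance [mm]
measured = np.exp(-(x / 30.0) ** 4)                    # toy measured profile
calculated = np.exp(-((x - 0.5) / 30.0) ** 4) * 1.01   # toy AAA-calculated profile

g = gamma_1d(x, measured, x, calculated)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1.0):.1f}%")
```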

Keywords: commissioning, flattening filter free (FFF) beam, anisotropic analytical algorithm (AAA), flattened beam, parameters

Procedia PDF Downloads 277
16334 A Case Study of Conceptual Framework for Process Performance

Authors: Ljubica Milanović Glavan, Vesna Bosilj Vukšić, Dalia Suša

Abstract:

In order to gain a competitive advantage, many companies are focusing on the reorganization of their business processes and implementing process-based management. In this context, assessing process performance is essential because it enables individuals and groups to assess where they stand in comparison to their competitors. In this paper, it is argued that process performance measurement is a necessity for a modern process-oriented company and that it should be supported by a holistic process performance measurement system. It seems very unlikely that a universal set of performance indicators can be applied successfully to all business processes; thus, performance indicators must be process-specific and have to be derived from both the strategic enterprise-wide goals and the process goals. Based on an extensive literature review and interviews conducted in a Croatian company, a conceptual framework for a process performance measurement system was developed. The main objective of such a system is to help process managers by providing comprehensive and timely information on the performance of business processes. This information can be used to communicate goals and the current performance of a business process directly to the process team, to improve resource allocation and process output regarding quantity and quality, to give early warning signals, to diagnose the weaknesses of a business process, to decide whether corrective actions are needed and to assess the impact of actions taken.

Keywords: Croatia, key performance indicators, performance measurement, process performance

Procedia PDF Downloads 645
16333 A Pedagogical Case Study on Consumer Decision Making Models: A Selection of Smart Phone Apps

Authors: Yong Bum Shin

Abstract:

This case focuses on the Weighted Additive Difference, Conjunctive, Disjunctive, and Elimination by Aspects methodologies in consumer decision-making models and the Simple Additive Weighting (SAW) approach in the multi-criteria decision-making (MCDM) area. Most decision-making models illustrate that the rank reversal phenomenon is unpreventable. This paper shows that rank reversal occurs in popular managerial methods such as Weighted Additive Difference (WAD), the Conjunctive Method, the Disjunctive Method, Elimination by Aspects (EBA), and MCDM methods such as Simple Additive Weighting (SAW), and finally presents the Unified Commensurate Multiple (UCM) model, which successfully addresses these rank reversal problems in the most popular MCDM methods in the decision-making area.
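
A minimal sketch of SAW scoring and the rank-reversal effect is given below; the decision matrix and weights are made-up example numbers, and with these numbers the relative order of two alternatives flips when a non-optimal alternative is removed.

```python
import numpy as np

# Sketch of Simple Additive Weighting (SAW) and the rank-reversal effect discussed
# above: removing a non-optimal alternative changes the relative ranking of the
# remaining ones. The decision matrix and weights are made-up example numbers.
def saw_ranking(X, w):
    Xn = X / X.max(axis=0)          # linear (max) normalization, benefit criteria
    scores = Xn @ w
    return scores, np.argsort(-scores)   # 0-based indices, best first

w = np.array([0.5, 0.3, 0.2])
X = np.array([                       # alternatives A1..A4 scored on 3 criteria
    [8.0, 70.0, 3.0],
    [6.0, 90.0, 5.0],
    [9.0, 40.0, 4.0],
    [5.0, 95.0, 2.0],
])

scores_all, order_all = saw_ranking(X, w)
scores_red, order_red = saw_ranking(np.delete(X, 3, axis=0), w)  # drop alternative A4

print("ranking with all alternatives:", order_all)   # A2 > A3 > A1 > A4
print("ranking after removing A4    :", order_red)   # A2 > A1 > A3
# The relative order of A1 and A3 flips between the two runs: this is the
# rank-reversal inconsistency the UCM approach aims to remove.
```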

Keywords: multiple criteria decision making, rank inconsistency, unified commensurate multiple, analytic hierarchy process

Procedia PDF Downloads 61