Search results for: fuzzy analytical hierarchy process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17395


16525 Signature Verification System for a Banking Business Process Management

Authors: A. Rahaf, S. Liyakathunsia

Abstract:

In today’s world, banks face unprecedented operational pressure that tests the efficiency, effectiveness, and agility of their business processes. In a typical banking process, a person’s authorization is usually based on his or her signature on most transactions, so signature verification is one of the most significant pieces of information needed for any bank document processing. Banks routinely use signature verification to authenticate the identity of individuals. In this paper, a business process model is proposed to increase the quality of the verification process and to reduce the time and resources it requires. To understand the current process, a survey was conducted and distributed among bank employees. After analyzing the survey, a process model was created using the Bizagi modeler, which helps in simulating the process once its time and cost have been assigned. The outcomes show that automation of the signature verification process is highly recommended for a banking business process.

Keywords: business process management, process modeling, quality, signature verification

Procedia PDF Downloads 407
16524 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real case of its formation is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of brine pockets until a local equilibrium state is reached. Passing heat through the medium therefore does more than change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. The model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained from those of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases for this type of ice: one side is treated as a thermally insulated surface, and the other is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variation of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can produce a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of temperature variation is slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends toward instability because of the sharp temperature variation at the beginning of the process; refining the intervals resolves this. The analytical model with its numerical scheme is capable of predicting the thermal behavior of brine-spongy ice, which is important for modeling the freezing of salt water and ice accretion on cold structures.
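The Method of Lines approach described in this abstract can be sketched in simplified form. The sketch below assumes constant thermal properties, omits the latent-heat coupling of the brine pockets, and uses illustrative dimensions and temperatures, so it is a toy version of the paper's model rather than a reimplementation:

```python
import numpy as np

def solve_mol(n=50, L=0.1, alpha=1e-6, T0=-5.0, T_bc=-20.0, t_end=600.0, dt=0.05):
    """Method of Lines for 1D transient conduction: insulated at x=0,
    suddenly imposed constant temperature at x=L. Constant properties;
    the brine-pocket latent-heat terms of the full model are omitted."""
    dx = L / (n - 1)
    T = np.full(n, T0)
    T[-1] = T_bc                                     # Dirichlet boundary
    for _ in range(int(t_end / dt)):
        dTdt = np.zeros(n)
        # spatial discretization turns the PDE into a system of ODEs
        dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2  # insulated (ghost-node) side
        T += dt * dTdt                               # explicit Euler time step
    return T

T = solve_mol()
```

The explicit step size satisfies the stability bound alpha*dt/dx² < 0.5 for these values; the resulting ODE system could equally be handed to a stiff integrator.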

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 207
16523 UV-Vis Spectroscopy as a Tool for Online Tar Measurements in Wood Gasification Processes

Authors: Philip Edinger, Christian Ludwig

Abstract:

The formation and control of tars remain one of the major challenges in the implementation of biomass gasification technologies. Robust, on-line analytical methods are needed to investigate the fate of tar compounds when different measures for their reduction are applied. This work establishes an on-line UV-Vis method, based on a liquid quench sampling system, to monitor tar compounds in biomass gasification processes. Recorded spectra from the liquid phase were analyzed for their tar composition by means of a classical least squares (CLS) and partial least squares (PLS) approach. This allowed for the detection of UV-Vis active tar compounds with detection limits in the low part per million by volume (ppmV) region. The developed method was then applied to two case studies. The first involved a lab-scale reactor, intended to investigate the decomposition of a limited number of tar compounds across a catalyst. The second study involved a gas scrubber as part of a pilot scale wood gasification plant. Tar compound quantification results showed good agreement with off-line based reference methods (GC-FID) when the complexity of tar composition was limited. The two case studies show that the developed method can provide rapid, qualitative information on the tar composition for the purpose of process monitoring. In cases with a limited number of tar species, quantitative information about the individual tar compound concentrations provides an additional benefit of the analytical method.
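The classical least squares (CLS) step mentioned above amounts to solving a linear system built from pure-component spectra. The sketch below uses made-up Gaussian band shapes and concentrations purely for illustration; the paper's actual calibration spectra and tar compounds are not reproduced here:

```python
import numpy as np

# Hypothetical pure-component spectra (rows: wavelengths, cols: tar compounds).
wavelengths = np.linspace(500, 600, 101)
band_a = np.exp(-((wavelengths - 520) / 8.0) ** 2)    # made-up band shape
band_b = np.exp(-((wavelengths - 560) / 12.0) ** 2)   # made-up band shape
K = np.column_stack([band_a, band_b])                 # pure-spectra matrix

true_conc = np.array([3.0, 1.5])    # ppmV-scale concentrations (illustrative)
mixture = K @ true_conc             # Beer-Lambert: measured mixture spectrum

# Classical least squares: recover concentrations from the mixture spectrum.
conc, *_ = np.linalg.lstsq(K, mixture, rcond=None)
```

With noisy spectra the same call yields the least-squares concentration estimate; a PLS model would replace `K` with latent-variable loadings fitted to calibration data.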

Keywords: biomass gasification, on-line, tar, UV-Vis

Procedia PDF Downloads 245
16522 Modeling and Temperature Control of Water-cooled PEMFC System Using Intelligent Algorithm

Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan

Abstract:

The proton exchange membrane fuel cell (PEMFC) is among the most promising future energy sources owing to its low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive control-oriented model of a PEMFC system is developed in the Matlab/Simulink environment, comprising the hydrogen supply, air supply, and thermal management subsystems. In addition, an Improved Artificial Bee Colony (IABC) algorithm is used for parameter identification of the PEMFC semi-empirical equations, making the maximum relative error between simulated and experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both excessively high and excessively low temperatures are disadvantageous. In the thermal management subsystem, the water pump and fan are both controlled with PID controllers to maintain the appropriate operating temperature for safe and efficient operation. To further improve the control performance, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID achieve better control, with a 22.66% decrease in the Integral Absolute Error (IAE) of T_st (the PEMFC temperature) and a 77.56% decrease in the IAE of T_in (the inlet cooling water temperature) compared with traditional PID. Finally, a novel thermal management structure is proposed in which the cooling air passing through the main radiator continues on to cool the secondary radiator. With this structure, the parasitic power dissipation can be reduced by 69.94%, and the control performance improves with a 52.88% decrease in the IAE of T_in under the same controller.
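The baseline PID loop that the paper's Fuzzy-PID and RBF-PID variants build on can be sketched against a toy first-order thermal plant. All gains, time constants, and temperatures below are illustrative assumptions rather than the paper's identified model; the IAE accumulation mirrors the criterion quoted in the abstract:

```python
class PID:
    """Discrete PID controller (no actuator saturation in this sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev_e = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.i += e * self.dt                 # integral term
        d = (e - self.prev_e) / self.dt       # derivative term
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

# Toy first-order thermal plant: dT/dt = (-(T - T_amb) + u) / tau
dt, tau, T_amb = 0.1, 50.0, 25.0
pid = PID(kp=8.0, ki=0.4, kd=1.0, dt=dt)
T, setpoint, iae = 25.0, 70.0, 0.0
for _ in range(5000):                          # 500 s of simulated time
    u = pid.step(setpoint, T)
    T += dt * (-(T - T_amb) + u) / tau
    iae += abs(setpoint - T) * dt              # Integral Absolute Error
```

A fuzzy or RBF supervisor would adjust `kp`, `ki`, `kd` online from the error signal; the loop structure stays the same.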

Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power

Procedia PDF Downloads 64
16521 Generalization of Clustering Coefficient on Lattice Networks Applied to Criminal Networks

Authors: Christian H. Sanabria-Montaña, Rodrigo Huerta-Quintanilla

Abstract:

A lattice network is a special type of network in which all nodes have the same number of links and the boundary conditions are periodic. The most basic lattice network is the ring, a one-dimensional network with periodic boundary conditions; the Cartesian product of d rings forms a d-dimensional lattice network. An analytical expression currently exists for the clustering coefficient in this type of network, but the theoretical value is valid only up to a certain connectivity value; in other words, the analytical expression is incomplete. Here we analytically obtain the clustering coefficient of d-dimensional lattice networks for any link density. Our results show that as the link density tends to 1, the clustering coefficient of a lattice network approaches that of a fully connected network. We developed a model in criminology in which the generalized clustering coefficient expression is applied. The model states that delinquents learn the know-how of the crime business by sharing knowledge, directly or indirectly, with their fellow gang members. This generalization sheds light on network properties, which is important for developing new models in fields where network structure plays an important role in the system dynamics, such as criminology, evolutionary game theory, and econophysics.
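For the one-dimensional case (the ring), the clustering coefficient can be checked numerically against the known closed form C = 3(k-2)/(4(k-1)) for a ring in which each node links to its k nearest neighbours. This is a minimal illustration of the quantity being generalized, not the paper's d-dimensional derivation:

```python
from itertools import combinations

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours (k even)."""
    return {i: {(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0}
            for i in range(n)}

def clustering(adj, v):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[v]
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    deg = len(nbrs)
    return 2 * links / (deg * (deg - 1))

adj = ring_lattice(20, 4)
c_numeric = clustering(adj, 0)
c_theory = 3 * (4 - 2) / (4 * (4 - 1))   # known ring-lattice result
```

By symmetry every node of the ring has the same local coefficient, so one node suffices for the check.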

Keywords: clustering coefficient, criminology, generalized, regular network d-dimensional

Procedia PDF Downloads 390
16520 The Victim as a Public Actor: Understanding the Victim’s Role as an Agent of Accountability

Authors: Marie Manikis

Abstract:

This paper argues that the scholarship to date on victims in the criminal process has mainly adopted a private conception of victims –as bearers of individual interests, rights, and remedies– rather than a conception of the victim as an actor with public functions and interests, who has historically and continuously taken on an active role in the common law tradition. This conception enables a greater understanding of the various developments around victim participation in common law criminal justice systems and provides a useful analytical tool to understand the different roles of victims in England and Wales and the United States. Indeed, the main focus on individual rights and the conception of the victim as a private entity undermines the distinctive and increasing role victims play in the wider criminal justice process as agents of accountability through administrative-based processes within and outside courts, including private prosecutions, internal review processes within prosecutorial agencies, judicial review, and ombudsmen processes.

Keywords: victims, participation, criminal justice, accountability

Procedia PDF Downloads 110
16519 Uncertain Time-Cost Trade off Problems of Construction Projects Using Fuzzy Set Theory

Authors: V. S. S. Kumar, B. Vikram

Abstract:

The development of effective decision support tools for the construction industry is vital in today's world, since such tools can lead to substantial cost reduction and efficient resource consumption. Solving time-cost trade-off problems and their variants is at the heart of scientific research on optimizing construction planning. In general, classical optimization techniques have difficulty dealing with TCT problems; one of the main reasons for their failure is that they can easily become trapped in local minima. This paper presents an investigation of the application of meta-heuristic techniques to two particular variants of time-cost trade-off analysis: the time-cost trade-off problem (TCT), in which the total project cost should be minimized, and the time-cost trade-off optimization problem (TCO), in which the total project cost and total project duration should be minimized simultaneously. It is expected that the optimization models developed in this paper will contribute significantly to efficient planning and management of construction projects.
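For a toy instance, the time-cost trade-off can be made concrete by exhaustive enumeration rather than a meta-heuristic: each activity is either run at normal pace or crashed, and the cheapest schedule meeting the deadline wins. The serial-activity assumption and all figures below are illustrative:

```python
from itertools import product

# Hypothetical activities: (normal_days, normal_cost, crash_days, crash_cost)
activities = [(8, 100, 5, 180), (6, 80, 4, 150), (10, 120, 7, 200)]
deadline, indirect_cost_per_day = 18, 25   # illustrative figures

best = None
for choice in product([0, 1], repeat=len(activities)):   # 0 = normal, 1 = crash
    # assume activities run in series, so durations simply add up
    duration = sum(a[2] if c else a[0] for a, c in zip(activities, choice))
    direct = sum(a[3] if c else a[1] for a, c in zip(activities, choice))
    if duration <= deadline:
        total = direct + indirect_cost_per_day * duration
        if best is None or total < best[0]:
            best = (total, duration, choice)
```

With realistic activity counts the 2^n search space is what forces the meta-heuristics the abstract discusses; the objective being enumerated here is the same.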

Keywords: fuzzy sets, uncertainty, optimization, time cost trade off problems

Procedia PDF Downloads 340
16518 Experimental and Analytical Investigation of Seismic Behavior of Concrete Beam-Column Joints Strengthened by Fiber-Reinforced Polymers Jacketing

Authors: Ebrahim Zamani Beydokhti, Hashem Shariatmadar

Abstract:

This paper presents an experimental and analytical investigation of the behavior of retrofitted beam-column joints subjected to reversed cyclic loading. The experimental program comprises 8 external beam-column joint subassemblages tested in two phases: a damage phase and a retrofit phase. The beam-column joints were not seismically designed, i.e., the joint, beam, and column critical zones had no special transverse stirrups. The joints were tested under cyclic loading in previous research. The experimental results were then compared with analytical results obtained from modeling in the OpenSees software. The presence of a lateral slab and the magnitude of the axial load were investigated analytically. The results showed that increasing the axial load and the presence of a lateral slab increased the joint capacity. The presence of a lateral slab increased the dissipated energy, while the axial load had no significant effect on it.
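The dissipated energy compared above is typically the area enclosed by a load-displacement hysteresis loop, which can be computed with the shoelace formula. The rectangular loop below is an idealized stand-in for a measured cycle:

```python
def dissipated_energy(displacements, forces):
    """Area enclosed by one closed load-displacement loop (shoelace formula).
    The loop is given as ordered vertices; the last point closes to the first."""
    n = len(displacements)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += displacements[i] * forces[j] - displacements[j] * forces[i]
    return abs(area) / 2.0

# Idealised rectangular hysteresis loop: +/-10 mm at +/-50 kN.
d = [10, -10, -10, 10]
f = [50, 50, -50, -50]
energy = dissipated_energy(d, f)   # kN*mm per cycle
```

For measured data the vertices would be the sampled points of one full cycle, and the per-cycle areas would be summed over the loading history.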

Keywords: concrete beam-column joints, CFRP sheets, lateral slab, axial load

Procedia PDF Downloads 127
16517 The Optimization Process of Aortic Heart Valve Stent Geometry

Authors: Arkadiusz Mezyk, Wojciech Klein, Mariusz Pawlak, Jacek Gnilka

Abstract:

The aortic heart valve stent should fulfill many criteria, and these criteria have a strong impact on the geometrical shape of the stent. Usually, the final construction of a stent is the result of many years of experience and knowledge. Depending on patent claims, different stent shapes are produced by different companies, which causes difficulties for biomechanics engineers by narrowing the domain of feasible solutions. This paper presents an optimization method for stent geometry defined by a specific analytical equation based on various mathematical functions. The formula was implemented in the APDL script language in the ANSYS finite element environment. For the simulation tests, a few parameters were separated from the developed equation. The application of genetic algorithms allows finding the best solution for a selected objective function. The obtained solution takes into account parameters such as radial force, compression ratio, and the coefficient of expansion along the transverse axis.
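A genetic algorithm of the kind applied above can be sketched generically. Since the paper's objective couples the geometry equation to an ANSYS finite element model, the sketch substitutes a smooth stand-in objective with a known optimum; the GA machinery (elitist selection, one-point crossover, mutation) is the illustrative part:

```python
import random

random.seed(42)

def objective(params):
    """Stand-in for the FEM-based objective (radial force, compression ratio):
    a smooth function with a known minimum at 0.6 for every parameter."""
    return sum((p - 0.6) ** 2 for p in params)

def genetic_minimize(obj, n_params=4, pop=30, gens=80, mut=0.3):
    population = [[random.random() for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=obj)                       # elitist selection
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                  # clamped gaussian mutation
                i = random.randrange(n_params)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return min(population, key=obj)

best = genetic_minimize(objective)
```

In the paper's setting each objective evaluation would launch an APDL-parameterized FEM run, so population size and generation count become the main cost drivers.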

Keywords: aortic stent, optimization process, geometry, finite element method

Procedia PDF Downloads 269
16516 A Geospatial Consumer Marketing Campaign Optimization Strategy: Case of Fuzzy Approach in Nigeria Mobile Market

Authors: Adeolu O. Dairo

Abstract:

Getting the consumer marketing strategy right is a crucial and complex task for firms with a large customer base, such as mobile operators in a competitive mobile market. While empirical studies have made efforts to identify key constructs, no geospatial model has been developed to comprehensively assess the viability and interdependency of ground realities regarding the customer, competition, channel, and network quality of mobile operators. With this research, a geo-analytic framework is proposed for strategy formulation and allocation for mobile operators. First, a fuzzy analytic network depicting the interrelationships among these ground realities is developed, using a self-organizing feature map clustering technique based on inputs from managers and the literature. The model is tested with a mobile operator in the Nigerian mobile market. As a result, a customer-centric geospatial and visualization solution is developed, providing consolidated and integrated insight that serves as a transparent, logical, and practical guide for strategic, tactical, and operational decision making.
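The self-organizing feature map used for clustering ground realities can be sketched in one dimension. The four per-site indicators below are hypothetical stand-ins for the paper's customer, competition, channel, and network-quality inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-site indicators, scaled to [0, 1]:
# [customer density, competitor share, channel reach, network quality]
sites = rng.random((200, 4))

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0):
    """1-D self-organizing feature map: each unit becomes a prototype segment."""
    w = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best matching unit
            dist = np.abs(np.arange(n_units) - bmu)      # distance on the grid
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
            w += lr * h[:, None] * (x - w)               # pull units toward x
    return w

prototypes = train_som(sites)
```

Each trained prototype summarizes one market segment; mapping every site to its best matching unit gives the cluster labels that would feed the fuzzy analytic network.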

Keywords: geospatial, geo-analytics, self-organizing map, customer-centric

Procedia PDF Downloads 164
16515 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle

Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad

Abstract:

Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and in geometry likewise. Geometric patterns appear in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square, and the hexagon, each with its own specific structure. Even though geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. To explain the principles of geometric interaction between the square and the equilateral triangle, all types of their linear forces are illustrated, first individually and then in combination. In this analysis, the angles created by the intersection of their directions are categorized into groups, and the mathematical relationships among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, evaluation results obtained from a small portion are attributable to the large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing plan designs.
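The angle groups described above can be illustrated numerically: taking the square's side and diagonal directions together with the equilateral triangle's side directions, every intersection angle that arises is a multiple of 15 degrees. The direction sets below are an assumption about the construction, chosen purely for illustration:

```python
# Direction angles (degrees) of the square's side/diagonal directions and the
# equilateral triangle's side directions, measured from a shared baseline.
square_dirs = [0, 45, 90, 135]
triangle_dirs = [0, 60, 120]

# Angle between any two intersecting directions, reduced to [0, 180).
angles = sorted({abs(a - b) % 180 for a in square_dirs for b in triangle_dirs})
```

The resulting angle set is closed under the 15-degree module, which is one way to see why motifs built on these two shapes tile coherently when repeated.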

Keywords: angle, equal triangle, square, structural hierarchy

Procedia PDF Downloads 181
16514 A Design for Supply Chain Model by Integrated Evaluation of Design Value and Supply Chain Cost

Authors: Yuan-Jye Tseng, Jia-Shu Li

Abstract:

To design a product for a given product requirement and design objective, there can be alternative ways to propose the detailed design specifications. In the design modeling stage, alternative design cases with detailed specifications can be modeled to fulfill the product requirement and design objective. In the design evaluation stage, it is therefore necessary to evaluate the alternative design cases to decide on the final design. The purpose of this research is to develop a product evaluation model that evaluates alternative design cases by integrating criteria for functional design, Kansei design, and design for supply chain. The criteria in the functional design group include primary function, expansion function, improved function, and new function. The criteria in the Kansei design group include geometric shape, dimension, surface finish, and layout. The criteria in the design for supply chain group include material, manufacturing process, assembly, and supply chain operation. From the point of view of value and cost, the criteria in the functional design and Kansei design groups represent the design value of the product, while the criteria in the design for supply chain group represent its supply chain and manufacturing cost. Both the design value and the supply chain cost must be evaluated to determine the final design. To evaluate the criteria in the three groups, a fuzzy analytic network process (FANP) method is presented that computes a weighted index from the total relational values among the three groups. The technique for order preference by similarity to ideal solution (TOPSIS) is then used to compare and rank the alternative design cases according to the weighted index. The final design case can be determined from the ordered ranking; for example, the design case with the top ranking can be selected. Based on the evaluation criteria, the design objective can be achieved with a combined and weighted effect of the design value and manufacturing cost. An example product is demonstrated and illustrated in the presentation, showing that the design evaluation model is useful for the integrated evaluation of functional design, Kansei design, and design for supply chain to determine the best design case and achieve the design objective.
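The TOPSIS ranking step can be sketched directly; the FANP-derived weights would feed the `weights` argument. The two-criterion decision matrix below is deliberately degenerate (one case dominates) so the expected ranking is unambiguous; it is illustrative, not the paper's example product:

```python
import numpy as np

def topsis(matrix, weights):
    """TOPSIS ranking for benefit criteria: higher closeness = better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalisation
    v = norm * weights                            # weighted normalised matrix
    ideal_best, ideal_worst = v.max(axis=0), v.min(axis=0)
    d_best = np.linalg.norm(v - ideal_best, axis=1)
    d_worst = np.linalg.norm(v - ideal_worst, axis=1)
    return d_worst / (d_best + d_worst)           # relative closeness

# Three hypothetical design cases scored on design value and (inverted) cost,
# with the third case dominating on both criteria.
scores = topsis([[1, 1], [2, 2], [3, 3]], weights=np.array([0.6, 0.4]))
```

Cost-type criteria are usually inverted or handled by swapping ideal best and worst per column; the ordering of `scores` then gives the final ranking.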

Keywords: design for supply chain, design evaluation, functional design, Kansei design, fuzzy analytic network process, technique for order preference by similarity to ideal solution

Procedia PDF Downloads 304
16513 Step Method for Solving Nonlinear Two Delays Differential Equation in Parkinson’s Disease

Authors: H. N. Agiza, M. A. Sohaly, M. A. Elfouly

Abstract:

Parkinson's disease (PD) is a heterogeneous disorder with a common age of onset, symptoms, and progression levels. In this paper, we solve the PD model analytically as a nonlinear delay differential equation using the method of steps. The method of steps transforms a system of delay differential equations (DDEs) into systems of ordinary differential equations (ODEs). In some numerical examples, the analytical solution is difficult to obtain, so we approximate it by applying the Picard method and the Taylor method to the resulting ODEs.
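The method of steps can be illustrated on the classic scalar test equation y'(t) = -y(t-1) with constant history, where the hand-computed solution on [0, 2] is available for comparison. This is a generic illustration, not the paper's Parkinson's disease model:

```python
def solve_dde(t_end=2.0, delay=1.0, dt=0.001):
    """Method of steps with Euler integration for y'(t) = -y(t - delay),
    history y(t) = 1 for t <= 0. On each interval [k, k+1] the delayed
    term is already known, so the DDE reduces to an ODE on that step."""
    n_delay = round(delay / dt)
    y = [1.0]
    for i in range(round(t_end / dt)):
        # delayed value comes from the history function or from a past step
        delayed = y[i - n_delay] if i >= n_delay else 1.0
        y.append(y[i] + dt * (-delayed))
    return y

y = solve_dde()
# By hand: y(t) = 1 - t on [0, 1], so y(1) = 0; integrating the next
# interval gives y(t) = t**2/2 - 2*t + 3/2 on [1, 2], so y(2) = -1/2.
```

Each extra delay in the paper's two-delay model adds another stored-history lookup of the same form; the interval-by-interval reduction to ODEs is unchanged.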

Keywords: Parkinson's disease, step method, delay differential equation, two delays

Procedia PDF Downloads 191
16512 Utility Assessment Model for Wireless Technology in Construction

Authors: Yassir AbdelRazig, Amine Ghanem

Abstract:

Construction projects are information intensive in nature and involve many interrelated activities. Wireless technologies can be used to improve the accuracy and timeliness of data collected from construction sites and to share it with the appropriate parties. Nonetheless, the construction industry tends to be conservative and hesitates to adopt new technologies. A main concern for owners, contractors, or anyone in charge of a job site is the cost of the technology in question. Wireless technologies are not cheap; many expenses must be taken into consideration, and a study should be completed to make sure that the benefits and savings resulting from the use of the technology are worth the expense. This research attempts to assess the effectiveness of using appropriate wireless technologies based on criteria such as performance, reliability, and risk. The assessment is based on a utility function model that breaks the selection problem down into alternative attributes. The attributes are assigned weights and measured individually, and the single-attribute utilities are then combined into one aggregate utility index for each alternative.
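The additive utility aggregation described above can be sketched as a weighted sum of single-attribute scores. The alternatives, attribute scores, and weights below are hypothetical placeholders, not values from the study:

```python
# Hypothetical single-attribute utility scores (0-1) for three wireless options.
alternatives = {
    "RFID":  {"performance": 0.7, "reliability": 0.8, "risk": 0.9},
    "Wi-Fi": {"performance": 0.9, "reliability": 0.7, "risk": 0.6},
    "UWB":   {"performance": 0.8, "reliability": 0.6, "risk": 0.7},
}
weights = {"performance": 0.5, "reliability": 0.3, "risk": 0.2}  # sum to 1

# Additive utility: aggregate index = weighted sum of single-attribute scores.
utility = {name: sum(weights[a] * score for a, score in attrs.items())
           for name, attrs in alternatives.items()}
best = max(utility, key=utility.get)
```

In an AHP-style workflow the weights would come from pairwise comparisons rather than being assigned directly as here.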

Keywords: analytic hierarchy process, decision theory, utility function, wireless technologies

Procedia PDF Downloads 325
16511 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment

Authors: Pranjal Srivastava, Piyali Sengupta

Abstract:

The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. The riser is idealized as a continuous beam with a concentric circular cross-section, and the hydrodynamic loading acting on it is determined by Morison's equations. By considering the equilibrium of forces on the riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. The Runge-Kutta method and the finite difference method are then employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results, the critical design parameters are determined: peak displacements, upper and lower flex joint rotations, and von Mises stresses. An extensive parametric study explores the effects of top tension, drilling depth, ocean current speed, and platform drift on these critical design parameters. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading. The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
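The hydrodynamic loading step via Morison's equations can be sketched for a single riser section: an in-line force per unit length combining a drag term in the current velocity and an inertia term in the fluid acceleration. The diameter, coefficients, and flow values below are illustrative assumptions:

```python
import math

def morison_force(u, dudt, D, rho=1025.0, cd=1.0, cm=2.0):
    """Morison's equation: in-line force per unit length on a cylinder,
    combining a drag term (velocity) and an inertia term (acceleration).
    rho is seawater density; cd and cm are empirical coefficients."""
    area = math.pi * D ** 2 / 4.0
    drag = 0.5 * rho * cd * D * u * abs(u)   # u*|u| keeps the force signed
    inertia = rho * cm * area * dudt
    return drag + inertia

# Illustrative numbers for a 0.5 m riser section in a 1.5 m/s current.
f = morison_force(u=1.5, dudt=0.2, D=0.5)   # N per metre of riser
```

In the full model this force would be evaluated along the depth coordinate z from the current and wave kinematics and fed into the beam equation as the distributed load.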

Keywords: drilling riser, marine, analytical model, fragility

Procedia PDF Downloads 129
16510 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications

Authors: Sung-Won Seo, Jang-Young Choi

Abstract:

The rare earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows direct drive without the need for a mechanical conversion device, so it is well suited to translational applications such as wave energy converters and free-piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array produces a higher air-gap flux density than the vertical array, and its advantages in performance and efficiency are widely known. To verify the advantages of the Halbach array, we apply both a finite element method (FEM) and an analytical method. In general, FEM and analytical methods are used in electromagnetic analysis to determine model characteristics, and FEM is often preferred for magnetic field analysis; however, it can be slow and inflexible, whereas the analytical method requires little time and produces an accurate analysis of the magnetic field. The air-gap flux density and back-EMF are obtained by FEM, and the results from the analytical method correspond well with the FEM results. The Halbach array model exhibits lower copper loss than the vertical array model because of its higher output power density. The vertical array model has lower core loss than the Halbach array model because of its lower air-gap flux density; consequently, the current density in the vertical model is higher for identical power output. The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results, and a specific comparative analysis will be presented to determine the best model for application in a wave energy converting system.
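The copper-loss comparison claimed above follows from a simple circuit argument: for the same electrical output, a higher back-EMF means lower current and hence lower I²R loss. The sketch uses illustrative per-phase numbers, not the generators' actual parameters:

```python
def copper_loss(p_out, emf, resistance=0.8, phases=3):
    """Per-phase copper loss for a given total electrical output: the
    current needed for p_out falls as the back-EMF rises, and the
    I^2*R loss falls with the square of that current."""
    current = p_out / (phases * emf)       # per-phase current
    return phases * current ** 2 * resistance

p = 1000.0                                  # W, illustrative output
loss_halbach = copper_loss(p, emf=60.0)     # higher air-gap flux -> higher EMF
loss_vertical = copper_loss(p, emf=45.0)    # lower EMF -> more current needed
```

Since loss scales as 1/EMF², the ratio of the two losses is exactly (45/60)² here; core loss pulls the other way, which is the trade-off the abstract describes.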

Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method

Procedia PDF Downloads 285
16509 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to quantifying the acidity of solutions containing hydrolysable heavy metal ions such as U(VI), U(IV), or Pu(IV) without taking into account the acidity contribution from the hydrolysis of these metal ions. It plays an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, creating a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation with a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration is followed with a CCD camera for fluorescence detection; thanks to the addition of a pH-sensitive fluorophore, the neutralization boundary can be visualized in a detection range of 500-600 nm. The operating principle of the device allows the active generation of linear concentration gradients in a single cylindrical micro-channel. This feature simplifies fabrication and ease of use, since no complex micro-channel network or passive mixers are needed to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in under a second, a more time-efficient process than other source-sink passive diffusion devices. The resulting linear gradient generator was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generated by other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed methodology greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could improve laboratory safety as well as reduce the environmental impact of the radioanalytical chain.
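The alkalimetric balance underlying the titration can be sketched at chip scale: once U(VI) is masked by oxalate, the NaOH equivalence volume follows from the free HNO3 content alone. The concentrations and volumes below are illustrative:

```python
def equivalence_volume_ul(acid_molarity, sample_ul, naoh_molarity):
    """Equivalence volume of NaOH (in microlitres) for a strong monoprotic
    acid: moles of free HNO3 in the sample equal moles of NaOH delivered.
    Assumes the oxalate step has masked all hydrolysable actinide ions."""
    moles_acid = acid_molarity * sample_ul * 1e-6    # microlitres -> litres
    return moles_acid / naoh_molarity * 1e6          # litres -> microlitres

# Chip-scale sample: 1 uL of 2 M free HNO3 titrated with 0.5 M NaOH.
v_eq = equivalence_volume_ul(2.0, 1.0, 0.5)
```

On the chip, the position of the fluorescence boundary along the linear gradient plays the role of `v_eq`, since the gradient maps channel position to delivered titrant fraction.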

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 375
16508 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench and temper heat treatment to achieve the desired mechanical properties. Numerical simulation is now widely used to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature within the material before loading is not uniform, yet simulations commonly impose a constant temperature on the assumption that the temperature has homogenized after some holding time. To stay close to the experiment, the real temperature distribution through the specimen is therefore needed before mechanical loading. We present here a robust algorithm that computes the temperature gradient within the specimen, thus representing a real temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature, which is not realistic because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. These are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product.
Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique to predict this initiation remains challenging. In this perspective, we propose, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines and compared with a simulation in which an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. The simulations properly predict the temperature distribution inside the material and the initiation of recrystallization, and the results are compared to literature models.
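The idea of computing a realistic temperature field before mechanical loading can be illustrated with a minimal one-dimensional explicit finite-difference sketch. This is an illustration only, not the authors' algorithm; the steel-like diffusivity, geometry, and boundary temperatures are assumptions:

```python
import numpy as np

# 1D explicit finite-difference transient heat conduction: a uniformly
# hot block cools from one surface, producing the surface-to-core
# gradient that exists before deformation. All values are assumed.
alpha = 1.2e-5                 # thermal diffusivity [m^2/s] (assumed)
L = 0.10                       # half-thickness of the block [m] (assumed)
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha       # explicit stability: r = alpha*dt/dx^2 < 0.5
r = alpha * dt / dx**2

T = np.full(nx, 1200.0)        # uniform initial temperature [deg C]
T[0] = 900.0                   # colder surface after transfer (assumed)

for _ in range(2000):          # march in time
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1])   # insulated core symmetry plane
    Tn[0] = 900.0                              # fixed surface temperature
    T = Tn

print(f"surface {T[0]:.0f} C, core {T[-1]:.0f} C (nonuniform profile)")
```

The resulting profile, rather than a single constant temperature, is what a field like this would hand to the mechanical simulation as its initial condition.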

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 67
16507 Transitivity Analysis in Reading Passage of English Text Book for Senior High School

Authors: Elitaria Bestri Agustina Siregar, Boni Fasius Siregar

Abstract:

This paper is concerned with transitivity in the reading passages of an English textbook for Senior High School. Six process types occurred in the passages, with the following counts and percentages: Material Process, 166 (42%); Relational Process, 155 (39%); Mental Process, 39 (10%); Verbal Process, 21 (5%); Existential Process, 13 (3%); and Behavioral Process, 5 (1%). Material processes were found to be the most frequently used process type in the samples in our corpus (41.60%). This indicates that the twenty reading passages are centrally concerned with actions and events. In relation to developmental psychology theory, the book fits the needs of students of this age.

Keywords: transitivity, types of processes, reading passages, developmental psychology

Procedia PDF Downloads 391
16506 Principal Component Analysis in Drug-Excipient Interactions

Authors: Farzad Khajavi

Abstract:

Studies of the interaction between active pharmaceutical ingredients (APIs) and excipients are important in the pre-formulation stage of development of all dosage forms. Analytical techniques such as differential scanning calorimetry (DSC), thermogravimetry (TG), and Fourier transform infrared spectroscopy (FTIR) are commonly used tools for investigating the compatibility or incompatibility of APIs with excipients. The interpretation of data obtained from these techniques is sometimes difficult because of severe overlapping of the API spectrum with those of the excipients in their mixtures. Principal component analysis (PCA), a powerful factor-analytical method, is used in these situations to resolve the data matrices acquired from these analytical techniques. Binary mixtures of the API and the excipients of interest are prepared. The peaks of the FTIR, DSC, or TG measurements of the pure API, the pure excipient, and their mixtures at different mole ratios constitute the rows of the data matrix. By applying PCA to the data matrix, the number of principal components (PCs) capturing the total variance of the data matrix is determined. The PCs obtained from the score matrix are then plotted in two-dimensional space. If the pure API, its mixture with the excipient at a high API content, and the 1:1 mixture form one cluster, while the pure excipient and its blend with the API at a high excipient content form another, this confirms the compatibility of the API with the excipient of interest. Otherwise, incompatibility exists between the API and the excipient.
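The clustering logic above can be sketched numerically. The "spectra" below are synthetic stand-ins, not real FTIR/DSC/TG data: for a compatible, purely additive system each mixture spectrum is a linear blend of the two pure spectra, so one principal component captures nearly all the variance and orders the rows by API content.

```python
import numpy as np

# PCA via SVD on synthetic mixture "spectra" (illustrative assumption:
# compatible system, so rows are exact linear blends of the pure spectra)
rng = np.random.default_rng(0)
api = rng.random(200)              # stand-in pure-API spectrum
exc = rng.random(200)              # stand-in pure-excipient spectrum

ratios = [1.0, 0.75, 0.5, 0.25, 0.0]          # API mole fraction per row
X = np.array([r * api + (1 - r) * exc for r in ratios])

Xc = X - X.mean(axis=0)                        # mean-center the rows
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                 # PCA scores
var_ratio = s**2 / np.sum(s**2)                # explained variance ratios

print(f"PC1 explains {var_ratio[0]:.1%} of the variance")
print("PC1 scores:", np.round(scores[:, 0], 2))
```

API-rich and excipient-rich rows land at opposite ends of PC1; an incompatible system, with interaction peaks that are not linear blends, would break this single-axis ordering and split the score plot differently.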

Keywords: API, compatibility, DSC, TG, interactions

Procedia PDF Downloads 111
16505 Decoding Democracy's Notion in Aung San Suu Kyi's Speeches

Authors: Woraya Som-Indra

Abstract:

This article aims to decode the notion of democracy embedded in the political speeches of Aung San Suu Kyi by adopting a critical discourse analysis approach, using Systemic Functional Linguistics (SFL) and transitivity as the vital analytical tools. The two main objectives of the study are 1) to analyze the linguistic strategies constituting the crucial characteristics of Suu Kyi's political speeches by employing SFL and transitivity, and 2) to examine the ideology manifesting the notion of democracy behind Suu Kyi's political speeches. The data consist of four speeches Suu Kyi delivered in different places during 2011, broadcast through the website of the US Campaign for Burma. By employing this linguistic tool and the concept of ideology as an analytical frame, the word choices found in the speeches help explain the manifestation of Suu Kyi's ideology of democracy and power struggle. The findings revealed eight characteristic word choices in Suu Kyi's political speeches, as follows: 1) support, hope, and encouragement, which ask the recipients to hold to the mutual aim of fighting for democracy together and moving forward towards change and solutions in the future; 2) aim and achievement, which evoke the recipients' attachment to the purpose of fighting for democracy; 3) challenge and change, which release energy to challenge the present political regime of Burma and change it to a new, democratic one; 4) action, doing, and taking, which signify the action and practical process of calling for a new political regime; 5) struggle, which represents the power struggle in the process of demanding democracy and may also refer to her long period of house arrest in Burma; 6) freedom, which implies what she had long been fighting for: release from house arrest, freedom of speech on political ideology, and the ability to speak out for the Burmese people about their desired political regime and political participation; 7) share and sacrifice, which call
the recipients to the spirit of shared values in the process of acquiring democracy; and 8) solution and achievement, which remind her recipients of what they had long been fighting for and what could lead them to the mutual achievement of a new political regime, i.e., democracy. These word choices are a plausible representation of the notion of democracy in Suu Kyi's terms. Owing to her long journey of fighting for democracy in Burma, Suu Kyi's political speeches always possess a tremendously strong leadership character, using words of wisdom; moreover, they are encoded with a wide range of words related to democratic ideology in order to push forward future change in Burma's political regime.

Keywords: Aung San Suu Kyi’s speeches, critical discourse analysis, democracy ideology, systemic functional linguistics, transitivity

Procedia PDF Downloads 257
16504 Preservation of High Quality Fruit Products: Microwave Freeze Drying as a Substitute for the Conventional Freeze Drying Process

Authors: Sabine Ambros, Ulrich Kulozik

Abstract:

Berries such as blueberries and raspberries belong to the most valuable fruits. To preserve the characteristic flavor and the high contents of vitamins and anthocyanins, these very sensitive berries are usually dried by lyophilization. Because this method is very time- and energy-consuming, the dried fruit is extremely expensive. At the same time, healthy snack foods are growing in popularity, and dried fruit free of any additives or added sugar is increasingly in demand. To make such products affordable, the fruits have to be dried by a method that is more energy-efficient than freeze drying but yields the same high product quality. The additional application of microwaves in a freeze drying process was examined in this work to overcome the drawbacks of conventional freeze drying. As microwaves penetrate the product volumetrically, sublimation takes place simultaneously throughout the product and leads to a many times shorter process duration. A range of microwave and pressure settings was applied to find the optimum drying conditions. The influence of the process parameters microwave power and chamber pressure on drying kinetics, product temperature, and product quality was investigated to find the best conditions for an energy-efficient process with high product quality. Product quality was evaluated by rehydration capacity, crispiness, shrinkage, color, vitamin C content, and antioxidative capacity. The conclusion could be drawn that microwave freeze-dried berries were almost equal to freeze-dried fruit in all measured quality parameters, or even surpassed them. Additionally, sensory evaluations confirmed the analytical studies. Drying time could be reduced by more than 75% at much lower energy consumption. Thus, an energy-efficient and cost-saving alternative to the conventional freeze drying technique for the gentle production of tasty fruit or vegetable snacks has been found. This technique will make dried high-quality snacks available to many consumers.

Keywords: blueberries, freeze drying, microwave freeze drying, process parameters, product quality

Procedia PDF Downloads 224
16503 Hybrid Wavelet-Adaptive Neuro-Fuzzy Inference System Model for a Greenhouse Energy Demand Prediction

Authors: Azzedine Hamza, Chouaib Chakour, Messaoud Ramdani

Abstract:

Energy demand prediction plays a crucial role in achieving next-generation power systems for agricultural greenhouses. As a result, high prediction quality is required for efficient smart grid management and therefore low-cost energy consumption. The aim of this paper is to investigate the effectiveness of a hybrid data-driven model in day-ahead energy demand prediction. The proposed model consists of the Discrete Wavelet Transform (DWT) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The DWT is employed to decompose the original signal into a set of subseries, and an ANFIS is then used to generate the forecast for each subseries. The proposed hybrid method (DWT-ANFIS) was evaluated using greenhouse energy demand data for one week and compared with a plain ANFIS. The performance of the different models was evaluated by comparing the corresponding values of the Mean Absolute Percentage Error (MAPE). It was demonstrated that the discrete wavelet transform can improve agricultural greenhouse energy demand modeling.
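The decompose-then-forecast idea can be sketched in a few lines. A one-level Haar-style transform stands in for the DWT, and small linear autoregressions stand in for the ANFIS models; the hourly demand series is synthetic, not the authors' greenhouse data:

```python
import numpy as np

# Hybrid sketch: split the demand series into trend + detail subseries,
# fit a small AR model to each, and sum the one-step-ahead forecasts.
rng = np.random.default_rng(1)
t = np.arange(24 * 7)                    # one week of hourly demand (assumed)
demand = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

# one-level Haar-style additive decomposition (no external wavelet lib)
a = (demand[0::2] + demand[1::2]) / 2.0
approx = np.repeat(a, 2)                 # reconstructed trend subseries
detail = demand - approx                 # reconstructed detail subseries

def ar_forecast(x, p=3):
    # fit x[t] ~ w0 + w . (x[t-p], ..., x[t-1]) by least squares,
    # then predict the next value from the last p observations
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[i:len(x) - p + i] for i in range(p)])
    w, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return w[0] + w[1:] @ x[-p:]

forecast = ar_forecast(approx) + ar_forecast(detail)
print(f"one-step-ahead demand forecast: {forecast:.2f}")
```

The point of the decomposition is that each subseries is simpler than the raw signal, so the per-subseries models (ANFIS in the paper, AR here) have an easier fitting task.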

Keywords: wavelet transform, ANFIS, energy consumption prediction, greenhouse

Procedia PDF Downloads 69
16502 A Nonlinear Stochastic Differential Equation Model for Financial Bubbles and Crashes with Finite-Time Singularities

Authors: Haowen Xi

Abstract:

We propose and solve exactly a class of nonlinear generalizations of the Black-Scholes stochastic differential equation describing price bubble and crash dynamics. As a result of nonlinear positive feedback, the faster-than-exponential positive price growth (bubble formation) and negative price growth (crash formation) are found to be power-law finite-time singularities, with bubble and crash formation ending at a finite critical time tc. While most literature on market bubbles and crashes focuses on the nonlinear positive feedback mechanism, very few studies concern the influence of the noise level on the same process. The present work adds to the bubble and crash literature by studying the influence of external noise sources on the critical time tc of bubble and crash formation. Two main results are discussed: (1) an analytical expression for the expected value of the critical time is found, and an unexpected critical slowing down due to the coupling to external noise is predicted; (2) numerical simulations of the nonlinear stochastic equation are presented, and the probability distribution Prob(tc) is found to be an inverse gamma distribution.
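The finite-time singularity and the noise-induced spread of tc can be illustrated with a stand-in model, not the paper's exact equation: an Euler-Maruyama simulation of dp = μp² dt + σp dW, whose deterministic part dp/dt = μp² blows up at tc = 1/(μp₀). Per path, tc is estimated as the first time the price crosses a large threshold:

```python
import numpy as np

# Euler-Maruyama paths of a nonlinear SDE with a finite-time singularity
# (illustrative stand-in; parameters are assumptions)
rng = np.random.default_rng(2)
mu, sigma, p0 = 1.0, 0.2, 1.0
dt, n_paths, threshold = 1e-3, 200, 1e3

tcs = []
for _ in range(n_paths):
    p, t = p0, 0.0
    while p < threshold and t < 5.0:        # time cap for rare slow paths
        p += mu * p**2 * dt + sigma * p * rng.normal(0.0, np.sqrt(dt))
        t += dt
    tcs.append(t)
tcs = np.array(tcs)

print(f"mean tc = {tcs.mean():.2f} (deterministic tc = {1/(mu*p0):.2f})")
```

The multiplicative noise scatters the per-path blow-up times around the deterministic critical time; a histogram of `tcs` is the kind of empirical Prob(tc) the abstract fits with an inverse gamma distribution.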

Keywords: bubble, crash, finite-time-singular, numerical simulation, price dynamics, stochastic differential equations

Procedia PDF Downloads 115
16501 Knowledge Discovery from Production Databases for Hierarchical Process Control

Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata

Abstract:

The paper gives the results of a project oriented toward the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for defined problems of process control. The gained knowledge was applied to a real production system, and the proposed solution has thus been verified. The paper documents how it is possible to apply newly discovered knowledge in real hierarchical process control and specifies the opportunities for applying the proposed knowledge discovery model to hierarchical process control.

Keywords: hierarchical process control, knowledge discovery from databases, neural network, process control

Procedia PDF Downloads 461
16500 Analytical Method for Seismic Analysis of Shaft-Tunnel Junction under Longitudinal Excitations

Authors: Jinghua Zhang

Abstract:

The shaft-tunnel junction is a typical case of structural nonuniformity in underground structures. The shaft and the tunnel possess greatly different structural features; even under uniform excitations, they tend to behave discrepantly. Studies on shaft-tunnel junctions are mainly performed numerically, and shaking table tests are also conducted. Although much numerical and experimental data has been obtained, an analytical solution still has great merit in providing more insight into the shaft-tunnel problem. This paper aims to remedy the situation. Since the seismic responses of shaft-tunnel junctions depend strongly on the direction of excitation, they are studied in two scenarios: the longitudinal-excitation scenario and the transverse-excitation scenario. The former is addressed in this paper. Given that the responses of the tunnel are highly dependent on the shaft, the analytical solution is first developed for the vertical shaft; then, the seismic responses of the tunnel are discussed. Since vertical shafts bear a resemblance to rigid caissons, the solution proposed in this paper is derived by introducing terms for shaft-tunnel and soil-tunnel interactions into equations originally developed for rigid caissons. The validity of the solution is examined against a validation model computed by the finite element method. The mutual influence between the shaft and the tunnel is introduced, and the soil-structure interactions are discussed parametrically based on the proposed equations. The shaft-tunnel relative displacement and the soil-tunnel relative stiffness are found to be the most important parameters affecting the magnitudes and distributions of the internal forces of the tunnel. A hinged joint at the shaft-tunnel junction could significantly reduce the degree of stress concentration compared with a rigid joint.
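One common idealization behind analytical solutions of this kind (our assumption here, not necessarily the paper's formulation) treats the tunnel under longitudinal excitation as an elastic rod on a Winkler-type soil foundation, EA·u'' − k(u − u_g) = 0, so the shaft-imposed displacement at the joint decays towards the free-field motion as exp(−x/λ) with λ = √(EA/k). All parameter values below are assumptions:

```python
import numpy as np

# Rod-on-Winkler-foundation sketch of the tunnel's longitudinal response
# near the shaft joint (illustrative idealization; values assumed)
E = 30e9              # concrete Young's modulus [Pa] (assumed)
A = 8.0               # tunnel cross-sectional area [m^2] (assumed)
k = 5e7               # soil-tunnel longitudinal stiffness [Pa] (assumed)
u0, ug = 0.02, 0.005  # joint and free-field displacements [m] (assumed)

lam = np.sqrt(E * A / k)                 # characteristic decay length [m]
x = np.linspace(0.0, 200.0, 5)           # distance from the joint [m]
u = ug + (u0 - ug) * np.exp(-x / lam)    # displacement along the tunnel

print(f"lambda = {lam:.1f} m; u(0) = {u[0]:.4f} m, u(200) = {u[-1]:.4f} m")
```

The decay length λ captures exactly the "soil-tunnel relative stiffness" effect the abstract highlights: a stiffer soil spring (larger k) shortens λ and concentrates the shaft-imposed deformation, and hence the internal forces, near the junction.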

Keywords: analytical solution, longitudinal excitation, numerical validation, shaft-tunnel junction

Procedia PDF Downloads 139
16499 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Earlier researchers used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of irregular geometry and domain, like bridge pavements, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem with an analytical solution given in the literature is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the error in the ANSYS results is compared with the analytical results to indicate the influence of the method used and of the time step size.
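The Prony-series representation named above takes the form G(t) = G∞ + Σᵢ Gᵢ exp(−t/τᵢ) for the relaxation modulus. The pair values below are demonstration assumptions, not material data from the study:

```python
import numpy as np

# Prony-series relaxation modulus: G(t) = G_inf + sum_i G_i*exp(-t/tau_i)
# (pair values are illustrative assumptions)
G_inf = 1.0e9                      # long-term modulus [Pa]
G_i = np.array([2.0e9, 1.0e9])     # Prony moduli [Pa]
tau_i = np.array([0.1, 10.0])      # relaxation times [s]

def relaxation_modulus(t):
    return G_inf + np.sum(G_i * np.exp(-t / tau_i))

print(f"G(0)    = {relaxation_modulus(0.0):.3e} Pa")   # instantaneous
print(f"G(1e6)  = {relaxation_modulus(1e6):.3e} Pa")   # fully relaxed
```

The spread of the relaxation times τᵢ is what makes the time step size matter: a step much larger than the shortest τᵢ cannot resolve the corresponding exponential decay, which is the error source the abstract investigates.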

Keywords: generalized Maxwell model, finite element method, Prony series, time step size, viscoelasticity

Procedia PDF Downloads 352
16498 Degradation of Emerging Pharmaceuticals by Gamma Irradiation Process

Authors: W. Jahouach-Rabai, J. Aribi, Z. Azzouz-Berriche, R. Lahsni, F. Hosni

Abstract:

Gamma irradiation applied to remove pharmaceutical contaminants from wastewater is an effective advanced oxidation process (AOP), considered an alternative to conventional water treatment technologies. To this end, the degradation efficiency of several detected contaminants under gamma irradiation was evaluated. In fact, the radiolysis of organic pollutants in aqueous solutions produces powerful reactive species, essentially the hydroxyl radical (·OH), able to destroy recalcitrant pollutants in water. The pharmaceuticals considered in this study are aqueous solutions of paracetamol, ibuprofen, and diclofenac at concentrations of 0.1-1 mmol/L, which were treated with irradiation doses from 3 to 15 kGy. The catalytic oxidation of these compounds under gamma irradiation was investigated using hydrogen peroxide (H₂O₂) as a convenient oxidant. The main parameters influencing the irradiation process, namely the irradiation dose, the initial concentration, and the oxidant (H₂O₂) volume, were optimized with the aim of achieving high degradation efficiency for the considered pharmaceuticals. Significant changes attributable to these parameters appeared in the degradation efficiency, the chemical oxygen demand (COD) removal, and the concentration of radio-induced radicals, confirming their synergistic effect in approaching total mineralization. Pseudo-first-order reaction kinetics could be used to describe the degradation process of these compounds. A sophisticated analytical study was carried out to quantify the detected radio-induced radicals, using electron paramagnetic resonance (EPR) spectroscopy and high-performance liquid chromatography (HPLC). All results showed that this process is effective for the degradation of many pharmaceutical products in aqueous solutions, owing to the strong oxidative properties of the generated radicals, mainly the hydroxyl radical.
Furthermore, the addition of an optimal amount of H₂O₂ efficiently improved the oxidative degradation and contributed to the high performance of this process at very low doses (0.5 and 1 kGy).
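The pseudo-first-order dose kinetics mentioned above amount to C(D) = C₀ exp(−kD), where D is the absorbed dose in kGy; the dose constant k is commonly extracted from the slope of ln C versus D. The data points below are synthetic, not the authors' measurements:

```python
import numpy as np

# Pseudo-first-order dose kinetics: C(D) = C0 * exp(-k * D)
# (synthetic data with an assumed dose constant)
dose = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])   # absorbed dose [kGy]
k_true, C0 = 0.30, 1.0                               # assumed values
conc = C0 * np.exp(-k_true * dose)                   # residual concentration

# linearize: ln C = ln C0 - k*D, then least-squares for the slope
slope, intercept = np.polyfit(dose, np.log(conc), 1)
k_fit = -slope
print(f"fitted dose constant k = {k_fit:.3f} per kGy")
```

With real data the fitted k, one value per compound and per H₂O₂ level, is the natural way to compare degradation efficiencies across the experimental conditions the abstract varies.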

Keywords: AOP, COD, hydroxyl radical, EPR, gamma irradiation, HPLC, pharmaceuticals

Procedia PDF Downloads 157
16497 Prioritizing Quality Dimensions in ‘Servitised’ Business through AHP

Authors: Mohita Gangwar Sharma

Abstract:

Different factors are compelling manufacturers to move towards servitization, i.e., the phenomenon in which firms go beyond the product and support customers in operating the equipment. The challenges faced in this transition by manufacturing firms, from being product providers to being product-service providers, are multipronged. Product-Service Systems (PSS) lie between the pure-product and pure-service ends of the continuum. Through this study, we wish to understand the dimensions of ‘PSS quality’. We draw upon the quality literature for both products and services and, through an expert survey in a specific transportation sector using the analytic hierarchy process (AHP), derive a conceptual model that can be used as a comprehensive measurement tool for PSS offerings.
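The AHP step can be sketched as follows: expert pairwise comparisons of the quality dimensions form a reciprocal matrix, priority weights come from its principal eigenvector, and a consistency ratio (CR < 0.1) checks the judgments. The matrix entries below are illustrative, not the paper's expert data:

```python
import numpy as np

# AHP priority weights and consistency check for three hypothetical
# PSS-quality dimensions; A[i, j] = importance of dimension i over j
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i_max = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i_max].real)
w /= w.sum()                       # normalized priority weights

n = A.shape[0]
lam_max = eigvals[i_max].real
CI = (lam_max - n) / (n - 1)       # consistency index
RI = 0.58                          # Saaty's random index for n = 3
CR = CI / RI                       # consistency ratio (accept if < 0.1)

print(f"weights = {np.round(w, 3)}, CR = {CR:.3f}")
```

In a servitization setting, the weights derived this way are what turns the expert survey into a ranked, quantitative PSS-quality measurement tool.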

Keywords: servitisation, quality, product-service system, AHP

Procedia PDF Downloads 290
16496 Total-Reflection X-Ray Spectroscopy as a Tool for Element Screening in Food Samples

Authors: Hagen Stosnach

Abstract:

The analytical demands on modern instruments for element analysis in food samples include the analysis of major, trace, and ultra-trace essential elements as well as potentially toxic trace elements. In this study, total-reflection X-ray fluorescence analysis (TXRF) is presented as an analytical technique that meets the requirements defined by the Association of Official Agricultural Chemists (AOAC) regarding the limit of quantification, repeatability, reproducibility, and recovery for most of the target elements. The advantages of TXRF are the small sample mass required, the broad linear range from µg/kg up to wt.-% values, no consumption of gases or cooling water, and flexible, easy sample preparation. Liquid samples such as alcoholic or non-alcoholic beverages can be analyzed without any preparation. For solid food samples, the most common pre-treatment methods are mineralization and direct deposition of the sample onto the reflector without or with minimal treatment, mainly as solid suspensions or after extraction. The main disadvantages are possible peak overlaps, which may lower the accuracy of quantitative analysis and limit element identification. The technique is illustrated by several application examples covering a broad range of liquid and solid food types.

Keywords: essential elements, toxic metals, XRF, spectroscopy

Procedia PDF Downloads 121