Search results for: Analytic Network Process (ANP)

12045 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to a large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near a customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space with an indeterminate boundary between them. In general, specific company policies determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
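A minimal sketch of the k-means step described above is given below; it assumes a hypothetical set of customer coordinates and uses scikit-learn's KMeans, and it does not reproduce the paper's cost model or customer-space dimensions.

```python
# Minimal sketch of the k-means step for proposing warehouse locations.
# Assumptions: hypothetical customer longitude/latitude pairs and
# scikit-learn's KMeans as the clustering implementation.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer data: [longitude, latitude], one row per customer.
rng = np.random.default_rng(42)
customers = rng.uniform(low=[-95.0, 30.0], high=[-75.0, 45.0], size=(200, 2))

# Cluster customers into k groups; each centroid is a candidate warehouse site.
k = 3  # number of additional warehouses considered in the case study
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers)

for i, (lon, lat) in enumerate(kmeans.cluster_centers_):
    size = int(np.sum(kmeans.labels_ == i))
    print(f"Candidate warehouse {i}: lon={lon:.2f}, lat={lat:.2f}, customers served={size}")
```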

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 125
12044 Development and Validation of the University of Mindanao Needs Assessment Scale (UMNAS) for College Students

Authors: Ryan Dale B. Elnar

Abstract:

This study developed a multidimensional needs assessment scale for college students called the University of Mindanao Needs Assessment Scale (UMNAS). Although there are context-specific instruments measuring the needs of clinical and non-clinical samples, the literature reveals no standardized scales to measure the needs of college students; thus, a four-phase item development process was initiated to support its content validity. A pyramid model of college needs, comprising seven broad facets, namely spiritual-moral, intrapersonal, socio-personal, psycho-emotional, cognitive, physical and sexual, was deconstructed through an FGD sample to support the literature review. Using various construct validity procedures, the model was further tested using a total of 881 Filipino college students. The results of the study revealed evidence of the reliability and validity of the UMNAS. The reliability indices range from .929 to .933. Exploratory and confirmatory factor analyses revealed a one-factor, six-dimensional instrument to measure the needs of college students. Using multivariate regression analysis, year level and course were found to be predictors of students’ needs. Content analysis attested to the usefulness of the instrument in diagnosing students’ personal and academic issues and concerns in conjunction with other measures. The norming process included 1728 students from the different colleges of the University of Mindanao. Further validation is recommended to establish a national norm for the instrument.
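As a minimal illustration of how an internal-consistency reliability index in this range can be obtained from item scores, the sketch below computes Cronbach's alpha, a common coefficient of this kind; the item matrix is a hypothetical stand-in, not UMNAS data.

```python
# Minimal sketch: Cronbach's alpha for a set of Likert-type items.
# The response matrix is hypothetical; it stands in for item responses
# (rows = respondents, columns = items of one dimension).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(100, 10))   # 100 respondents, 10 items, 1-5 scale
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```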

Keywords: needs assessment scale, validity, factor analysis, college students

Procedia PDF Downloads 433
12043 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olive is a valuable product, especially in Mediterranean countries. It is usually consumed after some fermentation process. Defects that occur naturally or as a result of an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect the final product quality and reduce market prices considerably. Therefore, it is critical to sort table olives before processing, or even after processing, according to their quality and surface defects. However, manual sorting has many drawbacks such as high expenses, subjectivity, tediousness and inconsistency. Quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes and rotting. In this study, the aim was to classify fresh table olives using different classifiers and NIR spectroscopy readings and also to compare the classifiers. For this purpose, green (Ayvalik variety) olives were classified based on their surface feature properties, such as defect-free, with bruise defect and with fly defect, using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten–halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261), which covered the wavelengths between 780–2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using the OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification applications were performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms. For these classification applications, the Neural Network toolbox in Matlab and the ident and cluster modules in the OPUS software were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect and from the olive group including both bruised and fly-defected olives showed success rates ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and for discriminating good olives from fly-defected olives using the ident method ranged between 75–97.5% and 32.5–57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5–97.5% and 22.5–57.5%.
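To illustrate the classification step, the sketch below trains a small backpropagation-style neural network on synthetic stand-in spectra using scikit-learn; the paper itself used Matlab's Neural Network toolbox and the OPUS ident/cluster modules, so this is only an illustrative analogue.

```python
# Illustrative sketch of a neural-network classifier on NIR-like spectra.
# The spectra are synthetic stand-ins, not the olive measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_wavelengths = 60, 200
classes = ["good", "bruised", "fly_defect"]

# Synthetic reflectance spectra: each class gets a slightly shifted baseline.
X = np.vstack([rng.normal(loc=i * 0.05, scale=0.02, size=(n_per_class, n_wavelengths))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2%}")
```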

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 232
12042 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e. bacteria, yeasts or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (i.e. makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, time constraints model task due dates and resource availability time windows. To improve the schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of backup culture tasks) and changing at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
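A minimal constraint-based scheduling sketch is shown below, using Google OR-Tools CP-SAT rather than the authors' COP formulation; it only illustrates the core scheduling constraints (precedence, no-overlap on a shared resource, makespan minimization), and the (a, b) super-solution repair mechanism is not modeled.

```python
# Minimal constraint-programming sketch: schedule three batch tasks on one
# shared resource with precedence constraints and makespan minimization.
# Uses Google OR-Tools CP-SAT as an illustrative solver; the (a, b)
# super-solution concept from the paper is not modeled here.
from ortools.sat.python import cp_model

durations = {"culture": 5, "fermentation": 8, "purification": 3}
horizon = sum(durations.values())

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, dur in durations.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], dur, ends[name], f"iv_{name}")

# Precedence: fermentation follows culture, purification follows fermentation.
model.Add(starts["fermentation"] >= ends["culture"])
model.Add(starts["purification"] >= ends["fermentation"])

# All tasks share one resource (e.g., a single bioreactor line).
model.AddNoOverlap(list(intervals.values()))

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in durations:
        print(f"{name}: start={solver.Value(starts[name])}, end={solver.Value(ends[name])}")
    print(f"makespan = {solver.Value(makespan)}")
```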

Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries

Procedia PDF Downloads 599
12041 Telecom Infrastructure Outsourcing: An Innovative Approach

Authors: Irfan Zafar

Abstract:

Over the years, the telecom industry in the country has shown a lot of progress in terms of infrastructure development coupled with the availability of telecom services. This has, however, led to cut-throat competition among the various operators, leading to reduced tariffs for customers. Profit margins have seen a reduction, leading the operators to think of other avenues by adopting new models while keeping the quality of service intact. The outsourcing of the network and its resources is one such model, which has shown promising benefits, including lower costs, less risk, higher levels of customer support and engagement, predictable expenses, access to emerging technologies, benefiting from a highly skilled workforce, adaptability, and focus on the core business while reducing capital costs. A lot of research has been done on outsourcing in terms of the reasons for outsourcing and its benefits. However, this study is an attempt to analyze the effects of outsourcing on an organization's performance (telecommunication sector) considering the variables (1) cost reduction, (2) organizational performance, (3) flexibility, (4) employee performance, (5) access to specialized skills and technology and (6) outsourcing risks.

Keywords: outsourcing, ICT, telecommunication, IT, networking

Procedia PDF Downloads 383
12040 Upconversion Nanoparticles for Imaging and Controlled Photothermal Release of Anticancer Drug in Breast Cancer

Authors: Rishav Shrestha, Yong Zhang

Abstract:

The anti-Stokes upconversion process has been used extensively for bioimaging and is recently being used for photoactivated cancer therapy utilizing upconversion nanoparticles (UCNs). The UCNs have an excitation band at 980 nm; the 980 nm laser excitation used to produce UV/visible emissions also produces a heating effect. Light-to-heat conversion has been observed in nanoparticles (NPs) doped with neodymium (Nd) or ytterbium (Yb)/erbium (Er) ions. Although laser-induced heating in rare-earth-doped NPs has been proven to be a relatively efficient process, only a few attempts to use them as photothermal agents in biosystems have been made up to now. Gold nanoparticles and carbon nanotubes are the most researched and developed materials for photothermal applications. Both have large heating efficiency and outstanding biocompatibility. However, they show weak fluorescence, which makes them harder to track in vivo. In that regard, UCNs are attractive for imaging and spatiotemporally releasing drugs due to their excellent optical features, in addition to their light-to-heat conversion and excitation by NIR. In this work, we have utilized a simple method to coat Nd-doped UCNs with the thermoresponsive polymer PNIPAM, on which 4-hydroxytamoxifen (4-OH-T) is loaded. Such UCNs demonstrate a high loading efficiency and low leakage of 4-OH-T. Encouragingly, the release of 4-OH-T can be modulated by varying the power and duration of the NIR irradiation. Such UCNs were then used to demonstrate imaging and controlled photothermal release of 4-OH-T in MCF-7 breast cancer cells.

Keywords: cancer therapy, controlled release, photothermal release, upconversion nanoparticles

Procedia PDF Downloads 413
12039 Preparation and Characterization of Phosphate-Nickel-Titanium Composite Coating Obtained by Sol Gel Process for Corrosion Protection

Authors: Khalidou Ba, Abdelkrim Chahine, Mohamed Ebn Touhami

Abstract:

A strong industrial interest is focused on the development of coatings for anticorrosion protection. In this context, phosphate composite materials are expanding strongly due to their chemical characteristics and their interesting physicochemical properties. Sol-gel coatings offer high homogeneity and purity, which may lead to coatings presenting good adhesion to the metal surface. The goal of this work is to develop efficient coatings for corrosion protection of steel in order to extend its life. In this context, a sol-gel process that allows thin-film coatings with high resistance to corrosion to be obtained on carbon steel has been developed. Several experimental parameters, such as the hydrolysis time, the temperature, the coating technique, the molar ratio between precursors, the number of layers and the drying mode, have been optimized in order to obtain a coating showing the best anti-corrosion properties. The effect of these parameters on the microstructure and anticorrosion performance of the sol-gel coating films has been investigated using different characterization methods (FTIR, XRD, Raman, XPS, SEM, profilometry, salt spray test, etc.). An optimized coating presenting good adhesion and very stable anticorrosion properties has been obtained in the salt spray test, which consists of a corrosive attack accelerated by an artificial salt spray of a neutral-pH 5% NaCl solution, under precise conditions of temperature (35 °C) and pressure.

Keywords: sol gel, coating, corrosion, XPS

Procedia PDF Downloads 118
12038 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process

Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava

Abstract:

Over the last decade, graphene has received great attention for the development of multifunctional metal matrix composites, which are in high demand in industry for developing energy-efficient systems. This study covers two advanced aspects of current scientific endeavor, i.e., graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling with 0.1 wt.% and 0.2 wt.% graphene. The mixed powder was directly subjected to the powder bed fusion process, i.e., an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined in this study. The volumetric distribution of pores was observed under X-ray computed tomography (CT). On the basis of relative density measurement by X-ray CT, it was observed that porosity increases after graphene addition, and the pore morphology is also transformed from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the case of the bare sample, unlike the composite samples. Smaller columnar grains were formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties are affected by the induced porosity irrespective of graphene reinforcement. The optimized tensile properties were achieved at 0.1 wt.% graphene. The increments in yield strength and ultimate tensile strength were 22% and 10%, respectively, for the 0.1 wt.% graphene reinforced sample in comparison to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on the solid region in order to avoid the collapse of the pores. The hardness of the composite increased progressively with graphene content. Around 30% increment in hardness was achieved after the addition of 0.2 wt.% graphene. Therefore, it can be concluded that powder bed fusion can be adopted as a suitable technique to develop graphene reinforced AlSi10Mg composites. However, some further process modification is required to avoid the porosity induced after the addition of graphene, which can be addressed in future work.

Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties

Procedia PDF Downloads 117
12037 Fault Tree Analysis and Bayesian Network for Fire and Explosion of Crude Oil Tanks: Case Study

Authors: B. Zerouali, M. Kara, B. Hamaidi, H. Mahdjoub, S. Rouabhia

Abstract:

In this paper, a safety analysis is presented for crude oil tanks to prevent undesirable events that may cause catastrophic accidents. The estimation of the probability of damage to industrial systems is carried out through a series of steps, in accordance with a specific methodology. In this context, this work involves developing an assessment and risk analysis tool at the level of the crude oil tank system, based primarily on the identification of various potential causes of crude oil tank fire and explosion by the use of Fault Tree Analysis (FTA), followed by improved risk modelling with Bayesian Networks (BNs). The Bayesian approach to the evaluation of failure and the quantification of risks is a dynamic analysis approach; for this reason, it has been selected as the analytical tool in this study. The research concludes that Bayesian networks provide a distinct and effective method for safety analysis because of the flexibility of their structure, which makes them suitable for a wide variety of accident scenarios.
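A minimal sketch of fault-tree gate arithmetic is given below for a hypothetical tank-fire tree; the basic events, their probabilities, and the assumed independence are illustrative placeholders, not values from the case study.

```python
# Sketch of fault-tree gate arithmetic for a hypothetical crude-oil-tank fire
# tree. Basic-event probabilities are illustrative placeholders, and basic
# events are assumed independent.
from math import prod

def or_gate(probs):
    """P(at least one event occurs), assuming independence."""
    return 1.0 - prod(1.0 - p for p in probs)

def and_gate(probs):
    """P(all events occur), assuming independence."""
    return prod(probs)

# Hypothetical basic events.
p_leak         = 0.02   # flammable vapour release
p_static_spark = 0.01   # static electricity discharge
p_hot_work     = 0.005  # hot work near the tank
p_lightning    = 0.001  # lightning strike

p_ignition = or_gate([p_static_spark, p_hot_work, p_lightning])
p_fire     = and_gate([p_leak, p_ignition])   # top event: vapour present AND ignited

print(f"P(ignition source) = {p_ignition:.4f}")
print(f"P(tank fire)       = {p_fire:.6f}")
```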

Keywords: bayesian networks, crude oil tank, fault tree, prediction, safety

Procedia PDF Downloads 643
12036 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. In this equation, when the variable φ, which represents a flow property, is taken as a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are used more frequently than analytic ones. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved at the same nodal points, some problems arise. To overcome this problem, a staggered grid system is the preferred solution method. Various algorithms have been developed for the computerized solution of the staggered grid system; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian flow, neglecting body (gravitational) forces, for an incompressible, laminar fluid in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The results obtained with the central difference and hybrid discretization methods were compared. Also, the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, it can be said that, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the required number of grid nodes was estimated. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence analysis (GCI analysis) was performed for coarse, medium and fine grids over the solution domain. It was observed that when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
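To illustrate the inner Gauss-Seidel iteration used in such solvers, the sketch below solves a 2-D Laplace equation discretized with central differences; it is only a stand-in for the linear solves inside SIMPLE/SIMPLER and does not reproduce the staggered-grid pressure-velocity coupling.

```python
# Minimal sketch of Gauss-Seidel iteration on a 2-D Laplace equation
# (a stand-in for the inner linear solves inside SIMPLE/SIMPLER).
# Central-difference discretization on a uniform grid; Dirichlet boundaries.
import numpy as np

n = 21                       # grid points per direction
phi = np.zeros((n, n))
phi[0, :] = 1.0              # top boundary held at 1, other boundaries at 0

for sweep in range(5000):
    max_change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1] + phi[i, j - 1])
            max_change = max(max_change, abs(new - phi[i, j]))
            phi[i, j] = new   # Gauss-Seidel: use updated values immediately
    if max_change < 1e-6:     # convergence check
        print(f"Converged after {sweep + 1} sweeps")
        break

print(f"phi at domain centre = {phi[n // 2, n // 2]:.4f}")
```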

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 374
12035 Interpreting Chopin’s Music Today: Mythologization of Art: Kitsch

Authors: Ilona Bala

Abstract:

The subject of this abstract is related to the notion of 'popular music', a notion that should be treated with extreme care, particularly when applied to Frederic Chopin, one of the greatest composers of Romanticism. By ‘popular music’, we mean a category of everyday music, set against the more intellectual kind, referred to as ‘classical’. We only need to look back to the culture of the nineteenth century to realize that this ‘popular music’ refers to the ‘music of the low’. It can be studied from a sociological viewpoint, or as sociological aesthetics. However, we cannot ignore the fact that, very quickly, this music spread to the wealthiest strata of nineteenth-century European society, while the lowest classes likewise often listened to intellectual classical music, which is so pleasant to hear. Further, we can observe that a sort of ‘sacralisation of kitsch’ occurs at the intersection between classical and popular music. This process is the topic of this contribution. We will start by investigating the notion of kitsch through the study of Chopin’s popular compositions. However, before considering the popularisation of this music in today’s culture, we will have to focus on the use of the word kitsch in Chopin’s times, through his own musical aesthetics. Finally, the objective here will be to negate the theory that art is simply the intellectual definition of aesthetics. A kitsch can, obviously, only work on the emotivity of the masses, as it represents one of the features of culture-language (the words with which the masses identify). All art is transformed, becoming something outdated or even outmoded. Here, we are truly within a process of mythologization of art, through the study of the aesthetic reception of the musical work.

Keywords: F. Chopin, kitsch, musical work, mythologization of art, popular music, romantic music

Procedia PDF Downloads 398
12034 Thermophilic Anaerobic Granular Membrane Distillation Bioreactor for Wastewater Reuse

Authors: Duong Cong Chinh, Shiao-Shing Chen, Le Quang Huy

Abstract:

Membrane distillation (MD) is claimed to be a cost-effective separation process when waste heat, alternative energy sources, or wastewater are used. To the best of our knowledge, this is the first study in which a thermophilic anaerobic granular bioreactor integrated with membrane distillation (ThAnMDB) was investigated. In this study, a laboratory-scale anaerobic bioreactor (1.2 liter) was set up. The bioreactor was maintained at a temperature of 55 ± 2°C, a hydraulic retention time of 0.5 days, and organic loading rates of 7 and 10 kg chemical oxygen demand (COD)/m³/day. Side-stream direct contact membrane distillation with a polytetrafluoroethylene membrane area of 150 cm² was used. The temperature of the distillate was kept at 25°C. Results show that the distillate flux was 19.6 LMH (liters per square meter per hour) on the first day and gradually decreased to 6.9 LMH after 10 days, and the membrane did not wet. Notably, by directly using the heat from the thermophilic anaerobic process for the MD separation, all the water distilled from the wastewater was reused as fresh water (electrical conductivity < 120 µs/cm). The ThAnMDB system showed high pollutant removal performance: chemical oxygen demand (COD) from 99.6 to 99.9%, NH₄⁺ from 60 to 95%, and complete PO₄³⁻ removal. In addition, the methane yield, from 0.28 to 0.34 L CH₄ per gram of COD removed (80–97% of the theoretical value), demonstrated that the ThAnMDB system was quite stable. The achievement of the ThAnMDB is not only in removing pollutants and reusing wastewater but also in eliminating the need to add alkali to the anaerobic bioreactor system.

Keywords: high rate anaerobic digestion, membrane distillation, thermophilic anaerobic, wastewater reuse

Procedia PDF Downloads 109
12033 Determining the Effects of Wind-Aided Midge Movement on the Probability of Coexistence of Multiple Bluetongue Virus Serotypes in Patchy Environments

Authors: Francis Mugabi, Kevin Duffy, Joseph J. Y. T Mugisha, Obiora Collins

Abstract:

Bluetongue virus (BTV) has 27 serotypes, with some of them coexisting in patchy (different) environments, which makes its control difficult. Wind-aided midge movement is a known mechanism in the spread of BTV. However, its effects on the probability of coexistence of multiple BTV serotypes are not clear. Deterministic and stochastic models for r BTV serotypes in n discrete patches connected by midge and/or cattle movement are formulated and analyzed. For the deterministic model without midge and cattle movement, using the comparison principle, it is shown that if the patch reproduction numbers R^j_i0 < 1 for i = 1, 2, ..., n and j = 1, 2, ..., r, all serotypes go extinct; if R^j_i0 > 1, competitive exclusion takes place. Using numerical simulations, it is shown that when the n patches are connected by midge movement, coexistence takes place. To account for demographic and movement variability, the deterministic model is transformed into a continuous-time Markov chain stochastic model. Utilizing a multitype branching process, it is shown that midge movement can have a large effect on the probability of coexistence of multiple BTV serotypes. The probability of coexistence can be brought to zero when control interventions that directly kill adult midges are applied. These results indicate the significance of wind-aided midge movement and vector control interventions for the coexistence and control of multiple BTV serotypes in patchy environments.
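As a simplified illustration of the branching-process idea, the sketch below computes the extinction probability of a single-type branching process with Poisson offspring by fixed-point iteration of the probability generating function; the offspring means are placeholders, and the paper's multitype formulation over patches and serotypes is not reproduced.

```python
# Sketch: extinction probability of a branching process with Poisson offspring,
# obtained by fixed-point iteration of q = G(q). Offspring means are
# illustrative; the paper uses a multitype process over patches and serotypes.
import math

def extinction_probability(mean_offspring: float, tol: float = 1e-10) -> float:
    """Smallest fixed point of G(q) = exp(mean_offspring * (q - 1))."""
    q = 0.0
    while True:
        q_new = math.exp(mean_offspring * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

for mean in (0.8, 1.5, 3.0):
    q = extinction_probability(mean)
    print(f"mean offspring = {mean:.1f}: P(extinction) = {q:.4f}, "
          f"P(persistence) = {1 - q:.4f}")
```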

Keywords: bluetongue virus, coexistence, multiple serotypes, midge movement, branching process

Procedia PDF Downloads 135
12032 Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation

Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel

Abstract:

Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, mixing between these three components normally occurs. Thus, the three-phase separation process must be one of the first steps performed after crude oil extraction, and the water-oil separation is the most complex and important step, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied for oil-water separation, the most commonly used being flotation, hydrocyclones, and three-phase separator vessels. In view of the above, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables: temperature, working pressure and separator type, for two types of oil (light and heavy), by performing two 2³ factorial designs in order to find the best operating condition. In this case, the purpose is to obtain the greatest oil flow rate in the product stream (m³/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using the Aspen Hysys® 2006 simulation software in steady-state mode, and the evaluation of the factorial designs was performed using the Statistica® software. From the general analysis of the four normal probability plots of effects obtained, it was observed that two- and three-factor interaction effects did not show statistical significance at 95% confidence, since all the values were very close to zero. Similarly, the main effect “separator type” did not show a significant statistical influence in any situation. Since, in this case, it was assumed that the volumetric flows of water, oil and gas were equal in the inlet stream, the separator type effect may, in fact, not be significant for the proposed system. Nevertheless, the main effect “temperature” was significant for both responses (oil flow rate and mass fraction of water in the oil stream), considering both light and heavy oil, so that the best operating condition occurs with the temperature at its lowest level (30 °C), since at higher temperatures the lighter oil components pass into the vapor phase and leave in the gas stream. Furthermore, the higher the temperature, the more water vapor is formed, which ends up in the lighter (oil) stream, making the separation process more difficult. Regarding the “working pressure”, this effect was significant only for the oil flow rate, so that the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure in this case indicated a lower pressure drop inside the vessel, generating a lower level of turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, over the studied range, occurs when the temperature is at its lowest level and the working pressure is at its highest level.
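To illustrate how main effects are extracted from a 2³ design, the sketch below applies the standard contrast method to placeholder responses; the factor coding follows the usual -1/+1 convention, and the response values are not the Aspen HYSYS results.

```python
# Sketch: main effects from a 2^3 factorial design using the contrast method.
# Factors: temperature (T), pressure (P), separator type (S), coded -1/+1.
# Response values are illustrative placeholders, not the simulation results.
import numpy as np
from itertools import product

# Full 2^3 design matrix (columns: T, P, S), one row per run.
design = np.array(list(product([-1, 1], repeat=3)))
# Hypothetical oil flow rate (m^3/h) for each of the 8 runs.
response = np.array([40.1, 39.5, 41.0, 40.2, 38.0, 37.1, 39.2, 38.5])

for name, column in zip(["T", "P", "S"], design.T):
    # Main effect = mean response at the +1 level minus mean at the -1 level.
    effect = response[column == 1].mean() - response[column == -1].mean()
    print(f"Main effect of {name}: {effect:+.2f} m^3/h")
```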

Keywords: factorial experimental design, oil production, simulation, three-phase separator

Procedia PDF Downloads 263
12031 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts. Thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, operations in computer-aided process planning (CAPP) may have to be changed to make CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by finding objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, this kind of decision depends on the experience-based knowledge of humans (e.g. process planners) and results in subjective decisions – leading to variability of workpiece quality and potential failure in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes. Especially in applications like the aerospace industry, standardization and certification of processes are an important aspect. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, will be used for modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, it will be checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, for different product lifecycle phases, appropriate logics and decision criteria have to be considered. For example, tolerances for geometric deviations are different in type and size for new-part production compared to repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, containing a new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. 
In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
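A small sketch of the function-block idea is given below: each inspection step is modeled as a self-contained block chained into a workflow. The block names, the registration step, and the tolerance value are hypothetical placeholders; the actual function-block framework used by the authors is not reproduced.

```python
# Sketch of the function-block idea: each inspection step is a self-contained
# block with a single `run` callable, and a workflow executes them in order.
# Step names and the tolerance value are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Data = Dict[str, object]

@dataclass
class FunctionBlock:
    name: str
    run: Callable[[Data], Data]   # event-driven step: consumes and enriches the data

@dataclass
class InspectionWorkflow:
    blocks: List[FunctionBlock] = field(default_factory=list)

    def execute(self, data: Data) -> Data:
        for block in self.blocks:
            data = block.run(data)
            print(f"[{block.name}] -> {data}")
        return data

def register_measurement(data: Data) -> Data:
    # Placeholder: align measured points to the nominal CAD frame.
    data["registered"] = True
    return data

def check_wall_thickness(data: Data) -> Data:
    # Placeholder criterion: minimum wall thickness within tolerance.
    data["adaptive_machining_ok"] = data["min_wall_mm"] >= 1.5
    return data

workflow = InspectionWorkflow([
    FunctionBlock("registration", register_measurement),
    FunctionBlock("geometry_check", check_wall_thickness),
])
result = workflow.execute({"part_id": "blade_001", "min_wall_mm": 1.8})
print("Decision:", "adaptable" if result["adaptive_machining_ok"] else "reject")
```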

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 287
12030 Microstructural and Optical Characterization of High-quality ZnO Nano-rods Deposited by Simple Electrodeposition Process

Authors: Somnath Mahato, Minarul Islam Sarkar, Luis Guillermo Gerling, Joaquim Puigdollers, Asit Kumar Kar

Abstract:

Nanostructured zinc oxide (ZnO) thin films have been successfully deposited on indium tin oxide (ITO) coated glass substrates by a simple two-electrode electrodeposition process at constant potential. The preparative parameters, such as deposition time, deposition potential, concentration of solution, bath temperature and pH value of the electrolyte, have been optimized for the deposition of uniform ZnO thin films. X-ray diffraction studies reveal that the prepared ZnO thin films have a highly preferential c-axis orientation with a compact hexagonal (wurtzite) structure. Surface morphological studies show that the ZnO films are smooth, continuous, uniform, without cracks or holes, and compact, with a nanorod-like structure on the top of the surface. Optical studies reveal that the films exhibit higher absorbance in the violet region of the optical spectrum; the absorbance gradually decreases in the visible range with increasing wavelength and becomes lowest at the beginning of the NIR region. The photoluminescence spectra show that the observed peaks are attributed to various structural defects in the nanostructured ZnO crystal. The microstructural and optical properties suggest that the electrodeposited ZnO thin films are suitable for application in photosensitive devices such as photovoltaic solar cells, photoelectrochemical cells and light-emitting diodes.

Keywords: electrodeposition, microstructure, optical properties, ZnO thin films

Procedia PDF Downloads 302
12029 Atherosclerotic Plaques and Immune Microenvironment: From Lipid-Lowering to Anti-inflammatory and Immunomodulatory Drug Approaches in Cardiovascular Diseases

Authors: Husham Bayazed

Abstract:

A growing number of studies indicate that atherosclerotic coronary artery disease (CAD) has a complex pathogenesis that extends beyond cholesterol intimal infiltration. The atherosclerosis process may involve an immune micro-environmental condition driven by local activation of the adaptive and innate immunity arms, resulting in the formation of atherosclerotic plaques. Therefore, despite the wide usage of lipid-lowering agents, these devastating coronary diseases are not averted at either the primary or secondary prevention level. Many trials have recently shown an interest in immune targeting of the inflammatory process of atherosclerotic plaques, with promising improvements in atherosclerotic cardiovascular disease outcomes. This recently includes the immune-modulatory drug “Canakinumab”, an anti-interleukin-1 beta monoclonal antibody, in addition to “Colchicine”, which is established as a broad-effect drug in the management of other inflammatory conditions. Recent trials and studies highlight the importance of inflammation and immune reactions in the pathogenesis of atherosclerosis and plaque formation. This provides an insight for discussing and extending therapies from the old lipid-lowering drugs (statins) to anti-inflammatory drugs (colchicine) and new targeted immune-modulatory therapies like inhibitors of IL-1 beta (canakinumab), currently under investigation.

Keywords: atherosclerotic plaques, immune microenvironment, lipid-lowering agents, immunomodulatory drugs

Procedia PDF Downloads 52
12028 Impact of Zeolite NaY Synthesized from Kaolin on the Properties of Pyrolytic Oil Derived from Used Tire

Authors: Julius Ilawe Osayi, Peter Osifo

Abstract:

The disposal of solid waste such as used tires is a global challenge, as is the energy crisis driven by rising energy demand amidst price uncertainty and depleting fossil fuel reserves. Therefore, the effectiveness of pyrolysis as a disposal method that can transform used tires into liquid fuel and other end-products has made the process attractive to researchers. Although used tires have been converted to liquid fuel using pyrolysis, there is a need to improve the liquid fuel properties. Hence, this paper reports the investigation of zeolite NaY synthesized from kaolin, a locally abundant soil material in the Benin metropolis, as a suitable catalyst, and its effect on the properties of pyrolytic oil produced from used tires. The pyrolysis process was conducted for a range of 1 to 10 wt.% catalyst concentration relative to used tire, at a temperature of 600 °C, a heating rate of 15 °C/min and a particle size of 6 mm. No significant increase in pyrolytic oil yield was observed compared to the previously investigated non-catalytic pyrolysis of used tires. However, the Fourier transform infrared (FTIR), nuclear magnetic resonance (NMR) and gas chromatography-mass spectrometry (GC-MS) characterization results revealed that the pyrolytic oil possesses improved physicochemical and fuel properties alongside valuable industrial chemical species. This confirms the possibility of transforming kaolin into a catalyst suitable for improving the fuel properties of the liquid fraction obtainable from thermal cracking of hydrocarbon materials.

Keywords: catalytic pyrolysis, fossil fuel, kaolin, pyrolytic oil, used tyres, Zeolite NaY

Procedia PDF Downloads 161
12027 Information Communication Technology Based Road Traffic Accidents’ Identification, and Related Smart Solution Utilizing Big Data

Authors: Ghulam Haider Haidaree, Nsenda Lukumwena

Abstract:

Today the world of research enjoys abundant data, available in virtually any field: technology, science, business, politics, etc. This is commonly referred to as big data. It offers a great deal of precision and accuracy, supportive of an in-depth look at any decision-making process. When and if well used, big data affords its users the opportunity to produce substantially well-supported results. This paper leans extensively on big data to investigate possible smart solutions to urban mobility and related issues, namely road traffic accidents and their casualties and fatalities, based on multiple factors including age, gender, and the locations where accidents occur. Multiple technologies were used in combination to produce an Information Communication Technology (ICT) based solution with embedded technology. Those technologies principally include Geographic Information Systems (GIS), the Orange data mining software, and Bayesian statistics, to name a few. The study uses the Leeds 2016 accident data to illustrate the thinking process and extracts from it a model that can be tested, evaluated, and replicated. The authors optimistically believe that the proposed model will significantly and smartly help to flatten the curve of road traffic accidents in fast-growing population densities, which considerably increase motor-based mobility.
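The sketch below illustrates the kind of aggregation such a study performs, grouping casualties by age band and severity with pandas; the column names and rows are hypothetical stand-ins, since the exact schema of the Leeds 2016 dataset is not reproduced here.

```python
# Sketch of the kind of aggregation used on accident data: counting
# casualties by age band and severity. Column names and rows are
# hypothetical stand-ins for the Leeds 2016 dataset.
import pandas as pd

accidents = pd.DataFrame({
    "age_band": ["18-25", "18-25", "26-35", "36-45", "26-35", "18-25"],
    "gender":   ["M", "F", "M", "F", "M", "M"],
    "severity": ["slight", "serious", "slight", "fatal", "slight", "serious"],
})

# Count casualties per age band and severity (a simple risk profile).
profile = (accidents
           .groupby(["age_band", "severity"])
           .size()
           .unstack(fill_value=0))
print(profile)

# Empirical probability of a serious-or-fatal outcome per age band.
serious_or_fatal = accidents["severity"].isin(["serious", "fatal"])
print(accidents.assign(sf=serious_or_fatal).groupby("age_band")["sf"].mean())
```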

Keywords: accident factors, geographic information system, information communication technology, mobility

Procedia PDF Downloads 198
12026 Relationship between Personality Traits and Postural Stability among Czech Military Combat Troops

Authors: K. Rusnakova, D. Gerych, M. Stehlik

Abstract:

Postural stability is a complex process involving the actions of biomechanical, motor, sensory and central nervous system components. The numerous joint systems and muscles involved, and the complexity of sporting movements and situations, require perfect coordination of the body's movement patterns. To adapt to a constantly changing situation in such a dynamic environment as physical performance, optimal input of information from visual, vestibular and somatosensory sensors is needed. Combat soldiers are required to perform physically and mentally demanding tasks in adverse conditions, and poor postural stability has been identified as a risk factor for lower extremity musculoskeletal injury. The aim of this study is to investigate whether certain personality traits are related to static postural stability performance among soldiers of combat troops. The NEO personality inventory (NEO-PI-R) was used to identify personality traits, and the Nintendo Wii Balance Board was used to assess the static postural stability of soldiers. Postural stability performance was assessed by changes in the center of pressure (CoP) and center of gravity (CoG). A posturographic test was performed for 60 s with eyes open during quiet upright standing. The results showed that facets of the neuroticism and conscientiousness personality traits were significantly correlated with the measured parameters of CoP and CoG. This study can help in better understanding the relationship between personality traits and static postural stability. The results can be used to optimize the training process at the individual level.

Keywords: neuroticism, conscientiousness, postural stability, combat troops

Procedia PDF Downloads 124
12025 Projection of Solar Radiation for the Extreme South of Brazil

Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Rafael Haag, Elton Rossini

Abstract:

This work aims to validate and make projections of solar energy for Brazil for the period from 2025 to 2100, based on the HadGEM2-AO (Hadley Centre Global Environment Model 2 - Atmosphere-Ocean) general circulation model of the UK Met Office Hadley Centre, belonging to Phase 5 of the Coupled Model Intercomparison Project (CMIP5). The simulation results of the model are compared with monthly data from 2006 to 2013, measured by a network of meteorological stations of the National Institute of Meteorology (INMET). The performance of HadGEM2-AO is evaluated by the efficiency coefficient (CEF) and bias. The results are presented in tables and maps. In the most pessimistic scenario, RCP 8.5, HadGEM2-AO had very good accuracy, presenting efficiency coefficients between 0.94 and 0.98, with 1 being a perfect fit. The projected solar radiation, which indicates a horizontal trend, is a climatic alternative for some regions of the Brazilian scenario, especially in spring.
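A short sketch of the two validation metrics named above is given below; the efficiency coefficient is assumed here to follow the Nash-Sutcliffe form, which may differ in detail from the authors' exact formula, and the monthly series are placeholders.

```python
# Sketch of the validation metrics named above: an efficiency coefficient of
# the Nash-Sutcliffe form (an assumption; the authors' exact CEF formula is
# not given) and bias, computed on placeholder monthly radiation series.
import numpy as np

def efficiency_coefficient(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Nash-Sutcliffe-type efficiency: 1 is a perfect fit."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def bias(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Mean difference between simulated and observed values."""
    return float(np.mean(simulated - observed))

# Hypothetical monthly mean solar radiation (MJ/m^2/day).
obs = np.array([22.1, 20.3, 18.0, 14.5, 11.8, 10.2, 11.0, 13.4, 16.2, 19.0, 21.5, 23.0])
sim = obs + np.random.default_rng(0).normal(0.0, 0.5, size=obs.size)

print(f"CEF  = {efficiency_coefficient(obs, sim):.3f}")
print(f"Bias = {bias(obs, sim):+.3f} MJ/m^2/day")
```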

Keywords: climate change, projections, solar radiation, climate change scenarios

Procedia PDF Downloads 138
12024 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys

Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit

Abstract:

Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advancements in portable and transportation applications have extended the research towards new horizons, and today the development is hampered, e.g., by the capacity of the electrodes employed. Silicon and germanium are among the considered modern anode materials, as they can undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times more capacity than graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄, respectively. On the other hand, germanium presents a capacity of 1384 mAh/g for Li₁₅Ge₄, and a better electronic conductivity and Li-ion diffusivity compared to Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes and the capacity retention is improved compared to Si-rich electrodes, but the exact performance of this type of electrode will depend on factors like specific capacity, C-rates, cost, etc. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, with most work performed on complex nanostructures resulting from synthesis efforts implying high cost. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy as a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A large set of in situ and operando techniques was employed in combination to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance spectroscopy (NMR). The results present a complete view of the structural modifications induced by the lithiation/delithiation processes. The amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induces the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of the discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates. At the end of the process, an amorphous phase assigned to a-SiₓGey was recovered. Thereby, it was demonstrated that Si and Ge are collectively active along the cycling process, upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with a step of overlithiation) and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.

Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-Ray diffraction

Procedia PDF Downloads 270
12023 Multiband Microstrip Slotted Patch Antenna for mmWave 5G Femtocell Applications

Authors: Bhargavi G., Arathi R. Shankar

Abstract:

Bringing the transmitter and receiver closer to each other creates the twin benefits of higher-quality links and greater spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells, also known as home base stations, which are data access points installed by home users to get better indoor voice and data coverage. Femtocells have the potential to offer high-quality network access to indoor users at low cost, while simultaneously reducing the network load. Present femtocells that operate in 4G can also be extended to the 5G sub-6 GHz band. Designing femtocells in the mmWave band of 5G can have many advantages in terms of bandwidth availability and coverage. Multiband microstrip patch antennas can be considered low-cost and prominent antennas for designing femtocells, because a single antenna supports multiple frequency bands.

Keywords: 5G, mmWave, antennas, wireless communications, femtocell

Procedia PDF Downloads 62
12022 A Study on Game Theory Approaches for Wireless Sensor Networks

Authors: M. Shoukath Ali, Rajendra Prasad Singh

Abstract:

Game theory approaches and their application in improving the performance of Wireless Sensor Networks (WSNs) are discussed in this paper. The mathematical modeling and analysis of WSNs may have a low success rate due to the complexity of topology, modeling, link quality, etc. However, game theory is a field that can be used efficiently to analyze WSNs. Game theory is a branch of applied mathematics that describes and analyzes interactive decision situations. It has the ability to model independent, individual decision makers whose actions affect the surrounding decision makers. The outcome of complex interactions among rational entities can be predicted by a set of analytical tools. However, rationality demands strict adherence to a strategy based on measured or perceived results. Researchers are adopting game theory approaches to model and analyze leading wireless communication networking issues, which include QoS, power control, resource sharing, etc.
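As a minimal illustration of the analytical tools mentioned above, the sketch below enumerates pure-strategy Nash equilibria of a toy two-node power-control game; the payoff matrix is invented for illustration and is not taken from any particular WSN study.

```python
# Sketch: enumerate pure-strategy Nash equilibria of a toy two-node
# power-control game. Payoffs (throughput minus energy cost) are illustrative.
from itertools import product

strategies = ["low_power", "high_power"]
# payoff[(s1, s2)] = (payoff to node 1, payoff to node 2)
payoff = {
    ("low_power",  "low_power"):  (3, 3),
    ("low_power",  "high_power"): (1, 4),
    ("high_power", "low_power"):  (4, 1),
    ("high_power", "high_power"): (2, 2),
}

def is_nash(s1: str, s2: str) -> bool:
    # Neither node can gain by unilaterally switching its strategy.
    best1 = all(payoff[(s1, s2)][0] >= payoff[(alt, s2)][0] for alt in strategies)
    best2 = all(payoff[(s1, s2)][1] >= payoff[(s1, alt)][1] for alt in strategies)
    return best1 and best2

for s1, s2 in product(strategies, repeat=2):
    if is_nash(s1, s2):
        print(f"Pure Nash equilibrium: node1={s1}, node2={s2}, payoffs={payoff[(s1, s2)]}")
```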

Keywords: wireless sensor network, game theory, cooperative game theory, non-cooperative game theory

Procedia PDF Downloads 412
12021 A Comparative Evaluation of Broiler Strains Chickens, Arbor Acres, and Ross in Experimental Coccidiosis

Authors: S. S. R. Shojaei, S. Kord Afshari

Abstract:

The study was initiated to compare the production and shedding of Eimeria oocysts by two internationally reputed broiler strains under local environmental and management conditions. Forty one-day-old male chickens of the Arbor Acres and Ross strains (20 chicks from each strain) were used in this study and were divided randomly into four control and challenge groups. Feed and water were provided for ad libitum consumption. At 15 d of age, the chickens of the challenge groups (from each strain) were individually inoculated with a mixture of 50,000 sporulated oocysts of 4 species: E. acervulina (20%), E. maxima (40%), E. tenella (25%) and E. necatrix (15%). From the fourth day after the Eimeria challenge, faecal droppings (litter samples) were collected for 10 consecutive days to count oocysts per gram (OPG). The results indicated that, in the challenge groups, OPG increased from day 4 to day 7 post-challenge, and the peak level of OPG was seen on the seventh day after the challenge. From day 8 to 9, OPG decreased; this decrease continued, alternating between mild and fast, up to day 13. Overall, the average OPG in the Arbor Acres group was lower than in the Ross group on all days post-inoculation, and this difference was significant according to the t-test. According to the results obtained in this study, and since the oocyst index is almost always considered one of the most important indicators for coccidiosis evaluation, it can be concluded that, under the same surveillance conditions and regarding the severity of coccidiosis, Arbor Acres strain broilers shed fewer oocysts than Ross strain broilers.

Keywords: arbor acres, ross, coccidiosis, OPG

Procedia PDF Downloads 486
12020 An Approach for Reliably Transforming Habits Towards Environmental Sustainability Behaviors Among Young Adults

Authors: Dike Felix Okechukwu

Abstract:

Studies and reports from authoritative sources such as the Intergovernmental Panel on Climate Change (IPCC) have stated that, to effectively solve environmental sustainability challenges such as pollution, inappropriate waste disposal, and unsustainable consumption, there is a need for more research seeking solutions towards environmentally sustainable behavior. However, the literature thus far reports only sporadic developments of transformative learning (TL) in environmental sustainability, because there are scarce reports showing the reliable process(es) needed to produce TL, for sustainability projects or otherwise. Nonetheless, a recently published article demonstrates how TL can be used to help young adults gain transformed mindsets and habits toward environmental sustainability behaviors and practices. That study, however, does not demonstrate, on a repeated basis, the dependability of the method or the reliability of the procedures in using its proposed methodology to help young adults achieve transformed habits towards environmental sustainability behaviors, especially in diverse contexts. In this study, a reliable process that can be used to achieve transformations in habits and mindsets toward environmental sustainability behaviors is demonstrated through repeated measures. To achieve this, the design adopted is a multiple case study with thematic analysis techniques. Five cases in diverse contexts were used to analyze evidence of transformative learning outcomes toward environmentally sustainable behaviors. Results from the study offer fresh perspectives on a reliable methodology that can be adopted to achieve transformations in habits and mindsets toward environmental sustainability behaviors.

Keywords: environmental sustainability, transformative learning, behaviour, learning, education

Procedia PDF Downloads 80
12019 Flowback Fluids Treatment Technology with Water Recycling and Valuable Metals Recovery

Authors: Monika Konieczyńska, Joanna Fajfer, Olga Lipińska

Abstract:

In Poland, exploration and prospecting for unconventional hydrocarbons (natural gas accumulated in the Silurian shale formations) started in 2007, building on the experience of other countries that had opened new possibilities for using existing hydrocarbon resources. Shale gas exploitation requires hydraulic fracturing, a highly water-consuming process that demands large volumes of available water. As a result, a considerable amount of mining waste is generated, particularly liquid waste, i.e. flowback fluid of variable chemical composition. The chemical composition of the flowback fluid depends on the composition of the fracturing fluid and the chemistry of the fractured geological formations. Typically, flowback fluid is highly saline and can be enriched in heavy metals, including rare earth elements, naturally occurring radioactive materials, and organic compounds. The generated fluids, classified as extractive waste, should be properly managed in a recovery or disposal facility. Both the high water content of the waste and its variable chemical composition are problematic, and the limited capacity of currently operating facilities is a growing concern. Based on current estimates, operating facilities will not be sufficient for waste disposal needs once extraction of unconventional hydrocarbons begins. Furthermore, the content of metals in flowback fluids, including rare earth elements, is a considerable incentive to develop metal recovery technology. Recycling is also a key factor in selecting the treatment process, which should ensure that the thresholds required for reuse are met. The paper presents a study of the chemical composition of flowback fluids, based on samples from hydraulic fracturing processes performed in Poland. The flowback fluid treatment and water recovery scheme is reviewed, along with a discussion of the results and an assessment of the environmental impact, including all by-products generated. The presented technology is innovative due to metal recovery as well as the supply of purified water for the hydraulic fracturing process, making a significant contribution to reducing water consumption.

Keywords: environmental impact, flowback fluid, management of special waste streams, metals recovery, shale gas

Procedia PDF Downloads 252
12018 Removal of Heavy Metals Pb, Zn and Cu from Sludge Waste of Paper Industries Using Biosurfactant

Authors: Nurul Hidayati

Abstract:

Increasing public awareness of environmental pollution influences the search for and development of technologies that help clean up organic and inorganic contaminants such as metals. Sludge waste from paper industries, a toxic and hazardous material from a specific source, contains Pb, Zn, and Cu from soluble waste ink. An alternative, eco-friendly remediation technology is the use of biosurfactants and biosurfactant-producing microorganisms. Soil washing is among the methods available to remove heavy metals from sediments. The purpose of this research is to study the effectiveness of biosurfactants, at a concentration equal to the critical micelle concentration (CMC), for removing the heavy metals lead, zinc, and copper in batch washing tests, using biosurfactants produced by four different microorganisms. Pseudomonas putida T1(8), Bacillus subtilis 3K, Acinetobacter sp., and Actinobacillus sp. were grown on a mineral salt medium supplemented with 2% molasses, a low-cost substrate. The samples were kept in a shaker at 120 rpm at room temperature for 3 days. Sludge supernatants and sediments were separated using a centrifuge, and supernatant samples were measured by atomic absorption spectrophotometry. The highest removal of Pb, up to 14.04%, was achieved by Acinetobacter sp. The biosurfactant of Pseudomonas putida T1(8) showed the highest removal of Zn and Cu, up to 6.5% and 2.01%, respectively. Biosurfactants play a role in the metal removal process through wetting, contact of the biosurfactant with the sediment surface, and detachment of the metals from the sediment. Biosurfactants have proven their ability as washing agents for heavy metal removal from sediments, but more research is needed to optimize the removal process.
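As an illustration only, a removal efficiency such as the reported 14.04% Pb removal could be computed from atomic absorption measurements of the supernatant along the lines below; the mass-balance convention and all numbers are hypothetical assumptions, not taken from the paper.

```python
# Sketch only: metal released into the washing supernatant expressed as a
# percentage of the total metal initially present in the sludge sample.
def removal_percent(supernatant_mg_per_l: float, supernatant_volume_l: float,
                    sludge_mass_g: float, metal_in_sludge_mg_per_g: float) -> float:
    """Removal efficiency (%) = released metal mass / initial metal mass * 100."""
    released_mg = supernatant_mg_per_l * supernatant_volume_l
    total_mg = sludge_mass_g * metal_in_sludge_mg_per_g
    return 100.0 * released_mg / total_mg

# Hypothetical example: 5 g of sludge at 1.2 mg Pb/g, washed in 0.05 L of
# biosurfactant solution, with 16.8 mg/L Pb measured in the supernatant.
print(f"Pb removal: {removal_percent(16.8, 0.05, 5.0, 1.2):.2f}%")  # -> 14.00%
```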

Keywords: biosurfactant, removal of heavy metals, sludge waste, paper industries

Procedia PDF Downloads 310
12017 42CrMo4 Steel Flow Behavior Characterization for High-Temperature Closed-Die Hot Forging in Automotive Component Applications

Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray

Abstract:

The current energy situation and the high competitiveness of industrial sectors such as the automotive industry have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes based on high-temperature hot forging in closed dies have emerged in recent years as solutions for expanding the possibilities of hot forging and iron casting in the automotive industry. These technologies lie midway between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This is an advantage over semi-solid forming processes such as thixoforging, because such high temperatures do not need to be reached for high-melting-point alloys such as steels, reducing manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, these technologies allow the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and significant reductions in raw material, energy consumption, and the number of forging steps have been demonstrated. Despite these advantages, from the material behavior point of view, the expansion of these technologies has shown the need to develop new material flow behavior models in the process working temperature range to make the simulation and prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also enables the design of the new closed-die concepts required. In this work, the flow behavior of 42CrMo4 steel, widely used in commercial automotive components, has been characterized in the mentioned temperature range. For that purpose, hot compression tests were carried out in a thermomechanical tester over a temperature range covering the material behavior from hot forging up to the nil ductility temperature (NDT): 1250 °C, 1275 °C, 1300 °C, 1325 °C, 1350 °C, and 1375 °C. As for the strain rates, three different orders of magnitude were considered (0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹). The results of the hot compression tests were then processed to adapt or rewrite the Spittel model, which is widely used in commercial automotive software such as FORGE® but whose existing parameter sets are restricted to temperatures up to 1250 °C. Finally, the new flow behavior model was validated by simulating the process for a commercial automotive component and comparing the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for 42CrMo4 steel in the new working temperature range, and its application to process simulation of commercial automotive components, have been achieved and will be presented.
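For illustration only, a simplified Hensel-Spittel (Spittel) flow-stress law of the general kind referenced above can be sketched as follows; this is a reduced four-coefficient variant, and the coefficient values are hypothetical assumptions rather than parameters fitted in the study.

```python
# Sketch of a simplified Spittel-type flow-stress law (not the paper's fitted model).
import numpy as np

def spittel_flow_stress(strain, strain_rate, temp_c,
                        A=2500.0, m1=-0.0025, m2=0.15, m3=0.12, m4=-0.05):
    """Simplified form: sigma = A * exp(m1*T) * eps^m2 * exp(m4/eps) * eps_dot^m3.

    strain      : true plastic strain (dimensionless, > 0)
    strain_rate : true strain rate in 1/s
    temp_c      : temperature in degrees Celsius
    returns     : flow stress in MPa (with these hypothetical coefficients)
    """
    return (A * np.exp(m1 * temp_c)
              * strain ** m2
              * np.exp(m4 / strain)
              * strain_rate ** m3)

# Example: flow stress at 1300 C and a strain of 0.3 for the three tested strain rates.
for rate in (0.1, 1.0, 10.0):
    print(f"{rate:>4} 1/s -> {spittel_flow_stress(0.3, rate, 1300.0):.1f} MPa")
```

Fitting the coefficients to the hot-compression data (for example by least squares on the logarithm of this expression) would be the step that extends the model beyond 1250 °C.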

Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, spittel flow behavior model

Procedia PDF Downloads 116
12016 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus Using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due to either the nature of the inverse problem or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of these groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and produces outcomes with patched patterns. In this study, simulated wave images generated by finite-difference time-domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. Using simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (ranging from 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method was compared against DI and LFE using the relative error (root mean square error, or RMSE, divided by the average shear modulus) between the true shear modulus map and the estimated one. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise for quantifying tissue shear modulus from MRE with high robustness and efficiency.
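As a minimal sketch (not the authors' code), the evaluation metric described above, RMSE divided by the mean true shear modulus, could be computed as follows; the shear-modulus maps used here are synthetic placeholders rather than FDTD-simulated data or U-Net outputs.

```python
# Sketch only: relative error metric for comparing an estimated shear-modulus
# map against the ground-truth map, as described in the abstract.
import numpy as np

def relative_error(true_map_kpa: np.ndarray, est_map_kpa: np.ndarray) -> float:
    """Relative error = RMSE(true, estimate) / mean(true), returned as a fraction."""
    rmse = np.sqrt(np.mean((true_map_kpa - est_map_kpa) ** 2))
    return rmse / np.mean(true_map_kpa)

# Hypothetical 64x64 maps in the 1-15 kPa range; the "estimate" is a noisy
# copy of the truth, standing in for a network prediction.
rng = np.random.default_rng(0)
true_map = rng.uniform(1.0, 15.0, size=(64, 64))
estimate = true_map + rng.normal(0.0, 0.4, size=(64, 64))

print(f"relative error: {100 * relative_error(true_map, estimate):.2f}%")
```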

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 48