Search results for: constriction factor based particle swarm optimization (CPSO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 33572

32792 Eco-Fashion Dyeing of Denim and Knitwear with Particle-Dyes

Authors: Adriana Duarte, Sandra Sampaio, Catia Ferreira, Jaime I. N. R. Gomes

Abstract:

With the fashion for faded, worn garments, the textile industry has moved from indigo and pigments to dyes that are fixed by cationization, using products that can be toxic; the worn effect is then produced by washing down the dye with friction and/or an enzyme treatment in a subsequent operation. Increasingly, garments are also treated with bleaches such as hypochlorite and permanganate, both toxic substances. An alternative is presented in this work for both garment and jet dyeing processes, avoiding pre-cationization through the use of "particle-dyes": hybrid products made up of an inorganic particle and an organic dye. With standard soluble dyes, diffusion into the interior of the fiber cannot be avoided without prior cationization; only in this way can the centre of the fibre be kept undyed, so that the faded effect can be produced by removing the surface dye and exposing the white fiber beneath. With "particle-dyes", prior cationization is unnecessary. Applied at low temperatures, the dye does not diffuse completely into the fiber, since it is a particle rather than a soluble dye, and it can therefore give the faded effect. Bleaching can still be used but can also be avoided, since friction and enzymes can be applied just as for other dyes. The worn-look fashion brought about new ways of applying reactive dyes through prior cationization of cotton, lowering the salt and temperature that reactive dyes usually need for reacting and, as a side effect, giving a more environmentally friendly process. However, cationization can be problematic outside garment dyeing, for example in jet dyeing, where level dyeings are difficult to obtain. It should also be applied by a pad-fix or pad-batch process, owing to the low affinity of the pre-cationization products, which makes the process more expensive and adds a risk of unlevelness in processes such as jet dyeing.
With particle-dyes, since no pre-cationization is necessary, they can be applied in jet dyeing. The excess dye is fixed by a fixing agent, which fixes the insoluble dye onto the surface of the fibers. With the fixing agent, only one to three rinses in water at room temperature are necessary, saving water and improving the washfastness.

Keywords: denim, garment dyeing, worn look, eco-fashion

Procedia PDF Downloads 521
32791 Fama French Four Factor Model: A Study of Nifty Fifty Companies

Authors: Deeksha Arora

Abstract:

The study aims to explore the applicability of two widely used asset pricing models, the Capital Asset Pricing Model (CAPM) and the Fama-French Four Factor Model, in the Indian equity market. The study is based on the companies that form part of the Nifty Fifty Index over a period of five years, 2011 to 2016. The asset pricing model is examined by forming portfolios on the basis of three variables: market capitalization (size effect), book-to-market equity ratio (value effect), and profitability. The study provides a basis for testing the presence of the Fama-French Four Factor Model in the Indian stock market and may serve as a basis for future research on generalized asset pricing models comprising multiple risk factors.
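Factor loadings in such models are usually estimated by time-series regression of portfolio excess returns on the factor returns. A minimal sketch in Python, assuming synthetic factor data (the numbers, factor ordering, and betas below are illustrative, not the study's):

```python
import numpy as np

def factor_loadings(excess_returns, factors):
    """Estimate the intercept (alpha) and factor betas by ordinary
    least squares, as in a four-factor time-series regression."""
    T = len(excess_returns)
    X = np.column_stack([np.ones(T), factors])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coef[0], coef[1:]                     # alpha, betas

# Synthetic check: noise-free returns built from known betas are recovered.
rng = np.random.default_rng(0)
F = rng.normal(0.0, 0.05, size=(60, 4))          # 60 months x 4 factors
true_betas = np.array([1.1, 0.4, -0.3, 0.2])     # e.g. MKT, SMB, HML, profitability
r = 0.001 + F @ true_betas                       # alpha of 0.1% per month
alpha, betas = factor_loadings(r, F)
```

With real data, the factor series would come from a Fama-French style data library and the returns from the Nifty Fifty portfolios.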

Keywords: book to market equity, Fama French four factor model, market capitalization, profitability, size effect, value effect

Procedia PDF Downloads 242
32790 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by injection molding. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma methodology, the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable (noise) factor. The four controllable factors are cooling time, melt temperature, holding time, and metering stroke; the noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal, and with the new settings the process capability indices improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability of the injection molding process.
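For reference, the L9(3^4) orthogonal array assigns four three-level factors to nine runs, and a smaller-the-better signal-to-noise ratio is a common way to score a response such as shrinkage deviation. A sketch under those assumptions (the run responses are invented for illustration and are not the paper's measurements):

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 three-level factors
# (here standing in for cooling time, melt temp, holding time, metering stroke).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_smaller_is_better(y):
    """Signal-to-noise ratio for a 'smaller is better' response,
    e.g. deviation of shrinkage from its target."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def best_levels(responses):
    """Pick, for each factor, the level with the highest mean S/N."""
    sn = np.array([sn_smaller_is_better(r) for r in responses])
    picks = []
    for f in range(L9.shape[1]):
        means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
        picks.append(int(np.argmax(means)))
    return picks

# Hypothetical shrinkage deviations for the 9 runs; the two replicates per
# run could represent the two material brands (the noise factor).
runs = [[0.12, 0.10], [0.08, 0.09], [0.15, 0.14],
        [0.07, 0.06], [0.11, 0.12], [0.09, 0.08],
        [0.13, 0.12], [0.10, 0.11], [0.05, 0.06]]
optimum = best_levels(runs)                      # one level index per factor
```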

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 158
32789 Dynamic Behavior of Brain Tissue under Transient Loading

Authors: Y. J. Zhou, G. Lu

Abstract:

In this paper, an analytical study is made of the dynamic behavior of human brain tissue under transient loading. In this analytical model, the Mooney-Rivlin constitutive law is coupled with visco-elastic constitutive equations to take into account both the nonlinear and time-dependent mechanical behavior of brain tissue. Five ordinary differential equations relating five main parameters (radial stress, circumferential stress, radial strain, circumferential strain, and particle velocity) are obtained by using the method of characteristics to transform five partial differential equations (two continuity equations, one equation of motion, and two constitutive equations). Analytical expressions for the attenuation properties of spherical waves in brain tissue are derived. Numerical results are obtained from the five ordinary differential equations. The mechanical responses (particle velocity and stress) of the brain are compared at different radii (5, 6, 10, 15, and 25 mm) under four different input conditions. The results illustrate that the shape of the particle-velocity loading curve significantly influences the stress in brain tissue. Understanding this influence can be used to reduce potential injury to the brain under head impact by designing protective structures that control the shape of the loading curve.
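For readers unfamiliar with the material law, the incompressible Mooney-Rivlin model gives a closed-form Cauchy stress in simple uniaxial extension. A sketch of that textbook form (the constants below are illustrative placeholders, not the paper's fitted values, and the paper's full model additionally couples in viscoelastic terms):

```python
import numpy as np

def mooney_rivlin_uniaxial(stretch, c1, c2):
    """Cauchy stress of an incompressible Mooney-Rivlin solid in
    uniaxial extension: sigma = 2*(l**2 - 1/l)*(C1 + C2/l)."""
    l = np.asarray(stretch, dtype=float)
    return 2.0 * (l ** 2 - 1.0 / l) * (c1 + c2 / l)

# Illustrative constants (kPa); reported soft-tissue values vary by study.
c1, c2 = 0.28, 0.33
strains = np.linspace(1.0, 1.3, 4)               # stretch ratios
stresses = mooney_rivlin_uniaxial(strains, c1, c2)
```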

Keywords: analytical method, mechanical responses, spherical wave propagation, traumatic brain injury

Procedia PDF Downloads 250
32788 The Role of Optimization and Machine Learning in e-Commerce Logistics in 2030

Authors: Vincenzo Capalbo, Gianpaolo Ghiani, Emanuele Manni

Abstract:

Global e-commerce sales have reached unprecedented levels in the past few years. As this trend is predicted to continue through the 2020s, companies will face new challenges when planning and controlling e-commerce logistics. In this paper, we survey the related literature on Optimization and Machine Learning, as well as on combined methodologies. We also identify the distinctive features of next-generation planning algorithms, namely scalability, model-and-run features, and learning capabilities, that will be fundamental to coping with the scale and complexity of logistics in the next decade.

Keywords: e-commerce, hardware acceleration, logistics, machine learning, mixed integer programming, optimization

Procedia PDF Downloads 220
32787 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers

Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho

Abstract:

In the present paper, the design of plate heat exchangers is formulated as an optimization problem using two mathematical modeling approaches. The number of plates is the objective function to be minimized, with some configuration parameters considered implicitly. Screening is the optimization method used to solve the problem. Thermal and hydraulic constraints are verified, non-viable solutions are discarded, and the method searches for convergence to the optimum, if it exists. A case study is presented to test the applicability of the developed algorithm. Results are consistent with the literature.
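Screening, as used here, amounts to scanning candidate plate counts in increasing order and returning the first configuration that passes every constraint. A generic sketch with toy stand-in checks (the thresholds are invented; the real checks would evaluate the thermal and hydraulic models):

```python
def screen_min_plates(candidates, feasible):
    """Screening: scan candidate plate counts in increasing order and
    return the first one satisfying all constraints, or None if no
    candidate is viable."""
    for n in sorted(candidates):
        if feasible(n):
            return n
    return None

# Toy constraints standing in for the real thermal/hydraulic checks:
# enough heat-transfer area (n >= 18) and pressure drop in limit (n <= 40).
def toy_feasible(n_plates):
    area_ok = n_plates >= 18
    pressure_ok = n_plates <= 40
    return area_ok and pressure_ok

best = screen_min_plates(range(2, 101), toy_feasible)  # -> 18
```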

Keywords: plate heat exchanger, optimization, modeling, simulation

Procedia PDF Downloads 499
32786 Optimization of Feeder Bus Routes at Urban Rail Transit Stations Based on Link Growth Probability

Authors: Yu Song, Yuefei Jin

Abstract:

Urban public transportation can be integrated only when there are efficient connections to urban rail lines; however, no effective or rapid solutions for making these connections are currently being investigated. This paper analyzes the space-time distribution of connecting travel demand based on taxi trajectory data and road network data, identifies potential feeder-bus stations from the potential connection demand, and introduces the link growth probability model from complex network theory to determine the basic feeder-bus lines, i.e., the line directions that are best connected given the demand characteristics. A constraint-based tree-view exhaustive approach grounded in graph theory is then proposed, which accelerates convergence through chained calculations. The model and solution method are evaluated using WEI QU NAN Station, the terminal station of Xi'an Metro Line 2 in Shaanxi Province, as an illustration. In total, 153 prospective stations were identified, the feeder-bus network for the entire line was laid out, and the best route adjustment strategy was found.
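One common form of a link growth rule in complex networks makes the probability of a new link proportional to a node's existing connectivity (preferential attachment). A sketch under that assumption (the station names and degrees are hypothetical, and the paper's exact probability model may differ):

```python
import random

def link_growth_probabilities(degrees):
    """Probability that the next feeder-bus link attaches to each
    candidate station, proportional to its current connectivity
    (a preferential-attachment style growth rule)."""
    total = sum(degrees.values())
    if total == 0:                     # no links yet: choose uniformly
        n = len(degrees)
        return {s: 1.0 / n for s in degrees}
    return {s: d / total for s, d in degrees.items()}

def grow_link(degrees, seed=42):
    """Sample the station receiving the next link."""
    probs = link_growth_probabilities(degrees)
    stations = list(probs)
    weights = [probs[s] for s in stations]
    return random.Random(seed).choices(stations, weights=weights, k=1)[0]

degrees = {"A": 4, "B": 2, "C": 1, "D": 1}     # hypothetical station degrees
probs = link_growth_probabilities(degrees)     # A gets 4/8 = 0.5
```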

Keywords: feeder bus, route optimization, link growth probability, the graph theory

Procedia PDF Downloads 59
32785 Two-Photon-Exchange Effects in the Electromagnetic Production of Pions

Authors: Hui-Yun Cao, Hai-Qing Zhou

Abstract:

High-precision measurements and experiments play an ever more important role in particle physics and atomic physics, and analysing precise experimental data sets requires correspondingly precise and reliable theoretical calculations. The form factors of elemental constituents such as the pion and the proton are still attractive issues in current Quantum Chromodynamics (QCD). In this work, the two-photon-exchange (TPE) effects in ep→enπ⁺ at small -t are discussed within a hadronic model. Under the pion dominance approximation and the limit mₑ→0, the TPE contribution to the amplitude can be described by a scalar function. We calculate the TPE contributions to the amplitude and to the unpolarized differential cross section, considering only the elastic intermediate state. The results show that the TPE corrections to the unpolarized differential cross section range from about -4% to -20% at Q²=1-1.6 GeV². After applying the TPE corrections to the experimental data sets for the unpolarized differential cross section, we analyze the TPE corrections to the separated cross sections σ(L,T,LT,TT). We find that the TPE corrections (at Q²=1-1.6 GeV²) to σL range from about -10% to -30%, those to σT are about 20%, and those to σ(LT,TT) are much larger. From these analyses, we conclude that the TPE contributions in ep→enπ⁺ at small -t are important for extracting the separated cross sections σ(L,T,LT,TT) and the electromagnetic form factor of π⁺ in experimental analyses.

Keywords: differential cross section, form factor, hadronic, two-photon

Procedia PDF Downloads 115
32784 Optimization of Gold Mining Parameters by Cyanidation

Authors: Della Saddam Housseyn

Abstract:

Gold, the quintessential noble metal, is one of the most sought-after metals today, given its ever-increasing price on the international market. The Amesmessa gold deposit is one of the gold-producing deposits. The first step in our work is to analyze the ore (considered rich ore). Mineralogical and chemical analysis has shown that the ore consists mainly of quartz in addition to other phases such as Al2O3, Fe2O3, CaO, and dolomite. The second step consists of leaching tests carried out in rolling bottles. These tests were carried out on 14 samples to determine the maximum recovery rate and the optimum reagent consumption (NaCN and CaO). Tests carried out at a pulp density of 50% solids, a cyanide concentration of 500 ppm, and a particle size of less than 0.6 mm at alkaline pH gave a recovery rate of 94.37%.

Keywords: cyanide, DRX, FX, gold, leaching, rate of recovery, SAA

Procedia PDF Downloads 165
32783 Urban Planning Compilation Problems in China and the Corresponding Optimization Ideas under the Vision of the Hyper-Cycle Theory

Authors: Hong Dongchen, Chen Qiuxiao, Wu Shuang

Abstract:

Systems science reveals the complex nonlinear mechanisms of behaviour in urban systems. In China, however, most city planners still apply simple linear thinking to this open, complex giant system. Based on an analysis of the reasons why current urban planning has failed, this paper introduces the hyper-cycle theory, one of the basic theories of systems science, and proposes how urban planning compilation should change: from controlling quantities to shaping relationships, from blueprint planning to progressive planning grounded in nonlinear characteristics, and from management control to dynamic monitoring and feedback.

Keywords: systematic science, hyper-cycle theory, urban planning, urban management

Procedia PDF Downloads 381
32782 Cost Reduction Techniques for Provision of Shelter to Homeless

Authors: Mukul Anand

Abstract:

Quality-oriented, affordable shelter for all has always been the key issue in the housing sector of our country. Homelessness is the acute form of housing need. It is a paradox that, in spite of innumerable government-initiated programmes for affordable housing, certain sections of society are still devoid of shelter. About nineteen million (18.78 million) households grappled with housing shortage in urban India in 2012. In the Indian scenario, there is a major mismatch between the people for whom houses are being built and those who need them. The prime obstacle public authorities face in providing quality housing for all is the high cost of construction. The present paper sets out executable techniques for diluting the cost factor in housing the homeless. The key factors responsible for delivering cheap housing stock, such as capacity building, resource optimization, innovative low-cost building materials, and indigenous skeleton housing systems, are incorporated into these techniques. Time performance, an important aspect of the above factors, is also explored so as to increase the effectiveness of low-cost housing. Best practices are taken up as case studies covering both conventional housing techniques and innovative low-cost housing techniques. Transportation accounts for approximately 30% of a total construction budget, so the use of alternative local solutions, depending on the region, is covered to highlight major components of low-cost housing. Government is laid back regarding baseline information on the use of innovative low-cost methods and techniques of resource optimization. The paper is therefore an attempt to bring to light simpler solutions for achieving low-cost housing.

Keywords: construction, cost, housing, optimization, shelter

Procedia PDF Downloads 426
32781 Case Study: Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models to execute projects and expand businesses to fit a diverse market. Such extensive integration of subcontractors is becoming an influential factor in the contractor's cash flow management; accordingly, subcontractors' financial terms are pivotal to the well-being of the contractor's cash flow. The aim of this research is to study the contractor's cash flow with respect to the owner's and subcontractors' payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor's financing limits and optimize profit. The model is built with Microsoft Excel VBA, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the largest negative overdraft value, minimizing the net present worth of the overdraft, and maximizing the project net profit. The model is validated on a full-scale project that includes both self-performed and subcontracted work packages. The results show the model's ability to optimize the contractor's negative cash flow values while assisting contractors in selecting suitable subcontractors to achieve the objective function.
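The first objective function, the highest negative overdraft, can be evaluated by simulating the month-by-month cash position under given lag and retention terms. A simplified sketch (the cost and billing figures are hypothetical, and the real model would additionally handle per-subcontractor terms, payment frequency, and the VBA/GA machinery):

```python
def peak_overdraft(costs, billings, lag=1, retention=0.05, advance=0.0):
    """Simulate the contractor's cumulative cash position and return the
    most negative end-of-month balance (the financing need).

    costs     : monthly outflows (work executed, subcontractor payments)
    billings  : monthly amounts invoiced to the owner
    lag       : months between invoicing and receipt of payment
    retention : fraction withheld from each payment, released at the end
    advance   : up-front advance payment from the owner
    """
    months = len(costs) + lag + 1
    balance, worst = advance, advance
    for m in range(months):
        if m < len(costs):
            balance -= costs[m]                        # monthly outflow
        if 0 <= m - lag < len(billings):
            balance += billings[m - lag] * (1.0 - retention)
        if m == months - 1:                            # retention released
            balance += retention * sum(billings)
        worst = min(worst, balance)
    return worst

# Hypothetical 4-month package, billed at cost plus a 10% margin.
costs = [100.0, 120.0, 120.0, 60.0]
billings = [c * 1.10 for c in costs]
need = peak_overdraft(costs, billings, lag=1, retention=0.05)
```

A genetic algorithm would then search over subcontracting plans, scoring each candidate with an evaluation of this kind.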

Keywords: cash flow optimization, payment plan, procurement management, subcontracting plan

Procedia PDF Downloads 106
32780 Impact of Microwave Heating Temperatures on the Pharmaceutical Powder Characteristics

Authors: Maha Al-Ali, Selvakannan Periasamy, Rajarathinam Parthasarathy

Abstract:

Drying temperature is an important factor affecting the physicochemical properties of dried materials, particularly pharmaceutical powders. The use of microwave radiation to dry pharmaceuticals is very limited, and the available information about the interaction between electromagnetic radiation and pharmaceutical materials is still scarce. Therefore, a microwave drying process is employed in this work to dry the wet (moisturised) granules of a formulated naproxen-sodium drug. This study aims to investigate the influence of microwave radiation temperature on moisture removal and on the crystalline structure, size, and morphology of the dried naproxen-sodium particles, and to identify any potential changes in the chemical groups of the drug. Newly formulated naproxen-sodium is prepared and moisturised by a wet granulation process and then dried using microwave radiation at different temperatures. A moisture analyzer, Fourier-transform infrared spectroscopy, powder X-ray diffraction, and scanning electron microscopy are used to characterise the non-moisturised powder (reference powder), the moisturised granules, and the dried particles. The results show that microwave drying of naproxen-sodium at a high drying temperature is more efficient, in terms of moisture removal, than at low temperatures. Although there is no significant change in the chemical structure of the dried particles, the particle size, crystallinity, and morphology change appreciably with heating temperature.

Keywords: heating temperature, microwave drying, naproxen-sodium, particle size

Procedia PDF Downloads 142
32779 Catalytic and Non-Catalytic Pyrolysis of Walnut Shell Waste to Biofuel: Characterisation of Catalytic Biochar and Biooil

Authors: Saimatun Nisa

Abstract:

Walnut is an important export product of the Union Territory of Jammu and Kashmir. After extraction of the kernel, the walnut shell forms a solid waste that needs to be managed, and pyrolysis is one interesting option for its utilization. In this study, a microwave pyrolysis reactor is used to convert walnut shell biomass into value-added products. Catalytic and non-catalytic conversion of walnut shell waste to oil, gas, and char was evaluated using a Co-based catalyst, which was characterized by XPS and SEM analysis. Pyrolysis temperature, reaction time, particle size, and sweeping gas (N₂) flow rate were set in the ranges 400-600 °C, 40 min, <0.6 mm to <4.75 mm, and 300 ml min−1, respectively, with the heating rate fixed at 40 °C min−1. The maximum gas yield, 45.2%, was obtained at 600 °C, 40 min, particle size range 1.18-2.36 mm, and 0.5 molar catalyst. The catalytic and non-catalytic liquid products were characterized by GC-MS analyses, and the solid product was analyzed by FTIR and SEM.

Keywords: walnut shell, biooil, biochar, microwave pyrolysis

Procedia PDF Downloads 28
32778 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, information on uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches yield different outputs; some may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: sampling-based uncertainty propagation with first-order error analysis, and Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that might be aleatory, but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but can be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. The study shows that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive, so it has a high probability of underestimating the output bounds. An optimization-based strategy that converts uncertainty described by interval data into a probabilistic framework is therefore necessary; this is achieved here using PBO.
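A distributional p-box of type (iii) is often propagated with a double loop: an outer scan over the epistemic interval and an inner Monte Carlo sample of the aleatory variability. A sketch of that generic scheme (the normal input, the x² model, and all numbers are illustrative, not the challenge problem's):

```python
import numpy as np

def output_p95_bounds(mu_interval, sigma, model, n_outer=21, n_inner=2000, seed=1):
    """Double-loop propagation for a distributional p-box: the input is
    Normal(mu, sigma) with mu only known to lie in mu_interval (epistemic).
    The outer loop scans mu over the interval; the inner loop samples the
    aleatory variability and records the 95th percentile of the model
    output. Returns the [min, max] envelope of that percentile."""
    rng = np.random.default_rng(seed)
    lo, hi = mu_interval
    p95s = []
    for mu in np.linspace(lo, hi, n_outer):
        x = rng.normal(mu, sigma, n_inner)          # aleatory inner sample
        p95s.append(np.percentile(model(x), 95))
    return min(p95s), max(p95s)

# Toy model y = x**2 with mu somewhere in [0, 1] (epistemic interval).
low, high = output_p95_bounds((0.0, 1.0), sigma=0.5, model=lambda x: x ** 2)
```

The width of the returned envelope reflects the reducible epistemic part; percentile-based optimization replaces the grid scan with an explicit optimization over the interval.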

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 222
32777 Design and Optimization of Sustainable Buildings by Combined Cooling, Heating and Power System (CCHP) Based on Exergy Analysis

Authors: Saeed Karimi, Ali Behbahaninia

Abstract:

This study deals with the design and optimization of a combined cooling, heating, and power (CCHP) system for a sustainable building. Sustainable buildings are environmentally responsible and help save energy while reducing waste, pollution, and environmental degradation. CCHP systems are widely used to save energy sources: electricity, cooling, and heating are generated from just one primary energy source. Selecting the size of components based on the users' maximum demand increases the total cost of energy and equipment for the building complex. For this purpose, a system was designed in which the prime mover (gas turbine), heat recovery boiler, and absorption chiller are sized below the peak demand; the shortfall in peak-consumption months is supplied with the help of an electrical absorption chiller and an auxiliary boiler (and the national electricity network). In this study, the optimum capacity of each piece of equipment is determined by a thermoeconomic method, such that the annual capital cost and energy consumption are lowest. The design was carried out for a gas-turbine prime mover, and the optimum designs were then examined using exergy analysis and compared with a traditional energy supply system.

Keywords: sustainable building, CCHP, energy optimization, gas turbine, exergy, thermo-economic

Procedia PDF Downloads 74
32776 The Genuine Happiness Scale: Preliminary Results

Authors: Myriam Rudaz, Thomas Ledermann, Frank D. Fincham

Abstract:

We provide initial findings on the development and validation of the Genuine Happiness Scale (GHS). Based on the Buddhist view of happiness, genuine happiness can be described as an unlimited, everlasting inner joy and peace that gives a person the inner resources to deal with whatever comes his or her way in life. The sample consisted of 678 young adults, with 432 adults participating twice, approximately six weeks apart. Exploratory and confirmatory factor analysis supported a unidimensional factor structure of the GHS. Hierarchical regression analysis revealed that caring for bliss, mindfulness, and compassion predicted genuine happiness longitudinally above and beyond genuine happiness at baseline. We discuss the usefulness of the GHS as an outcome measure for evaluating mindfulness- and compassion-based intervention programs.

Keywords: happiness, bliss, well-being, caring for bliss, mindfulness, compassion

Procedia PDF Downloads 96
32775 Developing New Algorithm and Its Application on Optimal Control of Pumps in Water Distribution Network

Authors: R. Rajabpour, N. Talebbeydokhti, M. H. Ahmadi

Abstract:

In recent years, new techniques for solving complex engineering problems have been proposed. One of these is the JPSO algorithm. With innovative changes to the jump mechanism of JPSO, it is possible to construct graph-based solutions with a new algorithm called G-JPSO. In this paper, the new algorithm is evaluated on the Fletcher-Powell optimal control problem and on the optimal control of pumps in a water distribution network. Optimal pump control comprises the optimal operating timetable (on/off status) for each pump over the desired time interval. The maximum number of on/off switches for each pump is imposed on the objective function as an additional constraint. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The proposed algorithm's results compare well with those of the ant colony, genetic, and JPSO algorithms, which shows its robustness in finding near-optimum solutions at reasonable computational cost.
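The objective evaluation inside such a pump-scheduling optimizer can be sketched as an energy-cost function that rejects timetables exceeding the allowed number of status changes (the tariff, power rating, and switch limit below are hypothetical):

```python
def schedule_cost(schedule, tariff, power_kw, max_switches):
    """Energy cost of a pump on/off timetable over the planning horizon.
    Returns None for schedules that exceed the allowed number of on/off
    status changes (the switching constraint from the paper)."""
    switches = sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
    if switches > max_switches:
        return None                     # violates the switching constraint
    return sum(power_kw * t for s, t in zip(schedule, tariff) if s)

tariff = [0.05, 0.05, 0.12, 0.20, 0.20, 0.12, 0.05, 0.05]  # $/kWh per period
plan = [1, 1, 0, 0, 0, 0, 1, 1]         # run only in the cheap periods
cost = schedule_cost(plan, tariff, power_kw=75.0, max_switches=4)
```

A G-JPSO style search would generate candidate on/off vectors and rank them with an evaluation of this kind, with hydraulic feasibility checked by the simulation model.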

Keywords: G-JPSO, operation, optimization, pumping station, water distribution networks

Procedia PDF Downloads 383
32774 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification

Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong

Abstract:

It remains challenging to measure reliability from the classification results of different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model-weight-fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in breast ultrasound classification models. Experimental results on clinical breast ultrasound datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification. The effectiveness of the proposed approach was verified via multicenter trials: results from five centers indicate that the reliability optimization algorithm enhances the reliability of the classification model and exhibits low multicenter correlation.
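Model Soup, the weight-fusion idea the paper builds on, averages the corresponding weight tensors of several fine-tuned models; the greedy variant admits a model only if the objective does not degrade. A minimal sketch with toy one-layer "models" (the paper's dual-channel reliability would take the place of the placeholder `score`):

```python
import numpy as np

def uniform_soup(weight_sets):
    """Uniform model soup: average the corresponding weight tensors of
    several fine-tuned models into a single fused model."""
    fused = {}
    for name in weight_sets[0]:
        fused[name] = np.mean([w[name] for w in weight_sets], axis=0)
    return fused

def greedy_soup(weight_sets, score):
    """Greedy variant: visit models in descending score order and keep
    each one only if adding it does not lower the soup's score."""
    order = sorted(weight_sets, key=score, reverse=True)
    soup = [order[0]]
    for w in order[1:]:
        if score(uniform_soup(soup + [w])) >= score(uniform_soup(soup)):
            soup.append(w)
    return uniform_soup(soup)

# Two toy 'models' with a single layer each.
m1 = {"fc.weight": np.array([1.0, 3.0])}
m2 = {"fc.weight": np.array([3.0, 1.0])}
fused = uniform_soup([m1, m2])            # fc.weight -> [2.0, 2.0]
```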

Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization

Procedia PDF Downloads 68
32773 Numerical Simulation of Unsteady Cases of Fluid Flow Using Modified Dynamic Boundary Condition (mDBC) in Smoothed Particle Hydrodynamics Models

Authors: Exa Heydemans, Jessica Sjah, Dwinanti Rika Marthanty

Abstract:

This paper presents numerical simulations using an open boundary algorithm with the modified dynamic boundary condition (mDBC) for weakly compressible smoothed particle hydrodynamics models in the particle-based code DualSPHysics. The algorithm is studied with a view to problems of piping erosion in dams and dikes. A 2D model of unsteady fluid flow past a fixed cylinder is simulated for various Reynolds numbers (Re = 40, 60, 80, and 100) and different model resolutions. A constant inflow velocity with different viscosity values generates the various Reynolds numbers, and different numbers of particles across the cylinder set the resolution. The interaction between the solid particles of the cylinder and the fluid particles is of particular interest: the cylinder experiences the hydrodynamic force caused by the flow of fluid particles, and its solid particles serve as observation points for force and pressure. To demonstrate the capability to model 2D unsteady flow at various Reynolds numbers, the resulting pressure coefficient, drag coefficient, lift coefficient, and Strouhal number are compared with previous work from the literature.

Keywords: hydrodynamics, internal erosion, dualsphysics, viscous fluid flow

Procedia PDF Downloads 140
32772 Measurements of Scattering Cross Sections for 5.895 keV Photons in Various Polymers

Authors: H. Duggal, G. Singh, G. Singh, A. Bhalla, S. Kumar, J. S. Shahi, D. Mehta

Abstract:

The total differential cross section for scattering of 5.895 keV photons by various polymers has been measured at a scattering angle of 135°. The experimental measurements were carried out using an energy-dispersive setup involving an annular 55Fe radioisotope source and a low-energy germanium (LEGe) detector. Cross section values were measured for 20 polymer targets, namely Paraffin Wax, Polytetrafluoroethylene (PTFE), Cellulose, Silicone oil, Polyvinyl alcohol (PVA), Polyvinyl pyrrolidone (PVP), Polymethyl methacrylate (PMMA), Kapton, Mylar, Chitosan, Polyvinyl chloride (PVC), Bakelite, Carbopol, Chlorobutyl rubber (CBR), Polyethylene glycol (PEG), Polysorbate-20, Nylon-6, Cetyl alcohol, Carboxymethyl sodium cellulose, and Sodium starch glycolate. The measurements were performed in vacuum so as to avoid scattering contributions from air and the strong absorption of low-energy photons in the air column. In the present investigation, the geometrical factor and efficiency of the detector were determined by measuring the K x-rays emitted from 22Ti and 23V targets excited by the Mn K x-rays in the same experimental setup. The measured scattering cross sections have been compared with the sum of theoretically calculated elastic and inelastic scattering cross sections. The theoretical elastic (Rayleigh) scattering cross sections were calculated using various form factor approximations, namely, the non-relativistic form factor (NF), relativistic form factor (RF), modified form factor (MF), and MF with anomalous scattering factor (ASF), as well as the second-order S-matrix formalism; the inelastic scattering differential cross sections were based on the Klein-Nishina formula including the inelastic scattering function (KN+ISF). The experimental results show fairly good agreement with the theoretical cross sections.
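The inelastic part referred to above starts from the Klein-Nishina differential cross section, which the authors augment with the inelastic scattering function. A sketch of the bare Klein-Nishina formula (without the ISF correction), using standard physical constants:

```python
import math

R_E = 2.8179403262e-15          # classical electron radius, m
ME_C2_KEV = 510.99895           # electron rest energy, keV

def klein_nishina(theta, energy_kev):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
    per electron, for a photon of the given energy scattered at theta."""
    eps = energy_kev / ME_C2_KEV
    ratio = 1.0 / (1.0 + eps * (1.0 - math.cos(theta)))   # k'/k
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)

# At 5.895 keV (the Mn K x-ray energy used in this work) the cross section
# at 135 degrees lies only slightly below the classical Thomson value.
kn = klein_nishina(math.radians(135.0), 5.895)
thomson = 0.5 * R_E**2 * (1.0 + math.cos(math.radians(135.0))**2)
```

In the zero-energy limit the expression reduces to the Thomson cross section, which is a convenient sanity check.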

Keywords: photon, polymers, elastic and inelastic, scattering cross sections

Procedia PDF Downloads 671
32771 Using Confirmatory Factor Analysis to Test the Dimensional Structure of Tourism Service Quality

Authors: Ibrahim A. Elshaer, Alaa M. Shaker

Abstract:

Several previous empirical studies have operationalized service quality as either a multidimensional or a unidimensional construct. While a few earlier studies investigated some aspects of the assumed dimensional structure of service quality, no study has been found to have tested the construct’s dimensionality using confirmatory factor analysis (CFA). To gain better insight into the dimensional structure of the service quality construct, this paper tests its dimensionality using three CFA models (a higher order factor model, an oblique factor model, and a one factor model) on a set of data collected from 390 British tourists who visited Egypt. The results of the three tested models indicate that the service quality construct is multidimensional. This result helps resolve the problems that might arise from a lack of clarity concerning the dimensional structure of service quality: without testing the dimensional structure of a measure, researchers cannot assume that a significant correlation among indicators is the result of factors measuring the same construct.
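The logic of the dimensionality test can be illustrated with a small simulated example (the loadings, dimension count and indicator counts below are invented for illustration, not taken from the study's data): if service quality is multidimensional, indicators of the same first-order dimension should correlate more strongly than indicators of different dimensions, which is exactly what the competing CFA models adjudicate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 390                                   # sample size as in the study
g = rng.normal(size=n)                    # higher-order "service quality" factor
# three first-order dimensions, each partly driven by the higher-order factor
dims = [0.8 * g + 0.6 * rng.normal(size=n) for _ in range(3)]
# three observed indicators per dimension, each loading on its own dimension
items = np.column_stack([0.85 * d + 0.5 * rng.normal(size=n)
                         for d in dims for _ in range(3)])
R = np.corrcoef(items, rowvar=False)
within = R[0, 1]    # two indicators of the same dimension
across = R[0, 3]    # indicators of different dimensions
print(f"within-dimension r = {within:.2f}, across-dimension r = {across:.2f}")
```

A higher-order CFA model captures this pattern; a one-factor model forces all nine correlations to be equal, which is what the model comparison rejects.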

Keywords: service quality, dimensionality, confirmatory factor analysis, Egypt

Procedia PDF Downloads 572
32770 Assessment of Adsorption Properties of Neem Leaves Wastes for the Removal of Congo Red and Methyl Orange

Authors: Muhammad B. Ibrahim, Muhammad S. Sulaiman, Sadiq Sani

Abstract:

Neem leaves were studied as plant-waste-derived adsorbents for the detoxification of Congo Red (CR) and Methyl Orange (MO) from aqueous solutions using a batch adsorption technique. The objectives involved determining the effects of the basic adsorption parameters, namely agitation time, adsorbent dosage, adsorbent particle size, adsorbate loading concentration and initial pH, on the adsorption process, as well as characterizing the adsorbents by determining their physicochemical properties, the functional groups responsible for the adsorption process using Fourier transform infrared (FTIR) spectroscopy, and the surface morphology using scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS). The adsorption behaviours of the materials were tested against the Langmuir, Freundlich, etc. isotherm models. Percent adsorption increased with increase in agitation time (5-240 minutes), adsorbent dosage (100-500 mg) and initial concentration (100-300 mg/L), and with decrease in particle size (≥75 μm to ≤300 μm) of the adsorbents. Both processes are pH-dependent, with percent adsorption increasing or decreasing in the acidic (2-6) or alkaline (8-12) range over the studied pH range (2-12). From the experimental data, the Langmuir separation factor (RL) suggests unfavourable adsorption for all processes, and the Freundlich constant (nF) indicates an unfavourable process for CR and MO adsorption; while the mean free energy of adsorption
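For reference, the Langmuir separation factor mentioned above is RL = 1/(1 + KL·C0), where KL is the Langmuir constant and C0 the initial dye concentration; 0 < RL < 1 indicates favourable adsorption, RL > 1 unfavourable. A minimal sketch with hypothetical constants (the KL values below are illustrative, not the study's fitted values):

```python
def separation_factor(K_L, C0):
    """Langmuir separation factor R_L = 1 / (1 + K_L * C0)."""
    return 1.0 / (1.0 + K_L * C0)

def favourability(R_L):
    """Classify the adsorption process from the separation factor."""
    if R_L > 1:
        return "unfavourable"    # R_L > 1 arises from a negative fitted K_L
    if R_L == 1:
        return "linear"
    if R_L > 0:
        return "favourable"
    return "irreversible"

# hypothetical values: K_L in L/mg, C0 = initial dye concentration in mg/L
for C0 in (100, 200, 300):       # the concentration range studied
    R_L = separation_factor(-0.002, C0)
    print(C0, round(R_L, 3), favourability(R_L))
```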

Keywords: adsorption, congo red, methyl orange, neem leaves

Procedia PDF Downloads 340
32769 Sustainable Ionized Gas Thermoelectric Generator: Comparative Theoretical Evaluation and Efficiency Estimation

Authors: Mohammad Bqoor, Mohammad Hamdan, Isam Janajreh, Sufian Abedrabbo

Abstract:

This extensive theoretical study of a novel Ionized Gas Thermoelectric Generator (IG-TEG) system has shown the ability to continuously extract energy from the thermal energy of ambient air around standard room temperature and even below. This system does not need a temperature gradient in order to work, unlike other TEGs that use the Seebeck effect, and therefore the new system can be utilized in sustainable energy systems, as well as in green cooling solutions, by extracting energy instead of expending energy on compressing a gas for cooling. The novel system was designed based on a Static Ratchet Potential (SRP), known as a spatially asymmetric electric potential produced by an array of positive and negative electrodes. The ratchet potential produces an electrical current from the random Brownian motion of charged particles that are driven by thermal energy. The key parameter of the system is particle transportation; it has been studied under the condition of flashing ratchet potentials using several methods and examined experimentally, ensuring its functionality. In this study, a different approach is pursued to estimate particle transportation by evaluating the charged particle distribution and applying the other conditions of the SRP, showing continued energy harvesting potency from the particles’ transportation. Ultimately, power levels of 10 W were shown to be achievable from a 1 m long system tube of 10 cm radius.
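The flashing-ratchet transport mechanism the abstract builds on can be sketched with a minimal 1-D overdamped Langevin simulation (all parameters below are illustrative and are not taken from the IG-TEG design): particles diffusing in a periodically switched asymmetric sawtooth potential acquire a net drift even though the time-averaged force is zero.

```python
import numpy as np

rng = np.random.default_rng(1)
L, a = 1.0, 0.2          # spatial period; asymmetry (potential maximum at x = a)
U0, kT = 5.0, 1.0        # barrier height vs thermal energy (dimensionless units)
dt, gamma = 1e-4, 1.0    # time step; drag coefficient

def force(x):
    """Sawtooth force: steep rise of U on [0, a), gentle fall on [a, L)."""
    xp = x % L
    return np.where(xp < a, -U0 / a, U0 / (L - a))

x = np.zeros(5000)                       # ensemble of particles, start at minima
for step in range(20000):
    on = (step // 1000) % 2 == 0         # flash the potential on/off periodically
    drift = force(x) / gamma if on else 0.0
    x += drift * dt + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=x.size)
print("mean displacement:", x.mean())
```

During the off phase particles spread symmetrically; because the potential is asymmetric (a < L/2), the on phase recaptures more of them one period forward than backward, producing a positive mean displacement.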

Keywords: thermoelectric generator, ratchet potential, Brownian ratchet, energy harvesting, sustainable energy, green technology

Procedia PDF Downloads 57
32768 Load-Enabled Deployment and Sensing Range Optimization for Lifetime Enhancement of WSNs

Authors: Krishan P. Sharma, T. P. Sharma

Abstract:

Wireless sensor nodes are resource-constrained, battery-powered devices usually deployed in hostile and ill-disposed areas to cooperatively monitor physical or environmental conditions. Due to their limited power supply, the major challenge for researchers is to utilize the battery power so as to enhance the lifetime of the whole network. Communication and sensing are the two major sources of energy consumption in sensor networks. In this paper, we propose a deployment strategy for enhancing the average lifetime of a sensor network by effectively utilizing communication and sensing energy to provide full coverage. The proposed scheme is based on the fact that, due to heavy relaying load, sensor nodes near the sink drain energy at a much faster rate than other nodes in the network and consequently die much earlier. To counter this imbalance, the proposed scheme finds optimal communication and sensing ranges according to the effective load at each node and uses a non-uniform deployment strategy with a comparatively high density of nodes near the sink. The probable relaying load factor at a particular node is calculated, and the optimal communication distance and sensing range for each sensor node are adjusted accordingly. Thus, sensor nodes are placed at locations that optimize energy during network operation. A formal mathematical analysis for calculating the optimized locations is reported in the present work.
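The energy-hole effect that motivates the non-uniform deployment can be sketched with a simple corona (ring) model, in which every node generates one packet per round and inner rings forward all traffic from outer rings toward the sink (a textbook simplification, not the paper's exact load model):

```python
# Corona model: the network is divided into concentric unit-width rings around
# the sink; with uniform node density, ring i has area proportional to (2i - 1).
def relay_load(num_rings):
    """Packets handled per unit of node density in each ring, innermost first."""
    areas = [2 * i - 1 for i in range(1, num_rings + 1)]
    loads = []
    outer = sum(areas)          # all traffic still to cross the current ring
    for a in areas:
        loads.append(outer / a)  # own + relayed traffic, per unit ring area
        outer -= a               # traffic originating outside the next ring
    return loads

loads = relay_load(5)
print([round(l, 2) for l in loads])   # innermost ring carries the whole network
```

The steep drop in per-node load with distance from the sink is precisely why the proposed scheme places proportionally more nodes (and shorter communication ranges) near the sink.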

Keywords: load factor, network lifetime, non-uniform deployment, sensing range

Procedia PDF Downloads 363
32767 Effects of Interfacial Modification Techniques on the Mechanical Properties of Natural Particle Based Polymer Composites

Authors: Bahar Basturk, Secil Celik Erbas, Sevket Can Sarikaya

Abstract:

Composites combining particulates and polymer components have attracted great interest in various application areas such as the packaging, furniture, electronics and automotive industries. For strengthening plastic matrices, the utilization of natural fillers instead of traditional reinforcement materials has received increased attention. The properties of natural filler based polymer composites (NFPC) may be improved by applying proper surface modification techniques to the powder phase of the structures. In this study, acorn powder-epoxy and pine cone powder-epoxy composites containing up to 45 weight percent particulates were prepared by the casting method. Alkali treatment and acetylation techniques were applied to the natural particulates to investigate their influence on behaviour under mechanical forces. The effects of filler type and content on the tensile properties of the composites were compared with neat epoxy. According to the quasi-static tensile tests, the pine cone based composites showed slightly higher rigidity and strength compared to the acorn reinforced samples. Furthermore, the structures, independent of powder type and surface modification technique, showed higher tensile properties with increasing particle content.

Keywords: natural fillers, polymer composites, surface modifications, tensile properties

Procedia PDF Downloads 448
32766 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction

Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj

Abstract:

Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, due to its ability to produce powders with small particle sizes that maintain their favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison to non-cooled grinding. Proso millet bran is primarily used as an animal feed but has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry. For both applications, finer particle sizes of the bran could be beneficial. Thus, millet bran was ground for 2, 4, 8 and 12 minutes using a ball mill (CryoMill, Retsch GmbH, Haan, Germany) in three grinding modes: (I) without cooling, (II) at cryo-temperature, and (III) at cryo-temperature with a 1-minute intermediate cryo-cooling step included after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by a laser diffraction particle sizing method (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible from the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters; e.g., the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5)=62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Secondly, samples ground with LN₂ cooling exhibited lower diameters in comparison to non-cooled samples. For example, after 8 minutes of non-cooled grinding d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled the collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Thirdly, the application of the intermediate cryo-cooling step resulted in particle diameters (d(0.5)=15.83±0.36 µm, 12 min of grinding) similar to cryo-milling without this step (d(0.5)=16.33±2.09 µm, 12 min of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point to the potential beneficial effects of grinding millet bran at cryo-temperatures. Further research will show whether the lower particle sizes achieved in comparison to non-cooled grinding could result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.
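The size parameters reported above can be reproduced from any volume-weighted distribution; as a sketch (the sizes and fractions below are illustrative, not the measured bran data):

```python
import numpy as np

# Volume-weighted size parameters from a discrete particle size distribution.
sizes = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # diameters, um
vol = np.array([0.05, 0.10, 0.25, 0.30, 0.20, 0.10])       # volume fractions

cum = np.cumsum(vol)                                # cumulative undersize curve
d10, d50, d90 = (np.interp(q, cum, sizes) for q in (0.1, 0.5, 0.9))
span = (d90 - d10) / d50                            # relative distribution width
D43 = (vol * sizes).sum()                           # volume-moment mean D[4,3]
D32 = 1.0 / (vol / sizes).sum()                     # Sauter (surface) mean D[3,2]
print(round(d50, 1), round(span, 2), round(D43, 1), round(D32, 1))
```

These are the same quantities a laser diffraction instrument reports: the percentile diameters d(0.1)/d(0.5)/d(0.9) come from the cumulative curve, while D[4,3] and D[3,2] are moment means of the whole distribution.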

Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran

Procedia PDF Downloads 133
32765 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization

Authors: Gerald Gwamba

Abstract:

Formation and oil well sand production is one of the greatest and oldest concerns of the oil and gas industry. The production of sand particles may vary from very small, limited amounts to far elevated levels, with the potential to plug the pore spaces near the perforation points or to block production at the surface facilities. Therefore, a timely and reliable investigation of the conditions leading to the onset of sanding, and the quantification of sanding while producing, is imperative. The challenges of sand production are even more elevated when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit far differing characteristics, waxy crudes being more paraffinic while heavy crude oils exhibit more asphaltenic properties. Moreover, the combined effect of WHFC conditions presents more complexity in production than the individual effects, as it amounts to the consolidation of a cumulative opposing force. Research on a combined high WHFC system could therefore give a better representation of this cumulative effect, one that is more comparable to field conditions; a one-sided view of the individual effects on sanding has been argued to be, to some extent, misrepresentative of actual field conditions, since in the field all factors act together. Recognizing the limited customized research on sand production under the combined effect of WHFC, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors in relation to selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions.
In this regard, we seek to answer the following research question: How does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from the experimental data collection will inform a better justified sand control design for WHFC wells. In doing so, we hope to contribute to earlier contrasting arguments that sand production could potentially enable self-enhancement of well permeability through new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the author's PhD project on the optimization of sand control systems for WHFC wells.
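A DOE study of the kind described typically starts from a full-factorial run matrix over the candidate factors; a minimal sketch (the factor names follow the abstract, while the two-level coding and level labels are assumptions for illustration):

```python
from itertools import product

# Two-level full-factorial design over the interparticle factors named in the
# abstract; the level labels are placeholders, not measured values.
factors = {
    "strength":        ("low", "high"),
    "cementation":     ("weak", "strong"),
    "particle_size":   ("fine", "coarse"),
    "production_rate": ("low", "high"),
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))     # 2^4 = 16 experimental runs
print(runs[0])       # first run: all factors at their first level
```

Each run then gets a measured sanding response, and the factorial structure lets main effects and interaction effects (the "combined effect" the research question targets) be separated.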

Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments

Procedia PDF Downloads 117
32764 Adaptive Auth - Adaptive Authentication Based on User Attributes for Web Applications

Authors: Senthuran Manoharan, Rathesan Sivagananalingam

Abstract:

One of the main issues in system security is authentication. Authentication can be defined as the process of recognizing a user's identity, and it is the most important step in the access control process to safeguard data and resources from being accessed by unauthorized users. Static methods of authentication cannot ensure the genuineness of the user. For this reason, more innovative authentication mechanisms came into play: at first, two-factor authentication was introduced, and later, multi-factor authentication, to enhance the security of the system. As this also had some issues, adaptive authentication was subsequently introduced. In this research paper, the design of an adaptive authentication engine is put forward: a user risk profile is calculated based on user parameters, and the user is then challenged with a suitable authentication method.
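A minimal sketch of such an engine's core loop (the attribute names, weights and thresholds below are assumptions, not the paper's actual risk model): a risk score is computed from user attributes and mapped to an authentication challenge of matching strength.

```python
# Risk-based challenge selection: weighted binary risk signals -> challenge tier.
def risk_score(attrs):
    """Weighted sum of binary risk signals derived from user attributes."""
    weights = {"new_device": 0.4, "unusual_location": 0.3,
               "unusual_time": 0.1, "failed_attempts": 0.2}
    return sum(w for k, w in weights.items() if attrs.get(k))

def challenge(attrs):
    """Map the risk score to a progressively stronger authentication method."""
    score = risk_score(attrs)
    if score < 0.2:
        return "password"                 # low risk: single factor
    if score < 0.5:
        return "password+otp"             # medium risk: add a second factor
    return "password+otp+approval"        # high risk: step-up authentication

print(challenge({"new_device": True}))
print(challenge({"new_device": True, "unusual_location": True}))
```

In a production engine the weights would come from a trained model rather than hand-tuning, which is where the machine learning keyword enters.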

Keywords: authentication, adaptive authentication, machine learning, security

Procedia PDF Downloads 220
32763 Improving the Performance of a Gas Turbine Power Plant by a Modified Axial Turbine

Authors: Hakim T. Kadhim, Faris A. Jabbar, Aldo Rona, Audrius Bagdanaviciu

Abstract:

Computer-based optimization techniques can be employed to improve the efficiency of energy conversion processes, including reducing the aerodynamic loss in a thermal power plant turbomachine. In this paper, towards mitigating secondary flow losses, a design optimization workflow is implemented for the casing geometry of a 1.5-stage axial flow turbine that improves the turbine isentropic efficiency. The improved turbine is used in an open thermodynamic gas cycle with regeneration and cogeneration. Performance estimates are obtained with the commercial software Cycle-Tempo. Design and off-design conditions are considered, as well as variations in inlet air temperature. Reductions in both the natural gas specific fuel consumption and CO2 emissions are predicted for the gas turbine cycle fitted with the new casing design. These gains are attractive towards enhancing the competitiveness and reducing the environmental impact of thermal power plants.
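The isentropic efficiency targeted by the casing optimization is the ratio of the actual to the ideal (isentropic) enthalpy drop across the turbine; a minimal ideal-gas sketch (the states and gamma below are illustrative, not the paper's cycle data):

```python
# Isentropic efficiency of an expansion for an ideal gas with constant cp.
def turbine_isentropic_efficiency(T_in, T_out, p_in, p_out, gamma=1.4):
    """eta = actual enthalpy drop / isentropic enthalpy drop (constant cp)."""
    # isentropic exit temperature from the pressure ratio
    T_out_s = T_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)
    return (T_in - T_out) / (T_in - T_out_s)

# illustrative states: 1400 K / 10 bar inlet, 800 K / 1 bar actual exit
eta = turbine_isentropic_efficiency(T_in=1400.0, T_out=800.0,
                                    p_in=10e5, p_out=1e5)
print(round(eta, 3))
```

Raising this ratio, e.g. by reshaping the casing to suppress secondary flows, directly lowers the fuel consumption the cycle analysis reports.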

Keywords: axial flow turbine, computational fluid dynamics, gas turbine power plant, optimization

Procedia PDF Downloads 143