Search results for: optimized closed polygonal segment method


19948 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies

Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof

Abstract:

Exposure of galena (PbS), tealite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead as high as 69.46 mg/L. Conventional treatment does not easily remove lead from water. A promising and emerging treatment technology for lead removal is the application of the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm², an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and a BP-S electrode arrangement. These experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were the current density, electrode spacing, electrode arrangement, and Liquid Motion Driving Mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that an increase in current density and the use of LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, among other settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved. The electrode and energy consumptions were 0.192 kg/m³ and 2.56 kWh/m³, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption process was also spontaneous and endothermic, with increased randomness; higher process temperatures enhanced the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and showing good adsorption efficiency of the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead using the electrocoagulation method.
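
As an illustration of the kinetic and isotherm fitting described above, the minimal sketch below fits a pseudo-second-order kinetic model and a Freundlich isotherm to adsorption data with SciPy. All numerical values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (not the authors' code): fitting pseudo-second-order kinetics
# and the Freundlich isotherm to hypothetical adsorption data with SciPy.
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order model: q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)
def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# Freundlich isotherm: qe = Kf * Ce^(1/n)
def freundlich(Ce, Kf, n):
    return Kf * Ce**(1.0 / n)

# Hypothetical measurements (placeholders, not data from the study)
t = np.array([5, 10, 20, 30, 45, 60])           # min
q_t = np.array([2.1, 3.4, 4.6, 5.2, 5.6, 5.8])  # mg/g
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # mg/L
q_e = np.array([1.8, 2.6, 3.7, 5.1, 7.2])       # mg/g

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[6.0, 0.01])
(Kf_fit, n_fit), _ = curve_fit(freundlich, Ce, q_e, p0=[2.0, 2.0])
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
print(f"Kf = {Kf_fit:.2f}, 1/n = {1.0/n_fit:.2f}")
```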

Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics

Procedia PDF Downloads 144
19947 Intellectual Telepathy between Arabs and Pashtuns; Study of Their Proverbs as a Model

Authors: Shams Ul Hussain Zaheer, Bibi Alia, Shehla Shams

Abstract:

With the creation of human beings, almost all of them were blessed with the gift of the power of expression, and this series starts from the life of Adam (A.S.) in Paradise, when he was blessed with language and knowledge and given priority over the Angels. Later on, when the population spread and many other languages came into being, the modes of expression of different peoples in different regions remained varied. When Arabic was formed as a language after Ismail (A.S.), and his sons spread across the Gulf area, words adopted from other Gulf languages also became part of this new language. Besides this, the tone of expression in other areas of the world was different, but the incidents, the norms of good and bad, the parameters of like and dislike, the styles of thinking, and the rules of good and bad governance along with social values remained roughly the same. People practiced their lives according to the set norms everywhere in the world. Especially in the two regions of Hijaz and Khurasan, wherein Arabs and Pashtuns respectively were dwelling, it seems that their social values were very close to each other. These norms are reflected in various kinds of literature of both nations, but this article deals specifically with their proverbs. The article discusses the intellectual telepathy between them from a research perspective, sets out the similarities and dissimilarities between the proverbs of both, and sketches for readers how the styles of thinking and expression remain similar among humans. As the study is a comparative analysis of the proverbs, a comparative methodology has been adopted in the article.

Keywords: intellectual telepathy, hijaz, arab, khurasan, pashtun, proverbs, comparison

Procedia PDF Downloads 86
19946 Effect of Removing Hub Domain on Human CaMKII Isoforms Sensitivity to Calcium/Calmodulin

Authors: Ravid Inbar

Abstract:

CaMKII (calcium/calmodulin-dependent protein kinase II) makes up 2% of the protein in our brain and has a critical role in memory formation and long-term potentiation of neurons. Despite this, research has yet to uncover the role of one of its domains in the activation of this kinase. The following study proposes to express the protein without the hub domain in E. coli, leaving only the kinase and regulatory segment of the protein. Next, a series of kinase assays will be conducted to elucidate the role the hub domain plays in CaMKII sensitivity to calcium/calmodulin activation. The hub domain may be important for activation; however, it may also be a variety of domains working together to influence protein activation, and not the hub alone. Characterization of a protein is critical to the future understanding of its function, as well as for identifying pharmacological targets in patients with diseases.

Keywords: CaMKII, hub domain, kinase assays, kinase + reg seg

Procedia PDF Downloads 86
19945 Rare-Earth Ions Doped Zirconium Oxide Layers for Optical and Photovoltaic Applications

Authors: Sylwia Gieraltowska, Lukasz Wachnicki, Bartlomiej S. Witkowski, Marek Godlewski

Abstract:

Oxide layers doped with rare-earth (RE) ions in an optimized way can absorb short-wavelength (ultraviolet) light, which is then converted to visible light by so-called down-conversion. Down-conversion mechanisms are usually exploited to modify the incident solar spectrum: multiple low-energy photons are generated to exploit the energy of one incident high-energy photon. These RE-doped oxide materials have attracted a great deal of attention from researchers because of their potential for optical manipulation in optical devices (detectors, temperature sensors, compact solid-state lasers, and light-emitting diodes), bio-analysis, medical therapy, display technologies, and light harvesting (such as in photovoltaic cells). Zirconium dioxide (ZrO2) multilayer structures doped with RE ions (Eu, Tb, Ce) were tested as active layers that can convert short-wavelength emission to light in the visible range (the down-conversion mechanism). For these applications, an original approach was used in which the ZrO2 layers were deposited by the Atomic Layer Deposition (ALD) method and doped with RE ions using the spin-coating technique. ALD films are deposited at relatively low temperature (well below 250°C). This can be an effective method to achieve white-light emission and, in this way, to improve the light-conversion efficiency by extending the spectral range absorbed by a solar-cell material. Photoluminescence (PL), X-ray diffraction (XRD), scanning electron microscopy (SEM), and atomic force microscopy (AFM) measurements are analyzed. The research was financially supported by the National Science Centre (decision No. DEC-2012/06/A/ST7/00398 and DEC-2013/09/N/ST5/00901).

Keywords: ALD, oxide layers, photovoltaics, thin films

Procedia PDF Downloads 268
19944 Microvoid Growth in the Interfaces during Aging

Authors: Jae-Yong Park, Gwancheol Seo, Young-Ho Kim

Abstract:

Microvoids, sometimes called Kirkendall voids, generally form at the interfaces between Sn-based solders and Cu and degrade the mechanical and electrical properties of the solder joints. Microvoid formation is attributed to the rapid interdiffusion between Sn and Cu and to the impurity content of the Cu. Cu electroplating from acid solutions has been widely used by the microelectronic packaging industry for both printed circuit board (PCB) and integrated circuit (IC) applications. The quality of the electroplated Cu, which can be optimized through the electroplating conditions, is critical for solder joint reliability. In this paper, the influence of the electroplating conditions on microvoid growth at the interfaces between Sn-3.0Ag-0.5Cu (SAC) solder and a Cu layer was investigated during isothermal aging. The Cu layers were electroplated by controlling the additives of the electroplating bath and the current density to induce various microvoid densities. The electroplating baths consisted of sulfate, sulfuric acid, and additives, and a current density of 5-15 mA/cm² was used for each bath. After aging at 180 °C for up to 250 h, a typical bi-layer of Cu6Sn5 and Cu3Sn intermetallic compounds (IMCs) gradually grew at the SAC/Cu interface, and the microvoid density in the Cu3Sn varied with the electroplating conditions. As the current density increased, microvoid formation was accelerated in all electroplating baths: the higher the current density, the higher the impurity content in the electroplated Cu. When polyethylene glycol (PEG) and Cl- ions were mixed in an electroplating bath, microvoid formation was the highest among the electroplating baths. On the other hand, the overall IMC thickness was similar in all samples irrespective of the electroplating conditions. The impurity content in the electroplated Cu influenced microvoid growth, but IMC growth was not affected by the impurity content. In conclusion, the electroplating conditions should be properly optimized to avoid excessive microvoid formation, which results in brittle fracture of the solder joint under high-strain-rate loading.

Keywords: electroplating, additive, microvoid, intermetallic compound

Procedia PDF Downloads 253
19943 A Review on Higher-Order Spline Techniques for Solving Burgers Equation Using B-Spline Methods and Variation of B-Spline Techniques

Authors: Maryam Khazaei Pool, Lori Lewis

Abstract:

This is a summary of articles based on higher-order B-spline methods and variations of B-spline methods, such as the Quadratic B-spline Finite Elements Method, the Exponential Cubic B-Spline Method, the Septic B-spline Technique, the Quintic B-spline Galerkin Method, and the B-spline Galerkin Method based on the Quadratic B-spline Galerkin method (QBGM) and the Cubic B-spline Galerkin method (CBGM). In this paper, we study the B-spline methods and variations of B-spline techniques for finding a numerical solution to the Burgers’ equation. A set of fundamental definitions, including the Burgers equation, spline functions, and B-spline functions, is provided. For each method, the main technique is discussed, as well as the discretization and stability analysis. A summary of the numerical results is provided, and the efficiency of each method presented is discussed. A general conclusion is provided in which we compare the computational results of all the presented schemes and describe the effectiveness and advantages of these methods.
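
Since all of the schemes surveyed are built on B-spline basis functions, the following minimal sketch (not taken from the reviewed papers) evaluates B-spline basis functions of arbitrary degree with the Cox-de Boor recursion; the knot vector and degree chosen here are purely illustrative.

```python
# Minimal sketch: evaluating B-spline basis functions with the Cox-de Boor
# recursion, the building block of the Galerkin/collocation schemes reviewed.
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x (Cox-de Boor)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Uniform knots on an extended interval around [0, 1]; a cubic basis (p = 3)
# corresponds to CBGM, p = 5 to the quintic Galerkin scheme, p = 7 to septic, etc.
p = 3
knots = np.linspace(-p * 0.1, 1.0 + p * 0.1, 11 + 2 * p)
print([round(bspline_basis(i, p, knots, 0.5), 4) for i in range(len(knots) - p - 1)])
```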

Keywords: Burgers’ equation, Septic B-spline, modified cubic B-spline differential quadrature method, exponential cubic B-spline technique, B-spline Galerkin method, quintic B-spline Galerkin method

Procedia PDF Downloads 120
19942 Order Optimization of a Telecommunication Distribution Center through Service Lead Time

Authors: Tamás Hartványi, Ferenc Tóth

Abstract:

The performance of a European telecommunication distribution center is measured by service lead time and quality. The operation model is CTO (customized to order), namely a high-mix customization of telecommunication network equipment and parts. The CTO operation comprises material receiving, warehousing, and network and server assembly, configured to order based on customer specifications. The variety of products and orders does not support a mass-production structure. One of the success factors in satisfying the customer is to have a proper aggregated planning method for the operation in order to achieve optimized human resources and highly efficient asset utilization. The research investigates several methods to find a proper way of simulating the order book, where the practical optimization problem may contain thousands of variables; the simulation running times of the developed algorithms were therefore taken into account with high importance. Two operations research models were developed: in the first, customer demand is given in orders with no changeover time; in the second, customer demands are given for product types and the changeover time is constant.

Keywords: CTO, aggregated planning, demand simulation, changeover time

Procedia PDF Downloads 264
19941 Improving the Management of Delirium of Surgical Inpatients

Authors: Shammael Selorfia

Abstract:

This quality improvement project aimed to improve junior doctors' and nurses' knowledge of, and confidence in, diagnosing and managing delirium on inpatient surgical wards in a tertiary hospital. The study set out to develop a standardised assessment and management checklist for all staff working with patients presenting with signs of delirium, to increase staff confidence in dealing with delirium, and to improve the quality of referrals sent to the Mental Health Liaison team over a 6-month period. A significant proportion of the Mental Health Liaison triage nurses' time was being spent on referrals for delirium; data showed that 28% of all delirium referrals from surgical teams were being closed at triage, reflecting a poor standard of referral quality. A qualitative survey of junior doctors in 6 surgical specialties in a UK tertiary hospital was conducted. These specialties included general surgery, vascular, plastics, urology, neurosurgery, and orthopaedics. The standardised checklist was distributed to all surgical wards. The Mental Health Liaison team's delirium caseload before the intervention was compared with that afterwards. A qualitative survey was repeated at the end of the 3-month cycle, and the overall caseload of the Mental Health Liaison team was compared to pre-QIP data, with the aim of improving the quality of referrals and reducing the workload on the Mental Health Liaison team. At the end of the project cycle, we demonstrated an improvement in the quality of referrals, with the percentage of referrals being closed at triage decreasing by 8%. Our surveys also indicated an increase in knowledge of the official trust delirium guidelines and in confidence at managing these patients. This project highlights that a new approach to delirium using multi-component interventions is needed, in which the diagnosis of delirium is shared amongst medical and nursing staff and everyone plays a role in management. The key is improving awareness of delirium and encouraging the use of recognized diagnostic tools and official guidelines. Recommendations were made to the trust on how to implement a long-lasting change.

Keywords: delirium, surgery, quality, improvement

Procedia PDF Downloads 76
19940 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling

Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang

Abstract:

Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in predicting vehicle volume, as it cannot account for the crossed effect of urban configurational characteristics and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at the urban scale, both developed within the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, while the AB model incorporates transformed space syntax components of the MRA models into the agents' spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characteristics and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrates the ability to estimate vehicle volume in the urban network while considering the combined effects of configurational characteristics and land-use patterns at the street segment level.
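
For readers unfamiliar with the multilevel setup, the sketch below fits a random-intercept and random-slope linear model with statsmodels on synthetic data; the variable names (integration, choice, employment_density, district) are hypothetical stand-ins, not the variables used in the paper.

```python
# Minimal sketch (hypothetical variable names, synthetic data): a multilevel
# linear model with random effects grouped by urban context, in the spirit of
# the ML vehicle volume model described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "integration": rng.normal(size=n),         # space syntax configurational measure
    "choice": rng.normal(size=n),              # space syntax configurational measure
    "employment_density": rng.normal(size=n),  # socio-economic factor
    "district": rng.integers(0, 10, size=n),   # grouping level (urban context)
})
df["log_volume"] = (0.6 * df["integration"] + 0.3 * df["choice"]
                    + 0.4 * df["employment_density"] + rng.normal(scale=0.5, size=n))

# Random intercept plus a random slope for integration, per district
model = smf.mixedlm("log_volume ~ integration + choice + employment_density",
                    df, groups=df["district"], re_formula="~integration")
result = model.fit()
print(result.summary())
```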

Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model

Procedia PDF Downloads 140
19939 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test

Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano

Abstract:

This study proposes a novel use of a method to determine the mechanical properties of fruits by means of indentation tests. The method combines experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of a banana. The banana was assumed to be a one-layer material with isotropic linear elastic mechanical behaviour; the Young's modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
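
To illustrate the inverse-analysis idea, the sketch below identifies a Young's modulus by fitting a forward model to indentation data. The authors use a finite element forward model, whereas here an analytical Hertzian spherical-contact formula stands in for it, and the indenter radius, Poisson's ratio, and measurements are all hypothetical.

```python
# Minimal sketch of the inverse-analysis idea: fit an elastic modulus so that a
# forward model reproduces measured indentation data. The authors use a finite
# element forward model; here a Hertzian spherical-contact formula stands in.
import numpy as np
from scipy.optimize import curve_fit

R = 0.005   # indenter radius [m] (hypothetical)
nu = 0.49   # Poisson's ratio assumed for the fruit tissue (hypothetical)

def hertz_force(delta, E):
    """Force-displacement for a rigid sphere indenting an elastic half-space."""
    E_star = E / (1.0 - nu**2)
    return (4.0 / 3.0) * E_star * np.sqrt(R) * delta**1.5

# Hypothetical indentation measurements: displacement [m] and force [N]
delta = np.linspace(0.0005, 0.004, 8)
force = hertz_force(delta, 0.3e6) * (1 + np.random.default_rng(1).normal(0, 0.03, 8))

(E_fit,), _ = curve_fit(hertz_force, delta, force, p0=[1e5])
print(f"Identified Young's modulus: {E_fit / 1e6:.2f} MPa")
```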

Keywords: finite element method, fruits, inverse analysis, mechanical properties

Procedia PDF Downloads 353
19938 Linear Array Geometry Synthesis with Minimum Sidelobe Level and Null Control Using Taguchi Method

Authors: Amara Prakasa Rao, N. V. S. N. Sarma

Abstract:

This paper describes the synthesis of linear array geometry with minimum sidelobe level and null control using the Taguchi method. Based on the concept of the orthogonal array, the Taguchi method effectively reduces the number of tests required in an optimization process. The Taguchi method has been successfully applied in many fields, such as mechanical engineering, chemical engineering, and power electronics. Compared to other evolutionary methods such as genetic algorithms, simulated annealing, and particle swarm optimization, the Taguchi method is much easier to understand and implement and requires less computational/iterative processing to optimize the problem. Different cases are considered to illustrate the performance of this technique. Simulation results show that this method outperforms other evolutionary algorithms (such as GA and PSO) for smart antenna system design.
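
The quantity such a synthesis procedure evaluates for each candidate geometry is the array factor and its peak sidelobe level. The sketch below computes both for a uniform half-wavelength linear array; the main-beam mask width is an illustrative assumption, not a value from the paper.

```python
# Minimal sketch: array factor and peak sidelobe level of a linear array, i.e.
# the kind of objective the Taguchi search evaluates for each candidate geometry.
import numpy as np

def array_factor(positions_wavelengths, theta):
    """Uniform-amplitude array factor for element positions given in wavelengths."""
    k = 2 * np.pi  # wavenumber times wavelength
    phase = np.outer(np.cos(theta), positions_wavelengths) * k
    return np.abs(np.exp(1j * phase).sum(axis=1))

theta = np.linspace(1e-3, np.pi - 1e-3, 2000)
positions = np.arange(10) * 0.5          # 10 elements, half-wavelength spacing
af = array_factor(positions, theta)
af_db = 20 * np.log10(af / af.max())

# Peak sidelobe level: largest lobe outside the main beam around broadside;
# the 0.25 rad main-beam mask (out to roughly the first nulls) is an assumption.
main_beam = np.abs(theta - np.pi / 2) < 0.25
psl = af_db[~main_beam].max()
print(f"Peak sidelobe level: {psl:.1f} dB")
```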

Keywords: array factor, beamforming, null placement, optimization method, orthogonal array, Taguchi method, smart antenna system

Procedia PDF Downloads 384
19937 Study Employed a Computer Model and Satellite Remote Sensing to Evaluate the Temporal and Spatial Distribution of Snow in the Western Hindu Kush Region of Afghanistan

Authors: Noori Shafiqullah

Abstract:

Millions of people reside downstream of river basins that heavily rely on snowmelt originating from the Hindu Kush (HK) region; snowmelt plays a critical role as a primary water source in these areas. This study aimed to evaluate snowfall and snowmelt characteristics in the HK region across altitudes ranging from 2019 m to 4533 m. To achieve this, the study employed a combination of remote sensing techniques and the Snow Model (SM) to analyze the spatial and temporal distribution of Snow Water Equivalent (SWE). By integrating the simulated Snow-Cover Area (SCA) with data from the Moderate Resolution Imaging Spectroradiometer (MODIS), the study optimized the Precipitation Gradient (PG) for the snowfall assessment and the degree-day factor (DDF) for the snowmelt distribution. Ground-observed data from various elevations were used to calculate a temperature lapse rate of -7.0 °C km⁻¹. The DDF value was determined as 3 mm °C⁻¹ d⁻¹ for altitudes below 3000 m and 3 to 4 mm °C⁻¹ d⁻¹ for altitudes above 3000 m. Moreover, the distribution of precipitation varies with elevation, with the PG being 0.001 m⁻¹ at elevations below 4000 m and 0 m⁻¹ at elevations above 4000 m. The study successfully utilized the SM to assess SCA and SWE by incorporating the two optimized parameters. The analysis of the simulated SCA against MODIS data yielded coefficients of determination (R²) of 0.95 and 0.97 over the years 2014-2015, 2015-2016, and 2016-2017. These results demonstrate that the SM is a valuable tool for managing water resources in mountainous watersheds such as the HK, where data scarcity poses a challenge.
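
A minimal sketch of the degree-day snowmelt relation at the core of such a snow model is given below. The lapse rate and DDF values are taken from the abstract, while the melt threshold, station elevation, and temperature series are hypothetical.

```python
# Minimal sketch of the degree-day relation used in snow models of this kind:
# melt = DDF * max(T - T_melt, 0), with station temperature extrapolated to the
# band elevation using the lapse rate reported above. Forcing values are hypothetical.
import numpy as np

lapse_rate = -7.0 / 1000.0    # degC per m (from the abstract: -7.0 degC/km)
ddf_low, ddf_high = 3.0, 4.0  # mm/degC/day below and above 3000 m (from the abstract)
t_melt = 0.0                  # melt threshold temperature [degC] (common assumption)

def daily_melt(t_station, z_station, z_band):
    """Daily snowmelt [mm w.e.] for an elevation band from station temperature."""
    t_band = t_station + lapse_rate * (z_band - z_station)
    ddf = ddf_low if z_band < 3000.0 else ddf_high
    return ddf * max(t_band - t_melt, 0.0)

# Hypothetical station series at 2100 m; melt computed at 2500 m and 3800 m
t_station = np.array([4.0, 6.5, 8.0, 3.0, 10.0])
for z in (2500.0, 3800.0):
    melt = [daily_melt(t, 2100.0, z) for t in t_station]
    print(z, [round(m, 2) for m in melt])
```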

Keywords: improved MODIS, experiment, snow water equivalent, snowmelt

Procedia PDF Downloads 64
19936 Hyper-Production of Lysine through Fermentation and Its Biological Evaluation on Broiler Chicks

Authors: Shagufta Gulraiz, Abu Saeed Hashmi, Muhammad Mohsin Javed

Abstract:

Lysine required for poultry feed is imported into Pakistan to fulfil the desired dietary needs. The present study was designed to produce maximum lysine by utilizing cheap sources in order to save foreign exchange. To achieve the goal of lysine production through fermentation, large-scale production of lysine was carried out in a 7.5 L stirred glass vessel fermenter with wild and mutant Brevibacterium flavum (B. flavum) using all pre-optimized conditions. The identification of the produced lysine was carried out by TLC and an amino acid analyzer. A toxicity evaluation of the produced lysine was performed before feeding it to broiler chicks. During the biological trial, concentrated fermented broth containing 8% lysine was used in the poultry rations as the source of lysine for the test birds. Fermenter-scale studies showed that maximum lysine (20.8 g/L) was produced by the wild culture at 250 rpm, 1.5 vvm aeration, and 6.0% inoculum under controlled pH conditions after 56 h of fermentation, whereas the mutant (BFENU2) gave a maximum lysine yield of 36.3 g/L under optimized conditions after 48 h. Amino acid profiling showed 1.826% lysine in the fermented broth produced by wild B. flavum and 2.644% by the mutant strain (BFENU2). The toxicity evaluation showed that the produced lysine is safe for consumption by broilers. The biological evaluation results showed that the produced lysine was as good as commercial lysine in terms of weight gain, feed intake, and feed conversion ratio. A cheap and practical bioprocess for lysine production was thus demonstrated, which can be exploited commercially in Pakistan to save foreign exchange.

Keywords: lysine, fermentation, broiler chicks, biological evaluation

Procedia PDF Downloads 545
19935 Computational Fluid Dynamics Modelling of the Improved Airflow on a Ballistic Grille Using a Porous Medium Approach

Authors: Mapula Mothomogolo, Anria Clarke

Abstract:

Ballistic grilles are adopted on military vehicles to mitigate the vulnerability of the radiator. The design of ballistic grilles needs to address conflicting requirements: shielding the surface area of the radiator from incoming projectile threats while providing sufficient airflow through the radiator to yield adequate heat rejection. These conflicting requirements result in a unique and challenging design problem. In this paper, the airflow through a ballistic grille is investigated using a computational modelling approach. A comparative study was conducted between a standard grille and a ballistic grille of a military vehicle, and the results were used as a benchmark for optimizing the ballistic grille, with pressure drop selected as the optimization parameter. The grilles were modelled as a porous medium to account for the pressure drop in the porous region. The effects of the porous zone were accounted for in the source term of the momentum (Navier-Stokes) equations, which defines the pressure drop in the porous region as a function of the velocity. A pressure-drop-curve approach was used to determine the Darcy (viscous) and inertial resistance coefficients of the source term. The empirically determined coefficients were used as simulation inputs for a more accurate pressure-drop prediction in the porous region. Additionally, the ballistic grille was optimized using an adjoint solver (the shape optimization module in Ansys Fluent) to reduce the pressure drop through the ballistic grille by 30%. Based on the simulation results, the optimized ballistic grille geometry needs to be experimentally tested to validate the numerical simulation data.
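
A minimal sketch of the pressure-drop-curve step is shown below: the viscous (Darcy) and inertial resistance coefficients are recovered from a least-squares fit of dp/L = (mu/alpha)v + C2(rho/2)v², the form commonly used for porous-medium momentum source terms. The air properties, grille thickness, and pressure-drop data are hypothetical.

```python
# Minimal sketch of the pressure-drop-curve approach: fit the porous-medium
# source-term coefficients from dp/L = (mu/alpha)*v + C2*(rho/2)*v**2 using a
# least-squares fit to hypothetical pressure-drop data for the grille.
import numpy as np

mu, rho, L = 1.8e-5, 1.2, 0.05   # air viscosity [Pa s], density [kg/m3], grille thickness [m]

# Hypothetical pressure drop vs. approach velocity (measured or simulated)
v = np.array([2.0, 4.0, 6.0, 8.0, 10.0])         # m/s
dp = np.array([18.0, 52.0, 104.0, 172.0, 258.0])  # Pa

# dp/L = a*v + b*v^2  ->  solve for a, b in a least-squares sense
A = np.column_stack([v, v**2])
a, b = np.linalg.lstsq(A, dp / L, rcond=None)[0]

inv_alpha = a / mu   # viscous (Darcy) resistance 1/alpha [1/m^2]
C2 = 2.0 * b / rho   # inertial resistance coefficient [1/m]
print(f"1/alpha = {inv_alpha:.3e} 1/m^2, C2 = {C2:.3e} 1/m")
```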

Keywords: ballistic grille, darcy coefficient, optimization, porous medium

Procedia PDF Downloads 27
19934 Residual Power Series Method for System of Volterra Integro-Differential Equations

Authors: Zuhier Altawallbeh

Abstract:

This paper investigates approximate analytical solutions of a general form of a system of Volterra integro-differential equations by using the residual power series method (RPSM for short). The proposed method produces the solutions in terms of a convergent series, requires no linearization or small perturbation, and reproduces the exact solution when the solution is a polynomial. Some examples are given to demonstrate the simplicity and efficiency of the proposed method. Comparisons with the Laplace decomposition algorithm verify that the new method is very effective and convenient for solving systems of pantograph equations.

Keywords: integro-differential equation, pantograph equations, system of initial value problems, residual power series method

Procedia PDF Downloads 416
19933 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The solution and error are obtained by solving an initial value problem whose solution carries the information about the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and shows good performance in computational cost compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
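
As a baseline illustration of how an embedded pair drives step-size control (not the authors' improved scheme), the sketch below integrates a simple test problem with SciPy's adaptive RK45 and reports the accepted step sizes and the error against the exact solution.

```python
# Minimal sketch: baseline behaviour of an embedded Runge-Kutta pair, where the
# difference between the 4th- and 5th-order solutions drives the step size.
# SciPy's adaptive RK45 is used as a stand-in, not the authors' improved scheme.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return np.array([-2.0 * t * y[0]])   # y' = -2 t y, exact solution exp(-t^2)

sol = solve_ivp(f, (0.0, 3.0), [1.0], method="RK45", rtol=1e-6, atol=1e-9)
steps = np.diff(sol.t)
err = np.abs(sol.y[0] - np.exp(-sol.t**2))
print(f"accepted steps: {len(steps)}, min/max step: {steps.min():.3e}/{steps.max():.3e}")
print(f"max absolute error: {err.max():.2e}")
```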

Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step

Procedia PDF Downloads 459
19932 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height of the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the optimization of 5-step-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with the ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
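
The sketch below illustrates the GA-over-MLP idea on synthetic data with scikit-learn: a small population of (hidden units, learning rate) candidates is evolved by truncation selection and mutation, with fitness given by negative validation MSE. It is a toy version of the approach, not the authors' implementation, and all data and GA settings are illustrative.

```python
# Minimal sketch (synthetic data, not the authors' implementation): a toy GA
# that tunes MLPRegressor hyperparameters (hidden units, learning rate) by
# minimising validation MSE, as in the GA-MLP idea described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

def fitness(ind):
    hidden, lr = ind
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         activation="relu", max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return -mean_squared_error(y_va, model.predict(X_va))

def mutate(ind):
    hidden, lr = ind
    return (max(2, hidden + rng.integers(-4, 5)),
            float(np.clip(lr * rng.lognormal(0, 0.3), 1e-4, 0.1)))

pop = [(int(rng.integers(4, 64)), float(10 ** rng.uniform(-3, -1))) for _ in range(8)]
for gen in range(5):                                   # 5 generations, population of 8
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:4]                               # truncation selection
    pop = parents + [mutate(parents[rng.integers(0, 4)]) for _ in range(4)]

best = max(pop, key=fitness)
print("best (hidden units, learning rate):", best, "validation MSE:", -fitness(best))
```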

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 103
19931 Nearly Zero-Energy Regulation and Buildings Built with Prefabricated Technology: The Case of Hungary

Authors: András Horkai, Attila Talamon, Viktória Sugár

Abstract:

There is an urgent need nowadays to reduce energy demand and the current level of greenhouse gas emissions, to use renewable energy sources, and to increase energy efficiency. On the other hand, the European Union (EU) countries are largely dependent on energy imports and are vulnerable to disruptions in energy supply, which may, in turn, threaten the functioning of their current economic structure. Residential buildings represent a significant part of the energy consumption of the building stock, and since only a small part of the building stock is replaced every year, it is essential to increase the energy efficiency of the existing buildings. The present paper focuses only on buildings built with industrialized technology and their opportunities within the boundaries of the nearly zero-energy regulation. The paper traces the emergence of the panel construction method and the past and present of the 'panel' problem in Hungary, with a short outlook to Europe. The study also shows the possibilities for meeting the nearly zero-energy and cost-optimized requirements for residential buildings by analyzing renovation scenarios for an existing residential typology.

Keywords: Budapest, energy consumption, industrialized technology, nearly zero-energy buildings

Procedia PDF Downloads 343
19930 Classifications of Images for the Recognition of People’s Behaviors by SIFT and SVM

Authors: Henni Sid Ahmed, Belbachir Mohamed Faouzi, Jean Caelen

Abstract:

Behavior recognition has been studied for realizing driver-assistance systems and automated navigation, and it is an important field of study for the intelligent building. In this paper, a method for recognizing behavior from real images was studied. Images were divided into several categories according to the actual weather, distance, angle of view, etc. SIFT (Scale-Invariant Feature Transform) was first used to detect and describe key points, because SIFT features are invariant to image scale and rotation and are robust to changes in viewpoint and illumination. The goal is to develop a robust and reliable system composed of two fixed cameras in every room of the intelligent building, connected to a computer for the acquisition of video sequences. With a program that takes these video sequences as inputs, SIFT is used to represent the images of the video sequences, and SVM-Light (support vector machine) is used as the classification tool in order to classify people's behaviors in the intelligent building, so as to provide maximum comfort with optimized energy consumption.
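
A minimal sketch of such a SIFT-plus-SVM pipeline with OpenCV and scikit-learn is given below. The image file names and behavior labels are hypothetical, and the mean-pooling of descriptors is a simplification standing in for a full bag-of-visual-words representation.

```python
# Minimal sketch (hypothetical file list and labels): SIFT descriptors are
# extracted with OpenCV, pooled into one fixed-length vector per image (mean
# pooling stands in for a full bag-of-visual-words vocabulary), then classified
# with an SVM as in the pipeline described above.
import cv2
import numpy as np
from sklearn.svm import SVC

def sift_feature(path, sift):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:              # no keypoints found in the image
        return np.zeros(128)
    return desc.mean(axis=0)      # 128-D pooled descriptor

sift = cv2.SIFT_create()
image_paths = ["room1_walk.png", "room1_sit.png", "room2_walk.png", "room2_sit.png"]  # hypothetical
labels = ["walking", "sitting", "walking", "sitting"]                                  # hypothetical

X = np.vstack([sift_feature(p, sift) for p in image_paths])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.predict(sift_feature("room3_new_frame.png", sift).reshape(1, -1)))  # hypothetical frame
```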

Keywords: video analysis, people behavior, intelligent building, classification

Procedia PDF Downloads 373
19929 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform Earth observation systems. In existing block bundle adjustment methods, using control information of a single observation scale and precision makes it impossible to screen the control information and to assign reasonable, effective weights, which reduces the convergence and the reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter the control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy is realized to optimize the weight selection of control data that is less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical research results. This research is expected to move beyond the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data can be addressed both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 282
19928 Short Association Bundle Atlas for Lateralization Studies from dMRI Data

Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara

Abstract:

Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for the identification of short association bundles of the superficial white matter (SWM), based on a whole-brain inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers, belonging to a group of subjects, according to a distance measure between fibers, using a non-linear registration (DTI-TK). The algorithm performs automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then established: the symmetric horizontal reflection of the right-hemisphere bundles was calculated in order to obtain their position in the left hemisphere, and the intersection between similar bundles was computed. Pairs of bundles with a fiber intersection percentage higher than 50% were considered similar, and the similar bundles between both hemispheres were fused and symmetrized, yielding 30 bundles common to both hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold between 6 and 8 mm according to the bundle length. Finally, a laterality index was calculated based on the bundle volume. Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); there was no tendency of lateralization according to brain region. Many factors can affect the results, such as tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary in order to establish the influence of these factors and to evaluate SWM laterality.
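
As an illustration, the snippet below computes a volume-based laterality index, LI = (V_left - V_right) / (V_left + V_right), a common convention (assumed here) in which positive values indicate left lateralization; the bundle volumes are hypothetical.

```python
# Minimal sketch (hypothetical volumes): a volume-based laterality index,
# LI = (V_left - V_right) / (V_left + V_right), a common convention in which
# positive values indicate left lateralization and negative values right.
bundle_volumes_mm3 = {               # hypothetical left/right volumes per bundle
    "IP_SP_1i": (1200.0, 1550.0),
    "IP_SP_2i": (1900.0, 1480.0),
    "LO_LO_1i": (800.0, 1010.0),
}

for name, (v_left, v_right) in bundle_volumes_mm3.items():
    li = (v_left - v_right) / (v_left + v_right)
    side = "left" if li > 0 else "right"
    print(f"{name}: LI = {li:+.2f} ({side}-lateralized)")
```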

Keywords: dMRI, hierarchical clustering, lateralization index, tractography

Procedia PDF Downloads 328
19927 Attribute Selection for Preference Functions in Engineering Design

Authors: Ali E. Abbas

Abstract:

Industrial Engineering is a broad multidisciplinary field with intersections and applications in numerous areas. When designing a product, it is important to determine the appropriate attributes of value and the preference function for which the product is optimized. This paper provides some guidelines on appropriate selection of attributes for preference and value functions for engineering design.

Keywords: decision analysis, industrial engineering, direct vs. indirect values, engineering management

Procedia PDF Downloads 299
19926 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario

Authors: J. S. Savier

Abstract:

Real power is the component of power that is converted into useful energy, whereas reactive power is the component that cannot be converted into useful energy but is required for the magnetization of various electrical machines. If the reactive power is compensated at the consumer end, the need for reactive power flow from the generators to the load can be avoided and hence the overall power loss can be reduced. In this scenario, this paper presents a succinct method, called the JSS method, for allocating reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made while deriving the reactive power loss allocation.

Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method

Procedia PDF Downloads 370
19925 Modification of Underwood's Equation to Calculate Minimum Reflux Ratio for Column with One Side Stream Upper Than Feed

Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi

Abstract:

Distillation is one of the most important and most widely used separation methods in industrial practice. There are different ways to design a distillation column; one of these is the shortcut method, in which material balances and equilibrium relations are employed to calculate the number of trays in the column. Several methods are classified as shortcut methods, one of which is the Fenske-Underwood-Gilliland method. In this method, the minimum reflux ratio is calculated by the Underwood equation. Underwood proposed an equation that is useful for a simple distillation column with one feed and one top and one bottom product. In this study, the Underwood method is extended to predict the minimum reflux ratio for a column with one side stream above the feed. The results of this model are compared with the McCabe-Thiele method and show that the proposed method is able to calculate the minimum reflux ratio with very small error.
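
For reference, the classical two-product Underwood equations that the paper extends are sketched below: the root theta of the first Underwood equation between the key volatilities is found numerically, and the minimum reflux ratio follows from the second equation. The compositions and relative volatilities used are hypothetical, and the side-stream modification itself is not reproduced.

```python
# Minimal sketch of the classical (two-product) Underwood equations that the
# paper extends: find the root theta of sum(alpha*z/(alpha - theta)) = 1 - q
# between the key relative volatilities, then Rmin = sum(alpha*xD/(alpha - theta)) - 1.
# Compositions and volatilities below are hypothetical.
import numpy as np
from scipy.optimize import brentq

alpha = np.array([2.5, 1.0, 0.4])   # relative volatilities (light key, heavy key, heavy)
z = np.array([0.4, 0.3, 0.3])       # feed mole fractions
xD = np.array([0.95, 0.05, 0.0])    # distillate mole fractions
q = 1.0                             # saturated-liquid feed

def underwood1(theta):
    return np.sum(alpha * z / (alpha - theta)) - (1.0 - q)

# The relevant root lies between the volatilities of the two keys (1.0 and 2.5)
theta = brentq(underwood1, 1.0 + 1e-6, 2.5 - 1e-6)
Rmin = np.sum(alpha * xD / (alpha - theta)) - 1.0
print(f"theta = {theta:.4f}, minimum reflux ratio Rmin = {Rmin:.3f}")
```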

Keywords: minimum reflux ratio, side stream, distillation, Underwood’s method

Procedia PDF Downloads 403
19924 Demographics Are Not Enough! Targeting and Segmentation of Anti-Obesity Campaigns in Mexico

Authors: Dagmara Wrzecionkowska

Abstract:

Mass media campaigns against obesity are often designed to impact large audiences. This usually means that their audience is defined based on general demographic characteristics like age, gender, and occupation, not taking into account psychographics like behavior, motivations, and wants. Using psychographics as the basis for audience segmentation is common practice in successful campaigns, as it allows more relevant messages to be developed. It also serves the purpose of identifying key segments, those that generate the best return on investment; for a health campaign, these would be the segments that have the best chance of being converted to a healthy lifestyle at the lowest cost. This paper presents the limitations of demographic targeting, based on findings from a reception study of IMSS anti-obesity TV commercials, and proposes mothers as the first level of segmentation in the process of identifying the key segment for these campaigns.

Keywords: anti-obesity campaigns, mothers, segmentation, targeting

Procedia PDF Downloads 399
19923 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled effectively with a K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted, such as improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-Improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models employed are Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as inputs to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
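
The sketch below illustrates the imbalance-handling step on synthetic data. Plain SMOTE from imbalanced-learn is used here as a stand-in for the K-means-clustering-based SMOTE variant described in the abstract, and the class ratio is illustrative.

```python
# Minimal sketch of the imbalance-handling step on synthetic data. Plain SMOTE
# from imbalanced-learn stands in here for the K-means-clustering-based SMOTE
# variant described above.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.98, 0.02],
                           random_state=0)   # roughly 2% "fraud" class
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after :", Counter(y_res))
```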

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 124
19922 Static and Dynamic Analysis of Hyperboloidal Helix Having Thin Walled Open and Close Sections

Authors: Merve Ermis, Murat Yılmaz, Nihal Eratlı, Mehmet H. Omurtag

Abstract:

The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via a mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is taken as the local coordinate system for the helix geometry. The helix domain is discretized with two-noded curved elements, and linear shape functions are used. Each node of the curved element has 12 degrees of freedom, namely three translations, three rotations, two shear forces, one axial force, two bending moments, and one torque. The finite element matrices are derived using exact nodal values of the curvatures and arc length, which are interpolated linearly over the element axial length. The torsional moments of inertia for the closed and open square box sections are obtained by a finite element solution of the St. Venant torsion formulation; with the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic behaviour of the hyperboloidal helix is investigated, and benchmark problems are presented for the literature.

Keywords: hyperboloidal helix, squared cross section, thin walled cross section, torsional rigidity

Procedia PDF Downloads 374
19921 Influence of Strengthening of Hip Abductors and External Rotators in Treatment of Patellofemoral Pain Syndrome

Authors: Karima Abdel Aty Hassan Mohamed, Manal Mohamed Ismail, Mona Hassan Gamal Eldein, Ahmed Hassan Hussein, Abdel Aziz Mohamed Elsingerg

Abstract:

Background: Patellofemoral pain (PFP) is a common musculoskeletal pain condition, especially in females. Decreased hip muscle strength has been implicated as a contributing factor, yet the relationships between pain, hip muscle strength, and function are not known. Objective: The purpose of this study is to investigate the effects of strengthening the hip abductors and lateral rotators on pain intensity, function, and hip abductor and lateral rotator eccentric and concentric torques in patients with PFPS. Methods: Thirty patients, with ages ranging from eighteen to thirty-five years, participated in this study and were assigned to two experimental groups. Group A consisted of 15 patients (11 females and 4 males) with a mean age of 20.8 (±2.73) years and received a closed kinetic chain exercise program, stretching exercises for tight lower extremity soft tissues, and hip strengthening exercises. Group B consisted of 15 patients (12 females and 3 males) with a mean age of 21.2 (±3.27) years and received the closed kinetic chain exercise program and the stretching exercises for tight lower extremity soft tissues only. Treatment was given 2-3 times/week for 6 weeks. Patients were evaluated pre- and post-treatment for pain severity, knee joint function, and hip abductor and external rotator concentric/eccentric peak torques. Results: The results revealed significant differences in pain and function between the two groups, while all values improved in both groups. Conclusion: A six-week rehabilitation program focusing on knee strengthening exercises, whether or not supplemented by hip strengthening exercises, is effective in improving function, reducing pain, and improving hip muscle torque in patients with PFPS. However, adding hip abduction and lateral rotation strengthening exercises seems to reduce pain and improve function more efficiently.

Keywords: patellofemoral pain syndrome, hip muscles, rehabilitation, isokinetic

Procedia PDF Downloads 440
19920 Grey Relational Analysis Coupled with Taguchi Method for Process Parameter Optimization of Friction Stir Welding on 6061 AA

Authors: Eyob Messele Sefene, Atinkut Atinafu Yilma

Abstract:

The highest strength-to-weight ratio criterion has attracted increasing interest in virtually all areas where weight reduction is indispensable. One of the recent advances in manufacturing aimed at this goal is friction stir welding (FSW). The process is widely used for joining similar and dissimilar non-ferrous materials. In FSW, the mechanical properties of the weld joints are governed by properly selected process parameters. This paper presents the optimum process parameters found in an attempt to attain enhanced mechanical properties of the weld joint. The experiments were conducted on 5 mm thick 6061 aluminum alloy sheets in a butt joint configuration. The process parameters considered were rotational speed, traverse speed (feed rate), axial force, dwell time, tool material, and tool profile. The process parameters were optimized using a mixed L18 orthogonal array and the grey relational analysis method with the larger-is-better quality characteristic. The mechanical properties of the weld joint were examined through tensile, hardness, and liquid penetrant tests at ambient temperature. ANOVA was conducted in order to identify the significant process parameters. This research shows that dwell time, rotational speed, tool shape, and traverse speed are significant, with a joint efficiency of about 82.58%. Nine confirmatory tests were conducted, and the results indicate that the average values of the grey relational grade fall within the 99% confidence interval; hence the experiment is shown to be reliable.
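
The sketch below shows how a grey relational grade is computed for a set of experimental runs, with larger-is-better normalisation and a distinguishing coefficient of 0.5 (a common choice); the response values are hypothetical, not the measured results of this study.

```python
# Minimal sketch (hypothetical response values): grey relational grade with
# larger-is-better normalisation and distinguishing coefficient zeta = 0.5,
# as used to rank the L18 experimental runs.
import numpy as np

# Rows = experimental runs, columns = responses (e.g. tensile strength, hardness)
responses = np.array([[245.0, 78.0],
                      [262.0, 81.0],
                      [230.0, 75.0],
                      [258.0, 83.0]])
zeta = 0.5

# Larger-is-better normalisation to [0, 1], then deviation from the ideal sequence
norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))
delta = 1.0 - norm

# Grey relational coefficient and grade (mean over the responses of each run)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3), "best run:", int(grade.argmax()) + 1)
```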

Keywords: friction stir welding, optimization, 6061 AA, Taguchi

Procedia PDF Downloads 91
19919 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observation data when the remove-compute-restore technique is applied. Therefore, a comparison study between candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model EGM2008 and the Global Gravity Model GECO, the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study have been used to compute residual data from both gravity field models, together with a Digital Terrain Model (DTM) to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using a least squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO global model, based on a comparison of error degree variances and on the ground-truth evaluation.
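
The sketch below illustrates the ground-truth evaluation step: model height anomalies are compared with GPS/levelling values and a bias-and-tilt corrector surface is removed by least squares, a common practice assumed here rather than the exact adjustment used in the paper; all coordinates and height anomalies are hypothetical.

```python
# Minimal sketch (hypothetical values): ground-truth evaluation of a geopotential
# model against GPS/levelling. A bias-and-tilt corrector surface is removed from
# the height-anomaly differences by least squares, a common practice assumed here.
import numpy as np

# Hypothetical benchmark data: longitude, latitude [deg], GPS/levelling and model
# height anomalies [m]
lon = np.array([2.1, 3.4, 5.0, 6.2, 7.8])
lat = np.array([35.2, 36.0, 36.4, 35.7, 36.8])
zeta_gps = np.array([46.12, 45.80, 45.31, 44.95, 44.40])
zeta_model = np.array([46.30, 45.95, 45.50, 45.05, 44.61])

d = zeta_gps - zeta_model
A = np.column_stack([np.ones_like(lon), lon - lon.mean(), lat - lat.mean()])
coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
residuals = d - A @ coeffs

print("bias/tilt coefficients:", np.round(coeffs, 4))
print(f"RMS before fit: {np.sqrt(np.mean(d**2)):.3f} m, after fit: {np.sqrt(np.mean(residuals**2)):.3f} m")
```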

Keywords: quasigeoid, gravity anomalies, covariance, GGM

Procedia PDF Downloads 131