Search results for: linear acceleration method

17990 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model

Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong

Abstract:

In recent years, real estate prediction or valuation has been a topic of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, making it difficult for many consumers to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network, which has become popular in recent years, together with the orthogonal array of the Taguchi method, this study aimed to find the optimal parameter combination among the levels of the orthogonal array, after the system had evaluated the different parameter combinations, so that the artificial neural network produced the most accurate results. The experimental results also demonstrated that the method presented in this study outperformed traditional machine learning. Finally, they showed that the model proposed in this study had the best predictive performance and could significantly reduce simulation time: the best predictive results could be found more efficiently with fewer experiments. Users could thus predict a real estate transaction price that is close to current actual prices.
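
The sketch below is a hedged illustration of the idea described above: a standard Taguchi L9(3⁴) orthogonal array drives a small back-propagation network search so that only 9 of the 27 full-factorial parameter combinations are trained. The chosen factors (learning rate, hidden units, iterations), their levels, and the synthetic appraisal data are assumptions for illustration, not the authors' actual design.

```python
# Sketch: Taguchi L9 orthogonal array driving a back-propagation network search.
# Assumptions: factor choices, levels, and the synthetic data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 5))                      # hypothetical appraisal features
y = X @ np.array([120.0, 80.0, 50.0, 30.0, 10.0]) + rng.normal(scale=5.0, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standard Taguchi L9(3^4) array; only the first three columns are used here.
L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]
learning_rates = [1e-3, 1e-2, 1e-1]
hidden_units   = [8, 16, 32]
max_iters      = [200, 500, 1000]

best = None
for a, b, c in L9:                                   # 9 runs instead of 27 full-factorial runs
    model = MLPRegressor(hidden_layer_sizes=(hidden_units[b],),
                         learning_rate_init=learning_rates[a],
                         max_iter=max_iters[c], random_state=0)
    model.fit(X_tr, y_tr)
    score = model.score(X_te, y_te)                  # R^2 on held-out data
    if best is None or score > best[0]:
        best = (score, learning_rates[a], hidden_units[b], max_iters[c])
print("best R^2 and parameter combination:", best)
```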

Keywords: artificial neural network, Taguchi method, real estate valuation model, investors

Procedia PDF Downloads 481
17989 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
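
The abstract does not give the exact form of the constraint, so the sketch below only illustrates the general idea it describes: factors extracted as principal components of a regularized covariance matrix, compared with plain PC factors. The shrinkage-toward-the-diagonal regularization, panel sizes, and data-generating process are assumptions.

```python
# Sketch: factors estimated as principal components of a regularized covariance matrix.
# Assumption: simple shrinkage toward the diagonal stands in for the paper's constraint.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 50, 120, 2                          # fewer observations than cross-sections (N > T)
F = rng.normal(size=(T, r))                   # latent common factors
L = rng.normal(size=(N, r))                   # loadings
X = F @ L.T + rng.normal(scale=1.0, size=(T, N))   # panel with idiosyncratic noise

S = (X.T @ X) / T                             # N x N sample covariance
alpha = 0.5                                   # assumed shrinkage intensity
S_reg = (1 - alpha) * S + alpha * np.diag(np.diag(S))

def pc_factors(cov, X, r):
    # eigenvectors of the (possibly regularized) covariance give estimated loadings
    w, V = np.linalg.eigh(cov)
    load = V[:, -r:]                          # top-r eigenvectors
    return X @ load                           # T x r estimated factors

F_pc = pc_factors(S, X, r)                    # ordinary PC estimator
F_cn = pc_factors(S_reg, X, r)                # regularized-PC sketch of the CnPC idea
print(F_pc.shape, F_cn.shape)
```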

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 146
17988 Algorithms for Fast Computation of Pan Matrix Profiles of Time Series Under Unnormalized Euclidean Distances

Authors: Jing Zhang, Daniel Nikovski

Abstract:

We propose an approximation algorithm called LINKUMP to compute the Pan Matrix Profile (PMP) under the unnormalized l∞ distance (useful for value-based similarity search) using a double-ended queue and linear interpolation. The algorithm has time/space complexities comparable to the state-of-the-art algorithm for typical PMP computation under the normalized l₂ distance (useful for shape-based similarity search). We validate its efficiency and effectiveness through extensive numerical experiments and a real-world anomaly detection application.
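
The full LINKUMP algorithm is not reproduced here; the sketch below only shows the monotonic double-ended-queue primitive it relies on, computing a sliding-window maximum in linear time, which is the natural building block for l∞ (maximum absolute difference) subsequence distances. The example series are assumed.

```python
# Sketch: linear-time sliding-window maximum with a double-ended queue,
# the core primitive for unnormalized l-infinity subsequence distances.
from collections import deque

def sliding_max(values, window):
    """Return the maximum of each length-`window` sliding window in O(n)."""
    out, dq = [], deque()          # dq holds indices whose values are decreasing
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()               # drop elements that can never be a maximum again
        dq.append(i)
        if dq[0] <= i - window:
            dq.popleft()           # drop indices that slid out of the window
        if i >= window - 1:
            out.append(values[dq[0]])
    return out

# l-infinity distance between two aligned windows is the sliding max of |a - b|
a = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]
b = [2.0, 7.0, 1.0, 8.0, 2.0, 8.0, 1.0]
diffs = [abs(x - y) for x, y in zip(a, b)]
print(sliding_max(diffs, window=3))
```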

Keywords: pan matrix profile, unnormalized euclidean distance, double-ended queue, discord discovery, anomaly detection

Procedia PDF Downloads 241
17987 Synergistic Studies of Liposomes of Clove and Cinnamon Oil in Oral Health Care

Authors: Sandhya Parameswaran, Prajakta Dhuri

Abstract:

Despite great improvements in health care, the world oral health report states that dental problems still persist, particularly among underprivileged groups in both developing and developed countries. Dental caries and periodontal diseases are identified as the most important oral health problems globally. Acidic foods and beverages can affect natural teeth, and chronic exposure often leads to the development of dental erosion, abrasion, and decay. In recent years, there has been increased interest in essential oils. These are secondary metabolites and possess antibacterial, antifungal and antioxidant properties. Essential oils are volatile and chemically unstable in the presence of air, light, moisture and high temperature. Hence many novel methods, such as liposomal encapsulation of the oils, have been introduced to enhance their stability and bioavailability. This research paper focuses on two essential oils, clove oil and cinnamon oil. Clove oil was obtained from Syzygium aromaticum Linn using a Clevenger apparatus; it contains eugenol and β-caryophyllene. Cinnamon oil, from the bark of Cinnamomum cassia, contains cinnamaldehyde. The objective of the current research was to develop a liposomal carrier system containing clove and cinnamon oil and to study their synergistic activity against dental pathogens when formulated as a gel. Methodology: The essential oils were first tested for their antimicrobial activity against the dental pathogens Lactobacillus acidophilus (MTCC No. 10307, MRS broth) and Streptococcus mutans (MTCC No. 890, Brain Heart Infusion agar). The oils were analysed by UV spectroscopy for eugenol and cinnamaldehyde content. Standard eugenol was linear from 5 ppm to 25 ppm at 282 nm and standard cinnamaldehyde from 1 ppm to 5 ppm at 284 nm. The concentration of eugenol in clove oil was found to be 62.65% w/w, and that of cinnamaldehyde 5.15% w/w. The oils were then formulated into liposomes. Liposomes were prepared by the thin film hydration method using phospholipid, cholesterol, and the oils dissolved in a chloroform-methanol (3:1) mixture. The organic solvent was evaporated in a rotary evaporator above the lipid transition temperature. The film was hydrated with phosphate buffer (pH 5.5). The various batches of liposomes were characterized and compared for their size, loading rate, encapsulation efficiency and morphology. When evaluated for entrapment efficiency, the prepared liposomes showed 65% entrapment for clove oil and 85% for cinnamon oil. They were also tested for their antimicrobial activity against dental pathogens, and their synergistic activity was studied. Based on the activity and the entrapment efficiency, the amount of liposomes required to prepare 1 g of the gel was calculated. The gel was prepared using a simple ointment base and contained 0.56% of cinnamon and clove liposomes. A simultaneous method of analysis for eugenol and cinnamaldehyde was then developed using HPLC. The prepared gels were then studied for their stability as per ICH guidelines. Conclusion: It was found that the liposomes exhibited spherical vesicles and protected the essential oils from degradation. Liposomes therefore constitute a suitable system for encapsulation of volatile, unstable essential oil constituents.

Keywords: cinnamon oil, clove oil, dental caries, liposomes

Procedia PDF Downloads 188
17986 Morphology and Risk Factors for Blunt Aortic Trauma in Car Accidents: An Autopsy Study

Authors: Ticijana Prijon, Branko Ermenc

Abstract:

Background: Blunt aortic trauma (BAT) includes various morphological changes that occur during deceleration, acceleration and/or body compression in traffic accidents. The various forms of BAT, from limited laceration of the intima to complete transection of the aorta, depend on the force acting on the vessel wall and the tolerance of the aorta to injury. The force depends on the change in velocity, the dynamics of the accident and the seating position in the car. Tolerance to aortic injury depends on the anatomy, histological structure and pathomorphological alterations due to aging or disease of the aortic wall. An overview of the literature and medical documentation reveals that different terms are used to describe certain forms of BAT, which can lead to misinterpretation of findings or diagnoses. We therefore propose a classification that would enable uniform systematic screening of all forms of BAT. We have classified BAT into three morphological types: TYPE I (intramural), TYPE II (transmural) and TYPE III (multiple) aortic ruptures, with appropriate subtypes. Methods: All car accident casualties examined at the Institute of Forensic Medicine from 2001 to 2009 were included in this retrospective study. Autopsy reports were used to determine the occurrence of each morphological type of BAT in deceased drivers, front seat passengers and other passengers in cars, and to define the morphology of BAT in relation to the accident dynamics and the age of the fatalities. Results: A total of 391 fatalities in car accidents were included in the study. TYPE I, TYPE II and TYPE III BAT were observed in 10.9%, 55.6% and 33.5% of cases, respectively. The incidence of BAT in drivers, front seat passengers and other passengers was 36.7%, 43.1% and 28.6%, respectively. In frontal collisions, the incidence of BAT was 32.7%, in lateral collisions 54.2%, and in other traffic accidents 29.3%. The average age of fatalities with BAT was 42.8 years, and of those without BAT 39.1 years. Conclusion: Identification and early recognition of the risk factors of BAT following a traffic accident is crucial for successful treatment of patients with BAT. Front seat passengers over 50 years of age who have been injured in a lateral collision are the most at risk of BAT.

Keywords: aorta, blunt trauma, car accidents, morphology, risk factors

Procedia PDF Downloads 503
17985 An Analysis of the Performances of Various Buoys as the Floats of Wave Energy Converters

Authors: İlkay Özer Erselcan, Abdi Kükner, Gökhan Ceylan

Abstract:

The power generated by eight point-absorber type wave energy converters, each having a different buoy, is calculated in order to investigate the performance of the buoys in this study. The calculations are carried out by modeling three different sea states observed at two different locations in the Black Sea. The floats analyzed in this study have two basic geometries and four different draft/radius (d/r) ratios. The buoys possess the shapes of a semi-ellipsoid and a semi-elliptic paraboloid, and the draft/radius ratios range from 0.25 to 1 in increments of 0.25. The radiation forces acting on the buoys due to the oscillatory motions of these bodies are evaluated by employing a 3D panel method along with a distribution of 3D pulsating sources in the frequency domain. The wave forces acting on the buoys, taken as the sum of Froude-Krylov forces and diffraction forces, are calculated using linear wave theory. Furthermore, the wave energy converters are assumed to be taut-moored to the seabed so that the secondary body, which houses a power take-off system, oscillates with much smaller amplitudes compared to the buoy. As a result, it is assumed that there is no significant contribution to power generation from the motions of the housing body and that the only contribution comes from the buoy. The power take-off systems of the wave energy converters are high pressure oil hydraulic systems that are identical in terms of their characteristic parameters. The results show that the power generated by wave energy converters with semi-ellipsoid floats is higher than that of those with semi-elliptic paraboloid floats at both locations and in all sea states. It is also determined that the generated power does not vary monotonically with the draft/radius ratio of the floats. Although the highest power level is obtained with a semi-ellipsoid float whose draft/radius ratio equals 1, in some cases floats with a draft/radius ratio of 0.25 delivered higher power than the floats with a draft/radius ratio equal to 1.

Keywords: Black Sea, buoys, hydraulic power take-off system, wave energy converters

Procedia PDF Downloads 346
17984 Recirculated Sedimentation Method to Control Contamination for Algal Biomass Production

Authors: Ismail S. Bostanci, Ebru Akkaya

Abstract:

The production of microalgae-derived biodiesel, fertilizer or industrial chemicals from wastewater has great potential. Water from a municipal wastewater treatment plant, in particular, is a very important nutrient source for biofuel production. Open-pond systems are the lower-cost culture systems for microalgae biomass production, but there are many hurdles for commercial algal biomass production at large scale. One of the important technical bottlenecks for microalgae production in open systems is culture contamination. Algae culture contaminants can generally be described as invading organisms that could cause a pond crash; these invading organisms can be competitors, parasites, and predators. Contamination is unavoidable in open systems, and potential contaminant organisms are already inoculated if wastewater is utilized for algal biomass cultivation. It is therefore important to keep contaminants at an acceptable level in order to reach the true potential of algal biofuel production. There are several contamination management methods in the algae industry, ranging from mechanical, chemical and biological applications to changes in growth conditions; however, none of them is accepted as a generally suitable contamination control method. This experiment describes an innovative contamination control method, the 'Recirculated Sedimentation Method', to manage contamination and avoid pond crashes. The method can be used for the production of algal biofuel, fertilizer, etc., and for algal wastewater treatment. To evaluate the performance of the method on an algal culture, an experiment was conducted for 90 days in a lab-scale raceway (60 L) reactor using non-sterilized and non-filtered wastewater (secondary effluent and centrate of anaerobic digestion). The application of the method provided the following: removing contaminants (predators and diatoms) and other debris from the reactor without discharging the culture (with microscopic evidence), increasing the raceway tank’s suspended solids holding capacity (770 mg L-1), increasing the ammonium removal rate (29.83 mg L-1 d-1), decreasing algal and microbial biofilm formation on the inner walls of the reactor, and washing out nitrifiers generated in the reactor to prevent ammonium consumption.

Keywords: contamination control, microalgae culture contamination, pond crash, predator control

Procedia PDF Downloads 199
17983 Power-Aware Adaptive Coverage Control with Consensus Protocol

Authors: Mert Turanli, Hakan Temeltas

Abstract:

In this paper, we propose a new approach to the coverage control problem by using adaptive coordination and power-aware control laws. Nonholonomic mobile nodes position themselves suboptimally according to a time-varying density function using Centroidal Voronoi Tessellations. The Lyapunov stability analysis of the adaptive and decentralized approach is given. A linear consensus protocol is used to establish synchronization among the mobile nodes, and repulsive forces prevent nodes from colliding. Simulation results show that by using power-aware control laws, the energy consumption of the nodes can be reduced.
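
A minimal sketch of the linear consensus protocol mentioned above, in discrete time over an assumed communication graph: every node moves toward the average of its neighbours and all states converge to the initial mean. The graph, step size, and initial states are assumptions, and the coverage-control and repulsion terms from the paper are omitted.

```python
# Sketch: discrete-time linear consensus x <- x - eps * L x on an assumed graph.
import numpy as np

A = np.array([[0, 1, 0, 1],          # assumed undirected communication graph (4 nodes)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
x = np.array([0.0, 2.0, 5.0, 9.0])   # initial node states
eps = 0.1                            # step size, small enough for stability

for _ in range(200):
    x = x - eps * (L @ x)            # all states converge to the initial average
print(x, x.mean())
```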

Keywords: power aware, coverage control, adaptive, consensus, nonholonomic, coordination

Procedia PDF Downloads 349
17982 Efficient Numerical Simulation for LDC

Authors: Badr Alkahtani

Abstract:

In this poster, numerical solutions of the two-dimensional and three-dimensional lid-driven cavity problem are presented by solving the steady Navier-Stokes equations at high Reynolds numbers, where the computation becomes difficult. The lid-driven cavity problem concerns a fluid contained in a square or cubic enclosure whose upper wall is moving. In two dimensions, we use the streamfunction-vorticity formulation to solve the problem in a square domain. A spectral collocation method is employed to discretize the problem in the x and y directions. The problem is coded in the MATLAB programming environment. Solutions at high Reynolds numbers are obtained up to Re = 20000 on a fine grid of 131 × 131. Also in this presentation, the numerical solutions for the three-dimensional lid-driven cavity problem are obtained by solving the velocity-vorticity formulation of the Navier-Stokes equations (which is the first time that this has been simulated with special boundary conditions) for various Reynolds numbers. A spectral collocation method is employed to discretize the y and z directions, and a finite difference method is used to discretize the x direction. Numerical solutions are obtained for Reynolds numbers up to 200. The work presented here shows the efficiency of the methods used to simulate the physical problem, where accurate simulations of the lid-driven cavity are obtained at the high Reynolds numbers mentioned above. The results for the two-dimensional problem differ considerably from those of previous researchers.
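
The abstract does not give the solver details, so the sketch below only constructs the standard Chebyshev differentiation matrix of the kind a spectral collocation discretization in x and y would use (the usual textbook formula), not the authors' full streamfunction-vorticity solver; the polynomial order and the test function are assumptions.

```python
# Sketch: Chebyshev spectral collocation differentiation matrix (textbook construction),
# the kind of operator used to discretize the lid-driven cavity in x and y.
import numpy as np

def cheb(N):
    """Differentiation matrix D and Chebyshev points x for polynomial order N."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

D, x = cheb(16)
# sanity check: differentiate sin(pi*x) and compare with pi*cos(pi*x)
err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
print("max derivative error:", err)
```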

Keywords: lid driven cavity, navier-stokes, simulation, Reynolds number

Procedia PDF Downloads 706
17981 The Effects of Damping Devices on Displacements, Velocities and Accelerations of Structures

Authors: Radhwane Boudjelthia

Abstract:

The most recent earthquakes that occurred in the world, and particularly in Algeria, have killed thousands of people and caused severe damage. The example etched in our memory is the earthquake in the regions of Boumerdes and Algiers (the Boumerdes earthquake of May 21, 2003). For all the actors involved in the building process, the earthquake is the litmus test for construction. The goal we set ourselves is to contribute to the implementation of a thoughtful approach to the seismic protection of structures. For many engineers, the most conventional approach to protecting works (buildings and bridges) against the effects of earthquakes is to increase rigidity. This approach is not always effective, especially when the context favors resonance and the amplification of seismic forces. The field of earthquake engineering has therefore made significant inroads, catalyzed among other things by the development of computational techniques and the use of powerful test facilities. This has led to the emergence of several innovative technologies, such as the introduction of special isolation devices between the infrastructure and the superstructure. This approach, commonly known as 'seismic isolation', absorbs significant seismic forces without damage to the structure, thus ensuring the protection of lives and property. In addition, the forces transmitted to the construction by ground shaking are located mainly at the supports. With base isolation, the natural period of the construction increases and seismic loads are reduced, so the seismic motion is attenuated. Likewise, base isolation may be used in combination with earthquake dampers in order to control the deformation of the isolation system and the absolute displacement of the superstructure located above the isolation interface. Alternatively, earthquake dampers can be used on their own to reduce oscillation amplitudes and thus reduce seismic loads. The use of damping devices represents an effective solution for the rehabilitation of existing structures. Given that all these acceleration-reducing means are considered passive, much research has been conducted for several years to develop active control systems for the response of buildings to earthquakes.

Keywords: earthquake, building, seismic forces, displacement, resonance, response

Procedia PDF Downloads 122
17980 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia

Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono

Abstract:

Background/Objective: The Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient’s diet at a hospital, and can be calculated as body weight (kg) / body height (m)². However, the BMI of an individual who has difficulty carrying their weight or standing up straight cannot always be measured directly. The aim of this study was to form a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable of this study was BMI, and the independent variables consisted of sex and anthropometric measurements, which included ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for their correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was Mid-Upper Arm Circumference/√Ulna Length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). Furthermore, MUAC alone also has a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI. However, measuring MUAC/√UL is considered inconvenient, which may cause difficulties when applied in the field. Conclusion: The prediction model considered most ideal to estimate BMI is: Male BMI (kg/m²) = 1.109(MUAC (cm)) – 9.202 and Female BMI (kg/m²) = 0.236 + 0.825(MUAC (cm)), based on its high accuracy and the convenience of measuring MUAC in the field.
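
A small sketch applying the two prediction equations reported above; the coefficients are taken verbatim from the abstract, while the example MUAC measurements are assumed for illustration.

```python
# Sketch: BMI predicted from mid-upper arm circumference (MUAC) using the
# equations reported in the abstract (coefficients copied from it verbatim).
def predict_bmi(muac_cm: float, sex: str) -> float:
    if sex == "male":
        return 1.109 * muac_cm - 9.202
    if sex == "female":
        return 0.236 + 0.825 * muac_cm
    raise ValueError("sex must be 'male' or 'female'")

# hypothetical example measurements
print(round(predict_bmi(30.0, "male"), 2))    # ~24.07 kg/m^2
print(round(predict_bmi(27.0, "female"), 2))  # ~22.51 kg/m^2
```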

Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length

Procedia PDF Downloads 210
17979 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir

Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali

Abstract:

Aim: The objective of the research undertaken is analytical method validation (HPLC method) of the anti-HIV drug Elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This was envisaged in order to determine a suitable technique for drug estimation to be employed in further research. Furthermore, a comparative pharmacokinetic profile of the drug from lipid architectonics and from a drug suspension would be obtained after oral administration. Method: Lipid architectonics (LA) of EVG were formulated using a probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method was validated for linearity, precision, accuracy and robustness, and the limit of detection (LOD) and limit of quantification (LOQ) were determined. Furthermore, HPLC quantification of the forced degradation studies was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic and thermal). For the pharmacokinetic (PK) study, albino Wistar rats weighing between 200-250 g were used. The different formulations were given by the oral route, and blood was collected at designated time intervals. A plasma concentration-time profile was plotted, from which the following parameters were determined:

Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics

Procedia PDF Downloads 135
17978 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System

Authors: Nishanthi N. S., Srikanth Vedantam

Abstract:

Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection was an alternative process which addressed the above drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary development of pores that facilitate non-targeted diffusion of materials. Advantages of this method include high efficiency of intracellular delivery, a wide choice of materials, improved viability and high throughput. This cell squeezing process can be studied more deeply by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule with FDPD particles connected through a spring network to represent the membrane. The total energy of the capsule is associated with linear and radial springs in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights vary. When the strain on the membrane was correlated with the development of pores, we found higher porosity in the capsule flowing in the wider channel. This is due to the localization of strain to a smaller region in the narrower constriction channel. However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained for a longer time will reduce cell viability.
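
As a hedged sketch of the membrane bookkeeping described above, the snippet below evaluates the energy of a 2D ring of membrane particles: linear springs between neighbours plus a fixed-area penalty. The spring constants, rest lengths, penalty weight, and particle count are assumptions, the radial springs are not included, and the dissipative/random DPD forces of the actual model are omitted.

```python
# Sketch: spring-network energy of a 2D capsule membrane with a fixed-area constraint.
# Assumptions: parameter values are illustrative; DPD dissipative/random forces omitted.
import numpy as np

def capsule_energy(pos, k_spring, l0, k_area, area0):
    nxt = np.roll(pos, -1, axis=0)
    # linear springs between neighbouring membrane particles
    lengths = np.linalg.norm(nxt - pos, axis=1)
    e_spring = 0.5 * k_spring * np.sum((lengths - l0) ** 2)
    # enclosed area via the shoelace formula, penalised toward its reference value
    area = 0.5 * np.abs(np.sum(pos[:, 0] * nxt[:, 1] - nxt[:, 0] * pos[:, 1]))
    e_area = 0.5 * k_area * (area - area0) ** 2
    return e_spring + e_area

theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta)])        # undeformed capsule
l0 = np.linalg.norm(ring[1] - ring[0])
print(capsule_energy(ring, k_spring=100.0, l0=l0, k_area=50.0, area0=np.pi))
```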

Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations

Procedia PDF Downloads 138
17977 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge cost in money and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to the proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates a meaningful prediction. As is evident, OCRs are mostly located in the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, and it showed agreement of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector with some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
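
The sketch below is a highly simplified stand-in for only the first steps of the pipeline described above (depth normalization and Fourier-domain smoothing, followed by a crude two-group labelling); the graph construction and the linear-programming graph cut of the actual method are not reproduced, and the synthetic depth signal and cut-off values are assumptions.

```python
# Sketch: simplified stand-in for the early pipeline steps (normalize depth,
# low-pass filter via the discrete Fourier transform, label high/low regions).
# The paper's graph construction and LP-based graph cut are not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
depth = rng.poisson(lam=30, size=2048).astype(float)      # synthetic per-bin cfDNA depth
depth[500:540] += 25                                       # a hypothetical accessible region

norm = (depth - depth.mean()) / depth.std()                # count normalization
spec = np.fft.rfft(norm)
spec[100:] = 0                                             # crude low-pass smoothing
smooth = np.fft.irfft(spec, n=len(norm))

labels = smooth > np.quantile(smooth, 0.9)                 # OCR+ vs OCR- by threshold
print("bins labelled OCR+:", int(labels.sum()))
```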

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 143
17976 Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems

Authors: Atef Alaaeddine Sarraj, Brendan Jackman, Frank Walsh

Abstract:

Tomorrow’s car will be more automated and increasingly connected. Innovative and intuitive interfaces are essential to accompany this functional enrichment. For that, today the automotive companies are competing to offer an advanced driver assistance system (ADAS) which will be able to provide enhanced navigation, collision avoidance, intersection support and lane keeping. These vision-based functions require an accurately calibrated camera. To achieve such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters and corrects the wide-angle distortion. Our solution enables continuous and real-time detection of objects, pedestrians, road markings and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real-time.

Keywords: advanced driver assistance system (ADAS), fish-eye, real-time, self-calibration

Procedia PDF Downloads 243
17975 Coupling Heat Transfer by Natural Convection and Thermal Radiation in a Storage Tank of LNG

Authors: R. Hariti, M. Saighi, H. Saidani-Scott

Abstract:

A numerical simulation of double-diffusive natural convection coupled with thermal radiation in an unsteady laminar regime in a storage tank is carried out. The storage tank contains liquefied natural gas (LNG) in its gaseous phase. Fluent, a commercial CFD package based on the finite volume method, is used to simulate the flow. The radiative transfer equation is solved using the discrete ordinates method. This numerical simulation is used to determine the temperature profiles, stream function, velocity vectors and the variation of the heat flux density for unsteady laminar natural convection. Furthermore, the influence of thermal radiation on the heat transfer has been investigated, and the results obtained were compared to those found in the literature. Good agreement was found between the temperature values obtained by the numerical method and those measured on site.

Keywords: tank, storage, liquefied natural gas, natural convection, thermal radiation, numerical simulation

Procedia PDF Downloads 534
17974 A Simple Finite Element Method for Glioma Tumor Growth Model with Density Dependent Diffusion

Authors: Shangerganesh Lingeshwaran

Abstract:

In this presentation, we present numerical simulations of a reaction-diffusion equation with various nonlinear density-dependent diffusion operators and proliferation functions. The mathematical model, a parabolic partial differential equation, is considered in order to study the invasion of gliomas (the most common type of brain tumor) and to describe the growth of cancer cells and their response to treatment. The unknown quantity of the given reaction-diffusion equation is the density of cancer cells, and the mathematical model is based on the proliferation and migration of glioma cells. A standard Galerkin finite element method is used to perform the numerical simulations of the given model. Finally, important observations on each of the nonlinear diffusion functions and proliferation functions are presented with the help of the computational results.
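
The following is a hedged sketch of the model class described above, not the authors' Galerkin finite element scheme: an explicit 1D finite-difference step for a density-dependent reaction-diffusion equation u_t = (D(u) u_x)_x + ρ u (1 − u), with illustrative choices of D(u), proliferation rate, grid, and time step.

```python
# Sketch: explicit 1D finite-difference step for a density-dependent
# reaction-diffusion model of tumour cell density (illustrative, not the paper's FEM).
import numpy as np

nx, L_dom = 200, 10.0
dx = L_dom / (nx - 1)
dt = 1e-3
rho = 1.0                                   # assumed proliferation rate (logistic growth)

def D(u):
    return 0.05 * (1.0 + u)                 # assumed density-dependent diffusion

u = np.exp(-((np.linspace(0, L_dom, nx) - L_dom / 2) ** 2))   # initial tumour core

for _ in range(2000):
    # diffusive flux evaluated at cell faces with face-averaged D(u)
    D_face = 0.5 * (D(u[1:]) + D(u[:-1]))
    flux = D_face * (u[1:] - u[:-1]) / dx
    div = np.zeros_like(u)
    div[1:-1] = (flux[1:] - flux[:-1]) / dx
    u = u + dt * (div + rho * u * (1.0 - u))
    u[0], u[-1] = u[1], u[-2]               # zero-flux boundaries
print("total cell density:", u.sum() * dx)
```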

Keywords: glioma invasion, nonlinear diffusion, reaction-diffusion, finite element method

Procedia PDF Downloads 224
17973 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS

Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh

Abstract:

A simple, sensitive and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids and herbicides) in food commodities using phenolic resin based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from the food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques, such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction and Brunauer-Emmett-Teller surface area analysis, were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat and black gram. The recoveries of all twenty-six pesticides and herbicides were found to be within the acceptable limit (70-120%) according to the SANCO guideline, with relative standard deviation values < 15%. The limit of detection and limit of quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample clean-up process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better at removing co-extractives than PSA, without compromising the recoveries of the pesticides from the food matrices. Further, ACFs remove the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments. The developed method will be cost-effective because the ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is therefore more robust and effective, and has better sample cleanup efficiency, for multiclass multi-pesticide residue analysis in different food matrices such as vegetables, grains and fruits.

Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials

Procedia PDF Downloads 263
17972 Shield Tunnel Excavation Simulation of a Case Study Using a So-Called 'Stress Relaxation' Method

Authors: Shengwei Zhu, Alireza Afshani, Hirokazu Akagi

Abstract:

Ground surface settlement induced by shield tunneling is attracting increasing attention as shield tunneling becomes a popular construction technique for tunnels in urban areas. This paper discusses a 2D longitudinal FEM simulation of a tunneling case study in Japan (Tokyo Metro Yurakucho Line). Tunneling-induced field data had already been collected and is used here for comparison and evaluation purposes. In this model, earth pressure, face pressure, backfill grouting, an elastic tunnel lining, and the Mohr-Coulomb failure criterion for soil elements are considered. A method called 'stress relaxation' is also exploited to simulate the gradual tunneling excavation. Ground surface settlements obtained from the numerical results using the introduced method are then compared with the measured data.

Keywords: 2D longitudinal FEM model, tunneling case study, stress relaxation, shield tunneling excavation

Procedia PDF Downloads 324
17971 A Guide for Using Viscoelasticity in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of unusual geometry and domain can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely the generalized Maxwell model and Prony series, which are the two means used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Afterwards, a guide is given to ease the use of the viscoelasticity tools in ANSYS.
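
A minimal sketch of the Prony series form behind the generalized Maxwell model, fitted to synthetic relaxation data: G(t) = G∞ + Σ gᵢ exp(−t/τᵢ). The data, term count, and starting values are assumptions, and the exact way the fitted coefficients are entered into ANSYS is not reproduced here.

```python
# Sketch: fitting a two-term Prony series G(t) = G_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
# to synthetic relaxation data, the form underlying generalized Maxwell viscoelasticity.
import numpy as np
from scipy.optimize import curve_fit

def prony(t, g_inf, g1, tau1, g2, tau2):
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

t = np.linspace(0.0, 100.0, 200)
true = prony(t, 2.0, 3.0, 1.5, 1.0, 20.0)
rng = np.random.default_rng(3)
data = true + rng.normal(scale=0.02, size=t.size)          # synthetic relaxation test

p0 = [1.0, 1.0, 1.0, 1.0, 10.0]                            # assumed initial guess
params, _ = curve_fit(prony, t, data, p0=p0, bounds=(0.0, np.inf))
print("fitted [G_inf, g1, tau1, g2, tau2]:", np.round(params, 3))
```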

Keywords: ANSYS, generalized Maxwell model, finite element method, Prony series, viscoelasticity, viscoelastic material curve fitting

Procedia PDF Downloads 571
17970 Method for Tuning Level Control Loops Based on Internal Model Control and Closed Loop Step Test Data

Authors: Arnaud Nougues

Abstract:

This paper describes a two-stage methodology derived from internal model control (IMC) for tuning a proportional-integral-derivative (PID) controller for levels or other integrating processes in an industrial environment. The focus is on ease of use and implementation speed, which are critical for an industrial application. Tuning can be done with minimum effort and without the need for time-consuming open-loop step tests on the plant. The first stage of the method applies to levels only: the vessel residence time is calculated from the equipment dimensions and used to derive a set of preliminary proportional-integral (PI) settings with IMC. The second stage, re-tuning in closed loop, applies to levels as well as other integrating processes: a tuning correction mechanism has been developed based on a series of closed-loop simulations with model errors. The tuning correction is done from a simple closed-loop step test and the application of a generic correlation between the observed overshoot and the integral time correction. A spin-off of the method is that an estimate of the vessel residence time (for levels) or of the open-loop process gain (for other integrating processes) is obtained from the closed-loop data.
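
The paper's own correlations are not given in the abstract, so the sketch below only illustrates the spirit of the first stage with a commonly used IMC/SIMC-type rule for an integrating (level) process, where the integrator gain follows from vessel geometry. The specific rule, the vessel and flow numbers, the dead time, and the closed-loop time constant are all assumptions; the second-stage overshoot correction is not reproduced.

```python
# Sketch: stage-one level-loop PI tuning from vessel geometry using a commonly
# used IMC/SIMC-type rule for an integrating process (assumed, not the paper's own).
import math

# assumed vessel and flow data
diameter_m  = 2.0
height_m    = 3.0
flow_m3_s   = 0.02                                  # nominal throughput
area_m2     = math.pi * diameter_m ** 2 / 4
residence_s = area_m2 * height_m / flow_m3_s        # vessel residence time

# integrating-process model dL/dt = (F_in - F_out) / A  ->  integrator gain k' = 1/A
k_int = 1.0 / area_m2
theta = 5.0                                         # assumed effective dead time, s
tau_c = 0.1 * residence_s                           # assumed closed-loop time constant

Kc    = 1.0 / (k_int * (tau_c + theta))             # PI gain for an integrating process
tau_I = 4.0 * (tau_c + theta)                       # integral time

print(f"residence time = {residence_s:.0f} s, Kc = {Kc:.2f}, tau_I = {tau_I:.0f} s")
```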

Keywords: closed-loop model identification, IMC-PID tuning method, integrating process control, on-line PID tuning adaptation

Procedia PDF Downloads 211
17969 Modeling and Simulation of a CMOS-Based Analog Function Generator

Authors: Madina Hamiane

Abstract:

The modelling and simulation of an analog function generator is presented, based on a polynomial expansion model. The proposed function generator model is based on a 10th-order polynomial approximation of any of the required functions. The polynomial approximations of these functions can then be implemented using basic CMOS circuit blocks. In this paper, a circuit model is proposed that can simultaneously generate many different mathematical functions. The circuit model is designed and simulated with HSPICE, and its performance is demonstrated through the simulation of a number of non-linear functions.
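
A sketch of the mathematical core described above: fitting a 10th-order polynomial to a target function over a working range and checking the approximation error. The target function and range are assumptions, and the CMOS circuit blocks that would realize the polynomial are not modeled here.

```python
# Sketch: 10th-order polynomial approximation of a target function, the mathematical
# core of the analog function generator model (the CMOS implementation is not modeled).
import numpy as np

x = np.linspace(-1.0, 1.0, 400)          # assumed working range
target = np.exp(x) * np.sin(3.0 * x)     # assumed target function

coeffs = np.polynomial.polynomial.polyfit(x, target, deg=10)
approx = np.polynomial.polynomial.polyval(x, coeffs)

print("max approximation error:", np.max(np.abs(approx - target)))
```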

Keywords: modelling and simulation, analog function generator, polynomial approximation, CMOS transistors

Procedia PDF Downloads 455
17968 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil

Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado

Abstract:

During the last twenty years, the construction field in Brazil has evolved significantly in response to market growth and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and the lack of effort to achieve quality at the construction site. At the same time, most of the information generated in the design or construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a French method known by the Portuguese acronym PEO, meaning preparation for building construction, seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative and was carried out through two case studies in the city of Goiania, in Goias, Brazil. The research was divided into two stages, called the pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of quality improvement in design management. The analysis showed that the method has the purpose of improving the design and allowing the reduction of failures, errors and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible to apply to real estate and building companies. However, companies need to believe in the contribution they can make to the discovery of design failures in conjunction with the other stakeholders forming a construction team. The results of PEO can be maximized when adopting the principles of simultaneous engineering and inserting new computer technologies, which use a three-dimensional model of the building with the BIM process.

Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)

Procedia PDF Downloads 153
17967 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this enormous task, the resolution method proposed adopts a smoothing strategy using a special C∞ differentiable class function. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms, and also allows for the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented for the purpose of illustrating both the reliability and the efficiency of the proposed approach.
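
A minimal sketch of the smoothing idea described above: replace the non-differentiable threshold max(0, y) by the C∞ hyperbolic approximation φ(y, τ) = (y + √(y² + τ²))/2 and solve a sequence of smooth subproblems with decreasing τ. The objective being minimized here is a toy stand-in, not a conceptual rainfall-runoff model, and the τ schedule is an assumption.

```python
# Sketch: hyperbolic smoothing of the non-differentiable threshold max(0, y),
# solved as a sequence of smooth subproblems with decreasing tau.
# The objective is a toy stand-in for a threshold-bearing CRR calibration problem.
import numpy as np
from scipy.optimize import minimize

def phi(y, tau):
    # C-infinity approximation of max(0, y); recovers it as tau -> 0
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

def objective(x, tau):
    # toy objective with a threshold term, smoothed by phi
    return (x[0] - 1.0) ** 2 + 3.0 * phi(x[0] - 2.0, tau)

x0 = np.array([5.0])
for tau in [1.0, 0.1, 0.01, 1e-4]:            # gradually approach the original problem
    res = minimize(objective, x0, args=(tau,), method="BFGS")
    x0 = res.x
print("solution:", x0)
```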

Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method

Procedia PDF Downloads 145
17966 Determination of Vitamin C (Ascorbic Acid) in Orange Juices Product

Authors: Wanida Wonsawat

Abstract:

This research describes a voltammetric approach to determine the amount of vitamin C (ascorbic acid) in orange juice samples using a three-electrode screen-printed electrode. The anodic currents of vitamin C were proportional to the vitamin C concentration in the range of 0 – 10.0 mM, with a limit of detection of 1.36 mM. The method was successfully employed with 2 µL of the working solution dropped on the electrode surface. The proposed method was applied to the analysis of vitamin C in packed orange juice without sample purification or complex sample preparation steps.
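
A small sketch of the calibration arithmetic implied above: a linear fit of anodic current versus concentration and a detection-limit estimate using the commonly applied 3.3·σ/slope rule. The current readings below are assumed for illustration, not the study's data.

```python
# Sketch: linear voltammetric calibration (current vs. concentration) and a
# detection-limit estimate via the common 3.3*sigma/slope rule (assumed data).
import numpy as np

conc_mM = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
current_uA = np.array([0.1, 2.2, 4.1, 6.3, 8.0, 10.2])   # hypothetical anodic currents

slope, intercept = np.polyfit(conc_mM, current_uA, 1)
residuals = current_uA - (slope * conc_mM + intercept)
sigma = residuals.std(ddof=2)

lod = 3.3 * sigma / slope         # limit of detection, mM
print(f"slope={slope:.3f} uA/mM, intercept={intercept:.3f} uA, LOD~{lod:.2f} mM")
```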

Keywords: ascorbic acid, vitamin C, juice, voltammetry

Procedia PDF Downloads 319
17965 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations

Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed

Abstract:

An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for some classes of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination, respectively. The general results obtained were validated with some numerical examples.

Keywords: approximant, error estimate, tau method, overdetermination

Procedia PDF Downloads 600
17964 Decision-Making Tool for Planning the Construction of Infrastructure Projects

Authors: Rolla Monib, Chris I. Goodier, Alistair Gibbs

Abstract:

The aim of this paper is to investigate the key drivers in planning the construction phase of infrastructure projects in order to reduce project delays. To achieve this aim, the research conducted three case studies using semi-structured and unstructured interviews (n=36). The results conclude that a lack of modularisation awareness is among the key factors contributing to project delays. The current emotive and ill-informed approach to decision-making, coupled with a lack of knowledge regarding appropriate construction method selection, prevents the potential benefits of modularisation from being fully realised. To assist with decision-making on the best construction method, the research presents project management tools to help decision makers choose the most appropriate construction approach by optimising the use of modularisation in EC. A decision-making checklist and diagram are presented in this paper. These checklist tools and diagrams assist the project team in determining the best construction method, taking the module type into consideration.

Keywords: infrastructure, modularization, decision support, decision-making

Procedia PDF Downloads 51
17963 Measuring Sustainable Interior Design

Authors: Iman Ibrahim

Abstract:

The interest of this paper is to review sustainability measurement tools for interior design in the UAE and to examine the ability to create sustainable interior-designed buildings that satisfy the community's social and cultural needs, in relation to the world's ecosystems and the extent to which they are affected by humans. The research focuses on sustainability as a multi-dimensional concept including environmental, social and economic dimensions. The aim of this research is to identify the most suitable criteria for a sustainable rating method for buildings in the UAE, in an attempt to develop it to match the community culture. Developing such criteria is gaining significance in the UAE as a result of increased awareness of environmental, economic and social issues. This will allow an exploration of suitable criteria for developing a sustainable rating method for buildings in the UAE. The final research findings will be presented as suitable criteria for developing a sustainable building assessment method for the UAE in terms of environmental, economic, social and cultural perspectives.

Keywords: rating methods, sustainability tools, UAE, local conditions

Procedia PDF Downloads 415
17962 Factorial Design Analysis for Quality of Video on MANET

Authors: Hyoup-Sang Yoon

Abstract:

The quality of video transmitted over mobile ad hoc networks (MANETs) can be influenced by several factors, including the protocol layers and the parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate how a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2^k factorial design, we quantify the main and interaction effects of 7 factors on a response metric (i.e., the mean opinion score (MOS) calculated from PSNR with the Evalvid package). We then develop a first-order linear regression model between the influential factors and the performance metric.
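
A sketch of the 2^k factorial analysis described above: generate the ±1 design for 7 factors, simulate a response, and fit a first-order regression to estimate the main effects. The synthetic response model stands in for MOS values that would come from ns-2/Evalvid runs, and the assumed effect sizes are illustrative only.

```python
# Sketch: 2^7 full factorial design with a first-order regression on the main effects.
# The synthetic response stands in for MOS values from ns-2/Evalvid simulation runs.
import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=7)), dtype=float)  # 128 runs

rng = np.random.default_rng(4)
true_effects = np.array([0.8, -0.5, 0.3, 0.0, 0.2, -0.1, 0.05])   # assumed main effects
mos = 3.5 + design @ true_effects + rng.normal(scale=0.2, size=len(design))

# first-order linear regression: MOS ~ intercept + sum_i beta_i * x_i
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, mos, rcond=None)
print("intercept and estimated main effects:", np.round(beta, 2))
```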

Keywords: evalvid, full factorial design, mobile ad hoc networks, ns-2

Procedia PDF Downloads 408
17961 Assessing India’s Foreign Policy Towards Afghanistan

Authors: Saifurahman Fayiz

Abstract:

Afghanistan and India have close technical, political, economic, and diplomatic bilateral ties. These ties are not limited to the governments of the two countries; the relationship also extends to their peoples. India is Afghanistan's most trustworthy regional partner and the biggest donor to the development of Afghanistan. The objective of this study is to assess India's foreign policy towards Afghanistan since 9/11. The research was conducted using a qualitative, descriptive method. The research findings propose that India should engage with and build up its strategic relations with neighbouring countries.

Keywords: strategy, policy, India, Afghanistan

Procedia PDF Downloads 323