Search results for: Convex hull
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 115

25 Evaluation of the Triticale Flour Blend Dough in the Mixing and Fermentation Processes

Authors: Martins Sabovics, Karina Ruse, Evita Straumite, Ruta Galoburda

Abstract:

The research was carried out on a triticale flour blend made from whole-grain triticale, rye, hull-less barley, rice, and maize flours. The aim of this research was to evaluate the physico-chemical and sensory properties of triticale flour blend dough in the mixing and fermentation processes. The dough was made from the triticale flour blend, yeast, sugar, salt, and water. In the mixing process, moisture, acidity, pH, and dough sensory properties (softness, viscosity, and stickiness) were evaluated, while in the fermentation process volume, moisture, acidity, and pH were evaluated. The present research established that increasing the fermentation temperature and time increases dough temperature, volume, moisture, and acidity. Mixing time and fermentation time and temperature have a significant effect (p<0.05) on the physico-chemical and sensory properties of triticale flour blend dough.

Keywords: Dough quality, dough fermentation, dough mixing, triticale flour blend.

24 Free Vibration Analysis of Carbon Nanotube Reinforced Laminated Composite Panels

Authors: B. Ramgopal Reddy, K. Ramji, B. Satyanarayana

Abstract:

In this paper, free vibration analysis of carbon nanotube (CNT) reinforced laminated composite panels is presented. Three types of panels, namely flat, concave, and convex, are considered for the study. Numerical simulation is carried out using the commercially available finite element analysis software ANSYS. Numerical homogenization is employed to calculate the effective elastic properties of randomly distributed carbon nanotube reinforced composites. To verify the accuracy of the finite element method, comparisons are made with existing results available in the literature for conventional laminated composite panels, and good agreement is obtained. The results for the CNT reinforced composite materials are compared with those for conventional composite materials under different boundary conditions.

Keywords: CNT Reinforced Composite Panels, Effective Elastic Properties, Finite Element Method, Natural Frequency.

23 Intelligent Rescheduling Trains for Air Pollution Management

Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar

Abstract:

Optimization of timetables is the need of the day for the rescheduling and routing of trains in real time. Trains are scheduled in parallel with road transport vehicles to the same destination. As the number of trains is restricted by the single track, customers usually opt to use road transport frequently. Air pollution increases as the density of vehicles in road transport increases. Using an alternative mode of transport such as the train helps reduce air pollution. This paper mainly aims at attracting passengers to train transport by proper rescheduling of trains using a hybrid of the stop-skip algorithm and an iterative convex programming algorithm. Bi-directional rescheduling of trains is achieved on a single track with dynamic dual time and varying stops. Introducing more trains attracts customers to use rail transport frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).

Keywords: Air pollution, routing protocol, network simulator, rescheduling.

22 Simulation and Experimentation on the Contact Width of New Metal Gasket for Asbestos Substitution

Authors: Moch. Agus Choiron, Yoshihiro Kurata, Shigeyuki Haruyama, Ken Kaminishi

Abstract:

The contact width is an important design parameter for optimizing the design of a new metal gasket intended as a substitute for asbestos gaskets. The contact width is found to be related to the helium leak quantity: as the axial load increases, the helium leak quantity decreases and the contact width increases. This study provides a validation method using simulation analysis, and the results are compared to experiments using pressure-sensitive paper. The results show a similar trend between the simulation and the experimental data. The final evaluation is based on the helium leak quantity, which is used to check the leakage performance of the gasket design. Considering the phenomenon of position change on the convex contact, the gasket design can be optimized by increasing the contact width.

Keywords: contact width, simulation, pressure sensitive paper.

21 Studies on Microstructure and Mechanical Properties of Simulated Heat Affected Zone in a Micro Alloyed Steel

Authors: Sanjeev Kumar, S. K. Nath

Abstract:

Proper selection of welding parameters for obtaining an excellent weld is a challenge. HAZ simulation helps in identifying suitable welding parameters such as heating rate, cooling rate, peak temperature, and energy input. In this study, the influence of the weld thermal cycle on the heat affected zone (HAZ) is simulated for Submerged Arc Welding (SAW) using a Gleeble® 3800 thermomechanical simulator. A micro-alloyed (MA) steel plate of 18 mm thickness with a yield strength of 450 MPa is used for making the test specimens. The mechanical properties of the weld-simulated specimens, including Charpy V-notch toughness and hardness, are determined. Peak temperatures of 1300°C, 1150°C, 1000°C, 900°C, and 800°C, a heat energy input of 22 kJ/cm, and a preheat temperature of 30°C have been used with the Rykalin-3D simulation model. It is found that the impact toughness (75 J) is best for the simulated HAZ specimen at a peak temperature of 900°C. For the parent steel, the impact toughness value is 26.8 J at -50°C in the transverse direction.

Keywords: HAZ Simulation, Mechanical Properties, Peak Temperature, Ship hull steel, and Weldability.

20 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule and the long-term distribution of stress range by the use of the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified; one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases for each loading condition are assessed. Following this, the dynamic load case that causes the highest stress range at each loading condition should be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicularly to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by beam sea condition where the encountered wave comes from the starboard. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements of the analysis of the cargo hold model to the outer region nodes of the submodel acted as boundary conditions and applied loading for the submodel. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
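
To illustrate the damage-accumulation step described above, the following minimal Python sketch assembles a Miner's-rule cumulative damage ratio from a two-parameter Weibull long-term stress-range distribution and a one-slope S-N curve. All numerical values (Weibull shape and scale, total cycle count, S-N constants) are illustrative assumptions, not the paper's calibrated data.

```python
import numpy as np

# Illustrative sketch: Miner's-rule damage sum with a two-parameter Weibull
# long-term stress-range distribution and a one-slope S-N curve.
# All values below are assumptions for illustration only.

shape, scale = 1.0, 20.0        # Weibull shape (h) and scale (q), MPa
n_total = 0.7e8                 # total stress cycles over the design life
m, K = 3.0, 1.0e12              # S-N curve: N_allowed = K / S^m

# Discretise the stress-range axis and weight each bin by its Weibull probability.
edges = np.linspace(0.0, 200.0, 401)           # stress-range bins, MPa
centers = 0.5 * (edges[:-1] + edges[1:])
cdf = 1.0 - np.exp(-(edges / scale) ** shape)  # two-parameter Weibull CDF
prob = np.diff(cdf)                            # probability mass per bin

n_i = n_total * prob                # cycles experienced in each bin
N_i = K / centers ** m              # cycles to failure at each stress range
damage = np.sum(n_i / N_i)          # Miner's cumulative damage ratio

design_life_years = 25.0
print(f"Cumulative damage ratio D = {damage:.3f}")
print(f"Implied fatigue life     = {design_life_years / damage:.1f} years")
```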

Keywords: Lower hopper knuckle, high cycle fatigue, finite element method, dynamic load cases.

19 Voltage-Controllable Liquid Crystals Lens

Authors: Wen-Chi Hung, Tung-Kai Liu, Ming-Shan Tsai, Chun-Che Lee, I-Min Jiang

Abstract:

This study investigates a voltage-controllable liquid crystal lens with a Fresnel zone electrode. When a proper voltage is applied to the liquid crystal cell, a Fresnel-zone-distributed electric field is induced that directs the liquid crystals to align in a concentric structure. Owing to the concentrically aligned liquid crystals, a Fresnel lens is formed. We probe the Fresnel liquid crystal lens using a polarized incident beam with a wavelength of 632.8 nm, finding that the diffraction efficiency depends on the applied voltage. A remarkable diffraction efficiency of ~39.5% is measured at a voltage of 0.9 V. Additionally, a dual focus lens is fabricated by attaching a plano-convex lens to the Fresnel liquid crystal cell. The Fresnel LC lens and the dual focus lens may be applied in DVD/CD pick-up heads, confocal microscopy systems, or electrically controlled optical systems.

Keywords: Liquid Crystals Lens, Fresnel Lens, and Dual focus

18 Comprehensive Study on the Linear Hydrodynamic Analysis of a Truss Spar in Random Waves

Authors: Roozbeh Mansouri, Hassan Hadidi

Abstract:

Truss spars are used for oil exploitation in deep and ultra-deep water when crude oil storage is not needed. The linear hydrodynamic analysis of a truss spar under random sea wave loads is necessary for determining its behaviour. This understanding is important not only for the design of the mooring lines, but also for optimising the truss spar design. In this paper, a linear hydrodynamic analysis of a truss spar is carried out in the frequency domain. The hydrodynamic forces are calculated using the modified Morison equation and diffraction theory. The added mass and drag coefficients of the truss section are computed by a transmission matrix from the normal acceleration and velocity components acting on each element, and those of the hull section are computed by strip theory. The stiffness properties of the truss spar can be separated into two components: hydrostatic stiffness and mooring line stiffness. The platform response amplitudes are then obtained by solving the equation of motion. This equation is non-linear due to the viscous damping term and is therefore linearised by an iteration method [1]. Finally, the RAOs and significant response amplitudes are computed, and the results are compared with experimental data.
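
As an illustration of the inertia-plus-drag structure of the Morison loading mentioned above, the sketch below evaluates the classical Morison force per unit length on a slender vertical member under linear wave kinematics. The diameter, coefficients, and wave data are assumed values; the paper's modified Morison formulation and diffraction-theory terms are not reproduced here.

```python
import numpy as np

# Minimal sketch of the classical Morison force per unit length on a slender
# vertical cylinder in waves.  Coefficients and wave data are illustrative
# assumptions only.

rho = 1025.0        # sea-water density, kg/m^3
D = 1.2             # member diameter, m
Cm, Cd = 2.0, 0.7   # inertia and drag coefficients (assumed)

def morison_force(u, du_dt):
    """Inertia + drag force per unit length, N/m (relative velocity ignored)."""
    inertia = rho * Cm * np.pi * D**2 / 4.0 * du_dt
    drag = 0.5 * rho * Cd * D * u * np.abs(u)
    return inertia + drag

# Regular (linear) wave kinematics at a fixed point, for illustration only.
H, T = 4.0, 9.0                     # wave height (m) and period (s)
omega = 2.0 * np.pi / T
t = np.linspace(0.0, T, 200)
u = 0.5 * H * omega * np.cos(omega * t)          # horizontal particle velocity
du_dt = -0.5 * H * omega**2 * np.sin(omega * t)  # particle acceleration

f = morison_force(u, du_dt)
print(f"Peak Morison force per unit length: {np.max(np.abs(f)):.0f} N/m")
```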

Keywords: Truss Spar, Hydrodynamic analysis, Wave spectrum, Frequency Domain

17 A Review of Test Protocols for Assessing Coating Performance of Water Ballast Tank Coatings

Authors: Emmanuel A. Oriaifo, Noel Perera, Alan Guy, Pak. S. Leung, Kian T. Tan

Abstract:

Concerns about corrosion and effective coating protection of double hull tankers and bulk carriers in service have been raised, especially for water ballast tanks (WBTs). Test protocols/methodologies, specifically those incorporated in the International Maritime Organisation (IMO) Performance Standard for Protective Coatings for dedicated sea water ballast tanks (PSPC), are being used to assess and evaluate the performance of coatings for type approval prior to their application in WBTs. However, some of the type-approved coatings may be applied as very thick films to less than ideally prepared steel substrates in the WBT. As such films experience hygrothermal cycling from operating and environmental conditions, they become embrittled, which may ultimately result in cracking. This embrittlement of the coatings is identified as an undesirable feature in the PSPC but is not mentioned in the test protocols within it. There is therefore renewed industrial research aimed at understanding this issue in order to eliminate cracking and achieve the intended coating lifespan of 15 years in good condition. This paper critically reviews the test protocols currently used for assessing and evaluating coating performance, particularly the IMO PSPC.

Keywords: Corrosion Test, Hygrothermal Cycling, Coating Test Protocols, Water Ballast Tanks.

16 Real-time 3D Feature Extraction without Explicit 3D Object Reconstruction

Authors: Kwangjin Hong, Chulhan Lee, Keechul Jung, Kyoungsu Oh

Abstract:

For communication between humans and computers in an interactive computing environment, gesture recognition is studied vigorously, and many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods have a limitation: the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time needed to generate 3D objects remain unsolved. We therefore propose a method to extract 3D features combined with the 3D object reconstruction. This method uses a modified GPU-based visual hull generation algorithm which disables unnecessary processes, such as the texture calculation, to generate three kinds of 3D projection maps as the 3D features: the nearest boundary, the farthest boundary, and the thickness of the object projected onto the base plane. In the experimental results, we present the results of the proposed method on eight human postures (T shape, both hands up, right hand up, left hand up, hands front, stand, sit, and bend) and compare the computational time of the proposed method with that of previous methods.
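
The three projection maps described above can be illustrated with a short sketch: given a binary voxel volume (standing in here for the GPU-generated visual hull, which is not reproduced), the nearest boundary, farthest boundary, and thickness maps are simple reductions along the projection axis.

```python
import numpy as np

# Sketch of the three base-plane projection maps used as 3D features:
# nearest boundary, farthest boundary, and thickness.  A binary voxel
# volume stands in for the GPU-generated visual hull.

def projection_maps(voxels):
    """voxels: bool array (X, Y, Z); projection is taken along the Z axis."""
    occupied = voxels.any(axis=2)
    z_idx = np.arange(voxels.shape[2])

    # Nearest occupied voxel above the base plane (min z) and farthest (max z);
    # columns with no occupied voxel are marked with -1.
    nearest = np.where(occupied,
                       np.where(voxels, z_idx, voxels.shape[2]).min(axis=2), -1)
    farthest = np.where(occupied,
                        np.where(voxels, z_idx, -1).max(axis=2), -1)
    thickness = voxels.sum(axis=2)      # number of occupied voxels per column
    return nearest, farthest, thickness

# Tiny synthetic example: a solid box inside a 32^3 grid.
vol = np.zeros((32, 32, 32), dtype=bool)
vol[8:24, 10:20, 5:15] = True
near, far, thick = projection_maps(vol)
print(near[16, 15], far[16, 15], thick[16, 15])   # -> 5 14 10
```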

Keywords: Fast 3D Feature Extraction, Gesture Recognition, Computer Vision.

15 Earthquake Classification in Molluca Collision Zone Using Conventional Statistical Methods

Authors: H. J. Wattimanela, U. S. Passaribu, N. T. Puspito, S. W. Indratno

Abstract:

The Molluca Collision Zone is located at the junction of the Eurasian, Australian, Pacific, and Philippine plates. Between the Sangihe arc, to the west of the collision zone, and the Halmahera arc, to the east, the collision is active and convex toward the Molluca Sea. This research analyzes the behavior of earthquake occurrence in the Molluca Collision Zone in terms of the distribution of earthquakes in each partition region, the type of distribution of earthquake occurrence in each partition region, the mean occurrence of earthquakes in each partition region, and the correlation between the partition regions. We calculate the number of earthquakes using a partition method and analyze their behavior using conventional statistical methods. In this research, we used data on shallow earthquakes with magnitudes ≥4 SR for the period 1964-2013. From the results, we can classify the partitioned regions based on the correlation into two classes: strong and very strong. This classification can be used for an early warning system in disaster management.
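
As a rough illustration of the partition-correlation step, the sketch below correlates synthetic yearly earthquake counts between partition regions and labels the pairs by correlation strength; the counts and the verbal cut-offs are assumptions for illustration only, not the paper's data or thresholds.

```python
import numpy as np

# Sketch of the partition-correlation step: yearly earthquake counts per
# partition region are correlated pairwise and the pairs are classified by
# correlation strength.  Counts and cut-offs are illustrative assumptions.

rng = np.random.default_rng(0)
years, regions = 50, 4
counts = rng.poisson(lam=[5, 8, 6, 10], size=(years, regions))  # synthetic counts

corr = np.corrcoef(counts, rowvar=False)   # regions x regions correlation matrix

def classify(r):
    # Common verbal scale; these cut-offs are assumptions, not the paper's.
    return "very strong" if abs(r) >= 0.8 else "strong" if abs(r) >= 0.6 else "other"

for i in range(regions):
    for j in range(i + 1, regions):
        print(f"region {i+1} - region {j+1}: r = {corr[i, j]:+.2f} ({classify(corr[i, j])})")
```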

Keywords: Molluca Collision Zone, partition regions, conventional statistical methods, Earthquakes, classifications, disaster management.

14 Minimal Spanning Tree based Fuzzy Clustering

Authors: Ágnes Vathy-Fogarassy, Balázs Feil, János Abonyi

Abstract:

Most fuzzy clustering algorithms have some shortcomings: e.g., they are only able to detect clusters with convex shapes, the number of clusters must be known a priori, and they suffer from numerical problems such as sensitivity to the initialization. This paper studies the synergistic combination of the hierarchical, graph-theoretic minimal spanning tree based clustering algorithm with the partitional Gath-Geva fuzzy clustering algorithm. The aim of this hybridization is to increase the robustness and consistency of the clustering results and to reduce the number of heuristically defined parameters of these algorithms, thereby decreasing the influence of the user on the clustering results. For the analysis of the resulting fuzzy clusters, a new tool based on a fuzzy similarity measure is presented. The calculated similarities of the clusters can be used for the hierarchical clustering of the resulting fuzzy clusters, and this information is useful for cluster merging and for the visualization of the clustering results. As the examples used to illustrate the operation of the new algorithm show, the proposed algorithm can detect clusters from data with arbitrary shape and does not suffer from the numerical problems of the classical Gath-Geva fuzzy clustering algorithm.
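
The minimal-spanning-tree side of such a hybrid can be sketched as follows: build the MST over the data, cut edges that are inconsistently long, and take the connected components as an initial partition, which could then seed a fuzzy algorithm such as Gath-Geva. The cutting rule used below (mean plus two standard deviations of the edge lengths) is a common heuristic assumed for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

# Sketch of the MST clustering step only: build the MST, cut long edges,
# and take connected components as an initial partition.

def mst_clusters(points, k_std=2.0):
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists)          # sparse matrix of MST edges
    edges = mst.data
    threshold = edges.mean() + k_std * edges.std()
    mst.data[mst.data > threshold] = 0.0        # cut "inconsistently long" edges
    mst.eliminate_zeros()
    n_clusters, labels = connected_components(mst, directed=False)
    return n_clusters, labels

# Two well-separated blobs as a toy example.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
n, labels = mst_clusters(pts)
print(n, np.bincount(labels))   # expect 2 clusters of 30 points each
```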

Keywords: Clustering, fuzzy clustering, minimal spanning tree, cluster validity, fuzzy similarity.

13 Restrictedly-Regular Map Representation of n-Dimensional Abstract Polytopes

Authors: Antonio Breda d’Azevedo

Abstract:

Regularity has often been present in the form of regular polyhedra or tessellations; classical examples are the nine regular polyhedra consisting of the five Platonic solids (regular convex polyhedra) and the four Kepler-Poinsot polyhedra. These polytopes can be seen as regular maps. Maps are cellular embeddings of graphs (with possibly multiple edges, loops or dangling edges) on compact connected (closed) surfaces with or without boundary. The n-dimensional abstract polytopes, particularly the regular ones, have gained popularity over recent years. The main focus of research has been their symmetries and regularity. Planification of polyhedra helps their spatial construction, yet it destroys their symmetries. To our knowledge there is no “planification” for n-dimensional polytopes. However, we show that it is possible to make a “surfacification” of the n-dimensional polytope, that is, it is possible to construct a restrictedly-marked map representation of the abstract polytope on some surface that describes its combinatorial structure as well as all of its symmetries. We also show that there are infinitely many ways to do this; yet there is one that is more natural, which describes reflections on the sides ((n−1)-faces) of n-simplices by reflections on the sides of n-polygons. We illustrate this construction with the 4-tetrahedron (a regular 4-polytope with automorphism group of size 120) and the 4-cube (a regular 4-polytope with automorphism group of size 384).

Keywords: Maps, representation, polytopes.

12 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

Developments in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions which are difficult to analyze by existing Ship Classification Societies' rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, dynamic loads, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has led to better techniques for the calculation of the natural frequencies and different mode shapes of the ship structure in order to avoid resonance both globally and locally. There has been much development towards ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.

Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.

11 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications

Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong

Abstract:

This research aims to study a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. The strong convergence is guaranteed under the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. Similarly to the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the use of the duality mapping and its inverse to overcome the unavailability of the duality representation that is exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions. We also show that the solution obtained from our process is the closest to the origin. Finally, we give an illustrative numerical example to support our results.

Keywords: σ-asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space.

10 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy

Authors: Wenhao Lan, Ning Li, Qiang Tong

Abstract:

To improve the registration accuracy of a source point cloud and template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBB of the source and template point clouds is aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and the sensitivity to the initial relative position between the source point cloud and template point cloud is reduced. The primary contribution of this paper is the use of PointNetLK to avoid the non-convexity problem of traditional point cloud registration and the leveraging of the regularity of the OBB to avoid the local optimization problem in the PointNetLK context.
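
As background for the OBB-based macro feature, the sketch below shows one common way to obtain an oriented bounding box for a point cloud, by aligning the box axes with the principal components of the points; the paper's exact OBB construction and the subsequent PointNetLK alignment are not reproduced here.

```python
import numpy as np

# Minimal sketch of a PCA-based oriented bounding box (OBB) for a 3D point
# cloud: box axes are the eigenvectors of the point covariance matrix.

def oriented_bounding_box(points):
    """Return (center, axes, half_extents) of a PCA-based OBB."""
    center = points.mean(axis=0)
    cov = np.cov(points - center, rowvar=False)
    eigvals, axes = np.linalg.eigh(cov)        # columns of `axes` are the OBB axes
    local = (points - center) @ axes           # express points in the OBB frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    obb_center = center + axes @ ((lo + hi) / 2.0)
    half_extents = (hi - lo) / 2.0
    return obb_center, axes, half_extents

# Toy cloud: an elongated box of random points, rotated about the z axis.
rng = np.random.default_rng(0)
cloud = rng.uniform([-2, -0.5, -0.2], [2, 0.5, 0.2], size=(500, 3))
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
center, axes, half = oriented_bounding_box(cloud @ R.T)
print(np.round(half, 2))   # roughly [0.2, 0.5, 2.0] up to axis ordering
```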

Keywords: Mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB.

9 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P. Palanisamy

Abstract:

In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak search approach. This paper presents a variant of the Propagator Method (PM), where a collaborative approach of SUMWE and the Propagator Method is applied in order to estimate multiple real-valued sine wave frequencies. A new data model is proposed in which the dimension of the signal subspace is equal to the number of frequencies present in the observation, whereas the signal subspace dimension is twice the number of frequencies in the conventional MUSIC method for estimating the frequencies of real-valued sinusoidal signals. The statistical analysis of the proposed method is studied, and an explicit expression for the asymptotic (large-sample) mean squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method can achieve sustainable high estimation accuracy and frequency resolution at lower SNR, which is verified by simulations comparing it with the conventional MUSIC, ESPRIT, and Propagator methods.

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

8 Development and Control of Deep Seated Gravitational Slope Deformation: The Case of Colzate-Vertova Landslide, Bergamo, Northern Italy

Authors: Paola Comella, Vincenzo Francani, Paola Gattinoni

Abstract:

This paper presents the Colzate-Vertova landslide, a Deep Seated Gravitational Slope Deformation (DSGSD) located in the Seriana Valley, Northern Italy. The paper aims at describing the development as well as evaluating the factors that influence the evolution of the landslide. After defining the conceptual model of the landslide, numerical simulations were developed using a finite element numerical model, first with a two-dimensional domain, and later with a three-dimensional one. The results of the 2-D model showed a displacement field typical of a sackung, as a consequence of the erosion along the Seriana Valley. The analysis also showed that the groundwater flow could locally affect the slope stability, bringing about a reduction in the safety factor, but without reaching failure conditions. The sensitivity analysis carried out on the strength parameters pointed out that slope failures could be reached only for relevant reduction of the geotechnical characteristics. Such a result does not fit the real conditions observed on site, where a number of small failures often develop all along the hillslope. The 3-D model gave a more comprehensive analysis of the evolution of the DSGSD, also considering the border effects. The results showed that the convex profile of the slope favors the development of displacements along the lateral valley, with a relevant reduction in the safety factor, justifying the existing landslides.

Keywords: Deep seated gravitational slope deformation, Italy, landslide, numerical modeling.

7 Regional Analysis of Streamflow Drought: A Case Study for Southwestern Iran

Authors: M. Byzedi, B. Saghafian

Abstract:

Droughts are complex natural hazards that, to a varying degree, affect some parts of the world every year. The range of drought impacts is related to droughts occurring in different stages of the hydrological cycle, and usually different types of droughts, such as meteorological, agricultural, hydrological, and socio-economic, are distinguished. Streamflow drought was analyzed by the truncation level method (at the 70% level) on daily discharges measured at 54 hydrometric stations in southwestern Iran. Frequency analysis was carried out for the annual maximum series (AMS) of drought deficit volume and duration. Factors including physiographic, climatic, geologic, and vegetation cover characteristics were studied as influential factors in the regional analysis. According to the results of factor analysis, the six most effective factors were identified as the area, the rainfall from December to February, the percentage of area with Normalized Difference Vegetation Index (NDVI) <0.1, the percentage of convex area, the drainage density, and the minimum watershed elevation, which together explained 90.9% of the variance. The homogeneous regions were determined by cluster analysis and discriminant function analysis. Suitable multivariate regression models were evaluated for streamflow drought deficit volume with a 2-year return period. The significance level of the regression models was 0.01. The results showed that the watershed area is the most effective factor, with a high correlation with deficit volume. Also, drought duration was not a suitable drought index for regional analysis.
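
The truncation (threshold) level step can be sketched in a few lines: taking the 70% exceedance flow as the threshold, each run of days below it defines a drought event with a duration and a deficit volume, from which annual maximum series can be formed. The daily discharge series below is synthetic and for illustration only.

```python
import numpy as np

# Sketch of the threshold (truncation) level method for streamflow drought:
# the 70% exceedance flow is taken as the threshold, and each run of days
# below it defines a drought event (duration, deficit volume).

def drought_events(q, exceedance=0.70):
    threshold = np.quantile(q, 1.0 - exceedance)     # flow exceeded 70% of the time
    below = q < threshold
    events = []
    start = None
    for i, b in enumerate(np.append(below, False)):  # sentinel closes the last run
        if b and start is None:
            start = i
        elif not b and start is not None:
            deficit = np.sum(threshold - q[start:i])  # deficit volume (flow units * days)
            events.append((i - start, deficit))       # (duration in days, deficit)
            start = None
    return threshold, events

# Synthetic four-year daily discharge series.
rng = np.random.default_rng(3)
q = 50 + 30 * np.sin(np.linspace(0, 8 * np.pi, 4 * 365)) + rng.normal(0, 5, 4 * 365)
thr, events = drought_events(q)
durations, deficits = zip(*events)
print(f"threshold = {thr:.1f}, events = {len(events)}, "
      f"max duration = {max(durations)} d, max deficit = {max(deficits):.0f}")
```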

Keywords: Iran, Streamflow drought, truncation level method, regional analysis.

6 PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions

Authors: Anthony Wachs, Guillaume Vinay, Gilles Ferrer, Jacques Kouakou, Calin Dan, Laurence Girolami

Abstract:

An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive force based collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary shape (at least convex) and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive force based collision model. We recently upgraded our serial code, GRIFF 1 [2], to full MPI capabilities. Our new code, PeliGRIFF 2, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.

Keywords: Particulate flow, distributed lagrange multiplier/fictitious domain method, discrete element method, polygonal shape, sedimentation, distributed computing, MPI

5 Cephalometric Changes of Patient with Class II Division 1 [Malocclusion] Post Orthodontic Treatment with Growth Stimulation: A Case Report

Authors: Pricillia Priska Sianita

Abstract:

An aesthetic facial profile is one of the goals of orthodontic treatment. However, this is not easily achieved, especially in patients with Class II Division 1 malocclusion, who have the clinical characteristics of a convex profile and a significant skeletal discrepancy due to mandibular growth deficiency. Malocclusion with skeletal problems requires proper treatment timing for growth stimulation; it must be done at an early age and needs good cooperation from the patient. If this is not done and the patient has passed the growth period, the ideal treatment is orthognathic surgery, which is more complicated and more painful. The growth stimulation of a skeletal malocclusion requires careful cephalometric evaluation, ranging from diagnosis, to determine the parts that require stimulation, to post-treatment evaluation, to see the success achieved through changes in the measured skeletal parameters shown in the cephalometric analysis. This case report aims to describe, cephalometrically, the skeletal changes that were achieved through orthodontic treatment in the growth period. Material and method: pre-treatment and post-treatment lateral cephalograms of a case of Class II Division 1 malocclusion were selected from a collection of cephalometric radiographs in a private clinic. The cephalograms were then traced and the skeletal parameters measured. The results were recorded as pre-treatment and post-treatment skeletal condition data. Furthermore, superimposition was done to see the changes achieved. The results show that growth stimulation through orthodontic treatment can solve the skeletal problem of Class II Division 1 malocclusion, and the skeletal changes that occur can be verified through cephalometric analysis. The skeletal changes have an impact on the improvement of the patient's facial profile. To sum up, treatment timing in a skeletal malocclusion is very important to obtain satisfactory results for the improvement of the aesthetic facial profile, and the skeletal changes can be verified through cephalometric evaluation of pre- and post-treatment records.

Keywords: Cephalometric evaluation, Class II Division 1 malocclusion, growth stimulation, skeletal changes, skeletal problems.

4 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it if and only if the length of the path between the root node and that neighbor node, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan shorter routes in a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques to evaluate integrals.
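
The exact-length property mentioned at the end can be illustrated directly: if the hodograph components are built as x'(t) = u(t)^2 - v(t)^2 and y'(t) = 2 u(t) v(t) with quadratic u and v (a planar PH quintic), the parametric speed u(t)^2 + v(t)^2 is itself a polynomial, so the arc length is an exact polynomial integral. The coefficients below are arbitrary illustrative values, not a planned UAV path.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Sketch of why a planar Pythagorean-hodograph (PH) quintic has an exact arc
# length: the parametric speed sqrt(x'^2 + y'^2) = u^2 + v^2 is a polynomial.

u = P([1.0, 0.5, -0.3])          # u(t), quadratic (illustrative coefficients)
v = P([0.2, 1.0, 0.4])           # v(t), quadratic

x_dot = u**2 - v**2              # hodograph components (degree 4)
y_dot = 2 * u * v
sigma = u**2 + v**2              # parametric speed, a polynomial by construction

length_exact = sigma.integ()(1.0) - sigma.integ()(0.0)   # exact over t in [0, 1]

# Numerical cross-check of the same length by a midpoint sum on sqrt(x'^2 + y'^2).
n = 200000
dt = 1.0 / n
t = (np.arange(n) + 0.5) * dt
length_num = np.sum(np.sqrt(x_dot(t)**2 + y_dot(t)**2)) * dt

print(f"exact length = {length_exact:.6f}, numerical check = {length_num:.6f}")
```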

Keywords: Path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart.

3 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

Japanese are affected by the most common causes of eyesight loss such as glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with a fundus camera, precisely fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the German company Optenso's optical engineering software (OpTaliX-LT 10.8). We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. We made the artificial eye of a plane-convex lens, a black spacer, and a hemispherical cup. The hemispherical cup had a small section of the paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. The software was created to display the examination target on the monitor and save examination data using C++Builder 10.2. The retinal fundus was displayed on the monitor at a length and width of 1 mm and a resolution of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The artificial eye's rear part was painted black and white in the central area. It was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the motion videos captured. The images display the artificial eye facing the center, right, and left directions. The three images were processed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, making a window, deleting peripheral area, and morphological operations. To calculate the artificial eye's fundus center, we added a gravity method to the program to calculate the gravity position of connected components. From the three images, the image processing could calculate the center position.
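
The "gravity method" mentioned at the end amounts to taking the centre of gravity (centroid) of the foreground pixels of a binarised frame; the sketch below demonstrates this on a synthetic image standing in for a frame extracted from the motion video.

```python
import numpy as np

# Sketch of the "gravity" (centroid) step used to locate the artificial eye's
# fundus centre: binarise the frame and take the centre of gravity of the
# foreground pixels.  The frame here is synthetic, for illustration only.

def gravity_center(binary):
    """Centroid (row, col) of the True pixels of a binary image."""
    rows, cols = np.nonzero(binary)
    return rows.mean(), cols.mean()

# Synthetic frame: a bright disc on a dark background.
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
frame = ((yy - 130) ** 2 + (xx - 180) ** 2) < 40 ** 2   # disc centred at (130, 180)

binary = frame > 0      # trivial threshold here; real frames need Otsu or similar
print(gravity_center(binary))   # -> approximately (130.0, 180.0)
```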

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.

2 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

A gas turbine-powered cruise Liner is scheduled to transport pilgrim passengers from Lagos-Nigeria to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air breathing machine, changes in the density and/or mass flow at the compressor inlet due to an encounter with variations in weather conditions induce negative effects on the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 oC and 1.103 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper seeks to evaluate how a simple cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. It is also an assessment that focuses on the effect of varying aerodynamic and hydrodynamic conditions which deteriorate the efficient operation of the propulsion system due to an increase in resistance that results from some projected levels of the ship hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages that cover a variety of weather conditions in winter, spring and summer seasons. Real-time daily temperatures and the sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output as well as the total fuel consumed per voyage including other performance variables were separately predicted under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office as well as the UK Hydrographic Office websites, while adopting the Beaufort scale for determining the magnitude of sea waves resulting from rough weather situations. The simulation of the gas turbine performance and voyage analysis was effected through the use of an integrated Cranfield-University-developed computer code known as ‘Turbomatch’ and ‘Poseidon’. It is a project that is aimed at developing a method for predicting the off design behavior of the marine gas turbine when installed and operated as the main prime mover for both propulsion and powering of all other auxiliary services onboard a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of the marine gas turbine part and full load performance as it relates to the fuel requirement for a complete voyage.

Keywords:

1 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling; they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has some strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
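
The data-driven evaluation loop described above (5-fold cross-validation of a Random Forest scored with RMSEP and MAEP) can be sketched as follows; the features and yields are synthetic stand-ins for the USDA county-scale records, and the percentage definition of MAEP used here is one common convention, assumed for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Sketch of 5-fold cross-validation of a Random Forest with RMSEP and MAEP.
# Features and yields below are synthetic stand-ins, not the USDA dataset.

rng = np.random.default_rng(0)
X = rng.normal(size=(720, 12))                                   # 720 records, 12 climate features
y = 100 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, 720)     # synthetic yield

kf = KFold(n_splits=5, shuffle=True, random_state=0)
rmsep, maep = [], []
for train_idx, test_idx in kf.split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    err = pred - y[test_idx]
    rmsep.append(np.sqrt(np.mean(err ** 2)))
    maep.append(np.mean(np.abs(err) / y[test_idx]) * 100)  # absolute error as % of observed yield

print(f"RMSEP = {np.mean(rmsep):.2f}, MAEP = {np.mean(maep):.2f}%")
```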

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.
