Search results for: control performance.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8522


242 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method

Authors: Mamidi Ramakrishna Rao

Abstract:

Doubly-fed induction generators (DFIGs) are currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short circuit faults. Further, many countries such as Canada, Germany, the UK, and Scotland have distinct grid codes relating to wind turbines. Accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy these requirements, including reactive current capability, an optimum electrical design is essential for satisfactory DFIG operation. This paper intends to optimize the equivalent circuit parameters of an electrical design for satisfactory DFIG performance. The direct search method has been used for optimization of the parameters. The variables selected include electromagnetic core dimensions (diameters and stack length), slot dimensions, the radial air gap between stator and rotor, and winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II) and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips (-0.3 to +0.3) have been considered. The optimum designs obtained for the three objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, such as efficiency, reactive power capability, or weight minimization.
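
As context for the direct search approach, the sketch below shows a minimal compass (pattern) search of the kind the abstract names, applied to a purely hypothetical objective; the variable names, starting point, and toy objective are illustrative assumptions, not the authors' actual DFIG model.

```python
import numpy as np

def toy_objective(x):
    # Hypothetical placeholder for a DFIG design objective
    # (e.g., negative efficiency or weight); NOT the authors' model.
    return np.sum((x - 0.5) ** 2)

def compass_search(f, x0, step=0.25, shrink=0.5, tol=1e-6, max_iter=1000):
    """Derivative-free direct search: poll +/- step along each coordinate,
    move to any improving point, otherwise shrink the step."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink          # refine the mesh
            if step < tol:
                break
    return x, fx

# e.g. four normalized design variables (core diameter, stack length,
# slot depth, air gap) -- illustrative only
best_x, best_f = compass_search(toy_objective, x0=np.zeros(4))
print(best_x, best_f)
```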

Keywords: Direct search, DFIG, equivalent circuit parameters, optimization.

241 Assessment of Obesity Parameters in Terms of Metabolic Age above and below Chronological Age in Adults

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Chronological age (CA) of individuals is closely related to obesity and generally affects the magnitude of obesity parameters. On the other hand, the close association between basal metabolic rate (BMR) and metabolic age (MA) is also a matter of concern. It is suggested that an MA higher than the CA indicates the need to improve the metabolic rate. In this study, the aim was to assess some commonly used obesity parameters, such as obesity degree, visceral adiposity, BMR and BMR-to-weight ratio, in several groups with varying differences between MA and CA values. The study comprises adults whose ages vary between 18 and 79 years. Four groups were constituted. Groups 1, 2, 3 and 4 were composed of 55, 33, 76 and 47 adults, respectively. Individuals exhibiting -1, 0 or +1 for their MA-CA values were placed in Group 1, which was considered the control group. Those whose MA-CA values varied between -5 and -10 participated in Group 2. Those whose MAs were above their real ages were divided into two groups [Group 3 (MA-CA from +5 to +10) and Group 4 (MA-CA from +11 to +12)]. Body mass index (BMI) values were calculated. A TANITA body composition monitor using bioelectrical impedance analysis technology was used to obtain values for obesity degree, visceral adiposity, BMR and BMR-to-weight ratio. The compiled data were evaluated statistically using a statistical package program, SPSS. Mean ± SD values were determined. Correlation analyses were performed. Statistical significance was accepted at p < 0.05. The increase in BMR was positively correlated with obesity degree. MAs and CAs of the groups were 39.9 ± 16.8 vs 39.9 ± 16.7 years for Group 1, 45.0 ± 15.3 vs 51.4 ± 15.7 years for Group 2, 47.2 ± 12.7 vs 40.0 ± 12.7 years for Group 3, and 53.6 ± 14.8 vs 42 ± 14.8 years for Group 4. BMI values of the groups were 24.3 ± 3.6 kg/m2, 23.2 ± 1.7 kg/m2, 30.3 ± 3.8 kg/m2, and 40.1 ± 5.1 kg/m2 for Groups 1, 2, 3 and 4, respectively. Values obtained for BMR were 1599 ± 328 kcal in Group 1, 1463 ± 198 kcal in Group 2, 1652 ± 350 kcal in Group 3, and 1890 ± 360 kcal in Group 4. A correlation was observed between BMR and MA-CA values in Group 1. No correlation was detected in the other groups. On the other hand, statistically significant correlations between MA-CA values and obesity degree, BMI as well as BMR-to-weight ratio were found in Group 3 and in Group 4. It was concluded that, upon consideration of these findings in terms of MA-CA values, the BMR-to-weight ratio was a much more useful indicator of the severe increase in obesity development than BMR. Also, the lack of associations between MA and BMR as well as the BMR-to-weight ratio emphasizes the importance of considering MA-CA values rather than MA alone.
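
For readers unfamiliar with the correlation analysis reported here, the snippet below is a minimal sketch of the same style of test in Python rather than SPSS; the data arrays are invented placeholders, not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical example values: MA-CA differences (years) and
# BMR-to-weight ratios for one group -- NOT the study's data.
ma_ca_diff = [5, 6, 7, 8, 9, 10, 6, 7]
bmr_per_kg = [22.1, 21.7, 21.0, 20.4, 20.1, 19.5, 21.3, 20.8]

r, p = pearsonr(ma_ca_diff, bmr_per_kg)
print(f"r = {r:.3f}, significant at p < 0.05: {p < 0.05}")
```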

Keywords: Basal metabolic rate, chronologic age, metabolic age, obesity degree.

240 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most interesting techniques being used today in our digital world. CBIR, commonly expanded as Content Based Image Retrieval, is an image processing technique which identifies relevant images and retrieves them based on the patterns extracted from the digital images. In this paper, two research works are presented using CBIR. The first work provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as Contourlet, Ridgelet and Shearlet can be utilized to retrieve texture features from the images. The features extracted are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier, labelling it as one of four classes, namely 1- Normal brain, 2- Benign tumour, 3- Malignant tumour and 4- Severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. These two approaches are a contribution to the medical field, giving better retrieval performance and tumour stage identification.
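
A minimal sketch of the train/predict stage described above, assuming texture feature vectors (e.g., from Contourlet, Ridgelet or Shearlet transforms) have already been extracted; the random features and class labels here are placeholders, and the transforms themselves are not shown.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder texture features (n_images x n_features) and labels
# 1=normal, 2=benign, 3=malignant, 4=severe -- synthetic, for shape only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(1, 5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Multi-class SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```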

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

239 Towards a New Era of Sustainability in the Automotive Industry: Strategic Human Resource Management and Green Technology Innovation

Authors: Reihaneh Montazeri Shatouri, Rosmini Omar, Kunio Igusa

Abstract:

Although the automotive industry has brought many benefits to human life, it is pointed out as one of the major causes of global air pollution, which has resulted in climate change, smog, greenhouse gases (GHGs), and human diseases. Since the auto industry is one of the largest consumers of fossil fuels, the realization of green innovations is becoming a crucial choice to meet the challenges of sustainable development. Recently, many auto manufacturers have embarked on green technology initiatives to gain a competitive advantage in the global market; however, innovative manufacturing systems and technologies can enhance operational performance only if human resource management is in place to elicit the motivation of the employees and develop their organizational expertise. No organization can perform at peak levels unless each employee is committed to the company goals and works as an effective team member. Strategic human resource practices are the primary means by which firms can shape the skills, attitudes, and behavior of individuals to align with the business's strategic objectives. This study investigates the comprehensive approach of multiple advanced technology innovations and human resource management at Toyota Motor Corporation, the market leader in full hybrid technology in the automotive industry. The HRM framework of the company is then described, and three sets of human resource practices that support its innovation-oriented HR system are presented. Finally, a conceptual framework for innovativeness in green technology in the automotive industry is proposed, applying a deliberate strategic HR management system and knowledge management with the intervening factors of organizational culture, knowledge application and knowledge sharing.

Keywords: Automotive Industry, Green Technology, Innovation, Strategic Human Resource Management

238 Optimization Approach on Flapping Aerodynamic Characteristics of Corrugated Airfoil

Authors: Wei-Hsin Sun, Jr-Ming Miao, Chang-Hsien Tai, Chien-Chun Hung

Abstract:

The development of biomimetic micro-aerial-vehicles (MAVs) with flapping wings is the future trend in military/domestic fields. The successful flight of MAVs is strongly related to the understanding of the unsteady aerodynamic performance of low Reynolds number airfoils under dynamic flapping motion. This study explored the effects of flapping frequency, stroke amplitude, and the inclined angle of the stroke plane on the lift force and thrust force of a bio-inspired corrugated airfoil with a 3³ full factorial design of experiment and ANOVA analysis. Unsteady vorticity flows over a corrugated thin airfoil executing flapping motion are computed with time-dependent two-dimensional laminar incompressible Reynolds-averaged Navier-Stokes equations on a conformal hybrid mesh. The tested freestream Reynolds number, based on the chord length of the airfoil as the characteristic length, is fixed at 10³. The dynamic mesh technique is applied to model the flapping motion of the corrugated airfoil. Instant vorticity contours over a complete flapping cycle clearly reveal that the flow mechanisms for lift force generation are dynamic stall, rotational circulation, and wake capture. The thrust force is produced as the leading edge vortex sheds from the trailing edge of the airfoil to form a reverse von Karman vortex street. Results also indicated that the inclined angle is the most significant factor for both the lift force and thrust force. There are strong interactions between the tested factors, which means that an optimization study on the parameters should be conducted in further runs.
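
The sketch below illustrates the kind of 3³ full factorial design and ANOVA workflow the abstract refers to, using statsmodels; the factor levels and the synthetic lift response are assumptions for demonstration, not the paper's CFD results.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical levels for the three factors (units are illustrative).
freqs = [5, 10, 15]          # flapping frequency, Hz
amps = [20, 30, 40]          # stroke amplitude, deg
angles = [0, 30, 60]         # inclined angle of stroke plane, deg

rows = []
rng = np.random.default_rng(1)
for f, a, g in itertools.product(freqs, amps, angles):  # 3^3 = 27 runs
    lift = 0.01 * f + 0.02 * a + 0.05 * g + rng.normal(0, 0.1)  # fake response
    rows.append({"freq": f, "amp": a, "angle": g, "lift": lift})
df = pd.DataFrame(rows)

model = ols("lift ~ C(freq) + C(amp) + C(angle)", data=df).fit()
print(anova_lm(model))   # main-effect significance table
```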

Keywords: biomimetic, MAVs, aerodynamic, ANOVA analysis.

237 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph that consists of a set of nodes and branches. In fact, this problem can be viewed as one of determining an optimal tree of the graph which simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements to the algorithm are made in the chromosome coding. In this paper an implementation of the algorithm presented in [7] is applied, with modifications to the load flow program, and a comparison of that method with the proposed method is carried out. In [7] an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules. This algorithm reduces the number of load flow runs, reduces the switching combinations to a smaller number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB programs are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than the method in [7] and also other methods.
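
As a minimal illustration of the graph-theoretic view above (candidate configurations as spanning trees of the network graph), the sketch below uses networkx to check that opening one branch per loop leaves a radial, i.e. tree, topology; the 6-node network and branch list are invented for the example, not the 33-bus test system.

```python
import networkx as nx

# Hypothetical small network: nodes 0..5, branches with switches.
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5),
            (1, 4), (2, 5)]          # the last two close loops

def is_radial(open_branches):
    """A configuration is feasible if the closed branches form a
    spanning tree: all nodes present and the graph is a tree."""
    g = nx.Graph([b for b in branches if b not in open_branches])
    return g.number_of_nodes() == 6 and nx.is_tree(g)

print(is_radial({(1, 4), (2, 5)}))   # True: one branch opened per loop
print(is_radial({(0, 1), (1, 2)}))   # False: the network splits
```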

Keywords: Distribution System, Reconfiguration, Loss Reduction, Graph Theory, Optimization, Genetic Algorithm

236 Incorporating Circular Economy into Passive Design Strategies in Tropical Nigeria

Authors: Noah G. Akhimien, Eshrar Latif

Abstract:

The natural environment is in need of urgent rescue due to the dilapidation and recession of resources. Passive design strategies have proven to be one of the effective ways to reduce CO2 emissions and improve building performance. On the other hand, there is a huge drop in material availability due to a poor recycling culture. Consequently, building waste poses an environmental hazard due to unrecycled building materials from construction and deconstruction. Buildings can be seen as material banks for a circular economy; therefore, incorporating the circular economy into passive housing will not only safeguard the climate but also improve resource efficiency. The study focuses on incorporating a circular economy into passive design strategies for an affordable, energy and resource efficient residential building in Nigeria. Carbon dioxide (CO2) concentration is still on the increase, as buildings are responsible for a significant amount of this emission globally. Therefore, prompt measures need to be taken to combat the effects of global warming and associated threats. Nigeria is rapidly growing in human population; resources, on the other hand, have receded greatly, and there is an abrupt need for recycling, even in the built environment. It is necessary that Nigeria responds to these challenges effectively and efficiently, considering building resources and energy. Passive design strategies were assessed using simulations to obtain qualitative and quantitative data, which were inferred to case studies as they relate to the Nigerian climate. Building materials were analysed using the ReSOLVE model in order to explore possible recycling phases. This provided relevant information and strategies to illustrate the possibility of a circular economy in passive buildings. The study offers an alternative approach, as it is the general principle for the reworking of an economy on ecological lines in passive housing by closing material loops in a circular economy.

Keywords: Building, circular economy, efficiency, passive design, sustainability.

235 In vivo Antidiabetic and Antioxidant Potential of Pseuduvaria macrophylla Extract

Authors: Aditya Arya, Hairin Taha, Ataul Karim Khan, Nayiar Shahid, Hapipah Mohd Ali, Mustafa Ali Mohd

Abstract:

This study investigated the antidiabetic and antioxidant potential of Pseuduvaria macrophylla bark extract in streptozotocin-nicotinamide induced type 2 diabetic rats. LC-MS-QTOF and NMR experiments were done to determine the chemical composition of the methanolic bark extract. For the in vivo experiments, the STZ (60 mg/kg b.w., 15 min after 120 mg/kg nicotinamide, i.p.) induced diabetic rats were treated with the methanolic extract of Pseuduvaria macrophylla (200 and 400 mg/kg b.w.) and glibenclamide (2.5 mg/kg) as the positive control. Biochemical parameters were assayed in the blood samples of all groups of rats. The pro-inflammatory cytokines, antioxidant status and plasma transforming growth factor beta-1 (TGF-β1) were evaluated. The histology of the pancreas was examined, and its insulin expression level was observed by immunohistochemistry. In addition, the expression of glucose transporters (GLUT 1, 2 and 4) was assessed in pancreas tissue by western blot analysis. The outcomes of the study showed that the methanolic bark extract of Pseuduvaria macrophylla normalized the elevated blood glucose levels and improved serum insulin and C-peptide levels, with a significant increase in the antioxidant enzyme reduced glutathione (GSH) and a decrease in the level of lipid peroxidation (LPO). Additionally, the extract markedly decreased the levels of serum pro-inflammatory cytokines and transforming growth factor beta-1 (TGF-β1). Histopathology analysis demonstrated that Pseuduvaria macrophylla has the potential to protect the pancreas of diabetic rats against peroxidation damage by downregulating oxidative stress and elevated hyperglycaemia. Furthermore, the expression of insulin protein, GLUT-1, GLUT-2 and GLUT-4 in pancreatic cells was enhanced. The findings of this study support the antidiabetic claims of Pseuduvaria macrophylla bark.

Keywords: Diabetes mellitus, Pseuduvaria macrophylla, alkaloids, caffeic acid.

234 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program across multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research, using queuing analysis and assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms make decisions about assigning tasks to processors at compile time, based on the average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and make decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of the above algorithms. In the future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively based on the comparative parameters.
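
To make the static/dynamic distinction concrete, here is a small simulation sketch comparing random static assignment with a dynamic join-shortest-queue policy; the host count, arrival model and single-job-per-step service are arbitrary simplifying assumptions, not parameters from the paper.

```python
import random

HOSTS, STEPS, ARRIVAL_RATE = 4, 10_000, 3.0  # assumed parameters

def simulate(dynamic):
    queues = [0] * HOSTS
    imbalance = 0
    for _ in range(STEPS):
        # Crude stand-in for Poisson arrivals: 0..6 jobs/step, mean ~3.
        for _ in range(random.randint(0, int(2 * ARRIVAL_RATE))):
            if dynamic:
                i = min(range(HOSTS), key=lambda h: queues[h])  # join shortest queue
            else:
                i = random.randrange(HOSTS)                     # static random choice
            queues[i] += 1
        # Count steps where one host idles while another has jobs waiting.
        if min(queues) == 0 and max(queues) > 1:
            imbalance += 1
        # Each host serves at most one job per step.
        queues = [max(0, q - 1) for q in queues]
    return imbalance / STEPS

print("static :", simulate(dynamic=False))
print("dynamic:", simulate(dynamic=True))
```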

Keywords: SLB, DLB, Host, Algorithm and Load.

233 Investigation of the Properties of Epoxy Modified Binders Based on Epoxy Oligomer with Improved Deformation and Strength Properties

Authors: Hlaing Zaw Oo, N. Kostromina, V. Osipchik, T. Kravchenko, K. Yakovleva

Abstract:

The modification of ED-20 epoxy resin with synthesized vinyl-containing compounds is considered. It is shown that the introduction of vinyl-containing compounds into a composition based on the epoxy resin ED-20 allows adjusting the technological and operational characteristics of the binder. To improve the properties of the epoxy resin, the following modifiers were selected: polyvinyl formal ethylal, polyvinyl butyral, and a composition of linear and aromatic amines (Aramine) as a hardener. A wide range of epoxy resin hardeners now exists, which allows varying the technological properties of compositions as well as their thermophysical and strength characteristics. The nature of the Aramine-type hardener has a significant impact on the spatial parameters of the network, the glass transition temperature, and the strength characteristics. Epoxy composite materials based on ED-20 modified with polyvinyl butyral were obtained and investigated. It is shown that compositions based on derivatives of polyvinyl butyral and ED-20 yield composite materials with a better complex of deformation-strength, adhesion and thermal properties, better water resistance, frost resistance, chemical resistance, and impact strength. The magnitude of the effect depends on the chemical structure, temperature and curing time. In the concentration range where the composite synergy effect appears, the values of strength and stiffness significantly exceed the corresponding parameters of the individual components of the mixture. Polymer-polymer compositions form their own class of materials with diverse specific properties that ensure their competitive application. Coatings with high performance under cyclic loading have been obtained based on epoxy oligomers modified with vinyl-containing compounds.

Keywords: Epoxy resins, modification, vinyl-containing compounds, deformation and strength properties.

232 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction technique as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and also its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network and three Multi Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The new proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), the discrimination power of the classifier in isolating different beat types of each record was assessed, and an average accuracy of Acc = 98.51% was obtained. Also, the proposed method was applied to six arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average value of Acc = 95.6% was achieved. To evaluate the performance quality of the new proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
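
A minimal sketch of the curve length feature named above: for a sampled segment, the curve length is the summed Euclidean distance between consecutive points. The equal-width sector split and the synthetic QRS pulse here are simplifications for illustration, not the authors' exact polar-sector delineation pipeline.

```python
import numpy as np

def curve_length(y, dx=1.0):
    """Summed point-to-point arc length of a sampled curve y(x)."""
    dy = np.diff(y)
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

# Synthetic QRS-like excerpt split into 8 equal sectors (a simplification
# of the polar-sector partition described in the abstract).
t = np.linspace(-1, 1, 160)
qrs = np.exp(-40 * t ** 2)                 # toy QRS-shaped pulse
sectors = np.array_split(qrs, 8)
features = [curve_length(s) for s in sectors]
print(np.round(features, 3))               # 8-element feature vector
```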

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.

231 Simultaneous Treatment and Catalytic Gasification of Olive Mill Wastewater under Supercritical Conditions

Authors: Ekin Kıpçak, Sinan Kutluay, Mesut Akgün

Abstract:

Recently, a growing interest has emerged in the development of new and efficient energy sources, due to the inevitable extinction of nonrenewable energy reserves. One of these alternative sources, which has great potential and sustainability to meet the energy demand, is biomass energy. This significant energy source can be utilized with various energy conversion technologies, one of which is biomass gasification in supercritical water. Water, the most important solvent in nature, has very important characteristics as a reaction solvent under supercritical conditions. At temperatures above its critical point (374.8 °C and 22.1 MPa), water becomes more acidic and its diffusivity increases. Working with water at high temperatures increases the thermal reaction rate, which consequently leads to better dissolution of the organic matter and fast reaction with oxygen. Hence, supercritical water offers a control mechanism depending on solubility, excellent transport properties based on its high diffusivity, and new reaction possibilities for hydrolysis or oxidation. In this study the gasification of a real biomass, namely olive mill wastewater (OMW), in supercritical water is investigated with the use of Pt/Al2O3 and Ni/Al2O3 catalysts. OMW is a by-product obtained during olive oil production, which has a complex nature characterized by a high content of organic compounds and polyphenols. These properties give OMW a significant pollution potential, but at the same time, the high content of organics makes OMW a desirable biomass candidate for energy production. All of the catalytic gasification experiments were performed at five different reaction temperatures (400, 450, 500, 550 and 600 °C), under a constant pressure of 25 MPa. For the experiments conducted with the Ni/Al2O3 catalyst, the effect of five reaction times (30, 60, 90, 120 and 150 s) was investigated. However, since similar gasification efficiencies could be obtained at shorter times, the experiments with the Pt/Al2O3 catalyst were performed with different reaction times (10, 15, 20, 25 and 30 s). Through these experiments, the effects of temperature, time and catalyst type on the gasification yields and treatment efficiencies were investigated.

Keywords: Catalyst, Gasification, Olive mill wastewater, Supercritical water.

230 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 95%.
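
The following is a minimal sketch of the PCA-based one-class scheme the abstract outlines: train on standard-condition feature vectors only, then flag points with large reconstruction error as anomalies. The frequency values, threshold rule and dimensions are illustrative assumptions, not the Z-24 pipeline itself.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Placeholder "fundamental frequency" vectors (Hz): 4 tracked modes.
normal = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))
test = np.vstack([normal[:50],
                  normal[:50] - [0.3, 0.0, 0.2, 0.0]])  # shifted rows mimic "damage"

pca = PCA(n_components=2).fit(normal)           # train on standard condition only

def reconstruction_error(X):
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

threshold = np.percentile(reconstruction_error(normal), 99)  # assumed rule
flags = reconstruction_error(test) > threshold
print("anomalies flagged:", int(flags.sum()), "of", len(test))
```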

Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.

229 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user's decision making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
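
As background for the matrix factorization module mentioned above, here is a minimal SGD factorization sketch over a toy user-hotel rating matrix; the matrix, latent dimension and learning rate are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

# Toy user x hotel ratings (0 = unobserved) -- invented example data.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

k, lr, reg, epochs = 2, 0.01, 0.02, 2000   # assumed hyperparameters
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # hotel factors

obs = [(u, i) for u in range(R.shape[0]) for i in range(R.shape[1]) if R[u, i] > 0]
for _ in range(epochs):
    for u, i in obs:
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])   # gradient step with L2 penalty
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))   # predicted ratings, including unobserved cells
```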

Keywords: Tourism, hotel recommender system, hybrid, implicit features.

228 Cascaded Transcritical/Supercritical CO2 Cycles and Organic Rankine Cycles to Recover Low-Temperature Waste Heat and LNG Cold Energy Simultaneously

Authors: Haoshui Yu, Donghoi Kim, Truls Gundersen

Abstract:

Low-temperature waste heat is abundant in the process industries, and large amounts of Liquefied Natural Gas (LNG) cold energy are discarded without being recovered properly in LNG terminals. Power generation is an effective way to utilize low-temperature waste heat and LNG cold energy simultaneously. Organic Rankine Cycles (ORCs) and CO2 power cycles are promising technologies to convert low-temperature waste heat and LNG cold energy into electricity. If waste heat and LNG cold energy are utilized simultaneously in one system, the combined system may outperform separate systems utilizing low-temperature waste heat and LNG cold energy, respectively. Low-temperature waste heat acts as the heat source and LNG regasification acts as the heat sink in the combined system. Due to the large temperature difference between the heat source and the heat sink, cascaded power cycle configurations are proposed in this paper. Cascaded power cycles can improve the energy efficiency of the system considerably. The cycle operating at a higher temperature to recover waste heat is called the top cycle, and the cycle operating at a lower temperature to utilize LNG cold energy is called the bottom cycle in this study. The condensation heat of the top cycle is used as the heat source of the bottom cycle. The top cycle can be an ORC, a transcritical CO2 (tCO2) cycle or a supercritical CO2 (sCO2) cycle, while the bottom cycle can only be an ORC due to its low temperature range. However, the thermodynamic paths of the tCO2 and sCO2 cycles are different from that of an ORC. The tCO2 and sCO2 cycles perform better than an ORC for sensible waste heat recovery due to a better temperature match with the waste heat source. Different combinations of the tCO2 cycle, sCO2 cycle and ORC are compared to screen the best configurations of the cascaded power cycles. The influence of the working fluid and the operating conditions is also investigated in this study. Each configuration is modeled and optimized in Aspen HYSYS. The results show that the cascaded tCO2/ORC performs better compared with the cascaded ORC/ORC and the cascaded sCO2/ORC for the case study.
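
For intuition about why cascading helps, a short worked relation (a textbook energy balance assuming ideal heat transfer between the cycles and neglecting losses, not a result from the paper): if the top cycle converts heat input Q_in with efficiency η_top and rejects its condensation heat to a bottom cycle of efficiency η_bot, then

```latex
W_{top} = \eta_{top} Q_{in}, \qquad
Q_{cond} = (1 - \eta_{top})\, Q_{in}, \qquad
W_{bot} = \eta_{bot}\, Q_{cond}
```

so the combined efficiency is

```latex
\eta_{total} = \frac{W_{top} + W_{bot}}{Q_{in}}
             = \eta_{top} + (1 - \eta_{top})\,\eta_{bot},
```

which exceeds η_top whenever the bottom cycle produces any work at all.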

Keywords: LNG cold energy, low-temperature waste heat, organic Rankine cycle, supercritical CO2 cycle, transcritical CO2 cycle.

227 Appropriate Technology: Revisiting the Movement in Developing Countries for Sustainability

Authors: Jayshree Patnaik, Bhaskar Bhowmick

Abstract:

The economic growth of any nation is steered by and dependent on innovation in technology. It can reasonably be argued that technology has enhanced the quality of life. Technology is linked both with an economic and a social structure. But there are some parts of the world, or communities, which are yet to reap the benefits of technological innovation. Businesses and organizations are now well equipped with cutting-edge innovations that improve firm performance and provide them with a competitive edge, but rarely does this have a positive impact on communities that are weak and marginalized. In recent times, it is observed that communities are actively handling social or ecological issues with the help of indigenous technologies. Thus, "Appropriate Technology" comes into the discussion, which is quite prevalent in the rural third world. Appropriate technology grew as a movement in the mid-1970s during the energy crisis, but it lost its stance in the following years when people started to describe it as an inferior or dead technology. Basically, no technology is inherently inferior or too sophisticated for a particular region. The relevance of appropriate technology lies in bringing technology to a larger and weaker section of the community, where the "bottom of the pyramid" can pay for technology if they find the price affordable. This is a theoretical paper which primarily revolves around how appropriate technology has faded and evolved again in both developed and developing countries. The paper will focus on the various concepts, the history and the challenges faced by appropriate technology over the years. Appropriate technology follows a documented approach but lags in overall design and diffusion. Diffusion of technology into the poorer sections of the community remains an open question to the present time. Appropriate technology is multi-disciplinary in nature; this openness therefore allows varied working models for different problems. Appropriate technology is a friendly technology that seeks to improve the lives of people in a constrained environment by providing affordable and sustainable solutions. Appropriate technology needs to be redefined in the era of modern technological advancement for sustainability.

Keywords: Appropriate technology, community, developing country, sustainability.

226 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine-tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also arise while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme, while the positions of its joints are controlled separately. Each joint is actuated through a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces when the robot structure is deformable or not, and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. The consideration of link flexibility has highlighted an increase in the magnitude of the cutting forces. This proof of concept will aim to enrich the database of results in robotic machining for potential improvements in production.
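
As an illustration of the inverse kinematics scheme mentioned above, the sketch below iterates a Jacobian-transpose update for a rigid 3-link planar arm; the link lengths, gain and target are invented, and the real simulator additionally models link flexibility and servo dynamics.

```python
import numpy as np

L = np.array([0.5, 0.4, 0.3])        # assumed link lengths (m)

def fk(q):
    """End-effector (x, y) of a 3-link planar arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

q = np.array([0.3, 0.3, 0.3])        # initial joint angles (rad)
target = np.array([0.7, 0.5])        # desired tool position (m)
for _ in range(200):                  # Jacobian-transpose IK iteration
    e = target - fk(q)
    q += 0.5 * jacobian(q).T @ e      # assumed gain of 0.5
print(np.round(fk(q), 4), "vs target", target)
```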

Keywords: Control, machining, multibody, robotic, simulation.

225 An Inclusion Project for Deaf Children into a Northern Italy Contest

Authors: G. Tamanza, A. Bossoni

Abstract:

Eighty-four deaf students (from primary school to college) and their families participated in this inclusion project in cooperation with numerous institutions in northern Italy (Brescia, Lombardy). Participants were either congenitally deaf or their deafness was related to other pathologies. This research promoted the integration of deaf students as they pass from primary school to high school to college. Learning methods and processes were studied that focused on encouraging individual autonomy and socialization. The research team and its collaborators included school teachers, speech therapists, psychologists and home tutors, as well as teaching assistants, child neuropsychiatrists and other external authorities involved with social inclusion programs for deaf persons. Deaf children and their families were supported in terms of inclusion and were made aware of the research team's focus on Bisogni Educativi Speciali (BES, or Special Educational Needs) (L.170/2010 - DM 5669/2011). This project included a diagnostic and evaluative phase as well as an operational one. Results demonstrated that the deaf children were highly satisfied and confident; academic performance improved and collaboration in school increased. Deaf children felt that they had access to high school and college. Empowerment of the families of deaf children improved in terms of networking among local services that deal with the deaf, and family satisfaction also improved. We found that teachers and those who gave support to deaf children increased their professional skills. Achieving autonomy and instrumental, communicative and relational abilities were also found to be crucial. Project success was determined by temporal continuity, clear theoretical methodology, a strong alliance for the project direction and a resilient team response.

Keywords: Autonomy, inclusion, skills, well-being.

224 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5~2.5 minutes SPECT image - 5~10 minutes SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10 minutes SPECT image - liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study. The visualization of the inferior myocardium was improved. In past reports, high accumulation overlapping from the liver made the inferior myocardium un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
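
In array terms, the time subtraction described above is simple voxel-wise arithmetic on the reconstructed frames; the sketch below assumes the two reconstructed volumes are already available as NumPy arrays, and the clipping of negative voxels to zero is an implementation assumption rather than a detail given in the abstract.

```python
import numpy as np

def time_subtraction(early, late):
    """early: 0.5-2.5 min reconstruction (liver dominates);
    late: 5-10 min reconstruction (liver + myocardium)."""
    liver_only = np.clip(early - late, 0, None)   # approximate liver component
    corrected = np.clip(late - liver_only, 0, None)
    return corrected

# Toy 1-D "profiles" standing in for 3-D volumes (illustrative only).
early = np.array([9.0, 8.0, 2.0, 1.0])
late = np.array([5.0, 4.0, 3.0, 3.0])
print(time_subtraction(early, late))   # liver signal suppressed
```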

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector.

223 Heavy Deformation and High-Temperature Annealing Microstructure and Texture Studies of TaHfNbZrTi Equiatomic Refractory High Entropy Alloy

Authors: Veeresham Mokali

Abstract:

Refractory alloys are crucial for high-temperature applications to improve performance and reduce cost. They are used in several fields such as aerospace, outer space, military and defense, nuclear power plants, automobiles, and industry. Conventional refractory alloys show greater stability at high temperatures; in contrast, they have operational limitations due to their low melting temperatures. However, there is a huge requirement to improve the operational temperatures of refractory alloys and replace the conventional alloys. The newly emerging refractory high entropy alloys (RHEAs) could be alternative materials to conventional refractory alloys, fulfilling the demands and requirements of various practical applications in the future. The RHEA TaHfNbZrTi was prepared through an arc melting process, and the annealing behavior of the severely deformed equiatomic RHEA TaHfNbZrTi has been investigated. To obtain the deformed condition, the alloy was cold-rolled to 90% thickness reduction and then subjected to an annealing process to observe recrystallization and microstructural evolution in the range of 800 °C to 1400 °C. The cold-rolled (90%) condition shows the presence of microstructural heterogeneity. The microstructure after annealing at 800 °C reveals partial recrystallization, while annealing treatments in the range of 850 °C to 1400 °C exhibit completely recrystallized microstructures, followed by coarsening with increasing annealing temperature. The deformed and annealed conditions featured the development of body-centered cubic (BCC) fiber textures. This experimental investigation of heavy deformation followed by high-temperature annealing up to 1400 °C will contribute to the understanding of the microstructure and texture evolution of emerging RHEAs.

Keywords: Refractory high entropy alloys, cold-rolling, annealing, microstructure, texture.

222 Hypertensive Response to Maximal Exercise Test in Young and Middle Age Hypertensive on Blood Pressure Lowering Medication: Monotherapy vs. Combination Therapy

Authors: James Patrick A. Diaz, Raul E. Ramboyong

Abstract:

Background: Hypertensive response during maximal exercise testing provides important information on the level of blood pressure control and the evaluation of treatment. Method: A single-center retrospective descriptive study was conducted among 117 young (aged 20 to 40) and middle-aged (aged 40 to 65) hypertensive patients who underwent a treadmill stress test. All were on maintenance frontline medication, either monotherapy (angiotensin-converting enzyme inhibitor/angiotensin receptor blocker [ACEi/ARB], calcium channel blocker [CCB], or the diuretic hydrochlorothiazide [HCTZ]) or combination therapy (ARB+CCB, ARB+HCTZ), and attained maximal exercise on the treadmill stress test (TMST) with a hypertensive response (systolic blood pressure: male >210 mm Hg, female >190 mm Hg; diastolic blood pressure >100 mm Hg; or an increase of >10 mm Hg at any time during the test) on the Bruce or Modified Bruce protocol. Exaggerated blood pressure response during exercise (systolic [SBP] and diastolic [DBP]), peak exercise blood pressure (SBP and DBP), recovery period (SBP and DBP), tests for ischemia and the antihypertensive medication/s were investigated. Analysis of variance and the chi-square test were used for statistical analysis. Results: Hypertensive responses on maximal exercise testing were seen mostly among the female (P < 0.000) and middle-aged (P < 0.000) patients. Exaggerated diastolic blood pressure responses were significantly lower in patients who were taking a CCB (P < 0.004). A longer recovery period, with a delayed decline in SBP, was observed in patients taking ARB+HCTZ (P < 0.036). There were no significant differences in the level of exaggerated systolic blood pressure response or during peak exercise (both systolic and diastolic) in patients using either monotherapy or combination antihypertensives. Conclusion: Calcium channel blockers provided a lower exaggerated diastolic BP response during maximal exercise testing in hypertensive middle-aged patients. Patients on combination therapy using ARB+HCTZ exhibited a longer recovery period of systolic blood pressure.

Keywords: Antihypertensive, exercise test, hypertension, hypertensive response.

221 Effect of a Gravel Bed Flocculator on the Efficiency of a Low Cost Water Treatment Plants

Authors: Alaa Hussein Wadi

Abstract:

The principal objective of a water treatment plant is to produce water that satisfies a set of drinking water quality standards at a reasonable price to the consumers. The gravel-bed flocculator provides a simple and inexpensive design for flocculation in small water treatment plants (less than 5000 m3/day capacity). The packed bed of gravel provides ideal conditions for the formation of compact settleable flocs because of the continuous recontact provided by the sinuous flow of water through the interstices formed by the gravel. The field data, which were obtained from the operation of the water supply treatment unit, cover the physical, chemical and biological qualities of the raw and settled water during the operation of the treatment unit. The experiments were carried out with the aim of assessing the efficiency of the gravel filter in removing turbidity and pathogenic bacteria from the raw water. The water treatment plant, which was constructed for the treatment of river water, was in principle a rapid sand filter. The results show that the average turbidity of the settled water was 4.83 NTU with a standard deviation of 2.893 NTU. This indicated that the removal efficiency of the sedimentation tank (gravel filter) was about 67.8%. The pH values fluctuated between 7.75 and 8.15, indicating the alkaline nature of the raw water of the river Shatt Al-Hilla, as expected. Raw water pH is depressed slightly following alum coagulation. The pH of the settled water ranged from 7.75 to a maximum of 8.05. The bacteriological tests carried out on the water samples were the total coliform test, the E. coli test, and the plate count test. In each test the procedure used was as outlined in the Standard Methods for the Examination of Water and Wastewater (APHA, AWWA, and WPCF, 1985). The gravel filter exhibited a low performance in removing bacterial load: the percentage bacterial removal was maximum for the total plate count (19%) and minimum for total coliform (16.82%).
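
For clarity, the removal efficiency quoted above follows the usual relation between influent and effluent concentrations; back-calculating from the reported figures (an inference from the abstract's numbers, not a stated measurement), the average raw water turbidity would have been roughly 15 NTU:

```latex
\eta = \left(1 - \frac{C_{settled}}{C_{raw}}\right) \times 100\%
\quad\Rightarrow\quad
C_{raw} \approx \frac{4.83\ \mathrm{NTU}}{1 - 0.678} \approx 15\ \mathrm{NTU}
```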

Keywords: Gravel bed flocculator, turbidity, total coliform.

220 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

In recent years, fire accidents have steadily increased and the amount of property damage caused by the accidents has gradually risen. In damaging a building structure, fire incidents bring about not only such property damage but also strength degradation and member deformation; as a result, the structural capacity of the building is undermined. Examining the degradation and the deformation is very important because reusing a building is more economical than reconstruction. Therefore, engineers need to investigate the strength degradation and member deformation carefully, and make sure that they apply the right rehabilitation methods. This study aims at evaluating the deformation characteristics of fire damaged and rehabilitated normal strength concrete beams through both experiments and finite element analyses. For the experiments, control beams, fire damaged beams and rehabilitated beams are tested to examine deformation characteristics. Ten test beam specimens with a compressive strength of 21 MPa are fabricated, and the main test variables are cover thicknesses of 40 mm and 50 mm and fire exposure times of 1 hour or 2 hours. After heating, the fire damaged beams are air-cured for 2 months, and the rehabilitated beams are repaired with polymeric cement mortar after removal of the fire damaged concrete cover. All beam specimens are tested under four-point loading. FE analyses are executed to investigate the effects of the main parameters applied in the experimental study. Test results show that both the maximum load and the stiffness of the rehabilitated beams are higher than those of the fire damaged beams. In addition, the structural behaviors predicted by the analyses also show a good rehabilitation effect, and the predicted load-deflection curves are similar to the experimental results. In the future, the proposed analytical method can be used to predict the deformation characteristics of fire damaged and rehabilitated concrete beams without the time and cost burden of the experimental process.

Keywords: Fire, Normal strength concrete, Rehabilitation, Reinforced concrete beam.

219 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: M. Malek Yarand, H. Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the square watermelon's characteristics and its production limitations. The performance of the mechanical force gauge and the product itself are also described. There are three main designable gauge models: a. hydraulic gauge, b. strain gauge, and c. mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leakage into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate pressure using the direct force measured by a strain gauge. The main advantage of strain gauges over spring types is their high precision in measurements, but given the lack of conformity of the strain gauge working range with watermelon growth, calculations were faced with problems. Finally, the mechanical pressure gauge has advantages including: the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the required physical conditions of melon growth; high air-conditioning capability; the ability to permit sunlight to reach the melon rind (no yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual check capability of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; low costs; and so forth.

Keywords: Mechanical Force Gauge, Mold, Reshaped Fruit, Square Watermelon.

218 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods to simulate real turbulence structures by using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows, based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly by intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source OpenFOAM CFD libraries. The role of the flux limiter schemes, namely Gamma, superBee, van-Albada and van-Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with the well-established Direct Numerical Simulation (DNS) results for studying wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van-Leer and van-Albada limiters produced more diffusion and overpredicted the velocity profiles, while the superBee scheme reproduced velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviated slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
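
For reference, three of the named limiters have standard closed forms as functions of the successive-gradient ratio r (the Gamma scheme is an NVD-based blending scheme and does not reduce to a simple psi(r), so it is omitted here); a minimal sketch:

```python
import numpy as np

def superbee(r):
    # psi(r) = max(0, min(2r, 1), min(r, 2))
    return np.maximum(0, np.maximum(np.minimum(2 * r, 1), np.minimum(r, 2)))

def van_leer(r):
    # psi(r) = (r + |r|) / (1 + |r|)
    return (r + np.abs(r)) / (1 + np.abs(r))

def van_albada(r):
    # psi(r) = (r^2 + r) / (r^2 + 1), clipped to 0 for r < 0
    return np.where(r > 0, (r ** 2 + r) / (r ** 2 + 1), 0.0)

r = np.linspace(-1, 3, 5)
for f in (superbee, van_leer, van_albada):
    print(f.__name__, np.round(f(r), 3))
```

All three satisfy psi(1) = 1 and lie within Sweby's TVD region; superbee follows the region's upper boundary, which is consistent with the lower numerical diffusion reported for it above.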

Keywords: Flux limiters, MILES, OpenFOAM, turbulence structures, TVD schemes.

217 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These compounds are responsible for learning disabilities, impaired brain development, bodily deformations, cancer, reproductive abnormalities in females, and decreased sperm count in human males. Although the discharge of these compounds into the environment cannot be stopped, their levels can be curbed through proper evaluation and detection techniques. The available techniques for their determination mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS), and gas chromatography-mass spectrometry (GC-MS). These techniques are accurate and reliable but have certain limitations, such as the need for skilled personnel, long analysis times, interference, and the requirement of pretreatment steps. Moreover, they are laboratory-bound and require large sample volumes for analysis. In view of these facts, new methods for detecting endocrine disrupting compounds should be devised that promise high specificity, ultra-high sensitivity, cost effectiveness, efficiency, and ease of operation. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining considerable attention among researchers. The bioelement present in such systems makes the developed sensors selective towards the analyte of interest, while nanomaterials provide a large surface area, good electron communication, enhanced catalytic activity, and possibilities for chemical modification. In many cases, nanomaterials also serve as an electron mediator or electrocatalyst for certain analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

216 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China

Authors: Tingke Wu, Man Yuan

Abstract:

The “National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important step in putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation, and security system construction. Hubei province, the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has evolved over a long period under multiple internal and external driving forces, yet the main causes of its formation and the effective ways of optimizing it remain unclear. By analyzing land use data from 2016, this paper reveals the present status of territory in Hubei. Combined with economic and social data and construction information, the driving forces behind the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate distinctively. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate agglomeration effects; transportation infrastructure construction, which leads to axial expansion; and significant provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established based on development intensity control, out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern can be established. These findings provide a basis for Hubei territorial planning and a reference for future territorial planning in other provinces.

Keywords: Driving forces, Hubei, optimizing strategies, spatial pattern, territory.

215 Oily Sludge Bioremediation Pilot Plant Project, Nigeria

Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John

Abstract:

The Brass terminal, one of several crude oil and petroleum product storage/handling facilities in the Niger Delta, was built in the 1980s. Activities at this site have, over the years, released crude oil into the 3 m-deep, 1500 m-long canal lying adjacent to the terminal, leaving oil floating on the water and the sediment heavily polluted. To ensure effective clean-up, three major activities were planned: site characterization, construction and testing of a bioremediation pilot plant, and full-scale bioremediation of contaminated sediment and bank soil by land farming. The canal was delineated into 12 lots, and each was characterized with reference to the floating oily phase, the contaminated sediment, and the canal bank soil. Based on the site characterization, a pilot plant for on-site bioremediation was designed and a treatment basin constructed for the pilot bioremediation test. Following a designed sampling protocol, samples from this pilot plant were collected and analyzed at two laboratories as a quality assurance/quality control check. Results showed that the upstream section of Brass Canal is contaminated with a dark, thick, and viscous oily film with a characteristic hydrocarbon smell, while downstream a thin oily film interspersed with water was observed. Sediments were dark, mixed with brownish sandy soil, with TPH ranging from 17,800 mg/kg in Lot 1 samples to 88,500 mg/kg in Lot 12 samples. The canal bank soil was sandy from the ground surface to 3 m below ground surface (bgs), silty-sandy and brownish below that, while subsurface soil (4-10 m bgs) was sandy-clayey and whitish/grayish with a typical hydrocarbon smell. Preliminary results obtained so far have been very promising but remain proprietary. To the best of the authors' knowledge of the technical literature, this project is the first large-scale on-site bioremediation project in the Niger Delta region of Nigeria.

Keywords: Bioremediation, Contaminated sediment, Land farming, Oily sludge, Oil Terminal.

214 Auto-Calibration and Optimization of Large-Scale Water Resources Systems

Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari

Abstract:

Modeling water resource systems has always been a challenge. While innovative methodological developments evolve alongside computer science, researchers are likely to confront ever larger and more complex water resource systems owing to increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme based on mathematical programming has been applied to the large-scale water resource model of Gilan. The calibration tunes the unknown return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable and present a rational picture of the system. Subsequently, the optimized parameters were used in a basin-scale linear optimization model capable of evaluating the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to a minimum water shortage in the reduced-inflow scenario.
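The general idea of calibrating unknown return flows against gauged outflows can be sketched as a bounded least-squares fit. The toy mass balance below is an illustrative stand-in for the Gilan/Sefidroud network model, not the paper's formulation; all names, data, and the three-site structure are assumptions.

    # Minimal sketch (assumed toy surrogate, not the paper's model): estimate
    # return-flow fractions of demand sites so simulated outflows match gauges.
    import numpy as np
    from scipy.optimize import least_squares

    # toy data: reach inflows (m3/s) and deliveries to 3 demand sites per period
    inflow     = np.array([120.0, 95.0, 140.0, 110.0])
    deliveries = np.array([[30.0, 20.0, 15.0],
                           [25.0, 18.0, 12.0],
                           [35.0, 22.0, 18.0],
                           [28.0, 19.0, 14.0]])
    observed_outflow = np.array([66.5, 52.3, 78.9, 61.8])   # gauged (illustrative)

    def simulated_outflow(return_fractions):
        # outflow = inflow - net consumption; each site returns a fraction
        # of its delivery to the river, so consumption = delivery * (1 - f)
        consumed = deliveries @ (1.0 - return_fractions)
        return inflow - consumed

    res = least_squares(
        lambda rf: simulated_outflow(rf) - observed_outflow,  # residuals at gauges
        x0=np.full(3, 0.3),        # initial guess for return-flow fractions
        bounds=(0.0, 0.6),         # keep fractions in a physically plausible range
    )
    print("calibrated return-flow fractions:", res.x)

The calibrated fractions would then be fixed and passed to the basin-scale optimization model, as the abstract describes.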

Keywords: Auto-calibration, Gilan, Large-Scale Water Resources, Simulation.

213 Operational Analysis of Urban Intelligent Transportation System and Strategies for Future Development - Taking Calling Service of Taxi in Wuhan as an Example

Authors: Wang Xu, Yao Yangyang, Lin Ying, Wang Zhenzhen

Abstract:

Intelligent Transportation Systems (ITS) integrate various modern advanced technologies into the ground transportation system, and ITS will be the goal of future urban transport systems because of its comprehensive effects. However, it also brings problems, such as project performance assessment, fairness among benefiting groups, and fund management, which are directly related to its operation and implementation. Wuhan has difficulties in organizing transportation because of its natural features (rivers and lakes); therefore, the taxi calling service plays an important role in its transportation. This paper studies the taxi calling service in Wuhan, based on quantitative and qualitative analysis. It systematically analyzes the service's operations management, including the business model, finance, usage, and user evaluation. As for the business model, the government leads the operation at the initial stage and a third party dominates the operation at the mature stage, which not only eases the pressure on the third party and helps spread the calling service at the initial stage, but also alleviates the government's financial pressure and improves operational efficiency at the mature stage. As for finance, the analysis shows that the service imposes a heavy financial burden for equipment, which will be alleviated in the future as the service spreads. As for usage, data comparison shows that the service brings benefits to taxi drivers, and that the temporal and spatial distributions of usage have certain features. As for user evaluation, the paper analyzes the user groups and their reasons for choosing the service. Finally, based on the analysis above, the paper puts forward the potentials, limitations, and future development strategies of the service.

Keywords: Assessment, Calling service of taxi, Operations management, Strategies, Using groups.
