Search results for: large deviation distribution
11437 Batch and Fixed-Bed Studies of Ammonia Treated Coconut Shell Activated Carbon for Adsorption of Benzene and Toluene
Authors: Jibril Mohammed, Usman Dadum Hamza, Muhammad Idris Misau, Baba Yahya Danjuma, Yusuf Bode Raji, Abdulsalam Surajudeen
Abstract:
Volatile organic compounds (VOCs) have been reported to be responsible for many acute and chronic health effects and environmental degradations such as global warming. In this study, a renewable and low-cost coconut shell activated carbon (PHAC) was synthesized and treated with ammonia (PHAC-AM) to improve its hydrophobicity and affinity towards VOCs. Removal efficiencies and adsorption capacities of the ammonia-treated activated carbon (PHAC-AM) for benzene and toluene were determined through batch and fixed-bed studies, respectively. The Langmuir, Freundlich and Tempkin adsorption isotherms were tested for the adsorption process; the experimental data were best fitted by the Langmuir model and least fitted by the Tempkin model, and the favourability and goodness of fit were validated by the equilibrium parameter (RL) and the root mean square deviation (RMSD). Judging by the deviation of the predicted values from the experimental values, the pseudo-second-order kinetic model described the adsorption kinetics of the two VOCs on PHAC and PHAC-AM better than the pseudo-first-order kinetic model. In the fixed-bed study, the effects of initial VOC concentration, bed height and flow rate on benzene and toluene adsorption were studied. The highest bed capacities of 77.30 and 69.40 mg/g were recorded for benzene and toluene, respectively, at 250 mg/l initial VOC concentration, 2.5 cm bed height and 4.5 ml/min flow rate. The results of this study revealed that ammonia-treated activated carbon (PHAC-AM) is a sustainable adsorbent for the treatment of VOCs in polluted waters.
Keywords: volatile organic compounds, equilibrium and kinetics studies, batch and fixed bed study, bio-based activated carbon
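As a sketch of the isotherm analysis described above, the linearised Langmuir fit, the equilibrium parameter RL and an RMSD goodness-of-fit check might look as follows; the data points, and hence the fitted qm and KL, are invented for the example and are not the paper's measurements:

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/l, qe in mg/g) - illustrative only.
Ce = np.array([10.0, 25.0, 50.0, 100.0, 250.0])
qe = np.array([18.5, 35.2, 52.1, 65.8, 76.9])

# Linearised Langmuir: Ce/qe = Ce/qm + 1/(KL*qm), fitted as a straight line.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / slope            # monolayer capacity (mg/g)
KL = slope / intercept      # Langmuir constant (l/mg)

def r_l(C0, KL):
    """Equilibrium (separation) parameter: 0 < RL < 1 means favourable."""
    return 1.0 / (1.0 + KL * C0)

def rmsd(observed, predicted):
    """Root mean square deviation, used to rank the isotherm fits."""
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

q_pred = qm * KL * Ce / (1.0 + KL * Ce)
print(qm, KL, r_l(250.0, KL), rmsd(qe, q_pred))
```

The same RMSD comparison would be repeated for the Freundlich and Tempkin forms to decide which model fits best.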
Procedia PDF Downloads 227
11436 High Temperature Creep Analysis for Lower Head of Reactor Pressure Vessel
Authors: Dongchuan Su, Hai Xie, Naibin Jiang
Abstract:
Under severe accident conditions, the nuclear reactor core may melt down inside the lower head of the reactor pressure vessel (RPV). Retaining the melt pool inside the RPV is an important strategy of severe accident management. During this process, the inner wall of the lower head is heated to a high temperature of about a thousand degrees Celsius, while the outer wall is immersed in a large amount of cooling water. The material of the lower head suffers serious creep damage under the high temperature and the temperature difference, which poses a great threat to the integrity of the RPV. In this paper, the ANSYS program is employed to build a finite element method (FEM) model of the lower head. The creep phenomenon is simulated for the severe accident case, the time-dependent strain and stress distributions are obtained, the creep damage of the lower head is investigated, the integrity of the RPV is evaluated, and a theoretical basis is provided for the optimized design and safety assessment of the RPV.
Keywords: severe accident, lower head of RPV, creep, FEM
Procedia PDF Downloads 233
11435 Radial Distribution Network Reliability Improvement by Using Imperialist Competitive Algorithm
Authors: Azim Khodadadi, Sahar Sadaat Vakili, Ebrahim Babaei
Abstract:
This study presents a numerical method to optimize the failure rate and repair time of a typical radial distribution system. Failure rate and repair time are effective parameters in the customer- and energy-based indices of reliability; decreasing them improves the reliability indices and thus boosts system stability. Penalty functions indirectly reflect the investment cost spent to improve these indices. Constraints on the customer- and energy-based indices, i.e. SAIFI, SAIDI, CAIDI and AENS, have been considered by using a new method which reduces the number of controlling parameters of the optimization algorithm. The Imperialist Competitive Algorithm (ICA) is used as the main optimization technique, and particle swarm optimization (PSO), simulated annealing (SA) and differential evolution (DE) have been applied for further investigation. These algorithms have been implemented on a test system in MATLAB and the obtained results compared with each other. The optimized values of repair time and failure rate are much lower than the current values, which reduces the investment cost; moreover, ICA gives better answers than the other algorithms used.
Keywords: imperialist competitive algorithm, failure rate, repair time, radial distribution network
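The customer- and energy-based indices constrained in the optimization can be computed directly from load-point data; the sketch below uses illustrative failure rates, repair times, customer counts and loads, not the test-system values:

```python
# Reliability indices used as constraints in the optimisation.
# Load-point data below are illustrative, not from the test system.
lam = [0.20, 0.15, 0.30]   # failure rates (failures/yr) per load point
r   = [4.0, 6.0, 3.0]      # repair times (h) per load point
N   = [500, 800, 300]      # customers per load point
L   = [0.8, 1.2, 0.5]      # average load (MW) per load point

total_N = sum(N)
SAIFI = sum(l * n for l, n in zip(lam, N)) / total_N            # interruptions/customer/yr
SAIDI = sum(l * t * n for l, t, n in zip(lam, r, N)) / total_N  # h/customer/yr
CAIDI = SAIDI / SAIFI                                           # h/interruption
AENS  = sum(l * t * p for l, t, p in zip(lam, r, L)) / total_N  # MWh/customer/yr
print(SAIFI, SAIDI, CAIDI, AENS)
```

Lowering any load point's failure rate or repair time lowers these indices, which is exactly what the penalty-constrained optimization trades against investment cost.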
Procedia PDF Downloads 669
11434 FlameCens: Visualization of Expressive Deviations in Music Performance
Authors: Y. Trantafyllou, C. Alexandraki
Abstract:
Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed, non-improvised music, expression is manifested by introducing subtle deviations in tempo, dynamics and articulation during the evolution of a performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e. a MIDI file), which may be thought of as an expression-neutral representation of the piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated in a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that a faster tempo tilts the flame to the right and a slower tempo tilts it to the left; a constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the colour of the particles is used to convey the pitch of the notes during performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres including classical, contemporary, jazz and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e. musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization
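The alignment step FlameCens relies on can be sketched with the classic dynamic-programming form of DTW; this toy version aligns scalar sequences, whereas the application compares sequences of CENS chroma vectors:

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming DTW; returns the total alignment cost.
    FlameCens applies the same idea to CENS chroma vectors rather than
    the scalar sequences used in this toy example."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Each cell extends the cheapest of: insertion, deletion, match.
            D[i][j] = dist(a[i-1], b[j-1]) + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0: the sequences align perfectly
```

Backtracking through the cost matrix D yields the warping path, from which local tempo deviation (path slope) can be read off.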
Procedia PDF Downloads 131
11433 Determining Inventory Replenishment Policy for Major Component in Assembly-to-Order of Cooling System Manufacturing
Authors: Tippawan Nasawan
Abstract:
The objective of this study is to find a replenishment policy for Assembly-to-Order (ATO) manufacturing in which some of the major components have lead times longer than the customer lead time. The variety of products, independent component demand and long component lead times are the difficulties that have resulted in an overstock problem. In addition, the ordering cost is trivial compared with the material cost of the major component. A conceptual design of a Decision Supporting System (DSS) is introduced to assist the replenishment policy. One of the keys is component replenishment driven by the variable called Available to Promise (ATP). The Poisson distribution is adopted to model demand patterns in order to calculate the Safety Stock (SS) at a specified Customer Service Level (CSL); when the distribution cannot be identified, a nonparametric approach is applied instead. A comparison of the ending inventory under the new and old policies shows that overstock is significantly reduced, by 46.9 percent or about 469,891.51 US dollars in major-component material cost alone. Besides, the major-component inventory count is also reduced by about 41 percent, which helps to mitigate the chance of damage and the cost of keeping stock.
Keywords: Assembly-to-Order, Decision Supporting System, component replenishment, Poisson distribution
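The Poisson-based safety stock calculation at a specified CSL can be sketched as follows; the mean lead-time demand and service level below are assumed values for illustration, not figures from the study:

```python
import math

def poisson_quantile(mean_demand, csl):
    """Smallest k with P(D <= k) >= csl for Poisson lead-time demand,
    built up from the recurrence p(k) = p(k-1) * mean / k."""
    k = 0
    p = math.exp(-mean_demand)   # P(D = 0)
    cdf = p
    while cdf < csl:
        k += 1
        p *= mean_demand / k
        cdf += p
    return k

def safety_stock(mean_demand, csl):
    """Reorder point minus expected lead-time demand at the chosen CSL."""
    return poisson_quantile(mean_demand, csl) - mean_demand

# Assumed: 20 units of mean lead-time demand, 95% customer service level.
print(poisson_quantile(20.0, 0.95), safety_stock(20.0, 0.95))
```

When the demand histogram fails a goodness-of-fit test against the Poisson model, the quantile would instead be read off the empirical (nonparametric) distribution, as the abstract notes.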
Procedia PDF Downloads 128
11432 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries: in the construction industry, granular layers are used for bulkheads and isolators; in chemical engineering, large surfaces of packed granular beds intensify chemical reactions in catalytic reactors; and in energy production systems, granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements, or between solids and fluid, are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of the particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution and their effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences the flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fibre probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical and numerical analyses of flow inside granulates are crucial.
Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is immersed boundary-volume penalization (IB-VP), in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of flows in dense granular layers with elements distributed in a deterministic regular manner, and on validation of the results obtained with the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number and distribution of the elements have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
Procedia PDF Downloads 138
11431 Finite Element Modeling of Ultrasonic Shot Peening Process using Multiple Pin Impacts
Authors: Chao-xun Liu, Shi-hong Lu
Abstract:
In spite of its importance to the aerospace and automobile industries, little or no attention has been devoted to accurate modeling of the ultrasonic shot peening (USP) process. It is therefore the purpose of this study to conduct a finite element analysis of the process using a realistic multiple-pin-impact model with the explicit solver of ABAQUS. In this paper, the effects of several key parameters on the residual stress distribution within the target were investigated, including impact velocity, incident angle, friction coefficient between pins and target, and number of impacts. The results reveal that the impact velocity and number of impacts have an obvious effect, and that impacting vertically produces the most favourable residual stress distribution. The results were then compared with USP experimental data to verify the accuracy of the model. Analysis of the multiple-pin-impact data reveals the relationships between peening process parameters and peening quality, which are useful for identifying the parameters that need to be controlled and regulated in order to produce a more beneficial compressive residual stress distribution within the target.
Keywords: ultrasonic shot peening, finite element, multiple pins, residual stress, numerical simulation
Procedia PDF Downloads 448
11430 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration
Authors: C. Iraklis, G. Evmiridis, A. Iraklis
Abstract:
Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, huge power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined in a weighted-sum objective. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade of the SPSO algorithm is achieved with the addition of a heuristic algorithm specializing in the reduction of power losses, with several scenarios being tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid
Procedia PDF Downloads 447
11429 Sensor Validation Using Bottleneck Neural Network and Variable Reconstruction
Authors: Somia Bouzid, Messaoud Ramdani
Abstract:
The success of any diagnosis strategy critically depends on the sensors measuring the process variables. This paper presents a method for sensor fault detection and diagnosis based on a Bottleneck Neural Network (BNN). The BNN approach is used as a statistical process control tool for drinking water distribution (DWD) systems to detect and isolate sensor faults. The variable reconstruction approach is very useful for sensor fault isolation; the method is validated in simulation on a nonlinear system, an actual drinking water distribution system. Several results are presented.
Keywords: fault detection, localization, PCA, NLPCA, auto-associative neural network
Procedia PDF Downloads 390
11428 Research on Modern Semiconductor Converters and the Usage of SiC Devices in the Technology Centre of Ostrava
Authors: P. Vaculík, P. Kaňovský
Abstract:
The following article presents the Technology Centre of Ostrava (TCO) in the Czech Republic. It describes the structure and main research areas realized by the project ENET - Energy Units for Utilization of Non-Traditional Energy Sources. More details are presented from the research program dealing with the transformation, accumulation and distribution of electric energy. The Technology Centre has its own energy mix consisting of alternative fuel sources that use process gases from the storage part, as well as energy from the distribution network. The article focuses on the properties and application possibilities of SiC semiconductor devices for power semiconductor converters for photovoltaic systems.
Keywords: SiC, Si, technology centre of Ostrava, photovoltaic systems, DC/DC converter, simulation
Procedia PDF Downloads 612
11427 Temperature-Dependent Barrier Characteristics of Inhomogeneous Pd/n-GaN Schottky Barrier Diodes Surface
Authors: K. Al-Heuseen, M. R. Hashim
Abstract:
The current-voltage (I-V) characteristics of Pd/n-GaN Schottky barriers were studied at temperatures above room temperature (300-470 K). The values of the ideality factor (n), zero-bias barrier height (φB0), flat barrier height (φBF) and series resistance (Rs) obtained from I-V-T measurements were found to be strongly temperature dependent: φB0 increases, while n, φBF and Rs decrease with increasing temperature. The apparent Richardson constant was found to be 2.1×10⁻⁹ A cm⁻² K⁻² with a mean barrier height of 0.19 eV. After correction for barrier height inhomogeneities, by assuming a Gaussian distribution (GD) of the barrier heights, the Richardson constant and the mean barrier height were obtained as 23 A cm⁻² K⁻² and 1.78 eV, respectively. The corrected Richardson constant is very close to the theoretical value of 26 A cm⁻² K⁻².
Keywords: electrical properties, Gaussian distribution, Pd-GaN Schottky diodes, thermionic emission
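The extraction of a Richardson constant and mean barrier height from I-V-T data can be sketched as follows; the diode area and the synthetic saturation currents below are invented for the example, and the script demonstrates only the conventional Richardson plot, not the Gaussian-distribution correction itself:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant (eV/K)

# Synthetic saturation currents for an ideal diode with a known mean
# barrier height and Richardson constant (illustrative values only).
A_eff, A_star, phi_mean = 1e-3, 26.0, 1.78   # cm^2, A cm^-2 K^-2, eV
T = np.linspace(300.0, 470.0, 9)
I_s = A_eff * A_star * T**2 * np.exp(-phi_mean / (k_B * T))

# Conventional Richardson plot: ln(Is/T^2) vs 1/T is a straight line with
# slope -phi/k_B and intercept ln(A_eff * A_star).
slope, intercept = np.polyfit(1.0 / T, np.log(I_s / T**2), 1)
phi_fit = -slope * k_B               # recovered barrier height (eV)
A_star_fit = np.exp(intercept) / A_eff
print(phi_fit, A_star_fit)
```

With real inhomogeneous-barrier data the apparent values come out wrong (here 2.1×10⁻⁹ A cm⁻² K⁻² and 0.19 eV); the Gaussian-distribution correction modifies the plot ordinate with the barrier-height variance before refitting.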
Procedia PDF Downloads 277
11426 Temporal Variation of Shorebirds Population in Two Different Mudflats Areas
Authors: N. Norazlimi, R. Ramli
Abstract:
A study was conducted to determine the diversity and abundance of shorebird species inhabiting the mudflat areas of Jeram Beach and Remis Beach, Selangor, Peninsular Malaysia. A direct observation technique (using binoculars and a video camera) was applied to record the presence of bird species at the sampling sites from August 2013 until July 2014. A total of 32 shorebird species were recorded during the migratory and non-migratory seasons. Of these, eleven species (47.8%) are migrants, six species (26.1%) have both migrant and resident populations, four species (17.4%) are vagrants and two species (8.7%) are residents. The composition of the birds differed significantly across months (χ2=84.35, p<0.001). There is a significant difference in avian abundance between the migratory and non-migratory seasons (Mann-Whitney, t=2.39, p=0.036). Avian abundance differed significantly between Jeram and Remis Beaches during migratory periods (t=4.39, p=0.001) but not during non-migratory periods (t=0.78, p=0.456). Shorebird diversity was also affected by the tidal cycle: there is a significant difference between high tide and low tide (Mann-Whitney, t=78.0, p<0.005). The frequency of disturbance also affected the shorebird distribution (Mann-Whitney, t=57.0, p=0.0134). This study therefore concludes that tides and disturbances are two factors affecting the temporal distribution of shorebirds in mudflat areas.
Keywords: biodiversity, distribution, migratory birds, direct observation
Procedia PDF Downloads 393
11425 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models
Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel
Abstract:
In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models for testing pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction using image-alignment methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of the spatial distribution of cells in spheroid models makes it possible to detect interactions between cells and to identify invasion hierarchy and guidance patterns. Global measurements such as the volume, length and density of lymphatic vessels are obtained in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structural complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions; comparing these conditions makes it possible to identify lymphangiogenic agents and to better understand their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids
Procedia PDF Downloads 380
11424 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information about the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice the constant-velocity assumption is often violated, which may lead to a large deviation between the obtained initial pose and the true value, and in turn to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in the angular velocity and in the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that our proposed method gives a more accurate initial pose estimate, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46% on the two test sequences, respectively.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
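The constant-acceleration extrapolation of the camera pose can be sketched, for the translation part only, as follows; the positions are made up for the example, and a full implementation would treat the rotation vector analogously, as the abstract describes:

```python
import numpy as np

def predict_translation(p0, p1, p2):
    """Constant-acceleration extrapolation of the camera position.
    p0, p1, p2 are the three most recent positions. A constant-velocity
    model would return p2 + (p2 - p1); this adds the observed change
    in velocity, assumed constant over the short interval."""
    v1 = p1 - p0          # earlier frame-to-frame velocity
    v2 = p2 - p1          # latest frame-to-frame velocity
    a = v2 - v1           # assumed-constant acceleration
    return p2 + v2 + a

# Made-up trajectory accelerating along x.
p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([2.5, 0., 0.])
print(predict_translation(p0, p1, p2))
```

Under constant velocity (a = 0) the prediction reduces to the conventional model, so the scheme degrades gracefully when the camera moves steadily.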
Procedia PDF Downloads 95
11423 Repair Workshop Queue System Modification Using Priority Scheme
Authors: C. Okonkwo Ugochukwu, E. Sinebe Jude, N. Odoh Blessing, E. Okafor Christian
Abstract:
In this paper, a modification of a repair workshop queuing system using a multi-priority scheme was carried out. The chi-square goodness-of-fit test was used to determine the random distribution of the inter-arrival times and service times of crankshafts that come in for maintenance in the workshop. The chi-square values obtained for all the prioritized classes show that the distribution conforms to a Poisson distribution. The mean waiting times in queue under non-preemptive priority for the 1st, 2nd and 3rd classes are 0.066, 0.09 and 0.224 days respectively, while under preemptive priority they are 0.007, 0.036 and 0.258 days. When no priority is used, which obviously makes no class distinction, the mean waiting time amounts to 0.17 days. From these results, one can observe that the preemptive priority system provides a dramatic improvement over non-preemptive priority for arrivals of higher priority; however, the improvement has a detrimental effect on the lowest priority class. The trend of the results is similar for the mean waiting time in the system, obtained by adding the actual service time. Even though the mean waiting times in the queue and in the system under no priority are lower than those of the lowest-priority class, urgent and semi-urgent jobs would suffer terribly without priorities, most likely resulting in the reneging or balking of many urgent jobs. Hence, the adoption of a priority scheme in this type of scenario will result in huge profit for the company and greater customer satisfaction.
Keywords: queue, priority class, preemptive, non-preemptive, mean waiting time
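For the non-preemptive case, the class mean waiting times follow from the standard priority-queue formula Wq,k = R / ((1 - σ(k-1))(1 - σ(k))), where R is the mean residual work and σ(k) is the cumulative utilisation of classes 1..k. The sketch below assumes Poisson arrivals and a common exponential service rate, with made-up rates rather than the workshop's data:

```python
def priority_wait_times(lams, mu):
    """Mean queue waits for a non-preemptive priority M/M/1 queue
    (class 1 has the highest priority). lams are per-class arrival
    rates; mu is the common exponential service rate (both assumed)."""
    # Mean residual work: sum of lam_i * E[S^2] / 2, with E[S^2] = 2/mu^2.
    R = sum(l / mu**2 for l in lams)
    waits, sigma_prev = [], 0.0
    for lam in lams:
        sigma = sigma_prev + lam / mu   # cumulative utilisation up to this class
        waits.append(R / ((1 - sigma_prev) * (1 - sigma)))
        sigma_prev = sigma
    return waits

w = priority_wait_times([0.5, 0.3, 0.1], mu=2.0)
print(w)  # higher-priority classes wait less
```

The monotone increase of the waits down the class list mirrors the paper's observation that priority helps urgent jobs at the expense of the lowest class.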
Procedia PDF Downloads 398
11422 Slip Limit Prediction of High-Strength Bolt Joints Based on Local Approach
Authors: Chang He, Hiroshi Tamura, Hiroshi Katsuchi, Jiaqi Wang
Abstract:
In this study, the aim is to infer the slip limit (static friction limit) of the contact interfaces in bolted friction joints by analyzing other bolted friction joints with the same contact surface but a different shape. By using the Weibull distribution to treat the microelements of the contact surface statistically, the slip limit of a certain type of bolt joint was predicted from other types of bolt joint with the same contact surface. As a result, this research succeeded in predicting the slip limit of bolt joints with different numbers of contact surfaces and different numbers of bolt rows.
Keywords: bolt joints, slip coefficient, finite element method, Weibull distribution
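The weakest-link reasoning behind a Weibull treatment of contact microelements can be sketched as follows; the shape and scale parameters are assumed for illustration, and the scaling rule shown is the textbook weakest-link result, not the paper's specific prediction procedure:

```python
# Weakest-link scaling sketch: if microelement slip resistance follows a
# Weibull distribution with shape beta and scale eta (values assumed),
# a joint with n statistically identical contact patches slips at the
# weakest patch, giving a characteristic limit of eta * n**(-1/beta).
beta, eta = 8.0, 0.6   # illustrative shape and scale (slip-coefficient units)

def characteristic_slip(n, beta=beta, eta=eta):
    """Characteristic (63.2%-probability) slip limit for n patches in series."""
    return eta * n ** (-1.0 / beta)

one, four = characteristic_slip(1), characteristic_slip(4)
print(one, four)  # more contact surfaces in series => slightly lower limit
```

This size effect is what allows a slip limit measured on one joint geometry to be rescaled to another geometry sharing the same surface statistics.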
Procedia PDF Downloads 173
11421 Reliability Based Topology Optimization: An Efficient Method for Material Uncertainty
Authors: Mehdi Jalalpour, Mazdak Tootkaboni
Abstract:
We present a computationally efficient method for reliability-based topology optimization under material property uncertainty, where the material properties are assumed to be lognormally distributed and correlated within the domain. Computational efficiency is achieved by estimating the response statistics with second-order stochastic perturbation, using these statistics to fit an appropriate distribution that follows the empirical distribution of the response, and employing an efficient gradient-based optimizer. The proposed algorithm is utilized for the design of new structures, and the changes in the optimized topology are discussed for various levels of target reliability and correlation strength. Predictions were verified through comparison with results obtained using Monte Carlo simulation.
Keywords: material uncertainty, stochastic perturbation, structural reliability, topology optimization
Procedia PDF Downloads 606
11420 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency
Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee
Abstract:
Energy efficiency can be realized by minimizing the power loss while a sufficient amount of energy is delivered in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electric distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal location and sizing of the capacitors, whereby minimization of energy consumption and power losses improves the energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed, to solve the OCPS problem; in this case study, three techniques are considered for this pre-selection of busbars or locations, among them the power-loss index (PLI). The PSO is designed to provide a new population with improved sizing and location of capacitors. The total cost of power losses, energy consumption and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. The voltage magnitude limit, total harmonic distortion (THD) limit, power factor limit and capacitor size limit are the constraints of the proposed optimization technique. In this research, the proposed methodologies, implemented in the MATLAB® software, transfer the information, execute the three-phase unbalanced load flow solution, and then retrieve and collect the results from the three-phase unbalanced electrical distribution systems modeled in the SIMULINK® software.
The effectiveness of the proposed methods in improving energy efficiency has been verified through several case studies, with results obtained from the IEEE 13-bus unbalanced electrical distribution test system and also from the practical electrical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
Keywords: particle swarm optimization, pre-determination of capacitor locations, optimal capacitor placement and sizing, unbalanced electrical distribution system
Procedia PDF Downloads 434
11419 Building an Arithmetic Model to Assess Visual Consistency in Townscape
Authors: Dheyaa Hussein, Peter Armstrong
Abstract:
The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in the townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of the townscape. The paper investigates parameters that relate to visual consistency, explores the relationships between them and highlights their significance. It uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of the townscape and focuses on its two components, geometry and colour, in the evaluation of visual consistency. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space, connected by lines, that represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the locations of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in the townscape by computing their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development.
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity by the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.Keywords: townscape, urban design, visual assessment, visual consistency
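The variation-based measurements described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' tool: the facade data, the plain Euclidean colour distance (a simple stand-in for a perceptual colour-difference formula such as CIE76) and the use of population standard deviation as a consistency score are all assumptions made for the example.

```python
import math
import statistics

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours (a stand-in for
    perceptual colour-difference formulas such as CIE76)."""
    return math.dist(c1, c2)

def consistency_score(values):
    """Lower standard deviation of a visual attribute across adjacent
    components is read here as higher visual consistency."""
    return statistics.pstdev(values)

# Hypothetical facade data: vertex counts and dominant RGB colours
vertex_counts = [120, 135, 128, 610, 122]        # one outlier facade
colours = [(180, 160, 140), (175, 158, 138), (60, 90, 200), (178, 161, 139)]

vertex_consistency = consistency_score(vertex_counts)
adjacent_colour_gaps = [colour_distance(a, b) for a, b in zip(colours, colours[1:])]
print(vertex_consistency)
print(adjacent_colour_gaps)
```

A large standard deviation (driven here by the 610-vertex outlier) and a large colour gap between neighbours both flag visual inconsistency between adjacent components.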
Procedia PDF Downloads 314
11418 Distribution and Historical Trends of PAHs Deposition in Recent Sediment Cores of the Imo River, SE Nigeria
Authors: Miranda I. Dosunmu, Orok E. Oyo-Ita, Inyang O. Oyo-Ita
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are a class of priority-listed organic pollutants owing to their carcinogenicity, mutagenicity, acute toxicity and persistence in the environment. The distribution and historical changes of PAH contamination in recent sediment cores from the Imo River were investigated using gas chromatography coupled with mass spectrometry. Total PAH (TPAH) concentrations ranging from 402.37 ng/g dry weight (dw) at the surface layer of the Estuary zone (ESC6; 0-5 cm) to 92,388.59 ng/g dw at the near-surface layer of the Afam zone (ASC5; 5-10 cm) indicate that PAH contamination was localized, varying not only between sample sites but also within the same cores. Sediment-depth profiles for the four cores (Afam, Mangrove, Estuary and illegal petroleum refinery) revealed irregular distribution patterns in the TPAH concentrations, except that these levels peaked at the near-surface layers (5-10 cm), corresponding to a geological time frame of about 1996-2004. This time scale coincided with the period of intensive bunkering and oil pipeline vandalization by the Niger Delta militant groups. A slight general decline was also found in TPAH levels from the near-surface layers (5-10 cm) to the most recent top layers (0-5 cm) of the cores, attributable to the recent effort by the Nigerian government in clamping down on the illegal activities of the economic saboteurs. Therefore, the recent amnesty granted to the militant groups should be extended. Although the mechanism of perylene formation remains enigmatic, examination of its distribution down the cores indicates natural biogenic, pyrogenic and petrogenic origins for the compound at different zones. Thus, the characteristic features of the Imo River environment provide a means of tracing diverse origins for perylene.
Keywords: perylene, historical trend, distribution, origin, Imo River
Procedia PDF Downloads 251
11417 Analysis of Urban Population Using Twitter Distribution Data: Case Study of Makassar City, Indonesia
Authors: Yuyun Wabula, B. J. Dewancker
Abstract:
In the past decade, social networking applications have grown very rapidly. Geolocation data is one of the important features of social media, attaching the user's real-world location coordinates to each post. This paper proposes the use of geolocation data from the Twitter social media application to gain knowledge about urban dynamics, especially human mobility behavior. It aims to explore the relation between Twitter geolocation data and the presence of people in urban areas. Firstly, the study analyzes the spread of people within particular areas of the city using Twitter social media data. Secondly, we match and categorize existing places based on the individuals who visit them. We then combine the Twitter tracking results with questionnaire data to capture the Twitter user profile, using frequency distribution analysis to estimate the percentage of visitors. To validate the hypothesis, we compare the results with local population statistics and the land use map released by the city planning department of the Makassar local government. The results show a correlation between the Twitter geolocation and questionnaire data. Thus, integrating Twitter data with survey data can reveal the profile of social media users.
Keywords: geolocation, Twitter, distribution analysis, human mobility
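The frequency-distribution step can be sketched as follows. The district names, coordinates and check-in counts below are purely hypothetical; real geotagged tweets would come from the Twitter API rather than a hard-coded list.

```python
from collections import Counter

def visitor_percentages(checkins):
    """Frequency distribution sketch: share of geotagged tweets per
    district, as a percentage of all check-ins."""
    counts = Counter(district for district, _lat, _lon in checkins)
    total = sum(counts.values())
    return {district: 100.0 * c / total for district, c in counts.items()}

# Hypothetical check-ins: (district, latitude, longitude)
checkins = ([("Tamalate", -5.17, 119.45)] * 6
            + [("Panakkukang", -5.14, 119.45)] * 3
            + [("Ujung Pandang", -5.13, 119.41)])

shares = visitor_percentages(checkins)
print(shares)
```

Comparing such per-district shares against census population shares and land-use categories is the kind of cross-check the validation step describes.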
Procedia PDF Downloads 315
11416 Undercooling of Refractory High-Entropy Alloy
Authors: Liang Hu
Abstract:
Refractory high-entropy alloys (RHEAs), formed from the refractory metals W, Ta, Mo, Nb, Hf, V, and Zr, were first produced in 2010 to obtain better high-temperature strength than conventional HEAs based on Al, Co, Cr, Cu, Fe and Ni. Owing to their refractory character and high chemical activity at elevated temperature, the electrostatic levitation (ESL) technique has been utilized to achieve rapid solidification of RHEAs. Several RHEAs consisting of W, Ta, Mo, Nb and Zr were selected for undercooling and rapid solidification experiments by ESL, and were substantially undercooled by up to 0.2 TL. The evolution of the as-solidified microstructure and component redistribution with undercooling was investigated by SEM, EBSD, and EPMA analysis. From the EPMA results for the constituent elements at different undercooling levels, the chemical distribution as a function of undercooling was also analyzed.
Keywords: chemical distribution, high-entropy alloy, rapid solidification, undercooling
Procedia PDF Downloads 129
11415 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC-type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
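The key computational idea, that the constrained estimator amounts to PC applied to a regularized data covariance matrix, can be illustrated with a toy simulation. The linear shrinkage toward the diagonal used below (and the parameter `tau`) is an assumed stand-in for the paper's actual regularization, chosen only to show the mechanics on a panel with N > T.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 100, 200, 1            # more cross-sections than observations (N > T)

# Simulate a one-factor panel: X = F L' + e
F = rng.standard_normal((T, k))
L = rng.standard_normal((N, k))
X = F @ L.T + 0.5 * rng.standard_normal((T, N))

def pc_factors(X, k, tau=0.0):
    """Estimate k common factors as the top eigenvectors (scaled by
    sqrt(T)) of the T x T covariance XX'/(NT).  With tau > 0 the
    covariance is first shrunk toward its diagonal -- a hypothetical
    stand-in for the CnPC regularization, not the paper's estimator."""
    T = X.shape[0]
    S = X @ X.T / (X.shape[1] * T)
    S = (1.0 - tau) * S + tau * np.diag(np.diag(S))
    _, vecs = np.linalg.eigh(S)              # eigenvalues ascending
    return np.sqrt(T) * vecs[:, -k:][:, ::-1]

F_pc = pc_factors(X, k)
F_reg = pc_factors(X, k, tau=0.2)
# Factors are identified only up to sign and scale, so compare by correlation
corr = abs(np.corrcoef(F.ravel(), F_reg.ravel())[0, 1])
print(round(corr, 3))
```

Note that only a T x T eigenproblem is solved, which is why no large N x N covariance matrix ever needs to be inverted even when N ≥ T.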
Procedia PDF Downloads 150
11414 Reliability Indices Evaluation of SEIG Rotor Core Magnetization with Minimum Capacitive Excitation for WECs
Authors: Lokesh Varshney, R. K. Saket
Abstract:
This paper presents a reliability indices evaluation of the rotor core magnetization of an induction motor operated as a self-excited induction generator (SEIG), using a probability distribution approach and Monte Carlo simulation. During the experimental study, parallel capacitors with the calculated minimum capacitive value were connected across the terminals of the induction motor operating as a SEIG with unregulated shaft speed. A three-phase, 4-pole, 50 Hz, 5.5 hp, 12.3 A, 230 V induction motor coupled with a DC shunt motor was tested in the electrical machines laboratory with variable reactive loads. Based on this experimental study, it is possible to choose a reliable induction machine to operate as a SEIG for unregulated renewable energy applications in remote areas or where the grid is not available. The failure density function, cumulative failure distribution function, survivor function, hazard model, probability of success and probability of failure for reliability evaluation of the three-phase induction motor operating as a SEIG are presented graphically in this paper.
Keywords: residual magnetism, magnetization curve, induction motor, self excited induction generator, probability distribution, Monte Carlo simulation
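The reliability functions listed above are related by standard formulas. A short sketch, assuming a Weibull failure model with hypothetical shape and scale parameters (the paper fits its own distribution to measured data), is:

```python
import math
import random

def weibull_reliability(t, beta, eta):
    """Failure density f(t), cumulative failure F(t), survivor R(t)
    and hazard rate h(t) = f(t)/R(t) for an assumed Weibull(beta, eta)
    lifetime model."""
    F = 1.0 - math.exp(-((t / eta) ** beta))
    R = 1.0 - F
    f = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))
    return {"density": f, "cdf": F, "survivor": R, "hazard": f / R}

def mc_success(beta, eta, mission, n=100_000, seed=1):
    """Monte Carlo estimate of the probability of success: fraction of
    sampled lifetimes exceeding the mission time."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n) if rng.weibullvariate(eta, beta) > mission)
    return survived / n

r = weibull_reliability(t=1000.0, beta=2.0, eta=1000.0)
print(r["survivor"], mc_success(2.0, 1000.0, mission=1000.0))
```

The analytical survivor function and the Monte Carlo estimate should agree closely, which is the kind of cross-validation the probability distribution approach provides.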
Procedia PDF Downloads 559
11413 Optical and Double Folding Analysis for 6Li+16O Elastic Scattering
Authors: Abd Elrahman Elgamala, N. Darwish, I. Bondouk, Sh. Hamada
Abstract:
Available experimental angular distributions for 6Li elastically scattered from the 16O nucleus in the energy range 13.0–50.0 MeV are investigated and reanalyzed using the optical model with a conventional phenomenological potential, and also using the double-folding optical model with different interaction models: DDM3Y1, CDM3Y1, CDM3Y2, and CDM3Y3. All the interaction models considered are of the M3Y Paris type except DDM3Y1, which is of the M3Y Reid type; the main difference between them lies in the different parameter values of the incorporated density distribution function F(ρ). We have extracted the renormalization factor NR for the 6Li+16O nuclear system in the energy range 13.0–50.0 MeV using the aforementioned interaction models.
Keywords: elastic scattering, optical model, folding potential, density distribution
Procedia PDF Downloads 142
11412 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario
Authors: J. S. Savier
Abstract:
Real power is the component of power that is converted into useful energy, whereas reactive power is the component that cannot be converted to useful energy but is required for the magnetization of various electrical machinery. If reactive power is compensated at the consumer end, the flow of reactive power from generators to loads can be avoided and hence the overall power loss reduced. In this context, this paper presents a succinct method, called the JSS method, for allocating reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made while deriving the reactive power loss allocation.
Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method
Procedia PDF Downloads 379
11411 Simulation Study on Particle Fluidization and Drying in a Spray Fluidized Bed
Authors: Jinnan Guo, Daoyin Liu
Abstract:
The quality of final products in the coating process depends significantly on particle fluidization and drying in the spray fluidized bed. In this study, the fluidizing gas temperature and velocity are varied, and their effects on particle flow, moisture content, and heat transfer in a spray fluidized bed are investigated with a coupled computational fluid dynamics–discrete element method (CFD-DEM) model. The gas velocity distribution in the fluidized bed is symmetrical, with high velocity in the middle and low velocity on both sides. During the heating process, the particles inside the central tube and at the bottom of the bed are heated rapidly, while the particles circulating in the annular region are heated slowly and remain at a lower temperature. This inconsistency in particle circulation produces two peaks in the probability density distribution of particle temperature during heating, after which the overall particle temperature increases uniformly. During the drying process, the particle moisture distribution transitions from an initially uniform distribution to a bimodal one, and the number of completely dried particles (moisture content of 0) then gradually increases. Increasing the fluidizing gas temperature and velocity improves particle circulation, drying and heat transfer in the bed. The current study provides an effective method for studying the hydrodynamics of spray fluidized beds with simultaneous heating and particle fluidization.
Keywords: heat transfer, CFD-DEM, spray fluidized bed, drying
Procedia PDF Downloads 74
11410 Frequency Analysis Using Multiple Parameter Probability Distributions for Rainfall to Determine Suitable Probability Distribution in Pakistan
Authors: Tasir Khan, Yejuan Wang
Abstract:
The study of extreme rainfall events is very important for flood management in river basins and the design of water conservancy infrastructure. Evaluation of quantiles of annual maximum rainfall (AMRF) is required in many environmental fields, agricultural operations, renewable energy applications, climatology, and the design of various structures. Therefore, AMRF analysis was performed at different stations in Pakistan. Multiple probability distributions, log-normal (LN), generalized extreme value (GEV), Gumbel (max), and Pearson type 3 (P3), were used to find the most appropriate distribution at each station. The L-moments method was used to estimate the distribution parameters. The Anderson-Darling test, Kolmogorov-Smirnov test, and chi-square test showed that two distributions, namely Gumbel (max) and LN, were the most appropriate. The quantile estimate of a multi-parameter probability distribution characterizes extreme rainfall at a specific location and is therefore important for decision-makers and planners who design and construct structures. This result provides an indication of the consequences of these multi-parameter distributions for site studies, peak flow prediction and the design of hydrological maps, and can thereby support hydraulic structure design and flood management.
Keywords: RAMSE, multiple frequency analysis, annual maximum rainfall, L-moments
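The L-moments step can be illustrated with a small sketch. The code below computes the first four sample L-moments from probability-weighted moments and then, as one example, the Gumbel (max) parameters via the standard closed-form L-moment relations; the annual-maximum data vector is hypothetical, not from the study's stations.

```python
import math

def sample_l_moments(data):
    """First four sample L-moments (l1..l4) via the unbiased
    probability-weighted moments b0..b3."""
    x = sorted(data)
    n = len(x)
    b = []
    for r in range(4):
        total = 0.0
        for i in range(r, n):          # 0-based order-statistic index
            w = 1.0
            for j in range(r):         # weight C(i, r) / C(n - 1, r)
                w *= (i - j) / (n - 1 - j)
            total += w * x[i]
        b.append(total / n)
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

# Hypothetical annual maximum rainfall series (mm)
rain = [61.0, 75.0, 80.0, 95.0, 102.0, 110.0, 124.0, 140.0, 163.0, 201.0]
l1, l2, l3, l4 = sample_l_moments(rain)

# Gumbel (max) parameters from L-moments: scale = l2/ln 2, location = l1 - gamma*scale
alpha = l2 / math.log(2.0)
xi = l1 - 0.5772156649 * alpha         # Euler-Mascheroni constant
print(l1, l2, alpha, xi)
```

L-moment ratios such as l3/l2 and l4/l2 are what goodness-of-fit comparisons across candidate distributions are typically based on.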
Procedia PDF Downloads 82
11409 Marine Litter Dispersion in the Southern Shores of the Caspian Sea (Case Study: Mazandaran Province)
Authors: Siamak Jamshidi
Abstract:
One of the major environmental problems along the southern coasts of the Caspian Sea is the deposition and accumulation of marine and coastal debris resulting from industrial, urban and tourism activities. Study, sampling and analysis of the type, size, amount and origin of anthropogenic (human-made) waste in the coastal areas of this sea can be very effective for implementing management, cultural and public-awareness programs to reduce marine environmental pollutants. This research investigated, for the first time, the distribution of marine litter under the influence of seawater dynamics in the region. The rate of entry and distribution of marine and coastal pollutants and wastes, which are mainly of urban, tourist and hospital origin, has multiplied on the southern shore of the Caspian Sea over the last decade. According to the results, the two most important sources of hospital waste in the coastal areas are Tonekabon and Mahmoudabad. Dynamic parameters of the seawater, such as currents (with speeds of up to about 1 m/s) and waves, as well as the discharge of rivers reaching the shoreline, are influential factors in the distribution of marine litter in the region. Marine litter in the southern coastal region was transported from west to east by the shallow waters of the southern Caspian Sea; in other words, higher marine debris densities were observed in the eastern part.
Keywords: southern shelf, coastal oceanography, seawater flow, vertical structure, marine environment
Procedia PDF Downloads 71
11408 Calculation of Fractal Dimension and Its Relation to Some Morphometric Characteristics of Iranian Landforms
Authors: Mitra Saberi, Saeideh Fakhari, Amir Karam, Ali Ahmadabadi
Abstract:
Geomorphology is the scientific study of the form and shape of the Earth's surface. The existence and variation of landform types are mainly controlled by changes in the shape and position of the land and its topography. The interest in, and application of, fractal concepts in geomorphology stems from the fact that many geomorphic landforms have fractal structures, so their formation and transformation can be explained by mathematical relations. The purpose of this study is to identify and analyze the fractal behavior of the landforms of the macro-geomorphologic regions of Iran, and to study and analyze topographic and landform characteristics on the basis of fractal relationships. In this study, using the Iranian digital elevation model of slopes, depositional features and alluvial fans, the fractal dimensions of the topographic curves were calculated through the box counting method. The morphometric characteristics of the landforms and their fractal dimension were then calculated for four criteria (height, slope, profile curvature and planimetric curvature) and three indices (maximum, average, standard deviation) using ArcMap software. After investigating their correlation with the fractal dimension, two-way regression analysis was performed and the relationship between the fractal dimension and the morphometric characteristics of the landforms was investigated. The results show that, for pixel sizes of 30, 90 and 200 m, the fractal dimension of the topographic curves of the different landform units of Iran, including mountains, hills, plateaus and plains, ranges from 1.06 in alluvial fans to 1.17 in mountains. In general, for all pixel sizes, the fractal dimension decreases from mountain to plain.
The fractal dimension has the highest correlation coefficient with the slope criterion and the standard deviation index, and the lowest with the profile curvature and the mean index; as the pixels become larger, the correlation coefficient between the indices and the fractal dimension decreases.
Keywords: box counting method, fractal dimension, geomorphology, Iran, landform
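As a minimal illustration of the box counting method (not the authors' GIS workflow), the sketch below estimates the fractal dimension of a digitized curve by counting occupied grid boxes at several scales and regressing log N(s) on log(1/s); a straight line is used as a sanity check, since its dimension should be close to 1.

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Box counting: normalize the 2-D point set to the unit square,
    count the occupied boxes of side s for each s, and take the slope
    of log N(s) versus log(1/s) as the fractal dimension estimate."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0).max()
    counts = [len(set(map(tuple, np.floor(pts / s).astype(int)))) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a digitized straight line (expected dimension ~ 1)
t = np.linspace(0.0, 1.0, 2000)
line = np.column_stack([t, 0.5 * t])
d = box_count_dimension(line, sizes=[1/4, 1/8, 1/16, 1/32])
print(round(d, 2))
```

For a digitized topographic contour the same routine would return a value between 1 and 2, with rougher curves giving larger estimates, consistent with the mountain-to-plain trend reported above.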
Procedia PDF Downloads 84