Search results for: maximum distance separable
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5922

4752 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with the help of different methods has a long history in transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns on the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone where the charge applies, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google’s Distance Matrix API (Application Programming Interface), which provides estimated traffic data via travel times between freely chosen coordinate pairs. From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
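
As a minimal sketch of the data-collection step, assuming the publicly documented Distance Matrix API request format; the coordinates and API key below are placeholders, not the study's measurement points:

```python
# Query the Google Distance Matrix API for free-flow vs. congested travel
# time between one coordinate pair. Requires a valid API key; makes a
# network call when run.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

params = {
    "origins": "47.5600,19.0550",       # hypothetical point in district 13
    "destinations": "47.5100,19.0470",  # hypothetical point in district 5
    "departure_time": "now",            # needed to obtain duration_in_traffic
    "key": API_KEY,
}

resp = requests.get(URL, params=params).json()
element = resp["rows"][0]["elements"][0]

free_flow_s = element["duration"]["value"]             # seconds, typical time
congested_s = element["duration_in_traffic"]["value"]  # seconds, with traffic

# The congestion indicator used in the study is the difference between
# congested and free-flow travel times.
delay_s = congested_s - free_flow_s
print(f"free flow: {free_flow_s}s, in traffic: {congested_s}s, delay: {delay_s}s")
```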

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 191
4751 Adsorption of Peppermint Essential Oil by Polypropylene Nanofiber

Authors: Duduku Krishnaiah, S. M. Anisuzzaman, Kumaran Govindaraj, Chiam Chel Ken, Zykamilia Kamin

Abstract:

Pure essential oil is in high demand in the market, since most of the so-called pure essential oils on the market contain alcohol, a consequence of using alcohol to separate oil and water mixtures. Removal of pure essential oil from water without using any chemical solvent has therefore become a challenging issue. Adsorbents generally have the ability to separate hydrophobic oil from a hydrophilic mixture. Polypropylene nanofiber, a thermoplastic polymer produced from propylene, was used as an adsorbent in this study. It was found that the polypropylene nanofiber was able to adsorb peppermint oil from the aqueous solution over a wide range of concentrations. Scanning electron microscopy (SEM) showed that the nanofibers had a very small average diameter before adsorption and a larger average diameter after adsorption, which indicates that a smaller nanofiber diameter enhances the adsorption process. The adsorption capacity of peppermint oil increases as the initial concentration of peppermint oil and the amount of polypropylene nanofiber used increase. The maximum adsorption capacity of polypropylene nanofiber was found to be 689.5 mg/g at T = 30°C. Moreover, the adsorption capacity of peppermint oil decreases as the temperature of the solution increases. The equilibrium data are best represented by the Freundlich isotherm, with a maximum adsorption capacity of 689.5 mg/g, and the adsorption kinetics are best represented by the pseudo-second-order model.
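
As a minimal sketch of the isotherm-fitting step, assuming the standard Freundlich form q_e = K_F · C_e^(1/n); the data points are illustrative placeholders, not the study's measurements:

```python
# Fit the Freundlich isotherm to equilibrium adsorption data by nonlinear
# least squares.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c_e, k_f, n):
    """Adsorbed amount q_e as a function of equilibrium concentration C_e."""
    return k_f * c_e ** (1.0 / n)

c_e = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # mg/L (hypothetical)
q_e = np.array([120.0, 210.0, 340.0, 480.0, 660.0])  # mg/g (hypothetical)

(k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=(50.0, 2.0))
print(f"K_F = {k_f:.1f}, 1/n = {1.0 / n:.2f}")
```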

Keywords: nanofiber, adsorption, peppermint essential oil, isotherms, adsorption kinetics

Procedia PDF Downloads 151
4750 The Role of Institutions in Community Wildlife Conservation in Zimbabwe

Authors: Herbert Ntuli, Edwin Muchapondwa

Abstract:

This study used a sample of 336 households and community-level data from 30 communities around the Gonarezhou National Park in Zimbabwe to analyse the association between institutions and the ability to self-organize (cooperation) on the one hand, and the relationship between cooperation and the success of biodiversity outcomes on the other. Using both ordinary least squares and instrumental variables estimation with heteroskedasticity-based instruments, our results confirmed that sound institutions are indeed an important ingredient for cooperation in the respective communities and that cooperation positively and significantly affects biodiversity outcomes. Group size, community-level trust, the number of stakeholders and punishment were found to be important variables explaining cooperation. From a policy perspective, our results show that external enforcement of rules and regulations does not necessarily translate into sound ecological outcomes; better outcomes are attainable when punishment is instead endogenized by local communities. This suggests that communities should be supported in such a way that robust institutions, tailor-made to local conditions, emerge and in turn facilitate good environmental husbandry. Cooperation, training, benefits, distance from the nearest urban center, distance from the fence, social capital, average age of household head, fence and information sharing were found to be very important variables explaining the success of biodiversity outcomes, ceteris paribus. Government programmes should target capacity building in terms of institutional capacity and skills development in order to have a positive impact on biodiversity. Hence, the role of stakeholders (e.g., NGOs) in capacity building and government effort should complement each other to ensure that the necessary resources are mobilized and all communities receive the necessary training and resources.

Keywords: institutions, self-organize, common pool resources, wildlife, conservation, Zimbabwe

Procedia PDF Downloads 275
4749 An Empirical Study of the Best Fitting Probability Distributions for Stock Returns Modeling

Authors: Jayanta Pokharel, Gokarna Aryal, Netra Kanaal, Chris Tsokos

Abstract:

Investment in stocks and shares aims to seek potential gains while weighing the risk of future needs, such as retirement, children's education, etc. Analysis of the behavior of stock market returns and making predictions are important for investors to mitigate investment risk. Historically, normal variance models have been used to describe the behavior of stock market returns. However, the returns of financial assets are actually skewed, with higher kurtosis, heavier tails, and a higher center than the normal distribution. The Laplace distribution and its family are natural candidates for modeling stock returns. The Variance-Gamma (VG) distribution is among the most sought-after distributions for modeling asset returns and has been extensively discussed in the financial literature. In this paper, we explore other members of the Laplace family, such as the Asymmetric Laplace, Skewed Laplace and Kumaraswamy Laplace (KS) distributions, together with the Variance-Gamma, to model the weekly returns of the S&P 500 Index and its eleven business sector indices. The method of maximum likelihood is employed to estimate the parameters of the distributions, and our empirical inquiry shows that the Kumaraswamy Laplace distribution performs much better for stock returns modeling among the distributions used in this study; in practice, KS can be used as a strong alternative to the VG distribution.
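
As a minimal sketch of the estimation step, fitting candidate distributions by maximum likelihood and comparing log-likelihoods; SciPy's laplace_asymmetric requires a recent SciPy (1.6+), and the returns series below is simulated, not S&P 500 data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.laplace(loc=0.001, scale=0.02, size=500)  # stand-in for weekly returns

candidates = {
    "normal": stats.norm,
    "laplace": stats.laplace,
    "asymmetric laplace": stats.laplace_asymmetric,
}

for name, dist in candidates.items():
    params = dist.fit(returns)                    # maximum likelihood estimates
    loglik = np.sum(dist.logpdf(returns, *params))
    print(f"{name:>20}: log-likelihood = {loglik:.1f}")
```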

Keywords: stock returns, variance-gamma, kumaraswamy laplace, maximum likelihood

Procedia PDF Downloads 67
4748 Humans’ Physical Strength Capacities on Different Handwheel Diameters and Angles

Authors: Saif K. Al-Qaisi, Jad R. Mansour, Aseel W. Sakka, Yousef Al-Abdallat

Abstract:

Handwheels are common in numerous industries, such as power generation plants, oil refineries, and chemical processing plants. The forces required to manually turn handwheels have been shown to exceed operators’ physical strengths, posing risks of injury. Therefore, the objectives of this research were twofold: (1) to determine humans’ physical strengths on handwheels of different sizes and angles and (2) to subsequently propose recommended torque limits (RTLs) that accommodate the strengths of even the weaker segment of the population. Thirty male and thirty female participants were recruited from a university student population. Participants were asked to exert their maximum possible forces in a counter-clockwise direction on handwheels of different sizes (35 cm, 45 cm, 60 cm, and 70 cm) and angles (0°-horizontal, 45°-slanted, and 90°-vertical). The participants’ posture was controlled by adjusting the handwheel to be at the elbow level of each participant, requiring the participant to stand erect, and restricting the hand placements to the 10-11 o’clock position for the left hand and the 4-5 o’clock position for the right hand. A torque transducer (Futek TDF600) was used to measure the maximum torques generated by each participant. Three repetitions were performed for each handwheel condition, and the average was computed. Results showed that, at all handwheel angles, as the handwheel diameter increased, the maximum torques generated also increased, while the underlying forces decreased. For a given handwheel diameter, the 0° handwheel was associated with the largest torques and forces, and the 45° handwheel was associated with the lowest torques and forces. Hence, a larger handwheel diameter (as large as 70 cm) at a 0° angle is favored for increasing the torque production capacities of users. It was also recognized that, regardless of handwheel diameter and angle, the torque demands in the field are much greater than humans’ torque production capabilities. As such, this research proposed RTLs for the different handwheel conditions by using the 25th percentile values of the females’ torque strengths. The proposed recommendations may serve future standard developers in defining torque limits that accommodate humans’ strengths.
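
As a minimal sketch of the RTL computation, taking the 25th percentile of the female participants' maximum torques for one handwheel condition; the values are hypothetical, not the study's data:

```python
import numpy as np

# Maximum torques (N·m) of 30 female participants for one handwheel
# condition (e.g., 70 cm diameter, 0° angle); placeholder data.
female_torques = np.array([
    38, 42, 45, 47, 50, 51, 53, 55, 56, 58,
    60, 61, 63, 64, 66, 68, 70, 71, 73, 75,
    77, 79, 81, 83, 85, 88, 90, 93, 96, 100,
], dtype=float)

rtl = np.percentile(female_torques, 25)
print(f"recommended torque limit: {rtl:.1f} N·m")
```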

Keywords: handwheel angle, handwheel diameter, humans’ torque production strengths, recommended torque limits

Procedia PDF Downloads 104
4747 Development of Total Maximum Daily Load Using Water Quality Modelling as an Approach for Watershed Management in Malaysia

Authors: S. A. Che Osmi, W. M. F. Wan Ishak, H. Kim, M. A. Azman, M. A. Ramli

Abstract:

Rivers are among the most important water sources for many activities, including industrial and domestic usage such as daily use, transportation, power supply and recreational activities. However, increasing activity along a river multiplies the sources of pollutants entering the water body and degrades the water quality of the river. It has become a challenge to develop effective river management that ensures the water sources of the river are well managed and regulated. In Malaysia, several approaches to river management have been implemented, such as the Integrated River Basin Management (IRBM) program, led by the Department of Drainage and Irrigation (DID), Malaysia, for coordinating the management of resources in a natural environment on a river basin scale to ensure their sustainability. Nowadays, the Total Maximum Daily Load (TMDL) is one of the best approaches to river management in Malaysia. TMDL implementation is regulated and practiced in the United States. A study on the development of a TMDL for the Malacca River has been carried out by conducting water quality monitoring, developing a water quality model using the Environmental Fluid Dynamics Code (EFDC), and preparing a TMDL implementation plan. The implementation of the TMDL will help stakeholders and regulators control and improve the water quality of the river. It is one of the good approaches for river management in Malaysia.
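
As a minimal sketch of the load allocation commonly used in U.S. TMDL practice, which such plans follow in outline; the load values are placeholders:

```python
# TMDL = sum of waste load allocations (point sources) + sum of load
# allocations (nonpoint sources) + margin of safety. All values in kg/day.
point_source_wla = [120.0, 80.0, 45.0]  # waste load allocations (hypothetical)
nonpoint_la = [200.0, 150.0]            # load allocations (hypothetical)
margin_of_safety = 60.0                 # explicit MOS (hypothetical)

tmdl = sum(point_source_wla) + sum(nonpoint_la) + margin_of_safety
print(f"TMDL = {tmdl:.0f} kg/day")
```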

Keywords: EFDC, river management, TMDL, water quality modelling

Procedia PDF Downloads 323
4746 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with a kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized in order to reduce training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
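
As a minimal sketch of the kernel choice only: an SVM with the hyperbolic tangent (sigmoid) kernel, K(x, y) = tanh(gamma * <x, y> + r). The fuzzy-rough membership weighting and the MapReduce parallelization of PFRSVM are not reproduced here, and the dataset is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="sigmoid", gamma=0.01, coef0=0.0, C=1.0)  # tanh kernel
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
print("number of support vectors:", clf.n_support_.sum())
```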

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 485
4745 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed in many stations of Central Italy, may overcome this issue; 5) these relationships should allow improvement of the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
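
As a minimal sketch of the underestimation mechanism, using synthetic 5-minute rainfall rather than the Central Italy records: the true Hd is a maximum over sliding windows, while coarse data with ta = d only permit a maximum over aligned windows, which can never exceed it:

```python
import numpy as np

rng = np.random.default_rng(1)
rain_5min = rng.exponential(0.05, size=105120)  # one year of 5-min depths (mm)

d = 12  # duration of 1 hour, in 5-minute steps

# True H_d: maximum over all sliding 1-hour windows.
sliding = np.convolve(rain_5min, np.ones(d), mode="valid")
h_true = sliding.max()

# Apparent H_d from hourly-aggregated data (ta = d): aligned windows only.
aligned = rain_5min[: len(rain_5min) // d * d].reshape(-1, d).sum(axis=1)
h_coarse = aligned.max()

print(f"true H_d = {h_true:.2f} mm, coarse H_d = {h_coarse:.2f} mm")
print(f"underestimation = {100 * (1 - h_coarse / h_true):.1f}%")
```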

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 186
4744 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration

Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis

Abstract:

Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence the oxygen and nutrient supply to the heart muscle, so assessment of its severity is of great interest to health professionals. Numerically simulated fractional flow reserve (FFR) has proved to be well correlated with invasively measured FFR, which is used for the physiological assessment of the severity of coronary stenosis. Atherosclerosis may affect the diseased artery in several locations, causing serial stenoses, a complicated subset of coronary artery disease that requires careful treatment planning. However, the hemodynamics of serial stenoses in coronary arteries has not been extensively studied. It is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered, 30 and 50 percent of the reference diameter. Serial stenoses configurations are divided into three groups based on the order of the stenoses in the series, the spacing between them, and the deviation of the stenoses’ symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, a larger pressure drop (smaller FFR) is expected when the diameter stenosis of the second stenosis in the series is larger. Varying the distance between the stenoses affects the location of the maximum pressure drop, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger pressure decrease through the stenoses and by the development of chaotic flow downstream of the stenoses. The largest pressure drop (about 4% difference compared to the axisymmetric case) is obtained for serial stenoses in which both stenoses are highly eccentric, with the centerlines deflected to different sides of the LAD. In conclusion, varying the configuration of serial stenoses results in a different distribution of FFR through the LAD. The results presented in this study provide insight into the clinical assessment of the severity of coronary serial stenoses, which is shown to depend on the relative position of the stenoses and the deviation of the stenoses’ symmetry.
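
As a minimal sketch of the FFR evaluation, taking FFR as the ratio of distal to proximal (aortic) pressure along the vessel; the pressures are placeholders standing in for CFD output:

```python
import numpy as np

p_aortic = 93.0  # mean aortic/proximal pressure, mmHg (hypothetical)
# Mean pressure sampled along the LAD centerline downstream of the inlet:
p_centerline = np.array([93.0, 91.5, 86.0, 84.5, 79.0, 77.5, 77.0])  # mmHg

ffr = p_centerline / p_aortic
print("FFR along the artery:", np.round(ffr, 3))
print(f"minimal FFR = {ffr.min():.3f}")  # values <= 0.80 commonly flag ischemia
```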

Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses

Procedia PDF Downloads 181
4743 Effect of Thistle Ecotype in the Physical-Chemical and Sensorial Properties of Serra da Estrela Cheese

Authors: Raquel P. F. Guiné, Marlene I. C. Tenreiro, Ana C. Correia, Paulo Barracosa, Paula M. R. Correia

Abstract:

The objective of this study was to evaluate the physical and chemical characteristics of Serra da Estrela cheese and compare these results with those of a sensory analysis. For the study, six samples of Serra da Estrela cheese, produced with six different thistle ecotypes, were taken from a dairy situated in Penalva do Castelo. The chemical properties evaluated were moisture content, protein, fat, ash, chlorides and pH; the physical properties studied were color and texture; and finally, a sensory evaluation was undertaken. The results showed moisture varying in the range 40-48%, protein in the range 15-20%, fat between 41-45%, ash between 3.9-5.0% and chlorides varying from 1.2 to 3.0%. The pH varied from 4.8 to 5.4. The textural properties revealed that the crust hardness is relatively low (maximum 7.3 N), although greater than the flesh firmness (maximum 1.7 N), and that these cheeses are in fact of the soft paste type, with measurable stickiness and intense adhesiveness. The color analysis showed that the crust is relatively light (L* over 50) and has a predominantly yellow coloration (b* around 20 or over), although with a slight greenish tone (a* negative). The results of the sensory analysis did not show great variability for most of the attributes measured, although some differences were found in attributes such as crust thickness, crust uniformity, and flesh creaminess.

Keywords: chemical composition, color, sensorial analysis, Serra da Estrela cheese, texture

Procedia PDF Downloads 298
4742 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students’ learning habits and styles enhances their understanding of the students’ learning backgrounds and allows teachers to provide better support for their students; it therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students’ data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the Minimum Spanning Tree method and consider, as further study, analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features and yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
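
As a minimal sketch of the factor-network construction, converting a correlation matrix to a distance matrix via d_ij = sqrt(2(1 - rho_ij)) and extracting the minimum spanning tree; the factor data are simulated placeholders:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)
factors = rng.normal(size=(200, 8))  # 200 students x 8 survey factors

rho = np.corrcoef(factors, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))    # correlation-based distance

mst = minimum_spanning_tree(dist).toarray()
edges = np.argwhere(mst > 0)
print("MST edges (factor index pairs):")
for i, j in edges:
    print(f"  {i} -- {j}  (distance {mst[i, j]:.2f})")
```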

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 122
4741 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading

Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera

Abstract:

For every country in the world, protecting critical infrastructure from the looming risks of terrorism has become a priority. In any infrastructure system, structural elements such as lower floors, exterior columns and walls are the key elements most susceptible to damage due to blast load. The present study revisits the state-of-the-art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e., estimation of the blast load, previous experimental work, numerical simulation tools, various material models, etc., are considered to explore the current practices adopted worldwide. Various parametric studies investigating the effect of reinforcement ratio, slab thickness, charge weight and standoff distance are also discussed. It was observed that for the simulation of blast load, the CONWEP blast function or equivalent numerical equations have been successfully employed by many researchers. The literature indicates that the research was carried out using experiments and numerical simulation with well-known general-purpose finite element codes, i.e., LS-DYNA, ABAQUS and AUTODYN. Many researchers recommend a concrete damage model to represent concrete and a plastic kinematic material model to represent steel under the action of blast loads in most numerical simulations. Most of the studies reveal that increased reinforcement ratio, slab thickness and standoff distance result in better blast resistance of reinforced concrete panels. The study summarizes the various research results and presents the current state of knowledge for structures exposed to blast loading.
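
CONWEP-type blast loads are commonly idealized by the Friedlander waveform; as a minimal sketch with illustrative parameter values (not CONWEP's empirical charts):

```python
import numpy as np

def friedlander(t, p_s, t_0, b):
    """Overpressure time history of an ideal blast wave:
    P(t) = P_s * (1 - t/t_0) * exp(-b * t / t_0), for 0 <= t <= t_0,
    where P_s is peak overpressure, t_0 the positive-phase duration and
    b the decay coefficient."""
    return p_s * (1.0 - t / t_0) * np.exp(-b * t / t_0)

t = np.linspace(0.0, 0.02, 11)                 # 0-20 ms
p = friedlander(t, p_s=500.0, t_0=0.02, b=1.5) # hypothetical values, kPa
for ti, pi in zip(t, p):
    print(f"t = {1000 * ti:5.1f} ms  P = {pi:7.1f} kPa")
```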

Keywords: blast phenomenon, experimental methods, material models, numerical methods

Procedia PDF Downloads 151
4740 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory

Authors: Liu Canqi, Zeng Junsheng

Abstract:

This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on this. At the physical level, this study examines the vehicle's bottleneck when following a car, identifies the outside factors that have an impact on how it drives, takes into account that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. The energy-time conversion coefficient is derived from an economic standpoint utilizing the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delays under the Wiedemann car-following model. The measured value of deceleration delay acquired by simulation is compared with the theoretical value calculated by the model, using the conversion model between traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical deceleration delay obtained from the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account when studying vehicle-following behavior. The socioeconomic relationship between the deceleration delay of a vehicle and traffic mass can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.

Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay

Procedia PDF Downloads 56
4739 Thermodynamic Cycle Analysis for Overall Efficiency Improvement and Temperature Reduction in Gas Turbines

Authors: Jeni A. Popescu, Ionut Porumbel, Valeriu A. Vilag, Cleopatra F. Cuciumita

Abstract:

The paper presents a thermodynamic cycle analysis for three turboshaft engines. The first cycle is a Brayton cycle, describing the evolution of a classical turboshaft, based on the Klimov TV2 engine. The other two cycles aim at approaching an Ericsson cycle by replacing the Brayton cycle's adiabatic expansion in the turbine with quasi-isothermal expansion. The maximum quasi-Ericsson cycle temperature is set to a lower value than the maximum Brayton cycle temperature, equal to the Brayton cycle power turbine inlet temperature, in order to decrease the engine NOx emissions. The power distribution over the stages of the gas generator turbine is also maintained the same. In the first of the two quasi-Ericsson cycles, the efficiencies of the gas generator turbine stages are maintained the same as for the reference case, while for the second, the efficiencies are increased in order to obtain the same shaft power as in the reference case. It is found that in the first case, both the shaft power and the thermodynamic efficiency of the engine decrease, while in the second, the power is maintained, and even a slight increase in efficiency can be noted.
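
As a minimal sketch of the ideal-cycle arithmetic behind the comparison: the ideal Brayton efficiency depends only on the pressure ratio, while an ideal Ericsson cycle reaches Carnot efficiency. Values are illustrative, not TV2 engine data:

```python
gamma = 1.4                   # ratio of specific heats for air
r = 9.0                       # compressor pressure ratio (hypothetical)
t_min, t_max = 300.0, 1150.0  # cycle temperature bounds, K (hypothetical)

# Ideal Brayton: eta = 1 - r**(-(gamma - 1)/gamma)
eta_brayton = 1.0 - r ** (-(gamma - 1.0) / gamma)
# Ideal Ericsson (isothermal heat addition/rejection): Carnot efficiency
eta_ericsson = 1.0 - t_min / t_max

print(f"ideal Brayton efficiency:  {eta_brayton:.3f}")
print(f"ideal Ericsson efficiency: {eta_ericsson:.3f}")
```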

Keywords: combustion, Ericsson, thermodynamic analysis, turbine

Procedia PDF Downloads 602
4738 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of the townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users’ preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation, using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users’ preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
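
As a minimal sketch of two of the proposed measurements: the spread of vertex positions as a standard deviation, and colour distance between adjacent facades as Euclidean distance in RGB space. Inputs are hypothetical:

```python
import numpy as np

# (1) Vertex variation: standard deviation of the vertex coordinates of
# the facades bounding an urban space.
vertices = np.array([[0, 0, 3], [4, 0, 3], [4, 0, 9], [0, 0, 8]], dtype=float)
vertex_spread = vertices.std(axis=0)
print("vertex std per axis:", np.round(vertex_spread, 2))

# (2) Colour distance between two adjacent facades (RGB, 0-255).
facade_a = np.array([184.0, 160.0, 130.0])
facade_b = np.array([190.0, 150.0, 120.0])
colour_distance = np.linalg.norm(facade_a - facade_b)
print(f"colour distance: {colour_distance:.1f}")
```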

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 308
4737 Technique for Online Condition Monitoring of Surge Arresters

Authors: Anil S. Khopkar, Kartik S. Pandya

Abstract:

Overvoltage in power systems is a phenomenon that cannot be avoided; however, it can be controlled to a certain extent. Power system equipment must be protected against overvoltage to avoid system failure. Metal Oxide Surge Arresters (MOSA) are connected to the system for the protection of the power system against overvoltages. A MOSA behaves as an insulator under normal working conditions, whereas it offers a conductive path under overvoltage conditions. A MOSA consists of zinc oxide elements (ZnO blocks), which have non-linear V-I characteristics. The ZnO blocks are connected in series and fitted in a ceramic or polymer housing. They degrade due to aging under continuous operation. Degradation of the zinc oxide elements increases the leakage current flowing through the surge arrester. This increased leakage current raises the temperature of the surge arrester, which further decreases the resistance of the zinc oxide elements. As a result, the leakage current increases, which again increases the temperature of the MOSA. This creates thermal runaway conditions for the MOSA. Once it reaches the thermal runaway condition, it cannot return to normal working conditions. This condition is a primary cause of premature failure of surge arresters. As the MOSA constitutes a core protective device for electrical power systems against transients, it contributes significantly to the reliable operation of the power system network. Hence, condition monitoring of surge arresters should be done at periodic intervals. Online and offline condition monitoring techniques are available for surge arresters. Offline techniques are not very popular, as they require removing the surge arrester from the system, which requires a system shutdown. Hence, online condition monitoring techniques are very popular. This paper presents an evaluation technique for the surge arrester condition based on leakage current analysis. The maximum amplitude of the total leakage current (IT), the maximum amplitude of the fundamental resistive leakage current (IR) and the maximum amplitude of the third harmonic resistive leakage current (I3rd) are analyzed as indicators for surge arrester condition monitoring.
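
As a minimal sketch of the harmonic extraction step: from one sampled period of the leakage current, the FFT yields the amplitudes of the fundamental and third harmonic used as indicators. The waveform here is synthetic:

```python
import numpy as np

f0, fs, n = 50.0, 5000.0, 100  # 50 Hz system; exactly one period sampled
t = np.arange(n) / fs

# Synthetic leakage current: 1 mA fundamental plus a 0.1 mA third harmonic
# (the resistive component grows as the ZnO blocks degrade).
i_leak = (1.0e-3 * np.sin(2 * np.pi * f0 * t)
          + 0.1e-3 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.fft.rfft(i_leak)
amplitudes = 2.0 * np.abs(spectrum) / n  # single-sided amplitude spectrum

print(f"fundamental (50 Hz):   {amplitudes[1] * 1e3:.3f} mA")
print(f"3rd harmonic (150 Hz): {amplitudes[3] * 1e3:.3f} mA")
```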

Keywords: metal oxide surge arrester (MOSA), over voltage, total leakage current, resistive leakage current

Procedia PDF Downloads 58
4736 Climate Change and Its Impact on Water Security and Health in Coastal Community: A Gender Outlook

Authors: Soorya Vennila

Abstract:

The present study answers two questions: how does climate change affect water security in the drought-prone Ramanathapuram district, and what has water insecurity done to the health of the coastal community? The study area chosen is Devipattinam in Ramanathapuram district. Climate change has evidently wreaked havoc on the community through saltwater intrusion, water quality degradation and water scarcity, with consequent economic and social effects, such as power inequality within family and community, and health hazards. The climatological data, namely rainfall and minimum and maximum temperature, were statistically analyzed for trend using the Mann-Kendall test; the test was conducted on 14 years (1989-2002) of rainfall and temperature data. At the outset, water quality samples were collected from Devipattinam to test their physical and chemical parameters and their spatial variation, and the results were mapped in ArcGIS. From the water quality tests, a water quality index was framed. Finally, key informant interviews and questionnaires were conducted to capture gender perceptions and problems. The data collected were then interpreted using SPSS software to derive recommendations and suggestions for overcoming water scarcity and health problems.
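
As a minimal sketch of the trend analysis: the Mann-Kendall S statistic and its normal-approximation p-value (no tie correction, for brevity) applied to an illustrative annual rainfall series:

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return the Mann-Kendall S statistic and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return int(s), 2.0 * (1.0 - stats.norm.cdf(abs(z)))

annual_rain = [812, 790, 845, 760, 730, 755, 700, 690, 720, 665, 640, 655, 610, 600]
s, p = mann_kendall(annual_rain)
print(f"S = {s}, p = {p:.4f}")  # negative S indicates a decreasing trend
```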

Keywords: health, watersecurity, water quality, climate change

Procedia PDF Downloads 69
4735 Application of Electrochemically Prepared PPy/MWCNT:MnO2 Nano-Composite Film in Microbial Fuel Cells for Sustainable Power Generation

Authors: Rajeev Jain, D. C. Tiwari, Praveena Mishra

Abstract:

A nano-composite of polypyrrole/multiwalled carbon nanotubes:manganese oxide (PPy/MWCNT:MnO2) was electrochemically deposited on the surface of carbon cloth (CC). The nano-composite was structurally characterized by FTIR, SEM, TEM and UV-Vis studies. It was also characterized by cyclic voltammetry (CV) and current-voltage (I-V) measurements, and the optical band gaps of the film were evaluated from UV-Vis absorption studies. The PPy/MWCNT:MnO2 nano-composite was used as the anode in a microbial fuel cell (MFC) for sewage wastewater treatment and for power and coulombic efficiency measurement. The prepared electrode showed good electrical conductivity (0.1185 S m-1), which was also supported by band gap measurements (direct 0.8 eV, indirect 1.3 eV). The maximum power density obtained was 1125.4 mW m-2, the highest chemical oxygen demand (COD) removal efficiency was 93%, and the maximum coulombic efficiency was 59%. To our knowledge, this is the first report of a PPy/MWCNT:MnO2 nano-composite electrode for MFCs, offering good stability and better adhesion of microbes. The SEM images confirm the growth and development of microbial colonies.
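
As a minimal sketch of the coulombic-efficiency calculation commonly used for MFCs, CE = 8 * integral(I dt) / (F * V_an * dCOD), where 8 g per mol of electrons comes from O2 (32 g/mol, 4 electrons); the input values are placeholders, not the study's measurements:

```python
import numpy as np

F = 96485.0                           # Faraday constant, C/mol
t = np.linspace(0, 72 * 3600, 500)    # 72 h run, in seconds
current = np.full_like(t, 2.0e-3)     # steady 2 mA current (hypothetical)

charge = np.trapz(current, t)         # integral of I dt, coulombs
v_anode = 0.25                        # anode chamber volume, L (hypothetical)
delta_cod = 0.8                       # COD removed, g/L (hypothetical)

ce = 8.0 * charge / (F * v_anode * delta_cod)
print(f"coulombic efficiency = {100 * ce:.1f}%")
```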

Keywords: carbon cloth, electro-polymerization, functionalization, microbial fuel cells, multi walled carbon nanotubes, polypyrrole

Procedia PDF Downloads 262
4734 A Two-Pronged Truncated Deferred Sampling Plan for Log-Logistic Distribution

Authors: Braimah Joseph Odunayo, Jiju Gillariose

Abstract:

This paper is aimed at developing a sampling plan that uses information from precedent and successive lots for lot disposition, under the assumption that the lifetime of a particular product follows a log-logistic distribution. A Two-pronged Truncated Deferred Sampling Plan (TTDSP) for the log-logistic distribution is proposed when the testing is truncated at a precise time. The best possible sample sizes are obtained for given values of the Maximum Allowable Percent Defective (MAPD), Test Suspension Ratio (TSR), and acceptance number (c). A formula for calculating the operating characteristics of the proposed plan is also developed. The operating characteristics and mean-ratio values were used to measure the performance of the plan. The findings of the study show that the log-logistic distribution has a decreasing failure rate; as the mean-life ratio increases, the failure rate reduces; and the sample size increases as the acceptance number, test suspension ratio and maximum allowable percent defective increase. The study concludes that the minimum sample sizes were smaller, which makes the plan more economical to adopt when production cost and time are high and the testing is destructive.
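
As a minimal sketch of the operating-characteristic computation for a single lot under a truncated life test: with sample size n and acceptance number c, the lot is accepted if at most c failures occur, so the acceptance probability is a binomial tail. The deferred-lot logic of the TTDSP is not reproduced here:

```python
from scipy.stats import binom

def oc_probability(p, n, c):
    """Probability of acceptance when each item fails before the test
    truncation time with probability p."""
    return binom.cdf(c, n, p)

n, c = 20, 2  # hypothetical plan parameters
for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p = {p:.2f} -> Pa = {oc_probability(p, n, c):.3f}")
```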

Keywords: consumers risk, mean life, minimum sample size, operating characteristics, producers risk

Procedia PDF Downloads 133
4733 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban advances and the growing need for developing infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of the Bangkok Metro as a case study. For this, a numerical probability model has been developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8% within the allowable range and helps improve economic conditions while maintaining mechanical efficiency. With regard to the lack of efficient design in most deep excavations, an attempt was made to develop an optimum practical design standard for deep excavations based on failure probability, considering geometrical and geotechnical variability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out the full probability analysis.
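
As a minimal sketch of the Monte Carlo step: sample uncertain soil parameters, evaluate a response (here a stand-in function replacing the finite difference run), and estimate the failure probability as the fraction of samples whose wall displacement exceeds the allowable maximum. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 100_000
allowable_mm = 50.0

# Uncertain inputs: friction angle (deg) and cohesion (kPa), hypothetical stats.
phi = rng.normal(30.0, 3.0, n_samples)
c = rng.lognormal(np.log(20.0), 0.25, n_samples)

# Placeholder response surface standing in for the finite difference model:
displacement_mm = 120.0 - 2.0 * phi - 0.6 * c + rng.normal(0.0, 3.0, n_samples)

p_failure = np.mean(displacement_mm > allowable_mm)
print(f"estimated failure probability: {p_failure:.4f}")
```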

Keywords: numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method (FDM)

Procedia PDF Downloads 120
4732 Optimization of Poly-β-Hydroxybutyrate Recovery from Bacillus Subtilis Using Solvent Extraction Process by Response Surface Methodology

Authors: Jayprakash Yadav, Nivedita Patra

Abstract:

Polyhydroxybutyrate (PHB) is an interesting material in the fields of medical science, pharmaceutical industry, and tissue engineering because of properties such as biodegradability, biocompatibility, hydrophobicity, and elasticity. PHB is naturally accumulated by several microbes in their cytoplasm during metabolism as an energy reserve material. PHB can be extracted from cell biomass using halogenated hydrocarbons, chemicals, and enzymes. In this study, a cheaper, non-toxic solvent, acetone, was used for the extraction process. Parameters such as acetone percentage, solvent pH, process temperature, and incubation period were optimized using Response Surface Methodology (RSM). RSM was performed, and the coefficient of determination (R2) was found to be 0.8833 for the quadratic regression model, with no significant lack of fit. The RSM results indicated that the fit of the response variables was significant (P-value < 0.0006) and satisfactory for describing the relationship between the responses, PHB recovery and purity, and the independent variables. Optimum conditions for maximum PHB recovery and purity were found to be solvent pH 7, extraction temperature 43 °C, incubation time 70 minutes, and acetone percentage 30%. The maximum predicted PHB recovery was 0.845 g/g biomass dry cell weight, and the purity was 97.23% under the optimized conditions.
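
As a minimal sketch of the response-surface step: fit a second-order polynomial in the four factors (pH, temperature, time, % acetone) to observed PHB recovery. The design points are simulated stand-ins for the experimental runs:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
# Factors: pH, temperature (C), time (min), acetone (%); hypothetical ranges.
X = rng.uniform([5, 30, 30, 10], [9, 55, 110, 50], size=(30, 4))

# Hypothetical true surface peaking near pH 7, 43 C, 70 min, 30% acetone:
opt = np.array([7.0, 43.0, 70.0, 30.0])
y = (0.85 - np.sum(((X - opt) / [2.0, 10.0, 40.0, 20.0]) ** 2, axis=1)
     + rng.normal(0, 0.01, 30))

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print(f"R^2 = {model.score(poly.fit_transform(X), y):.3f}")
```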

Keywords: acetone, PHB, RSM, halogenated hydrocarbons, extraction, bacillus subtilis

Procedia PDF Downloads 433
4731 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico

Authors: Ismene Ithai Bras-Ruiz

Abstract:

Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to enhance the performance of students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of a program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests, but also its capacities related to professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the employment of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own learning analytics techniques to improve academic issues. In this context, an investigation was carried out to determine the state of the application of LA across the academic programs in the different faculties. The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA, but this does not mean they do not work with LA in a veiled or less clear sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows appreciation of the administration's work to improve the quality of teaching, and 2) it shows whether it is possible to introduce other LA techniques. For this purpose, three instruments were designed to determine the experience of and knowledge about LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques, which helps the administration only to know what is happening inside the academic program; they are not ready to move up to the next level, that is, applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.

Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise

Procedia PDF Downloads 124
4730 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such activities to an online cohort, and issues such as occupational health and safety in the students’ learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment that 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) was safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components of the resources were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and have a full laboratory experience from their own homes. For the virtual laboratory component, students viewed a laboratory safety induction and were advised on good characteristics of a home laboratory space prior to carrying out their experiments. Following on from this activity, students observed laboratory demonstrations of the experimental series they would carry out in their learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be utilized for longer than a typical physical laboratory class, thus allowing further time for understanding techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 164
4729 Parametric Modeling for Survival Data with Competing Risks Using the Generalized Gompertz Distribution

Authors: Noora Al-Shanfari, M. Mazharul Islam

Abstract:

The cumulative incidence function (CIF) is a fundamental approach for analyzing survival data in the presence of competing risks; it estimates the marginal probability of each competing event. Parametric modeling of the CIF has the advantage of fitting various shapes of CIF and estimates the impact of covariates with maximum efficiency. To calculate the covariate influence on the total CIF using a parametric model, it is essential to parametrize the baseline of the CIF. As the CIF is an improper function by nature, it is necessary to utilize an improper distribution when applying parametric models. The Gompertz distribution, an improper distribution in this setting, is limited in its applicability as it only accounts for monotone hazard shapes. The generalized Gompertz distribution, however, can adapt to a wider range of hazard shapes, including unimodal, bathtub, and monotonically increasing or decreasing hazard shapes. In this paper, the generalized Gompertz distribution is used to parametrize the baseline of the CIF, and the parameters of the proposed model are estimated using the maximum likelihood approach. The proposed model is compared with the existing Gompertz model using the Akaike information criterion. Appropriate statistical test procedures and model-fitting criteria are used to test the adequacy of the model. Both models are applied to the ‘colon’ dataset, which is available in the “biostat3” package in R.
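
As a minimal sketch of why the baseline works as a CIF, assuming the generalized Gompertz cdf F(t) = (1 - exp(-(lam/a)(exp(a*t) - 1)))^theta: with a negative shape a, the cdf plateaus below 1, which is exactly the improper behaviour a cumulative incidence function needs. Parameter values are illustrative:

```python
import numpy as np

def gen_gompertz_cif(t, lam, a, theta):
    """Generalized Gompertz cdf; improper (plateau < 1) when a < 0."""
    return (1.0 - np.exp(-(lam / a) * (np.exp(a * t) - 1.0))) ** theta

lam, a, theta = 0.10, -0.05, 1.5  # hypothetical baseline parameters
for t in (1.0, 5.0, 20.0, 100.0, 1e6):
    print(f"t = {t:>9.0f}  CIF = {gen_gompertz_cif(t, lam, a, theta):.4f}")

plateau = (1.0 - np.exp(lam / a)) ** theta  # limit as t -> infinity
print(f"plateau (overall event probability) = {plateau:.4f}")
```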

Keywords: competing risks, cumulative incidence function, improper distribution, parametric modeling, survival analysis

Procedia PDF Downloads 86
4728 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high molecular weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes fluctuations in temperature which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on the prevention of wax and asphaltene precipitate deposition on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component in this inhibitor so as to maximize the viscosity reduction of crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.
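
As a minimal sketch of the optimization stage, maximizing a fitted viscosity-reduction model over component fractions constrained to sum to 1; the model coefficients are hypothetical, not the fitted RSM coefficients:

```python
import numpy as np
from scipy.optimize import minimize

def neg_reduction(x):
    """Negative % viscosity reduction for fractions x = (EVA, MCH, TOL);
    a hypothetical quadratic response surface."""
    eva, mch, tol = x
    return -(120 * eva + 60 * mch + 65 * tol
             - 55 * eva**2 - 40 * mch**2 - 45 * tol**2)

res = minimize(
    neg_reduction,
    x0=[0.5, 0.25, 0.25],
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
)
print("optimal fractions (EVA, MCH, TOL):", np.round(res.x, 3))
print(f"predicted max viscosity reduction: {-res.fun:.1f}%")
```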

Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax

Procedia PDF Downloads 411
4727 Effect of Annealing Temperature on Microstructural Evolution of Nanoindented Cu/Si Thin Films

Authors: Woei-Shyan Lee, Yu-Liang Chuang

Abstract:

The nano-mechanical properties of as-deposited Cu/Si thin films indented to a depth of 2000 nm are investigated using a nanoindentation technique. The nanoindented specimens were annealed at a temperature of either 160 °C or 210 °C. The microstructures of the as-deposited and annealed samples were then examined via transmission electron microscopy (TEM). The results show that both the loading and the unloading regions of the load-displacement curve are smooth and continuous, which suggests that no debonding or cracking occurred during nanoindentation. In addition, the hardness and Young’s modulus of the Cu/Si thin films are found to vary with the nanoindentation depth and have maximum values of 2.8 GPa and 143 GPa, respectively, at the maximum indentation depth of 2000 nm. The TEM observations show that the region of the Cu/Si film beneath the indenter undergoes a phase transformation during the indentation process. In the case of the as-deposited specimens, the indentation pressure induces a completely amorphous phase within the indentation zone. For the specimens annealed at 160 °C, the amorphous nature of the microstructure within the indented zone is maintained. However, for the specimens annealed at the higher temperature of 210 °C, the indentation-affected zone consists of a mixture of amorphous and nanocrystalline phases. Copper silicide (η-Cu3Si) precipitates are observed in all of the annealed specimens, and the density of the η-Cu3Si precipitates increases with increasing annealing temperature.

Keywords: nanoindentation, Cu/Si thin films, microstructural evolution, annealing temperature

Procedia PDF Downloads 384
4726 Variation of Airfoil Pressure Profile Due to Confined Air Streams: Application in Gas-Oil Separators

Authors: Amir Hossein Haji, Nabeel Al-Rawahi, Gholamreza Vakili-Nezhaad

Abstract:

An innovative design for a gas-oil separator based on pressure reduction over an airfoil surface has been examined. The primary motivations are to shorten the release trajectory of the bubbles by minimizing the thickness of the oil layer as well as to achieve more uniform pressure reduction zones. Restricted airflow over an airfoil is investigated for its effect on pressure drop enhancement and on the maximum attainable angle of attack prior to the stall condition. Aerodynamic separation is shown to be delayed, based on numerical simulation of the Wortmann FX 63-137 airfoil in a confined domain using FLUENT 6.3.26. The proposed setup results in a higher pressure drop compared with the free stream case. With the aim of minimizing power consumption, we pursued a further restriction to an air jet over the airfoil. A curved strip model is then suggested for the air jet, which can be applied as an analysis/design tool for finding the best performance conditions. The pressure reduction is shown to be inversely proportional to the curvature of the upper airfoil profile. This reduction occurs within the tracking zones, where the air jet is effectively attached to the airfoil surface. The zero-slope condition is suggested for estimating the onset of these zones, after which the minimum curvature should be sought. The corresponding zero-slope curvature is applied for estimation of the maximum pressure drop, which shows satisfactory agreement with the simulation results.
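
As a minimal sketch of the suggested analysis: on a discretized upper-surface profile y(x), locate the zero-slope point and evaluate the curvature kappa = |y''| / (1 + y'^2)^(3/2) there. The profile below is a generic parabolic-arc placeholder, not the Wortmann FX 63-137 geometry:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)           # chordwise coordinate
y = 0.12 * (1.0 - (2.0 * x - 0.6) ** 2)  # hypothetical upper-surface profile

dy = np.gradient(y, x)
d2y = np.gradient(dy, x)
curvature = np.abs(d2y) / (1.0 + dy**2) ** 1.5

i_zero = np.argmin(np.abs(dy))           # zero-slope location
print(f"zero-slope at x = {x[i_zero]:.2f}, "
      f"curvature there = {curvature[i_zero]:.3f}")
```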

Keywords: airfoil, air jet, curved fluid flow, gas-oil separator

Procedia PDF Downloads 461
4725 Chi Square Confirmation of Autonomic Functions Percentile Norms of Indian Sportspersons Withdrawn from Competitive Games and Sports

Authors: Pawan Kumar, Dhananjoy Shaw, Manoj Kumar Rathi

Abstract:

The purposes of the study were to compare (a) frequencies among the four quartiles of the percentile norms of autonomic variables for power events and (b) frequencies among the four quartiles of the percentile norms of autonomic variables for aerobic events of Indian sportspersons withdrawn from competitive games and sports, with regard to the number of samples falling in each quartile. The study was conducted on 430 males aged 30 to 35 years. Based on the nature of their game/sport, the retired sportspersons were classified into power events (throwers, judo players, wrestlers, short-distance swimmers, cricket fast bowlers and power lifters) and aerobic events (long-distance runners, long-distance swimmers, water polo players). Data were collected using ECG polygraphs, then processed and extracted using frequency-domain and time-domain analyses. The collected data were summarized as frequencies and percentages for each quartile, and the frequencies were compared using chi-square analysis. For power events, the frequency distributions across the four quartiles Q1, Q2, Q3 and Q4 differed significantly at the .05 level for SDNN, Total Power (Absolute Power), HF (Absolute Power), LF (Normalized Power), HF (Normalized Power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, Valsalva manoeuvre, hand grip test, cold pressor test and lying-to-standing test, whereas they did not differ significantly at the .05 level for SDSD, RMSSD, SDANN, NN50 count, pNN50 count, LF (Absolute Power) and 30:15 ratio. For aerobic events, the frequency distributions across the four quartiles differed significantly at the .05 level for SDNN, LF (Normalized Power), HF (Normalized Power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, hand grip test, cold pressor test, lying-to-standing test and 30:15 ratio, whereas they did not differ significantly at the .05 level for SDSD, RMSSD, SDANN, NN50 count, pNN50 count, Total Power (Absolute Power), LF (Absolute Power), HF (Absolute Power) and Valsalva manoeuvre. The study concluded that the quartile frequencies of Indian sportspersons retired from power events and aerobic events differ across the four quartiles for the selected autonomic functions; hence, the developed percentile norms are not homogeneously distributed across the percentile scale, which suggests that the percentage distribution tends towards a normal distribution.
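
A minimal sketch of such a chi-square test of quartile frequencies is given below. Under well-fitting norms, each quartile should hold a quarter of the sample; the observed counts here are hypothetical and do not come from the study.

```python
from scipy.stats import chisquare

# Test whether scores on one autonomic variable are evenly spread
# over the four quartiles (Q1..Q4) of the percentile norms.
observed = [38, 61, 55, 26]           # hypothetical counts per quartile
n = sum(observed)
expected = [n / 4] * 4                # uniform spread expected under norms

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("quartile frequencies differ significantly at the .05 level")
else:
    print("no significant departure from a uniform quartile distribution")
```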

Keywords: power, aerobic, absolute power, normalized power

Procedia PDF Downloads 349
4724 Systematics of Water Lilies (Genus Nymphaea L.) Using 18S rDNA Sequences

Authors: M. Nakkuntod, S. Srinarang, K.W. Hilu

Abstract:

Water lily (Nymphaea L.) is the largest genus of Nymphaeaceae. This family comprises six genera (Nuphar, Ondinea, Euryale, Victoria, Barclaya, Nymphaea), and its members occur nearly worldwide in tropical and temperate regions. The classification of some species in Nymphaea is ambiguous due to high variation in leaf and flower parts, such as the leaf margin and stamen appendage. Therefore, phylogenetic relationships based on 18S rDNA were reconstructed to delimit this genus. DNA of 52 specimens belonging to the water lily family was extracted using a modified conventional method containing cetyltrimethyl ammonium bromide (CTAB). The results showed that the amplified fragment is about 1600 base pairs in size. After analysis, the aligned sequences presented 9.36% variable characters, comprising 2.66% parsimony-informative sites and 6.70% singleton sites. Moreover, there are six insertion/deletion regions of 1-2 bases. The phylogenetic trees based on maximum parsimony and maximum likelihood, with high bootstrap support, indicated that the genus Nymphaea is paraphyletic because Ondinea, Victoria and Euryale nest within it. Within Nymphaea, subgenus Nymphaea is a basal lineage that groups with Euryale and Victoria. The other four subgenera, namely Lotos, Hydrocallis, Brachyceras and Anecphya, fall in the same large clade, in which Ondinea is placed within the Anecphya clade, consistent with their shared geographical distribution.
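
A minimal sketch of how variable, parsimony-informative and singleton sites are counted from an alignment is shown below; the four short sequences are invented toy data, not the study's 18S rDNA alignment.

```python
# Count variable, parsimony-informative and singleton sites.
# A parsimony-informative site has at least two states, each
# occurring in at least two sequences; the remaining variable
# sites are singletons.
aln = [
    "ATGCTAGGTA",
    "ATGTTAGCTA",
    "ATGTTAGGTA",
    "ATGCTAGCTT",
]

n_sites = len(aln[0])
variable = informative = singleton = 0

for i in range(n_sites):
    column = [seq[i] for seq in aln]
    counts = {b: column.count(b) for b in set(column)}
    if len(counts) > 1:                      # more than one state: variable
        variable += 1
        if sum(1 for c in counts.values() if c >= 2) >= 2:
            informative += 1
        else:
            singleton += 1

print(f"variable:    {100 * variable / n_sites:.2f}%")
print(f"informative: {100 * informative / n_sites:.2f}%")
print(f"singleton:   {100 * singleton / n_sites:.2f}%")
```

The percentages quoted in the abstract (9.36% variable = 2.66% informative + 6.70% singleton) follow exactly this partition of the aligned columns.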

Keywords: nrDNA, phylogeny, taxonomy, waterlily

Procedia PDF Downloads 137
4723 The Mapping of Pastoral Areas as an Ecological Basis for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia

Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim

Abstract:

This study aimed to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, in the district of Pinrang, South Sulawesi province. The grazing area was mapped in several stages: inputting and tracking of data points in Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the satellite-visualized tracking line, with records taken at selected points; input of the point and tracking data into the ArcMap application (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) covering the grazing areas; creation of a contour map (5 m interval); and production of slope and land-cover maps. Land cover, particularly the state of the vegetation, was analysed using the NDVI (Normalized Difference Vegetation Index) procedure applied to Landsat-8 imagery, as sketched below. The results showed that the topography of the grazing areas consists of hills and some sloping and flat surfaces, with elevations varying from 74 to 145 m above sea level (asl), whereas superior grasses and legumes require altitudes of up to 143-159 m asl. Slopes varied between 0 and more than 40% and were dominated by slopes of 0-15%, in line with the maximum slope of 15% suitable for pasture. The NDVI values obtained from the pasture image analysis ranged between 0.1 and 0.27, placing the vegetation cover of the pasture land in the low-density category. About 70% of the land was used for cattle grazing, while the remaining roughly 30% consisted of groves and forest, including water sources, which provide shelter for the cattle during the heat as well as a drinking-water supply. Seven dominant grass (Gramineae) species and five dominant legume species were found in the region. Proportionally, grasses dominated at 75.6% and legumes at 22.1%, with the remaining 2.3% consisting of other trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were six types of ground-cover plants not usable as forage.
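
A minimal sketch of the NDVI step follows. For Landsat-8 OLI, NDVI = (NIR - Red) / (NIR + Red), with NIR from band 5 and Red from band 4; the small reflectance arrays below are placeholders for the actual Landsat rasters of the study area.

```python
import numpy as np

# NDVI from Landsat-8 surface reflectance: band 5 (NIR), band 4 (Red).
# Real data would be read from GeoTIFFs; these arrays are illustrative.
red = np.array([[0.10, 0.12], [0.08, 0.11]])   # band 4 reflectance
nir = np.array([[0.14, 0.15], [0.13, 0.12]])   # band 5 reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi)

# Classify pixels into the density band reported in the abstract:
# NDVI of roughly 0.1-0.27 corresponds to low vegetation density.
low_density = (ndvi >= 0.1) & (ndvi <= 0.27)
print(f"low-density share: {100 * low_density.mean():.0f}% of pixels")
```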

Keywords: pastoral, ecology, mapping, beef cattle

Procedia PDF Downloads 341