Search results for: commercial complex
315 Flow Visualization and Characterization of an Artery Model with Stenosis
Authors: Anis S. Shuib, Peter R. Hoskins, William J. Easson
Abstract:
Cardiovascular diseases, principally atherosclerosis, are responsible for 30% of world deaths. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. It is increasingly recognized that the initiation and progression of disease and the occurrence of clinical events are a complex interplay between the local biomechanical environment and the local vascular biology. The aim of this study is to investigate the flow behavior through a stenosed artery. A physical experiment was performed using an artery model and a blood analogue fluid. The axisymmetric model consists of a contraction and an expansion region that follow a cosine function, with a 30% diameter reduction. The flow field was measured using particle image velocimetry (PIV). Spherical particles of 20 μm diameter were seeded in a water-glycerol-NaCl mixture. The steady-flow Reynolds number was 250. The area of interest is the region after the stenosis where flow separation occurs. The velocity field was measured and the velocity gradient was investigated. A high particle concentration was found in the recirculation zone. The high velocity gradient formed immediately after the stenosis throat created a lift force that enhanced particle migration toward the flow separation area.
Keywords: Stenosis artery, Biofluid mechanics, PIV
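As a rough illustration of the velocity-gradient step described above, the sketch below (Python, assuming a 2-D PIV field on a regular grid) estimates du/dr with a finite-difference gradient; the lumen radius, velocity field and grid resolution are placeholders rather than the experimental values.

```python
import numpy as np

# Hypothetical PIV grid: u (axial) and v (radial) velocity components sampled
# on a regular (r, x) grid downstream of the stenosis throat.
nr, nx = 64, 128
r = np.linspace(0.0, 0.004, nr)        # radial coordinate [m], 8 mm lumen assumed
x = np.linspace(0.0, 0.04, nx)         # axial coordinate [m]
R, X = np.meshgrid(r, x, indexing="ij")

# Placeholder velocity field standing in for the measured PIV vectors.
u = 0.1 * (1.0 - (R / r[-1]) ** 2) * np.exp(-X / 0.02)   # axial velocity [m/s]
v = np.zeros_like(u)                                      # radial velocity [m/s]

# Velocity gradient du/dr, the quantity linked to the lift force that drives
# particle migration toward the recirculation zone.
du_dr = np.gradient(u, r, axis=0)

# Locate the region of highest shear just after the throat.
i, j = np.unravel_index(np.argmax(np.abs(du_dr)), du_dr.shape)
print(f"max |du/dr| = {np.abs(du_dr[i, j]):.1f} 1/s at r={r[i]*1e3:.2f} mm, x={x[j]*1e3:.1f} mm")
```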
314 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise
Authors: J. P. Dubois, Omar M. Abdul-Latif
Abstract:
Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive color Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in the low SNR (signal-to-noise ratio) range. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
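A minimal sketch of the comparison idea, restricted to BPSK over AWGN only (no Rayleigh fading, ACGN or Doppler as in the paper): an SVM detector trained on a labelled preamble versus the matched filter, which for antipodal signaling reduces to a sign decision. All signal parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_ber(snr_db, n_bits=20000, n_train=2000):
    """BER of an SVM detector vs. a matched-filter (sign) detector for BPSK in AWGN."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                      # BPSK mapping: 0 -> -1, 1 -> +1
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))   # noise std for unit symbol energy
    received = symbols + sigma * rng.normal(size=n_bits)

    # Matched filter for antipodal signaling reduces to a sign decision.
    mf_bits = (received > 0).astype(int)

    # SVM detector trained on a labelled preamble of received samples.
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    svm.fit(received[:n_train].reshape(-1, 1), bits[:n_train])
    svm_bits = svm.predict(received[n_train:].reshape(-1, 1))

    return np.mean(mf_bits != bits), np.mean(svm_bits != bits[n_train:])

for snr in (0, 4, 8):
    ber_mf, ber_svm = simulate_ber(snr)
    print(f"SNR {snr:2d} dB: matched filter BER={ber_mf:.4f}, SVM BER={ber_svm:.4f}")
```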
313 The Effect of the Hourly Compensation on the Unemployment Rate: Comparative Analysis of United States, Canada and the United Kingdom Using Panel Data Regression Analysis
Authors: Ashiquer Rahman, Hares Mohammad, Ummey Salma
Abstract:
A country’s hourly compensation and unemployment rates are two of its most crucial indicators. They are not merely statistics; they have profound effects on individuals, families, the country, and the economy, and they are inversely related to one another. Increased hourly compensation in the manufacturing sector can have a favorable effect on job-changing behavior. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, in order to determine the effect of hourly compensation on the unemployment rate, we use panel data regression models and evaluate the expected link between hourly compensation and the unemployment rate. We estimate the fixed effects model (FEM), evaluate the error components model (ECM), and determine which model (the FEM or ECM) is better by pooling all 60 observations. We then analyze and review the data by comparing countries (the United States, Canada and the United Kingdom) using panel data regression models. Finally, we provide the results, analysis and a summary of this extensive research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful guidelines for the government and the academic community on using an econometric and social approach to the effect of hourly compensation on the unemployment rate in order to address the problem.
Keywords: Hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, the linear regression model.
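A minimal sketch of the fixed effects model estimated via least-squares dummy variables, on a placeholder 3-country x 20-year panel that mimics the 60 pooled observations; the data, variable names and coefficients are invented for illustration, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel layout: 3 countries x 20 years = 60 observations.
rng = np.random.default_rng(1)
countries = ["US", "Canada", "UK"]
years = np.arange(1995, 2015)
df = pd.DataFrame([(c, y) for c in countries for y in years], columns=["country", "year"])
df["hourly_comp"] = rng.uniform(15, 40, len(df))
df["unemployment"] = 9.0 - 0.12 * df["hourly_comp"] + rng.normal(0, 0.5, len(df))

# Fixed effects model (FEM) via least-squares dummy variables: one intercept
# shift per country absorbs time-invariant country heterogeneity.
X = pd.get_dummies(df["country"], drop_first=True, dtype=float)
X["hourly_comp"] = df["hourly_comp"]
X = sm.add_constant(X)
fem = sm.OLS(df["unemployment"], X).fit()
print("slope:", fem.params["hourly_comp"], "p-value:", fem.pvalues["hourly_comp"])
```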
312 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques
Authors: C. Ardil
Abstract:
This paper presents an original application of multiple criteria decision making analysis theory to the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be considered as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering the multiple criteria utility theory and maximal regret minimization theory methods as well as aircraft technical, economic, and environmental aspects. The multiple criteria decision making analysis method uses different normalization techniques to allow criteria with qualitative and quantitative data to be aggregated in the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were used to evaluate the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process by enabling the decision maker to: (i) use higher level knowledge regarding the selection of criteria weights and the proposed technique, and (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft were considered in order to illustrate the proposed approach. The results of the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to changes in criteria weights was also performed so as to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA
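A small sketch of the four data normalization techniques named above, applied to a placeholder aircraft decision matrix with assumed criteria weights and a simple weighted-sum aggregation (all criteria treated as benefit-type for brevity).

```python
import numpy as np

def normalize(X, method):
    """Normalize a decision matrix X (alternatives x criteria), all criteria
    assumed benefit-type. Implements the four schemes compared in the abstract."""
    X = np.asarray(X, dtype=float)
    if method == "vector":
        return X / np.sqrt((X ** 2).sum(axis=0))
    if method == "linear_sum":
        return X / X.sum(axis=0)
    if method == "linear_max":
        return X / X.max(axis=0)
    if method == "linear_max_min":
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    raise ValueError(method)

# Hypothetical aircraft decision matrix: rows = aircraft, columns = criteria
# (e.g. range, seat capacity, efficiency score); values are placeholders.
X = np.array([[5500., 180, 0.62],
              [6100., 210, 0.58],
              [5900., 196, 0.65]])
w = np.array([0.4, 0.35, 0.25])          # assumed criteria weights

for m in ("vector", "linear_sum", "linear_max", "linear_max_min"):
    scores = normalize(X, m) @ w         # simple weighted-sum aggregation
    print(m, "ranking:", np.argsort(-scores) + 1)   # 1-based ranking of alternatives
```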
311 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from the access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped in four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
310 Automat Control of the Aircrafts' Lateral Movement using the Dynamic Inversion
Authors: Mihai Lungu, Romulus Lungu, Lucian Grigorie
Abstract:
The paper presents a new system for the automatic control of an aircraft's flight in the lateral plane using the kinematic model and dynamic inversion. Starting from the equations of the aircraft's lateral movement, the authors use two axis systems and obtain a control law that cancels the lateral deviation of the flying object from the runway line. This system makes the aircraft's direction angle follow the direction angle of the runway line. Simulations in Matlab/Simulink have been done for different aircraft initial points and direction angles. The drawback of this system is the long duration of the transient regime. That is why, when this system is used independently, the results are not very good; thus, it is better used as a part (subsystem) of other systems. The main system that cancels the lateral deviation from the runway line is based on dynamic inversion and uses, as a subsystem, the control system for the lateral movement based on the kinematic model. Using complex Matlab/Simulink models, the authors obtained the time evolution of the direction angle and of the aircraft's lateral deviation with respect to the runway line, for different values of the initial direction angle and for different wind types. The system behaves very well for all initial direction angles and wind types.
Keywords: Direction angle, dynamic inversion, lateral deviation, lateral movement.
309 Facility Location Selection using Preference Programming
Authors: C. Ardil
Abstract:
This paper presents a preference programming technique-based multiple criteria decision making analysis for selecting a facility location for a new organization or for the expansion of an existing facility, which is of vital importance for a decision support system and the strategic planning process. The implementation of decision support systems is considered crucial to sustaining competitive advantage and profitability in a turbulent environment. As effective strategic management and decision making are necessary, multiple criteria decision making analysis supports decision makers in formulating and implementing the right strategy. The investment cost associated with acquiring the property and constructing the facility makes facility location selection a long-term strategic investment decision, which justifies seeking the best location, resulting in higher economic benefits through increased productivity and an optimal distribution network. Selecting the proper facility location from a given set of alternatives is a difficult task, as many potential qualitative and quantitative conflicting criteria have to be considered. This paper solves a facility location selection problem using preference programming, an effective multiple criteria decision making analysis tool for dealing with complex decision problems in the operational research environment. The ranking results of preference programming are compared with the WSM, TOPSIS and VIKOR methods.
Keywords: Facility Location Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, Preference Programming, Location Selection, WSM, TOPSIS, VIKOR
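For illustration, a compact sketch of two of the benchmark methods mentioned above (WSM and TOPSIS) on a placeholder facility-location matrix with assumed weights; preference programming itself and VIKOR are not reproduced here.

```python
import numpy as np

def wsm(X, w):
    """Weighted sum model on a max-normalized, benefit-type decision matrix."""
    return (X / X.max(axis=0)) @ w

def topsis(X, w):
    """TOPSIS closeness coefficients (all criteria assumed benefit-type)."""
    V = X / np.sqrt((X ** 2).sum(axis=0)) * w      # weighted vector-normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Hypothetical facility-location matrix: rows = candidate sites, columns =
# criteria (market proximity, labour availability, infrastructure score).
X = np.array([[0.70, 0.55, 0.80],
              [0.85, 0.40, 0.60],
              [0.60, 0.75, 0.70]])
w = np.array([0.5, 0.3, 0.2])            # assumed criteria weights

print("WSM ranking   :", np.argsort(-wsm(X, w)) + 1)
print("TOPSIS ranking:", np.argsort(-topsis(X, w)) + 1)
```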
308 Development of Sustainable Farming Compartment with Treated Wastewater in Abu Dhabi
Authors: Jongwan Eun, Sam Helwany, Lakshyana K. C.
Abstract:
The United Arab Emirates (UAE) is significantly dependent on desalinated water and groundwater resources, which are expensive and highly energy intensive. Despite the scarce water resources, only 54% of the recycled water was reused in 2012, and due to the lack of infrastructure to reuse the recycled water, this portion is expected to decrease with growing water usage. In this study, an “Oasis” complex comprised of Sustainable Farming Compartments (SFC) was proposed for reusing treated wastewater. The wastewater is used to decrease the ambient temperature of the SFC via an evaporative cooler. An SFC prototype was designed, built, and tested in an environmentally controlled laboratory and at a field site to evaluate the feasibility and effectiveness of the SFC subjected to various climatic conditions in Abu Dhabi. Based on the experimental results, the temperature drops achieved in the SFC in the laboratory and at the field site were 5°C from 22°C and 7-15°C (from 33-45°C to an average of 28°C at relative humidity < 50%), respectively. An energy simulation using TRNSYS was performed to extend and validate the results obtained from the experiment. The results from the energy simulation and the experiments show statistically close agreement. The total power consumption of the SFC system was approximately three and a half times lower than that of an electrical air conditioner. Therefore, by using treated wastewater, the SFC has a promising prospect of addressing Abu Dhabi's ecological concerns related to desertification and wind erosion.
Keywords: Ecological farming system, energy simulation, evaporative cooling system, treated wastewater, temperature, humidity.
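A back-of-the-envelope sketch of the evaporative cooling effect, using the standard direct-evaporative relation in which supply air approaches the wet-bulb temperature with an assumed saturation effectiveness; the wet-bulb values below are guesses, not measurements from the study.

```python
def evaporative_outlet_temp(t_dry, t_wet, effectiveness=0.8):
    """Direct evaporative cooling estimate: supply air approaches the wet-bulb
    temperature with a given saturation effectiveness (assumed 0.8 here)."""
    return t_dry - effectiveness * (t_dry - t_wet)

# Rough check against the field numbers reported in the abstract (ambient
# 33-45 C cooled to ~28 C on average at RH < 50%); wet-bulb values assumed.
for t_dry, t_wet in ((33.0, 24.0), (45.0, 27.0)):
    print(f"{t_dry:.0f} C ambient -> {evaporative_outlet_temp(t_dry, t_wet):.1f} C supply")
```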
307 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape
Authors: M. Vogiatzis, K. Perakis
Abstract:
Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprised of various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (Forest Maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
Keywords: land use/land cover, random forest, Landsat-8 OLI, Sentinel-2A MSI, Corine land cover
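A minimal sketch of the pixel-based Random Forest classification step, with a placeholder table of band reflectances and reference labels standing in for the Landsat 8 / Sentinel-2A training data; overall accuracy and band importances are computed as in a typical workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training table: one row per reference pixel, columns are band
# reflectances, label is the LULC class (0-10 for the 11 Sentinel-2A classes).
rng = np.random.default_rng(42)
n_pixels, n_bands, n_classes = 5000, 10, 11
X = rng.random((n_pixels, n_bands))          # placeholder reflectances
y = rng.integers(0, n_classes, n_pixels)     # placeholder labels from existing geodata

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
rf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
print("band importances:", np.round(rf.feature_importances_, 3))
```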
306 Evaluation of Model and Performance of Fuel Cell Hybrid Electric Vehicle in Different Drive Cycles
Authors: Fathollah Ommi, Golnaz Pourabedin, Koros Nekofa
Abstract:
In recent years, fuel cell vehicles have been rapidly appearing all over the globe. In less than 10 years, fuel cell vehicles have gone from mere research novelties to operating prototypes and demonstration models. At the same time, government and industry in developed countries have teamed up to invest billions of dollars in partnerships intended to commercialize fuel cell vehicles within the early years of the 21st century. The purpose of this study is to evaluate the model and performance of a fuel cell hybrid electric vehicle in different drive cycles. The fuel cell system model developed in this work is a semi-experimental model that allows users to use both theory and experimental relationships in a fuel cell system. The model can be used as part of a complex fuel cell vehicle model in the advanced vehicle simulator (ADVISOR). This work reveals that fuel consumption and energy efficiency vary in different drive cycles. Increasing acceleration and speed in a drive cycle leads to higher fuel consumption. In addition, energy losses in a drive cycle are related to the fuel cell system power request: parasitic power in different parts of the fuel cell system increases when the power request increases. Finally, most of the energy losses in a drive cycle occur in the fuel cell system, because the fuel cell stack produces most of the energy.
Keywords: Drive cycle, energy efficiency, energy consumption, fuel cell system.
305 Multi Task Scheme to Monitor Multivariate Environments Using Artificial Neural Network
Authors: K. Atashgar
Abstract:
When an assignable cause manifests itself in a multivariate process and the process shifts to an out-of-control condition, a root-cause analysis should be initiated by quality engineers to identify and eliminate the assignable cause(s) affecting the process. A root-cause analysis in a multivariate process is more complex compared to a univariate process. In the case of a process involving several correlated variables, an effective root-cause analysis is only possible when the required knowledge, including the out-of-control condition, the change point, and the variable(s) responsible for the out-of-control condition, can be identified simultaneously. Although the literature addresses different schemes to monitor multivariate processes, one can find few scientific reports focused on all of the required knowledge. To the best of the author’s knowledge, this is the first time that a multi-task model based on an artificial neural network (ANN) is reported to monitor all the required knowledge at the same time for a multivariate process with more than two correlated quality characteristics. The performance of the proposed scheme is evaluated numerically when different step shifts affect the mean vector. Average run length is used to investigate the performance of the proposed multi-task model. The simulated results indicate that the multi-task scheme provides all the required knowledge effectively.
Keywords: Artificial neural network, Multivariate process, Statistical process control, Change point.
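The sketch below does not reproduce the authors' ANN model; it only illustrates the monitoring setting and the average run length (ARL) metric, using a classical Hotelling T² chart on three correlated quality characteristics under step shifts in the mean vector. The covariance matrix and control limit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 3                                    # three correlated quality characteristics
mu0 = np.zeros(p)
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
ucl = 14.16                              # assumed T^2 control limit (~0.27% false-alarm rate)

def run_length(shift, max_n=10000):
    """Samples until the T^2 statistic first exceeds the control limit."""
    for n in range(1, max_n + 1):
        x = rng.multivariate_normal(mu0 + shift, Sigma)
        t2 = (x - mu0) @ Sigma_inv @ (x - mu0)
        if t2 > ucl:
            return n
    return max_n

# Average run length under a step shift in the first variable's mean.
for delta in (0.0, 1.0, 2.0):
    arl = np.mean([run_length(np.array([delta, 0.0, 0.0])) for _ in range(200)])
    print(f"step shift of {delta:.1f} sigma -> ARL ~ {arl:.1f}")
```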
304 Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on captured image analysis. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Unsymmetrical ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to define structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect iced structures in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates. Ice thicknesses of the structure elements are obtained by averaging ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining structure and camera coordinates.
Keywords: Camera calibration, Ice detection, ice load measurements, image processing.
303 Visualization and Indexing of Spectral Databases
Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi
Abstract:
On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra by nearest neighbor algorithms and distance-based searching methods. The search for nearest neighbors in the spectral space is an NP-hard problem; the computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the utilization of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to use the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.
Keywords: indexing high dimensional databases, dimensional reduction, clustering, similarity, k-nn algorithm.
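A minimal sketch of the 2-D indexing idea: map placeholder spectra to two dimensions, build a Delaunay tessellation of the mapped space, and answer nearest-neighbour queries in the index rather than in the full spectral space. PCA is used here as the 2-D mapping for simplicity; the paper's actual mapping may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from scipy.spatial import Delaunay

# Placeholder spectral database: 1000 NIR spectra with 200 wavelength channels.
rng = np.random.default_rng(3)
spectra = rng.random((1000, 200))

# Two-dimensional mapping used both as an index and as a visualization layout.
pca = PCA(n_components=2).fit(spectra)
coords = pca.transform(spectra)

# Delaunay tessellation of the mapped space supports local, simplex-based prediction.
tri = Delaunay(coords)

# Nearest-neighbour query in the 2-D index instead of the 200-D spectral space.
nn = NearestNeighbors(n_neighbors=5).fit(coords)
query = pca.transform(spectra[:1])            # example query spectrum, mapped to 2-D
dist, idx = nn.kneighbors(query)
print("5 nearest spectra in the 2-D index:", idx[0])
print("containing simplex:", tri.find_simplex(query)[0])
```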
302 Quantifying the Second-Level Digital Divide on Sub-National Level
Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer
Abstract:
The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of “digital haves and have-nots”. While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the issues of the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding the strategies of national and regional digital development.
Keywords: Digital transformation, second-level digital divide, composite index, digital policy.
301 Automatic Road Network Recognition and Extraction for Urban Planning
Authors: D. B. L. Bong, K.C. Lai, A. Joseph
Abstract:
The uses of road maps in daily activities are numerous, but it is a hassle to construct and update a road map whenever there are changes. At Universiti Malaysia Sarawak, research on Automatic Road Extraction (ARE) was carried out to solve the difficulties in updating road maps. The research started with using Satellite Images (SI), or in short, the ARE-SI project. A Hybrid Simple Colour Space Segmentation & Edge Detection (Hybrid SCSS-EDGE) algorithm was developed to extract roads automatically from satellite images. In order to extract the road network accurately, the satellite image must be analyzed prior to the extraction process; the characteristics of the image elements are analyzed and, consequently, the relationships among them are determined. In this study, the road regions are extracted based on colour space elements and edge details of roads. In addition, an edge detection method is applied to further filter out the non-road regions. The extracted road regions are validated by using a segmentation method. These results are valuable for building road maps and detecting changes to the existing road database. The proposed Hybrid SCSS-EDGE algorithm can perform the tasks fully automatically, where the user only needs to input a high-resolution satellite image and wait for the result. Moreover, this system can work on complex road networks and generate the extraction result in seconds.
Keywords: Road network recognition, colour space, edge detection, urban planning.
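A rough sketch of the colour-space-plus-edge idea using OpenCV; the HSV thresholds, minimum blob area and file names are placeholders and do not correspond to the actual Hybrid SCSS-EDGE parameters.

```python
import cv2
import numpy as np

# Hypothetical file name; thresholds below would need tuning for real imagery.
image = cv2.imread("satellite_tile.png")

# Colour-space segmentation: asphalt roads tend to be low-saturation grey pixels.
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
road_mask = cv2.inRange(hsv, (0, 0, 80), (180, 60, 220))

# Edge detection to pick out linear road boundaries.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)

# Combine both cues, then remove small non-road blobs by connected components.
combined = cv2.bitwise_and(road_mask, cv2.bitwise_not(edges))
n, labels, stats, _ = cv2.connectedComponentsWithStats(combined, connectivity=8)
result = np.zeros_like(combined)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] > 500:      # keep only sizeable regions
        result[labels == i] = 255

cv2.imwrite("extracted_roads.png", result)
```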
300 Comparative Safety Performance Evaluation of Profiled Deck Composite Slab from the Use of Slope-Intercept and Partial Shear Methods
Authors: Izian Abd. Karim, Kachalla Mohammed, Nora Farah A. A. Aziz, Law Teik Hua
Abstract:
The economic use and ease of construction of profiled deck composite slabs are marred by the complex and uneconomic strength verification required for serviceability and general safety considerations. In addition, factors such as shear span length, deck geometry and mechanical friction greatly influence the longitudinal shear strength, which determines the ultimate strength of a profiled deck composite slab, and a number of methods are available for its determination; partial shear and slope-intercept are the two methods provided by Eurocode 4. However, because of the complexity associated with the shear behavior of profiled deck composite slabs, the use of these methods in determining the load carrying capacities of such slabs yields different and conflicting values. This, coupled with the time and cost constraints associated with strength verification, is a source of growing concern, and the issue is critical. Treating some of these known shear strength influencing factors as random variables, the load carrying capacity violation of profiled deck composite slabs from the use of the two methods defined according to Eurocode 4 is determined using a reliability approach and comparatively studied. The study reveals that safety values from the use of the m-k method show good standing compared with those from the partial shear method.
Keywords: Composite slab, first order reliability method, longitudinal shear, partial shear connection, slope-intercept.
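A crude Monte Carlo sketch of the reliability idea (the probability that the applied longitudinal shear exceeds the m-k resistance), standing in for the first order reliability method used in the study; the limit state form, distributions and parameter values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

# Random variables (placeholder distributions): m-k regression parameters,
# effective depth, shear span and the applied shear force.
m   = rng.normal(100.0, 8.0, n)     # slope parameter
k   = rng.normal(0.05, 0.005, n)    # intercept parameter
d_p = rng.normal(120.0, 3.0, n)     # effective depth [mm]
L_s = rng.normal(450.0, 20.0, n)    # shear span [mm]
A_p = 1200.0                        # deck cross-sectional area per metre width [mm^2], assumed
b   = 1000.0                        # slab width [mm]
V_E = rng.lognormal(np.log(28e3), 0.12, n)   # applied shear force [N]

# m-k longitudinal shear resistance (EC4-type form) and limit state g = R - E.
V_R = b * d_p * (m * A_p / (b * L_s) + k)
g = V_R - V_E

pf = np.mean(g < 0.0)
print(f"estimated probability of capacity violation ~ {pf:.2e}")
```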
299 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials
Authors: Aleš Florian, Lenka Ševelová
Abstract:
A complex sensitivity analysis of stresses in a concrete slab of a real type of rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and allows the entire model to be loaded by thermal load. The interaction of adjacent slabs in joints and the contact of the slab and the underlying layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional structural layers, and soil to a depth of about 3 m are modeled. The thicknesses of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperatures of the upper and lower surfaces of the slabs are assumed to be random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, the sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
Keywords: Concrete, FEM, pavement, sensitivity, simulation.
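A small sketch of the sampling-and-sensitivity workflow: Latin Hypercube Sampling of a few input variables, a placeholder response standing in for the FEM principal stress, and Spearman-rank-based sensitivity coefficients. The variable ranges and the response function are illustrative only.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Latin Hypercube Sampling of the input variables (placeholder ranges): slab
# thickness [m], concrete modulus [GPa], subgrade modulus [MPa] and the
# temperature difference between the slab surfaces [K].
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(n=20)                                    # 20 simulations as in the study
lower = np.array([0.20, 25.0, 40.0, -10.0])
upper = np.array([0.30, 40.0, 120.0, 10.0])
X = qmc.scale(u, lower, upper)

# Placeholder response standing in for the FEM principal stress at one point.
def principal_stress(x):
    h, E, k, dT = x
    return 1.8 * E / (h ** 1.5) + 0.04 * abs(dT) * E - 0.002 * k * E

s1 = np.array([principal_stress(x) for x in X])

# Sensitivity coefficients based on the Spearman rank correlation coefficient.
for name, col in zip(["thickness", "E_concrete", "k_subgrade", "delta_T"], X.T):
    rho, _ = spearmanr(col, s1)
    print(f"{name:11s}: rho = {rho:+.2f}")
```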
298 Assessment of Agricultural Land Use Land Cover, Land Surface Temperature and Population Changes Using Remote Sensing and GIS: Southwest Part of Marmara Sea, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
Land Use Land Cover (LULC) changes due to human activities and natural causes have become a major environmental concern. Assessment of temporal remote sensing data provides information about LULC impacts on the environment. Land Surface Temperature (LST) is one of the important components for modeling environmental changes in climatological, hydrological, and agricultural studies. In this study, LULC changes (September 7, 1984 and July 8, 2014), especially in agricultural lands, together with population changes (1985-2014) and LST status were investigated using remotely sensed and census data in the South Marmara Watershed, Turkey. LULC changes were determined using Landsat TM and Landsat OLI data acquired in the summers of 1984 and 2014. Six-band TM and OLI images were classified using a supervised classification method to prepare LULC maps with five classes: Forest (F), Grazing Land (G), Agricultural Land (A), Water Surface (W), and Residential Area-Bare Soil (R-B). The LST images were also derived from the thermal bands of the same dates. The LULC classification results showed that forest areas, agricultural lands, water surfaces and residential area-bare soils increased by 65751 ha, 20163 ha, 1924 ha and 20462 ha, respectively. In comparison, a dramatic decrease occurred in grazing land (107985 ha) within three decades. The population increased by 29% between 1984 and 2014 in the whole study area. Along with the natural causes, migration also contributed to this increase, since the study area has an important employment potential. LULC was transformed among the classes due to the expansion of residential, commercial and industrial areas as well as political decisions. The results also showed that agricultural lands around the settlement areas were transformed into residential areas within 30 years. The LST images showed that mean temperatures ranged between 26-32°C in 1984 and 27-33°C in 2014. The minimum temperature of agricultural lands increased by 3°C and reached 23°C. In contrast, the maximum temperature of the A class decreased from 44°C to 41°C. Considering the temperatures of the 2014 R-B class and the 1984 status of the same areas, it was seen that the mean, minimum and maximum temperatures increased by 2°C. As a result, the dynamism of population, LULC and LST resulted in increasing mean and maximum surface temperatures, living spaces/industrial areas and agricultural lands.
Keywords: Census data, Landsat, land surface temperature (LST), land use land cover (LULC).
297 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller
Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini
Abstract:
Today’s modern interconnected power system is highly complex in nature, and one of the most important requirements during its operation is reliability and security. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) against major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS during a major fault can only be seen when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the SimPowerSystems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The tuning of the controller parameters can be performed using different methods; the capability of the genetic algorithm (GA) makes it well suited to the controller parameter tuning process. In this paper, a POD controller is first used for power oscillation damping. In that configuration, however, the frequency oscillations are not properly damped. Therefore, an FOD controller tuned using the GA is applied, which damps out the frequency oscillations properly while the power oscillation damping remains satisfactory.
Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).
296 Relationship between Personality Traits and Postural Stability among Czech Military Combat Troops
Authors: K. Rusnakova, D. Gerych, M. Stehlik
Abstract:
Postural stability is a complex process involving the actions of biomechanical, motor, sensory and central nervous system components. The numerous joint systems and muscles involved, and the complexity of sporting movements and situations, require perfect coordination of the body's movement patterns. To adapt to a constantly changing situation in such a dynamic environment as physical performance, optimal input of information from visual, vestibular and somatosensory sensors is needed. Combat soldiers are required to perform physically and mentally demanding tasks in adverse conditions, and poor postural stability has been identified as a risk factor for lower extremity musculoskeletal injury. The aim of this study is to investigate whether some personality traits are related to static postural stability performance among soldiers of combat troops. The NEO personality inventory (NEO-PI-R) was used to identify personality traits, and the Nintendo Wii Balance Board was used to assess the static postural stability of the soldiers. Postural stability performance was assessed by changes in the center of pressure (CoP) and the center of gravity (CoG). A posturographic test was performed for 60 s with eyes open during quiet upright standing. The results showed that facets of the neuroticism and conscientiousness personality traits were significantly correlated with the measured CoP and CoG parameters. This study can help to better understand the relationship between personality traits and static postural stability. The results can be used to optimize the training process at the individual level.
Keywords: Neuroticism, conscientiousness, postural stability, combat troops.
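A minimal sketch of the type of analysis described: rank correlations between personality facet scores and CoP outcomes on a placeholder data frame (values invented for illustration, not the study's data).

```python
import pandas as pd
from scipy.stats import spearmanr

# Placeholder data: one row per soldier, NEO-PI-R facet scores and
# posturographic outcomes (CoP path length, CoP sway area) from the balance board.
df = pd.DataFrame({
    "neuroticism":       [52, 61, 45, 58, 49, 63, 55, 47],
    "conscientiousness": [60, 48, 66, 51, 59, 44, 57, 62],
    "cop_path_mm":       [412, 488, 380, 465, 401, 502, 440, 395],
    "cop_area_mm2":      [310, 365, 280, 350, 300, 380, 330, 290],
})

for trait in ("neuroticism", "conscientiousness"):
    for outcome in ("cop_path_mm", "cop_area_mm2"):
        rho, p = spearmanr(df[trait], df[outcome])
        print(f"{trait:17s} vs {outcome:12s}: rho={rho:+.2f}, p={p:.3f}")
```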
295 Latent Factors of Severity in Truck-Involved and Non-Truck-Involved Crashes on Freeways
Authors: Shin-Hyung Cho, Dong-Kyu Kim, Seung-Young Kho
Abstract:
Truck-involved crashes have higher crash severity than non-truck-involved crashes. There have been many studies about the frequency of crashes and the development of severity models, but those studies only analyzed the relationships between observed variables. To identify why more people are injured or killed when trucks are involved in a crash, we must quantify the complex causal relationship between crash severity and risk factors by adopting latent factors of crashes. The aim of this study was to develop a structural equation model based on truck-involved and non-truck-involved crashes, including five latent variables, i.e. a crash factor, environmental factor, road factor, driver factor, and severity factor. To clarify the unique characteristics of truck-involved crashes compared to non-truck-involved crashes, a confirmatory analysis method was used. To develop the model, we extracted crash data from 10,083 crashes on Korean freeways from 2008 through 2014. The results showed that the most significant variable affecting the severity of a crash is the crash factor, which can be expressed by the location, cause, and type of the crash. For non-truck-involved crashes, the crash and environment factors increase the severity of the crash; conversely, the road and driver factors tend to reduce the severity of the crash. For truck-involved crashes, the driver factor has a significant effect on the severity of the crash, although its effect is slightly less than that of the crash factor. A multiple group analysis was employed to analyze the differences between the heterogeneous groups of drivers.
Keywords: Crash severity, structural equation modeling, truck-involved crashes, multiple group analysis, crash on freeway.
294 Building Information Modeling and Its Application in the State of Kuwait
Authors: Michael Gerges, Ograbe Ahiakwo, Martin Jaeger, Ahmad Asaad
Abstract:
Recent advances in Building Information Modeling (BIM), especially in the Middle East, have increased remarkably. Dubai has been taking a lead on this by making it mandatory for BIM to be adopted for all projects that involve complex architectural designs. This is because BIM is a dynamic process that assists all stakeholders in monitoring the project status throughout different project phases with great transparency. It focuses on utilizing information technology to improve collaboration among project participants during the entire life cycle of the project, from the initial design to the supply chain, resource allocation, construction and all productivity requirements. In view of this trend, the paper examines the extent of applying BIM in the State of Kuwait by exploring practitioners’ perspectives on BIM, especially their perspectives on the main barriers and main advantages. To this end, structured interviews were carried out based on questionnaires and with a range of different construction professionals. The results revealed that practitioners perceive improved communication and mitigated project risks through encouraged collaboration between project participants. However, it was also observed that the full implementation of BIM in the State of Kuwait requires concerted efforts to make clients demand BIM, counteract resistance to change among construction professionals, and offer more training for design team members. This paper forms part of an on-going research effort on BIM and its application in the State of Kuwait, and it is on this basis that further research on the topic is proposed.
Keywords: Building Information Modeling, BIM, construction industry, Kuwait.
293 Foreign Languages and Employability in the EU
Authors: Paulina Pietrzyk-Kowalec
Abstract:
This paper presents the phenomenon of multilingualism becoming the norm rather than the exception in the European Union. It also seeks to describe the correlation between the command of foreign languages and employability. It is evident that the challenges of today's societies when it comes to employability are more and more complex. Thus, it is one of the crucial tasks of higher education to prepare its students to face this kind of complexity, understand its nuances, and have the capacity to adapt effectively to situations that are common in corporations based in the countries belonging to the EU. From this point of view, the assessment of the impact that the command of foreign languages of European university students could have on the numerous business sectors becomes vital. It also involves raising awareness of future professionals to make them understand the importance of mastering communicative skills in foreign languages that will meet the requirements of students' prospective employers. The direct connection between higher education institutions and the world of business also allows companies to realize that they should rethink their recruitment and human resources procedures in order to take into account the importance of foreign languages. This article focuses on the objective of the multilingualism policy developed by the European Commission, which is to enable young people to master at least two foreign languages, which is crucial in their future careers. The article puts emphasis on the existence of a crucial connection between the research conducted in higher education institutions and the business sector in order to reduce current qualification gaps.
Keywords: Cross-cultural communication, employability, human resources, language attitudes, multilingualism.
292 Implication of Taliban’s Recent Relationship with Neighboring Countries and Its Impact on the Current Peace Process
Authors: Lutfurahman Aftab
Abstract:
The Taliban’s relationships with the neighboring countries are a complex political issue that local people interpret one way and politicians perceive differently; therefore, it is a current issue that needs to be analyzed broadly and impartially. In this article, we investigate the Taliban’s current relationships with the neighboring countries, and we look at the effects these relationships have on the current peace negotiations in Doha, which began on September 12, 2020. The issue of the Taliban and the current peace process has become the center of attention for most of the neighboring countries, and every country has opened new pages in its foreign policy, because after the Taliban-US peace agreement, the neighboring countries are meticulously and closely observing the situation and believe that the Taliban is on the verge of tightening its grip on the future political power of Afghanistan. Every neighboring country of Afghanistan has political, economic, and social interests in this land-locked country. The Taliban’s current role within the peace talks and anticipated future position within the Afghan government will have great political, economic, and social implications for countries in the region as they assess their foreign policies. As these countries move to form closer ties with the Taliban, the government of Afghanistan is worried that this may hinder the peace process. Afghanistan has long blamed Pakistan for sheltering the Taliban and providing safe havens for terrorist groups, including Al Qaeda. The recent visits of the Taliban’s delegations to Islamabad, Pakistan, have raised concern among government officials in Afghanistan, who believe that the Taliban is not independent in its decisions and consults with Pakistan’s political leadership for every step it takes.
Keywords: peace process, USA, Afghanistan, Taliban
291 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz
Abstract:
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, but its importance has been ignored, especially in neural network based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that use traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, the k-fold cross validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
Keywords: Software metrics, software cost estimation, neural network.
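A minimal sketch of the modeling setup: an MLP regressor with input scaling evaluated by k-fold cross-validation on a placeholder metric set (63 rows echo the COCOMO'81 sample size, but the values and the effort relationship are synthetic).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import KFold, cross_val_score

# Placeholder metric set: rows are past projects, columns are effort drivers
# (e.g. size plus cost-driver ratings); target is effort in person-months.
rng = np.random.default_rng(5)
n_projects, n_metrics = 63, 8
X = rng.random((n_projects, n_metrics))
y = 150.0 * X[:, 0] ** 1.1 + rng.normal(0, 2, n_projects)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)

# k-fold cross-validation to make thorough use of the limited sample.
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_mean_absolute_error")
print("MAE per fold:", np.round(-scores, 2))
```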
290 Liquid-Liquid Equilibrium for the Binary Mixtures of α-Pinene + Water and α-Terpineol + Water
Authors: Herti Utami, Sutijan, Roto, Wahyudi Budi Sediawan
Abstract:
α-Pinene is the main component of most turpentine oils. The hydration of α-pinene with acid catalysts leads to a complex mixture of monoterpenes. In order to obtain more valuable products, the α-pinene in turpentine can be hydrated in dilute mineral acid solutions to produce α-terpineol. The design of separation processes requires information on phase equilibrium and related thermodynamic properties. This paper reports the results of a study on the liquid-liquid equilibrium (LLE) of systems containing α-pinene + water and α-terpineol + water. Binary LLE data for the α-pinene + water and α-terpineol + water systems were determined experimentally at 301 K and atmospheric pressure. The two-component mixture was stirred for about 30 min, and then the mixture was left for about 2 h for complete phase separation. The composition of both phases was analyzed using a gas chromatograph. The experimental data were correlated by considering both the NRTL and UNIQUAC activity coefficient models. The LLE data for the α-pinene + water and α-terpineol + water systems were correlated successfully by the NRTL model, while the experimental data were not satisfactorily fitted by the UNIQUAC model. For the α-pinene + water system at 301 K, the NRTL model correlates the LLE data with an RMSD of 0.0404% for α = 0.3 and 0.0058% for α = 0.61; for the α-terpineol + water system at 301 K, the RMSD between the experimental and calculated mole fractions is 0.1487% for α = 0.3 and 0.0032% for α = 0.6.
Keywords: α-Pinene, α-terpineol, liquid-liquid equilibrium, NRTL model, UNIQUAC model.
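A small sketch of the binary NRTL activity coefficient equations used in the correlation; α = 0.3 follows the α-pinene + water fit mentioned above, while the τ parameters are placeholders rather than the regressed values from the paper.

```python
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha):
    """Activity coefficients (gamma1, gamma2) of a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    G12 = np.exp(-alpha * tau12)
    G21 = np.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative parameters only: alpha = 0.3 as in the alpha-pinene + water fit,
# tau values are placeholders.
tau12, tau21, alpha = 3.5, 4.2, 0.3
for x1 in (0.001, 0.01, 0.1):
    g1, g2 = nrtl_binary(x1, tau12, tau21, alpha)
    print(f"x1={x1:5.3f}: gamma1={g1:8.2f}, gamma2={g2:6.3f}")
```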
289 Design of QFT-Based Self-Tuning Deadbeat Controller
Authors: H. Mansor, S. B. Mohd Noor
Abstract:
This paper presents a design method for self-tuning Quantitative Feedback Theory (QFT) using an improved deadbeat control algorithm. QFT is a technique to achieve robust control with pre-defined specifications, whereas deadbeat is an algorithm that brings the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response. By integrating QFT specifications into the deadbeat algorithm, the large peaks can be tolerated. On the other hand, merging QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved. An output response that is fast, robust and adaptive is expected. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour and long delays, and it is affected by environmental changes and disturbances. Performance comparisons have been carried out between the proposed self-tuning QFT-based deadbeat, standard QFT and standard deadbeat controllers. The efficiency of the self-tuning QFT-based deadbeat controller has been demonstrated by the test results: the controller’s parameters are updated online, and the percentage overshoot and settling time are smaller, especially when there are variations in the plant.
Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.
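A bare-bones sketch of the deadbeat idea on a first-order discrete plant: the control law places the closed-loop pole at the origin, so the output reaches the set-point in one step when the plant is known exactly. The QFT bounds and the self-tuning (adaptive) layer of the proposed controller are beyond this snippet; the plant parameters are assumed.

```python
import numpy as np

# First-order discrete plant: x[k+1] = a*x[k] + b*u[k],  y[k] = x[k].
a, b = 0.85, 0.12            # assumed plant parameters (e.g. from dryer identification)
setpoint = 1.0

x = 0.0
history = []
for k in range(6):
    u = (setpoint - a * x) / b          # deadbeat control action (pole placed at z = 0)
    x = a * x + b * u                   # plant update
    history.append(round(x, 4))

print(history)    # -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```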
288 An Overview of Electronic Waste as Aggregate in Concrete
Authors: S. R. Shamili, C. Natarajan, J. Karthikeyan
Abstract:
Rapid growth of the world population and widespread urbanization have remarkably increased the development of the construction industry, which has caused a huge demand for sand and gravel. Environmental problems occur when the rate of extraction of sand, gravel, and other materials exceeds the rate of generation of natural resources; therefore, an alternative source is essential to replace the materials used in concrete. Nowadays, electronic products have become an integral part of daily life, providing more comfort, security, and ease of exchange of information. These electronic waste (E-Waste) materials raise serious human health concerns and require extreme care in their disposal to avoid any adverse impacts. The disposal or dumping of E-Waste also causes major issues because it is highly complex to handle and often contains highly toxic chemicals such as lead, cadmium, mercury, beryllium, brominated flame retardants (BFRs), polyvinyl chloride (PVC), and phosphorus compounds. Hence, E-Waste can be incorporated in concrete to make a sustainable environment. This paper deals with the composition, preparation, properties, and classification of E-Waste. All these processes avoid dumping to landfills whilst conserving natural aggregate resources and providing a better environmental option. This paper also provides a detailed literature review on the behaviour of concrete with the incorporation of E-Waste. Much research shows a strong possibility of using E-Waste as a substitute for aggregates, which eventually reduces the use of natural aggregates in concrete.
Keywords: Disposal, electronic waste, landfill, toxic chemicals.
287 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving
Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem
Abstract:
This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between homebuilding and a sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor with a 3 m height and a 2810 m2 area, while the envelope area is 1400 m2. The building construction fulfills the natural ventilation requirements. Glass curtain walls constitute about 50% of the building, and the window area is 300 m2. Glazing with 6 mm greenish gray tinted tempered glass as the outer board lite, 6 mm safety glass as the inner board lite, and a 16 mm thick dehydrated air space is used in the building. Visible light with 50% transmission, a 0.26 solar factor, a 0.67 shading coefficient and a 1.3 W/m2.K thermal insulation U-value are implemented to realize the performance requirements. Optimum electrical distribution for the lighting system, air conditioning and other electrical loads has been carried out. The power and quantity of each type of lighting system lamp and the energy consumption of the lighting system are investigated. The design of the air conditioning system is based on summer and winter outdoor conditions. Ventilated, air conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping. Indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison results show that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green energy consumption is about 1,692,057 kWh. In other words, the green building's total annual energy cost is reduced from $136,581 to $89,051. This means that the energy saving, and consequently the money saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green energy certification programs (Leadership in Energy and Environmental Design “LEED”) as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers optimum energy performance with minimum cost.
Keywords: Energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability.
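A quick check of the figures reported above: both the energy and the cost reduction work out to roughly 35%.

```python
green_kwh, baseline_kwh = 1_103_221, 1_692_057     # annual energy use [kWh]
green_cost, baseline_cost = 89_051, 136_581        # annual energy cost [$]

energy_saving = 1 - green_kwh / baseline_kwh
cost_saving = 1 - green_cost / baseline_cost

print(f"energy saving: {energy_saving:.1%}")   # ~34.8%, i.e. about 35%
print(f"cost saving:   {cost_saving:.1%}")     # ~34.8%
```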
286 A Cost Effective Approach to Develop Mid-size Enterprise Software Adopted the Waterfall Model
Authors: M. N. Hasnine, M. K. H. Chayon, M. M. Rahman
Abstract:
Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software by using the Waterfall model, one of the Software Development Life Cycle (SDLC) models, in a cost effective way. To fulfill the research objectives, in this study we developed mid-sized enterprise software named “BSK Management System” that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, a Rich Picture, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants has been conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Keywords: End-user application development, enterprise software design, information resource management, usability.