Search results for: Optimal node placement and Wireless sensor networks.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4319

89 The Reason of Principles of Construction Engineering and Management Being Necessary for Contracting Firms and Their Projects Managers

Authors: Mamoon Mousa Atout

Abstract:

The construction industry is in continuous growth, not only in the Middle East region but almost all over the world. Over the last fifteen years, a large expansion and an increase in different types of projects have been observed. Many infrastructure projects have been developed: high-rise buildings, large shopping malls, power sub-stations, roads, bridges, schools, universities, and new cities with full and complete facilities. The growth and enlargement of these projects have been accomplished through many international and local contracting organizations. The senior management of these organizations depends on qualified and experienced teams who are aware of the implications of project management, construction management, engineering management and resource management from tendering until final completion of the project. This research aims to find out why the principles of construction engineering and management are necessary for contracting firms and their managers. Principles of construction management help contracting organizations to accomplish and deliver projects without delay. This can be maintained by establishing detailed guidelines for updating the construction management system they have adopted, applied by qualified and experienced project managers. The research focuses on the benefits of other essential skills in project planning, monitoring and control. Defining the roles and responsibilities of contractor project managers during tendering and execution is part of the investigated factors that will be analyzed. Other skills, such as optimizing and utilizing the available project resources to deliver the project within time, cost and quality, will also be investigated to find out how these factors affect the performance of contracting firms, project managers and projects. The conclusion of the research will inform senior management teams and contractors' project managers about the benefits of implementing a construction management system, its effect upon performance, their knowledge of contract values, and the optimal profit margin of the firm.

Keywords: Construction management, contracting firms, project managers, planning processes, roles and responsibilities.

88 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since the surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as a test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling and a HAAS ST-20 CNC machine is used for turning in this research. Taguchi analysis is used to optimize the surface roughness of the machined parts. An L9 orthogonal array is designed for four controllable factors with three levels each; with one array for each of the two processes, this results in 18 experimental runs. The signal-to-noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow and finish cut, and for the milling process feed rate, spindle speed, step over and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the different uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10 respectively for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70 respectively for the milling process. The surface roughness was brought closer to the target, from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to efficiently utilize the Taguchi design analysis to improve the surface roughness.
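
As a rough illustration of the statistics used above, the following sketch computes the nominal-the-best signal-to-noise ratio of Taguchi analysis and the Cp/Cpk capability indices; the roughness samples are invented, not data from the study.

```python
import numpy as np

def sn_nominal_the_best(y):
    """Taguchi nominal-the-best S/N ratio: 10*log10(ybar^2 / s^2)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def process_capability(y, lsl, usl):
    """Cp = (USL-LSL)/(6*sigma); Cpk = min(USL-mu, mu-LSL)/(3*sigma)."""
    y = np.asarray(y, dtype=float)
    mu, sigma = y.mean(), y.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# Hypothetical roughness measurements (µin) for one run, judged against
# the 75 ± 15 µin specification stated in the abstract.
roughness = [71.2, 69.8, 73.5, 70.4, 72.1]
print("S/N (dB): %.2f" % sn_nominal_the_best(roughness))
cp, cpk = process_capability(roughness, lsl=60.0, usl=90.0)
print("Cp = %.2f, Cpk = %.2f" % (cp, cpk))
```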

Keywords: CNC milling, CNC turning, surface roughness, Taguchi analysis.

87 Production and Application of Organic Waste Compost for Urban Agriculture in Emerging Cities

Authors: Alemayehu Agizew Woldeamanuel, Mekonnen Maschal Tarekegn, Raj Mohan Balakrishina

Abstract:

Composting is one of the conventional techniques adopted for organic waste management, but the practice is very limited in emerging cities even though most of the waste generated is organic. This paper aims to examine the viability of composting for organic waste management in the emerging city of Addis Ababa, Ethiopia, by addressing the composting practice, the quality of the compost, and the application of compost in urban agriculture. The study collects data using compost laboratory testing and a survey of urban farm households, and uses descriptive analysis of the state of compost production and application, physicochemical analysis of the compost samples, and regression analysis of the urban farmers' willingness to pay for compost. The findings of the study indicate that there is composting practice at a small scale, most producers use unsorted feedstock materials, aerobic composting is dominantly used, and the maturation period ranges from four to ten weeks. The carbon content of the compost ranges from 30.8 to 277.1, depending on the type of feedstock applied, and this surpasses the ideal proportions for the C:N ratio. The total nitrogen, pH, organic matter and moisture content are relatively optimal. The levels of heavy metals measured for Mn, Cu, Pb, Cd and Cr6+ in the compost samples are also insignificant. In the urban agriculture sector, chemical fertilizer is the dominant type of soil input in crop production, but vegetable producers use a combination of fertilizer and other organic inputs, including compost. The willingness to pay for compost depends on income, household size, gender, type of soil inputs, monitoring of soil fertility, the main product of the farm, the farming method and farm ownership. Finally, this study recommends collaboration among stakeholders along the waste value chain, awareness creation on the benefits of composting, and addressing the challenges faced by both compost producers and users.

Keywords: Composting, emerging city, organic waste management, urban agriculture.

86 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solids removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and the variance σ2 are two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer looks better at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: all the tracer took 8 minutes to leave the white dolomite at 3.8 m h-1, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves the better influent distribution over the filtration system, helps to identify the applicability of a material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
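
For illustration, the RTD moments described above can be computed from outflow tracer data as follows; the tracer curve here is invented, not measured data from the study.

```python
import numpy as np

# Hypothetical tracer response at the filter outflow: time (min) vs.
# red drain dye concentration (arbitrary units).
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
c = np.array([0.0, 0.1, 0.9, 2.4, 3.1, 2.6, 1.5, 0.7, 0.3, 0.1, 0.0])

E = c / np.trapz(c, t)                  # exit-age distribution E(t)
# Cumulative distribution F(t) by trapezoidal accumulation of E(t).
F = np.concatenate(([0.0], np.cumsum(0.5 * (E[1:] + E[:-1]) * np.diff(t))))

mrt = np.trapz(t * E, t)                # mean residence time
var = np.trapz((t - mrt) ** 2 * E, t)   # variance sigma^2
print("MRT = %.2f min, variance = %.2f min^2" % (mrt, var))
print("F(t) reaches 0.95 at t = %.1f min" % t[np.searchsorted(F, 0.95)])
```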

Keywords: Filter media, hydraulic loading rate, residence time distribution, tracer.

85 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can easily be attacked using various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, for unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' parameter counts and mean average error rates in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
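
A minimal sketch of the loss/optimizer sweep described above is given below in PyTorch, using a toy CNN and random tensors in place of AlexNet/VGGNet/ResNet and the LivDet 2017 data; center loss and cosine proximity are omitted because they require custom implementations, so only built-in losses appear.

```python
import torch
import torch.nn as nn

# Tiny stand-in CNN; the paper uses AlexNet/VGGNet/ResNet instead.
def make_cnn(n_classes=2):
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 16 * 16, n_classes))

losses = {"cross_entropy": nn.CrossEntropyLoss(),
          "hinge": nn.MultiMarginLoss()}        # multi-class hinge loss
optimizers = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
              "rmsprop": torch.optim.RMSprop, "adadelta": torch.optim.Adadelta,
              "adagrad": torch.optim.Adagrad, "nadam": torch.optim.NAdam}

x = torch.randn(32, 1, 32, 32)      # placeholder batch (live vs. spoof)
y = torch.randint(0, 2, (32,))

for lname, loss_fn in losses.items():
    for oname, opt_cls in optimizers.items():
        model = make_cnn()
        opt = opt_cls(model.parameters(), lr=1e-3)
        opt.zero_grad()
        loss = loss_fn(model(x), y)  # forward pass and loss
        loss.backward()              # gradients for this loss/optimizer pair
        opt.step()
        print(f"{lname:14s} {oname:9s} first-step loss = {loss.item():.4f}")
```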

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

84 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in terms of computational overhead, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution of computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problems. This means that, given sufficient computational resources and time, an adversary should still not be able to break the key. Multi-party communication and the requirement of optimal resource utilization create a present-day demand for applications that need security enforcement and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) and enterprise grids are proven approaches for developing high-end computing systems, and by utilizing them one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
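
The core NTRU operation that grid execution can parallelize is multiplication in the ring Zq[x]/(x^N - 1), i.e., a cyclic convolution of coefficient vectors; the toy sketch below uses illustrative parameters and is in no way a secure NTRU instance.

```python
# Toy illustration of the core NTRU operation: multiplication of
# polynomials in the ring Z_q[x]/(x^N - 1) (a cyclic convolution).
# Parameters are illustrative only -- NOT a secure NTRU instance.
N, q = 11, 32

def ring_mult(a, b):
    """Cyclic convolution of coefficient lists a, b, reduced mod x^N - 1
    and mod q. Each output coefficient is independent, which is what
    makes the operation easy to split across grid nodes."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

f = [-1, 1, 1, 0, -1, 0, 1, 0, 0, 1, -1]   # small ternary polynomial
g = [-1, 0, 1, 1, 0, 1, 0, 0, -1, 0, -1]
print(ring_mult(f, g))
```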

Keywords: Alchemi, GridNtru, Ntru, PKCS.

83 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from the coding of the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed, and simulations are performed on three standard grayscale images: Lena, Barbara and Cameraman. Five decomposition scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and shows strong performance in terms of PSNR.
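
The potential gain from sign coding can be gauged by the empirical sign probability p and the binary entropy H(p), as in the sketch below. The coefficients here are synthetic stand-ins (real ones would come from a 9/7 biorthogonal transform of an image); note that symmetric synthetic data simply reproduces the 0.5 assumption, which is exactly what the paper's context-dependent probabilities improve upon.

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits: the cost per sign bit if entropy-coded at probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Synthetic stand-in for significant wavelet coefficients of one bit plane;
# real data would come from a 9/7 biorthogonal wavelet transform.
rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=4.0, size=10000)

p_neg = np.mean(coeffs < 0)
print("P(sign = -) = %.3f -> %.4f bits/sign" % (p_neg, binary_entropy(p_neg)))
# A value below 1.0 bits/sign indicates a compression gain over the usual
# assumption that each sign costs exactly one bit (p = 0.5).
```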

Keywords: Image compression, wavelet transform, sign coding, magnitude coding.

82 A Review of Emerging Technologies in Antennas and Phased Arrays for Avionics Systems

Authors: Muhammad Safi, Abdul Manan

Abstract:

In recent years, research in aircraft avionics systems (i.e., radars and antennas) has grown rapidly. Aircraft technology is experiencing an increasing shift from all-mechanical to all-electrical aircraft, with the introduction of uninhabited air vehicles and drone taxis over the last few years. This creates an overriding need to summarize the history, latest trends, and future developments in aircraft avionics research for a better understanding and development of new technologies in the domain of avionics systems. This paper focuses on future trends in antennas and phased arrays for avionics systems. Along with a general overview of future avionics trends, this work reviews around 50 high-quality research papers on aircraft communication systems. Electric-powered aircraft have been a hot topic in the modern aircraft world, and electric aircraft have advantages over their conventional counterparts. With increased drone taxi and urban air mobility, fast and reliable communication is very important, so concepts such as Broadband Integrated Digital Avionics Information Exchange Networks (B-IDAIENs) and modular avionics are being researched for better communication in future aircraft. A Ku-band phased array antenna based on a modular design can be used in a modular avionics system. Furthermore, integrated avionics is also an emerging area of research in future avionics. The main focus of work in future avionics will be the use of integrated modular avionics and infra-red phased array antennas, which are discussed in detail in this paper. Other work, such as reconfigurable antennas and optical communication, is also discussed. The future of modern aircraft avionics will be based on integrated modular avionics and small artificial intelligence-based antennas, and optical and infrared communication will also replace microwave frequencies.
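
For background, beam steering in a phased array follows from the array factor of a uniform linear array; the sketch below uses an assumed 16-element, half-wavelength-spaced Ku-band array, not parameters from any cited design.

```python
import numpy as np

# Array factor of an N-element uniform linear array steered to theta0.
# Illustrative Ku-band example; element count and spacing are assumptions.
c = 3e8
f = 14e9                        # Ku-band uplink frequency (Hz)
lam = c / f
N, d = 16, lam / 2              # 16 elements at half-wavelength spacing
theta0 = np.deg2rad(20.0)       # desired steering angle

k = 2 * np.pi / lam
theta = np.deg2rad(np.linspace(-90, 90, 721))
# Phase difference per element between observation and steering directions.
psi = k * d * (np.sin(theta) - np.sin(theta0))
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(N), psi)), axis=0)) / N
print("beam peak at %.1f degrees" % np.rad2deg(theta[np.argmax(af)]))
```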

Keywords: AI, avionics systems, communication, electric aircrafts, Infra-red, integrated avionics, modular avionics, phased array, reconfigurable antenna, UAVs.

81 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Sound decision making in the planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided. In this way, other benefits, such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution, may be gained. The congestion that comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult, and regulations should be made to reduce it. To provide constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. In this study, the aim is to design and implement a Decision Support System (DSS) application to determine the optimal bus stop places for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information is gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using values as close as possible to real ones. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost incurred on the lines are calculated. Components such as cost, security and noise pollution are considered significant factors affecting the solution of the set covering problem, which is used to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study are described in a previously published article by the corresponding author. Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
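
The set covering problem mentioned above can be illustrated with a simple greedy heuristic: repeatedly choose the candidate stop covering the most still-uncovered demand points. The data below are toy values, not the study's 110-stop IETT network.

```python
# Greedy heuristic for the set covering problem behind bus stop location:
# choose the fewest candidate stops so every demand point is covered.
demand_points = set(range(10))
candidate_stops = {                     # stop -> demand points it covers
    "A": {0, 1, 2}, "B": {2, 3, 4}, "C": {4, 5, 6},
    "D": {6, 7},    "E": {7, 8, 9}, "F": {1, 3, 5, 7, 9},
}

uncovered, chosen = set(demand_points), []
while uncovered:
    # Pick the stop covering the most still-uncovered demand points.
    stop = max(candidate_stops, key=lambda s: len(candidate_stops[s] & uncovered))
    if not candidate_stops[stop] & uncovered:
        break                           # remaining points cannot be covered
    chosen.append(stop)
    uncovered -= candidate_stops[stop]

print("selected stops:", chosen)        # ['F', 'A', 'C', 'E'] for this data
```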

Keywords: User cost, bi-level optimization model, decision support, operator cost, transportation.

80 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. As a major option for low-carbon generation, local energy systems (LES) featuring Combined Heat and Power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices while complementing the UK's emissions standards and targets. Recent advances in the dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector approach (simultaneous control of both heat and electricity) to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers and thermal energy storage, and the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate through simulation that integrated control of thermal storage can play a pivotal role in optimizing system performance well beyond present expectations. Environmental impact analysis and reporting of all energy systems, including CHPV LES, presently employ a static annual-average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance with and without an active thermal store in a notional group of non-domestic buildings. The results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.
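
The reporting issue raised here, a static annual-average versus a time-varying grid carbon intensity, can be illustrated in miniature as follows; all profiles and numbers are invented.

```python
import numpy as np

# Crediting avoided grid electricity with a static annual-average carbon
# intensity vs. a time-varying one. All numbers are invented.
hours = 8760
# Hypothetical daily swing in grid intensity (kgCO2/kWh), lowest at midday.
grid_ci = 0.233 + 0.08 * np.cos(np.arange(hours) * 2 * np.pi / 24)
# Hypothetical CHPV net export profile (kWh), peaking with PV output.
export = np.clip(np.sin((np.arange(hours) % 24 - 6) * np.pi / 12), 0, None) * 50

static_credit = export.sum() * grid_ci.mean()   # annual-average accounting
dynamic_credit = (export * grid_ci).sum()       # time-matched accounting
print("static  credit: %.1f tCO2" % (static_credit / 1000))
print("dynamic credit: %.1f tCO2" % (dynamic_credit / 1000))
```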

Keywords: CHPV, thermal storage, control, dynamic simulation.

79 Thermodynamic Evaluation of Coupling APR1400 with a Thermal Desalination Plant

Authors: M. Gomaa Abdoelatef, Robert M. Field, Yong-Kwan Lee

Abstract:

The growing human population has placed increased demands on water supplies and spurred a heightened interest in desalination infrastructure. Key elements of the economics of desalination projects are their thermal and electrical inputs. With growing concerns over the use of fossil fuels to (indirectly) supply these inputs, coupling desalination with nuclear power production represents a significant opportunity. Individually, nuclear and desalination technologies have a long history and are relatively mature. For desalination, Reverse Osmosis (RO) has the lowest energy inputs. However, the economically driven output quality of the water produced using RO, which uses only electrical inputs, is lower than the output water quality from thermal desalination plants. Therefore, modern desalination projects consider that RO should be coupled with thermal desalination technologies (MSF, MED, or MED-TVC), with their attendant steam inputs, to permit blending to produce various qualities of water. A large nuclear facility is well positioned to dispatch large quantities of both electrical and thermal power. This paper considers the supply of thermal energy to a large desalination facility and examines the heat balance impact on the nuclear steam cycle. The APR1400 nuclear plant is selected as prototypical, from both a capacity and a turbine cycle heat balance perspective, to examine the steam supply and the impact on electrical output. Extraction points and quantities of steam are considered parametrically, along with various types of thermal desalination technologies, to form the basis for further evaluations of economically optimal approaches to interfacing nuclear power production with desalination projects. In our study, the thermodynamic evaluation is executed with DE-TOP, an IAEA-sponsored program. DE-TOP can analyze power generation systems coupled to desalination plants through various steam extraction positions, taking into consideration the isolation loop between the nuclear and thermal desalination facilities (i.e., for radiological isolation).
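
A first-order sense of the trade-off can be obtained from the expansion work forgone by extracted steam, as sketched below with placeholder enthalpies rather than APR1400 heat-balance data (DE-TOP would supply the real figures).

```python
# First-order estimate of the electrical penalty of extracting steam for
# thermal desalination: steam diverted ahead of the condenser forgoes its
# remaining expansion work. All values are rough placeholders.
m_dot = 150.0          # extracted steam mass flow (kg/s), assumption
h_extraction = 2750.0  # enthalpy at the extraction point (kJ/kg), assumption
h_condenser = 2300.0   # enthalpy after expansion to the condenser (kJ/kg), assumption
eta = 0.97             # mechanical/generator efficiency, assumption

lost_mwe = m_dot * (h_extraction - h_condenser) * eta / 1000.0
print("electrical output forgone: %.1f MWe" % lost_mwe)  # ~65 MWe here
```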

Keywords: APR1400, Cogeneration, Desalination, DE-TOP, IAEA, MED, MED-TVC, MSF, RO.

78 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: S. Golmohammadi, M. Noorian Bidgoli

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. This method requires six main parameters of the rock mass, namely the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water reduction factor (Jw) and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system, using the Rock Engineering System (RES) method, to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using the conventional methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in effect, a statistical analysis of the data and a determination of the correlation coefficients between them, so the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the constructed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, SRF and Jr have the maximum and minimum influence on the system (cause), respectively, while RQD and Jw are the parameters most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate relation for rock mass classification can be obtained by weighting the required parameters in the Q-system.
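
For reference, the Q value combines the six parameters as Q = (RQD/Jn)·(Jr/Ja)·(Jw/SRF); the sketch below evaluates it for an illustrative parameter set, not data from the Azad Dam tunnel.

```python
def barton_q(rqd, jn, jr, ja, jw, srf):
    """Barton Q-system rating: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF).
    The three quotients represent block size, inter-block shear
    strength, and active stress, respectively."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Illustrative parameter set (not data from the Azad Dam tunnel).
q = barton_q(rqd=65.0, jn=9.0, jr=1.5, ja=2.0, jw=1.0, srf=2.5)
print("Q = %.2f" % q)   # 65/9 * 1.5/2 * 1/2.5 = 2.17 -> 'poor' rock class
```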

Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.

77 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in the condition monitoring of blades. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes that the blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited application to real-life scenarios and thus to facilitating a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation at low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that an artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and predicts accurately. The work paves the way for the further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
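
A minimal sketch of the frequency-to-mode-shape mapping is shown below using scikit-learn; the training data are synthetic stand-ins for the paper's finite element modal analysis results, with one measured natural frequency per sample.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for FE modal analysis results: a natural frequency
# (Hz) labelled with a mode shape class (0 = first bending,
# 1 = second bending, 2 = first torsion, say). Values are invented.
rng = np.random.default_rng(42)
centers = np.array([[82.0], [510.0], [940.0]])
X = np.vstack([c + rng.normal(0, 15.0, size=(200, 1)) for c in centers])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)                       # learn frequency -> mode shape
print("test accuracy: %.2f" % clf.score(X_te, y_te))
```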

Keywords: Modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition.

76 Utilization of Laser-Ablation Based Analytical Methods for Obtaining Complete Chemical Information of Algae

Authors: Pavel Pořízka, David Prochazka, Karel Novotný, Ota Samek, Zdeněk Pilát, Klára Procházková, and Jozef Kaiser

Abstract:

The main goal of this article is to find efficient methods for the elemental and molecular analysis of living microorganisms (algae) under defined environmental conditions and cultivation processes. Overall knowledge of the chemical composition is obtained using laser-based techniques: Laser-Induced Breakdown Spectroscopy (LIBS) for acquiring information about the elemental composition and Raman spectroscopy for gaining molecular information, respectively. Algal cells were suspended in liquid media and characterized using their spectra. Results obtained employing the LIBS and Raman spectroscopy techniques will help to elucidate algal biology (nutrition dynamics depending on cultivation conditions) and to identify algal strains that have potential for applications in metal-ion absorption (bioremediation) and in the biofuel industry. Moreover, bioremediation can readily be combined with the production of third-generation biofuels. In order to use algae for efficient fuel production, the optimal cultivation parameters have to be determined, leading to a high production of oil in selected cells without significant inhibition of the photosynthetic activity and the culture growth rate; e.g., it is necessary to distinguish conditions for algal strains containing high amounts of highly unsaturated fatty acids. Measurements employing LIBS and Raman spectroscopy were used to give information about the alga Trachydiscus minutus, with emphasis on the amount of lipid content inside the algal cells and on the ability of the algae to withdraw nutrients from their environment for bioremediation (elemental composition), respectively. This article can serve as a reference for further efforts to describe the complete chemical composition of algal samples employing laser-ablation techniques.

Keywords: Laser-Induced Breakdown Spectroscopy, Raman Spectroscopy, Algae, Algal strains, Bioremediation, Biofuels.

75 The Impact of Quality Cost on Revenue Sharing in Supply Chain Management

Authors: Fayza Obied-Allah

Abstract:

Meeting customers' needs, quality, and value creation while reducing costs through supply chain management presents challenges and opportunities for companies and researchers, and modern ideas must help counter these challenges and exploit the opportunities. Therefore, this paper discusses the impact of quality cost on revenue sharing as one of the most important incentives for configuring business networks. The paper develops the quality cost approach to align with the modern era, building a model to measure quality costs that can enable firms to manage revenue sharing in a supply chain. The developed model includes five categories: besides the four well-known categories (prevention costs, appraisal costs, internal failure costs, and external failure costs), a new category, recycle cost, has been developed in this research as a new vision of the relationship between quality costs and innovation in industry. The paper also examines whether such quality costs in supply chains influence the revenue sharing between partners. Using the author's quality cost model, the relationship between quality costs and revenue sharing among partners is examined in a case study of an Egyptian manufacturing company that is part of a supply chain. The paper argues that the revenue-sharing proportion allocated to the supplier increases as the supplier's recycle cost increases, and that the revenue-sharing proportion allocated to the manufacturer increases as the prevention and appraisal costs increase and as the failure costs, the manufacturer's recycle costs, and the suppliers' recycle costs decrease. However, the results present surprising findings. The purposes of this study are to develop the quality cost approach and to understand the relationships between quality costs and revenue sharing in supply chains. The present study therefore contributes to theory and practice by explaining how the cost of recycling can be incorporated into a quality cost model to better understand revenue sharing among partners in supply chains.
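
The five-category cost structure can be represented as a simple record whose totals feed such a revenue-sharing analysis, as in the sketch below; the figures are invented.

```python
from dataclasses import dataclass

@dataclass
class QualityCost:
    """Five-category quality cost model described above: the four
    classical categories plus the newly proposed recycle cost."""
    prevention: float
    appraisal: float
    internal_failure: float
    external_failure: float
    recycle: float

    def total(self):
        return (self.prevention + self.appraisal + self.internal_failure
                + self.external_failure + self.recycle)

# Invented figures for one supply chain partner (thousand EGP, say).
supplier = QualityCost(120.0, 80.0, 45.0, 30.0, 60.0)
print("supplier total quality cost:", supplier.total())
print("recycle share: %.0f%%" % (100 * supplier.recycle / supplier.total()))
```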

Keywords: Quality cost, Recycle cost, Revenue sharing, Supply chain.

74 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are extremely weakly damped. The reasons are cost optimization in machine building and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to failures in operation and to damage. The only practical solution for increasing the damping of these unwanted oscillations is the implementation of power system stabilizers, which generate an additional control signal that changes the synchronous generator field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Available commercial power system stabilizers are based on linear control theory; due to the nonlinear dynamics of the synchronous generator, current stabilizers do not assure optimal damping of the synchronous generator's oscillations over the entire operating range. For that reason, the use of robust power system stabilizers that are suitable for the entire operating range is reasonable. There are numerous robust techniques applicable to power system stabilizers. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied, and a robust power system stabilizer is developed on the basis of sliding mode theory. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations, and the elimination of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of the synchronous generator oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability. The proposed study contributes to progress in the development of advanced stabilizers, which will replace conventional linear stabilizers and improve the damping of synchronous generators.
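
A generic sliding mode control law of the kind discussed here combines a sliding surface s = λe + ė with a switching term. The sketch below uses a saturated (boundary-layer) switching function and illustrative gains; it is not the stabilizer actually developed in the paper.

```python
import numpy as np

def smc_stabilizer(speed_dev, accel, lam=8.0, K=0.15, phi=0.05, u_max=0.1):
    """Generic sliding mode control law: s = lam*e + e_dot,
    u = -K * sat(s/phi). The saturation (boundary layer) replaces the
    pure sign function to limit chattering of the excitation signal.
    All gains are illustrative assumptions."""
    s = lam * speed_dev + accel            # sliding surface
    u = -K * np.clip(s / phi, -1.0, 1.0)   # saturated switching law
    return float(np.clip(u, -u_max, u_max))

# Example: 0.2% rotor speed deviation with small positive acceleration.
du = smc_stabilizer(speed_dev=0.002, accel=0.01)
print("stabilizer signal added to field voltage reference:", du)
```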

Keywords: Control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator.

73 Laboratory Investigations on the Utilization of Recycled Construction Aggregates in Asphalt Mixtures

Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman

Abstract:

Road networks are expanding all over the world, and the construction and maintenance of road pavements require large amounts of aggregates. The considerable use of various natural aggregates for constructing roads, together with the increasing rate at which solid waste is generated, has attracted the attention of many researchers in the pavement industry to investigate the feasibility of applying some of these waste materials as alternative materials in pavement construction. Among the various waste materials, construction and demolition wastes, including Recycled Construction Aggregate (RCA), constitute a major part of the municipal solid waste in Australia. Creating opportunities for the application of RCA in civil and geotechnical engineering applications is an efficient way to increase its market value. However, in spite of such promising potential, insufficient and inconclusive data and information on the engineering properties of RCA have so far limited its reliability and design specifications. In light of this, this paper, as the first step of a comprehensive research program, aims to investigate the feasibility of using RCA obtained from construction and demolition waste to replace part of the coarse aggregates in asphalt mixtures. As the suitability of aggregates for use in asphalt mixtures is determined by aggregate characteristics, including the physical and mechanical properties of the aggregates, an experimental program was set up to evaluate the physical and mechanical properties of RCA. This laboratory investigation included the measurement of the compressive strength and workability of RCA, particle shape, water absorption, flakiness index, crushing value, deleterious materials and weak particles, wet/dry strength variation, and particle density. In addition, a comparison of RCA properties with those of virgin aggregates is included, and this paper presents the results of these investigations on RCA, basalt, and the RCA/basalt mix.

Keywords: Asphalt, basalt, pavement, recycled aggregate.

72 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study focused on the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and in-situ stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity averaged region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, conductivity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.
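
For intuition about the kind of PDE the paper solves in COMSOL, the sketch below integrates a 1D linearized pressure diffusion equation toward a fracture face with explicit finite differences; all parameters are invented and the coupled matrix/fracture physics is omitted.

```python
import numpy as np

# Explicit finite differences for a 1D linearized pressure diffusion
# equation dp/dt = alpha * d2p/dx2 -- a toy stand-in for the coupled
# matrix/fracture flow model solved in COMSOL. Values are invented.
L, nx = 10.0, 101                 # matrix block length (m), grid points
alpha = 1e-4                      # hydraulic diffusivity (m^2/s)
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha        # stable: dt <= 0.5*dx^2/alpha

p = np.full(nx, 25.0e6)           # initial reservoir pressure (Pa)
p_frac = 10.0e6                   # fracture (boundary) pressure (Pa)

for _ in range(20000):
    p[0] = p_frac                 # fracture face held at low pressure
    p[-1] = p[-2]                 # no-flow outer boundary
    p[1:-1] += alpha * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])

print("pressure 1 m from the fracture: %.2f MPa" % (p[10] / 1e6))
```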

Keywords: Fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation.

71 Embedding the Dimensions of Sustainability into City Information Modelling

Authors: Ali M. Al-Shaery

Abstract:

The purpose of this paper is to address the functions of sustainability dimensions in city information modelling and to present the sustainability criteria required to support the establishment of a sustainable planning framework for enhancing existing cities and developing future smart cities. The paper is divided into two sections. The first section is based on an examination of a wide and extensive array of cross-disciplinary literature from the last decade and a half, conceptualizing the terms 'sustainable' and 'smart city' and mapping their associated criteria to city information modelling. The second section is based on an analysis of two approaches to city information modelling, namely statistical and dynamic approaches, and their suitability for the development of city action plans. The paper argues that the use of statistical approaches to embed sustainability dimensions in city information modelling has limited value. Despite the popularity of such approaches in addressing other dimensions, such as utility and service management, in the development and action plans of world cities, they are unable to address the dynamics across various city sectors with regard to economic, environmental and social criteria. The paper suggests an integrative, dynamic and cross-disciplinary planning approach to embedding sustainability dimensions in city information modelling frameworks. Such an approach will pave the way towards the optimal planning and implementation of priority actions for projects and investments. The approach can be used to achieve three main goals: (1) better development and action plans for world cities; (2) the development of an integrative, dynamic and cross-disciplinary framework that incorporates economic, environmental and social sustainability criteria; and (3) the identification of areas that require further attention in the development of future sustainable and smart cities. The paper presents an innovative approach to city information modelling and a well-argued, balanced hierarchy of sustainability criteria that can contribute to an area of research that is still in its infancy in terms of development and management.

Keywords: Information modelling, smart city, sustainable city, sustainability dimensions, sustainability criteria, city development planning.

70 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembly, painting and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator whose purpose is to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme, with the position of each joint controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces with and without a deformable robot structure, together with the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. Considering link flexibility has highlighted an increase in the magnitude of the cutting forces. This proof of concept will help enrich the database of results in robotic machining for potential improvements in production.
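
For illustration, the inverse kinematics of a 3-DOF planar arm can be written in closed form: fix the tool orientation, locate the wrist point, and solve the remaining two-link subproblem. The link lengths below are assumptions, not those of the simulator's actual robot.

```python
import math

def ik_3r_planar(x, y, phi, l1, l2, l3, elbow=+1):
    """Inverse kinematics of a 3-DOF planar arm: end-effector pose
    (x, y, phi) -> joint angles (q1, q2, q3). Solve the 2R subproblem
    for the wrist point; q3 then closes the orientation."""
    xw = x - l3 * math.cos(phi)          # wrist position
    yw = y - l3 * math.sin(phi)
    c2 = (xw**2 + yw**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("pose out of reach")
    q2 = elbow * math.acos(c2)           # elbow-up/down branch
    q1 = math.atan2(yw, xw) - math.atan2(l2 * math.sin(q2),
                                         l1 + l2 * math.cos(q2))
    q3 = phi - q1 - q2
    return q1, q2, q3

# Example with assumed link lengths (m).
q = ik_3r_planar(0.8, 0.3, 0.0, l1=0.5, l2=0.4, l3=0.1)
print([round(math.degrees(qi), 2) for qi in q])
```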

Keywords: Control, machining, multibody, robotic, simulation.

69 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and can potentially enrich the learning experience. The institutional background of a Bachelor's-level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, 'eHealth Business and Solutions', is described and reflected upon in a gamified framework. Gaining a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time forces a selection of topics relevant to the industry. Health technology is a novel and growing industry with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity, and investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand customer value creation as the source of cash flow, and the benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses the outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted, and improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.

Keywords: Engineering education, integrated curriculum, learning experience, learning outcomes.

68 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a well-known type of CSP technology, noted for being inexpensive and having a low land use factor, but suffering from low optical efficiency. The LFR is considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. Commercial LFR power plants generate steam either directly or indirectly in order to produce electricity with high technical efficiency and at lower cost. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) in LFR power plants, using molten salt and other Heat Transfer Fluids (HTFs), to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. Optimizing the solar field size gives a solar multiple (SM) between 1.2 and 1.5, achieving an LCOE as low as 9 cents/kWh for the DSG configuration of the LFR. The plant is capable of producing around 141 GWh annually at a capacity factor of up to 36%, whereas the ISG configuration produces less energy at a higher cost. The optimization results show that DSG outperforms ISG, producing around 3% more annual energy with a 2% lower LCOE and 28% less capital cost.
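
For reference, the reported quantities follow from the standard definitions: capacity factor CF = E_annual/(P·8760) and LCOE = (CAPEX·CRF + O&M)/E_annual with CRF = r(1+r)^n/((1+r)^n - 1). The sketch below uses the abstract's approximate annual energy but invented cost and finance inputs, so its outputs only illustrate the calculation.

```python
# Capacity factor and a simplified LCOE for a 50 MWe plant. The energy
# figure follows the abstract; cost and finance inputs are invented.
P_MW = 50.0
E_annual_MWh = 141_000.0                     # ~141 GWh from the abstract

cf = E_annual_MWh / (P_MW * 8760.0)          # annual-average capacity factor
print("capacity factor: %.1f%%" % (100 * cf))

capex = 150e6            # $ (assumption)
opex = 3e6               # $/yr (assumption)
r, n = 0.07, 25          # discount rate, lifetime (assumptions)
crf = r * (1 + r) ** n / ((1 + r) ** n - 1)  # capital recovery factor
lcoe = (capex * crf + opex) / (E_annual_MWh * 1000)   # $/kWh
print("LCOE: %.1f cents/kWh" % (100 * lcoe))
```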

Keywords: Concentrated Solar Power, Levelized cost of electricity, Linear Fresnel reflectors, Steam generation.

67 Scenario and Decision Analysis for Solar Energy in Egypt by 2035 Using Dynamic Bayesian Network

Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary

Abstract:

Bayesian networks are now considered a promising tool in the field of energy, with different applications. The aim of this study was to specify the states of a previously constructed Bayesian network related to solar energy in Egypt and the factors affecting its market share, depending on the data distribution type followed by each factor, and using either the Z-distribution approach or Chebyshev's inequality theorem. The separate and conditional probabilities of the states of each factor in the Bayesian network were then derived, either from the collected and scraped historical data or from estimations and past studies. The results showed that the constructed model can be used for scenario and decision analysis to forecast the total market share of solar energy in Egypt by 2035 and its use as a stable renewable source for generating any type of energy needed. The model also showed that as the use of solar energy increases, the total costs decrease. Furthermore, different scenarios were identified, such as the best, worst, 50/50, and most likely scenarios, in terms of the expected changes in the percentage of the solar energy market share. The best scenario showed an 85% probability that the market share of solar energy in Egypt will exceed 10% of the total energy market, while the worst scenario showed only a 24% probability. Furthermore, policy analysis was applied to check the effect of changing the states of the controllable (decision) variable, acting as different scenarios, to show how this would affect the target nodes in the model. Additionally, the best environmental and economic scenarios were developed to show how other factors would be expected to behave in order to affect the model positively. Additional evidence and derived probabilities were added for the dynamic weather nodes, whose states depend on time, during the process of converting the Bayesian network into a dynamic Bayesian network.
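
Scenario and policy analysis on a discrete Bayesian network reduces to summing joint probabilities, optionally with the decision node clamped to a chosen state; the two-parent toy network below uses invented probability tables, not the study's model.

```python
from itertools import product

# Tiny discrete Bayesian network evaluated by direct enumeration:
# Policy and Cost are parents of Share ("solar market share > 10%").
# All probability tables are invented for illustration.
P_policy = {"supportive": 0.6, "weak": 0.4}
P_cost = {"falling": 0.7, "stable": 0.3}
P_share_high = {  # P(Share = high | Policy, Cost)
    ("supportive", "falling"): 0.85, ("supportive", "stable"): 0.55,
    ("weak", "falling"): 0.40, ("weak", "stable"): 0.15,
}

# Marginal probability of a high market share.
p_high = sum(P_policy[p] * P_cost[c] * P_share_high[(p, c)]
             for p, c in product(P_policy, P_cost))
print("P(share > 10%%) = %.3f" % p_high)

# Policy (decision) analysis: clamp the controllable node and recompute.
for p in P_policy:
    p_given = sum(P_cost[c] * P_share_high[(p, c)] for c in P_cost)
    print("  if policy = %-10s -> %.3f" % (p, p_given))
```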

Keywords: Bayesian network, Chebyshev, decision variable, dynamic Bayesian network, Z-distribution.

66 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), and AI concepts are applicable in Human Computer Interaction (HCI), expert systems (ES), and elsewhere. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders, and communication barriers exist when people with speech disorders interact with others. This research aims to build a hand gesture recognition system for interpretation between Lesotho's Sesotho and English, to help bridge the communication problems encountered by these communities. The system has various processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal edge detection algorithm, to discard regions without object edges. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification, and template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate depends directly on the distance of the hand from the camera, and different lighting conditions were considered: the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
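
A sketch of this pipeline in OpenCV's Python bindings (EmguCV wraps the same library for C#) is shown below; the hand cascade file and the HSV skin thresholds are placeholders, since OpenCV ships only face/eye cascades and skin thresholds vary with lighting.

```python
import cv2

# Detection pipeline sketch. 'hand_cascade.xml' and 'frame.jpg' are
# placeholders -- a trained hand cascade and a camera frame must be supplied.
cascade = cv2.CascadeClassifier("hand_cascade.xml")
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Haar detection with Canny pruning: edge density is used to discard
# regions unlikely to contain the object, speeding up the scan.
hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 flags=cv2.CASCADE_DO_CANNY_PRUNING)

for (x, y, w, h) in hands:
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Rough skin mask in HSV (threshold values are illustrative).
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)              # convex hull of the hand blob
    m = cv2.moments(hand)
    if m["m00"]:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("hand centroid in ROI:", (cx, cy))
```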

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1309
65 Effect of Halo Protection Device on the Aerodynamic Performance of Formula Racecar

Authors: Mark Lin, Periklis Papadopoulos

Abstract:

This paper explores the aerodynamics of a formula racecar when a ‘halo’ driver-protection device is added to the chassis. The halo was introduced at the start of the 2018 racing season as a safety measure against foreign-object impacts that a driver may encounter in an open-wheel racecar, and in the year since its introduction it has been widely credited with protecting drivers on two separate occasions. The benefit of such a safety device cannot be disputed. However, adding the halo to a car changes the airflow around the vehicle, most notably at the engine air intake and the rear wing. These adverse effects on the air supply to the engine, and equally on the downforce created by the rear wing, are studied numerically, and the resulting CFD outputs are presented and discussed. Comparing racecar designs before and after the introduction of the halo, it is shown that the design of the air intake and the rear wing has not been adapted to the device. The reduction of engine intake mass flow due to the halo is computed and presented over a range of vehicle speeds. Because of the location of the halo relative to the air intake, airflow is directed away from the engine, making the engine perform less than optimally. This reduction is quantified to show the corresponding loss in engine output compared to a similar car without the halo. The paper shows, through aerodynamic arguments, that the engine in a halo car does not receive the unobstructed, clean airflow that a non-halo car does. Another negative effect concerns the downforce created by the rear wing. Because the downforce created by the rear wing is influenced by every component upstream of it, adding a halo obstructs the airflow and leaves less of it available for generating downforce. This reduction in downforce becomes especially dramatic as speed increases, and the paper presents a graph of downforce over a range of speeds for a car with and without the halo. Although driver safety is paramount, the negative effects of this safety device on car performance should be well understood so that any redesign to mitigate them can be taken into account in the next season's rules.
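
As a back-of-the-envelope illustration of why the downforce penalty grows with speed, the sketch below evaluates the standard aerodynamic relation F = 0.5 ρ v² A C_L over a speed range for a car with and without the halo. The lift coefficients and reference area are hypothetical placeholders, not the paper's CFD results.

```python
# Hypothetical downforce-vs-speed comparison; coefficients are placeholders,
# not the CFD results reported in the paper. Downforce F = 0.5 * rho * v^2 * A * C_L.
import numpy as np

rho = 1.225          # air density, kg/m^3
area = 1.0           # rear-wing reference area, m^2 (assumed)
cl_no_halo = 2.8     # rear-wing lift coefficient without halo (assumed)
cl_halo = 2.5        # reduced effective coefficient with halo upstream (assumed)

speeds = np.arange(20, 101, 10)                    # m/s
f_no_halo = 0.5 * rho * speeds**2 * area * cl_no_halo
f_halo = 0.5 * rho * speeds**2 * area * cl_halo

for v, f0, f1 in zip(speeds, f_no_halo, f_halo):
    print(f"{v:3d} m/s: {f0:7.0f} N vs {f1:7.0f} N (loss {f0 - f1:6.0f} N)")
```

Because the loss scales with v², even a fixed drop in effective lift coefficient translates into an increasingly large downforce deficit at racing speeds, consistent with the trend the abstract describes.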

Keywords: Automotive aerodynamics, halo device, downforce, engine intake.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1725
64 On the Need to Have an Additional Methodology for Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science appeared about 40 years ago, in response to the challenge of artificial intelligence, as common territory for several scientific disciplines: information technology, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The new science was justified by the complexity of the problems related to human knowledge on one hand, and on the other by the fact that none of the above-mentioned sciences could explain mental phenomena alone. Based on data supplied by experimental sciences such as psychology and neurology, cognitive science builds models of the operation of the human mind. These models are implemented in computer programs and/or electronic circuits specific to artificial intelligence (cognitive systems), whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction necessary for mediating between the mentioned sciences. The general problematic of the cognitive approach admits two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to result from the interaction of all the component systems. In the computational register, psychological measurement uses classical questionnaires and psychometric tests, generally based on calculus methods. Considering both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves inefficient. Our research, carried out for longer than 20 years, leads to the conclusion that measuring by forms properly fits the laws and principles of connectionism.

Keywords: complementary methodology, connectionist approach, scale-free networks, quantum psychology.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3670
63 Time-Domain Stator Current Condition Monitoring: Analyzing Point Failure Detection by the Kolmogorov-Smirnov (K-S) Test

Authors: Najmeh Bolbolamiri, Maryam Setayesh Sanai, Ahmad Mirabadi

Abstract:

This paper deals with condition monitoring of electric switch machines for railway points. A point machine, as a complex electro-mechanical device, switches the track between two alternative routes. There has been increasing interest in railway safety and in the optimal management of railway equipment maintenance, e.g. of point machines, in order to enhance railway service quality and reduce system failures. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect point failures external to the machine (slide chairs, fixings, stretchers, etc.) while the point machine itself is in proper condition. Time-domain stator current signatures of normal (healthy) and faulty points were acquired by three Hall effect sensors and analyzed with the K-S test. Three such failures were simulated: a hard stone and a soft stone placed between the stock rail and switch blades as obstacles, and slide chair friction. Applying the test to these three faults shows that the K-S test can effectively be developed to detect other point failures whose current signatures deviate parametrically from the healthy current signature. The K-S test, as an analysis technique, assumes that each defect has a specific probability distribution, and empirical cumulative distribution functions (ECDFs) are used to differentiate these distributions. The test works under the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (the target signal) from an unknown switch state to a number of template signatures (reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
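
A minimal sketch of this template-matching idea using SciPy's two-sample K-S test follows; the signal arrays and state labels are synthetic placeholders, where the real signatures would come from the Hall effect sensors.

```python
# Minimal sketch of K-S-based state identification; signals are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Template current signatures from known switch states (placeholders for sensor data).
templates = {
    "healthy":        rng.normal(5.0, 0.5, 2000),
    "hard_obstacle":  rng.normal(6.5, 0.8, 2000),
    "chair_friction": rng.normal(5.5, 1.2, 2000),
}

# Target signature from an unknown switch state.
target = rng.normal(6.4, 0.8, 2000)

# The state whose template ECDF is closest to the target ECDF (largest p-value,
# i.e. least evidence against the null of identical distributions) is most likely.
best_state = max(templates, key=lambda s: ks_2samp(target, templates[s]).pvalue)
print("most likely state:", best_state)
```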

Keywords: stator current monitoring, railway points, point failures, fault detection and diagnosis, Kolmogorov-Smirnov test, time-domain analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1835
62 Evaluation of Natural Drainage Flow Pattern, Necessary for Flood Control, Using Digitized Topographic Information: A Case Study of Bayelsa State, Nigeria

Authors: Collins C. Chiemeke

Abstract:

The need to evaluate and understand the natural drainage pattern in a flood-prone, fast-developing environment is of paramount importance. This information will go a long way toward helping town planners determine the drainage pattern, road networks, and areas where prominent structures should be located. This research work was carried out with the aim of studying the Bayelsa landscape topography using digitized topographic information and of modeling the natural drainage flow pattern to aid the understanding and construction of workable drainages. To achieve this, digitized elevation and coordinate points were extracted from a global imagery map, and the extracted information was modeled into 3D surfaces. The results revealed that the average elevation of Bayelsa State is 12 m above sea level; the highest elevation is 28 m, and the lowest is 0 m, along the coastline. In Yenagoa, the capital city of Bayelsa, where a detailed survey was carried out, the average elevation is 15 m, the highest 25 m, and the lowest 3 m above mean sea level. The regional elevation in Bayelsa shows a gradual decrease from the northeastern zone to the southwestern zone. Yenagoa shows an elevation lineament in which a low depression running from the northeast to the southwest is flanked by high elevation. Hence, future drainages in Yenagoa should be directed from the high elevations, from the southeast toward the northwest and from the northwest toward the southeast, to the point of convergence at the center, which flows from the southeast toward the northwest. When Bayelsa is considered on a regional scale, the flow pattern is from the northeast to the southwest, and also from north to south. It is recommended that, in the event of any large drainage construction at municipal scale, it be directed from northeast to southwest or from north to south. Secondly, a detailed survey should be carried out to ascertain the local topography and drainage pattern before the design and construction of any drainage system in any part of Bayelsa.
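
To illustrate how a drainage flow pattern can be derived from a digitized elevation grid, the sketch below implements the classic D8 flow-direction rule, in which each cell drains to its steepest downslope neighbor. The tiny elevation grid is made up for illustration; real work would use the full DEM and a GIS or hydrology package.

```python
# Minimal D8 flow-direction sketch on a toy elevation grid. Each cell drains to
# its steepest downslope neighbor among the 8 adjacent cells.
import numpy as np

elev = np.array([[28, 25, 22, 20],
                 [24, 21, 18, 15],
                 [20, 17, 12, 10],
                 [15, 12,  8,  0]], dtype=float)   # toy elevations, m

neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

rows, cols = elev.shape
flow_dir = np.full((rows, cols, 2), 0)             # (drow, dcol) of drainage direction

for r in range(rows):
    for c in range(cols):
        best_drop, best = -np.inf, (0, 0)
        for dr, dc in neighbors:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                # Drop per unit distance (diagonal neighbors are sqrt(2) cells away).
                drop = (elev[r, c] - elev[rr, cc]) / np.hypot(dr, dc)
                if drop > best_drop:
                    best_drop, best = drop, (dr, dc)
        if best_drop > 0:                           # only drain downslope
            flow_dir[r, c] = best

print(flow_dir[..., 0])   # row component of flow direction
print(flow_dir[..., 1])   # column component
```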

Keywords: Bayelsa, Digitized Topographic Information, Drainage, Flood.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2263
61 Resting-State Functional Connectivity Analysis Using an Independent Component Approach

Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi

Abstract:

Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity, although much remains to be learned about it. Investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. In total, 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The obtained components were then spatially sorted to identify and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, with a p-value of 0.001. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
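
A minimal sketch of the group ICA step using Nilearn's CanICA is shown below; the file names, number of components, and smoothing kernel are assumptions for illustration, not the study's actual preprocessing choices.

```python
# Illustrative group ICA on rsfMRI with Nilearn's CanICA; filenames and
# parameters are placeholders, not the study's settings.
from nilearn.decomposition import CanICA

func_imgs = ["sub-01_rest.nii.gz", "sub-02_rest.nii.gz"]  # hypothetical 4D images

canica = CanICA(n_components=20,      # number of independent components (assumed)
                smoothing_fwhm=6.0,   # spatial smoothing in mm (assumed)
                standardize=True,
                random_state=0)
canica.fit(func_imgs)

# 4D image holding one spatial map per component, for spatial sorting/inspection.
components_img = canica.components_img_
components_img.to_filename("group_ica_components.nii.gz")
```

The resulting spatial maps are what would then be sorted to pick out meaningful networks and compared between groups with the two-sample t-test.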

Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 291
60 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Problems arising from unbalanced data sets commonly appear in real-world applications. Due to unequal class distributions, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is a method that has been proposed for classifying unbalanced classes with good performance. The purpose of this study was to compare the misclassification error rates of parametric and nonparametric discriminant analysis in a three-class classification of class-imbalanced data on diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped into 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was examined for departures from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data. In searching for the optimal classification rule, prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10), and (0.70:0.15:0.15). The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4, and prior probabilities of non-risk:risk:diabetic set to 0.90:0.05:0.05 or 0.80:0.10:0.10, gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced data on diabetes risk groups.
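
A minimal sketch of this comparison with scikit-learn follows. The synthetic feature matrix stands in for the four study variables, and the priors mirror one of the settings above; everything else (sample generation, bootstrap count) is an illustrative assumption.

```python
# Illustrative LDA/QDA/kNN comparison with class priors; data are synthetic
# placeholders standing in for the diabetes risk-group variables (age, gender,
# blood glucose, BMI).
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
n = 599
# Imbalanced classes: 90% non-risk, 5% risk, 5% diabetic.
y = rng.choice([0, 1, 2], size=n, p=[0.90, 0.05, 0.05])
X = rng.normal(size=(n, 4)) + y[:, None]          # 4 features, shifted by class

priors = [0.90, 0.05, 0.05]
classifiers = {
    "LDA": LinearDiscriminantAnalysis(priors=priors),
    "QDA": QuadraticDiscriminantAnalysis(priors=priors),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}

# Bootstrap estimate of each classifier's misclassification error rate.
for name, clf in classifiers.items():
    errors = []
    for _ in range(50):                            # 50 bootstrap samples
        Xb, yb = resample(X, y, n_samples=n)
        clf.fit(Xb, yb)
        errors.append(np.mean(clf.predict(X) != y))
    print(f"{name}: error rate {np.mean(errors):.3f}")
```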

Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2008